WorldWideScience

Sample records for scheme reconciling accurate

  1. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, the approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  2. A more accurate scheme for calculating Earth's skin temperature

    Science.gov (United States)

    Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden

    2009-02-01

The theoretical framework of the vertical discretization of a ground column for calculating Earth’s skin temperature is presented. The suggested discretization is derived from an even heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same number of levels, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme (“op(3,2,0)”) can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site significantly to 2% (or 0.9 W m⁻²), from 11% (or 5 W m⁻²) by a 5-layer scheme used in ECMWF, from 19% (or 8 W m⁻²) by a 5-layer scheme used in ECHAM, and from 74% (or 32 W m⁻²) by a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by including more layers in the vertical discretization. Similar improvements are expected for other locations with different land types since the numerical error is inherent in the models for all land types. The proposed scheme can be easily implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.
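
    The RMSE/STD ratio quoted above is simply the root-mean-square error of the simulated surface ground heat flux divided by the standard deviation of the observations. A minimal sketch of the metric, using hypothetical flux series (the data and their seasonal shape are illustrative, not the cropland measurements of the paper):

```python
import numpy as np

def normalized_rmse(observed, simulated):
    """RMSE of the simulation divided by the standard deviation of the observations,
    i.e. the RMSE/STD ratio used to compare ground-heat-flux schemes."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return rmse / np.std(observed)

# Hypothetical half-hourly surface ground heat flux (W m^-2) at some site.
rng = np.random.default_rng(0)
observed = 30.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
simulated = observed + rng.normal(0.0, 1.0, observed.shape)  # a "good" scheme
print(f"RMSE/STD ratio: {normalized_rmse(observed, simulated):.2%}")
```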

  3. A discontinuous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin; Bagci, Hakan

    2011-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results

  4. A fast and accurate dihedral interpolation loop subdivision scheme

    Science.gov (United States)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. To address the problem of surface shrinkage, we keep the limit condition unchanged. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly because the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach uses local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method on various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
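
    The adaptivity criterion described above reduces to a flatness test on the angle between adjacent face normals. A minimal sketch of that test, with a hypothetical threshold and toy geometry (the function and parameter names are not taken from the paper):

```python
import numpy as np

def unit_normal(tri):
    """Unit normal of a triangle given as a (3, 3) array of vertex coordinates."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    return n / np.linalg.norm(n)

def needs_refinement(face, neighbours, angle_threshold_deg=15.0):
    """Flag a face for further subdivision when the angle between its normal and
    any adjacent face's normal exceeds the threshold (a flatness test)."""
    n0 = unit_normal(face)
    for nb in neighbours:
        cos_angle = np.clip(np.dot(n0, unit_normal(nb)), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) > angle_threshold_deg:
            return True
    return False

# Hypothetical face and a consistently oriented neighbour sharing one edge.
face = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
neighbour = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.5]])
print(needs_refinement(face, [neighbour]))  # True: the neighbour is tilted > 15 degrees
```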

  5. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    Science.gov (United States)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  6. A discontinuous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin

    2011-07-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.

  7. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
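
    The combination step can be pictured as averaging the cluster-membership probabilities of the two models with weights given by their posterior model probabilities. The sketch below is an illustrative reduction of Bayesian model averaging to that weighted mixture; the numbers and the exact weighting rule are assumptions, not the authors' procedure:

```python
import numpy as np

# Hypothetical posterior membership probabilities (individuals x clusters)
# from two phenotype models, e.g. latent class analysis and grade of membership.
p_lca = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
p_gom = np.array([[0.7, 0.3], [0.1, 0.9], [0.4, 0.6]])

# Hypothetical posterior model probabilities (the model-averaging weights).
w_lca, w_gom = 0.55, 0.45

# Model-averaged membership: a weighted mixture of the two clusterings.
p_avg = w_lca * p_lca + w_gom * p_gom
phenotype = p_avg.argmax(axis=1)  # consolidated phenotype assignment
print(p_avg)
print(phenotype)
```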

  8. A third order accurate Lagrangian finite element scheme for the computation of generalized molecular stress function fluids

    DEFF Research Database (Denmark)

    Fasano, Andrea; Rasmussen, Henrik K.

    2017-01-01

A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element method...

  9. An Optimally Stable and Accurate Second-Order SSP Runge-Kutta IMEX Scheme for Atmospheric Applications

    Science.gov (United States)

    Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin

    2018-01-01

The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. This new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements, including (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with satisfying the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.
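
    For readers unfamiliar with IMEX Runge-Kutta methods, the sketch below shows one generic step for u' = F(u) + G(u), with F integrated by an explicit tableau and G by a diagonally implicit one. The tableaus used here are illustrative placeholders (a standard second-order IMEX pair), not the IMEX-SSP2(2,3,2) coefficients of the paper:

```python
import numpy as np
from scipy.optimize import fsolve

def imex_rk_step(u, dt, F, G, A_ex, b_ex, A_im, b_im):
    """One step of a diagonally implicit IMEX Runge-Kutta method for u' = F(u) + G(u),
    with F treated explicitly and G implicitly."""
    s = len(b_ex)
    U = [None] * s
    for i in range(s):
        known = u.copy()
        for j in range(i):
            known += dt * (A_ex[i, j] * F(U[j]) + A_im[i, j] * G(U[j]))
        if A_im[i, i] == 0.0:
            U[i] = known
        else:
            # Solve the (generally nonlinear) stage equation U_i = known + dt*a_ii*G(U_i).
            U[i] = fsolve(lambda x: x - known - dt * A_im[i, i] * G(x), known)
    return u + dt * sum(b_ex[i] * F(U[i]) + b_im[i] * G(U[i]) for i in range(s))

# Stiff test problem: u' = u (non-stiff, explicit part) - 50*u (stiff, implicit part).
F = lambda u: u
G = lambda u: -50.0 * u
gamma = 1.0 - 1.0 / np.sqrt(2.0)
A_ex = np.array([[0.0, 0.0], [1.0, 0.0]]); b_ex = np.array([0.5, 0.5])
A_im = np.array([[gamma, 0.0], [1.0 - 2.0 * gamma, gamma]]); b_im = np.array([0.5, 0.5])
u = np.array([1.0])
for _ in range(100):
    u = imex_rk_step(u, 0.01, F, G, A_ex, b_ex, A_im, b_im)
print(u)  # should be close to exp(-49*t) at t = 1, i.e. essentially zero
```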

  10. Construction of second order accurate monotone and stable residual distribution schemes for unsteady flow problems

    International Nuclear Information System (INIS)

    Abgrall, Remi; Mezine, Mohamed

    2003-01-01

The aim of this paper is to construct upwind residual distribution schemes for the time accurate solution of hyperbolic conservation laws. To do so, we evaluate a space-time fluctuation based on a space-time approximation of the solution and develop new residual distribution schemes which are extensions of classical steady upwind residual distribution schemes. This method has been applied to the solution of the scalar advection equation and to the solution of the compressible Euler equations, both in two space dimensions. The first version of the scheme is shown to be, at least in its first order version, unconditionally energy stable and possibly conditionally monotonicity preserving. Using an idea of Csik et al. [Space-time residual distribution schemes for hyperbolic conservation laws, 15th AIAA Computational Fluid Dynamics Conference, Anaheim, CA, USA, AIAA 2001-2617, June 2001], we modify the formulation to end up with a scheme that is unconditionally energy stable and unconditionally monotonicity preserving. Several numerical examples are shown to demonstrate the stability and accuracy of the method.

  11. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    Science.gov (United States)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
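
    The core ingredient, cubic B-spline interpolation of a volume at sub-voxel positions with a recursive prefilter, is available off the shelf; a minimal sketch using SciPy (the volume and the sampling points are synthetic, and this is not the authors' optimized filter):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical 3-D grey-level volume (e.g. a reconstructed sub-volume).
z, y, x = np.mgrid[0:32, 0:32, 0:32]
volume = np.cos(0.3 * x) * np.sin(0.2 * y) + 0.1 * z

# Sub-voxel positions at which the deformed volume is sampled during matching.
points = np.array([[10.25, 11.50, 12.75],
                   [15.10, 15.90, 16.40]]).T   # shape (3, N): (z, y, x) rows

# order=3 applies cubic B-spline interpolation; prefilter=True computes the
# B-spline coefficients (the recursive filter step) before evaluation.
values = map_coordinates(volume, points, order=3, prefilter=True, mode="nearest")
print(values)
```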

  12. A Haptic Feedback Scheme to Accurately Position a Virtual Wrist Prosthesis Using a Three-Node Tactor Array.

    Directory of Open Access Journals (Sweden)

    Andrew Erwin

Full Text Available In this paper, a novel haptic feedback scheme, used for accurately positioning a 1DOF virtual wrist prosthesis through sensory substitution, is presented. The scheme employs a three-node tactor array and discretely and selectively modulates the stimulation frequency of each tactor to relay 11 discrete haptic stimuli to the user. Able-bodied participants were able to move the virtual wrist prosthesis via a surface electromyography based controller. The participants evaluated the feedback scheme without visual or audio feedback and relied solely on the haptic feedback to correctly position the hand. The scheme was evaluated through both normal (perpendicular) and shear (lateral) stimulations applied on the forearm. Normal stimulations were applied through a prototype device previously developed by the authors, while shear stimulations were generated using a ubiquitous coin motor vibrotactor. Trials with no feedback served as a baseline to compare results within the study and to the literature. The results indicated that both normal and shear stimulations allowed the virtual wrist to be positioned accurately, with no significant difference between the two. Using haptic feedback was substantially better than no feedback. The results found in this study are significant since the feedback scheme allows for using relatively few tactors to relay rich haptic information to the user and can be learned easily despite a relatively short amount of training. Additionally, the results are important for the haptic community since they contradict the common conception in the literature that normal stimulation is inferior to shear. From an ergonomic perspective normal stimulation has the potential to benefit upper limb amputees since it can operate at lower frequencies than shear-based vibrotactors while also generating less noise. Through further tuning of the novel haptic feedback scheme and normal stimulation device, a compact and comfortable sensory substitution

  13. Reconcile: A Coreference Resolution Research Platform

    Energy Technology Data Exchange (ETDEWEB)

    Stoyanov, V; Cardie, C; Gilbert, N; Riloff, E; Buttler, D; Hysom, D

    2009-10-29

Despite the availability of standard data sets and metrics, approaches to the problem of noun phrase coreference resolution are hard to compare empirically due to different evaluation settings, stemming in part from the lack of comprehensive coreference resolution research platforms. In this tech report we present Reconcile, a coreference resolution research platform that aims to facilitate the implementation of new approaches to coreference resolution as well as the comparison of existing approaches. We discuss Reconcile's architecture and give results of running Reconcile on six data sets using four evaluation metrics, showing that Reconcile's performance is comparable to state-of-the-art systems in coreference resolution.

  14. Development of highly accurate approximate scheme for computing the charge transfer integral

    Energy Technology Data Exchange (ETDEWEB)

    Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)

    2015-08-21

The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using high-level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy split in dimer and fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examined the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
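
    The Taylor-expansion idea can be illustrated generically: evaluate the transfer integral at a few displaced geometries, build the derivatives by finite differences, and reuse the resulting polynomial over the coordinate range of interest. The sketch below uses a hypothetical analytic stand-in for the transfer integral; the functional form, step size, and expansion order are assumptions, not values from the paper:

```python
import numpy as np

def taylor_expand(J, q0, dq=0.01, order=2):
    """Build a local Taylor expansion of a property J (e.g. a transfer integral)
    along a single displacement coordinate q, using central finite differences
    for the derivatives at the reference geometry q0."""
    J0 = J(q0)
    dJ = (J(q0 + dq) - J(q0 - dq)) / (2.0 * dq)
    d2J = (J(q0 + dq) - 2.0 * J0 + J(q0 - dq)) / dq**2
    def expansion(q):
        s = q - q0
        return J0 + dJ * s + (0.5 * d2J * s**2 if order >= 2 else 0.0)
    return expansion

# Hypothetical transfer integral as a function of an asymmetric displacement (eV).
J_exact = lambda q: 0.15 * np.exp(-0.8 * q) * np.cos(0.5 * q)
J_approx = taylor_expand(J_exact, q0=0.0)
for q in (0.0, 0.2, 0.5):
    print(f"q = {q:.1f}: exact {J_exact(q):.4f} eV, Taylor {J_approx(q):.4f} eV")
```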

  15. Reconciling tensor and scalar observables in G-inflation

    Science.gov (United States)

    Ramírez, Héctor; Passaglia, Samuel; Motohashi, Hayato; Hu, Wayne; Mena, Olga

    2018-04-01

The simple m²φ² potential as an inflationary model is coming under increasing tension with limits on the tensor-to-scalar ratio r and measurements of the scalar spectral index n_s. Cubic Galileon interactions in the context of the Horndeski action can potentially reconcile the observables. However, we show that this cannot be achieved with only a constant Galileon mass scale because the interactions turn off too slowly, leading also to gradient instabilities after inflation ends. Allowing for a more rapid transition can reconcile the observables but moderately breaks the slow-roll approximation, leading to a relatively large and negative running of the tilt α_s that can be of order n_s − 1. We show that the observables on CMB and large scale structure scales can be predicted accurately using the optimized slow-roll approach instead of the traditional slow-roll expansion. Upper limits on |α_s| place a lower bound of r ≳ 0.005 and, conversely, a given r places a lower bound on |α_s|, both of which are potentially observable with next generation CMB and large scale structure surveys.

  16. Further test of new pairing scheme used in overhaul of BCS theory

    International Nuclear Information System (INIS)

    Zheng, X.H.; Walmsley, D.G.

    2014-01-01

Highlights: • Explanation of a new pairing scheme to overhaul BCS theory. • Prediction of superconductor properties from normal state resistivity. • Applications to Nb, Pb, Al, Ta, Mo, Ir and W, T_c between 9.5 and 0.012 K. • High accuracy compared with measured energy gap of Nb, Pb, Al and Ta. • Prediction of energy gap for Mo, Ir and W (so far not measured). - Abstract: A new electron pairing scheme, rectifying a fundamental flaw of the BCS theory, is tested extensively. It postulates that superconductivity arises solely from residual umklapp scattering when it is not in competition for the same destination electron states with normal scattering. It reconciles a long standing theoretical discrepancy in the strength of the electron–phonon interaction between the normal and superconductive states. The new scheme is exploited to calculate the superconductive electron–phonon spectral density, α²F(ν), entirely on the basis of normal state electrical resistivity. This leads to first principles superconductive properties (zero temperature energy gap and tunnelling conductance) in seven metals which turn out to be highly accurate when compared with known data; in other cases experimental verification is invited. The transition temperatures involved vary over almost three orders of magnitude: from 9.5 K for niobium to 0.012 K for tungsten.

  17. An accurate scheme by block method for third order ordinary ...

    African Journals Online (AJOL)

    problems of ordinary differential equations is presented in this paper. The approach of collocation approximation is adopted in the derivation of the scheme and then the scheme is applied as simultaneous integrator to special third order initial value problem of ordinary differential equations. This implementation strategy is ...

  18. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Science.gov (United States)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a novel approach that will enable the accurate and calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from complex biological media.

  19. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)

    2007-10-15

Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a novel approach that will enable the accurate and calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from complex biological media.

  20. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Rybynok, V O; Kyriacou, P A

    2007-01-01

Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a novel approach that will enable the accurate and calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from complex biological media.

  1. Reconciling privacy and security

    NARCIS (Netherlands)

    Lieshout, M.J. van; Friedewald, M.; Wright, D.; Gutwirth, S.

    2013-01-01

    This paper considers the relationship between privacy and security and, in particular, the traditional "trade-off" paradigm. The issue is this: how, in a democracy, can one reconcile the trend towards increasing security (for example, as manifested by increasing surveillance) with the fundamental

  2. TVD schemes in one and two space dimensions

    International Nuclear Information System (INIS)

    Leveque, R.J.; Goodman, J.B.; New York Univ., NY

    1985-01-01

The recent development of schemes which are second order accurate in smooth regions has made it possible to overcome certain difficulties which used to arise in numerical computations of discontinuous solutions of conservation laws. The present investigation is concerned with scalar conservation laws, taking into account the employment of total variation diminishing (TVD) schemes. The concept of a TVD scheme was introduced by Harten et al. (1976), who first constructed schemes which are simultaneously TVD and second order accurate on smooth solutions. In the present paper, a summary is provided of recently conducted work in this area. Attention is given to TVD schemes in two space dimensions, a second order accurate TVD scheme in one dimension, and the entropy condition and spreading of rarefaction waves. 19 references
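
    The defining property discussed above is that the total variation of the discrete solution does not increase in time. A minimal illustration for linear advection, using a standard minmod-limited second-order upwind scheme (an illustrative TVD discretization, not one taken from this paper):

```python
import numpy as np

def total_variation(u):
    """Total variation of a 1-D grid function: sum of |u_{i+1} - u_i|."""
    return np.sum(np.abs(np.diff(u)))

def minmod(a, b):
    """Minmod slope limiter, a standard ingredient of second-order TVD schemes."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_advection_step(u, cfl):
    """One step of a second-order, minmod-limited upwind scheme for u_t + u_x = 0
    on a periodic grid."""
    slopes = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    # Upwind (left) interface values reconstructed with limited slopes.
    u_face = u + 0.5 * (1.0 - cfl) * slopes
    return u - cfl * (u_face - np.roll(u_face, 1))

# Square-wave initial data: total variation 2, which must not grow.
u = np.where(np.abs(np.linspace(0, 1, 200, endpoint=False) - 0.3) < 0.1, 1.0, 0.0)
tv0 = total_variation(u)
for _ in range(100):
    u = tvd_advection_step(u, cfl=0.5)
print(tv0, total_variation(u))  # the total variation does not increase
```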

  3. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  4. Lagrange-Flux Schemes: Reformulating Second-Order Accurate Lagrange-Remap Schemes for Better Node-Based HPC Performance

    Directory of Open Access Journals (Sweden)

    De Vuyst Florian

    2016-11-01

Full Text Available In a recent paper [Poncet R., Peybernes M., Gasc T., De Vuyst F. (2016) Performance modeling of a compressible hydrodynamics solver on multicore CPUs, in “Parallel Computing: on the road to Exascale”], we presented the performance analysis of staggered Lagrange-remap schemes, a class of solvers widely used for hydrodynamics applications. This paper is devoted to the rethinking and redesign of the Lagrange-remap process to achieve better performance on today’s computing architectures. As an unintended outcome, the analysis has led us to the discovery of a new family of solvers – the so-called Lagrange-flux schemes – that appear to be promising for the CFD community.

  5. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    International Nuclear Information System (INIS)

    Silva, Goncalo; Talon, Laurent; Ginzburg, Irina

    2017-01-01

    and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.

  6. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France); Talon, Laurent, E-mail: talon@fast.u-psud.fr [CNRS (UMR 7608), Laboratoire FAST, Batiment 502, Campus University, 91405 Orsay (France); Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France)

    2017-04-15

    and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.

  7. An efficient and accurate two-stage fourth-order gas-kinetic scheme for the Euler and Navier-Stokes equations

    Science.gov (United States)

    Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan

    2016-12-01

For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from discontinuous piecewise linear flow distributions around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time stepping method has recently been proposed in the design of a fourth-order time accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of the computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function. However, a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations due to the change of governing equations from hyperbolic to parabolic type and the initial interface discontinuity; this problem is particularly pronounced for hypersonic viscous and heat conducting flow. The GKS is based on the kinetic equation with the hyperbolic transport and the relaxation source term. The time-dependent GKS flux function
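
    The two-stage fourth-order time stepping referred to above uses both the operator and its time derivative at two stages per step. The sketch below implements the commonly quoted two-stage fourth-order update for u_t = L(u); the coefficients are the generic ones for this class of L-W type methods and should be treated as an assumption rather than a quotation from this paper:

```python
import numpy as np

def two_stage_fourth_order_step(u, dt, L, dLdt):
    """Two-stage fourth-order (Lax-Wendroff type) time stepping for u_t = L(u),
    given a solver that returns both the operator L(u) and its time derivative
    dL/dt(u), as a GRP/GKS flux solver does."""
    # Stage 1: half-step predictor.
    u_half = u + 0.5 * dt * L(u) + 0.125 * dt**2 * dLdt(u)
    # Stage 2: full-step corrector using time derivatives at both stages.
    return u + dt * L(u) + dt**2 / 6.0 * (dLdt(u) + 2.0 * dLdt(u_half))

# Scalar test: u_t = -u, so L(u) = -u and dL/dt(u) = -u' = u.
L = lambda u: -u
dLdt = lambda u: u
u, dt = 1.0, 0.1
for _ in range(10):
    u = two_stage_fourth_order_step(u, dt, L, dLdt)
print(u, np.exp(-1.0))  # fourth-order accurate: the two values agree closely
```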

  8. ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics

    Science.gov (United States)

    Fambri, F.; Dumbser, M.; Köppel, S.; Rezzolla, L.; Zanotti, O.

    2018-03-01

We present a new class of high-order accurate numerical algorithms for solving the equations of general-relativistic ideal magnetohydrodynamics in curved spacetimes. In this paper we assume the background spacetime to be given and static, i.e. we make use of the Cowling approximation. The governing partial differential equations are solved via a new family of fully-discrete and arbitrary high-order accurate path-conservative discontinuous Galerkin (DG) finite-element methods combined with adaptive mesh refinement and time accurate local timestepping. In order to deal with shock waves and other discontinuities, the high-order DG schemes are supplemented with a novel a-posteriori subcell finite-volume limiter, which makes the new algorithms as robust as classical second-order total-variation diminishing finite-volume methods at shocks and discontinuities, but also as accurate as unlimited high-order DG schemes in smooth regions of the flow. We show the advantages of this new approach by means of various classical two- and three-dimensional benchmark problems on fixed spacetimes. Finally, we present a performance and accuracy comparison between Runge-Kutta DG schemes and ADER high-order finite-volume schemes, showing the higher efficiency of DG schemes.

  9. The role of a fertilizer trial in reconciling agricultural expectations and landscape ecology requirements on an opencast coal site in South Wales, United Kingdom

    International Nuclear Information System (INIS)

    Humphries, C.E.L.; Humphries, R.N.; Wesemann, H.

    1999-01-01

Since the 1940s the restoration of opencast coal sites in the UK has been predominantly to productive agriculture and forestry. With new UK government policies on sustainability and biodiversity such land uses may no longer be acceptable or appropriate in the upland areas of South Wales. A scheme was prepared for the upland Nant Helen site with the objective of restoring the landscape ecology of the site; it included acid grassland to provide the landscape setting and for grazing. The scheme met with the approval of the planning authority. An initial forty hectares (about 13% of the site) was restored between 1993 and 1996. While the approved low intensity grazing and low fertilizer regime met the requirements of the planning authority and the statutory agencies, it was not meeting the expectations of the graziers who had grazing rights to the land. To help reconcile the apparent conflict a fertilizer trial was set up. The trial demonstrated that additional fertilizer and intensive grazing were required to meet the nutritional needs of sheep. It also showed that typical upland stocking densities of sheep could be achieved with the acid grassland without the need for reseeding with lowland types. However, this was not acceptable to the authority and agencies as such fertilizer and grazing regimes would be detrimental to the landscape and ecological objectives of the restoration scheme. A compromise was agreed whereby grazing intensity and additional fertilizer have been zoned. This has been implemented and is working to the satisfaction of all parties. Without the fertilizer trial it is unlikely that the different interests could have been reconciled.

  10. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.

  11. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...

  12. Has bioscience reconciled mind and body?

    Science.gov (United States)

    Davies, Carmel; Redmond, Catherine; O'Toole, Sinead; Coughlan, Barbara

    2016-09-01

The aim of this discursive paper is to explore the question 'has biological science reconciled mind and body?'. This paper has been inspired by the recognition that bioscience has a historical reputation for privileging the body over the mind. The disregard for the mind (emotions and behaviour) cast bioscience within a 'mind-body problem' paradigm. It has also led to inherent limitations in its capacity to contribute to understanding the complex nature of health. This is a discursive paper. Literature from the history and sociology of science and psychoneuroimmunology (1975-2015) informs the arguments in this paper. The historical and sociological literature provides the basis for a socio-cultural debate on mind-body considerations in science since the 1970s. The psychoneuroimmunology literature draws on mind-body bioscientific theory as a way to demonstrate how science is reconciling mind and body and advancing its understanding of the interconnections between emotions, behaviour and health. Using sociological and biological evidence, this paper demonstrates how bioscience is embracing and advancing its understanding of mind-body interconnectedness. It does this by demonstrating the emotional and behavioural alterations that are caused by two common phenomena: prolonged chronic peripheral inflammation and prolonged psychological stress. The evidence and arguments provided have global currency and advance understanding of the inter-relationship between emotions, behaviour and health. This paper shows how bioscience has reconciled mind and body. In doing so, it has advanced an understanding of science's contribution to the inter-relationship between emotions, behaviour and health. The biological evidence supporting mind-body science has relevance to clinical practice for nurses and other healthcare professions. This paper discusses how this evidence can inform and enhance clinical practice directly and through research, education and policy.

  13. Reconciling Anti-essentialism and Quantitative Methodology

    DEFF Research Database (Denmark)

    Jensen, Mathias Fjællegaard

    2017-01-01

    Quantitative methodology has a contested role in feminist scholarship which remains almost exclusively qualitative. Considering Irigaray’s notion of mimicry, Spivak’s strategic essentialism, and Butler’s contingent foundations, the essentialising implications of quantitative methodology may prove...... the potential to reconcile anti-essentialism and quantitative methodology, and thus, to make peace in the quantitative/qualitative Paradigm Wars....

  14. Reconciling Contracts and Relational Governance through Strategic Contracting

    DEFF Research Database (Denmark)

    Petersen, Bent; Østergaard, Kim

    2018-01-01

    on contract types, such as strategic versus conventional, may reconcile the enduring research controversy between the substitution and complements perspectives. Practical implications: Today, formal contracts with foreign distributors tend to resemble “prenuptial agreements”. The opportunity for relational...

  15. Construction of second-order accurate monotone and stable residual distribution schemes for steady problems

    International Nuclear Information System (INIS)

    Abgrall, Remi; Mezine, Mohamed

    2004-01-01

    After having recalled the basic concepts of residual distribution (RD) schemes, we provide a systematic construction of distribution schemes able to handle general unstructured meshes, extending the work of Sidilkover. Then, by using the concept of simple waves, we show how to generalize this technique to symmetrizable linear systems. A stability analysis is provided. We formally extend this construction to the Euler equations. Several test cases are presented to validate our approach

  16. Reconciling current approaches to blindsight

    DEFF Research Database (Denmark)

    Overgaard, Morten; Mogensen, Jesper

    2015-01-01

    After decades of research, blindsight is still a mysterious and controversial topic in consciousness research. Currently, many researchers tend to think of it as an ideal phenomenon to investigate neural correlates of consciousness, whereas others believe that blindsight is in fact a kind...... of degraded vision rather than "truly blind". This article considers both perspectives and finds that both have difficulties understanding all existing evidence about blindsight. In order to reconcile the perspectives, we suggest two specific criteria for a good model of blindsight, able to encompass all...

  17. (Ir)reconcilable differences? The debate concerning nursing and technology.

    Science.gov (United States)

    Sandelowski, M

    1997-01-01

To review and critique the debate concerning nursing and technology. Technology has been considered both at one with and at odds with nursing. Mitcham's (1994) concepts of technological optimism and romanticism. Nursing literature since 1960. Historical analysis. Technological optimists in nursing have viewed technology as an extension of and as readily assimilable into humanistic nursing practice, and nursing as socially advantaged by technology. Technological romantics have viewed technology as irreconcilable with nursing culture, as an expression of masculine culture, and as recirculating existing gender and social inequalities. Both optimists and romantics essentialize technology and nursing, treating the two as singular and fixed entities. The (ir)reconcilability of nursing and technology may be a function of how devices are used by people in different contexts, or of the (ir)reconcilability of views of technology in nursing.

  18. Bilingual Children's Literature as a Tool Reflecting Non-Reconciled and Reconciled Identities in the Ethiopian Community in Israel

    Science.gov (United States)

    Kalnisky, Esther; Baratz, Lea

    2018-01-01

    This study investigates the manner in which new and veteran Ethiopian immigrant students in Israel perceive their identity by investigating their attitudes towards children's books written in both Hebrew and Amharic. Two major types of identity were revealed: (1) a non-reconciled identity that seeks to minimise the visibility of one's ethnic…

  19. A new numerical scheme for the simulation of active magnetic regenerators

    DEFF Research Database (Denmark)

    Torregrosa-Jaime, B.; Engelbrecht, Kurt; Payá, J.

    2014-01-01

    A 1D model of a parallel-plate active magnetic regenerator (AMR) has been developed based on a new numerical scheme. With respect to the implicit scheme, the new scheme achieves accurate results, minimizes computational time and prevents numerical errors. The model has been used to check the boun...

  20. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

Full Text Available One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil, sclera, eyelashes, and eyebrows, in a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from an eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
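
    The Delogne-Kåsa step amounts to an algebraic least-squares circle fit, which is linear and therefore cheap compared with a Hough transform. A simplified sketch of a Kåsa-type fit on synthetic boundary points (the data and noise level are illustrative, not from UBIRIS.v1):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic (Kåsa-type) least-squares circle fit: solve the linear system for
    (a, b, c) in x^2 + y^2 + a*x + b*y + c = 0, then recover centre and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Hypothetical noisy edge points on an iris boundary of radius 50 centred at (120, 80).
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
x = 120.0 + 50.0 * np.cos(theta) + rng.normal(0.0, 0.5, theta.shape)
y = 80.0 + 50.0 * np.sin(theta) + rng.normal(0.0, 0.5, theta.shape)
print(kasa_circle_fit(x, y))  # approximately (120, 80, 50)
```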

  1. Hybrid flux splitting schemes for numerical resolution of two-phase flows

    Energy Technology Data Exchange (ETDEWEB)

    Flaatten, Tore

    2003-07-01

This thesis deals with the construction of numerical schemes for approximating solutions to a hyperbolic two-phase flow model. Numerical schemes for hyperbolic models are commonly divided into two main classes: Flux Vector Splitting (FVS) schemes, which are based on scalar computations, and Flux Difference Splitting (FDS) schemes, which are based on matrix computations. FVS schemes are more efficient than FDS schemes, but FDS schemes are more accurate. The canonical FDS schemes are the approximate Riemann solvers, which are based on a local decomposition of the system into its full wave structure. In this thesis the mathematical structure of the model is exploited to construct a class of hybrid FVS/FDS schemes, denoted as Mixture Flux (MF) schemes. This approach is based on a splitting of the system into two components associated with the pressure and volume fraction variables respectively, and builds upon hybrid FVS/FDS schemes previously developed for one-phase flow models. Through analysis and numerical experiments it is demonstrated that the MF approach provides several desirable features, including (1) improved efficiency compared to standard approximate Riemann solvers, (2) robustness under stiff conditions, and (3) accuracy on linear and nonlinear phenomena. In particular it is demonstrated that the framework allows for an efficient weakly implicit implementation, focusing on an accurate resolution of slow transients relevant for the petroleum industry. (author)

  2. Finite volume schemes with equilibrium type discretization of source terms for scalar conservation laws

    International Nuclear Information System (INIS)

    Botchorishvili, Ramaz; Pironneau, Olivier

    2003-01-01

We develop here a new class of finite volume schemes on unstructured meshes for scalar conservation laws with stiff source terms. The schemes are of equilibrium type: they have uniform bounds on approximate solutions, satisfy in-cell entropy inequalities, and are exact for some equilibrium states. Convergence is investigated in the framework of kinetic schemes. Numerical tests show high computational efficiency and a significant advantage over standard cell-centered discretization of source terms. Equilibrium type schemes produce accurate results even on test problems for which the standard approach fails. For some numerical tests they exhibit an exponential-type convergence rate. In two of our numerical tests an equilibrium type scheme with 441 nodes on a triangular mesh is more accurate than a standard scheme with 5000² grid points.

  3. An extrapolation scheme for solid-state NMR chemical shift calculations

    Science.gov (United States)

    Nakajima, Takahito

    2017-06-01

Conventional quantum chemical and solid-state physics approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches but reduces their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. The estimated values show only a small dependence on the low-level density functional theory calculation used in the extrapolation scheme. Thus, our approach is efficient because only a rough calculation needs to be performed within the extrapolation scheme.
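
    The abstract does not spell out the extrapolation formula, but composite schemes of this kind are usually additive corrections: a high-level calculation on a small model is corrected by the difference between low-level periodic and low-level model results. The sketch below encodes that ONIOM-like form as an assumption, with hypothetical shielding values; it is not claimed to be the paper's exact prescription:

```python
def extrapolated_shielding(sigma_high_cluster, sigma_low_cluster, sigma_low_periodic):
    """Additive (composite-style) estimate of a solid-state magnetic shielding constant:
    a high-level calculation on a small cluster model corrected by the difference
    between low-level periodic and low-level cluster results."""
    return sigma_high_cluster + (sigma_low_periodic - sigma_low_cluster)

# Hypothetical isotropic shielding constants (ppm) for one nucleus.
print(extrapolated_shielding(sigma_high_cluster=172.4,
                             sigma_low_cluster=168.9,
                             sigma_low_periodic=165.2))  # about 168.7 ppm
```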

  4. Accurate and simple measurement method of complex decay schemes radionuclide activity

    International Nuclear Information System (INIS)

    Legrand, J.; Clement, C.; Bac, C.

    1975-01-01

A simple method for the measurement of the activity is described. It consists of using a well-type sodium iodide crystal whose efficiency with monoenergetic photon rays has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that the efficiency is very high, near 100%. The associated uncertainty is low, in spite of the large uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the ¹⁵²Eu primary reference.

  5. Readiness to reconcile and post-traumatic distress in German survivors of wartime rapes in 1945.

    Science.gov (United States)

    Eichhorn, S; Stammel, N; Glaesmer, H; Klauer, T; Freyberger, H J; Knaevelsrud, C; Kuwert, P

    2015-05-01

Sexual violence and wartime rapes are prevalent crimes in violent conflicts all over the world. Processes of reconciliation are a growing challenge in post-conflict settings. Despite this, few studies have so far examined the psychological consequences and their mediating factors. Our study aimed at investigating the degree of long-term readiness to reconcile and its associations with post-traumatic distress within a sample of German women who experienced wartime rapes in 1945. A total of 23 wartime rape survivors were compared to age- and gender-matched controls with WWII-related non-sexual traumatic experiences. Readiness to reconcile was assessed with the Readiness to Reconcile Inventory (RRI-13). The German version of the Post-traumatic Diagnostic Scale (PDS) was used to assess post-traumatic stress disorder (PTSD) symptomatology. Readiness to reconcile in wartime rape survivors was higher in those women who reported less post-traumatic distress, and the subscale "openness to interaction" showed the strongest association with post-traumatic symptomatology. Moreover, wartime rape survivors reported fewer feelings of revenge than women who experienced other traumatization in WWII. Our results are in line with previous research indicating that readiness to reconcile affects healing processes in the context of conflict-related traumatic experiences. Given the long-lasting post-traumatic symptomatology we observed, our findings highlight the need for psychological treatment of wartime rape survivors worldwide, and future research should continue focusing on reconciliation within the therapeutic process.

  6. Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Cui, Xia; Yuan, Guang-wei; Shen, Zhi-jun

    2016-01-01

Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis of time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. There, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in dimensions one and two. First, in the construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for the flux-limiters on the boundary, and discrete schemes with second-order accuracy on the global spatial domain are obtained consequently. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property is shown for these schemes and, furthermore, for the fully discrete schemes. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence that theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. - Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second order accurate schemes by an asymmetric approach for the boundary flux-limiter. • Show first order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle artificial solutions; verify accuracy and AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher order implicit schemes.
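
    The asymmetric second-order boundary approximation mentioned in the highlights can be illustrated with the generic one-sided difference stencil below; the stencil is the textbook one and is not claimed to be the paper's flux-limiter formula:

```python
import numpy as np

def one_sided_first_derivative(f, x0, h):
    """Asymmetric second-order accurate approximation of f'(x0) using only points
    on one side of the boundary: (-3 f(x0) + 4 f(x0+h) - f(x0+2h)) / (2h)."""
    return (-3.0 * f(x0) + 4.0 * f(x0 + h) - f(x0 + 2.0 * h)) / (2.0 * h)

f = np.exp  # test function; the exact derivative at 0 is 1
for h in (0.1, 0.05, 0.025):
    err = abs(one_sided_first_derivative(f, 0.0, h) - 1.0)
    print(f"h = {h:.3f}: error = {err:.2e}")  # error shrinks ~4x per halving (2nd order)
```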

  7. A Practical Voter-Verifiable Election Scheme.

    OpenAIRE

    Chaum, D; Ryan, PYA; Schneider, SA

    2005-01-01

    We present an election scheme designed to allow voters to verify that their vote is accurately included in the count. The scheme provides a high degree of transparency whilst ensuring the secrecy of votes. Assurance is derived from close auditing of all the steps of the vote recording and counting process with minimal dependence on the system components. Thus, assurance arises from verification of the election rather than having to place trust in the correct behaviour of components of the voting system.

  8. Using an electronic prescribing system to ensure accurate medication lists in a large multidisciplinary medical group.

    Science.gov (United States)

    Stock, Ron; Scott, Jim; Gurtel, Sharon

    2009-05-01

    Although medication safety has largely focused on reducing medication errors in hospitals, the scope of adverse drug events in the outpatient setting is immense. A fundamental problem occurs when a clinician lacks immediate access to an accurate list of the medications that a patient is taking. Since 2001, PeaceHealth Medical Group (PHMG), a multispecialty physician group, has been using an electronic prescribing system that includes medication-interaction warnings and allergy checks. Yet, most practitioners recognized the remaining potential for error, especially because there was no assurance regarding the accuracy of information on the electronic medical record (EMR)-generated medication list. PeaceHealth developed and implemented a standardized approach to (1) review and reconcile the medication list for every patient at each office visit and (2) report on the results obtained within the PHMG clinics. In 2005, PeaceHealth established the ambulatory medication reconciliation project to develop a reliable, efficient process for maintaining accurate patient medication lists. Each of PeaceHealth's five regions created a medication reconciliation task force to redesign its clinical practice, incorporating the systemwide aims and agreed-on key process components for every ambulatory visit. Implementation of the medication reconciliation process at the PHMG clinics resulted in a substantial increase in the number of accurate medication lists, with fewer discrepancies between what the patient is actually taking and what is recorded in the EMR. The PeaceHealth focus on patient safety, and particularly the reduction of medication errors, has involved a standardized approach for reviewing and reconciling medication lists for every patient visiting a physician office. The standardized processes can be replicated at other ambulatory clinics, whether or not electronic tools are available.

  9. Reconciling White-Box and Black-Box Perspectives on Behavioral Self-adaptation

    DEFF Research Database (Denmark)

    Bruni, Roberto; Corradini, Andrea; Gadducci, Fabio

    2015-01-01

    This paper proposes to reconcile two perspectives on behavioral adaptation commonly taken at different stages of the engineering of autonomic computing systems. Requirements engineering activities often take a black-box perspective: a system is considered to be adaptive with respect to an environment whenever the system is able to satisfy its goals irrespectively of the environment perturbations. Modeling and programming engineering activities often take a white-box perspective: a system is equipped with suitable adaptation mechanisms and its behavior is classified as adaptive depending...

  10. Analysis of a fourth-order compact scheme for convection-diffusion

    International Nuclear Information System (INIS)

    Yavneh, I.

    1997-01-01

    In 1984, Gupta et al. introduced a compact fourth-order finite-difference convection-diffusion operator with some very favorable properties. In particular, this scheme does not seem to suffer excessively from spurious oscillatory behavior, and it converges with standard methods such as Gauss-Seidel or SOR (hence, multigrid) regardless of the diffusion. This scheme has been rederived, developed (including some variations), and applied to both convection-diffusion and Navier-Stokes equations by several authors. Accurate solutions to high Reynolds-number flow problems at relatively coarse resolutions have been reported. These solutions were often compared to those obtained by lower-order discretizations, such as second-order central differences and first-order upstream discretizations. The latter, it was stated, achieved far less accurate results due to the artificial viscosity, which the compact scheme does not include. We show here that, while the compact scheme indeed does not suffer from a cross-stream artificial viscosity (as does the first-order upstream scheme when the characteristic direction is not aligned with the grid), it does include a streamwise artificial viscosity that is inversely proportional to the natural viscosity. This term is not always benign. 7 refs., 1 fig., 1 tab

  11. Reconciling research and community priorities in participatory trials: application to Padres Informados/Jovenes Preparados.

    Science.gov (United States)

    Allen, Michele L; Garcia-Huidobro, Diego; Bastian, Tiana; Hurtado, G Ali; Linares, Roxana; Svetaz, María Veronica

    2017-06-01

    Participatory research (PR) trials aim to achieve the dual, and at times competing, demands of producing an intervention and research process that address community perspectives and priorities, while establishing intervention effectiveness. Our objective was to identify the research and community priorities that must be reconciled in the areas of collaborative processes, study design and aims, and study implementation quality in order to successfully conduct a participatory trial. We describe how this reconciliation was approached in the smoking prevention participatory trial Padres Informados/Jovenes Preparados (Informed Parents/Prepared Youth) and evaluate the success of our reconciled priorities. Data sources to evaluate the success of the reconciliations included a survey of all partners regarding collaborative group processes, intervention participant recruitment and attendance, and surveys of enrolled study participants assessing intervention outcomes. While we successfully achieved our reconciled goals for collaborative processes and implementation quality, we did not achieve our reconciled goals in study aim and design. Due in part to the randomized wait-list control group design chosen in the reconciliation process, we were not able to demonstrate overall efficacy of the intervention or offer timely services to families in need of support. Achieving the goals of participatory trials is challenging but may yield community and research benefits. Innovative research designs are needed to better support the complex goals of participatory trials. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. ENSEMBLE methods to reconcile disparate national long range dispersion forecasts

    DEFF Research Database (Denmark)

    Mikkelsen, Torben; Galmarini, S.; Bianconi, R.

    2003-01-01

    ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose to reconcile among disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making "ENSEMBLE" procedures ... emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development.

  13. A Spatial Discretization Scheme for Solving the Transport Equation on Unstructured Grids of Polyhedra

    International Nuclear Information System (INIS)

    Thompson, K.G.

    2000-01-01

    In this work, we develop a new spatial discretization scheme that may be used to numerically solve the neutron transport equation. This new discretization extends the family of corner balance spatial discretizations to include spatial grids of arbitrary polyhedra. This scheme enforces balance on subcell volumes called corners. It produces a lower triangular matrix for sweeping, is algebraically linear, is non-negative in a source-free absorber, and produces a robust and accurate solution in thick diffusive regions. Using an asymptotic analysis, we design the scheme so that in thick diffusive regions it will attain the same solution as an accurate polyhedral diffusion discretization. We then refine the approximations in the scheme to reduce numerical diffusion in vacuums, and we attempt to capture a second order truncation error. After we develop this Upstream Corner Balance Linear (UCBL) discretization we analyze its characteristics in several limits. We complete a full diffusion limit analysis showing that we capture the desired diffusion discretization in optically thick and highly scattering media. We review the upstream and linear properties of our discretization and then demonstrate that our scheme captures strictly non-negative solutions in source-free purely absorbing media. We then demonstrate the minimization of numerical diffusion of a beam and then demonstrate that the scheme is, in general, first order accurate. We also note that for slab-like problems our method actually behaves like a second-order method over a range of cell thicknesses that are of practical interest. We also discuss why our scheme is first order accurate for truly 3D problems and suggest changes in the algorithm that should make it a second-order accurate scheme. Finally, we demonstrate 3D UCBL's performance on several very different test problems. We show good performance in diffusive and streaming problems. We analyze truncation error in a 3D problem and demonstrate robustness in a

  14. An efficient numerical scheme for the simulation of parallel-plate active magnetic regenerators

    DEFF Research Database (Denmark)

    Torregrosa-Jaime, Bárbara; Corberán, José M.; Payá, Jorge

    2015-01-01

    A one-dimensional model of a parallel-plate active magnetic regenerator (AMR) is presented in this work. The model is based on an efficient numerical scheme which has been developed after analysing the heat transfer mechanisms in the regenerator bed. The new finite difference scheme optimally combines ... Compared to the fully implicit scheme, the proposed scheme achieves more accurate results, prevents numerical errors and requires less computational effort. In AMR simulations the new scheme can reduce the computational time by 88%.

  15. Reconciling Islam and feminism.

    Science.gov (United States)

    Hashim, I

    1999-03-01

    This paper objects to the popular view that Islam supports a segregated social system where women are marginalized, and argues that certain Islamic texts are supportive of women's rights. The article proposes that Islam reconcile with feminism by returning to the Qur'an. The Qur'an provides rights which address the common complaints of women such as lack of freedom to make decisions for themselves and the inability to earn an income. One example is a verse in the Qur'an (4:34) that is frequently interpreted as giving women complete control over their own income and property. This article also explains how Islam has been used as a method of controlling women, particularly in the practices of veiling and purdah (seclusion). The article points out the need to engage in Islam from a position of knowing, and to ensure that Muslim women have access to this knowledge. It is only through this knowledge that women can assert their rights and challenge patriarchal interpretations of Islam.

  16. Space-Time Transformation in Flux-form Semi-Lagrangian Schemes

    Directory of Open Access Journals (Sweden)

    Peter C. Chu Chenwu Fan

    2010-01-01

    Full Text Available With a finite volume approach, a flux-form semi-Lagrangian scheme with space-time transformation (TFSL) was developed to provide a stable and accurate algorithm for solving the advection-diffusion equation. Different from existing flux-form semi-Lagrangian schemes, the temporal integration of the flux from the present to the next time step is transformed into a spatial integration of the flux at the side of a grid cell (space) for the present time step, using the characteristic-line concept. The TFSL scheme not only keeps the good features of semi-Lagrangian schemes (no Courant number limitation), but also has higher accuracy (second order in both time and space). The capability of the TFSL scheme is demonstrated by the simulation of equatorial Rossby-soliton propagation. Computational stability and high accuracy make this scheme useful in ocean modeling, computational fluid dynamics, and numerical weather prediction.
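
    As an illustration of the flux-form semi-Lagrangian idea described above (the time-integrated flux through a cell face equals the spatial integral of the field over the upstream departure interval), the following minimal Python sketch applies it to 1D constant-velocity advection on a periodic grid with piecewise-constant cells. It is a hypothetical first-order illustration of the principle only, not the second-order TFSL scheme of the paper.

        import numpy as np

        def tfsl_advect(q, u, dx, dt, nsteps):
            """Flux-form semi-Lagrangian step for q_t + u q_x = 0 on a periodic grid:
            the time-integrated flux through each face equals the integral of q over
            the departure interval [x_face - u*dt, x_face] (assumes u >= 0)."""
            n = q.size
            for _ in range(nsteps):
                F = np.zeros(n)              # time-integrated flux through the left face of cell i
                d = u * dt                   # departure distance
                for i in range(n):
                    flux, x, j = 0.0, 0.0, i - 1   # accumulate piecewise-constant q upstream of the face
                    while x > -d:
                        seg = min(dx, x + d)
                        flux += q[j % n] * seg
                        x -= seg
                        j -= 1
                    F[i] = flux
                q = q + (F - np.roll(F, -1)) / dx    # conservative update
            return q

        # usage: advect a Gaussian one full period around a periodic domain
        n, L, u = 200, 1.0, 1.0
        dx = L / n
        x = (np.arange(n) + 0.5) * dx
        q0 = np.exp(-200 * (x - 0.3) ** 2)
        dt = 2.5 * dx / u                    # Courant number 2.5, allowed by the semi-Lagrangian flux
        q1 = tfsl_advect(q0.copy(), u, dx, dt, int(round(L / (u * dt))))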

  17. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin

    2012-08-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.

  18. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin; Sirenko, Kostyantyn; Bagci, Hakan

    2012-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.

  19. Efficient Scheme for Chemical Flooding Simulation

    Directory of Open Access Journals (Sweden)

    Braconnier Benjamin

    2014-07-01

    Full Text Available In this paper, we investigate an efficient implicit scheme for the numerical simulation of the chemical enhanced oil recovery technique for oil fields. For the sake of brevity, we only focus on flows with polymer to describe the physical and numerical models. In this framework, we consider a black-oil model upgraded with polymer modeling. We assume the polymer is only transported in the water phase or adsorbed on the rock following a Langmuir isotherm. The polymer reduces the water phase mobility, which can change drastically the behavior of water-oil interfaces. Then, we propose a fractional step technique to resolve the system implicitly. The first step is devoted to the resolution of the black-oil subsystem and the second to polymer mass conservation. In this way, Jacobian matrices coming from the implicit formulation have a moderate size and preserve solver efficiency. Nevertheless, the coupling between the black-oil subsystem and the polymer is not fully resolved. For efficiency and accuracy comparison, we propose an explicit scheme for the polymer, for which large time steps are prohibited by its CFL (Courant-Friedrichs-Lewy) criterion and which consequently approximates the coupling accurately. Numerical experiments with polymer are simulated: a core flood, a 5-spot reservoir with surfactant and ions, and a 3D real case. Comparisons are performed between the explicit and implicit polymer schemes. They prove that our implicit polymer scheme is efficient, robust and resolves the coupling physics accurately. The development and the simulations have been performed with the software PumaFlow [PumaFlow (2013) Reference manual, release V600, Beicip Franlab].

  20. Low enrolment in Ugandan Community Health Insurance Schemes: underlying causes and policy implications

    Directory of Open Access Journals (Sweden)

    Criel Bart

    2007-07-01

    Full Text Available Abstract Background Despite the promotion of Community Health Insurance (CHI in Uganda in the second half of the 90's, mainly under the impetus of external aid organisations, overall membership has remained low. Today, some 30,000 persons are enrolled in about a dozen different schemes located in Central and Southern Uganda. Moreover, most of these schemes were created some 10 years ago but since then, only one or two new schemes have been launched. The dynamic of CHI has apparently come to a halt. Methods A case study evaluation was carried out on two selected CHI schemes: the Ishaka and the Save for Health Uganda (SHU schemes. The objective of this evaluation was to explore the reasons for the limited success of CHI. The evaluation involved review of the schemes' records, key informant interviews and exit polls with both insured and non-insured patients. Results Our research points to a series of not mutually exclusive explanations for this under-achievement at both the demand and the supply side of health care delivery. On the demand side, the following elements have been identified: lack of basic information on the scheme's design and operation, limited understanding of the principles underlying CHI, limited community involvement and lack of trust in the management of the schemes, and, last but not least, problems in people's ability to pay the insurance premiums. On the supply-side, we have identified the following explanations: limited interest and knowledge of health care providers and managers of CHI, and the absence of a coherent policy framework for the development of CHI. Conclusion The policy implications of this study refer to the need for the government to provide the necessary legislative, technical and regulative support to CHI development. The main policy challenge however is the need to reconcile the government of Uganda's interest in promoting CHI with the current policy of abolition of user fees in public facilities.

  1. TE/TM scheme for computation of electromagnetic fields in accelerators

    International Nuclear Information System (INIS)

    Zagorodnov, Igor; Weiland, Thomas

    2005-01-01

    We propose a new two-level economical conservative scheme for short-range wake field calculation in three dimensions. The scheme does not have dispersion in the longitudinal direction and is staircase free (second order convergent). Unlike the finite-difference time domain method (FDTD), it is based on a TE/TM like splitting of the field components in time. Additionally, it uses an enhanced alternating direction splitting of the transverse space operator that makes the scheme computationally as effective as the conventional FDTD method. Unlike the FDTD ADI and low-order Strang methods, the splitting error in our scheme is only of fourth order. As numerical examples show, the new scheme is much more accurate on the long-time scale than the conventional FDTD approach

  2. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth; Virta, Kristoffer

    2014-01-01

    to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions

  3. How self-interactions can reconcile sterile neutrinos with cosmology.

    Science.gov (United States)

    Hannestad, Steen; Hansen, Rasmus Sloth; Tram, Thomas

    2014-01-24

    Short baseline neutrino oscillation experiments have shown hints of the existence of additional sterile neutrinos in the eV mass range. However, such neutrinos seem incompatible with cosmology because they have too large of an impact on cosmic structure formation. Here we show that new interactions in the sterile neutrino sector can prevent their production in the early Universe and reconcile short baseline oscillation experiments with cosmology.

  4. Accurate adiabatic singlet-triplet gaps in atoms and molecules employing the third-order spin-flip algebraic diagrammatic construction scheme for the polarization propagator

    Energy Technology Data Exchange (ETDEWEB)

    Lefrancois, Daniel; Dreuw, Andreas, E-mail: dreuw@uni-heidelberg.de [Interdisciplinary Center for Scientific Computing, Ruprecht-Karls University, Im Neuenheimer Feld 205, 69120 Heidelberg (Germany); Rehn, Dirk R. [Departments of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping (Sweden)

    2016-08-28

    For the calculation of adiabatic singlet-triplet gaps (STG) in diradicaloid systems, the spin-flip (SF) variant of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator in third-order perturbation theory (SF-ADC(3)) has been applied. Due to the methodology of the SF approach, the singlet and triplet states are treated on an equal footing since they are part of the same determinant subspace. This leads to a systematically more accurate description of, e.g., diradicaloid systems than with the corresponding non-SF single-reference methods. Furthermore, using analytical excited state gradients at the ADC(3) level, geometry optimizations of the singlet and triplet states were performed, leading to a fully consistent description of the systems with only small errors in the calculated STGs, ranging between 0.6 and 2.4 kcal/mol with respect to experimental references.

  5. AN ACCURATE ORBITAL INTEGRATOR FOR THE RESTRICTED THREE-BODY PROBLEM AS A SPECIAL CASE OF THE DISCRETE-TIME GENERAL THREE-BODY PROBLEM

    International Nuclear Information System (INIS)

    Minesaki, Yukitaka

    2013-01-01

    For the restricted three-body problem, we propose an accurate orbital integration scheme that retains all conserved quantities of the two-body problem with two primaries and approximately preserves the Jacobi integral. The scheme is obtained by taking the limit as mass approaches zero in the discrete-time general three-body problem. For a long time interval, the proposed scheme precisely reproduces various periodic orbits that cannot be accurately computed by other generic integrators

  6. Reconciling atmospheric temperatures in the early Archean

    DEFF Research Database (Denmark)

    Pope, Emily Catherine; Bird, Dennis K.; Rosing, Minik Thorleif

    ... rock record. The goal of this study is to compile and reconcile Archean geologic and geochemical features that are in some way controlled by surface temperature and/or atmospheric composition, so that at the very least paleoclimate models can be checked by physical limits. Data used to this end include ... weathering on climate). Selective alteration of δD in Isua rocks to values of -130 to -100‰ post-dates the ca. 3.55 Ga Ameralik dikes, but may be associated with a poorly defined 2.6-2.8 Ga metamorphic event that is coincident with the amalgamation of the "Kenorland supercontinent."

  7. Asynchronous discrete event schemes for PDEs

    Science.gov (United States)

    Stone, D.; Geiger, S.; Lord, G. J.

    2017-08-01

    A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.

  8. An Energy Decaying Scheme for Nonlinear Dynamics of Shells

    Science.gov (United States)

    Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.

  9. Reconciling Work and Family Life

    DEFF Research Database (Denmark)

    Holt, Helle

    The problems of balancing work and family life have within the last years been heavily debated in the countries of the European Union. This anthology deals with the question of how to obtain a better balance between work and family life. Focus is set on the role of companies. The anthology tries to shed some light on questions such as: How can companies become more family friendly? What are the barriers and how can they be overcome? What is the social outcome when companies are playing an active role in employees' possibilities for combining family life and work life? How are the solutions to work/family imbalance problems related to the growing social problems of unemployment? The anthology is the result of a research network on "Work-place Contributions to Reconcile Work and Family Life" funded by the European Commission, DG V, and co-coordinated by the editors.

  10. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    Science.gov (United States)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel
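
    The speed-up described above comes from sweeping through the grid with Gauss-Seidel or SOR updates instead of Jacobi-type (local operator splitting) iterations. The following Python sketch contrasts the three iterations on a generic symmetric positive-definite system, using a hypothetical 1D Poisson matrix as a stand-in for the coupled transfer problem; it is not the non-LTE formal solver itself.

        import numpy as np

        def jacobi(A, b, x, tol=1e-10, itmax=50000):
            """Jacobi iteration: every unknown updated from the previous iterate only."""
            D = np.diag(A)
            R = A - np.diagflat(D)
            for k in range(itmax):
                x_new = (b - R @ x) / D
                if np.linalg.norm(x_new - x, np.inf) < tol:
                    return x_new, k + 1
                x = x_new
            return x, itmax

        def sor(A, b, x, omega=1.0, tol=1e-10, itmax=50000):
            """SOR sweep; omega = 1 reproduces Gauss-Seidel, omega > 1 over-relaxes."""
            n = len(b)
            for k in range(itmax):
                x_old = x.copy()
                for i in range(n):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    return x, k + 1
            return x, itmax

        # hypothetical 1D Poisson test matrix standing in for the coupled RT system
        n = 30
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))   # optimal SOR parameter for this matrix
        print(jacobi(A, b, np.zeros(n))[1],
              sor(A, b, np.zeros(n), 1.0)[1],
              sor(A, b, np.zeros(n), omega_opt)[1])          # iteration counts: Jacobi > G-S > SOR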

  11. Multigrid time-accurate integration of Navier-Stokes equations

    Science.gov (United States)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  12. Reconciling Ethnic and National Identities in a Divided Society: The ...

    African Journals Online (AJOL)

    Reconciling Ethnic and National Identities in a Divided Society: The Nigerian Dilemma of Nation-State Building. Abu Bakarr Bah. Abstract (translated from French): "Reconciling Ethnic and National Identities in a Divided Society: The Nigerian Dilemma of Nation-State Building". This is a theoretical and ... analysis ...

  13. An empirical comparison of alternative schemes for combining electricity spot price forecasts

    International Nuclear Information System (INIS)

    Nowotarski, Jakub; Raviv, Eran; Trück, Stefan; Weron, Rafał

    2014-01-01

    In this comprehensive empirical study we critically evaluate the use of forecast averaging in the context of electricity prices. We apply seven averaging and one selection scheme and perform a backtesting analysis on day-ahead electricity prices in three major European and US markets. Our findings support the additional benefit of combining forecasts of individual methods for deriving more accurate predictions, however, the performance is not uniform across the considered markets and periods. In particular, equally weighted pooling of forecasts emerges as a simple, yet powerful technique compared with other schemes that rely on estimated combination weights, but only when there is no individual predictor that consistently outperforms its competitors. Constrained least squares regression (CLS) offers a balance between robustness against such well performing individual methods and relatively accurate forecasts, on average better than those of the individual predictors. Finally, some popular forecast averaging schemes – like ordinary least squares regression (OLS) and Bayesian Model Averaging (BMA) – turn out to be unsuitable for predicting day-ahead electricity prices. - Highlights: • So far the most extensive study on combining forecasts for electricity spot prices • 12 stochastic models, 8 forecast combination schemes and 3 markets considered • Our findings support the additional benefit of combining forecasts for deriving more accurate predictions • Methods that allow for unconstrained weights, such as OLS averaging, should be avoided • We recommend a backtesting exercise to identify the preferred forecast averaging method for the data at hand
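
    A minimal Python sketch of three of the combination schemes compared above: the equally weighted average, unconstrained OLS weights, and constrained least squares (non-negative weights summing to one). The price data and individual forecasts below are synthetic placeholders, and the CLS solver simply wraps scipy.optimize.minimize rather than the estimator used in the study.

        import numpy as np
        from scipy.optimize import minimize

        def combine_equal(F):
            """Simple average of the individual forecasts (columns of F)."""
            return F.mean(axis=1)

        def ols_weights(F, y):
            """Unconstrained least-squares combination weights (may be negative)."""
            w, *_ = np.linalg.lstsq(F, y, rcond=None)
            return w

        def cls_weights(F, y):
            """Constrained least squares: weights >= 0 and summing to one."""
            k = F.shape[1]
            obj = lambda w: np.sum((y - F @ w) ** 2)
            cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
            res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, 1.0)] * k, constraints=cons)
            return res.x

        # toy calibration set: y holds realised prices, F holds three individual forecasts
        rng = np.random.default_rng(0)
        y = 40 + 5 * rng.standard_normal(200)
        F = np.column_stack([y + s * rng.standard_normal(200) for s in (1.0, 2.0, 4.0)])
        print(ols_weights(F, y), cls_weights(F, y), combine_equal(F)[:3])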

  14. Asymptotically stable fourth-order accurate schemes for the diffusion equation on complex shapes

    International Nuclear Information System (INIS)

    Abarbanel, S.; Ditkowski, A.

    1997-01-01

    An algorithm which solves the multidimensional diffusion equation on complex shapes to fourth-order accuracy and is asymptotically stable in time is presented. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail. The ability of the paradigm to be applied to arbitrary geometric domains is an important feature of the algorithm. 5 refs., 14 figs

  15. Implicit time accurate simulation of unsteady flow

    Science.gov (United States)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
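
    The dual time-stepping idea mentioned above (solving the nonlinear Crank-Nicolson equations of each physical time step by adding a pseudo-time derivative and iterating to a steady state) can be sketched on a scalar stiff ODE. The right-hand side below is hypothetical, and a plain explicit pseudo-time march replaces the quasi-Newton/block Gauss-Seidel solver of the paper.

        import numpy as np

        def f(u):
            return -1000.0 * (u - np.cos(u))        # hypothetical stiff right-hand side

        def crank_nicolson_dual_time(u0, dt, nsteps, dtau=1e-4, inner=2000, tol=1e-12):
            """Crank-Nicolson in physical time; each stage equation
            R(u) = (u - u_n)/dt - 0.5*(f(u) + f(u_n)) = 0
            is driven to zero by marching in pseudo-time: du/dtau = -R(u)."""
            u = u0
            history = [u]
            for _ in range(nsteps):
                un, v = u, u                         # pseudo-time iterate starts from the old state
                for _ in range(inner):
                    R = (v - un) / dt - 0.5 * (f(v) + f(un))
                    v_new = v - dtau * R
                    if abs(v_new - v) < tol:
                        v = v_new
                        break
                    v = v_new
                u = v
                history.append(u)
            return np.array(history)

        print(crank_nicolson_dual_time(1.0, dt=0.01, nsteps=10)[-1])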

  16. Consensus group sessions are useful to reconcile stakeholders’ perspectives about network performance evaluation

    Directory of Open Access Journals (Sweden)

    Marie-Eve Lamontagne

    2010-12-01

    Full Text Available Background: Having a common vision among network stakeholders is an important ingredient to developing a performance evaluation process. Consensus methods may be a viable means to reconcile the perceptions of different stakeholders about the dimensions to include in a performance evaluation framework. Objectives: To determine whether individual organizations within traumatic brain injury (TBI) networks differ in perceptions about the importance of performance dimensions for the evaluation of TBI networks and to explore the extent to which group consensus sessions could reconcile these perceptions. Methods: We used TRIAGE, a consensus technique that combines an individual and a group data collection phase to explore the perceptions of network stakeholders and to reach a consensus within structured group discussions. Results: One hundred and thirty-nine professionals from 43 organizations within eight TBI networks participated in the individual data collection; 62 professionals from these same organisations contributed to the group data collection. The extent of consensus based on the questionnaire results (i.e. the individual data collection) was low; however, 100% agreement was obtained for each network during the consensus group sessions. The median importance scores and mean ranks attributed to the dimensions by individuals compared to groups did not differ greatly. Group discussions were found useful in understanding the reasons motivating the scoring, for resolving differences among participants, and for harmonizing their values. Conclusion: Group discussions, as part of a consensus technique, appear to be a useful process to reconcile diverging perceptions of network performance among stakeholders.

  17. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    Science.gov (United States)

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
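
    The abstract does not give the algorithm in detail, but a parametric fit in terms of Legendre polynomials can be sketched as follows: either a standard least-squares (SVD-based) fit, or a projection that exploits the orthogonality of the polynomials after mapping the data onto [-1, 1]. The data below are synthetic, and the quadrature choice (trapezoidal rule) is an assumption, not the authors' method.

        import numpy as np
        from numpy.polynomial import legendre as L

        # hypothetical noisy data on [0, 10]
        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 10.0, 501)
        y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

        deg = 8
        # map x onto [-1, 1], where the Legendre polynomials are orthogonal
        t = 2 * (x - x.min()) / (x.max() - x.min()) - 1

        # (a) least-squares Legendre fit (SVD-based under the hood)
        coef_lsq = L.legfit(t, y, deg)

        # (b) projection via orthogonality: c_k = (2k+1)/2 * \int P_k(t) y(t) dt,
        #     with the integral approximated by the trapezoidal rule
        coef_proj = np.empty(deg + 1)
        for k in range(deg + 1):
            Pk = L.legval(t, np.eye(deg + 1)[k])
            coef_proj[k] = (2 * k + 1) / 2 * np.trapz(Pk * y, t)

        print(np.max(np.abs(L.legval(t, coef_lsq) - L.legval(t, coef_proj))))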

  18. About efficient quasi-Newtonian schemes for variational calculations in nuclear structure

    International Nuclear Information System (INIS)

    Puddu, G.

    2009-01-01

    The Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newtonian scheme is known as the most efficient scheme for variational calculations of energies. This scheme is actually a member of a one-parameter family of variational methods, known as the Broyden β-family. In some applications to light nuclei using microscopically derived effective Hamiltonians starting from accurate nucleon-nucleon potentials, we actually found other members of the same family which perform better than the BFGS method. We also extend the Broyden β-family of algorithms to a two-parameter family of rank-three updates which has even better performance. (orig.)
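
    For reference, the BFGS member of the quasi-Newton family discussed above can be sketched in a few lines: the inverse-Hessian approximation is updated from gradient differences and used to generate search directions. The "energy surface" below is a hypothetical two-parameter test function, not a nuclear-structure Hamiltonian, and the line search is a plain Armijo backtracking.

        import numpy as np

        def bfgs_minimize(f, grad, x0, iters=200, tol=1e-8):
            """Plain BFGS: H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T."""
            n = x0.size
            H = np.eye(n)                        # inverse Hessian approximation
            x, g = x0.copy(), grad(x0)
            for _ in range(iters):
                p = -H @ g                       # quasi-Newton search direction
                alpha, fx = 1.0, f(x)            # Armijo backtracking line search
                while f(x + alpha * p) > fx + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
                    alpha *= 0.5
                x_new = x + alpha * p
                g_new = grad(x_new)
                if np.linalg.norm(g_new) < tol:
                    return x_new
                s, y = x_new - x, g_new - g
                if y @ s > 1e-12:                # keep H positive definite
                    rho = 1.0 / (y @ s)
                    I = np.eye(n)
                    H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
                x, g = x_new, g_new
            return x

        # hypothetical 2-parameter "energy surface" standing in for a variational energy
        energy = lambda v: (v[0] - 1) ** 2 + 10 * (v[1] - v[0] ** 2) ** 2
        genergy = lambda v: np.array([2 * (v[0] - 1) - 40 * v[0] * (v[1] - v[0] ** 2),
                                      20 * (v[1] - v[0] ** 2)])
        print(bfgs_minimize(energy, genergy, np.array([-1.0, 1.0])))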

  19. Could a scheme for licensing smokers work in Australia?

    Science.gov (United States)

    Magnusson, Roger S; Currow, David C

    2013-08-05

    In this article, we evaluate the possible advantages and disadvantages of a licensing scheme that would require adult smokers to verify their right to purchase tobacco products at point of sale using a smart-card licence. A survey of Australian secondary school students conducted in 2011 found that half of 17-year-old smokers and one-fifth of 12-year-old smokers believed it was "easy" or "very easy" to purchase cigarettes themselves. Reducing tobacco use by adolescents now is central to the future course of the current epidemic of tobacco-caused disease, since most current adult smokers began to smoke as adolescents, at a time when they were unable to purchase tobacco lawfully. The requirement for cigarette retailers to reconcile all stock purchased from wholesalers against a digital record of retail sales to licensed smokers would create a robust incentive for retailers to comply with laws that prohibit tobacco sales to children. Foreseeable objections to introducing a smokers licence need to be taken into account, but once we move beyond the "shock of the new", it is difficult to identify anything about a smokers licence that is particularly offensive or demeaning. A smoker licensing scheme deserves serious consideration for its potential to dramatically curtail retailers' violation of the law against selling tobacco to minors, to impose stricter accountability for sale of a uniquely harmful drug and to allow intelligent use of information about smokers' purchases to help smokers quit.

  20. ENSEMBLE methods to reconcile disparate national long range dispersion forecasts

    OpenAIRE

    Mikkelsen, Torben; Galmarini, S.; Bianconi, R.; French, S.

    2003-01-01

    ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose to reconcile among disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material.

  1. Reconciling controversies about the 'global warming hiatus'.

    Science.gov (United States)

    Medhaug, Iselin; Stolpe, Martin B; Fischer, Erich M; Knutti, Reto

    2017-05-03

    Between about 1998 and 2012, a time that coincided with political negotiations for preventing climate change, the surface of Earth seemed hardly to warm. This phenomenon, often termed the 'global warming hiatus', caused doubt in the public mind about how well anthropogenic climate change and natural variability are understood. Here we show that apparently contradictory conclusions stem from different definitions of 'hiatus' and from different datasets. A combination of changes in forcing, uptake of heat by the oceans, natural variability and incomplete observational coverage reconciles models and data. Combined with stronger recent warming trends in newer datasets, we are now more confident than ever that human influence is dominant in long-term warming.

  2. The new Exponential Directional Iterative (EDI) 3-D Sn scheme for parallel adaptive differencing

    International Nuclear Information System (INIS)

    Sjoden, G.E.

    2005-01-01

    The new Exponential Directional Iterative (EDI) discrete ordinates (Sn) scheme for 3-D Cartesian Coordinates is presented. The EDI scheme is a logical extension of the positive, efficient Exponential Directional Weighted (EDW) Sn scheme currently used as the third level of the adaptive spatial differencing algorithm in the PENTRAN parallel discrete ordinates solver. Here, the derivation and advantages of the EDI scheme are presented; EDI uses EDW-rendered exponential coefficients as initial starting values to begin a fixed point iteration of the exponential coefficients. One issue that required evaluation was an iterative cutoff criterion to prevent the application of an unstable fixed point iteration; although this was needed in some cases, it was readily treated with a default to EDW. Iterative refinement of the exponential coefficients in EDI typically converged in fewer than four fixed point iterations. Moreover, EDI yielded more accurate angular fluxes compared to the other schemes tested, particularly in streaming conditions. Overall, it was found that the EDI scheme was up to an order of magnitude more accurate than the EDW scheme on a given mesh interval in streaming cases, and is potentially a good candidate as a fourth-level differencing scheme in the PENTRAN adaptive differencing sequence. The 3-D Cartesian computational cost of EDI was only about 20% more than the EDW scheme, and about 40% more than Diamond Zero (DZ). More evaluation and testing are required to determine suitable upgrade metrics for EDI to be fully integrated into the current adaptive spatial differencing sequence in PENTRAN. (author)

  3. Numerical Investigation of a Novel Wiring Scheme Enabling Simple and Accurate Impedance Cytometry

    Directory of Open Access Journals (Sweden)

    Federica Caselli

    2017-09-01

    Full Text Available Microfluidic impedance cytometry is a label-free approach for high-throughput analysis of particles and cells. It is based on the characterization of the dielectric properties of single particles as they flow through a microchannel with integrated electrodes. However, the measured signal depends not only on the intrinsic particle properties, but also on the particle trajectory through the measuring region, thus challenging the resolution and accuracy of the technique. In this work we show via simulation that this issue can be overcome without resorting to particle focusing, by means of a straightforward modification of the wiring scheme for the most typical and widely used microfluidic impedance chip.

  4. On the modelling of compressible inviscid flow problems using AUSM schemes

    Directory of Open Access Journals (Sweden)

    Hajžman M.

    2007-11-01

    Full Text Available During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first-order accurate, AUSM (Advection Upstream Splitting Method) schemes proved to be well suited for modelling of compressible flows due to their robustness and ability to capture shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version, noted AUSM+, proposed by Liou, for the solution of the Euler equations. Mach number splitting functions operating with values from adjacent cells are used to determine numerical convective fluxes, and pressure splitting is used for the evaluation of numerical pressure fluxes. Both versions of the AUSM scheme are applied for solving some test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with some explicit central schemes of first-order accuracy (Lax-Friedrichs) and of second-order accuracy (MacCormack).
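
    The Mach-number and pressure splittings that the abstract refers to can be written down compactly. The Python sketch below assembles an AUSM-type interface flux for the 1D Euler equations from the polynomial splittings in their commonly published form; it is a hedged illustration, not the exact variants implemented in the paper.

        import numpy as np

        GAMMA = 1.4

        def split_mach(M):
            """Mach splittings M^+ and M^- used by AUSM."""
            if abs(M) <= 1.0:
                return 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2
            return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))

        def split_pressure(M, p):
            """Pressure splittings p^+ and p^- (second-degree polynomial form)."""
            if abs(M) <= 1.0:
                return 0.25 * p * (M + 1.0) ** 2 * (2.0 - M), 0.25 * p * (M - 1.0) ** 2 * (2.0 + M)
            return 0.5 * p * (M + abs(M)) / M, 0.5 * p * (M - abs(M)) / M

        def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
            """AUSM interface flux for the 1D Euler equations."""
            aL, aR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
            HL = GAMMA / (GAMMA - 1.0) * pL / rhoL + 0.5 * uL ** 2   # total enthalpy
            HR = GAMMA / (GAMMA - 1.0) * pR / rhoR + 0.5 * uR ** 2
            MpL, _ = split_mach(uL / aL)
            _, MmR = split_mach(uR / aR)
            ppL, _ = split_pressure(uL / aL, pL)
            _, pmR = split_pressure(uR / aR, pR)
            m_half = MpL + MmR                         # interface Mach number
            PhiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
            PhiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
            Fc = m_half * (PhiL if m_half >= 0.0 else PhiR)   # upwinded convective flux
            return Fc + np.array([0.0, ppL + pmR, 0.0])       # add the split pressure term

        print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))      # Sod-tube initial states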

  5. High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids

    Science.gov (United States)

    Mazaheri, Alireza; Nishikawa, Hiroaki

    2015-01-01

    In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring the exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence of 10-15 iterations to reduce the residuals by 10 orders of magnitude. We demonstrate also that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of Gauss-Seidel relaxation.

  6. Indexed variation graphs for efficient and accurate resistome profiling.

    Science.gov (United States)

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
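
    GROOT itself is written in Go and combines variation graphs with an LSH Forest index, but the underlying similarity-search step (classifying a read by comparing MinHash signatures of its k-mers against indexed reference genes) can be illustrated with a small self-contained Python sketch. The reference sequences, read, and parameters below are hypothetical.

        import hashlib

        def kmers(seq, k=7):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def minhash(kmer_set, num_perm=64):
            """MinHash signature: for each of num_perm salted hashes keep the minimum value."""
            return [min(int(hashlib.md5((str(salt) + km).encode()).hexdigest(), 16)
                        for km in kmer_set)
                    for salt in range(num_perm)]

        def similarity(sig_a, sig_b):
            """Fraction of matching minima estimates the Jaccard similarity of the k-mer sets."""
            return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

        # hypothetical reference "resistance genes" and a sequencing read
        refs = {"geneA": "ATGGCGTACGTTAGCGGATCCGTTACGGAT",
                "geneB": "ATGCCCTTTAAAGGGCCCAAATTTGGGCCC"}
        read = "GCGTACGTTAGCGGATCCGTTACG"
        ref_sigs = {name: minhash(kmers(s)) for name, s in refs.items()}
        read_sig = minhash(kmers(read))
        best = max(ref_sigs, key=lambda name: similarity(read_sig, ref_sigs[name]))
        print(best, similarity(read_sig, ref_sigs[best]))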

  7. Media, messages, and medication: strategies to reconcile what patients hear, what they want, and what they need from medications.

    Science.gov (United States)

    Kravitz, Richard L; Bell, Robert A

    2013-01-01

    Over the past 30 years, patients' options for accessing information about prescription drugs have expanded dramatically. In this narrative review, we address four questions: (1) What information sources are patients exposed to, and are they paying attention? (2) Is the information they hear credible and accurate? (3) When patients ask for a prescription, what do they really want and need? Finally, (4) How can physicians reconcile what patients hear, want, and need? A critical synthesis of the literature is reported. Observations indicate that the public is generally aware of and attends to a growing body of health information resources, including traditional news media, advertising, and social networking. However, lay audiences often have no reliable way to assess the accuracy of health information found in the media, on the Internet, or in direct-to-consumer advertising. This inability to assess the information can lead to decision paralysis, with patients questioning what is known, what is knowable, and what their physicians know. Many patients have specific expectations for the care they wish to receive and have little difficulty making those expectations known. However, there are hazards in assuming that patients' expressed desires are direct reflections of their underlying wants or needs. In trying to reconcile patients' wants and needs for information about prescription medicines, a combination of policy and clinical initiatives may offer greater promise than either approach alone. Patients are bombarded by information about medicines. The problem is not a lack of information; rather, it is knowing what information to trust. Making sure patients get the medications they need and are prepared to take them safely requires a combination of policy and clinical interventions.

  8. A Dilemma of Abundance: Governance Challenges of Reconciling Shale Gas Development and Climate Change Mitigation

    Directory of Open Access Journals (Sweden)

    Karena Shaw

    2013-05-01

    Full Text Available Shale gas proponents argue this unconventional fossil fuel offers a “bridge” towards a cleaner energy system by offsetting higher-carbon fuels such as coal. The technical feasibility of reconciling shale gas development with climate action remains contested. However, we here argue that governance challenges are both more pressing and more profound. Reconciling shale gas and climate action requires institutions capable of responding effectively to uncertainty; intervening to mandate emissions reductions and internalize costs to industry; and managing the energy system strategically towards a lower carbon future. Such policy measures prove challenging, particularly in jurisdictions that stand to benefit economically from unconventional fuels. We illustrate this dilemma through a case study of shale gas development in British Columbia, Canada, a global leader on climate policy that is nonetheless struggling to manage gas development for mitigation. The BC case is indicative of the constraints jurisdictions face both to reconcile gas development and climate action, and to manage the industry adequately to achieve social licence and minimize resistance. More broadly, the case attests to the magnitude of change required to transform our energy systems to mitigate climate change.

  9. A national quality control scheme for serum HGH assays

    International Nuclear Information System (INIS)

    Hunter, W.M.; McKenzie, I.

    1979-01-01

    In the autumn of 1975 the Supraregional Assay Service established a Quality Control Sub-Committee and the intra-laboratory QC Scheme for Growth Hormone (HGH) assays which is described here has served, in many respects, as a pilot scheme for protein RIA. Major improvements in accuracy, precision and between-laboratory agreement can be brought about by intensively interactive quality control schemes. A common standard is essential and should consist of ampoules used for one or only a small number of assays. Accuracy and agreement were not good enough to allow the overall means to serve as target values but a group of 11 laboratories were sufficiently accurate to provide a 'reference group mean' to so serve. Gross non-specificity was related to poor assay design and was quickly eliminated. Within-laboratory between-batch variability was much worse than that normally claimed for simple protein hormone RIA. A full report on this Scheme will appear shortly in Annals of Clinical Biochemistry. (Auth.)

  10. A method for accurate computation of elastic and discrete inelastic scattering transfer matrix

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Santina, M.D.

    1986-05-01

    A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program, which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author)
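
    The essential ingredient of the partition scheme described above is that the source-energy integration range is split at every point where the integrand has a discontinuous derivative, so that each quadrature rule only sees a smooth piece. A minimal Python sketch with a hypothetical kinked integrand (the quadrature order and test function are assumptions, not the TRAMA implementation):

        import numpy as np

        def gauss_on_interval(f, a, b, n=8):
            """Fixed-order Gauss-Legendre quadrature on [a, b]."""
            x, w = np.polynomial.legendre.leggauss(n)
            xm, xr = 0.5 * (a + b), 0.5 * (b - a)
            return xr * np.sum(w * f(xm + xr * x))

        def partitioned_quadrature(f, a, b, breakpoints, n=8):
            """Split [a, b] at the kinks of f and integrate each smooth piece separately."""
            pts = sorted({a, b, *[p for p in breakpoints if a < p < b]})
            return sum(gauss_on_interval(f, lo, hi, n) for lo, hi in zip(pts[:-1], pts[1:]))

        # hypothetical integrand with a derivative discontinuity at x = 0.3
        f = lambda x: np.abs(x - 0.3) * np.exp(-x)
        print(gauss_on_interval(f, 0.0, 1.0, 8))              # single rule straddling the kink
        print(partitioned_quadrature(f, 0.0, 1.0, [0.3], 8))  # kink placed on a partition point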

  11. Additive operator-difference schemes splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy...

  12. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
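
    The closed-form outage analysis is not reproduced here, but a Monte Carlo sketch of the system model described above (regenerative relays, best-relay selection, selection combining of the direct and relayed links over Rayleigh fading) can serve as a numerical cross-check. The rate normalization, relay count and SNR values below are simplified assumptions.

        import numpy as np

        def outage_df_selection(snr_db, R=1.0, n_relays=3, trials=200_000, rng=None):
            """Monte Carlo outage probability for opportunistic decode-and-forward relaying.
            A relay is eligible if it decodes the source (S-R capacity >= R); among eligible
            relays the one with the best R-D link is selected, and the destination applies
            selection combining between the direct link and the relayed link."""
            if rng is None:
                rng = np.random.default_rng(0)
            snr = 10 ** (snr_db / 10)
            thr = 2 ** R - 1                                    # SNR threshold for rate R
            g_sd = snr * rng.exponential(size=trials)           # Rayleigh fading -> exponential SNR
            g_sr = snr * rng.exponential(size=(trials, n_relays))
            g_rd = snr * rng.exponential(size=(trials, n_relays))
            g_relay = np.where(g_sr >= thr, g_rd, 0.0).max(axis=1)   # best eligible relay, 0 if none
            g_end = np.maximum(g_sd, g_relay)                   # selection combining at destination
            return np.mean(g_end < thr)

        for snr_db in (0, 5, 10, 15, 20):
            print(snr_db, outage_df_selection(snr_db))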

  13. SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES

    Directory of Open Access Journals (Sweden)

    S.ZIBAEI

    2016-12-01

    Full Text Available In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We will discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors in the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy to implement and accurate when applied to the fractional-order Lotka-Volterra model.
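
    As a rough illustration of the non-standard finite difference idea only (not the authors' fractional-order formulation), the sketch below integrates a classical integer-order Lotka-Volterra system with a forward-Euler step and with an NSFD-style step whose nonlocal treatment of the nonlinear terms keeps the iterates positive; the parameters, step size and denominator function are illustrative assumptions.

      # Sketch: forward Euler vs. a non-standard finite difference (NSFD) step for a
      # classical (integer-order) Lotka-Volterra system.  Parameters, the step size and
      # the NSFD denominator function are illustrative choices, not the paper's values.
      import numpy as np

      a, b, c, d = 1.0, 0.5, 0.5, 1.0      # prey growth, predation, conversion, predator death
      h = 0.05                             # time step
      phi = (1.0 - np.exp(-d * h)) / d     # NSFD "denominator function" (an assumption)

      def euler_step(x, y):
          return x + h * (a * x - b * x * y), y + h * (c * x * y - d * y)

      def nsfd_step(x, y):
          # Nonlocal discretization of the nonlinear terms keeps both updates positive.
          x_new = x * (1 + phi * a) / (1 + phi * b * y)
          y_new = y * (1 + phi * c * x_new) / (1 + phi * d)
          return x_new, y_new

      x_e = y_e = x_n = y_n = 1.5
      for _ in range(400):
          x_e, y_e = euler_step(x_e, y_e)
          x_n, y_n = nsfd_step(x_n, y_n)
      print("Euler:", x_e, y_e, "  NSFD:", x_n, y_n)   # NSFD iterates remain positive by construction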

  14. A digital memories based user authentication scheme with privacy preservation.

    Directory of Open Access Journals (Sweden)

    JunLiang Liu

    Full Text Available The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attack, key-logger, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key, which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.

  15. Robust second-order scheme for multi-phase flow computations

    Science.gov (United States)

    Shahbazi, Khosro

    2017-06-01

    A robust high-order scheme for multi-phase flow computations featuring jumps and discontinuities due to shock waves and phase interfaces is presented. The scheme is based on high-order weighted essentially non-oscillatory (WENO) finite volume schemes and high-order limiters to ensure the maximum principle or positivity of the various field variables including the density, pressure, and order parameters identifying each phase. The two-phase flow model considered consists, besides the Euler equations of gas dynamics, of advection equations for two parameters of the stiffened-gas equation of state characterizing each phase. The design of the high-order limiter is guided by the findings of Zhang and Shu (2011) [36], and is based on limiting the quadrature values of the density, pressure and order parameters reconstructed using a high-order WENO scheme. Proofs of positivity preservation and accuracy are given, and the convergence and the robustness of the scheme are illustrated using the smooth isentropic vortex problem with very small density and pressure. The effectiveness and robustness of the scheme in computing the challenging problem of shock wave interaction with a cluster of tightly packed air or helium bubbles placed in a body of liquid water is also demonstrated. The superior performance of the high-order schemes over the first-order Lax-Friedrichs scheme for computations of shock-bubble interaction is also shown. The scheme is implemented in two-dimensional space on parallel computers using the message passing interface (MPI). The proposed scheme with the limiter requires approximately 50% more inter-processor message communications than the corresponding scheme without the limiter, but only about 10% more total CPU time. The scheme is provably second-order accurate in regions requiring positivity enforcement and higher order in the rest of the domain.
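
    The limiting step referred to above can be pictured, in its simplest scalar form, as a linear scaling of the reconstructed quadrature values toward the (positive) cell average so that none of them drops below a small floor; the sketch below applies that scaling to one cell, with the floor value and the sample numbers chosen arbitrarily.

      # Zhang-Shu style positivity limiter for one cell (scalar sketch): reconstructed
      # point values are scaled linearly toward the cell average so that no quadrature
      # value drops below a small floor eps.  Assumes the cell average itself is positive.
      import numpy as np

      def limit_to_positive(point_values, cell_average, eps=1e-13):
          m = point_values.min()
          if m >= eps:
              return point_values                              # nothing to limit
          theta = (cell_average - eps) / (cell_average - m)    # scaling factor in [0, 1)
          return cell_average + theta * (point_values - cell_average)

      rho_points = np.array([1.2e-8, 3.0e-3, -2.0e-6, 4.0e-3])  # e.g. WENO values at quadrature points
      rho_bar = 2.0e-3                                          # positive cell average
      print(limit_to_positive(rho_points, rho_bar))             # all values now >= eps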

  16. Accurate method of the magnetic field measurement of quadrupole magnets

    International Nuclear Information System (INIS)

    Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.

    1983-01-01

    We present an accurate method for the magnetic field measurement of quadrupole magnets. The method of obtaining the information on the field gradient and the effective focussing length is given. A new scheme to obtain the information on the skew field components is also proposed. The relative accuracy of the measurement was 1 × 10⁻⁴ or less. (author)

  17. Accurate outage analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate a dual-hop decode-and-forward opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. We first derive the exact statistics of the received signal-to-noise ratio (SNR) over each hop with co-located relays, in terms of the probability density function (PDF). Then, the PDFs are used to determine a very accurate closed-form expression for the outage probability for a transmission rate R. Furthermore, we perform an asymptotic analysis and deduce the diversity order of the scheme. We validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.

  18. Accurate outage analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-04-01

    In this paper, we investigate a dual-hop decode-and-forward opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. We first derive the exact statistics of the received signal-to-noise ratio (SNR) over each hop with co-located relays, in terms of the probability density function (PDF). Then, the PDFs are used to determine a very accurate closed-form expression for the outage probability for a transmission rate R. Furthermore, we perform an asymptotic analysis and deduce the diversity order of the scheme. We validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.

  19. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    Science.gov (United States)

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    2018-01-01

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind-scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and provide high-order accuracy.

  20. Reconciling controversies about the ‘global warming hiatus’

    Science.gov (United States)

    Medhaug, Iselin; Stolpe, Martin B.; Fischer, Erich M.; Knutti, Reto

    2017-05-01

    Between about 1998 and 2012, a time that coincided with political negotiations for preventing climate change, the surface of Earth seemed hardly to warm. This phenomenon, often termed the ‘global warming hiatus’, caused doubt in the public mind about how well anthropogenic climate change and natural variability are understood. Here we show that apparently contradictory conclusions stem from different definitions of ‘hiatus’ and from different datasets. A combination of changes in forcing, uptake of heat by the oceans, natural variability and incomplete observational coverage reconciles models and data. Combined with stronger recent warming trends in newer datasets, we are now more confident than ever that human influence is dominant in long-term warming.

  1. Defect correction and multigrid for an efficient and accurate computation of airfoil flows

    NARCIS (Netherlands)

    Koren, B.

    1988-01-01

    Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction

  2. Accurate Quantitation of Water-amide Proton Exchange Rates Using the Phase-Modulated CLEAN Chemical EXchange (CLEANEX-PM) Approach with a Fast-HSQC (FHSQC) Detection Scheme

    International Nuclear Information System (INIS)

    Hwang, Tsang-Lin; Zijl, Peter C.M. van; Mori, Susumu

    1998-01-01

    Measurement of exchange rates between water and NH protons by magnetization transfer methods is often complicated by artifacts, such as intramolecular NOEs and/or TOCSY transfer from Cα protons coincident with the water frequency, or exchange-relayed NOEs from fast exchanging hydroxyl or amine protons. By applying the Phase-Modulated CLEAN chemical EXchange (CLEANEX-PM) spin-locking sequence, 135°(x) 120°(-x) 110°(x) 110°(-x) 120°(x) 135°(-x), during the mixing period, these artifacts can be eliminated, revealing an unambiguous water-NH exchange spectrum. In this paper, the CLEANEX-PM mixing scheme is combined with Fast-HSQC (FHSQC) detection and used to obtain accurate chemical exchange rates from the initial slope analysis for a sample of 15N-labeled staphylococcal nuclease. The results are compared to rates obtained using Water EXchange filter (WEX) II-FHSQC and spin-echo-filtered WEX II-FHSQC measurements, and clearly identify the spurious NOE contributions in the exchange system.
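
    Independently of the pulse-sequence details, the initial-slope analysis mentioned above amounts to fitting the short-mixing-time buildup of the normalized exchange cross-peak intensity with a straight line through the origin; the sketch below does this with synthetic data and an assumed rate, purely to illustrate the numerical step.

      # Initial-slope estimate of a water-NH exchange rate: at short mixing times the
      # normalized buildup I(t)/I0 ~ k * t, so a zero-intercept least-squares fit over
      # the early points yields k.  Data are synthetic (true k assumed to be 8 s^-1).
      import numpy as np

      t_mix = np.array([0.005, 0.010, 0.015, 0.020, 0.030])                   # mixing times (s)
      k_true, r1 = 8.0, 2.0                                                    # assumed rate constants
      ratio = (k_true / (k_true + r1)) * (1 - np.exp(-(k_true + r1) * t_mix))  # toy buildup curve

      early = t_mix <= 0.02                                                    # keep the linear regime only
      k_est = np.sum(ratio[early] * t_mix[early]) / np.sum(t_mix[early] ** 2)  # slope through the origin
      print(f"estimated exchange rate: {k_est:.2f} s^-1")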

  3. A stable higher order space time Galerkin marching-on-in-time scheme

    KAUST Repository

    Pray, Andrew J.; Shanker, Balasubramaniam; Bagci, Hakan

    2013-01-01

    We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order

  4. An accurate reactive power control study in virtual flux droop control

    Science.gov (United States)

    Wang, Aimeng; Zhang, Jia

    2017-12-01

    This paper investigates the problem of reactive power sharing based on the virtual flux droop method. Firstly, the flux droop control method is derived, whereby complicated multiple feedback loops and parameter regulation are avoided. Then, the reasons for inaccurate reactive power sharing are theoretically analyzed. Further, a novel reactive power control scheme is proposed which consists of three parts: compensation control, voltage recovery control and flux droop control. Finally, the proposed reactive power control strategy is verified in a simplified microgrid model with two parallel DGs. The simulation results show that the proposed control scheme can achieve accurate reactive power sharing and zero voltage deviation. Meanwhile, it has the advantages of simple control and excellent dynamic and static performance.

  5. The Threat Detection System that Cried Wolf: Reconciling Developers with Operators

    Science.gov (United States)

    2017-01-01

    ... taking the chance that a true threat will not appear. This article reviews statistical concepts to reconcile the performance metrics that summarize a ... concepts are already well known within the statistics and human factors communities, they are not often immediately understood in the DoD and DHS

  6. Compact high order schemes with gradient-direction derivatives for absorbing boundary conditions

    Science.gov (United States)

    Gordon, Dan; Gordon, Rachel; Turkel, Eli

    2015-09-01

    We consider several compact high order absorbing boundary conditions (ABCs) for the Helmholtz equation in three dimensions. A technique called "the gradient method" (GM) for ABCs is also introduced and combined with the high order ABCs. GM is based on the principle of using directional derivatives in the direction of the wavefront propagation. The new ABCs are used together with the recently introduced compact sixth order finite difference scheme for variable wave numbers. Experiments on problems with known analytic solutions produced very accurate results, demonstrating the efficacy of the high order schemes, particularly when combined with GM. The new ABCs are then applied to the SEG/EAGE Salt model, showing the advantages of the new schemes.

  7. Numerical study of read scheme in one-selector one-resistor crossbar array

    Science.gov (United States)

    Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin

    2015-12-01

    A comprehensive numerical circuit analysis of the read schemes of a one selector-one resistance change memory (1S1R) crossbar array is carried out. Three schemes (the ground, V/2, and V/3 schemes) are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate entire current flows and node voltages within a crossbar array. Understanding such phenomena is essential in successfully evaluating the electrical specifications of selectors for suppressing intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.
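
    As a much-simplified illustration of why selector nonlinearity matters in such read schemes (ignoring line resistance and the full iterative nodal solve used in the paper), the sketch below estimates the selected-cell current and the aggregate half-selected sneak current of an N x N array under a V/2 bias scheme, with an assumed exponential selector characteristic.

      # Rough V/2 read-scheme estimate for an N x N one-selector-one-resistor (1S1R) array.
      # Line resistance and the full nodal iteration are ignored; the selector I-V is an
      # assumed exponential characteristic I(V) = I0 * (exp(V/V0) - 1).
      import numpy as np

      def selector_current(v, i0=1e-9, v0=0.15):
          return i0 * (np.exp(v / v0) - 1.0)

      def v_half_read(n, v_read=1.0):
          i_sel = selector_current(v_read)              # selected cell sees the full read voltage
          i_half = selector_current(v_read / 2.0)       # half-selected cells see V_read/2
          n_half = 2 * (n - 1)                          # half-selected cells on the selected row and column
          sneak = n_half * i_half
          return i_sel, sneak, i_sel / (i_sel + sneak)  # crude "signal fraction"

      for n in (64, 256, 1024):
          i_sel, sneak, frac = v_half_read(n)
          print(f"N={n:5d}  I_sel={i_sel:.2e} A  I_sneak={sneak:.2e} A  signal fraction={frac:.3f}")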

  8. Third Order Reconstruction of the KP Scheme for Model of River Tinnelva

    Directory of Open Access Journals (Sweden)

    Susantha Dissanayake

    2017-01-01

    Full Text Available The Saint-Venant equation/Shallow Water Equation is used to simulate flow of rivers, flow of liquid in an open channel, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic-type partial differential equations (PDEs), hence it can be used to solve the Saint-Venant equation. The KP scheme is semi-discrete: PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended into a 3rd order scheme while following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulation in order to check the suitability of the KP schemes to solve hyperbolic-type PDEs. The simulation results indicate that the 3rd order KP scheme shows better stability compared to the 2nd order scheme. Computational time for the 3rd order KP scheme for variable step-length ODE solvers in MATLAB is less than the computational time of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrators should essentially be lower than the order of the spatial discretization. However, for computation of abrupt step changes, the 2nd order KP scheme shows a more accurate solution.
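
    For orientation, a heavily simplified scalar relative of the semi-discrete central(-upwind) scheme can be written down in a few lines: a minmod-limited linear reconstruction feeds a Rusanov-type flux built from the local propagation speed, and the result is a spatial right-hand side to be passed to any ODE integrator. This is a generic second-order scalar sketch (Burgers flux, single local speed), not the authors' Saint-Venant implementation.

      # Generic second-order semi-discrete central (KP/KT-type, single local speed) right-hand
      # side for a scalar conservation law u_t + f(u)_x = 0 on a periodic grid, with a
      # minmod-limited linear reconstruction.  Burgers flux is used purely as an example.
      import numpy as np

      def minmod(a, b):
          return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def central_rhs(u, dx, f=lambda u: 0.5 * u**2, df=lambda u: u):
          du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))        # limited slopes
          u_left = u + 0.5 * du                                     # left state at interface i+1/2
          u_right = np.roll(u - 0.5 * du, -1)                       # right state at interface i+1/2
          a = np.maximum(np.abs(df(u_left)), np.abs(df(u_right)))   # local propagation speed
          flux = 0.5 * (f(u_left) + f(u_right)) - 0.5 * a * (u_right - u_left)
          return -(flux - np.roll(flux, 1)) / dx                    # semi-discrete RHS for an ODE solver

      x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      u, dx = np.sin(x) + 1.5, x[1] - x[0]
      for _ in range(200):                       # forward Euler only for brevity; an SSP Runge-Kutta
          u = u + 0.4 * dx / np.max(np.abs(u)) * central_rhs(u, dx)   # or MATLAB ODE solver is usual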

  9. Reconciling the Chinese Financial Development with its Economic Growth: A Discursive Essay

    OpenAIRE

    Maswana, Jean-Claude

    2005-01-01

    China's strong economic performance and its financial development outcomes are extremely difficult to reconcile with the dominant verdict that its financial system is seriously inefficient. Using an evolutionary perspective as a metaphor, this essay offered suggestions that adaptive efficiency criteria may help solve the apparent puzzle. An adaptive efficiency criterion offers conceptual as well as methodological approaches to resolving this puzzle and contradiction. The essay's discussions r...

  10. Preliminary Study of 1D Thermal-Hydraulic System Analysis Code Using the Higher-Order Numerical Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Woong; Lee, Jeong Ik [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    The existing nuclear system analysis codes such as RELAP5, TRAC, MARS and SPACE use a first-order numerical scheme in both the space and time discretizations. However, the first-order scheme is highly diffusive and less accurate due to the first-order truncation error. Thus, a numerical diffusion problem, which smooths out gradients in regions where they should be steep, can occur during the analysis, which often leads to less conservative predictions than the reality. Therefore, the first-order scheme is not always useful in many applications such as boron solute transport. RELAP7, an advanced nuclear reactor system safety analysis code using a second-order numerical scheme in the temporal and spatial discretizations, has been under development at INL (Idaho National Laboratory) since 2011. Therefore, for better predictive performance on the safety of nuclear reactor systems, a more accurate nuclear reactor system analysis code is also needed in Korea, to follow the global trend of nuclear safety analysis. Thus, this study evaluates the feasibility of applying a higher-order numerical scheme to the next-generation nuclear system analysis code to provide the basis for the development of a better nuclear system analysis code. With the spatial second-order scheme, the accuracy is enhanced and the numerical diffusion problem is alleviated, but the scheme exhibits a significantly lower maximum Courant limit and a numerical dispersion issue which produces spurious oscillations and non-physical results. If the spatial scheme is first order, then the temporal second-order scheme provides almost the same result as the temporal first-order scheme. However, when the temporal second-order scheme and the spatial second-order scheme are applied together, the numerical dispersion can occur more severely. For a more in-depth study, the verification and validation of the NTS code built in MATLAB will be conducted further and expanded to handle two

  11. DSMC-LBM mapping scheme for rarefied and non-rarefied gas flows

    NARCIS (Netherlands)

    Di Staso, G.; Clercx, H.J.H.; Succi, S.; Toschi, F.

    2016-01-01

    We present the formulation of a kinetic mapping scheme between the Direct Simulation Monte Carlo (DSMC) and the Lattice Boltzmann Method (LBM) which is at the basis of the hybrid model used to couple the two methods in view of efficiently and accurately simulate isothermal flows characterized by

  12. Secure and Privacy-Preserving Body Sensor Data Collection and Query Scheme

    Directory of Open Access Journals (Sweden)

    Hui Zhu

    2016-02-01

    Full Text Available With the development of body sensor networks and the pervasiveness of smart phones, different types of personal data can be collected in real time by body sensors, and the potential value of massive personal data has attracted considerable interest recently. However, the privacy issues of sensitive personal data are still challenging today. Aiming at these challenges, in this paper, we focus on the threats from telemetry interface and present a secure and privacy-preserving body sensor data collection and query scheme, named SPCQ, for outsourced computing. In the proposed SPCQ scheme, users’ personal information is collected by body sensors in different types and converted into multi-dimension data, and each dimension is converted into the form of a number and uploaded to the cloud server, which provides a secure, efficient and accurate data query service, while the privacy of sensitive personal information and users’ query data is guaranteed. Specifically, based on an improved homomorphic encryption technology over composite order group, we propose a special weighted Euclidean distance contrast algorithm (WEDC) for multi-dimension vectors over encrypted data. With the SPCQ scheme, the confidentiality of sensitive personal data, the privacy of data users’ queries and accurate query service can be achieved in the cloud server. Detailed analysis shows that SPCQ can resist various security threats from telemetry interface. In addition, we also implement SPCQ on an embedded device, smart phone and laptop with a real medical database, and extensive simulation results demonstrate that our proposed SPCQ scheme is highly efficient in terms of computation and communication costs.

  13. Secure and Privacy-Preserving Body Sensor Data Collection and Query Scheme.

    Science.gov (United States)

    Zhu, Hui; Gao, Lijuan; Li, Hui

    2016-02-01

    With the development of body sensor networks and the pervasiveness of smart phones, different types of personal data can be collected in real time by body sensors, and the potential value of massive personal data has attracted considerable interest recently. However, the privacy issues of sensitive personal data are still challenging today. Aiming at these challenges, in this paper, we focus on the threats from telemetry interface and present a secure and privacy-preserving body sensor data collection and query scheme, named SPCQ, for outsourced computing. In the proposed SPCQ scheme, users' personal information is collected by body sensors in different types and converted into multi-dimension data, and each dimension is converted into the form of a number and uploaded to the cloud server, which provides a secure, efficient and accurate data query service, while the privacy of sensitive personal information and users' query data is guaranteed. Specifically, based on an improved homomorphic encryption technology over composite order group, we propose a special weighted Euclidean distance contrast algorithm (WEDC) for multi-dimension vectors over encrypted data. With the SPCQ scheme, the confidentiality of sensitive personal data, the privacy of data users' queries and accurate query service can be achieved in the cloud server. Detailed analysis shows that SPCQ can resist various security threats from telemetry interface. In addition, we also implement SPCQ on an embedded device, smart phone and laptop with a real medical database, and extensive simulation results demonstrate that our proposed SPCQ scheme is highly efficient in terms of computation and communication costs.

  14. A second-order iterative implicit-explicit hybrid scheme for hyperbolic systems of conservation laws

    International Nuclear Information System (INIS)

    Dai, Wenlong; Woodward, P.R.

    1996-01-01

    An iterative implicit-explicit hybrid scheme is proposed for hyperbolic systems of conservation laws. Each wave in a system may be implicitly, or explicitly, or partially implicitly and partially explicitly treated depending on its associated Courant number in each numerical cell, and the scheme is able to smoothly switch between implicit and explicit calculations. The scheme is of Godunov-type in both explicit and implicit regimes, is in a strict conservation form, and is accurate to second-order in both space and time for all Courant numbers. The computer code for the scheme is easy to vectorize. Multicolors proposed in this paper may reduce the number of iterations required to reach a converged solution by several orders for a large time step. The feature of the scheme is shown through numerical examples. 38 refs., 12 figs
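
    A toy version of the idea of switching between implicit and explicit treatment cell by cell can be written for 1D upwind advection on a nonuniform grid: cells whose Courant number exceeds one receive a (partially) implicit update, the rest an explicit one. The blending rule used below is a common heuristic and the setup is illustrative; it is not the paper's Godunov-type multi-wave formulation.

      # Per-cell implicit/explicit blend for 1D upwind advection on a nonuniform grid.
      # theta_i = clip(1 - 1/c_i, 0, 1) is a heuristic: explicit where the Courant number
      # c_i <= 1, increasingly implicit where c_i > 1.  Periodic boundary conditions.
      import numpy as np

      rng = np.random.default_rng(0)
      a, dt, n = 1.0, 0.05, 120
      dx = 0.02 + 0.08 * rng.random(n)              # nonuniform cells -> Courant numbers from ~0.5 to ~2.5
      c = a * dt / dx                                # per-cell Courant number
      theta = np.clip(1.0 - 1.0 / c, 0.0, 1.0)       # implicit fraction per cell

      u = np.exp(-((np.cumsum(dx) - 1.0) / 0.2) ** 2)    # initial Gaussian profile

      # Solve (I + theta*c*L) u^{n+1} = u^n - (1-theta)*c*L u^n, L the first-order upwind operator.
      A = np.eye(n)
      for i in range(n):
          A[i, i] += c[i] * theta[i]
          A[i, i - 1] -= c[i] * theta[i]             # index -1 wraps around (periodic)
      for _ in range(50):
          explicit_part = u - c * (1.0 - theta) * (u - np.roll(u, 1))
          u = np.linalg.solve(A, explicit_part)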

  15. Parallel computation of fluid-structural interactions using high resolution upwind schemes

    Science.gov (United States)

    Hu, Zongjun

    An efficient and accurate solver is developed to simulate the non-linear fluid-structural interactions in turbomachinery flutter flows. A new low diffusion E-CUSP scheme, the Zha CUSP scheme, is developed to improve the efficiency and accuracy of the inviscid flux computation. The 3D unsteady Navier-Stokes equations with the Baldwin-Lomax turbulence model are solved using the finite volume method with the dual-time stepping scheme. The linearized equations are solved with Gauss-Seidel line iterations. The parallel computation is implemented using the MPI protocol. The solver is validated with 2D cases for its turbulence modeling, parallel computation and unsteady calculation. The Zha CUSP scheme is validated with 2D cases, including a supersonic flat plate boundary layer, a transonic converging-diverging nozzle and a transonic inlet diffuser. The Zha CUSP2 scheme is tested with 3D cases, including a circular-to-rectangular nozzle, a subsonic compressor cascade and a transonic channel. The Zha CUSP schemes are proved to be accurate, robust and efficient in these tests. The steady and unsteady separation flows in a 3D stationary cascade under high incidence and three inlet Mach numbers are calculated to study the steady state separation flow patterns and their unsteady oscillation characteristics. The leading edge vortex shedding is the mechanism behind the unsteady characteristics of the high incidence separated flows. The separation flow characteristics are affected by the inlet Mach number. The blade aeroelasticity of a linear cascade with forced oscillating blades is studied using parallel computation. A simplified two-passage cascade with periodic boundary conditions is first calculated under a medium frequency and a low incidence. The full scale cascade with 9 blades and two end walls is then studied more extensively under three oscillation frequencies and two incidence angles. The end wall influence and the blade stability are studied and compared under different

  16. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad; Valstar, Johan R.; Hoteit, Ibrahim

    2014-01-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing to reduce the ensemble size by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
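
    The hybrid flavour of such a scheme can be conveyed with a very small stand-alone example: the background covariance used in the Kalman update is a weighted blend of a rank-deficient ensemble covariance and a static, OI-style covariance. Everything below (state, observations, covariances, blending weight) is an illustrative assumption, not the paper's reactive-transport setup.

      # Minimal hybrid EnKF-OI style update for a 1-D state observed at a few points:
      # the background covariance blends the ensemble covariance with a static OI-type
      # covariance before forming the Kalman gain.  All numbers are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      nx, ne = 50, 10                                        # state size, (small) ensemble size
      truth = np.sin(np.linspace(0, 2 * np.pi, nx))
      ens = truth[None, :] + rng.normal(0, 0.5, (ne, nx))    # perturbed prior ensemble

      obs_idx = np.arange(0, nx, 10)                         # observe every 10th grid point
      H = np.zeros((obs_idx.size, nx)); H[np.arange(obs_idx.size), obs_idx] = 1.0
      R = 0.1 * np.eye(obs_idx.size)
      y = H @ truth + rng.normal(0, 0.1, obs_idx.size)

      P_ens = np.cov(ens.T)                                  # rank-deficient for ne << nx
      dist = np.abs(np.subtract.outer(np.arange(nx), np.arange(nx)))
      B_static = 0.25 * np.exp(-dist / 5.0)                  # static OI-style covariance (assumption)
      alpha = 0.6                                            # blending weight (assumption)
      P = alpha * P_ens + (1 - alpha) * B_static

      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # hybrid Kalman gain
      prior_rmse = np.sqrt(np.mean((ens.mean(0) - truth) ** 2))
      ens = ens + (y + rng.normal(0, 0.1, (ne, obs_idx.size)) - ens @ H.T) @ K.T
      print(prior_rmse, np.sqrt(np.mean((ens.mean(0) - truth) ** 2)))   # RMSE before vs. after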

  17. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-09-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing to reduce the ensemble size by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.

  18. A study to reduce the numerical diffusion of upwind scheme in two dimensional convection phenomena analysis

    International Nuclear Information System (INIS)

    Lee, Goung Jin; Kim, Soong Pyung

    1990-01-01

    In solving convection-diffusion phenomena, it is common to use the central difference scheme or the upwind scheme. The central difference scheme has second-order accuracy, while the upwind scheme is only first-order accurate. However, since the variation arising in the convection-diffusion problem is exponential, the central difference scheme ceases to be a good method for anything but extremely small values of Δx. At large values of Δx, which is all one can afford in most practical problems, it is the upwind scheme that gives more reasonable results than the central scheme. But in the conventional upwind scheme, since the accuracy is only first order, false diffusion is somewhat large, and when the real diffusion is smaller than the numerical diffusion, solutions may be very erroneous. So in this paper, a method to reduce the numerical diffusion of the upwind scheme is studied. The developed scheme uses the same number of nodes as the conventional upwind scheme, but it accounts for the direction of the flow more carefully. In conclusion, the developed scheme shows very good results. It can reduce false diffusion greatly at the cost of a small increase in complexity. Also, the algorithm of the developed scheme is presented in an appendix. (Author)
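
    The trade-off described above is easy to reproduce: for unsteady 1D convection-diffusion, first-order upwinding of the convective term adds an effective (false) diffusion of roughly a*Δx/2, while central differencing is second-order accurate but prone to oscillations once the cell Peclet number a*Δx/D exceeds about 2. The sketch below, with arbitrary parameters, advances the same initial pulse with both convection discretizations.

      # Numerical-diffusion demo for u_t + a u_x = D u_xx on a periodic grid: first-order
      # upwind convection smears the pulse (false diffusion ~ a*dx/2), while second-order
      # central convection preserves the peak better.  Parameters are arbitrary.
      import numpy as np

      a, D, n, dt, steps = 1.0, 1e-3, 200, 2e-4, 2000
      x = np.linspace(0.0, 1.0, n, endpoint=False)
      dx = x[1] - x[0]
      print("cell Peclet number:", a * dx / D)

      def advance(convection):
          u = np.exp(-((x - 0.2) / 0.05) ** 2)               # initial Gaussian pulse
          for _ in range(steps):
              if convection == "upwind":
                  conv = a * (u - np.roll(u, 1)) / dx        # first-order upwind (a > 0)
              else:
                  conv = a * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # central difference
              diff = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
              u = u + dt * (diff - conv)
          return u

      u_up, u_cd = advance("upwind"), advance("central")
      print("peak after transport: upwind %.3f, central %.3f" % (u_up.max(), u_cd.max()))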

  19. A Note on Symplectic, Multisymplectic Scheme in Finite Element Method

    Institute of Scientific and Technical Information of China (English)

    GUO Han-Ying; JI Xiao-Mei; LI Yu-Qi; WU Ke

    2001-01-01

    We find that with a uniform mesh, the numerical schemes derived from the finite element method can keep a preserved symplectic structure in the one-dimensional case and a preserved multisymplectic structure in the two-dimensional case, respectively. These results are in fact the intrinsic reason why the numerical experiments show that such finite element algorithms are accurate in practice.

  20. TE/TM alternating direction scheme for wake field calculation in 3D

    Energy Technology Data Exchange (ETDEWEB)

    Zagorodnov, Igor [Institut fuer Theorie Elektromagnetischer Felder (TEMF), Technische Universitaet Darmstadt, Schlossgartenstrasse 8, D-64289 Darmstadt (Germany)]. E-mail: zagor@temf.de; Weiland, Thomas [Institut fuer Theorie Elektromagnetischer Felder (TEMF), Technische Universitaet Darmstadt, Schlossgartenstrasse 8, D-64289 Darmstadt (Germany)

    2006-03-01

    In the future, accelerators with very short bunches will be used. This demands the development of new numerical approaches for the long-time calculation of electromagnetic fields in the vicinity of relativistic bunches. The conventional FDTD scheme, used in MAFIA, ABCI and other wake and PIC codes, suffers from numerical grid dispersion and the staircase approximation problem. As an effective cure for the dispersion problem, a numerical scheme without dispersion in the longitudinal direction can be used, as was shown by Novokhatski et al. [Transition dynamics of the wake fields of ultrashort bunches, TESLA Report 2000-03, DESY, 2000] and Zagorodnov et al. [J. Comput. Phys. 191 (2003) 525]. In this paper, a new economical conservative scheme for short-range wake field calculation in 3D is presented. As numerical examples show, the new scheme is much more accurate on long time scales than the conventional FDTD approach.

  1. Secure Dynamic access control scheme of PHR in cloud computing.

    Science.gov (United States)

    Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching

    2012-12-01

    With the development of information technology and medical technology, medical information has evolved from traditional paper records into electronic medical records, which are now widely used. A new style of medical information exchange system, the "personal health record (PHR)", is gradually being developed. A PHR is a kind of health record maintained and recorded by the individual. An ideal personal health record could integrate personal medical information from different sources and provide a complete and correct personal health and medical summary through the Internet or portable media under the requirements of security and privacy. A lot of personal health records are being utilized. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records. Such management is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved towards storing data on Cloud servers so that resources can be flexibly utilized and operation costs can be reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud. Besides, a secure protection scheme is required to encrypt the medical records of each patient before storing the PHR on a Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while retaining flexibility and efficiency. A new PHR access control scheme for Cloud computing environments is proposed in this study. Using a Lagrange interpolation polynomial to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for a large number of users. Moreover, this scheme dynamically supports multiple users in Cloud computing environments while preserving personal privacy, and offers legal authorities access to PHRs. From security and effectiveness analyses, the proposed PHR access
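
    The Lagrange-interpolation ingredient mentioned above is the same mechanism used in polynomial threshold secret sharing: a secret value (for instance a record-encryption key) becomes the constant term of a random polynomial over a prime field, and any t shares recover it by interpolating at zero. The sketch below shows that generic mechanism only; it is not the paper's full PHR access-control construction, and the prime and key are toy values.

      # Generic (t, n) threshold sharing of a key via Lagrange interpolation over a prime
      # field -- the core mechanism behind polynomial-based access schemes.  Toy values only.
      import random

      P = 2_147_483_647                     # a Mersenne prime, large enough for a demo

      def make_shares(secret, t, n):
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):              # Lagrange interpolation evaluated at x = 0
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num = den = 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(secret=123456789, t=3, n=5)
      print(reconstruct(shares[:3]))        # any 3 of the 5 shares recover the key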

  2. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Science.gov (United States)

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation for a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods for amino acids and with G3 results for barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  3. A comparative study of upwind and MacCormack schemes for CAA benchmark problems

    Science.gov (United States)

    Viswanathan, K.; Sankar, L. N.

    1995-01-01

    In this study, upwind schemes and MacCormack schemes are evaluated as to their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are finally presented.

  4. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service.

    Science.gov (United States)

    Zou, Han; Jiang, Hao; Luo, Yiwen; Zhu, Jianjie; Lu, Xiaoxuan; Xie, Lihua

    2016-02-22

    The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize the information, as well. The emerging mobile devices, which are equipped with various sensors, have become a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on the mobile device based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on/off on-board power-hungry sensors smartly and automatically, optimize their performance and reduce the power consumption of mobile devices simultaneously. Moreover, seamless positioning and navigation services can be realized by it, especially in a semi-outdoor environment, which cannot be achieved by GPS or an indoor positioning system (IPS) easily. We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results have validated the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption.

  5. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service

    Directory of Open Access Journals (Sweden)

    Han Zou

    2016-02-01

    Full Text Available The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize the information, as well. The emerging mobile devices, which are equipped with various sensors, have become a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast-response and energy-efficient scheme for IO detection and seamless LBS running on the mobile device based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on/off on-board power-hungry sensors smartly and automatically, optimize their performance and reduce the power consumption of mobile devices simultaneously. Moreover, seamless positioning and navigation services can be realized by it, especially in a semi-outdoor environment, which cannot be achieved by GPS or an indoor positioning system (IPS) easily. We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results have validated the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption.

  6. A GPS Sensing Strategy for Accurate and Energy-Efficient Outdoor-to-Indoor Handover in Seamless Localization Systems

    Directory of Open Access Journals (Sweden)

    Yungeun Kim

    2012-01-01

    Full Text Available Indoor localization systems typically locate users in their own local coordinates, while outdoor localization systems use global coordinates. To achieve seamless localization from outdoors to indoors, a handover technique that accurately provides a starting position to the indoor localization system is needed. However, existing schemes assume that a starting position is known a priori or use a naïve approach that takes the last location obtained from GPS as the handover point. In this paper, we propose an accurate handover scheme that monitors the signal-to-noise ratio (SNR) of the effective GPS satellites that are selected according to their altitude. We also propose an energy-efficient handover mechanism that reduces the GPS sampling interval gradually. Accuracy and energy efficiency are experimentally validated with GPS logs obtained in real life.
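
    A hedged sketch of the handover logic described above (thresholds, the notion of "effective" satellites and the interval-adaptation rule are all illustrative assumptions, not the paper's calibrated values) could look as follows: average the SNR of high-elevation satellites, declare the outdoor-to-indoor handover after a few consecutive low readings, and stretch the GPS sampling interval while readings stay healthy.

      # Outdoor-to-indoor handover sketch: average the SNR of high-elevation ("effective")
      # satellites and declare a handover once it stays below a threshold for a few fixes.
      # The sampling interval backs off while readings are stable.  Values are illustrative.
      from dataclasses import dataclass

      @dataclass
      class Fix:
          elevations: list      # satellite elevation angles (degrees)
          snrs: list            # matching SNR values (dB-Hz)

      def effective_snr(fix, min_elev=60.0):
          vals = [s for e, s in zip(fix.elevations, fix.snrs) if e >= min_elev]
          return sum(vals) / len(vals) if vals else 0.0

      class HandoverDetector:
          def __init__(self, snr_thresh=28.0, needed=3):
              self.snr_thresh, self.needed, self.low_count = snr_thresh, needed, 0
              self.interval = 1.0                              # seconds between GPS fixes

          def update(self, fix):
              snr = effective_snr(fix)
              self.low_count = self.low_count + 1 if snr < self.snr_thresh else 0
              indoor = self.low_count >= self.needed
              # back off sampling while stable, react quickly once the SNR starts dropping
              self.interval = min(self.interval * 2, 30.0) if self.low_count == 0 else 1.0
              return indoor, self.interval

      detector = HandoverDetector()
      print(detector.update(Fix(elevations=[75, 62, 30], snrs=[21.0, 23.5, 40.0])))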

  7. Accelerating Monte Carlo Molecular Simulations Using Novel Extrapolation Schemes Combined with Fast Database Generation on Massively Parallel Machines

    KAUST Repository

    Amir, Sahar Z.

    2013-05-01

    We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized Canonical NVT ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chains points is implemented and compared with previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than the classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency. In particular, this makes it applicable to large-scale optimization of L
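
    The reweighting idea can be illustrated independently of the Lennard-Jones databases: given configurations (their energies and an observable) stored from a simulation at one inverse temperature, averages at a neighbouring temperature follow from exponential reweighting of the stored samples rather than from a new simulation. The sketch below applies the standard single-histogram formula to synthetic stand-in data.

      # Single-histogram reweighting sketch: energy samples E and an observable A drawn at
      # inverse temperature beta0 are reused to estimate <A> at a nearby beta.  The samples
      # here are synthetic stand-ins for stored Markov-chain data.
      import numpy as np

      rng = np.random.default_rng(0)
      E = rng.normal(loc=-500.0, scale=20.0, size=100_000)    # toy energies sampled at beta0
      A = E / 100.0 + rng.normal(scale=0.1, size=E.size)      # some correlated toy observable

      def reweight(E, A, beta0, beta_new):
          w = np.exp(-(beta_new - beta0) * (E - E.mean()))    # shift by the mean for numerical safety
          return np.sum(w * A) / np.sum(w)

      beta0 = 1.0
      for beta in (0.98, 1.00, 1.02):
          print(beta, reweight(E, A, beta0, beta))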

  8. On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Wang Daoshun

    2010-01-01

    Full Text Available Traditional Secret Sharing (SS) schemes reconstruct the secret exactly the same as the original one but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however, the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale -SS scheme to the more general case of a greyscale -VSS scheme.
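
    For the binary-image case the conversion idea can be pictured as follows: shares are built so that their XOR equals the secret (exact recovery with computation), while simply stacking them with OR, as in visual cryptography, still separates black from white on average. The (n, n) construction below is a generic illustration, not the paper's general matrix-concatenation method for greyscale images.

      # (n, n) XOR sharing of a binary image with two reconstructions: XOR of all shares
      # recovers the secret exactly, while OR "stacking" (as in visual secret sharing)
      # recovers it only probabilistically -- every black pixel stays black, a white pixel
      # stays white with probability 2^-(n-1).  Generic illustration only.
      import numpy as np

      rng = np.random.default_rng(1)
      secret = (rng.random((8, 8)) < 0.3).astype(np.uint8)    # toy binary image, 1 = black

      def make_shares(secret, n):
          shares = [rng.integers(0, 2, secret.shape, dtype=np.uint8) for _ in range(n - 1)]
          last = secret.copy()
          for s in shares:
              last ^= s                                       # ensure XOR of all shares = secret
          return shares + [last]

      shares = make_shares(secret, n=3)
      xor_rec = np.bitwise_xor.reduce(shares)                 # exact reconstruction
      or_rec = np.bitwise_or.reduce(shares)                   # stacked reconstruction, lower contrast
      print("exact recovery:", np.array_equal(xor_rec, secret))
      print("white pixels kept white by OR:", np.mean(or_rec[secret == 0] == 0))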

  9. Assessment of the reduction methods used to develop chemical schemes: building of a new chemical scheme for VOC oxidation suited to three-dimensional multiscale HOx-NOx-VOC chemistry simulations

    Directory of Open Access Journals (Sweden)

    S. Szopa

    2005-01-01

    Full Text Available The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic carbon (VOC) compounds. The procedure is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation (see companion paper Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of resulting errors based on direct comparison between the reduced and full schemes. The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) use of operators, based on the redundancy of the reaction sequences involved in the VOC oxidation, (ii) grouping of primary species having similar reactivities into surrogate species and (iii) grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, this being small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.

  10. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    Science.gov (United States)

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data that typically contains many 3-D volumes acquired over several breathing cycles is extremely tedious, time consuming, and suffers from high user variability. This requires the development of new automated segmentation schemes for 4-D MRI data segmentation. The registration-based segmentation technique, which uses automatic registration methods for segmentation, has been shown to be an accurate method to segment structures in 4-D data series. However, directly applying registration-based segmentation to segment 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme that is based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme saves up to 95% of the computation while achieving comparably accurate segmentations relative to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of the lung and tumor motion and potentially the tracking of the tumor during radiation delivery.

  11. Explicit solution of the time domain magnetic field integral equation using a predictor-corrector scheme

    KAUST Repository

    Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric

    2012-01-01

    An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed form expressions and (ii) a PE(CE)^m type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.

  12. Explicit solution of the time domain magnetic field integral equation using a predictor-corrector scheme

    KAUST Repository

    Ulku, Huseyin Arda

    2012-09-01

    An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed form expressions and (ii) a PE(CE)^m type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.

  13. Controlled braking scheme for a wheeled walking aid

    OpenAIRE

    Coyle, Eugene; O'Dwyer, Aidan; Young, Eileen; Sullivan, Kevin; Toner, A.

    2006-01-01

    A wheeled walking aid with an embedded controlled braking system is described. The frame of the prototype is based on combining features of standard available wheeled walking aids. A braking scheme has been designed using hydraulic disc brakes to facilitate accurate and sensitive controlled stopping of the walker by the user, and if called upon, by automatic action. Braking force is modulated via a linear actuating stepping motor. A microcontroller is used for control of both stepper movement...

  14. A comparison of resampling schemes for estimating model observer performance with small ensembles

    Science.gov (United States)

    Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.

    2017-09-01

    In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.
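
    The flavour of the comparison can be reproduced with a small synthetic experiment: a Hotelling-type linear observer is trained and tested either on split halves of the ensemble (HT/HT) or by leaving one sample out at a time (LOO), and the AUC is estimated from the resulting test scores. The channel data, class separation and observer below are simple stand-ins, not the paper's CHO/CLD observers or SPECT images.

      # LOO vs. half-train/half-test AUC estimation for a Hotelling-type linear observer on
      # synthetic 8-channel Gaussian data.  Everything here is an illustrative stand-in.
      import numpy as np

      rng = np.random.default_rng(2)
      d, n = 8, 40                                       # channels, samples per class
      mu = np.full(d, 0.4)                               # class separation (assumption)
      neg, pos = rng.normal(0, 1, (n, d)), rng.normal(mu, 1, (n, d))

      def hotelling(neg, pos):
          s = 0.5 * (np.cov(neg.T) + np.cov(pos.T))      # pooled channel covariance
          return np.linalg.solve(s, pos.mean(0) - neg.mean(0))

      def auc(scores_neg, scores_pos):                   # Mann-Whitney estimate of the AUC
          diff = scores_pos[:, None] - scores_neg[None, :]
          return (diff > 0).mean() + 0.5 * (diff == 0).mean()

      # HT/HT: train on one half of the ensemble, test on the other half
      w = hotelling(neg[: n // 2], pos[: n // 2])
      auc_ht = auc(neg[n // 2 :] @ w, pos[n // 2 :] @ w)

      # LOO: leave one sample per class out at a time, score the held-out samples
      s_neg, s_pos = [], []
      for i in range(n):
          keep = np.arange(n) != i
          w = hotelling(neg[keep], pos[keep])
          s_neg.append(neg[i] @ w); s_pos.append(pos[i] @ w)
      auc_loo = auc(np.array(s_neg), np.array(s_pos))
      print(f"AUC  HT/HT: {auc_ht:.3f}   LOO: {auc_loo:.3f}")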

  15. An unstaggered central scheme on nonuniform grids for the simulation of a compressible two-phase flow model

    Energy Technology Data Exchange (ETDEWEB)

    Touma, Rony [Department of Computer Science & Mathematics, Lebanese American University, Beirut (Lebanon); Zeidan, Dia [School of Basic Sciences and Humanities, German Jordanian University, Amman (Jordan)

    2016-06-08

    In this paper we extend a central finite volume method on nonuniform grids to the case of drift-flux two-phase flow problems. The numerical base scheme is an unstaggered, non oscillatory, second-order accurate finite volume scheme that evolves a piecewise linear numerical solution on a single grid and uses dual cells intermediately while updating the numerical solution to avoid the resolution of the Riemann problems arising at the cell interfaces. We then apply the numerical scheme and solve a classical drift-flux problem. The obtained results are in good agreement with corresponding ones appearing in the recent literature, thus confirming the potential of the proposed scheme.

  16. Asymptotic diffusion limit of cell temperature discretisation schemes for thermal radiation transport

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, Richard P., E-mail: richard.smedley-stevenson@awe.co.uk [AWE PLC, Aldermaston, Reading, Berkshire, RG7 4PR (United Kingdom); Department of Earth Science and Engineering, Imperial College London, SW7 2AZ (United Kingdom); McClarren, Ryan G., E-mail: rmcclarren@ne.tamu.edu [Department of Nuclear Engineering, Texas A & M University, College Station, TX 77843-3133 (United States)

    2015-04-01

    This paper attempts to unify the asymptotic diffusion limit analysis of thermal radiation transport schemes, for a linear-discontinuous representation of the material temperature reconstructed from cell centred temperature unknowns, in a process known as ‘source tilting’. The asymptotic limits of both Monte Carlo (continuous in space) and deterministic approaches (based on linear-discontinuous finite elements) for solving the transport equation are investigated in slab geometry. The resulting discrete diffusion equations are found to have nonphysical terms that are proportional to any cell-edge discontinuity in the temperature representation. Based on this analysis it is possible to design accurate schemes for representing the material temperature, for coupling thermal radiation transport codes to a cell centred representation of internal energy favoured by ALE (arbitrary Lagrange–Eulerian) hydrodynamics schemes.

  17. Asymptotic diffusion limit of cell temperature discretisation schemes for thermal radiation transport

    International Nuclear Information System (INIS)

    Smedley-Stevenson, Richard P.; McClarren, Ryan G.

    2015-01-01

    This paper attempts to unify the asymptotic diffusion limit analysis of thermal radiation transport schemes, for a linear-discontinuous representation of the material temperature reconstructed from cell centred temperature unknowns, in a process known as ‘source tilting’. The asymptotic limits of both Monte Carlo (continuous in space) and deterministic approaches (based on linear-discontinuous finite elements) for solving the transport equation are investigated in slab geometry. The resulting discrete diffusion equations are found to have nonphysical terms that are proportional to any cell-edge discontinuity in the temperature representation. Based on this analysis it is possible to design accurate schemes for representing the material temperature, for coupling thermal radiation transport codes to a cell centred representation of internal energy favoured by ALE (arbitrary Lagrange–Eulerian) hydrodynamics schemes

  18. A strong shock tube problem calculated by different numerical schemes

    Science.gov (United States)

    Lee, Wen Ho; Clancy, Sean P.

    1996-05-01

    Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) MESA code, (2) UNICORN code, (3) Schulz hydro, and (4) modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. MESA and UNICORN codes are both of second order and use different monotonic advection methods to avoid the Gibbs phenomena. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10^9 and density ratio of 10^3 in an ideal gas. For the non-mass-matching case, Schulz hydro is better than the TVD scheme. In the case of mass-matching, there is no difference between them. MESA and UNICORN results are nearly the same. However, the computed positions such as the contact discontinuity (i.e. the material interface) are not as accurate as those of the Lagrangian methods.

  19. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    Science.gov (United States)

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid the excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from the sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
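
    As a small illustration of the nonlinear relation between conserved and state variables mentioned above, the sketch below (a standard textbook procedure, not necessarily the authors' implementation) recovers the primitive variables (rho, v, p) from the SRHD conserved variables (D, S, tau) in a single cell by a Newton iteration on the pressure, assuming an ideal-gas equation of state.

      import numpy as np

      GAMMA = 5.0 / 3.0   # assumed ideal-gas adiabatic index

      def primitives_from_conserved(D, S, tau, p0=1.0, tol=1e-12, itmax=50):
          def residual(p):
              v = S / (tau + D + p)             # velocity from the momentum density
              W = 1.0 / np.sqrt(1.0 - v * v)    # Lorentz factor
              rho = D / W
              eps = (tau + D + p) / (D * W) - 1.0 - p * W / D   # specific internal energy
              return (GAMMA - 1.0) * rho * eps - p, rho, v
          p = p0
          for _ in range(itmax):
              f, _, _ = residual(p)
              dp = 1e-8 * max(abs(p), 1.0)
              f2, _, _ = residual(p + dp)
              p_new = p - f * dp / (f2 - f)     # Newton step with a numerical derivative
              if abs(p_new - p) < tol * max(abs(p), 1.0):
                  p = p_new
                  break
              p = p_new
          _, rho, v = residual(p)
          return rho, v, p

      # check: rho = 1, v = 0, p = 1 gives D = 1, S = 0, tau = 1.5 for GAMMA = 5/3
      print(primitives_from_conserved(1.0, 0.0, 1.5))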

  20. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Matthew Almond Sochor

    2014-07-01

    A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate the in vivo binding data. We therefore conclude that the repressor–DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques that are specifically designed to mimic the in vivo process may be necessary to replicate the native system.

  1. Female entrepreneurship and challenges faced by women entrepreneurs to reconcile conflicts between work and family: multiple case study in travel agencies

    Directory of Open Access Journals (Sweden)

    Rivanda Meira Teixeira

    2016-03-01

    Women have gained more and more space in various professional areas, and this development has also occurred in the field of entrepreneurship. In Brazil, GEM 2013 identified for the first time that the number of new women entrepreneurs was higher than that of male entrepreneurs. However, it is recognized that women entrepreneurs face many difficulties when trying to reconcile their companies with the family. The main objective of this research is to analyse the challenges faced by women entrepreneurs of travel agencies in reconciling the conflict between work and family. This study adopted the multiple case research strategy, and seven women who created and manage travel agencies in the cities of Aracaju and Barra dos Coqueiros, in the state of Sergipe (east coast of Brazil), were selected. In attempting to reconcile these multiple roles, these women often face frustration and guilt. At such moments, the emotional support of husbands and children becomes important. It is noticeable that the search for balance between the conflicting demands generates emotional and/or physical distress.

  2. A New Grünwald-Letnikov Derivative Derived from a Second-Order Scheme

    Directory of Open Access Journals (Sweden)

    B. A. Jacobs

    2015-01-01

    A novel derivation of a second-order accurate Grünwald-Letnikov-type approximation to the fractional derivative of a function is presented. This scheme is shown to be second-order accurate under certain modifications to account for poor accuracy in approximating the asymptotic behavior near the lower limit of differentiation. Some example functions are chosen and numerical results are presented to illustrate the efficacy of this new method over some other popular choices for discretizing fractional derivatives.
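
    For orientation, a minimal sketch of the classical (first-order) Grünwald-Letnikov approximation with the usual recursive binomial weights is given below; the second-order variant derived in the paper, and its corrections near the lower limit of differentiation, are not reproduced here.

      import numpy as np

      def gl_fractional_derivative(f_vals, alpha, h):
          """f_vals: samples f(t0), f(t0+h), ..., f(t); returns D^alpha f at the last sample."""
          n = len(f_vals) - 1
          w = np.empty(n + 1)
          w[0] = 1.0
          for k in range(1, n + 1):                        # recursive binomial weights
              w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
          return h ** (-alpha) * np.dot(w, f_vals[::-1])   # sum_k w_k f(t - k h)

      # example: D^0.5 of f(t) = t on [0, 1]; the exact value at t = 1 is 2/sqrt(pi)
      t = np.linspace(0.0, 1.0, 2001)
      print(gl_fractional_derivative(t, 0.5, t[1] - t[0]), 2.0 / np.sqrt(np.pi))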

  3. A Hierarchical Control Scheme for Reactive Power and Harmonic Current Sharing in Islanded Microgrids

    DEFF Research Database (Denmark)

    Lorzadeh, Iman; Firoozabadi, Mehdi Savaghebi; Askarian Abyaneh, Hossein

    2015-01-01

    In this paper, a hierarchical control scheme consisting of primary and secondary levels is proposed for achieving accurate reactive power and harmonic currents sharing among interface inverters of distributed generators (DGs) in islanded microgrids. Firstly, fundamental and main harmonic componen...

  4. Harvested Energy Prediction Schemes for Wireless Sensor Networks: Performance Evaluation and Enhancements

    Directory of Open Access Journals (Sweden)

    Muhammad

    2017-01-01

    We review harvested energy prediction schemes to be used in wireless sensor networks and explore the relative merits of landmark solutions. We propose enhancements to the well-known Profile-Energy (Pro-Energy) model, the so-called Improved Profile-Energy (IPro-Energy), and compare its performance with the Accurate Solar Irradiance Prediction Model (ASIM), Pro-Energy, and the Weather Conditioned Moving Average (WCMA). The performance metrics considered are the prediction accuracy and the execution time, which measures the implementation complexity. In addition, the effectiveness of the considered models, when integrated in an energy management scheme, is also investigated in terms of the achieved throughput and the energy consumption. Both solar irradiance and wind power datasets are used for the evaluation study. Our results indicate that the proposed IPro-Energy scheme outperforms the other candidate models in prediction accuracy by up to 78% for short-term predictions and 50% for medium-term prediction horizons. For long-term predictions, its prediction accuracy is comparable to the Pro-Energy model, but it outperforms the other models by up to 64%. In addition, the IPro-Energy scheme is able to achieve the highest throughput when integrated in the developed energy management scheme. Finally, the ASIM scheme reports the smallest implementation complexity.
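
    A toy sketch of the generic profile-based prediction idea underlying Pro-Energy-type models is given below (illustrative only; it is not the IPro-Energy algorithm, and the blending weight alpha and the synthetic profiles are assumptions): the forecast for the next slot blends the energy just observed with what the most similar stored day did in that slot.

      import numpy as np

      def predict_next_slot(profiles, current_day, slot, alpha=0.5):
          """profiles: (n_days, n_slots) stored energy profiles; current_day: today's observations."""
          # pick the stored day whose profile so far best matches today (mean absolute error)
          errors = np.abs(profiles[:, :slot + 1] - current_day[:slot + 1]).mean(axis=1)
          best = profiles[np.argmin(errors)]
          # weighted combination of the latest observation and the similar day's next slot
          return alpha * current_day[slot] + (1.0 - alpha) * best[slot + 1]

      # toy usage with synthetic solar-like profiles
      rng = np.random.default_rng(0)
      base = np.clip(np.sin(np.linspace(0.0, np.pi, 24)), 0.0, None)
      days = np.clip(base[None, :] + 0.1 * rng.standard_normal((5, 24)), 0.0, None)
      today = days[0] * 0.9
      print(predict_next_slot(days[1:], today, slot=10))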

  5. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    Science.gov (United States)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves their qualitative properties with respect to dissipation, dispersion and stability. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
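
    The basic extrapolation mechanism discussed above is easy to state in code: combining one step of size h of a method of order p with two steps of size h/2 raises the order by one. The sketch below uses the implicit midpoint rule on the linear test equation as a stand-in for the optimized diagonally implicit Runge-Kutta schemes of the paper.

      import numpy as np

      def midpoint_step(y, h, lam):
          # implicit midpoint rule, solved exactly for the linear test equation y' = lam*y
          return y * (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)

      def richardson_step(y, h, lam, p=2):
          coarse = midpoint_step(y, h, lam)
          fine = midpoint_step(midpoint_step(y, 0.5 * h, lam), 0.5 * h, lam)
          return (2**p * fine - coarse) / (2**p - 1)   # order p+1 combination

      lam, h, y = -1.0 + 5.0j, 0.1, 1.0 + 0.0j
      for _ in range(int(round(1.0 / h))):
          y = richardson_step(y, h, lam)
      print(abs(y - np.exp(lam)))   # error against the exact solution at t = 1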

  6. Assessment of some high-order finite difference schemes on the scalar conservation law with periodical conditions

    Directory of Open Access Journals (Sweden)

    Alina BOGOI

    2016-12-01

    Supersonic/hypersonic flows with strong shocks need special treatment in Computational Fluid Dynamics (CFD) in order to accurately capture the discontinuity location and its magnitude. To avoid numerical instabilities in the presence of discontinuities, the numerical schemes must generate low dissipation and low dispersion error. Consequently, the algorithms used to calculate the time and space derivatives should exhibit low amplitude and phase errors. This paper focuses on the comparison of the numerical results obtained by simulations with some high-resolution numerical schemes applied to linear and non-linear one-dimensional conservation laws. The analytical solutions are provided for all benchmark tests considering smooth periodic conditions. All the schemes converge to the proper weak solution for linear flux and smooth initial conditions. However, when the flux is non-linear, discontinuities may develop from smooth initial conditions and the shock must be correctly captured. All the schemes accurately identify the shock position, at the price of numerical oscillations in the vicinity of the sudden variation. We believe that the identification of this purely numerical behavior, without physical relevance, in the 1D case is extremely useful to avoid problems related to the stability and convergence of the solution in the general 3D case.

  7. Creative Minds: The Search for the Reconciling Principles of Science, the Humanities, Arts and Religion

    Science.gov (United States)

    England, Richard

    2009-01-01

    Since before the time of writers such as Plato in his "Republic" and "Timaeus"; Martianus Capella in "The Marriage of Mercury and Philology"; Boethius in "De institutione musica"; Kepler in "The Harmony of the Universe"; and many others, there have been attempts to reconcile the various disciplines in the sciences, arts, humanities, and religion…

  8. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    Science.gov (United States)

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN. Such an approach has not been used in earlier fault analysis algorithms over the last few decades. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. Thereafter, these MOV energy signals are fed as input to the ANN for fault distance estimation. Feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40 % FSC at Power Grid Wardha Substation, India are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
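
    The regression step at the heart of such a scheme can be sketched as follows (illustrative only: the MOV energy features are synthetic, and scikit-learn's L-BFGS-trained MLPRegressor is used as a stand-in for the Levenberg-Marquardt training of the paper).

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      n = 500
      distance_km = rng.uniform(0.0, 400.0, n)         # fault location along the line
      # toy one-cycle MOV energies for phases A, B, C (assumed monotone in distance, plus noise)
      X = np.column_stack([np.exp(-distance_km / c) + 0.02 * rng.standard_normal(n)
                           for c in (120.0, 150.0, 180.0)])
      model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000, random_state=0)
      model.fit(X[:400], distance_km[:400])
      pred = model.predict(X[400:])
      print("mean absolute distance error [km]:", np.abs(pred - distance_km[400:]).mean())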

  9. [Reconciling activities of working women providing care and the influence of structural and cultural factors].

    Science.gov (United States)

    Preuß, M

    2015-07-01

    Today, an increasing proportion of society has to reconcile eldercare and work. This task poses challenges which carers meet through an adjustment of their everyday living arrangements. These coping strategies have so far been scarcely noted in research on the reconciliation of eldercare and employment. Knowledge about how carers actively deal with this parallel involvement in both spheres of life is of vital importance when deriving precisely tailored support measures for employed caregivers. One goal of this article is to deliver insight into the reconciling activities of employed women who provide care and to specify the factors which determine those actions. Moreover, an ideal typology is presented which systematizes these associations. With this ideal typology, conceptual instruments have been developed which illustrate the complex reality of reconciliation actions and their dependence on various coping resources. In gerontological practice, these findings may provide support to design intervention strategies tailored to the individual situation that address the everyday level of action and strengthen the performance of those affected.

  10. Boosting flood warning schemes with fast emulator of detailed hydrodynamic models

    Science.gov (United States)

    Bellos, V.; Carbajal, J. P.; Leitao, J. P.

    2017-12-01

    Floods are among the most destructive catastrophic events and their frequency has increased over the last decades. To reduce flood impact and risks, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or in floodplains hinder the use of detailed simulators. This creates a difficulty: we need fast predictions that meet the accuracy requirements. Most physics-based detailed simulators, although accurate, will not fulfill the speed demand even if High Performance Computing techniques are used (the required simulation time is of the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad-hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian-process-based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and the specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator is used in parallel to verify key predictions of the emulator. The speed gain given by the emulator also allows uncertainty in the predictions to be quantified using ensemble methods. The above methodology is applied in real
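
    A compressed sketch of the offline/online emulator idea is shown below (assumptions: a toy rainfall-to-water-depth relation replaces the detailed hydrodynamic simulator, and scikit-learn's Gaussian process regressor is one possible emulator choice).

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      rain = rng.uniform(0.0, 100.0, 30)[:, None]       # offline design points (mm/h), assumed
      depth = 0.02 * rain.ravel() ** 1.3 + 0.05 * rng.standard_normal(30)   # "detailed simulator" output

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1e-3),
                                    normalize_y=True).fit(rain, depth)

      # online stage: near-instant prediction at a critical site, with an uncertainty band
      mean, std = gp.predict(np.array([[65.0]]), return_std=True)
      print(f"predicted water depth {mean[0]:.2f} m +/- {2.0 * std[0]:.2f} m")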

  11. Reconciling the Perspective of Practitioner and Service User: Findings from The Aphasia in Scotland Study

    Science.gov (United States)

    Law, James; Huby, Guro; Irving, Anne-Marie; Pringle, Ann-Marie; Conochie, Douglas; Haworth, Catherine; Burston, Amanda

    2010-01-01

    Background: It is widely accepted that service users should be actively involved in new service developments, but there remain issues about how best to consult with them and how to reconcile their views with those of service providers. Aims: This paper uses data from The Aphasia in Scotland study, set up by NHS Quality Improvement Scotland to…

  12. Social Constructivism: Does It Succeed in Reconciling Individual Cognition with Social Teaching and Learning Practices in Mathematics?

    Science.gov (United States)

    Bozkurt, Gulay

    2017-01-01

    This article examines the literature associated with social constructivism. It discusses whether social constructivism succeeds in reconciling individual cognition with social teaching and learning practices. After reviewing the meaning of individual cognition and social constructivism, two views--Piaget and Vygotsky's--accounting for learning…

  13. An asymptotic preserving unified gas kinetic scheme for gray radiative transfer equations

    International Nuclear Information System (INIS)

    Sun, Wenjun; Jiang, Song; Xu, Kun

    2015-01-01

    The solutions of radiative transport equations can cover both optically thin and optically thick regimes due to the large variation of the photon's mean free path and its interaction with the material. In the small mean free path limit, the nonlinear time-dependent radiative transfer equations converge to an equilibrium diffusion equation due to the intensive interaction between radiation and material. In the optically thin limit, the photon free transport mechanism emerges. In this paper, we develop an accurate and robust asymptotic preserving unified gas kinetic scheme (AP-UGKS) for the gray radiative transfer equations, where the radiation transport equation is coupled with the material thermal energy equation. The current work is based on the UGKS framework for rarefied gas dynamics [14], and is an extension of a recent work [12] from a one-dimensional linear radiation transport equation to a nonlinear two-dimensional gray radiative system. The newly developed scheme has the asymptotic preserving (AP) property in the optically thick regime, capturing the diffusive solution without requiring the cell size to be smaller than the photon's mean free path or the time step to be less than the photon collision time. Besides the diffusion limit, the scheme can capture the exact solution in the optically thin regime as well. The current scheme is a finite volume method. Due to the direct modeling of the time evolution of the interface radiative intensity, a smooth transition of the transport physics from optically thin to optically thick can be accurately recovered. Many numerical examples are included to validate the current approach.

  14. SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET

    International Nuclear Information System (INIS)

    Chen, L; Zhou, Z; Wang, J

    2016-01-01

    Purpose: Accurate segmentation of tumor in PET is challenging when part of the tumor is connected with normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We proposed a geometrical constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion where an accurate segmentation of one slice is used as the guidance for segmentation of the remaining slices. For a slice in which the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under the user’s guidance is used to obtain an exact tumor contour. This is set as the initial contour and the Chan-Vese algorithm is applied for segmenting the tumor in the next adjacent slice by adding constraints of tumor size, position and shape information. This procedure is repeated until the last PET slice containing tumor is reached. The proposed geometrical constrained Chan-Vese based algorithm was implemented in Matlab and its performance was tested on several cervical cancer patients where cervix and bladder are connected with similar activity values. The positive predictive values (PPV) are calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they are connected with the bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case that the tumor is quite similar to or connected to normal organs/tissues in the image. The proposed scheme can provide a reliable way for segmenting tumors.
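
    A bare-bones illustration of the slice-by-slice idea is sketched below (Python/NumPy; region terms only, with no curvature or shape penalty, so it is far simpler than the actual scheme): the mask accepted on the previous slice initializes the next slice and, via a dilated admissible region, constrains how far the contour may move.

      import numpy as np
      from scipy.ndimage import binary_dilation

      def chan_vese_slice(img, prev_mask, n_iter=100, dilate=3):
          mask = prev_mask.astype(bool).copy()
          allowed = binary_dilation(mask, iterations=dilate)   # stay close to the previous slice
          for _ in range(n_iter):
              c_in = img[mask].mean() if mask.any() else 0.0
              c_out = img[~mask].mean() if (~mask).any() else 0.0
              # Chan-Vese data term: a pixel joins the region whose mean intensity is closer
              new_mask = ((img - c_in) ** 2 < (img - c_out) ** 2) & allowed
              if np.array_equal(new_mask, mask):
                  break
              mask = new_mask
          return mask

      # usage (hypothetical arrays): mask_next = chan_vese_slice(pet_volume[k + 1], mask_k)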

  15. SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET

    Energy Technology Data Exchange (ETDEWEB)

    Chen, L; Zhou, Z; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Accurate segmentation of tumor in PET is challenging when part of the tumor is connected with normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We proposed a geometrical constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion where an accurate segmentation of one slice is used as the guidance for segmentation of the remaining slices. For a slice in which the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under the user’s guidance is used to obtain an exact tumor contour. This is set as the initial contour and the Chan-Vese algorithm is applied for segmenting the tumor in the next adjacent slice by adding constraints of tumor size, position and shape information. This procedure is repeated until the last PET slice containing tumor is reached. The proposed geometrical constrained Chan-Vese based algorithm was implemented in Matlab and its performance was tested on several cervical cancer patients where cervix and bladder are connected with similar activity values. The positive predictive values (PPV) are calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they are connected with the bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case that the tumor is quite similar to or connected to normal organs/tissues in the image. The proposed scheme can provide a reliable way for segmenting tumors.

  16. AOM reconciling of crystal field parameters for UCl3, UBr3, UI3 series

    Science.gov (United States)

    Gajek, Z.; Mulak, J.

    1990-07-01

    Available inelastic neutron scattering interpretations of the crystal field effect in the uranium trihalides have been verified in terms of the Angular Overlap Model. For UCl3 a good reconciliation of both the INS and optical interpretations of the crystal field effect has been obtained. On the contrary, the parameterizations for UBr3 and UI3 were found to be highly artificial and a suggestion is given to experimentalists to reinterpret their INS spectra.

  17. AOM reconciling of crystal field parameters for UCl3, UBr3, UI3 series

    International Nuclear Information System (INIS)

    Gajek, Z.; Mulak, J.

    1990-01-01

    Available inelastic neutron scattering interpretations of the crystal field effect in the uranium trihalides have been verified in terms of the Angular Overlap Model. For UCl3 a good reconciliation of both the INS and optical interpretations of the crystal field effect has been obtained. On the contrary, the parameterizations for UBr3 and UI3 were found to be highly artificial and a suggestion is given to experimentalists to reinterpret their INS spectra.

  18. A strong shock tube problem calculated by different numerical schemes

    International Nuclear Information System (INIS)

    Lee, W.H.; Clancy, S.P.

    1996-01-01

    Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) MESA code, (2) UNICORN code, (3) Schulz hydro, and (4) modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. MESA and UNICORN codes are both of second order and use different monotonic advection methods to avoid the Gibbs phenomena. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10^9 and density ratio of 10^3 in an ideal gas. For the non-mass-matching case, Schulz hydro is better than the TVD scheme. In the case of mass-matching, there is no difference between them. MESA and UNICORN results are nearly the same. However, the computed positions such as the contact discontinuity (i.e. the material interface) are not as accurate as those of the Lagrangian methods. copyright 1996 American Institute of Physics

  19. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation in this study was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have also reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative and numerical examples study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted in visual
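
    The permeability/porosity generation step can be sketched as follows (an illustrative stand-in, not the authors' generator): the spread of ln(k) is fixed from the Dykstra-Parsons coefficient through V_DP = 1 - exp(-sigma_lnk), spatial correlation is imposed by Gaussian filtering of white noise, and porosity is obtained from a simple monotone map standing in for the Carman-Kozeny-based interpolation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def permeability_field(nx, ny, v_dp=0.7, corr_len=8.0, k_mean_md=100.0, seed=0):
          sigma_lnk = -np.log(1.0 - v_dp)                  # spread of ln(k) from V_DP
          rng = np.random.default_rng(seed)
          field = gaussian_filter(rng.standard_normal((ny, nx)), corr_len)   # correlated noise
          field = (field - field.mean()) / field.std()     # normalize to unit variance
          return k_mean_md * np.exp(sigma_lnk * field)     # log-normal permeability (mD)

      def porosity_from_permeability(k_md, phi_min=0.08, phi_max=0.32):
          lnk = np.log(k_md)
          t = (lnk - lnk.min()) / (lnk.max() - lnk.min() + 1e-12)
          return phi_min + t * (phi_max - phi_min)         # linear interpolation in ln(k)

      k = permeability_field(100, 60)
      phi = porosity_from_permeability(k)
      print(k.mean(), k.std(), phi.min(), phi.max())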

  20. Reconciling professional identity: A grounded theory of nurse academics' role modelling for undergraduate students.

    Science.gov (United States)

    Baldwin, A; Mills, J; Birks, M; Budden, L

    2017-12-01

    Role modelling by experienced nurses, including nurse academics, is a key factor in the process of preparing undergraduate nursing students for practice, and may contribute to longevity in the workforce. A grounded theory study was undertaken to investigate the phenomenon of nurse academics' role modelling for undergraduate students. The study sought to answer the research question: how do nurse academics role model positive professional behaviours for undergraduate students? The aims of this study were to: theorise a process of nurse academic role modelling for undergraduate students; describe the elements that support positive role modelling by nurse academics; and explain the factors that influence the implementation of academic role modelling. The study sample included five second year nursing students and sixteen nurse academics from Australia and the United Kingdom. Data was collected from observation, focus groups and individual interviews. This study found that in order for nurse academics to role model professional behaviours for nursing students, they must reconcile their own professional identity. This paper introduces the theory of reconciling professional identity and discusses the three categories that comprise the theory, creating a context for learning, creating a context for authentic rehearsal and mirroring identity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Alcohol Use During Pregnancy in a South African Community: Reconciling Knowledge, Norms, and Personal Experience.

    Science.gov (United States)

    Watt, Melissa H; Eaton, Lisa A; Dennis, Alexis C; Choi, Karmel W; Kalichman, Seth C; Skinner, Donald; Sikkema, Kathleen J

    2016-01-01

    Due to high rates of fetal alcohol spectrum disorder (FASD) in South Africa, reducing alcohol use during pregnancy is a pressing public health priority. The aim of this study was to qualitatively explore knowledge and attitudes about maternal alcohol consumption among women who reported alcohol use during pregnancy. The study was conducted in Cape Town, South Africa. Participants were pregnant or within 1 year postpartum and self-reported alcohol use during pregnancy. In-depth interviews explored personal experiences with drinking during pregnancy, community norms and attitudes towards maternal drinking, and knowledge about FASD. Transcripts were analyzed using a content analytic approach, including narrative memos and data display matrices. Interviews revealed competing attitudes. Women received anti-drinking messages from several sources, but these sources were not highly valued and the messages often contradicted social norms. Women were largely unfamiliar with FASD, and their knowledge of impacts of fetal alcohol exposure was often inaccurate. Participants' personal experiences influenced their attitudes about the effects of alcohol during pregnancy, which led to internalization of misinformation. The data revealed a moral conflict that confronted women in this setting, leaving women feeling judged, ambivalent, or defensive about their behaviors, and ultimately creating uncertainty about their alcohol use behaviors. Data revealed the need to deliver accurate information about the harms of fetal alcohol exposure through sources perceived as trusted and reliable. Individual-level interventions to help women reconcile competing attitudes and identify motivations for reducing alcohol use during pregnancy would be beneficial.

  2. Double beta decay in the generalized seniority scheme

    International Nuclear Information System (INIS)

    Pittel, S.; Engel, J.; Vogel, P.; Ji Xiangdong

    1990-01-01

    A generalized-seniority truncation scheme is used in shell-model calculations of double beta decay matrix elements. Calculations are carried out for ^78Ge, ^82Se and ^128,130Te. Matrix elements calculated for the two-neutrino decay mode are small compared to weak-coupling shell-model calculations and support the suppression mechanism first observed in the quasi-particle random phase approximation. Matrix elements for the neutrinoless mode are similar to those of the weak-coupling shell model, suggesting that these matrix elements can be pinned down fairly accurately. (orig.)

  3. Reconciling Ourselves to Reality: Arendt, Education and the Challenge of Being at Home in the World

    Science.gov (United States)

    Biesta, Gert

    2016-01-01

    In this paper, I explore the educational significance of the work of Hannah Arendt through reflections on four papers that constitute this special issue. I focus on the challenge of reconciling ourselves to reality, that is, of being at home in the world. Although Arendt's idea of being at home in the world is connected to her explorations of…

  4. Reconciling patient and provider priorities for improving the care of critically ill patients: A consensus method and qualitative analysis of decision making.

    Science.gov (United States)

    McKenzie, Emily; Potestio, Melissa L; Boyd, Jamie M; Niven, Daniel J; Brundin-Mather, Rebecca; Bagshaw, Sean M; Stelfox, Henry T

    2017-12-01

    Providers have traditionally established priorities for quality improvement; however, patients and their family members have recently become involved in priority setting. Little is known about how to reconcile priorities of different stakeholder groups into a single prioritized list that is actionable for organizations. To describe the decision-making process for establishing consensus used by a diverse panel of stakeholders to reconcile two sets of quality improvement priorities (provider/decision maker priorities n=9; patient/family priorities n=19) into a single prioritized list. We employed a modified Delphi process with a diverse group of panellists to reconcile priorities for improving care of critically ill patients in the intensive care unit (ICU). Proceedings were audio-recorded, transcribed and analysed using qualitative content analysis to explore the decision-making process for establishing consensus. Nine panellists including three providers, three decision makers and three family members of previously critically ill patients. Panellists rated and revised 28 priorities over three rounds of review and reached consensus on the "Top 5" priorities for quality improvement: transition of patient care from ICU to hospital ward; family presence and effective communication; delirium screening and management; early mobilization; and transition of patient care between ICU providers. Four themes were identified as important for establishing consensus: storytelling (sharing personal experiences), amalgamating priorities (negotiating priority scope), considering evaluation criteria and having a priority champion. Our study demonstrates the feasibility of incorporating families of patients into a multistakeholder prioritization exercise. The approach described can be used to guide consensus building and reconcile priorities of diverse stakeholder groups. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  5. Improvement of a land surface model for accurate prediction of surface energy and water balances

    International Nuclear Information System (INIS)

    Katata, Genki

    2009-02-01

    In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil and cloud (fog) water deposition on vegetation were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement the carbon exchanges between the vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth-dynamics, etc., the model is suited to evaluate the effect of environmental loads from atmospheric pollutants and radioactive substances on ecosystems under climate changes such as global warming and drought. (author)

  6. A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers

    KAUST Repository

    Bagci, Hakan

    2015-01-07

    Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
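
    The explicit PE(CE)^m time marching referred to above is, stripped of the integral-equation machinery, an ordinary predictor-corrector iteration; the toy sketch below applies it to a scalar nonlinear ODE to show why no Newton-type solver is needed (the forward-Euler predictor and trapezoidal corrector are illustrative choices, not the solver's actual operators).

      import numpy as np

      def pece_m_step(f, t, y, h, m=2):
          fy = f(t, y)
          y_new = y + h * fy                       # P: explicit predictor
          for _ in range(m):                       # m corrector passes
              f_new = f(t + h, y_new)              # E: evaluate the right-hand side at the newest iterate
              y_new = y + 0.5 * h * (fy + f_new)   # C: trapezoidal corrector
          return y_new

      # nonlinear test problem y' = -y^3, handled purely by function evaluations
      f = lambda t, y: -y ** 3
      t, y, h = 0.0, 1.0, 0.05
      for _ in range(200):
          y = pece_m_step(f, t, y, h)
          t += h
      print(y, 1.0 / np.sqrt(1.0 + 2.0 * t))       # compare with the exact solution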

  7. A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers

    KAUST Repository

    Bagci, Hakan

    2015-01-01

    Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability

  8. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    Science.gov (United States)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to obtain the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space is performed. Consequently, the combination of parameterizations including the Tiedtke cumulus schemes was again the best in categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  9. Use of Whole-Genus Genome Sequence Data To Develop a Multilocus Sequence Typing Tool That Accurately Identifies Yersinia Isolates to the Species and Subspecies Levels

    Science.gov (United States)

    Hall, Miquette; Chattaway, Marie A.; Reuter, Sandra; Savin, Cyril; Strauch, Eckhard; Carniel, Elisabeth; Connor, Thomas; Van Damme, Inge; Rajakaruna, Lakshani; Rajendram, Dunstan; Jenkins, Claire; Thomson, Nicholas R.

    2014-01-01

    The genus Yersinia is a large and diverse bacterial genus consisting of human-pathogenic species, a fish-pathogenic species, and a large number of environmental species. Recently, the phylogenetic and population structure of the entire genus was elucidated through the genome sequence data of 241 strains encompassing every known species in the genus. Here we report the mining of this enormous data set to create a multilocus sequence typing-based scheme that can identify Yersinia strains to the species level to a level of resolution equal to that for whole-genome sequencing. Our assay is designed to be able to accurately subtype the important human-pathogenic species Yersinia enterocolitica to whole-genome resolution levels. We also report the validation of the scheme on 386 strains from reference laboratory collections across Europe. We propose that the scheme is an important molecular typing system to allow accurate and reproducible identification of Yersinia isolates to the species level, a process often inconsistent in nonspecialist laboratories. Additionally, our assay is the most phylogenetically informative typing scheme available for Y. enterocolitica. PMID:25339391

  10. Asymptotic preserving and all-regime Lagrange-Projection like numerical schemes: application to two-phase flows in low mach regime

    International Nuclear Information System (INIS)

    Girardin, Mathieu

    2014-01-01

    Two-phase flows in Pressurized Water Reactors belong to a wide range of Mach number flows. Computing accurate approximate solutions of those flows may be challenging from a numerical point of view as classical finite volume methods are too diffusive in the low Mach regime. In this thesis, we are interested in designing and studying some robust numerical schemes that are stable for large time steps and accurate even on coarse meshes for a wide range of flow regimes. An important feature is the strategy used to construct those schemes. We use a mixed implicit-explicit strategy based on an operator splitting to solve fast and slow phenomena separately. Then, we introduce a modification of a Suliciu type relaxation scheme to improve the accuracy of the numerical scheme in some regimes of interest. Two approaches have been used to assess the ability of our numerical schemes to deal with a wide range of flow regimes. The first approach, based on the asymptotic preserving property, has been used for the gas dynamics equations with stiff source terms. The second approach, based on the all-regime property, has been used for the gas dynamics equations and the homogeneous two-phase flow models HRM and HEM in the low Mach regime. We obtained some robustness and stability properties for our numerical schemes. In particular, some discrete entropy inequalities are shown. Numerical evidence, in 1D and in 2D on unstructured meshes, assesses the gain in terms of accuracy and CPU time of those asymptotic preserving and all-regime numerical schemes in comparison with classical finite volume methods. (author) [fr]

  11. Implicit and explicit schemes for mass consistency preservation in hybrid particle/finite-volume algorithms for turbulent reactive flows

    International Nuclear Information System (INIS)

    Popov, Pavel P.; Pope, Stephen B.

    2014-01-01

    This work addresses the issue of particle mass consistency in Large Eddy Simulation/Probability Density Function (LES/PDF) methods for turbulent reactive flows. Numerical schemes for the implicit and explicit enforcement of particle mass consistency (PMC) are introduced, and their performance is examined in a representative LES/PDF application, namely the Sandia–Sydney Bluff-Body flame HM1. A new combination of interpolation schemes for velocity and scalar fields is found to better satisfy PMC than multilinear and fourth-order Lagrangian interpolation. A second-order accurate time-stepping scheme for stochastic differential equations (SDE) is found to improve PMC relative to Euler time stepping, which is the first time that a second-order scheme is found to be beneficial, when compared to a first-order scheme, in an LES/PDF application. An explicit corrective velocity scheme for PMC enforcement is introduced, and its parameters optimized to enforce a specified PMC criterion with minimal corrective velocity magnitudes

  12. Speeding up Monte Carlo molecular simulation by a non-conservative early rejection scheme

    KAUST Repository

    Kadoura, Ahmad Salim

    2015-04-23

    Monte Carlo (MC) molecular simulation describes fluid systems with rich information, and it is capable of predicting many fluid properties of engineering interest. In general, it is more accurate and representative than equations of state. On the other hand, it requires much more computational effort and simulation time. For that purpose, several techniques have been developed in order to speed up MC molecular simulations while preserving their precision. In particular, early rejection schemes are capable of reducing computational cost by reaching the rejection decision for the undesired MC trials at an earlier stage in comparison to the conventional scheme. In a recent work, we have introduced a ‘conservative’ early rejection scheme as a method to accelerate MC simulations while producing exactly the same results as the conventional algorithm. In this paper, we introduce a ‘non-conservative’ early rejection scheme, which is much faster than the conservative scheme, yet it preserves the precision of the method. The proposed scheme is tested for systems of structureless Lennard-Jones particles in both canonical and NVT-Gibbs ensembles. Numerical experiments were conducted at several thermodynamic conditions for different numbers of particles. Results show that at certain thermodynamic conditions, the non-conservative method is capable of doubling the speed of the MC molecular simulations in both canonical and NVT-Gibbs ensembles. © 2015 Taylor & Francis
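
    The early-rejection idea can be illustrated with a toy Metropolis displacement move for Lennard-Jones particles (illustrative only; this is not the authors' exact criterion): the acceptance threshold is drawn before the pair loop, so the trial can be abandoned as soon as the accumulated energy change exceeds it, which is what makes the scheme "non-conservative".

      import numpy as np

      def lj_pair(r2):
          inv6 = 1.0 / r2 ** 3
          return 4.0 * (inv6 * inv6 - inv6)        # Lennard-Jones pair energy (reduced units)

      def try_move(pos, i, disp, beta, box):
          threshold = -np.log(np.random.rand()) / beta   # reject once dU exceeds this value
          new_i = (pos[i] + disp) % box
          dU = 0.0
          for j in range(len(pos)):
              if j == i:
                  continue
              d_old = pos[j] - pos[i]; d_old -= box * np.round(d_old / box)   # minimum image
              d_new = pos[j] - new_i;  d_new -= box * np.round(d_new / box)
              dU += lj_pair(np.dot(d_new, d_new)) - lj_pair(np.dot(d_old, d_old))
              if dU > threshold:                   # early (non-conservative) rejection
                  return False
          pos[i] = new_i                           # pair loop finished below threshold: accept
          return True

      # toy usage: one sweep over 20 particles in a periodic box (reduced units, assumed settings)
      pos, box, beta = np.random.rand(20, 3) * 5.0, 5.0, 1.0
      accepted = sum(try_move(pos, i, 0.1 * (np.random.rand(3) - 0.5), beta, box) for i in range(20))
      print("accepted moves:", accepted)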

  13. Dynamic spectro-polarimeter based on a modified Michelson interferometric scheme.

    Science.gov (United States)

    Dembele, Vamara; Jin, Moonseob; Baek, Byung-Joon; Kim, Daesuk

    2016-06-27

    A simple dynamic spectro-polarimeter based on a modified Michelson interferometric scheme is described. The proposed system can extract the spectral Stokes vector of a transmissive anisotropic object. The detailed theoretical background is derived and experiments are conducted to verify the feasibility of the proposed novel snapshot spectro-polarimeter. The proposed dynamic spectro-polarimeter enables us to extract a highly accurate spectral Stokes vector of any transmissive anisotropic object with a frame rate of more than 20 Hz.

  14. Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme

    Energy Technology Data Exchange (ETDEWEB)

    Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations; the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)
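
    The core of the CIP idea is compact enough to sketch for constant-speed 1D advection f_t + u f_x = 0 (a hedged illustration of the advection step only, with u > 0 and periodic boundaries; this is not the reported plasma solver): both f and its derivative g = f_x are advanced, using a cubic Hermite polynomial built inside the upwind cell.

      import numpy as np

      def cip_step(f, g, u, dt, dx):
          fup, gup = np.roll(f, 1), np.roll(g, 1)    # upwind neighbours for u > 0
          D, xi = -dx, -u * dt
          a = (g + gup) / D ** 2 + 2.0 * (f - fup) / D ** 3
          b = 3.0 * (fup - f) / D ** 2 - (2.0 * g + gup) / D
          f_new = ((a * xi + b) * xi + g) * xi + f   # cubic profile evaluated at the departure point
          g_new = (3.0 * a * xi + 2.0 * b) * xi + g  # its derivative, advected consistently
          return f_new, g_new

      # usage: advect a Gaussian once around a periodic domain and measure the shape error
      nx, L, u = 200, 1.0, 1.0
      dx = L / nx
      x = np.arange(nx) * dx
      f0 = np.exp(-200.0 * (x - 0.5) ** 2)
      f, g = f0.copy(), np.gradient(f0, dx)
      dt = 0.2 * dx / u
      for _ in range(int(round(L / (u * dt)))):
          f, g = cip_step(f, g, u, dt, dx)
      print(np.abs(f - f0).max())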

  15. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    Science.gov (United States)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  16. Exact analysis of Packet Reversed Packet Combining Scheme and Modified Packet Combining Scheme; and a combined scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-07-01

    The packet combining scheme is a well-defined, simple error correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput when compared to other basic ARQ protocols, the packet combining (PC) scheme fails to correct errors when errors occur in the same bit locations of both copies. In a previous work, a scheme known as the Packet Reversed Packet Combining (PRPC) scheme, which corrects errors that occur at the same bit location of erroneous copies, was studied; however, PRPC does not handle the situation where a packet has more than one erroneous bit. The Modified Packet Combining (MPC) scheme, which can correct double or higher bit errors, was studied elsewhere. Both the PRPC and MPC schemes are believed to offer higher throughput based on previous studies; however, neither adequate investigation nor exact analysis was done to substantiate this claim of higher throughput. In this work, an exact analysis of both PRPC and MPC is carried out and the results are reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error correction capability at high bit error rate (BER) and larger packet sizes. (author)
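
    A toy sketch of the basic combining idea (and of why reversing the second copy helps) is given below; the CRC-32 check and the brute-force search over disagreeing positions are illustrative simplifications, not the protocols' exact procedures.

      import zlib
      from itertools import product

      def crc(bits):
          return zlib.crc32(bytes(bits))

      def packet_combine(copy1, copy2, expected_crc, reversed_second=False):
          c2 = copy2[::-1] if reversed_second else copy2                # undo the PRPC bit reversal
          diff = [i for i, (a, b) in enumerate(zip(copy1, c2)) if a != b]
          for choice in product(*[(copy1[i], c2[i]) for i in diff]):    # try both values per slot
              candidate = list(copy1)
              for i, bit in zip(diff, choice):
                  candidate[i] = bit
              if crc(candidate) == expected_crc:
                  return candidate
          return None   # errors in identical bit positions of both copies stay invisible to plain PC

      sent = [1, 0, 1, 1, 0, 0, 1, 0]
      copy1 = sent.copy(); copy1[2] ^= 1           # bit error in the first copy
      copy2 = sent[::-1];  copy2[0] ^= 1           # bit-reversed (PRPC) copy with its own error
      print(packet_combine(copy1, copy2, crc(sent), reversed_second=True) == sent)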

  17. Multiple-correction hybrid k-exact schemes for high-order compressible RANS-LES simulations on fully unstructured grids

    Science.gov (United States)

    Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe

    2017-12-01

    A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify locally its numerical flux in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to warrant numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks

  18. Reconciling incongruous qualitative and quantitative findings in mixed methods research: exemplars from research with drug using populations.

    Science.gov (United States)

    Wagner, Karla D; Davidson, Peter J; Pollini, Robin A; Strathdee, Steffanie A; Washburn, Rachel; Palinkas, Lawrence A

    2012-01-01

    Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, whilst conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors' research on HIV risk amongst injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a Needle/Syringe Exchange Program in Los Angeles, CA, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts

  19. An Efficient Semi-fragile Watermarking Scheme for Tamper Localization and Recovery

    Science.gov (United States)

    Hou, Xiang; Yang, Hui; Min, Lianquan

    2018-03-01

    To address the problem that remote sensing images are vulnerable to tampering, a semi-fragile watermarking scheme was proposed. A binary random matrix was used as the authentication watermark, which was embedded by quantizing the maximum absolute value of the directional sub-band coefficients. The average gray level of every non-overlapping 4×4 block was adopted as the recovery watermark, which was embedded in the least significant bit. Watermark detection can be done directly, without resorting to the original images. Experimental results showed that our method was robust against incidental distortions to a certain extent. At the same time, it was fragile to malicious manipulation, and realized accurate localization and approximate recovery of the tampered regions. Therefore, this scheme can effectively protect the security of remote sensing images.
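
    The recovery-watermark part of such a scheme can be sketched as follows (a minimal illustration assuming an 8-bit grayscale image with side lengths divisible by 4; the wavelet-domain authentication watermark, the quantization step and the block-to-location mapping of the actual scheme are omitted):

        # Sketch of the recovery-watermark idea only: store the mean gray level of
        # each non-overlapping 4x4 block in the least significant bits of the image.
        # Block-to-location mapping and the wavelet-domain authentication watermark
        # of the actual scheme are omitted; image size is assumed divisible by 4.
        import numpy as np

        def block_means(img):
            h, w = img.shape
            return img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3)).astype(np.uint8)

        def embed_lsb(img, payload_bits):
            flat = img.flatten()
            flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
            return flat.reshape(img.shape)

        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        means = block_means(image)                              # 16x16 recovery data
        bits = np.unpackbits(means.flatten())                   # 8 bits per block mean
        watermarked = embed_lsb(image.copy(), bits)

        # Recovery side: read the LSBs back and rebuild the low-resolution image
        # that approximates any tampered 4x4 block.
        recovered = np.packbits(watermarked.flatten()[:bits.size] & 1).reshape(16, 16)
        assert np.array_equal(recovered, means)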

  20. Study on the improvement of the convective differencing scheme for the high-accuracy and stable resolution of the numerical solution

    International Nuclear Information System (INIS)

    Shin, J. K.; Choi, Y. D.

    1992-01-01

    The QUICKER scheme has several attractive properties. However, under highly convective conditions it produces overshoots, and possibly some oscillations, on each side of steps in the dependent variable when the flow is convected at an angle oblique to the grid lines. Fortunately, it is possible to modify the QUICKER scheme using non-linear and linear functional relationships. Details of the development of the polynomial upwinding scheme are given in this paper, where it is seen that this non-linear scheme also has third-order accuracy. This polynomial upwinding scheme is used as the basis for the SHARPER and SMARTER schemes. Another revised scheme was developed by partial modification of the QUICKER scheme using the CDS and UPWIND schemes (QUICKUP). These revised schemes are tested on well-known benchmark flows: two-dimensional pure convection flow over an oblique step, lid-driven cavity flow and buoyancy-driven cavity flow. In the pure convection test the revised schemes remain absolutely monotonic, without overshoot or oscillation, and the QUICKUP scheme is the most accurate in relative terms. In the high-Reynolds-number lid-driven cavity flow, the SMARTER and SHARPER schemes retain a lower computational cost than the QUICKER and QUICKUP schemes, but the velocities computed with the revised schemes are lower than those predicted by the QUICKER scheme, which is strongly affected by overshoot and undershoot. Also, in the buoyancy-driven cavity flow, the SMARTER, SHARPER and QUICKUP schemes give acceptable results. (Author)

  1. Thermally-Driven Mantle Plumes Reconcile Hot-spot Observations

    Science.gov (United States)

    Davies, D.; Davies, J.

    2008-12-01

    Hot-spots are anomalous regions of magmatism that cannot be directly associated with plate tectonic processes (e.g. Morgan, 1972). They are widely regarded as the surface expression of upwelling mantle plumes. Hot-spots exhibit variable life-spans, magmatic productivity and fixity (e.g. Ito and van Keken, 2007). This suggests that a wide-range of upwelling structures coexist within Earth's mantle, a view supported by geochemical and seismic evidence, but, thus far, not reproduced by numerical models. Here, results from a new, global, 3-D spherical, mantle convection model are presented, which better reconcile hot-spot observations, the key modification from previous models being increased convective vigor. Model upwellings show broad-ranging dynamics; some drift slowly, while others are more mobile, displaying variable life-spans, intensities and migration velocities. Such behavior is consistent with hot-spot observations, indicating that the mantle must be simulated at the correct vigor and in the appropriate geometry to reproduce Earth-like dynamics. Thermally-driven mantle plumes can explain the principal features of hot-spot volcanism on Earth.

  2. Closed-Loop Autofocus Scheme for Scanning Electron Microscope

    Directory of Open Access Journals (Sweden)

    Cui Le

    2015-01-01

    In this paper, we present a full-scale autofocus approach for the scanning electron microscope (SEM). The optimal focus (in-focus position) of the microscope is achieved by maximizing the image sharpness using a vision-based closed-loop control scheme. An iterative optimization algorithm has been designed using a sharpness score derived from image gradient information. The proposed method has been implemented and validated using a tungsten gun SEM under various experimental conditions, such as varying raster scan speed and magnification, in real time. We demonstrate that the proposed autofocus technique is accurate, robust and fast.
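
    A gradient-based sharpness score and a coarse hill-climbing focus search of the kind described above can be sketched as follows (a generic illustration, not the controller of the paper; acquire() is a hypothetical stand-in for grabbing an SEM frame at a given focus setting):

        # Sketch of a gradient-based sharpness score and a coarse hill-climbing
        # focus search.  The score (squared finite-difference gradient magnitude)
        # and the search loop are generic illustrations, not the paper's controller.
        import numpy as np

        def sharpness(img):
            gx = np.diff(img.astype(float), axis=1)[:-1, :]
            gy = np.diff(img.astype(float), axis=0)[:, :-1]
            return float((gx**2 + gy**2).mean())

        def autofocus(acquire, focus, step=8.0, min_step=0.5):
            best = sharpness(acquire(focus))
            while step >= min_step:
                for candidate in (focus + step, focus - step):
                    score = sharpness(acquire(candidate))
                    if score > best:
                        best, focus = score, candidate
                        break
                else:
                    step /= 2          # no improvement in either direction: refine
            return focus

        # Toy usage: a synthetic "microscope" whose blur grows with defocus.
        true_focus = 12.0
        rng = np.random.default_rng(1)
        base = rng.random((64, 64))
        def acquire(f):
            sigma = 1.0 + abs(f - true_focus)
            k = np.exp(-0.5 * (np.arange(-8, 9) / sigma) ** 2); k /= k.sum()
            blurred = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, base)
            return np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, blurred)

        print(round(autofocus(acquire, focus=0.0), 1))   # converges near 12.0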

  3. Development of a 3D cell-centered Lagrangian scheme for the numerical modeling of the gas dynamics and hyper-elasticity systems

    International Nuclear Information System (INIS)

    Georges, Gabriel

    2016-01-01

    High Energy Density Physics (HEDP) flows are multi-material flows characterized by strong shock waves and large changes in the domain shape due to rarefaction waves. Numerical schemes based on the Lagrangian formalism are good candidates to model this kind of flow since the computational grid follows the fluid motion. This provides accurate results around the shocks as well as a natural tracking of multi-material interfaces and free surfaces. In particular, cell-centered finite volume Lagrangian schemes such as GLACE (Godunov-type Lagrangian scheme Conservative for total Energy) and EUCCLHYD (Explicit Unstructured Cell-Centered Lagrangian Hydrodynamics) provide good results for both the modeling of gas dynamics and elastic-plastic equations. The work produced during this PhD thesis is in continuity with the work of Maire and Nkonga [JCP, 2009] for the hydrodynamic part and the work of Kluth and Despres [JCP, 2010] for the hyper-elasticity part. More precisely, the aim of this thesis is to develop robust and accurate methods for the 3D extension of the EUCCLHYD scheme with a second-order extension based on MUSCL (Monotonic Upstream-centered Scheme for Conservation Laws) and GRP (Generalized Riemann Problem) procedures. Particular care is taken over the preservation of symmetries and the monotonicity of the solutions. The scheme robustness and accuracy are assessed on numerous Lagrangian test cases for which the 3D extensions are very challenging. (author)

  4. Derivation and Analysis of a Low-Cost, High-performance Analogue BPCM Control Scheme for Class-D Audio Power Amplifiers

    OpenAIRE

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael A. E.

    2005-01-01

    This paper presents a low-cost analogue control scheme for class-D audio power amplifiers. The scheme is based around bandpass current-mode (BPCM) control, and provides ample stability margins and low distortion over a wide range of operating conditions. Implementation is very simple and does not require the use of operational amplifiers. Small-signal behavior of the controller is accurately predicted, and design is carried out using standard transfer function based linear control methodology...

  5. Computational Aero-Acoustic Using High-order Finite-Difference Schemes

    DEFF Research Database (Denmark)

    Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2007-01-01

    In this paper, a high-order technique to accurately predict flow-generated noise is introduced. The technique consists of solving the viscous incompressible flow equations and the inviscid acoustic equations using an incompressible/compressible splitting technique. The incompressible flow equations are solved using the in-house flow solver EllipSys2D/3D, which is a second-order finite volume code. The acoustic solution is found by solving the acoustic equations using high-order finite difference schemes. The incompressible flow equations and the acoustic equations are solved at the same time levels.

  6. Finite Boltzmann schemes

    NARCIS (Netherlands)

    Sman, van der R.G.M.

    2006-01-01

    In the special case of relaxation parameter = 1, lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the

  7. Developing support schemes for electric renewable energy in France: how to reconcile integration and deployment challenges?

    International Nuclear Information System (INIS)

    Mathieu, Mathilde; Ruedinger, Andreas

    2016-01-01

    The reform for a greater integration of support schemes in the electricity market is not a marginal development, and should allow for a transition period for market actors to adapt. Lessons from the experience of neighboring countries will be valuable, especially in view of greater regional harmonization in the future. Better integration of solutions for reducing demand and greater system flexibility would also be advisable going forward. Finally, it is also essential to evaluate the impact of the reform on the risk of electricity market concentration and a reduced diversity of actors, as well as on the potential increase in barriers to entry which could hinder the emergence of collaborative or citizen projects, as these are crucial for improving project acceptance and sharing RES costs. Through stronger exposure to market signals, market premia can assist the technical and economic integration of renewable energy (RES). The resultant advantages in terms of improvements in forecasting and marketing tools, negative price management and support for more valuable technologies and practices in the system depend closely, however, on the precise calibration of the mechanisms involved. To address this, it seems essential to learn from the experiences of neighboring countries and to plan an adequate transition period for all actors to adapt to the change in regulation. The rise in transaction costs and risk premia can lead to additional costs under the new mechanisms. Direct costs, which are linked to the marketing of electricity and to rules aimed at curtailing negative prices, remain limited. However, a cost-benefit analysis must consider the impact of changes in regulation on risk perception and the cost of capital for financing projects - a determinant factor in the economic viability of the project. This further implies a need to consider complementary measures aimed at reducing the financial risks to limit production costs and incremental costs for society. The push

  8. Incorporation of exact boundary conditions into a discontinuous galerkin finite element method for accurately solving 2d time-dependent maxwell equations

    KAUST Repository

    Sirenko, Kostyantyn; Liu, Meilin; Bagci, Hakan

    2013-01-01

    A scheme that discretizes exact absorbing boundary conditions (EACs) to incorporate them into a time-domain discontinuous Galerkin finite element method (TD-DG-FEM) is described. The proposed TD-DG-FEM with EACs is used for accurately characterizing

  9. Sliding-MOMP Based Channel Estimation Scheme for ISDB-T Systems

    Directory of Open Access Journals (Sweden)

    Ziji Ma

    2016-01-01

    Compressive sensing (CS) based channel estimation has shown its advantage of accurate reconstruction of sparse signals with fewer pilots for OFDM systems. However, the high computational cost of CS methods, due to linear programming, significantly restricts their implementation in practical applications. In this paper, we propose a reduced-complexity channel estimation scheme based on modified orthogonal matching pursuit with sliding windows for the ISDB-T (Integrated Services Digital Broadcasting - Terrestrial) system. The proposed scheme reduces the computational cost by limiting the searching region as well as making effective use of the last estimation result. In addition, an adaptive tracking strategy with a sliding sampling window improves the robustness of CS-based methods and guarantees the accuracy of the channel matrix reconstruction, even for fast time-variant channels. Computer simulation demonstrates its impact on improving the bit error rate and the computational complexity of the ISDB-T system.
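
    The idea of restricting the matching-pursuit search to a limited region can be sketched with a generic windowed orthogonal matching pursuit (the pilot structure, window tracking and stopping rule of the actual ISDB-T scheme are not reproduced; the matrix sizes and tap positions are illustrative):

        # Generic orthogonal matching pursuit restricted to a window of candidate
        # delay taps, illustrating the "limited searching region" idea.
        import numpy as np

        def omp_windowed(A, y, window, n_taps):
            """A: measurement matrix (pilots x delay taps), y: received pilots."""
            support, residual = [], y.copy()
            for _ in range(n_taps):
                corr = np.abs(A.conj().T @ residual)
                corr[[i for i in range(A.shape[1]) if i not in window]] = 0.0
                support.append(int(np.argmax(corr)))
                x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ x_ls
            x = np.zeros(A.shape[1], dtype=complex)
            x[support] = x_ls
            return x

        rng = np.random.default_rng(3)
        A = rng.standard_normal((32, 128)) + 1j * rng.standard_normal((32, 128))
        true_x = np.zeros(128, dtype=complex); true_x[[5, 9]] = [1.0, 0.5j]
        y = A @ true_x
        est = omp_windowed(A, y, window=set(range(0, 20)), n_taps=2)
        print(np.nonzero(est)[0])      # expected: taps 5 and 9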

  10. A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation

    Science.gov (United States)

    Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein

    2018-02-01

    The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes mathematically based on the Kac stochastic model has been put forward. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in the above-mentioned schemes varies between N(l)(N(l) - 1)/2 in BT, 1 in BB, and (N(l) - 1) in SBT or ISBT, where N(l) is the instantaneous number of particles in the lth cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N(l) - 1), i.e., Nsel < (N(l) - 1), keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, where both formulas recover the BB and SBT limits if Nsel is set to 1 and (N(l) - 1), respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no time counter (NTC) and nearest neighbor (NN) collision models.

  11. A stable higher order space time Galerkin marching-on-in-time scheme

    KAUST Repository

    Pray, Andrew J.

    2013-07-01

    We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.

  12. A Modification of the Fuzzy Logic Based DASH Adaptation Scheme for Performance Improvement

    Directory of Open Access Journals (Sweden)

    Hyun Jun Kim

    2018-01-01

    We propose a modification of the fuzzy logic based DASH adaptation scheme (FDASH) for seamless media service under time-varying network conditions. The proposed scheme (mFDASH) selects a more appropriate bit-rate for the next segment by modifying the Fuzzy Logic Controller (FLC), and estimates the available bandwidth more accurately than the FDASH scheme by using history-based TCP throughput estimation. Moreover, mFDASH reduces the number of video bit-rate changes by applying a Segment Bit-Rate Filtering Module (SBFM) and employs a Start Mechanism so that clients receive high-quality video in the very beginning stage of the streaming service. Lastly, a Sleeping Mechanism is applied to avoid any expected buffer overflow. We then use the NS-3 network simulator to verify the performance of mFDASH. According to the experimental results, mFDASH shows no buffer overflow within the limited buffer size, which is not guaranteed in FDASH. We also confirm that mFDASH provides the highest QoE to DASH clients among the three schemes (mFDASH, FDASH, and SVAA) in point-to-point, Wi-Fi, and LTE networks, respectively.

  13. Development of a discrete gas-kinetic scheme for simulation of two-dimensional viscous incompressible and compressible flows.

    Science.gov (United States)

    Yang, L M; Shu, C; Wang, Y

    2016-03-01

    In this work, a discrete gas-kinetic scheme (DGKS) is presented for the simulation of two-dimensional viscous incompressible and compressible flows. This scheme is developed from the circular function-based GKS, which was recently proposed by Shu and his co-workers [L. M. Yang, C. Shu, and J. Wu, J. Comput. Phys. 274, 611 (2014)]. For the circular function-based GKS, the integrals for conservation forms of moments over the infinite domain in the Maxwellian function-based GKS are simplified to integrals along the circle. As a result, explicit formulations of the conservative variables and fluxes are derived. However, these explicit formulations of the circular function-based GKS for viscous flows are still complicated, which may make them difficult for new users to apply. By using certain discrete points to represent the circle in the phase velocity space, the complicated formulations can be replaced by a simple solution process. The basic requirement is that the conservation forms of moments for the circular function-based GKS can be accurately satisfied by a weighted summation of distribution functions at the discrete points. In this work, it is shown that integral quadrature by four discrete points on the circle, which forms the D2Q4 discrete velocity model, can exactly match the integrals. Numerical results showed that the present scheme provides accurate results for incompressible and compressible viscous flows at roughly the same computational cost as that needed by the Roe scheme.
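
    The moment-matching requirement behind the D2Q4 choice can be checked numerically: four equally weighted points on the circle reproduce the circular averages of the velocity moments up to third order (an illustration of the quadrature property only, not the flux evaluation of the DGKS itself; the exact list of moments needed by the scheme may differ):

        # Check that four equally weighted points on the circle reproduce the
        # circular averages of cos^p * sin^q for all p + q <= 3.
        import numpy as np

        angles = np.arange(4) * np.pi / 2                        # D2Q4 directions
        theta = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)

        for p in range(4):
            for q in range(4 - p):
                exact = np.mean(np.cos(theta)**p * np.sin(theta)**q)
                d2q4 = np.mean(np.cos(angles)**p * np.sin(angles)**q)
                assert abs(exact - d2q4) < 1e-6, (p, q)
        print("D2Q4 reproduces all circle moments up to third order")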

  14. Understanding the persistence of measles: reconciling theory, simulation and observation.

    Science.gov (United States)

    Keeling, Matt J; Grenfell, Bryan T

    2002-01-01

    Ever since the pattern of localized extinction associated with measles was discovered by Bartlett in 1957, many models have been developed in an attempt to reproduce this phenomenon. Recently, the use of constant infectious and incubation periods, rather than the more convenient exponential forms, has been presented as a simple means of obtaining realistic persistence levels. However, this result appears at odds with rigorous mathematical theory; here we reconcile these differences. Using a deterministic approach, we parameterize a variety of models to fit the observed biennial attractor, thus determining the level of seasonality by the choice of model. We can then compare fairly the persistence of the stochastic versions of these models, using the 'best-fit' parameters. Finally, we consider the differences between the observed fade-out pattern and the more theoretically appealing 'first passage time'. PMID:11886620

  15. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    Science.gov (United States)

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers. Therefore, studies on the optimization of schemes for the natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural ecology planning of urban rivers can be made by the ANP method. This method can be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.

  16. HYBRID SYSTEM BASED FUZZY-PID CONTROL SCHEMES FOR UNPREDICTABLE PROCESS

    Directory of Open Access Journals (Sweden)

    M.K. Tan

    2011-07-01

    In general, the primary aim of the polymerization industry is to enhance the process operation in order to obtain a product of high quality and purity. However, a sudden and large amount of heat is released rapidly during the mixing of the two reactants, phenol and formalin, due to the exothermic behavior of the reaction. The unpredictable heat causes deviations of the process temperature and hence affects the quality of the product. Therefore, it is vital to control the process temperature during the polymerization. In modern industry, fuzzy logic is commonly used to auto-tune PID controllers to control the process temperature. However, this method needs an experienced operator to fine-tune the fuzzy membership functions and universe of discourse via a trial-and-error approach. Hence, the setting of the fuzzy inference system might not be accurate due to human error. Besides that, control of the process can be challenging due to rapid changes in the plant parameters, which increase the process complexity. This paper proposes an optimization scheme using a hybrid of Q-learning (QL) and a genetic algorithm (GA) to optimize the fuzzy membership functions in order to allow the conventional fuzzy-PID controller to control the process temperature more effectively. The performance of the proposed optimization scheme is compared with that of the existing fuzzy-PID scheme. The results show that the proposed optimization scheme is able to control the process temperature more effectively even when a disturbance is introduced.

  17. A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics

    Science.gov (United States)

    Zhang, Zhijun; Liu, Xinzijian; Chen, Zifei; Zheng, Haifeng; Yan, Kangyu; Liu, Jian

    2017-07-01

    We show a unified second-order scheme for constructing simple, robust, and accurate algorithms for typical thermostats for configurational sampling for the canonical ensemble. When Langevin dynamics is used, the scheme leads to the BAOAB algorithm that has been recently investigated. We show that the scheme is also useful for other types of thermostats, such as the Andersen thermostat and Nosé-Hoover chain, regardless of whether the thermostat is deterministic or stochastic. In addition to analytical analysis, two 1-dimensional models and three typical real molecular systems that range from the gas phase, clusters, to the condensed phase are used in numerical examples for demonstration. Accuracy may be increased by an order of magnitude for estimating coordinate-dependent properties in molecular dynamics (when the same time interval is used), irrespective of which type of thermostat is applied. The scheme is especially useful for path integral molecular dynamics because it consistently improves the efficiency for evaluating all thermodynamic properties for any type of thermostat.
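
    For the Langevin case mentioned above, the BAOAB algorithm splits each step into two half kicks (B), two half drifts (A) and an exact Ornstein-Uhlenbeck update (O). A minimal sketch for a 1-D harmonic potential is given below (units, the potential and all parameter values are illustrative; the unified scheme of the paper covers other thermostats as well):

        # Minimal BAOAB integrator for Langevin dynamics in a 1-D harmonic potential.
        import numpy as np

        def baoab(x, v, force, dt, gamma, kT, mass, n_steps, rng):
            c1 = np.exp(-gamma * dt)
            c2 = np.sqrt(kT / mass * (1.0 - c1 * c1))
            xs = np.empty(n_steps)
            f = force(x)
            for i in range(n_steps):
                v += 0.5 * dt * f / mass                   # B: half kick
                x += 0.5 * dt * v                          # A: half drift
                v = c1 * v + c2 * rng.standard_normal()    # O: exact OU update
                x += 0.5 * dt * v                          # A: half drift
                f = force(x)
                v += 0.5 * dt * f / mass                   # B: half kick
                xs[i] = x
            return xs

        rng = np.random.default_rng(0)
        k = 1.0
        xs = baoab(x=0.0, v=0.0, force=lambda x: -k * x, dt=0.2, gamma=1.0,
                   kT=1.0, mass=1.0, n_steps=200000, rng=rng)
        print(np.var(xs[10000:]))    # should approach kT/k = 1 (configurational sampling)

    With dt = 0.2 the sampled configurational variance comes out close to kT/k = 1, illustrating the small configuration-space sampling error that makes this splitting attractive.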

  18. Electricity Consumption Forecasting Scheme via Improved LSSVM with Maximum Correntropy Criterion

    OpenAIRE

    Jiandong Duan; Xinyu Qiu; Wentao Ma; Xuan Tian; Di Shang

    2018-01-01

    In recent years, with the deepening of the reform of China's electricity retail side and the gradual opening up of the electricity market, the forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and evaluate the results scientifically are still key research topics. In this paper, we propose a novel prediction scheme based on the least-squares support vector machine (LSSVM) model with a

  19. Computational electrodynamics in material media with constraint-preservation, multidimensional Riemann solvers and sub-cell resolution - Part II, higher order FVTD schemes

    Science.gov (United States)

    Balsara, Dinshaw S.; Garain, Sudip; Taflove, Allen; Montecinos, Gino

    2018-02-01

    The Finite Difference Time Domain (FDTD) scheme has served the computational electrodynamics community very well and part of its success stems from its ability to satisfy the constraints in Maxwell's equations. Even so, in the previous paper of this series we were able to present a second order accurate Godunov scheme for computational electrodynamics (CED) which satisfied all the same constraints and simultaneously retained all the traditional advantages of Godunov schemes. In this paper we extend the Finite Volume Time Domain (FVTD) schemes for CED in material media to better than second order of accuracy. From the FDTD method, we retain a somewhat modified staggering strategy of primal variables which enables a very beneficial constraint-preservation for the electric displacement and magnetic induction vector fields. This is accomplished with constraint-preserving reconstruction methods which are extended in this paper to third and fourth orders of accuracy. The idea of one-dimensional upwinding from Godunov schemes has to be significantly modified to use the multidimensionally upwinded Riemann solvers developed by the first author. In this paper, we show how they can be used within the context of a higher order scheme for CED. We also report on advances in timestepping. We show how Runge-Kutta IMEX schemes can be adapted to CED even in the presence of stiff source terms brought on by large conductivities as well as strong spatial variations in permittivity and permeability. We also formulate very efficient ADER timestepping strategies to endow our method with sub-cell resolving capabilities. As a result, our method can be stiffly-stable and resolve significant sub-cell variation in the material properties within a zone. Moreover, we present ADER schemes that are applicable to all hyperbolic PDEs with stiff source terms and at all orders of accuracy. Our new ADER formulation offers a treatment of stiff source terms that is much more efficient than previous ADER

  20. High-order non-uniform grid schemes for numerical simulation of hypersonic boundary-layer stability and transition

    International Nuclear Information System (INIS)

    Zhong Xiaolin; Tatineni, Mahidhar

    2003-01-01

    The direct numerical simulation of receptivity, instability and transition of hypersonic boundary layers requires high-order accurate schemes because lower-order schemes do not have an adequate accuracy level to compute the large range of time and length scales in such flow fields. The main limiting factor in the application of high-order schemes to practical boundary-layer flow problems is the numerical instability of high-order boundary closure schemes on the wall. This paper presents a family of high-order non-uniform grid finite difference schemes with stable boundary closures for the direct numerical simulation of hypersonic boundary-layer transition. By using an appropriate grid stretching, and clustering grid points near the boundary, high-order schemes with stable boundary closures can be obtained. The order of the schemes ranges from first-order at the lowest, to the global spectral collocation method at the highest. The accuracy and stability of the new high-order numerical schemes are tested by numerical simulations of the linear wave equation and two-dimensional incompressible flat plate boundary layer flows. The high-order non-uniform-grid schemes (up to 11th order) are subsequently applied to the simulation of the receptivity of a hypersonic boundary layer to free-stream disturbances over a blunt leading edge. The steady and unsteady results show that the new high-order schemes are stable and are able to produce high accuracy for computations of the nonlinear two-dimensional Navier-Stokes equations for wall-bounded supersonic flow.

  1. Discretization of convection-diffusion equations with finite-difference scheme derived from simplified analytical solutions

    International Nuclear Information System (INIS)

    Kriventsev, Vladimir

    2000-09-01

    Most thermal-hydraulic processes in nuclear engineering can be described by general convection-diffusion equations, which can often be simulated numerically with the finite-difference method (FDM). An effective scheme for the finite-difference discretization of such equations is presented in this report. The derivation of this scheme is based on analytical solutions of a simplified one-dimensional equation written for every control volume of the finite-difference mesh. These analytical solutions are constructed using linearized representations of both the diffusion coefficient and the source term. As a result, the Efficient Finite-Differencing (EFD) scheme makes it possible to significantly improve the accuracy of the numerical method even when using mesh systems with fewer grid nodes, which, in turn, speeds up the numerical simulation. EFD has been carefully verified on a series of sample problems for which either analytical or very precise numerical solutions can be found. EFD has been compared with other popular FDM schemes, including novel, accurate (as well as sophisticated) methods. Among the methods compared were the well-known central difference scheme, the upwind scheme, and the exponential differencing and hybrid schemes of Spalding. Also, newly developed finite-difference schemes, such as the quadratic upstream (QUICK) scheme of Leonard, the locally analytic differencing (LOAD) scheme of Wong and Raithby, the flux-spline scheme proposed by Varejago and Patankar, as well as the latest LENS discretization of Sakai, have been compared. Detailed results of this comparison are given in this report. These tests have shown a high efficiency of the EFD scheme. For most of the sample problems considered, EFD demonstrated a numerical error that appeared to be orders of magnitude lower than that of the other discretization methods. Or, in other words, EFD predicted the numerical solution with the same given numerical error but using far fewer grid nodes. In this report, the detailed
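
    The locally analytic idea can be illustrated with the classical exponential scheme for steady one-dimensional convection-diffusion with constant coefficients, for which face coefficients built from the exact exponential profile reproduce the analytical solution at the nodes even on a coarse mesh (this sketches a comparator scheme mentioned above, not the EFD discretization itself, which additionally linearizes the diffusion coefficient and the source term):

        # Exponential (locally exact) scheme for u*dphi/dx = gamma*d2phi/dx2 on [0,1]
        # with phi(0)=0, phi(1)=1; the discrete solution is nodally exact.
        import numpy as np

        u, gamma, n = 2.5, 0.1, 11
        dx = 1.0 / (n - 1)
        P = u * dx / gamma                         # cell Peclet number = 2.5
        D = gamma / dx
        aE = D * P / np.expm1(P)                   # exponential-scheme coefficients
        aW = D * P * np.exp(P) / np.expm1(P)
        aP = aE + aW

        A = np.zeros((n, n)); b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0; b[-1] = 1.0     # phi(0)=0, phi(1)=1
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = -aW, aP, -aE

        phi = np.linalg.solve(A, b)
        x = np.linspace(0.0, 1.0, n)
        exact = np.expm1(u * x / gamma) / np.expm1(u / gamma)
        print(np.max(np.abs(phi - exact)))         # near machine precision: nodally exact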

  2. Strategies for reconciling environmental goals, productivity improvement, and increased energy efficiency in the industrial sector: Analytic framework

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, G.A.

    1995-06-01

    The project is motivated by recommendations that were made by industry in a number of different forums: the Industry Workshop of the White House Conference on Climate Change, and more recently, industry consultations for EPAct Section 131(c) and Section 160(b). These recommendations were related to reconciling conflicts in environmental goals, productivity improvements and increased energy efficiency in the industrial sector.

  3. Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification

    Energy Technology Data Exchange (ETDEWEB)

    Blottner, F.G.; Lopez, A.R.

    1998-10-01

    This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
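
    The observed order of accuracy and the Richardson-extrapolated estimate obtained from three systematically refined meshes follow the standard formulas, sketched below with synthetic "solutions" carrying a known second-order error (a verification study would use actual code output):

        # Observed order of accuracy and Richardson extrapolation from three
        # solutions on meshes refined by a constant ratio r.
        import math

        def observed_order(f_fine, f_med, f_coarse, r):
            return math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

        def richardson(f_fine, f_med, r, p):
            return f_fine + (f_fine - f_med) / (r**p - 1.0)

        f_exact, r, h = 1.0, 2.0, 0.1
        f1, f2, f3 = (f_exact + 0.3 * (h / r**k) ** 2 for k in (2, 1, 0))  # fine, medium, coarse
        p = observed_order(f1, f2, f3, r)
        print(round(p, 3), richardson(f1, f2, r, p))   # recovers p = 2 and f_exact = 1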

  4. [Occlusal schemes of complete dentures--a review of the literature].

    Science.gov (United States)

    Tarazi, E; Ticotsky-Zadok, N

    2007-01-01

    movements). The linear occlusion scheme occludes cuspless teeth with anatomic teeth that have been modified (bladed teeth) in order to achieve linear occlusal contacts. Linear contacts are the pin-point contacts of the tips of the cusps of the bladed teeth against cuspless teeth that create a plane. The specific design of positioning modified teeth on the upper denture and non-anatomic teeth on the lower one is called lingualized occlusion. It is characterized by contacts of only the lingual (palatinal, to be more accurate) cusps of the upper teeth with the lower teeth. The lingualized occlusal scheme provides better aesthetics than the monoplane occlusion scheme, and better stability (in the case of resorbed residual ridges) than the bilateral occlusion scheme of anatomic teeth. The results of studies that compared different occlusal schemes may well be summarized as inconclusive. However, it does seem that patients preferred anatomic or semi-anatomic (modified) teeth, and that chewing efficiency with anatomic and modified teeth was better than with non-anatomic teeth. Similar results were found in studies of occlusal schemes of implant-supported lower dentures opposed by complete upper dentures. There is no single occlusal scheme that fits all patients in need of complete dentures; in fact, in many cases more than one occlusal scheme might be adequate. Selection of an occlusal scheme for a patient should include correlation of the characteristics of the patient with those of the various occlusal schemes. The characteristics of the patient include: height and width of the residual ridge, aesthetic demands of the patient, skeletal relations (class I/II/III), neuromuscular control, and tendency for para-functional activity. The multiple characteristics of the occlusal schemes were reviewed in this article. Considering all of those factors in relation to a specific patient, the dentist should be able to decide on the most suitable occlusal scheme for the case.

  5. Reconciling Long-Wavelength Dynamic Topography, Geoid Anomalies and Mass Distribution on Earth

    Science.gov (United States)

    Hoggard, M.; Richards, F. D.; Ghelichkhan, S.; Austermann, J.; White, N.

    2017-12-01

    Since the first satellite observations in the late 1950s, we have known that the Earth's non-hydrostatic geoid is dominated by spherical harmonic degree 2 (wavelengths of 16,000 km). Peak amplitudes are approximately ± 100 m, with highs centred on the Pacific Ocean and Africa, encircled by lows in the vicinity of the Pacific Ring of Fire and at the poles. Initial seismic tomography models revealed that the shear-wave velocity, and therefore presumably the density structure, of the lower mantle is also dominated by degree 2. Anti-correlation of slow, probably low-density regions beneath geoid highs indicates that the mantle is affected by large-scale flow. Thus, buoyant features are rising and exert viscous normal stresses that act to deflect the surface and core-mantle boundary (CMB). Pioneering studies in the 1980s showed that a viscosity jump between the upper and lower mantle is required to reconcile these geoid and tomographically inferred density anomalies. These studies also predict 1-2 km of dynamic topography at the surface, dominated by degree 2. In contrast to this prediction, a global observational database of oceanic residual depth measurements indicates that degree-2 dynamic topography has peak amplitudes of only 500 m. Here, we attempt to reconcile observations of dynamic topography, geoid, gravity anomalies and CMB topography using instantaneous flow kernels. We exploit a density structure constructed from blended seismic tomography models, combining deep mantle imaging with higher resolution upper mantle features. The radial viscosity structure is discretised, and we invert for the best-fitting viscosity profile using a conjugate gradient search algorithm, subject to damping. Our results suggest that, due to strong sensitivity to the radial viscosity structure, the Earth's geoid is compatible with only ± 500 m of degree-2 dynamic topography.

  6. Nonlinear H∞ Optimal Control Scheme for an Underwater Vehicle with Regional Function Formulation

    Directory of Open Access Journals (Sweden)

    Zool H. Ismail

    2013-01-01

    A conventional region control technique cannot meet the demands of accurate tracking performance in view of its inability to accommodate highly nonlinear system dynamics, imprecise hydrodynamic coefficients, and external disturbances. In this paper, a robust technique is presented for an Autonomous Underwater Vehicle (AUV) with a region tracking function. Within this control scheme, nonlinear H∞ and region-based control schemes are used. A Lyapunov-like function is presented for the stability analysis of the proposed control law. Numerical simulations are presented to demonstrate the performance of the proposed tracking control of the AUV. It is shown that the proposed control law is robust against parameter uncertainties, external disturbances, and nonlinearities, and that it leads to uniform ultimate boundedness of the region tracking error.

  7. Instability of the time splitting scheme for the one-dimensional and relativistic Vlasov-Maxwell system

    CERN Document Server

    Huot, F; Bertrand, P; Sonnendrücker, E; Coulaud, O

    2003-01-01

    The Time Splitting Scheme (TSS) has been examined within the context of the one-dimensional (1D) relativistic Vlasov-Maxwell model. In the strongly relativistic regime of the laser-plasma interaction, the TSS cannot be applied to solve the Vlasov equation. We propose a new semi-Lagrangian scheme based on a full 2D advection and study its advantages over the classical Splitting procedure. Details of the underlying integration of the Vlasov equation appear to be important in achieving accurate plasma simulations. Examples are given which are related to the relativistic modulational instability and the self-induced transparency of an ultra-intense electromagnetic pulse in the relativistic regime.

  8. Analytical reconstruction schemes for coarse-mesh spectral nodal solution of slab-geometry SN transport problems

    International Nuclear Information System (INIS)

    Barros, R. C.; Filho, H. A.; Platt, G. M.; Oliveira, F. B. S.; Militao, D. S.

    2009-01-01

    Coarse-mesh numerical methods are very efficient in the sense that they generate accurate results in a short computational time, as the number of floating point operations generally decreases as a result of the reduced number of mesh points. On the other hand, they generate numerical solutions that do not give detailed information on the problem solution profile, as the grid points can be located considerably far from each other. In this paper we describe two analytical reconstruction schemes for the coarse-mesh solution generated by the spectral nodal method for the neutral particle discrete ordinates (SN) transport model in slab geometry. The first scheme we describe is based on the analytical reconstruction of the coarse-mesh solution within each discretization cell of the spatial grid set up on the slab. The second scheme is based on the angular reconstruction of the discrete ordinates solution between two contiguous ordinates of the angular quadrature set used in the SN model. Numerical results are given to illustrate the accuracy of the two reconstruction schemes, as described in this paper. (authors)

  9. Reconciling projections of the Antarctic contribution to sea level rise

    Science.gov (United States)

    Edwards, Tamsin; Holden, Philip; Edwards, Neil; Wernecke, Andreas

    2017-04-01

    Two recent studies of the Antarctic contribution to sea level rise this century had best estimates that differed by an order of magnitude (around 10 cm and 1 m by 2100). The first, Ritz et al. (2015), used a model calibrated with satellite data, giving a 5% probability of exceeding 30cm by 2100 for sea level rise due to Antarctic instability. The second, DeConto and Pollard (2016), used a model evaluated with reconstructions of palaeo-sea level. They did not estimate probabilities, but using a simple assumption here about the distribution shape gives up to a 5% chance of Antarctic contribution exceeding 2.3 m this century with total sea level rise approaching 3 m. If robust, this would have very substantial implications for global adaptation to climate change. How are we to make sense of this apparent inconsistency? How much is down to the data - does the past tell us we will face widespread and rapid Antarctic ice losses in the future? How much is due to the mechanism of rapid ice loss ('cliff failure') proposed in the latter paper, or other parameterisation choices in these low resolution models (GRISLI and PISM, respectively)? How much is due to choices made in the ensemble design and calibration? How do these projections compare with high resolution, grounding line resolving models such as BISICLES? Could we reduce the huge uncertainties in the palaeo-study? Emulation provides a powerful tool for understanding these questions and reconciling the projections. By describing the three numerical ice sheet models with statistical models, we can re-analyse the ensembles and re-do the calibrations under a common statistical framework. This reduces uncertainty in the PISM study because it allows massive sampling of the parameter space, which reduces the sensitivity to reconstructed palaeo-sea level values and also narrows the probability intervals because the simple assumption about distribution shape above is no longer needed. We present reconciled probabilistic

  10. A family of high-order gas-kinetic schemes and its comparison with Riemann solver based high-order methods

    Science.gov (United States)

    Ji, Xing; Zhao, Fengxiang; Shyy, Wei; Xu, Kun

    2018-03-01

    Most high order computational fluid dynamics (CFD) methods for compressible flows are based on Riemann solver for the flux evaluation and Runge-Kutta (RK) time stepping technique for temporal accuracy. The advantage of this kind of space-time separation approach is the easy implementation and stability enhancement by introducing more middle stages. However, the nth-order time accuracy needs no less than n stages for the RK method, which can be very time and memory consuming due to the reconstruction at each stage for a high order method. On the other hand, the multi-stage multi-derivative (MSMD) method can be used to achieve the same order of time accuracy using less middle stages with the use of the time derivatives of the flux function. For traditional Riemann solver based CFD methods, the lack of time derivatives in the flux function prevents its direct implementation of the MSMD method. However, the gas kinetic scheme (GKS) provides such a time accurate evolution model. By combining the second-order or third-order GKS flux functions with the MSMD technique, a family of high order gas kinetic methods can be constructed. As an extension of the previous 2-stage 4th-order GKS, the 5th-order schemes with 2 and 3 stages will be developed in this paper. Based on the same 5th-order WENO reconstruction, the performance of gas kinetic schemes from the 2nd- to the 5th-order time accurate methods will be evaluated. The results show that the 5th-order scheme can achieve the theoretical order of accuracy for the Euler equations, and present accurate Navier-Stokes solutions as well due to the coupling of inviscid and viscous terms in the GKS formulation. In comparison with Riemann solver based 5th-order RK method, the high order GKS has advantages in terms of efficiency, accuracy, and robustness, for all test cases. The 4th- and 5th-order GKS have the same robustness as the 2nd-order scheme for the capturing of discontinuous solutions. The current high order MSMD GKS is a
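
    The two-stage fourth-order multi-stage multi-derivative update referred to above can be illustrated on a scalar ODE, where the time derivative of the "flux" is available through the chain rule; in the gas-kinetic schemes this derivative is instead supplied by the time-dependent GKS flux function (the coefficients below are the standard two-stage fourth-order ones, everything else is illustrative):

        # Two-stage fourth-order multi-derivative update for du/dt = f(u), using
        # df/dt = f'(u) f(u); the convergence ratios confirm fourth-order accuracy.
        def s2o4_step(u, dt, f, dfdt):
            lu, ltu = f(u), dfdt(u)
            u_half = u + 0.5 * dt * lu + dt * dt / 8.0 * ltu
            return u + dt * lu + dt * dt / 6.0 * (ltu + 2.0 * dfdt(u_half))

        f = lambda u: -u * u                 # du/dt = -u^2, u(0) = 1, exact u = 1/(1+t)
        dfdt = lambda u: 2.0 * u ** 3        # f'(u) * f(u)

        errors = []
        for n in (20, 40, 80):
            u, dt = 1.0, 1.0 / n
            for _ in range(n):
                u = s2o4_step(u, dt, f, dfdt)
            errors.append(abs(u - 0.5))
        print([round(errors[i] / errors[i + 1], 1) for i in range(2)])  # roughly [16, 16]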

  11. Adaptive protection scheme

    Directory of Open Access Journals (Sweden)

    R. Sitharthan

    2016-09-01

    Full Text Available This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides a suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid: namely, grid connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto reclosures, through which the proposed adaptive protection scheme recovers faster from the fault and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through the time domain simulations carried out in the PSCAD⧹EMTDC software environment.

  12. Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Henning; Hollik, Wolfgang [Max-Planck Institut fuer Physik, Munich (Germany); Heinemeyer, Sven [Campus of International Excellence UAM+CSIC, Madrid (Spain); Universidad Autonoma de Madrid, Instituto de Fisica Teorica, (UAM/CSIC), Madrid (Spain); Instituto de Fisica Cantabria (CSIC-UC), Santander (Spain); Weiglein, Georg [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)

    2018-01-15

    Various methods are used in the literature for predicting the lightest CP-even Higgs boson mass in the Minimal Supersymmetric Standard Model (MSSM). Fixed-order diagrammatic calculations capture all effects at a given order and yield accurate results for scales of supersymmetric (SUSY) particles that are not separated too much from the weak scale. Effective field theory calculations allow a resummation of large logarithmic contributions up to all orders and therefore yield accurate results for a high SUSY scale. A hybrid approach, where both methods have been combined, is implemented in the computer code FeynHiggs. So far, however, at large scales sizeable differences have been observed between FeynHiggs and other pure EFT codes. In this work, the various approaches are analytically compared with each other in a simple scenario in which all SUSY mass scales are chosen to be equal to each other. Three main sources are identified that account for the major part of the observed differences. Firstly, it is shown that the scheme conversion of the input parameters that is commonly used for the comparison of fixed-order results is not adequate for the comparison of results containing a series of higher-order logarithms. Secondly, the treatment of higher-order terms arising from the determination of the Higgs propagator pole is addressed. Thirdly, the effect of different parametrizations in particular of the top Yukawa coupling in the non-logarithmic terms is investigated. Taking into account all of these effects, in the considered simple scenario very good agreement is found for scales above 1 TeV between the results obtained using the EFT approach and the hybrid approach of FeynHiggs. (orig.)

  13. Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass

    International Nuclear Information System (INIS)

    Bahl, Henning; Hollik, Wolfgang; Heinemeyer, Sven; Weiglein, Georg

    2017-06-01

    Various methods are used in the literature for predicting the lightest CP-even Higgs boson mass in the Minimal Supersymmetric Standard Model (MSSM). Fixed-order diagrammatic calculations capture all effects at a given order and yield accurate results for scales of supersymmetric (SUSY) particles that are not separated too much from the weak scale. Effective field theory calculations allow a resummation of large logarithmic contributions up to all orders and therefore yield accurate results for a high SUSY scale. A hybrid approach, where both methods have been combined, is implemented in the computer code FeynHiggs. So far, however, at large scales sizeable differences have been observed between FeynHiggs and other pure EFT codes. In this work, the various approaches are analytically compared with each other in a simple scenario in which all SUSY mass scales are chosen to be equal to each other. Three main sources are identified that account for the major part of the observed differences. Firstly, it is shown that the scheme conversion of the input parameters that is commonly used for the comparison of fixed-order results is not adequate for the comparison of results containing a series of higher-order logarithms. Secondly, the treatment of higher-order terms arising from the determination of the Higgs propagator pole is addressed. Thirdly, the effect of different parametrizations in particular of the top Yukawa coupling in the non-logarithmic terms is investigated. Taking into account all of these effects, in the considered simple scenario very good agreement is found for scales above 1 TeV between the results obtained using the EFT approach and the hybrid approach of FeynHiggs.

  14. Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Henning; Hollik, Wolfgang [Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinemeyer, Sven [Campus of International Excellence UAM+CSIC, Madrid (Spain); Univ. Autonoma de Madrid (Spain). Inst. de Fisica Teorica; Instituto de Fisica Cantabria (CSIC-UC), Santander (Spain); Weiglein, Georg [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2017-06-15

    Various methods are used in the literature for predicting the lightest CP-even Higgs boson mass in the Minimal Supersymmetric Standard Model (MSSM). Fixed-order diagrammatic calculations capture all effects at a given order and yield accurate results for scales of supersymmetric (SUSY) particles that are not separated too much from the weak scale. Effective field theory calculations allow a resummation of large logarithmic contributions up to all orders and therefore yield accurate results for a high SUSY scale. A hybrid approach, where both methods have been combined, is implemented in the computer code FeynHiggs. So far, however, at large scales sizeable differences have been observed between FeynHiggs and other pure EFT codes. In this work, the various approaches are analytically compared with each other in a simple scenario in which all SUSY mass scales are chosen to be equal to each other. Three main sources are identified that account for the major part of the observed differences. Firstly, it is shown that the scheme conversion of the input parameters that is commonly used for the comparison of fixed-order results is not adequate for the comparison of results containing a series of higher-order logarithms. Secondly, the treatment of higher-order terms arising from the determination of the Higgs propagator pole is addressed. Thirdly, the effect of different parametrizations in particular of the top Yukawa coupling in the non-logarithmic terms is investigated. Taking into account all of these effects, in the considered simple scenario very good agreement is found for scales above 1 TeV between the results obtained using the EFT approach and the hybrid approach of FeynHiggs.

  15. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problems. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
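
    As an illustration of the compact-scheme idea (not the optimized pentadiagonal scheme of the paper, whose coefficients are tuned for resolution rather than formal order), a standard fourth-order tridiagonal Pade compact first-derivative scheme on a periodic grid reads alpha*f'(i-1) + f'(i) + alpha*f'(i+1) = a*(f(i+1) - f(i-1))/(2h) with alpha = 1/4 and a = 3/2:

        # Fourth-order tridiagonal (Pade) compact first derivative on a periodic grid.
        import numpy as np

        def compact_derivative(f, h):
            n = f.size
            alpha, a = 0.25, 1.5
            A = np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
            A[0, -1] = A[-1, 0] = alpha                       # periodic wrap-around
            rhs = a * (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
            return np.linalg.solve(A, rhs)

        n = 32
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        df = compact_derivative(np.sin(x), 2.0 * np.pi / n)
        print(np.max(np.abs(df - np.cos(x))))                 # about 2e-6 on only 32 points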

  16. Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes

    Science.gov (United States)

    Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.

    2015-01-01

    Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems that increase in complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks co-exist with unsteady waves that display a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the

  17. A second-order cell-centered Lagrangian ADER-MOOD finite volume scheme on multidimensional unstructured meshes for hydrodynamics

    Science.gov (United States)

    Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri

    2018-04-01

    In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second order of accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second order of accuracy on smooth flows, to maintain an essentially non-oscillatory behavior on discontinuous profiles, to ensure general robustness and physical admissibility of the numerical solution, and to deliver precision where appropriate.

  18. Development of explicit solution scheme for the MATRA-LMR code and test calculation

    International Nuclear Information System (INIS)

    Jeong, H. Y.; Ha, K. S.; Chang, W. P.; Kwon, Y. M.; Jeong, K. S.

    2003-01-01

    The local blockage in a subassembly of a liquid metal reactor is of particular importance because local sodium boiling could occur downstream of the blockage and the integrity of the fuel clad could be threatened. The explicit solution scheme of the MATRA-LMR code is developed to analyze flow blockage in a subassembly of a liquid metal cooled reactor. In the present study, the capability of the code is extended to the analysis of complete blockage of one or more subchannels. The results of the developed solution scheme show very good agreement with the results obtained from the implicit scheme for the experiments of a flow channel without any blockage. The applicability of the code is also evaluated for two typical experiments in a blocked channel. Through the sensitivity study, it is shown that the explicit scheme of MATRA-LMR predicts the flow and temperature profiles after blockage reasonably well if the effect of the wire is suitably modeled. The simple assumption in the wire-forcing function is effective for the un-blocked case or for the case of blockage with lower velocity. A different type of wire-forcing function describing the velocity reduction after blockage or an accurate distributed resistance model is required for more improved predictions.

  19. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging.

    Science.gov (United States)

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-04-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling schemes may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, covering the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further distinguish two cases according to whether the timing sequence between the probe injection and the sampling starting time is retained or discarded. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more pronounced if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operation.

  20. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik

    2017-07-20

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without a considerable performance loss. Here, to accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with a heterodyne detection technique over independent and identically distributed log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency by adopting adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.

  1. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai; Cho, Sung Ho

    2017-01-01

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without a considerable performance loss. Here, to accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with a heterodyne detection technique over independent and identically distributed log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency by adopting adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.

  2. Reconciling work and family caregiving among adult-child family caregivers of older people with dementia: effects on role strain and depressive symptoms.

    Science.gov (United States)

    Wang, Yu-Nu; Shyu, Yea-Ing Lotus; Chen, Min-Chi; Yang, Pei-Shan

    2011-04-01

    This paper is a report of a study that examined the effects of work demands, including employment status, work inflexibility and difficulty reconciling work and family caregiving, on role strain and depressive symptoms of adult-child family caregivers of older people with dementia. Family caregivers also employed for pay are known to be affected by work demands, i.e. excessive workload and time pressures. However, few studies have shown how these work demands and the reconciliation between work and family caregiving influence caregivers' role strain and depressive symptoms. For this cross-sectional study, secondary data were analysed for 119 adult-child family caregivers of older people with dementia in Taiwan using hierarchical multiple regression. After adjusting for demographic characteristics, resources and role demands overload, family caregivers with full-time jobs (β=0.25) and those with more difficulty reconciling work and caregiving roles (β=0.36) reported greater role strain than those working part-time or unemployed. Family caregivers with more work inflexibility reported more depressive symptoms (β=0.29). Work demands affected family caregivers' role strain and depressive symptoms. Working full-time and having more difficulty reconciling work and caregiving roles predicted role strain; work inflexibility predicted depressive symptoms. These results can help clinicians identify high-risk groups for role strain and depression. Nurses need to assess family caregivers for work flexibility when screening for high-risk groups and encourage them to reconcile working with family-care responsibilities to reduce role strain. © 2010 Blackwell Publishing Ltd.

  3. Modern classification of neoplasms: reconciling differences between morphologic and molecular approaches

    International Nuclear Information System (INIS)

    Berman, Jules

    2005-01-01

    For over 150 years, pathologists have relied on histomorphology to classify and diagnose neoplasms. Their success has been stunning, permitting the accurate diagnosis of thousands of different types of neoplasms using only a microscope and a trained eye. In the past two decades, cancer genomics has challenged the supremacy of histomorphology by identifying genetic alterations shared by morphologically diverse tumors and by finding genetic features that distinguish subgroups of morphologically homogeneous tumors. The Developmental Lineage Classification and Taxonomy of Neoplasms groups neoplasms by their embryologic origin. The putative value of this classification is based on the expectation that tumors of a common developmental lineage will share common metabolic pathways and common responses to drugs that target these pathways. The purpose of this manuscript is to show that grouping tumors according to their developmental lineage can reconcile certain fundamental discrepancies resulting from morphologic and molecular approaches to neoplasm classification. In this study, six issues in tumor classification are described that exemplify the growing rift between morphologic and molecular approaches to tumor classification: 1) the morphologic separation between epithelial and non-epithelial tumors; 2) the grouping of tumors based on shared cellular functions; 3) the distinction between germ cell tumors and pluripotent tumors of non-germ cell origin; 4) the distinction between tumors that have lost their differentiation and tumors that arise from uncommitted stem cells; 5) the molecular properties shared by morphologically disparate tumors that have a common developmental lineage, and 6) the problem of re-classifying morphologically identical but clinically distinct subsets of tumors. The discussion of these issues in the context of describing different methods of tumor classification is intended to underscore the clinical value of a robust tumor classification. A

  4. ENSEMBLE methods to reconcile disparate national long range dispersion forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, T; Galmarini, S; Bianconi, R; French, S [eds.

    2003-11-01

    ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system was developed to reconcile disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)

  5. ENSEMBLE methods to reconcile disparate national long range dispersion forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, T.; Galmarini, S.; Bianconi, R.; French, S. (eds.)

    2003-11-01

    ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system was developed to reconcile disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)

  6. Reconciling parenting and smoking in the context of child development.

    Science.gov (United States)

    Bottorff, Joan L; Oliffe, John L; Kelly, Mary T; Johnson, Joy L; Chan, Anna

    2013-08-01

    In this article we explore the micro-social context of parental tobacco use in the first years of a child's life and early childhood. We conducted individual interviews with 28 mothers and fathers during the 4 years following the birth of their child. Using grounded theory methods, we identified the predominant explanatory concept in parents' accounts as the need to reconcile being a parent and smoking. Desires to become smoke-free coexisted with five types of parent-child interactions: (a) protecting the defenseless child, (b) concealing smoking and cigarettes from the mimicking child, (c) reinforcing smoking as bad with the communicative child, (d) making guilt-driven promises to the fearful child, and (e) relinquishing personal responsibility to the autonomous child. We examine the agency of the child in influencing parents' smoking practices, the importance of children's observational learning in the early years, and the reciprocal nature of parent-child interactions related to parents' smoking behavior.

  7. Evaluating radiative transfer schemes treatment of vegetation canopy architecture in land surface models

    Science.gov (United States)

    Braghiere, Renato; Quaife, Tristan; Black, Emily

    2016-04-01

    Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation; however, it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive for large-scale studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme by including a simple function of the Sun zenith angle, the so-called "structure factor", which does not require an explicit description and understanding of the complex phenomena arising from the presence of heterogeneous vegetation architecture, and which guarantees simulations of the radiative balance consistent with 3D representations. In order to evaluate the ability of the proposed parameterisation to accurately represent the radiative balance of more complex 3D schemes, a comparison between the modified two-stream approximation with the "structure factor" parameterisation and state-of-the-art 3D radiative transfer schemes was conducted, following the set of virtual scenarios described in the RAMI4PILPS experiment. These experiments have been evaluating the radiative balance of several models under perfectly controlled conditions in order to eliminate uncertainties arising from an incomplete or erroneous knowledge of the structural, spectral and illumination related canopy characteristics typical

  8. Efficient numerical schemes for viscoplastic avalanches. Part 1: The 1D case

    Energy Technology Data Exchange (ETDEWEB)

    Fernández-Nieto, Enrique D., E-mail: edofer@us.es [Departamento de Matemática Aplicada I, Universidad de Sevilla, E.T.S. Arquitectura, Avda, Reina Mercedes, s/n, 41012 Sevilla (Spain); Gallardo, José M., E-mail: jmgallardo@uma.es [Departamento de Análisis Matemático, Universidad de Málaga, F. Ciencias, Campus Teatinos S/N (Spain); Vigneaux, Paul, E-mail: Paul.Vigneaux@math.cnrs.fr [Unitée de Mathématiques Pures et Appliquées, Ecole Normale Supérieure de Lyon, 46 allée d' Italie, 69364 Lyon Cedex 07 (France)

    2014-05-01

    This paper deals with the numerical resolution of a shallow water viscoplastic flow model. Viscoplastic materials are characterized by the existence of a yield stress: below a certain critical threshold in the imposed stress, there is no deformation and the material behaves like a rigid solid, but when that yield value is exceeded, the material flows like a fluid. In the context of avalanches, it means that after going down a slope, the material can stop and its free surface has a non-trivial shape, as opposed to the case of water (Newtonian fluid). The model involves variational inequalities associated with the yield threshold: finite-volume schemes are used together with duality methods (namely Augmented Lagrangian and Bermúdez–Moreno) to discretize the problem. To be able to accurately simulate the stopping behavior of the avalanche, new schemes need to be designed, involving the classical notion of well-balancing. In the present context, it needs to be extended to take into account the viscoplastic nature of the material as well as general bottoms with wet/dry fronts which are encountered in geophysical geometries. We derived such schemes and numerical experiments are presented to show their performances.

  9. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    Energy Technology Data Exchange (ETDEWEB)

    Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  10. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    Science.gov (United States)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB

  11. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    Science.gov (United States)

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB

  12. Constraints on nanomaterial structure from experiment and theory: reconciling partial representations

    International Nuclear Information System (INIS)

    Mlinar, Vladan

    2015-01-01

    To facilitate the design and optimization of nanomaterials for a given application it is necessary to understand the relationship between structure and physical properties. For large nanomaterials, there is imprecise structural information so the full structure is only resolved at the level of partial representations. Here we show how to reconcile partial structural representations using constraints from structural characterization measurements and theory to maximally exploit the limited amount of data available from experiment. We determine a range of parameter space where predictive theory can be used to design and optimize the structure. Using an example of variation of chemical composition profile across the interface of two nanomaterials, we demonstrate how, given experimental and theoretical constraints, to find a region of structure-parameter space within which computationally explored partial representations of the full structure will have observable real-world counterparts. (paper)

  13. Deference or Interrogation? Contrasting Models for Reconciling Religion, Gender and Equality

    Directory of Open Access Journals (Sweden)

    Moira Dustin

    2012-01-01

    Since the late 1990s, the extension of the equality framework in the United Kingdom has been accompanied by the recognition of religion within that framework and new measures to address religious discrimination. This development has been contested, with many arguing that religion is substantively different to other discrimination grounds and that increased protection against religious discrimination may undermine equality for other marginalized groups – in particular, women and lesbian, gay, bisexual and transgender (LGBT) people. This paper considers these concerns from the perspective of minoritized women in the UK. It analyses two theoretical approaches to reconciling religious claims with gender equality – one based on privileging, the other based on challenging religious claims – before considering which, if either, reflects experiences in the UK in recent years and what this means for gender equality.

  14. Class of unconditionally stable second-order implicit schemes for hyperbolic and parabolic equations

    International Nuclear Information System (INIS)

    Lui, H.C.

    The linearized Burgers equation is considered as a model, u_t + a u_x = b u_xx, where the subscripts t and x denote the derivatives of the function u with respect to time t and space x; a and b are constants (b greater than or equal to 0). Numerical schemes for solving the equation are described that are second-order accurate, unconditionally stable, and dissipative of higher order. (U.S.)
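
    For illustration, one standard member of this class is the Crank-Nicolson discretization of the model equation, which is second-order accurate and unconditionally stable. The Python/NumPy sketch below, with central differences and periodic boundaries, is an assumed minimal example and not the scheme of the original report.

        import numpy as np

        def crank_nicolson_step(u, a, b, dx, dt):
            """One Crank-Nicolson step for u_t + a u_x = b u_xx on a periodic grid."""
            n = u.size
            I = np.eye(n)
            up = np.roll(I, -1, axis=0)           # shift operator: (up @ u)[j] = u[j+1] (periodic)
            dn = np.roll(I, 1, axis=0)            # shift operator: (dn @ u)[j] = u[j-1] (periodic)
            D1 = (up - dn) / (2.0 * dx)           # central first-derivative matrix
            D2 = (up - 2.0 * I + dn) / dx ** 2    # central second-derivative matrix
            M = -a * D1 + b * D2                  # semi-discrete operator: du/dt = M u
            A = I - 0.5 * dt * M                  # implicit part
            B = I + 0.5 * dt * M                  # explicit part
            return np.linalg.solve(A, B @ u)

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)
        u = crank_nicolson_step(u, a=1.0, b=0.01, dx=x[1] - x[0], dt=0.01)

    In practice a banded or tridiagonal solver would replace the dense solve; the dense form is kept here only to make the structure of the scheme visible.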

  15. A third-order moving mesh cell-centered scheme for one-dimensional elastic-plastic flows

    Science.gov (United States)

    Cheng, Jun-Bo; Huang, Weizhang; Jiang, Song; Tian, Baolin

    2017-11-01

    A third-order moving mesh cell-centered scheme without the remapping of physical variables is developed for the numerical solution of one-dimensional elastic-plastic flows with the Mie-Grüneisen equation of state, the Wilkins constitutive model, and the von Mises yielding criterion. The scheme combines the Lagrangian method with the MMPDE moving mesh method and adaptively moves the mesh to better resolve shock and other types of waves while preventing the mesh from crossing and tangling. It can be viewed as a direct arbitrarily Lagrangian-Eulerian method but can also be degenerated to a purely Lagrangian scheme. It treats the relative velocity of the fluid with respect to the mesh as constant in time between time steps, which allows high-order approximation of free boundaries. A time dependent scaling is used in the monitor function to avoid possible sudden movement of the mesh points due to the creation or diminishing of shock and rarefaction waves or the steepening of those waves. A two-rarefaction Riemann solver with elastic waves is employed to compute the Godunov values of the density, pressure, velocity, and deviatoric stress at cell interfaces. Numerical results are presented for three examples. The third-order convergence of the scheme and its ability to concentrate mesh points around shock and elastic rarefaction waves are demonstrated. The obtained numerical results are in good agreement with those in literature. The new scheme is also shown to be more accurate in resolving shock and rarefaction waves than an existing third-order cell-centered Lagrangian scheme.

  16. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  17. An effective fitting scheme for the dynamic structure of pure liquids

    International Nuclear Information System (INIS)

    Wax, J-F; Bryk, Taras

    2013-01-01

    A scheme of analysis for the dynamic structure functions in pure liquids is presented which can be implemented with both experimental and simulation data. Expressions for contributions of relaxing and propagating modes proposed earlier in the framework of the generalized collective modes approach are optimized in order to strictly fulfil three among the required sum-rules. The method is applied to simulation data for liquid cesium, the description of which appears to only require one relaxing and one propagating mode in the investigated wavevector range. These expressions are able to account for the dynamics in both the hydrodynamic and the kinetic regimes, being quantitatively accurate up to the onset of the first peak of the static structure factor and qualitatively beyond. Features of the modes can thus be obtained easily, without resorting to heavy formalism. The scheme of analysis can be straightforwardly extended to account for a higher number of relaxing and propagating modes. (paper)

  18. Hospital financing: calculating inpatient capital costs in Germany with a comparative view on operating costs and the English costing scheme.

    Science.gov (United States)

    Vogl, Matthias

    2014-04-01

    The paper analyzes the German inpatient capital costing scheme by assessing its cost module calculation. The costing scheme represents the first separate national calculation of performance-oriented capital cost lump sums per DRG. The three steps in the costing scheme are reviewed and assessed: (1) accrual of capital costs; (2) cost-center and cost category accounting; (3) data processing for capital cost modules. The assessment of each step is based on its level of transparency and efficiency. A comparative view on operating costing and the English costing scheme is given. Advantages of the scheme are low participation hurdles, low calculation effort for G-DRG calculation participants, highly differentiated cost-center/cost category separation, and advanced patient-based resource allocation. The exclusion of relevant capital costs, nontransparent resource allocation, and unclear capital cost modules limit the managerial relevance and transparency of the capital costing scheme. The scheme generates the technical premises for a change from dual financing by insurers (operating costs) and the state (capital costs) to a single financing source. The new capital costing scheme will intensify the discussion on how to solve the current investment backlog in Germany and can assist regulators in other countries with the introduction of accurate capital costing. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Reconciling Top-Down and Bottom-Up Estimates of Oil and Gas Methane Emissions in the Barnett Shale

    Science.gov (United States)

    Hamburg, S.

    2015-12-01

    Top-down approaches that use aircraft, tower, or satellite-based measurements of well-mixed air to quantify regional methane emissions have typically estimated higher emissions from the natural gas supply chain when compared to bottom-up inventories. A coordinated research campaign in October 2013 used simultaneous top-down and bottom-up approaches to quantify total and fossil methane emissions in the Barnett Shale region of Texas. Research teams have published individual results including aircraft mass-balance estimates of regional emissions and a bottom-up, 25-county region spatially-resolved inventory. This work synthesizes data from the campaign to directly compare top-down and bottom-up estimates. A new analytical approach uses statistical estimators to integrate facility emission rate distributions from unbiased and targeted high emission site datasets, which more rigorously incorporates the fat-tail of skewed distributions to estimate regional emissions of well pads, compressor stations, and processing plants. The updated spatially-resolved inventory was used to estimate total and fossil methane emissions from spatial domains that match seven individual aircraft mass balance flights. Source apportionment of top-down emissions between fossil and biogenic methane was corroborated with two independent analyses of methane and ethane ratios. Reconciling top-down and bottom-up estimates of fossil methane emissions leads to more accurate assessment of natural gas supply chain emission rates and the relative contribution of high emission sites. These results increase our confidence in our understanding of the climate impacts of natural gas relative to more carbon-intensive fossil fuels and the potential effectiveness of mitigation strategies.

  20. Image communication scheme based on dynamic visual cryptography and computer generated holography

    Science.gov (United States)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

    Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by the naked eye only if the amplitude of the harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed by using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
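
    The Gerchberg-Saxton phase retrieval step can be illustrated with the short Python/NumPy sketch below. The uniform source amplitude and the random target amplitude are hypothetical placeholders; the moiré grating construction and the visual-cryptography steps of the proposed scheme are not reproduced here.

        import numpy as np

        def gerchberg_saxton(source_amp, target_amp, n_iter=200):
            """Retrieve a phase mask so that |FFT(source_amp * exp(i*phase))| approximates target_amp."""
            phase = 2.0 * np.pi * np.random.rand(*source_amp.shape)   # random initial phase
            for _ in range(n_iter):
                field = source_amp * np.exp(1j * phase)
                far = np.fft.fft2(field)
                far = target_amp * np.exp(1j * np.angle(far))         # impose target amplitude, keep phase
                near = np.fft.ifft2(far)
                phase = np.angle(near)                                # keep only the phase in the source plane
            return phase

        source = np.ones((64, 64))         # uniform illumination (assumed)
        target = np.random.rand(64, 64)    # stand-in for the encrypted image amplitude
        hologram_phase = gerchberg_saxton(source, target)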

  1. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    Science.gov (United States)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by developing an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth and obtain quantitative information. Here, we present an imaging scheme to retrieve the depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied for endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing the luminescent light propagation from tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combined with the 3rd order simplified spherical harmonics approximation, diffusion, and radiosity equations to warrant accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and the quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, which demonstrated that it can provide a satisfactory balance between imaging accuracy and computational burden.

  2. LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica

    Science.gov (United States)

    Caprio, M. A.

    2005-09-01

    LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots.
    Program summary
    Title of program: LevelScheme
    Catalogue identifier: ADVZ
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ
    Operating systems: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux
    Programming language used: Mathematica 4
    Number of bytes in distributed program, including test and documentation: 3 051 807
    Distribution format: tar.gz
    Nature of problem: Creation of level scheme diagrams. Creation of publication-quality multipart figures incorporating diagrams and plots.
    Method of solution: A set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.

  3. Development of Mycoplasma synoviae (MS) core genome multilocus sequence typing (cgMLST) scheme.

    Science.gov (United States)

    Ghanem, Mostafa; El-Gazzar, Mohamed

    2018-05-01

    Mycoplasma synoviae (MS) is a poultry pathogen with reported increased prevalence and virulence in recent years. MS strain identification is essential for prevention and control efforts and for epidemiological outbreak investigations. Multiple multilocus sequence typing (MLST) schemes have been developed for MS, yet the resolution of these schemes can be limited for outbreak investigation. The cost of whole genome sequencing has become close to that of sequencing the seven MLST targets; however, there is no standardized method for typing MS strains based on whole genome sequences. In this paper, we propose a core genome multilocus sequence typing (cgMLST) scheme as a standardized and reproducible method for typing MS based on whole genome sequences. A diverse set of 25 MS whole genome sequences was used to identify 302 core genome genes as cgMLST targets (35.5% of the MS genome), and 44 whole genome sequences of MS isolates from six countries on four continents were used for typing with this scheme. cgMLST-based phylogenetic trees displayed a high degree of agreement with core genome SNP-based analysis and available epidemiological information. cgMLST allowed evaluation of two conventional MLST schemes of MS. The high discriminatory power of cgMLST allowed differentiation between samples of the same conventional MLST type. cgMLST represents a standardized, accurate, highly discriminatory, and reproducible method for differentiation between MS isolates. Like conventional MLST, it provides stable and expandable nomenclature, allowing the typing results to be compared and shared between different laboratories worldwide. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
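
    As a minimal illustration of how cgMLST profiles are compared, the sketch below computes the allelic distance between two isolates, i.e. the number of core-genome loci at which their allele identifiers differ. The profiles shown are hypothetical; the actual scheme uses 302 targets and a dedicated allele-calling pipeline that is not reproduced here.

        def allelic_distance(profile_a, profile_b):
            """Count loci at which two cgMLST allele profiles differ; missing calls (None) are skipped."""
            return sum(
                1
                for a, b in zip(profile_a, profile_b)
                if a is not None and b is not None and a != b
            )

        # Hypothetical allele-number profiles over six loci (a real profile spans 302 loci).
        isolate_1 = [1, 4, 2, 7, None, 3]
        isolate_2 = [1, 5, 2, 7, 9, 3]
        print(allelic_distance(isolate_1, isolate_2))  # -> 1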

  4. CPSFS: A Credible Personalized Spam Filtering Scheme by Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2017-01-01

    Email spam consumes a lot of network resources and threatens many systems because of its unwanted or malicious content. Most existing spam filters only target complete-spam but ignore semispam. This paper proposes a novel and comprehensive CPSFS scheme: Credible Personalized Spam Filtering Scheme, which classifies spam into two categories, complete-spam and semispam, and targets filtering both kinds of spam. Complete-spam is always spam for all users; semispam is an email identified as spam by some users and as regular email by other users. In CPSFS, Bayesian filtering is deployed at email servers to identify complete-spam, while semispam is identified at the client side by crowdsourcing. An email user client can distinguish junk from legitimate emails according to spam reports from credible contacts with similar interests. Social trust and interest similarity between users and their contacts are calculated so that spam reports are more accurately targeted to similar users. The experimental results show that the proposed CPSFS can improve the accuracy of distinguishing spam from legitimate emails compared with that of a Bayesian filter alone.
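
    The server-side Bayesian filtering component can be pictured with a minimal naive Bayes word-score sketch in Python; the token model and Laplace smoothing here are generic assumptions and do not reproduce the CPSFS trust and interest-similarity computations.

        import math
        from collections import Counter

        def train(spam_docs, ham_docs):
            """Per-token log-likelihood ratios log P(token|spam) - log P(token|ham), Laplace smoothed."""
            spam_counts = Counter(tok for doc in spam_docs for tok in doc.split())
            ham_counts = Counter(tok for doc in ham_docs for tok in doc.split())
            vocab = set(spam_counts) | set(ham_counts)
            n_spam = sum(spam_counts.values()) + len(vocab)
            n_ham = sum(ham_counts.values()) + len(vocab)
            return {tok: math.log((spam_counts[tok] + 1) / n_spam)
                         - math.log((ham_counts[tok] + 1) / n_ham)
                    for tok in vocab}

        def spam_score(message, llr):
            """Sum the log-likelihood ratios of known tokens; positive scores lean towards spam."""
            return sum(llr.get(tok, 0.0) for tok in message.split())

        llr = train(["win money now", "cheap pills now"],
                    ["meeting at noon", "project update attached"])
        print(spam_score("win cheap pills", llr) > 0)  # -> True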

  5. An improved current control scheme for grid-connected DG unit based distribution system harmonic compensation

    DEFF Research Database (Denmark)

    He, Jinwei; Wei Li, Yun; Wang, Xiongfei

    2013-01-01

    In order to utilize DG unit interfacing converters to actively compensate distribution system harmonics, this paper proposes an enhanced current control approach. It seamlessly integrates system harmonic mitigation capabilities with the primary DG power generation function. As the proposed current controller has two well decoupled control branches to independently control fundamental and harmonic DG currents, phase-locked loops (PLL) and system harmonic component extractions can be avoided during system harmonic compensation. Moreover, a closed-loop power control scheme is also employed to derive the fundamental current reference. The proposed power control scheme effectively eliminates the impacts of steady-state fundamental current tracking errors in the DG units. Thus, an accurate power control is realized even when the harmonic compensation functions are activated. Experimental results from a single

  6. Computational scheme for pH-dependent binding free energy calculation with explicit solvent.

    Science.gov (United States)

    Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R

    2016-01-01

    We present a computational scheme to compute the pH-dependence of binding free energy with explicit solvent. Despite the importance of pH, the effect of pH has been generally neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state or releasing a single protonation state to multiple states, the pH dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while cutoff method results are off by 2 kcal mol(-1) . We also discuss the characteristics of three long-range interaction calculation methods for constant-pH simulations. © 2015 The Protein Society.
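
    The Bennett acceptance ratio (BAR) step of the analysis can be illustrated with a minimal sketch that solves the BAR self-consistency condition for the free energy difference between two end states, assuming equal numbers of samples from each state. The synthetic work values are placeholders; the constant-pH, EDS and HREM machinery of the scheme is not reproduced here.

        import numpy as np
        from scipy.optimize import brentq

        def bar_free_energy(w_forward, w_reverse, beta=1.0):
            """Solve the equal-sample-size BAR equation for dF = F1 - F0.

            w_forward: work values U1 - U0 on configurations sampled in state 0.
            w_reverse: work values U0 - U1 on configurations sampled in state 1.
            """
            def fermi(x):
                return 1.0 / (1.0 + np.exp(x))

            def imbalance(dF):
                return (np.sum(fermi(beta * (w_forward - dF)))
                        - np.sum(fermi(beta * (w_reverse + dF))))

            return brentq(imbalance, -100.0, 100.0)   # imbalance is monotonic in dF

        rng = np.random.default_rng(0)
        w_f = rng.normal(2.0, 1.0, size=1000)    # synthetic forward work values
        w_r = rng.normal(-1.0, 1.0, size=1000)   # synthetic reverse work values
        print(bar_free_energy(w_f, w_r))         # estimated dF for the synthetic data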

  7. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well defined simple error correction scheme with erroneous copies at the receiver. It offers higher throughput combined with ARQ protocols in networks than that of basic ARQ protocols. But packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that will correct error if the errors occur at the same bit location of the erroneous copies. The proposed scheme when combined with ARQ protocol will offer higher throughput. (author)

  8. Per-Pixel, Dual-Counter Scheme for Optical Communications

    Science.gov (United States)

    Farr, William H.; Bimbaum, Kevin M.; Quirk, Kevin J.; Sburlan, Suzana; Sahasrabudhe, Adit

    2013-01-01

    Free space optical communications links from deep space are projected to fulfill future NASA communication requirements for 2020 and beyond. Accurate laser-beam pointing is required to achieve high data rates at low power levels. This innovation is a per-pixel processing scheme using a pair of three-state digital counters to implement acquisition and tracking of a dim laser beacon transmitted from Earth for pointing control of an interplanetary optical communications system using a focal plane array of single-photon-sensitive detectors. It shows how to implement dim beacon acquisition and tracking for an interplanetary optical transceiver with a method that is suitable both for achieving theoretical performance and for supporting the additional functions of high-data-rate forward links and precision spacecraft ranging.

  9. Divergence-free MHD on unstructured meshes using high order finite volume schemes based on multidimensional Riemann solvers

    Science.gov (United States)

    Balsara, Dinshaw S.; Dumbser, Michael

    2015-10-01

    Several advances have been reported in the recent literature on divergence-free finite volume schemes for Magnetohydrodynamics (MHD). Almost all of these advances are restricted to structured meshes. To retain full geometric versatility, however, it is also very important to make analogous advances in divergence-free schemes for MHD on unstructured meshes. Such schemes utilize a staggered Yee-type mesh, where all hydrodynamic quantities (mass, momentum and energy density) are cell-centered, while the magnetic fields are face-centered and the electric fields, which are so useful for the time update of the magnetic field, are centered at the edges. Three important advances are brought together in this paper in order to make it possible to have high order accurate finite volume schemes for the MHD equations on unstructured meshes. First, it is shown that a divergence-free WENO reconstruction of the magnetic field can be developed for unstructured meshes in two and three space dimensions using a classical cell-centered WENO algorithm, without the need to do a WENO reconstruction for the magnetic field on the faces. This is achieved via a novel constrained L2-projection operator that is used in each time step as a postprocessor of the cell-centered WENO reconstruction so that the magnetic field becomes locally and globally divergence free. Second, it is shown that recently-developed genuinely multidimensional Riemann solvers (called MuSIC Riemann solvers) can be used on unstructured meshes to obtain a multidimensionally upwinded representation of the electric field at each edge. Third, the above two innovations work well together with a high order accurate one-step ADER time stepping strategy, which requires the divergence-free nonlinear WENO reconstruction procedure to be carried out only once per time step. The resulting divergence-free ADER-WENO schemes with MuSIC Riemann solvers give us an efficient and easily-implemented strategy for divergence-free MHD on

  10. Two-level MOC calculation scheme in APOLLO2 for cross-section library generation for LWR hexagonal assemblies

    International Nuclear Information System (INIS)

    Petrov, Nikolay; Todorova, Galina; Kolev, Nikola; Damian, Frederic

    2011-01-01

    The accurate and efficient MOC calculation scheme in APOLLO2, developed by CEA for generating multi-parameterized cross-section libraries for PWR assemblies, has been adapted to hexagonal assemblies. The neutronic part of this scheme is based on a two-level calculation methodology. At the first level, a multi-cell method is used in 281 energy groups for cross-section definition and self-shielding. At the second level, precise MOC calculations are performed in a collapsed energy mesh (30-40 groups). In this paper, the application and validation of the two-level scheme for hexagonal assemblies is described. Solutions for a VVER assembly are compared with TRIPOLI4® calculations and direct 281g MOC solutions. The results show that the accuracy is close to that of the 281g MOC calculation while the CPU time is substantially reduced. Compared to the multi-cell method, the accuracy is markedly improved. (author)

  11. A full quantum network scheme

    International Nuclear Information System (INIS)

    Ma Hai-Qiang; Wei Ke-Jin; Yang Jian-Hui; Li Rui-Xue; Zhu Wu

    2014-01-01

    We present a full quantum network scheme using a modified BB84 protocol. Unlike other quantum network schemes, it allows quantum keys to be distributed between two arbitrary users with the help of an intermediary detecting user. Moreover, it has good expansibility and prevents all potential attacks using loopholes in a detector, so it is more practical to apply. Because the fiber birefringence effects are automatically compensated, the scheme is distinctly stable in principle and in experiment. The simple components for every user make our scheme easier for many applications. The experimental results demonstrate the stability and feasibility of this scheme. (general)
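
    For background, the basis-sifting step of standard BB84, on which the modified protocol builds, can be simulated in a few lines of Python. This generic sketch does not model the intermediary detecting user, the detector-loophole countermeasures, or the birefringence compensation of the described network.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000

        alice_bits = rng.integers(0, 2, n)    # raw key bits
        alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal
        bob_bases = rng.integers(0, 2, n)

        # Ideal, eavesdropper-free channel: Bob's result equals Alice's bit when the
        # bases agree and is random otherwise.
        random_bits = rng.integers(0, 2, n)
        bob_bits = np.where(alice_bases == bob_bases, alice_bits, random_bits)

        # Sifting: keep only positions where the bases agree (about half of the raw bits).
        keep = alice_bases == bob_bases
        sifted_key = alice_bits[keep]
        assert np.array_equal(sifted_key, bob_bits[keep])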

  12. Large eddy simulation of spray and combustion characteristics with realistic chemistry and high-order numerical scheme under diesel engine-like conditions

    International Nuclear Information System (INIS)

    Zhou, Lei; Luo, Kai Hong; Qin, Wenjin; Jia, Ming; Shuai, Shi Jin

    2015-01-01

    Highlights: • MUSCL differencing scheme in LES method is used to investigate liquid fuel spray and combustion process. • Using MUSCL can accurately capture the gas phase velocity distribution and liquid spray features. • Detailed chemistry mechanism with a parallel algorithm was used to calculate combustion process. • Increasing oxygen concentration can decrease ignition delay time and flame LOL. - Abstract: The accuracy of large eddy simulation (LES) for turbulent combustion depends on suitably implemented numerical schemes and chemical mechanisms. In the original KIVA3V code, finite difference schemes such as QSOU (Quasi-second-order upwind) and PDC (Partial Donor Cell Differencing) cannot achieve good results or even computational stability when using coarse grids due to large numerical diffusion. In this paper, the MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) differencing scheme is implemented into the KIVA3V-LES code to calculate the convective term. In the meantime, Lu's reduced 58-species n-heptane mechanism (Lu, 2011) is used to calculate the chemistry with a parallel algorithm. Finally, improved models for spray injection are also employed. With these improvements, the KIVA3V-LES code is renamed KIVALES-CP (Chemistry with Parallel algorithm) in this study. The resulting code was used to study the gas-liquid two-phase jet and combustion under various diesel engine-like conditions in a constant volume vessel. The results show that using the MUSCL scheme can accurately capture the spray shape and fuel vapor penetration even using a coarse grid, in comparison with the Sandia experimental data. Similarly good results are obtained for three single-component fuels, i-Octane (C8H18), n-Dodecane (C12H26), and n-Hexadecane (C16H34), which have very different physical properties. Meanwhile, the improved methodology is able to accurately predict ignition delay and flame lift-off length (LOL) under different oxygen concentrations from 10% to 21
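
    The MUSCL reconstruction referred to above can be illustrated with a minimal one-dimensional sketch that computes slope-limited left/right states at cell interfaces. The minmod limiter and the periodic grid are illustrative assumptions; this is not the KIVALES-CP implementation.

        import numpy as np

        def minmod(a, b):
            """Minmod limiter: zero at extrema, otherwise the smaller-magnitude slope."""
            return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

        def muscl_interface_states(u):
            """Second-order MUSCL reconstruction of states at interfaces i+1/2 on a periodic grid."""
            slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
            u_left = u + 0.5 * slope                                # left state at interface i+1/2
            u_right = np.roll(u - 0.5 * slope, -1)                  # right state at interface i+1/2
            return u_left, u_right

        u = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.0])
        u_left, u_right = muscl_interface_states(u)   # feed these states to a Riemann solver / numerical flux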

  13. A new scheme for ATLAS trigger simulation using legacy code

    International Nuclear Information System (INIS)

    Galster, Gorm; Stelzer, Joerg; Wiedenmann, Werner

    2014-01-01

    Analyses at the LHC which search for rare physics processes or determine with high precision Standard Model parameters require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of simulated event data, the most recent software releases are usually used to ensure the best agreement between simulated data and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years start to present a sizable fraction of the total. We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services. Software availability may become an issue, when e.g. the support for the underlying operating system might stop. In this paper we present the encountered problems and developed solutions, and discuss proposals for future development. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS as they also touch more generally aspects of data preservation.

  14. Transmission usage cost allocation schemes

    International Nuclear Information System (INIS)

    Abou El Ela, A.A.; El-Sehiemy, R.A.

    2009-01-01

    This paper presents different suggested transmission usage cost allocation (TCA) schemes for the system individuals. Different independent system operator (ISO) visions are presented using the pro rata and flow-based TCA methods. There are two proposed flow-based TCA schemes (FTCA). The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concepts for lossy networks through a two-stage procedure. The second FTCA scheme is based on the modified sensitivity factors (MSF). These factors are developed from the actual measurements of power flows in transmission lines and the power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)

  15. Matroids and quantum-secret-sharing schemes

    International Nuclear Information System (INIS)

    Sarvepalli, Pradeep; Raussendorf, Robert

    2010-01-01

    A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.

  16. Reconciling societal and scientific definitions for the monsoon

    Science.gov (United States)

    Reeve, Mathew; Stephenson, David

    2014-05-01

    Science defines the monsoon in numerous ways. We can apply these definitions to forecast data, reanalysis data, observations, GCMs and more. In a basic research setting, we hope that this work will advance science and our understanding of the monsoon system. In an applied research setting, we often hope that this work will benefit a specific stakeholder or community. We may want to inform a stakeholder when the monsoon starts, now and in the future. However, what happens if the stakeholders cannot relate to the information because their perceptions do not align with the monsoon definition we use in our analysis? We can resolve this either by teaching the stakeholders or learning from them about how they define the monsoon and when they perceive it to begin. In this work we reconcile different scientific monsoon definitions with the perceptions of agricultural communities in Bangladesh. We have developed a statistical technique that rates different scientific definitions against the people's perceptions of when the monsoon starts and ends. We construct a probability mass function (pmf) around each of the respondent's answers in a questionnaire survey. We can use this pmf to analyze the time series of monsoon onsets and withdrawals from the different scientific definitions. We can thereby quantitatively judge which definition may be most appropriate for a specific applied research setting.
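
    A minimal sketch of the rating idea described above is given below, assuming a triangular probability mass function around each respondent's reported onset day and a simple mean-probability score; the window width, weighting, and scoring rule are illustrative assumptions rather than the authors' published procedure.

```python
# Toy sketch: rate a scientific monsoon-onset definition against respondents'
# perceived onset days via per-respondent pmfs. Window width and scoring rule
# are illustrative assumptions.
import numpy as np

def respondent_pmf(reported_day, half_width=7, n_days=365):
    """Triangular pmf centred on a respondent's reported onset day-of-year."""
    days = np.arange(1, n_days + 1)
    weights = np.clip(half_width + 1 - np.abs(days - reported_day), 0, None)
    return weights / weights.sum()

def rate_definition(onset_days_by_year, reported_days, n_days=365):
    """Average probability mass the respondent pmfs assign to a definition's
    onset dates (higher = better agreement with perceptions)."""
    pmfs = [respondent_pmf(d, n_days=n_days) for d in reported_days]
    scores = [np.mean([p[int(onset) - 1] for p in pmfs])
              for onset in onset_days_by_year]
    return float(np.mean(scores))

# Respondents report onset around days 152-160; definition A says day 155,
# definition B says day 170.
reported = [152, 155, 158, 160]
print(rate_definition([155], reported))   # larger score (better match)
print(rate_definition([170], reported))   # smaller score
```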

  17. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth

    2014-12-01

    © 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.
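
    The sketch below shows a second-order summation-by-parts (SBP) first-derivative operator of the kind the method builds on, together with a check of the SBP property; the paper itself uses higher-order operators and weak (penalty) interface conditions, which are not reproduced here.

```python
# Minimal second-order SBP first-derivative operator on a uniform 1D grid,
# with a check that D = H^{-1} Q and Q + Q^T = diag(-1, 0, ..., 0, 1).
import numpy as np

def sbp_d1(n, h):
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5
    Q[-1, -1] = 0.5
    return np.linalg.solve(H, Q), H     # D, norm matrix H

n, h = 11, 0.1
D, H = sbp_d1(n, h)
x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ x - 1.0)))      # differentiates linear data exactly
B = H @ D + (H @ D).T                   # SBP property mimics integration by parts
print(np.allclose(B, np.diag([-1.0] + [0.0] * (n - 2) + [1.0])))
```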

  18. Development of a reference scheme for MOX lattice physics calculations

    International Nuclear Information System (INIS)

    Finck, P.J.; Stenberg, C.G.; Roy, R.

    1998-01-01

    The US program to dispose of weapons-grade Pu could involve the irradiation of mixed-oxide (MOX) fuel assemblies in commercial light water reactors. This will require licensing acceptance because of the modifications to the core safety characteristics. In particular, core neutronics will be significantly modified, thus making it necessary to validate the standard suites of neutronics codes for that particular application. Validation criteria are still unclear, but it seems reasonable to expect that the same level of accuracy will be expected for MOX as that which has been achieved for UO2. Commercial lattice physics codes are invariably claimed to be accurate for MOX analysis but often lack independent confirmation of their performance on a representative experimental database. Argonne National Laboratory (ANL) has started implementing a public domain suite of codes to provide for a capability to perform independent assessments of MOX core analyses. The DRAGON lattice code was chosen, and fine group ENDF/B-VI.04 and JEF-2.2 libraries have been developed. The objective of this work is to validate the DRAGON algorithms with respect to continuous-energy Monte Carlo for a suite of realistic UO2-MOX benchmark cases, with the aim of establishing a reference DRAGON scheme with a demonstrated high level of accuracy and no computing resource constraints. Using this scheme as a reference, future work will be devoted to obtaining simpler and less costly schemes that preserve accuracy as much as possible.

  19. Electricity Consumption Forecasting Scheme via Improved LSSVM with Maximum Correntropy Criterion

    Directory of Open Access Journals (Sweden)

    Jiandong Duan

    2018-02-01

    Full Text Available In recent years, with the deepening of China’s electricity sales side reform and the gradual opening up of the electricity market, the forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and evaluate the results scientifically remain key research topics. In this paper, we propose a novel prediction scheme based on the least-square support vector machine (LSSVM) model with a maximum correntropy criterion (MCC) to forecast the electricity consumption (EC). Firstly, the electricity characteristics of various industries are analyzed to determine the factors that mainly affect the changes in electricity, such as the gross domestic product (GDP), temperature, and so on. Secondly, given the small-sample statistics of the available data, the LSSVM model is employed as the prediction model. In order to optimize the parameters of the LSSVM model, we further use the local similarity function MCC as the evaluation criterion. Thirdly, we employ the K-fold cross-validation and grid searching methods to improve the learning ability. In the experiments, we have used the EC data of Shaanxi Province in China to evaluate the proposed prediction scheme, and the results show that the proposed prediction scheme outperforms the method based on the traditional LSSVM model.
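
    The sketch below assembles the main ingredients named above: a kernel LSSVM regressor, a correntropy-style score used as the evaluation criterion, and a K-fold grid search. The synthetic data, parameter grids, and kernel choices are illustrative assumptions, not the authors' settings.

```python
# Toy LSSVM regression scored with a correntropy criterion inside a K-fold
# grid search. Illustrative sketch only.
import numpy as np
from itertools import product

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

def correntropy(err, width=1.0):
    return np.mean(np.exp(-err ** 2 / (2 * width ** 2)))   # maximize

rng = np.random.default_rng(0)                   # toy "consumption vs drivers" data
X = rng.normal(size=(60, 2))
y = 1.5 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=60)
folds = np.array_split(rng.permutation(60), 5)
best = None
for gamma, sigma in product([1, 10, 100], [0.5, 1.0, 2.0]):
    scores = []
    for k in range(5):
        test = folds[k]
        train = np.setdiff1d(np.arange(60), test)
        b, a = lssvm_fit(X[train], y[train], gamma, sigma)
        scores.append(correntropy(y[test] - lssvm_predict(X[train], b, a, X[test], sigma)))
    best = max(best or (-np.inf, None), (np.mean(scores), (gamma, sigma)))
print("best (gamma, sigma):", best[1])
```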

  20. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two prelogo detection steps, that is, vehicle region detection and a small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features of Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos’ position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier proceeds with the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  1. Deep Mixing of 3He: Reconciling Big Bang and Stellar Nucleosynthesis

    International Nuclear Information System (INIS)

    Eggleton, P P; Dearborn, D P; Lattanzio, J

    2006-01-01

    Low-mass stars, ∼ 1-2 solar masses, near the Main Sequence are efficient at producing 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. In this paper we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.

  2. Deep mixing of 3He: reconciling Big Bang and stellar nucleosynthesis.

    Science.gov (United States)

    Eggleton, Peter P; Dearborn, David S P; Lattanzio, John C

    2006-12-08

    Low-mass stars, approximately 1 to 2 solar masses, near the Main Sequence are efficient at producing the helium isotope 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. Here we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus, we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.

  3. The Effect(s) of Teen Pregnancy: Reconciling Theory, Methods, and Findings.

    Science.gov (United States)

    Diaz, Christina J; Fiel, Jeremy E

    2016-02-01

    Although teenage mothers have lower educational attainment and earnings than women who delay fertility, causal interpretations of this relationship remain controversial. Scholars argue that there are reasons to predict negative, trivial, or even positive effects, and different methodological approaches provide some support for each perspective. We reconcile this ongoing debate by drawing on two heuristics: (1) each methodological strategy emphasizes different women in estimation procedures, and (2) the effects of teenage fertility likely vary in the population. Analyses of the Child and Young Adult Cohorts of the National Longitudinal Survey of Youth (N = 3,661) confirm that teen pregnancy has negative effects on most women's attainment and earnings. More striking, however, is that effects on college completion and early earnings vary considerably and are most pronounced among those least likely to experience an early pregnancy. Further analyses suggest that teen pregnancy is particularly harmful for those with the brightest socioeconomic prospects and who are least prepared for the transition to motherhood.

  4. Prospects for reconciling the conflict between economic growth and biodiversity conservation with technological progress.

    Science.gov (United States)

    Czech, Brian

    2008-12-01

    The conflict between economic growth and biodiversity conservation is understood in portions of academia and sometimes acknowledged in political circles. Nevertheless, there is not a unified response. In political and policy circles, the environmental Kuznets curve (EKC) is posited to solve the conflict between economic growth and environmental protection. In academia, however, the EKC has been deemed fallacious in macroeconomic scenarios and largely irrelevant to biodiversity. A more compelling response to the conflict is that it may be resolved with technological progress. Herein I review the conflict between economic growth and biodiversity conservation in the absence of technological progress, explore the prospects for technological progress to reconcile that conflict, and provide linguistic suggestions for describing the relationships among economic growth, technological progress, and biodiversity conservation. The conflict between economic growth and biodiversity conservation is based on the first two laws of thermodynamics and principles of ecology such as trophic levels and competitive exclusion. In this biophysical context, the human economy grows at the competitive exclusion of nonhuman species in the aggregate. Reconciling the conflict via technological progress has not occurred and is infeasible because of the tight linkage between technological progress and economic growth at current levels of technology. Surplus production in existing economic sectors is required for conducting the research and development necessary for bringing new technologies to market. Technological regimes also reflect macroeconomic goals, and if the goal is economic growth, reconciliatory technologies are less likely to be developed. As the economy grows, the loss of biodiversity may be partly mitigated with end-use innovation that increases technical efficiency, but this type of technological progress requires policies that are unlikely if the conflict between economic growth

  5. Scheme Program Documentation Tools

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2004-01-01

    are separate and intended for different documentation purposes they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which---in a systematic way---makes an XML language available...... as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource....

  6. A Memory Efficient Network Encryption Scheme

    Science.gov (United States)

    El-Fotouh, Mohamed Abo; Diepold, Klaus

    In this paper, we studied the two widely used encryption schemes in network applications. Shortcomings have been found in both schemes, as these schemes either consume more memory to gain high throughput or use low memory at the cost of low throughput. The need has arisen for a scheme that has low memory requirements and at the same time possesses high speed, as the number of internet users increases each day. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.

  7. A new scheme for urban impervious surface classification from SAR images

    Science.gov (United States)

    Zhang, Hongsheng; Lin, Hui; Wang, Yunpeng

    2018-05-01

    Urban impervious surfaces have been recognized as a significant indicator for various environmental and socio-economic studies. There is an increasingly urgent demand for timely and accurate monitoring of the impervious surfaces with satellite technology from local to global scales. In the past decades, optical remote sensing has been widely employed for this task with various techniques. However, there is still a range of challenges, e.g. handling cloud contamination on optical data. Therefore, the Synthetic Aperture Radar (SAR) was introduced for the challenging task because it is uniquely all-time- and all-weather-capable. Nevertheless, with an increasing amount of SAR data applied, the methodology used for impervious surfaces classification remains unchanged from the methods used for optical datasets. This shortcoming has prevented the community from fully exploring the potential of using SAR data for impervious surfaces classification. We proposed a new scheme that is comparable to the well-known and fundamental Vegetation-Impervious surface-Soil (V-I-S) model for mapping urban impervious surfaces. Three scenes of fully polarimetric Radarsat-2 data for the cities of Shenzhen, Hong Kong and Macau were employed to test and validate the proposed methodology. Experimental results indicated that the overall accuracy and Kappa coefficient were 96.00% and 0.8808 in Shenzhen, 93.87% and 0.8307 in Hong Kong and 97.48% and 0.9354 in Macau, indicating the applicability and great potential of the new scheme for impervious surfaces classification using polarimetric SAR data. Comparison with the traditional scheme indicated that this new scheme was able to improve the overall accuracy by up to 4.6% and Kappa coefficient by up to 0.18.

  8. A robust and efficient finite volume scheme for the discretization of diffusive flux on extremely skewed meshes in complex geometries

    Science.gov (United States)

    Traoré, Philippe; Ahipo, Yves Marcel; Louste, Christophe

    2009-08-01

    In this paper an improved finite volume scheme to discretize diffusive flux on a non-orthogonal mesh is proposed. This approach, based on an iterative technique initially suggested by Khosla [P.K. Khosla, S.G. Rubin, A diagonally dominant second-order accurate implicit scheme, Computers and Fluids 2 (1974) 207-209] and known as deferred correction, has been intensively utilized by Muzaferija [S. Muzaferija, Adaptive finite volume method for flow prediction using unstructured meshes and multigrid approach, Ph.D. Thesis, Imperial College, 1994] and later Ferziger and Peric [J.H. Ferziger, M. Peric, Computational Methods for Fluid Dynamics, Springer, 2002] to deal with the non-orthogonality of the control volumes. Using a more suitable decomposition of the normal gradient, our scheme gives accurate solutions in geometries where the basic idea of Muzaferija fails. First, the performances of both schemes are compared for a Poisson problem solved in quadrangular domains where control volumes are increasingly skewed in order to test their robustness and efficiency. It is shown that convergence properties and the accuracy order of the solution are not degraded even on extremely skewed meshes. Next, the very stable behavior of the method is successfully demonstrated on a randomly distorted grid as well as on an anisotropically distorted one. Finally, we compare the solution obtained for quadrilateral control volumes to the ones obtained with a finite element code and with an unstructured version of our finite volume code for triangular control volumes. No differences can be observed between the different solutions, which demonstrates the effectiveness of our approach.
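
    The toy sketch below illustrates the deferred-correction idea for a diffusive face flux on a skewed cell pair: an orthogonal part along the line joining cell centres is kept implicit-like, and a non-orthogonal correction built from the face gradient is added explicitly. An over-relaxed decomposition stands in here for the improved decomposition of the paper, and the geometry and field values are made-up numbers.

```python
# Face-flux decomposition on a skewed cell pair: orthogonal part plus explicit
# non-orthogonal correction, checked against a manufactured linear field.
import numpy as np

def face_flux(phi_P, phi_N, x_P, x_N, S, grad_face, gamma=1.0):
    """Orthogonal contribution along the cell-centre line plus an explicit
    cross-diffusion correction evaluated with the face gradient."""
    d = x_N - x_P
    E = (S @ S) / (d @ S) * d            # over-relaxed orthogonal component
    T = S - E                            # non-orthogonal remainder
    return gamma * (np.linalg.norm(E) / np.linalg.norm(d) * (phi_N - phi_P)
                    + grad_face @ T)

# Linear field phi = 2x + 3y, so the exact gradient is (2, 3) and the exact
# flux through a unit face with normal (1, 0) is 2.
phi = lambda p: 2.0 * p[0] + 3.0 * p[1]
x_P, x_N = np.array([0.0, 0.0]), np.array([1.0, 0.4])   # skewed cell-centre line
S = np.array([1.0, 0.0])                                 # face area vector

print(face_flux(phi(x_P), phi(x_N), x_P, x_N, S, np.zeros(2)))          # 3.2, no correction
print(face_flux(phi(x_P), phi(x_N), x_P, x_N, S, np.array([2.0, 3.0]))) # 2.0, corrected
```

    In a deferred-correction loop the face gradient comes from the previous iteration, so successive sweeps drive the computed flux from the uncorrected value toward the corrected one.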

  9. Gauss-Kronrod-Trapezoidal Integration Scheme for Modeling Biological Tissues with Continuous Fiber Distributions

    Science.gov (United States)

    Hou, Chieh; Ateshian, Gerard A.

    2015-01-01

    Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation. PMID:26291492
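
    The sketch below conveys the flavour of integrating the fiber response only over the tensed region of the unit sphere, with Gauss-type nodes across latitudes and a trapezoidal rule across longitudes. numpy exposes Gauss-Legendre rather than Gauss-Kronrod nodes, so the former stand in here, and simply skipping non-tensed nodes is a simplification of the paper's restriction of the integration domain.

```python
# Tension-only fiber integration over the unit sphere: Gauss-Legendre across
# latitudes (stand-in for Gauss-Kronrod), trapezoidal across longitudes.
import numpy as np

eps = np.diag([0.10, -0.05, 0.02])                 # toy small-strain tensor

def fiber_strain(theta, phi):
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return n @ eps @ n                              # normal strain along the fiber

def integrate_tensed(n_lat=16, n_lon=64):
    xg, wg = np.polynomial.legendre.leggauss(n_lat) # nodes in cos(theta)
    thetas = np.arccos(xg)
    phis = np.linspace(0.0, 2 * np.pi, n_lon, endpoint=False)
    dphi = 2 * np.pi / n_lon
    total = 0.0
    for th, w in zip(thetas, wg):
        for ph in phis:
            e_n = fiber_strain(th, ph)
            if e_n > 0.0:                           # fibers contribute only in tension
                total += w * dphi * e_n             # toy linear fiber law
    return total

print(integrate_tensed())
```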

  10. Modified Aggressive Packet Combining Scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2010-06-01

    In this letter, a few schemes are presented to improve the performance of the aggressive packet combining (APC) scheme. To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ are suggested by recent research and are found in the literature. The important modifications are the majority packet combining scheme (MjPC proposed by Wicker), packet combining scheme (PC proposed by Chakraborty), modified packet combining scheme (MPC proposed by Bhunia), and packet reversed packet combining (PRPC proposed by Bhunia) scheme. These modifications are appropriate for improving throughput of conventional ARQ protocols. Leung proposed an idea of APC for error control in wireless networks with the basic objective of error control in the uplink wireless data network. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)
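
    As a toy illustration of the packet-combining idea underlying the schemes listed above, the sketch below majority-votes the bits of three erroneous copies of a packet. APC and the modifications proposed here go further (combining copies and searching suspect bit positions), which is not shown.

```python
# Bitwise majority voting over three erroneous copies of the same packet.
import numpy as np

rng = np.random.default_rng(4)
packet = rng.integers(0, 2, 64)                       # original 64-bit packet

def corrupt(p, ber=0.05):
    flips = rng.random(p.size) < ber                  # random bit errors
    return np.bitwise_xor(p, flips.astype(int))

copies = [corrupt(packet) for _ in range(3)]          # three retransmissions
combined = (np.sum(copies, axis=0) >= 2).astype(int)  # bitwise majority vote

print("errors per copy:", [int(np.sum(c != packet)) for c in copies])
print("errors after combining:", int(np.sum(combined != packet)))
```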

  11. Smartphone-Based Patients' Activity Recognition by Using a Self-Learning Scheme for Medical Monitoring.

    Science.gov (United States)

    Guo, Junqi; Zhou, Xi; Sun, Yunchuan; Ping, Gong; Zhao, Guoxing; Li, Zhuorong

    2016-06-01

    Smartphone-based activity recognition has recently received remarkable attention in various applications of mobile health such as safety monitoring, fitness tracking, and disease prediction. To achieve more accurate and simplified medical monitoring, this paper proposes a self-learning scheme for patients' activity recognition, in which a patient only needs to carry an ordinary smartphone that contains common motion sensors. After the real-time data collection through this smartphone, we preprocess the data using coordinate system transformation to eliminate phone orientation influence. A set of robust and effective features are then extracted from the preprocessed data. Because a patient may inevitably perform various unpredictable activities for which there is no a priori knowledge in the training dataset, we propose a self-learning activity recognition scheme. The scheme determines whether there are a priori training samples and labeled categories in training pools that match the unpredictable activity data well. If not, it automatically assembles these unpredictable samples into different clusters and gives them new category labels. These clustered samples combined with the acquired new category labels are then merged into the training dataset to reinforce recognition ability of the self-learning model. In experiments, we evaluate our scheme using the data collected from two postoperative patient volunteers, including six labeled daily activities as the initial a priori categories in the training pool. Experimental results demonstrate that the proposed self-learning scheme for activity recognition works very well for most cases. When there exist several types of unseen activities without any a priori information, the accuracy reaches above 80% after the self-learning process converges.
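
    A hypothetical sketch of the self-learning step is shown below: new samples that match no known activity category well are clustered, given fresh labels, and merged back into the training pool. The features, distance threshold, and models are illustrative stand-ins for the scheme's actual components.

```python
# Sketch: detect samples with no good match in the training pool, cluster them,
# assign new category labels, and merge them back into the training set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 0.3, (50, 3)), rng.normal(3, 0.3, (50, 3))])
y_train = np.array([0] * 50 + [1] * 50)            # two known activities
X_new = np.vstack([rng.normal(0, 0.3, (20, 3)),    # known activity
                   rng.normal(-4, 0.3, (20, 3))])  # unseen activity

clf = NearestCentroid().fit(X_train, y_train)
dist = np.min(np.linalg.norm(X_new[:, None, :] - clf.centroids_[None], axis=2), axis=1)
unseen = dist > 1.5                                 # no good match in the pool

labels = clf.predict(X_new)
if unseen.any():
    km = KMeans(n_clusters=1, n_init=10, random_state=0).fit(X_new[unseen])
    new_label = y_train.max() + 1 + km.labels_      # fresh category ids
    labels[unseen] = new_label
    X_train = np.vstack([X_train, X_new[unseen]])   # reinforce the training pool
    y_train = np.concatenate([y_train, new_label])

print(np.unique(labels), X_train.shape)
```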

  12. A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)

    Science.gov (United States)

    Zhang, H.; Tian, X.

    2017-12-01

    The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (Β) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is used widely when sequentially correcting errors from large to small scales. However, introduction of the multigrid technique into four-dimensional variational data assimilation is not easy, due to its strong dependence on the adjoint model, which has extremely high computational costs in data coding, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, which is an advanced four-dimensional ensemble-variational method that can be applied without invoking the adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, with doubling of this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar, but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.

  13. Development of a reactivity worth correction scheme for the one-dimensional transient analysis

    International Nuclear Information System (INIS)

    Cho, J. Y.; Song, J. S.; Joo, H. G.; Kim, H. Y.; Kim, K. S.; Lee, C. C.; Zee, S. Q.

    2003-11-01

    This work develops a reactivity worth correction scheme for the MASTER one-dimensional (1-D) calculation model. The 1-D cross section variations according to the core state in the MASTER input file, which are produced for the 1-D calculation performed by the MASTER code, are incorrect in all core states except the exact core state for which the variations were produced. Therefore this scheme performs the reactivity worth correction factor calculations before the main 1-D transient calculation, and generates correction factors for boron worth, Doppler and moderator temperature coefficients, and control rod worth, respectively. These correction factors force the one-dimensional calculation to generate the same reactivity worths as the 3-dimensional calculation. This scheme is applied to the control bank withdrawal accident of Yonggwang unit 1 cycle 14, and the performance is examined by comparing the 1-D results with the 3-D results. This problem is analyzed by the RETRAN-MASTER consolidated code system. Most results of the 1-D calculation, including the transient power behavior, the peak power and its timing, are very similar to the 3-D results. In the MASTER neutronics computing time, the 1-D calculation including the correction factor calculation requires negligible time compared with the 3-D case. Therefore, the reactivity worth correction scheme is concluded to be very good in that it enables the 1-D calculation to produce very accurate results in little computing time.

  14. Fast, accurate, and robust frequency offset estimation based on modified adaptive Kalman filter in coherent optical communication system

    Science.gov (United States)

    Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong

    2017-09-01

    We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: extended Kalman filter (EKF), lock detector, and FO cycle slip recovery. The EKF module estimates time-varying phase induced by both FO and laser phase noise. The lock detector module makes a decision between acquisition mode and tracking mode and consequently sets the EKF tuning parameter in an adaptive manner. The third module can detect a possible cycle slip in the case of a large FO and make the proper correction. Based on the simulation and experimental results, the proposed MAKF has shown excellent estimation performance featuring high accuracy, fast convergence, as well as the capability of cycle slip recovery.
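
    The bare-bones sketch below shows the phase/frequency tracking at the core of such a scheme as a two-state (phase, frequency-offset) Kalman filter acting on noisy unwrapped phase samples. The MAKF of the paper additionally uses an extended Kalman filter on the complex signal, a lock detector for adaptive tuning, and cycle-slip recovery, none of which are reproduced here.

```python
# Two-state (phase, frequency offset) Kalman filter on noisy phase samples.
import numpy as np

T = 1e-3                                             # symbol period (toy value)
F = np.array([[1.0, 2 * np.pi * T], [0.0, 1.0]])     # phase advances by 2*pi*f*T
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-3])                            # process noise (phase, FO drift)
R = np.array([[1e-2]])                               # measurement noise

rng = np.random.default_rng(2)
true_fo = 40.0                                       # Hz, offset to recover
n = 400
phases = 2 * np.pi * true_fo * T * np.arange(n) + 0.05 * rng.normal(size=n)

x = np.array([0.0, 0.0])                             # initial (phase, FO) estimate
P = np.diag([1.0, 1e4])
for z in phases:
    x = F @ x                                        # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                              # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated FO (Hz):", x[1])                    # should approach 40 Hz
```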

  15. Bonus schemes and trading activity

    NARCIS (Netherlands)

    Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.

    2014-01-01

    Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of

  16. Variable order spherical harmonic expansion scheme for the radiative transport equation using finite elements

    International Nuclear Information System (INIS)

    Surya Mohan, P.; Tarvainen, Tanja; Schweiger, Martin; Pulkkinen, Aki; Arridge, Simon R.

    2011-01-01

    Highlights: → We developed a variable order global basis scheme to solve light transport in 3D. → Based on finite elements, the method can be applied to a wide class of geometries. → It is computationally cheap when compared to the fixed order scheme. → Comparisons with the local basis method and other models demonstrate its accuracy. → Addresses problems encountered in modeling of light transport in the human brain. - Abstract: We propose the PN approximation based on a finite element framework for solving the radiative transport equation with optical tomography as the primary application area. The key idea is to employ a variable order spherical harmonic expansion for angular discretization based on the proximity to the source and the local scattering coefficient. The proposed scheme is shown to be computationally efficient compared to employing homogeneously high orders of expansion everywhere in the domain. In addition the numerical method is shown to accurately describe the void regions encountered in the forward modeling of real-life specimens such as infant brains. The accuracy of the method is demonstrated over three model problems where the PN approximation is compared against Monte Carlo simulations and other state-of-the-art methods.

  17. Reconciling change blindness with long-term memory for objects.

    Science.gov (United States)

    Wood, Katherine; Simons, Daniel J

    2017-02-01

    How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost to using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.

  18. Elucidation of molecular kinetic schemes from macroscopic traces using system identification.

    Directory of Open Access Journals (Sweden)

    Miguel Fribourg

    2017-02-01

    can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems.

  19. A class of fully second order accurate projection methods for solving the incompressible Navier-Stokes equations

    International Nuclear Information System (INIS)

    Liu Miaoer; Ren Yuxin; Zhang Hanxin

    2004-01-01

    In this paper, a continuous projection method is designed and analyzed. The continuous projection method consists of a set of partial differential equations which can be regarded as an approximation of the Navier-Stokes (N-S) equations in each time interval of a given time discretization. The local truncation error (LTE) analysis is applied to the continuous projection methods, which yields a sufficient condition for the continuous projection methods to be temporally second order accurate. Based on this sufficient condition, a fully second order accurate discrete projection method is proposed. A heuristic stability analysis is performed on this projection method, showing that the present projection method can be stable. The stability of the present scheme is further verified through numerical experiments. The second order accuracy of the present projection method is confirmed by several numerical test cases.

  20. A new processing scheme for ultra-high resolution direct infusion mass spectrometry data

    Science.gov (United States)

    Zielinski, Arthur T.; Kourtchev, Ivan; Bortolini, Claudio; Fuller, Stephen J.; Giorio, Chiara; Popoola, Olalekan A. M.; Bogialli, Sara; Tapparo, Andrea; Jones, Roderic L.; Kalberer, Markus

    2018-04-01

    High resolution, high accuracy mass spectrometry is widely used to characterise environmental or biological samples with highly complex composition, enabling the identification of the chemical composition of often unknown compounds. Despite instrumental advancements, the accurate molecular assignment of compounds acquired in high resolution mass spectra remains time-consuming and requires automated algorithms, especially for samples covering a wide mass range and large numbers of compounds. A new processing scheme is introduced implementing filtering methods based on element assignment, instrumental error, and blank subtraction. Optional post-processing incorporates common ion selection across replicate measurements and shoulder ion removal. The scheme allows both positive and negative direct infusion electrospray ionisation (ESI) and atmospheric pressure photoionisation (APPI) acquisition with the same programs. An example application to atmospheric organic aerosol samples using an Orbitrap mass spectrometer is reported for both ionisation techniques, resulting in final spectra with 0.8% and 8.4% of the peaks retained from the raw spectra for APPI positive and ESI negative acquisition, respectively.
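
    A toy sketch of the filtering steps named above (element assignment by exact mass, an instrumental ppm-error tolerance, blank subtraction, and common-ion selection across replicates) is given below; the masses, tolerances, and intensities are illustrative, not the paper's values.

```python
# Peak filtering by formula assignment, ppm error, blank subtraction, and
# replicate agreement. Illustrative values only.
import numpy as np

def ppm_error(observed, theoretical):
    return 1e6 * (observed - theoretical) / theoretical

def filter_peaks(peaks, candidates, blank, ppm_tol=3.0, blank_ratio=10.0):
    """peaks/blank: dicts of m/z -> intensity; candidates: formula -> exact mass."""
    kept = {}
    for mz, inten in peaks.items():
        assigned = any(abs(ppm_error(mz, m)) <= ppm_tol for m in candidates.values())
        above_blank = inten > blank_ratio * blank.get(round(mz, 3), 0.0)
        if assigned and above_blank:
            kept[mz] = inten
    return kept

def common_ions(rep_a, rep_b, ppm_tol=3.0):
    """Ions of rep_a that have a matching m/z (within ppm_tol) in rep_b."""
    return [mz for mz in rep_a
            if any(abs(ppm_error(mz, other)) <= ppm_tol for other in rep_b)]

candidates = {"C6H10O5 (hypothetical)": 162.05282}
rep1 = filter_peaks({162.05285: 5e5, 180.99000: 2e4}, candidates, {162.053: 1e3})
rep2 = filter_peaks({162.05279: 4e5}, candidates, {})
print(rep1, common_ions(rep1, rep2))
```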

  1. Sensitivity Evaluation of Spectral Nudging Schemes in Historical Dynamical Downscaling for South Asia

    Directory of Open Access Journals (Sweden)

    Mehwish Ramzan

    2017-01-01

    Full Text Available Sensitivity experiments testing two scale-selective bias correction (SSBC methods have been carried out to identify an optimal spectral nudging scheme for historical dynamically downscaled simulations of South Asia, using the coordinated regional climate downscaling experiment (CORDEX protocol and the regional spectral model (RSM. Two time periods were selected under the category of short-term extreme summer and long-term decadal analysis. The new SSBC version applied nudging to full wind components, with an increased relaxation time in the lower model layers, incorporating a vertical weighted damping coefficient. An evaluation of the extraordinary weather conditions experienced in South Asia in the summer of 2005 confirmed the advantages of the new SSBC when modeling monsoon precipitation. Furthermore, the new SSBC scheme was found to predict precipitation and wind patterns more accurately than the older version in decadal analysis, which applies nudging only to the rotational wind field, with a constant strength at all heights.

  2. Ravens reconcile after aggressive conflicts with valuable partners.

    Science.gov (United States)

    Fraser, Orlaith N; Bugnyar, Thomas

    2011-03-25

    Reconciliation, a post-conflict affiliative interaction between former opponents, is an important mechanism for reducing the costs of aggressive conflict in primates and some other mammals as it may repair the opponents' relationship and reduce post-conflict distress. Opponents who share a valuable relationship are expected to be more likely to reconcile as for such partners the benefits of relationship repair should outweigh the risk of renewed aggression. In birds, however, post-conflict behavior has thus far been marked by an apparent absence of reconciliation, suggested to result either from differing avian and mammalian strategies or because birds may not share valuable relationships with partners with whom they engage in aggressive conflict. Here, we demonstrate the occurrence of reconciliation in a group of captive subadult ravens (Corvus corax) and show that it is more likely to occur after conflicts between partners who share a valuable relationship. Furthermore, former opponents were less likely to engage in renewed aggression following reconciliation, suggesting that reconciliation repairs damage caused to their relationship by the preceding conflict. Our findings suggest not only that primate-like valuable relationships exist outside the pair bond in birds, but that such partners may employ the same mechanisms in birds as in primates to ensure that the benefits afforded by their relationships are maintained even when conflicts of interest escalate into aggression. These results provide further support for a convergent evolution of social strategies in avian and mammalian species.

  3. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit ...

  4. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generation and verification of threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation were given, which could reduce the level of counterfeit electronic documents signed by a group of users.

  5. A Spatial Domain Quantum Watermarking Scheme

    International Nuclear Information System (INIS)

    Wei Zhan-Hong; Chen Xiu-Bo; Niu Xin-Xin; Yang Yi-Xian; Xu Shu-Jiang

    2016-01-01

    This paper presents a spatial domain quantum watermarking scheme. For a quantum watermarking scheme, a feasible quantum circuit is key to realizing it. This paper gives a feasible quantum circuit for the presented scheme. In order to give the quantum circuit, a new quantum multi-control rotation gate, which can be achieved with quantum basic gates, is designed. With this quantum circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides reversing the given quantum circuit, the paper gives another watermark extracting algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Unlike other quantum watermarking schemes, all given quantum circuits can be implemented with basic quantum gates. Moreover, the scheme is a spatial domain watermarking scheme, and is not based on any transform algorithm on quantum images. Meanwhile, it can keep the watermark secure even if its presence has been discovered. With the given quantum circuit, this paper implements simulation experiments for the presented scheme. The experimental result shows that the scheme performs well in terms of visual quality and embedding capacity. (paper)

  6. Gearbox Fault Features Extraction Using Vibration Measurements and Novel Adaptive Filtering Scheme

    Directory of Open Access Journals (Sweden)

    Ghalib R. Ibrahim

    2012-01-01

    Full Text Available Vibration signals measured from a gearbox are complex multicomponent signals, generated by tooth meshing, gear shaft rotation, gearbox resonance vibration signatures, and a substantial amount of noise. This paper presents a novel scheme for extracting gearbox fault features using adaptive filtering techniques for enhancing condition features, namely the meshing frequency sidebands. A modified least mean square (LMS) algorithm is examined and validated using only one accelerometer, instead of using two accelerometers in the traditional arrangement, as the main signal, while a desired signal is artificially generated from the measured shaft speed and gear meshing frequencies. The proposed scheme is applied to a signal simulated from gearbox frequencies with numerous values of the step size. Findings confirm that a step size of 10−5 invariably produces more accurate results and there has been a substantial improvement in signal clarity (better signal-to-noise ratio), which makes meshing frequency sidebands more discernible. The developed scheme is validated via a number of experiments carried out using a two-stage helical gearbox for a healthy pair of gears and a pair suffering from a tooth breakage with severity fault 1 (25% tooth removal) and fault 2 (50% tooth removal) under loads of 0% and 80% of the total load. The experimental results show remarkable improvements and enhance gear condition features. This paper illustrates that the new approach offers a more effective way to detect early faults.
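
    The sketch below is a schematic of the arrangement described above: the measured accelerometer signal feeds an LMS filter whose desired signal is synthesised from the known shaft and gear-mesh frequencies. The signal values, filter length, and frequencies are toy choices, while the 10−5 step size follows the abstract.

```python
# Schematic LMS enhancement of gear-mesh sidebands from a single noisy channel.
import numpy as np

fs, n = 10_000, 50_000
t = np.arange(n) / fs
f_shaft, f_mesh = 25.0, 25.0 * 32                   # 32-tooth pinion (assumed)

rng = np.random.default_rng(3)
measured = (np.sin(2 * np.pi * f_mesh * t)
            + 0.4 * np.sin(2 * np.pi * (f_mesh + f_shaft) * t)  # sideband
            + 1.5 * rng.normal(size=n))                          # heavy noise
desired = (np.sin(2 * np.pi * f_mesh * t)
           + np.sin(2 * np.pi * (f_mesh + f_shaft) * t)
           + np.sin(2 * np.pi * (f_mesh - f_shaft) * t))         # from shaft speed

L, mu = 64, 1e-5                                    # filter length, LMS step size
w = np.zeros(L)
out = np.zeros(n)
for k in range(L, n):
    x = measured[k - L:k][::-1]                     # tap-delay line of measured signal
    out[k] = w @ x
    e = desired[k] - out[k]                         # error drives adaptation
    w += 2 * mu * e * x                             # standard LMS weight update

print("correlation with clean reference after adaptation:",
      np.corrcoef(out[-5000:], desired[-5000:])[0, 1])
```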

  7. Fast and accurate determination of modularity and its effect size

    International Nuclear Information System (INIS)

    Treviño, Santiago III; Nyberg, Amy; Bassler, Kevin E; Del Genio, Charo I

    2015-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well as or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links. (paper)
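
    The snippet below sketches the effect-size idea: compute the modularity of a detected partition and express it as a z-score against an ensemble of Erdős–Rényi graphs of matching size. networkx's greedy detector stands in for the paper's spectral algorithm, and the ensemble size is an arbitrary choice.

```python
# Modularity z-score against an Erdős–Rényi null ensemble of matching size.
import networkx as nx
import numpy as np
from networkx.algorithms import community

def best_modularity(G):
    part = community.greedy_modularity_communities(G)
    return community.modularity(G, part)

G = nx.karate_club_graph()
q_obs = best_modularity(G)

n, m = G.number_of_nodes(), G.number_of_edges()
q_null = [best_modularity(nx.gnm_random_graph(n, m, seed=s)) for s in range(50)]
z = (q_obs - np.mean(q_null)) / np.std(q_null)
print(f"Q = {q_obs:.3f}, z-score vs ER ensemble = {z:.1f}")
```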

  8. High resolution kinetic beam schemes in generalized coordinates for ideal quantum gas dynamics

    International Nuclear Information System (INIS)

    Shi, Yu-Hsin; Huang, J.C.; Yang, J.Y.

    2007-01-01

    A class of high resolution kinetic beam schemes in multiple space dimensions in a general coordinate system for the ideal quantum gas is presented for the computation of quantum gas dynamical flows. The kinetic Boltzmann equation approach is adopted and the local equilibrium quantum statistics distribution is assumed. High-order accurate methods using the essentially non-oscillatory interpolation concept are constructed. Computations of shock wave diffraction by a circular cylinder in an ideal quantum gas are conducted to illustrate the present method. The present method provides a viable means to explore various practical ideal quantum gas flows

  9. An improved experimental scheme for simultaneous measurement of high-resolution zero electron kinetic energy (ZEKE) photoelectron and threshold photoion (MATI) spectra

    Science.gov (United States)

    Michels, François; Mazzoni, Federico; Becucci, Maurizio; Müller-Dethlefs, Klaus

    2017-10-01

    An improved detection scheme is presented for threshold ionization spectroscopy with simultaneous recording of the Zero Electron Kinetic Energy (ZEKE) and Mass Analysed Threshold Ionisation (MATI) signals. The objective is to obtain accurate dissociation energies for larger molecular clusters by simultaneously detecting the fragment and parent ion MATI signals with identical transmission. The scheme preserves an optimal ZEKE spectral resolution together with excellent separation of the spontaneous ion and MATI signals in the time-of-flight mass spectrum. The resulting improvement in sensitivity will allow for the determination of dissociation energies in clusters with substantial mass difference between parent and daughter ions.

  10. Labeling schemes for bounded degree graphs

    DEFF Research Database (Denmark)

    Adjiashvili, David; Rotbart, Noy Galil

    2014-01-01

    We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar...... graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...

  11. The 'Real Welfare' scheme: benchmarking welfare outcomes for commercially farmed pigs.

    Science.gov (United States)

    Pandolfi, F; Stoddart, K; Wainwright, N; Kyriazakis, I; Edwards, S A

    2017-10-01

    Animal welfare standards have been incorporated in EU legislation and in farm assurance schemes, based on scientific information and aiming to safeguard the welfare of the species concerned. Recently, emphasis has shifted from resource-based measures of welfare to animal-based measures, which are considered to assess more accurately the welfare status. The data used in this analysis were collected from April 2013 to May 2016 through the 'Real Welfare' scheme in order to assess on-farm pig welfare, as required for those finishing pigs under the UK Red Tractor Assurance scheme. The assessment involved five main measures (percentage of pigs requiring hospitalization, percentage of lame pigs, percentage of pigs with severe tail lesions, percentage of pigs with severe body marks and enrichment use ratio) and optional secondary measures (percentage of pigs with mild tail lesions, percentage of pigs with dirty tails, percentage of pigs with mild body marks, percentage of pigs with dirty bodies), with associated information about the environment and the enrichment in the farms. For the complete database, a sample of pens was assessed from 1928 farm units. Repeated measures were taken in the same farm unit over time, giving 112 240 records at pen level. These concerned a total of 13 480 289 pigs present on the farm during the assessments, with 5 463 348 pigs directly assessed using the 'Real Welfare' protocol. The three most common enrichment types were straw, chain and plastic objects. The main substrate was straw which was present in 67.9% of the farms. Compared with 2013, a significant increase of pens with undocked-tail pigs, substrates and objects was observed over time (P0.3). The results from the first 3 years of the scheme demonstrate a reduction of the prevalence of animal-based measures of welfare problems and highlight the value of this initiative.

  12. Spectral collocation method with a flexible angular discretization scheme for radiative transfer in multi-layer graded index medium

    Science.gov (United States)

    Wei, Linyang; Qi, Hong; Sun, Jianping; Ren, Yatao; Ruan, Liming

    2017-05-01

    The spectral collocation method (SCM) is employed to solve the radiative transfer in multi-layer semitransparent media with graded index. A new flexible angular discretization scheme is employed to discretize the solid angle domain freely, to overcome the limit on the number of discrete radiative directions imposed by the traditional SN discrete ordinates scheme. Three radial basis function interpolation approaches, namely multi-quadric (MQ), inverse multi-quadric (IMQ) and inverse quadratic (IQ) interpolation, are employed to couple the radiative intensity at the interface between two adjacent layers, and numerical experiments show that MQ interpolation has the highest accuracy and best stability. Various radiative transfer problems in double-layer semitransparent media with different thermophysical properties are investigated, and the influence of these thermophysical properties on the radiative transfer procedure in double-layer semitransparent media is also analyzed. All the simulated results show that the present SCM with the new angular discretization scheme can predict the radiative transfer in multi-layer semitransparent media with graded index efficiently and accurately.
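
    The sketch below shows the three radial basis functions mentioned above (MQ, IMQ and IQ) used to interpolate a quantity sampled at scattered interface nodes; the one-dimensional test profile and shape parameter are arbitrary illustrations, not the actual coupling of angular intensities used in the paper.

```python
# Radial basis function interpolation with MQ, IMQ and IQ kernels.
import numpy as np

def rbf(r, c, kind):
    if kind == "MQ":
        return np.sqrt(r ** 2 + c ** 2)
    if kind == "IMQ":
        return 1.0 / np.sqrt(r ** 2 + c ** 2)
    if kind == "IQ":
        return 1.0 / (r ** 2 + c ** 2)
    raise ValueError(kind)

def rbf_interpolate(x_nodes, f_nodes, x_eval, c=0.5, kind="MQ"):
    A = rbf(np.abs(x_nodes[:, None] - x_nodes[None, :]), c, kind)
    w = np.linalg.solve(A, f_nodes)                 # interpolation weights
    B = rbf(np.abs(x_eval[:, None] - x_nodes[None, :]), c, kind)
    return B @ w

x_nodes = np.linspace(-1.0, 1.0, 9)                 # scattered interface samples
f_nodes = np.exp(-x_nodes ** 2)                     # sampled intensity profile
x_eval = np.linspace(-1.0, 1.0, 41)
for kind in ("MQ", "IMQ", "IQ"):
    err = np.max(np.abs(rbf_interpolate(x_nodes, f_nodes, x_eval, kind=kind)
                        - np.exp(-x_eval ** 2)))
    print(kind, f"max interpolation error = {err:.2e}")
```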

  13. Normal scheme for solving the transport equation independently of spatial discretization

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    1993-01-01

    To solve the discrete ordinates neutron transport equation, a general order nodal scheme is used, where nodes are allowed to have different orders of approximation and the whole system reaches a final order distribution. Independence in the selection of system discretization and order of approximation is obtained without loss of accuracy. The final equations and the iterative method to reach a converged order solution were implemented in a two-dimensional computer code to solve monoenergetic, isotropic scattering, external source problems. Two benchmark problems were solved using different automatic order selection methods. Results show accurate solutions independently of the spatial discretization, regardless of the initial selection of distribution order. (author)

  14. A Gauss-Kronrod-Trapezoidal integration scheme for modeling biological tissues with continuous fiber distributions.

    Science.gov (United States)

    Hou, Chieh; Ateshian, Gerard A

    2016-01-01

    Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element (FE) analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation.

  15. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  16. Tabled Execution in Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Willcock, J J; Lumsdaine, A; Quinlan, D J

    2008-08-19

    Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs, and making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
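
    A Python stand-in for the idea is sketched below: a tabled function caches completed answers and tracks currently active calls, so a re-entrant call with the same argument receives a provisional answer instead of recursing forever. The paper's implementation uses Scheme continuations and mutable data, and real tabling iterates to a fixed point, so this only conveys the flavour.

```python
# Memoization plus an active-call set: re-entrant calls get a provisional answer.
def tabled(provisional):
    def decorate(fn):
        table, active = {}, set()
        def wrapper(arg):
            if arg in table:
                return table[arg]
            if arg in active:              # re-entrant call: break the cycle
                return provisional
            active.add(arg)
            try:
                result = fn(arg)
            finally:
                active.discard(arg)
            table[arg] = result
            return result
        return wrapper
    return decorate

# Reachability in a cyclic graph: naive recursion loops forever on a -> b -> c -> a.
graph = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}

@tabled(provisional=False)
def reaches_d(node):
    return node == "d" or any(reaches_d(n) for n in graph[node])

print(reaches_d("a"))   # True, despite the cycle
```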

  17. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA, a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.
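
    As a minimal illustration of score-level fusion (one of the fusion levels studied), the sketch below min-max normalizes face and iris matcher scores and combines them with a weighted sum. In the paper the weights are optimized (e.g., by BSA); the weights, scores, and decision threshold used here are placeholders.

      import numpy as np

      # Illustrative weighted-sum score-level fusion of face and iris matchers.
      def minmax(scores):
          s = np.asarray(scores, dtype=float)
          return (s - s.min()) / (s.max() - s.min())   # normalise scores to [0, 1]

      def fuse(face_scores, iris_scores, w_face=0.4, w_iris=0.6):
          # The weights would be optimised in practice; these values are placeholders.
          return w_face * minmax(face_scores) + w_iris * minmax(iris_scores)

      # Toy similarity scores (higher = more similar); first two are genuine pairs.
      face = [0.62, 0.55, 0.30, 0.20]
      iris = [0.91, 0.80, 0.35, 0.25]
      fused = fuse(face, iris)
      decisions = fused >= 0.5                          # accept/reject at a fixed threshold
      print("fused scores:", np.round(fused, 3), "decisions:", decisions)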

  18. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    Science.gov (United States)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.

  19. A Scheme for Evaluating Feral Horse Management Strategies

    Directory of Open Access Journals (Sweden)

    L. L. Eberhardt

    2012-01-01

    Full Text Available Context. Feral horses are an increasing problem in many countries and are popular with the public, making management difficult. Aims. To develop a scheme useful in planning management strategies. Methods. A model is developed and applied to four different feral horse herds, three of which have been quite accurately counted over the years. Key Results. The selected model has been tested on a variety of data sets, with emphasis on the four sets of feral horse data. An alternative, nonparametric model is used to check the selected parametric approach. Conclusions. A density-dependent response was observed in all 4 herds, even though only 8 observations were available in each case. Consistency in the model fits suggests that small starting herds can be used to test various management techniques. Implications. Management methods can be tested on actual, confined populations.
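
    The abstract does not give the selected model; purely as an illustration of detecting a density-dependent response from a short series of annual herd counts, the sketch below fits a Ricker-type model by regressing log annual growth on abundance. The counts are hypothetical.

      import numpy as np

      # Illustrative Ricker-type density-dependence fit to annual herd counts.
      # N_{t+1} = N_t * exp(r * (1 - N_t / K))  =>  log growth is linear in N_t.
      counts = np.array([150, 190, 238, 290, 340, 372, 395, 405], dtype=float)  # hypothetical

      growth = np.log(counts[1:] / counts[:-1])         # log annual growth rates
      slope, intercept = np.polyfit(counts[:-1], growth, 1)

      r_max = intercept                                  # growth rate at low density
      K = -intercept / slope                             # equilibrium herd size
      print(f"r_max = {r_max:.3f}, K = {K:.0f}")
      print("density dependence detected:", slope < 0)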

  20. A high-precision sampling scheme to assess persistence and transport characteristics of micropollutants in rivers.

    Science.gov (United States)

    Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter

    2016-01-01

    Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence, many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems is not well understood. In particular, a better knowledge of processes governing their environmental behavior is needed. Although a lot of data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate that removal processes are highly variable in time and space, and this has to be considered in future studies. The high-precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. Copyright © 2015. Published by Elsevier B.V.
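
    A minimal sketch of the mass-balance idea, with hypothetical numbers: the same water parcel is sampled at an upstream and a downstream station offset by the travel time, a small tributary input is added to the balance, and the load loss is converted into a first-order removal rate.

      import numpy as np

      # Illustrative Lagrangian mass balance for one river stretch (all numbers hypothetical).
      Q_up, Q_down = 2.0, 2.3          # discharge, m3/s (gain from a small tributary)
      C_up, C_down = 120.0, 95.0       # pollutant concentration, ng/L
      Q_trib, C_trib = 0.3, 40.0       # tributary inflow mixed in along the stretch
      travel_time_h = 6.0              # parcel travel time between stations, hours

      load_in = Q_up * C_up + Q_trib * C_trib          # incoming load (relative units)
      load_out = Q_down * C_down                       # load leaving the stretch

      removal_fraction = 1.0 - load_out / load_in
      k_per_hour = -np.log(load_out / load_in) / travel_time_h   # first-order rate

      print(f"removal over the stretch: {removal_fraction:.1%}")
      print(f"first-order removal rate: {k_per_hour:.3f} 1/h")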

  1. Accurate detection of hierarchical communities in complex networks based on nonlinear dynamical evolution

    Science.gov (United States)

    Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng

    2018-04-01

    One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-change" type of approach to addressing the problem of community detection.

  2. Constrained-DFT method for accurate energy-level alignment of metal/molecule interfaces

    KAUST Repository

    Souza, A. M.

    2013-10-07

    We present a computational scheme for extracting the energy-level alignment of a metal/molecule interface, based on constrained density functional theory and local exchange and correlation functionals. The method, applied here to benzene on Li(100), allows us to evaluate charge-transfer energies, as well as the spatial distribution of the image charge induced on the metal surface. We systematically study the energies for charge transfer from the molecule to the substrate as function of the molecule-substrate distance, and investigate the effects arising from image-charge confinement and local charge neutrality violation. For benzene on Li(100) we find that the image-charge plane is located at about 1.8 Å above the Li surface, and that our calculated charge-transfer energies compare perfectly with those obtained with a classical electrostatic model having the image plane located at the same position. The methodology outlined here can be applied to study any metal/organic interface in the weak coupling limit at the computational cost of a total energy calculation. Most importantly, as the scheme is based on total energies and not on correcting the Kohn-Sham quasiparticle spectrum, accurate results can be obtained with local/semilocal exchange and correlation functionals. This enables a systematic approach to convergence.
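
    Only the image-plane position (about 1.8 Å above the Li surface) is taken from the abstract; the ionisation energy and work function below are placeholders. The sketch shows the classical electrostatic model that the calculated charge-transfer energies are compared against: the charge-transfer gap is lowered by the image-charge stabilisation e²/(16πε₀z), where z is the height of the transferred charge above the image plane.

      # Classical image-charge estimate of the molecule -> metal charge-transfer
      # energy as a function of molecule-surface distance.  Only the image-plane
      # position comes from the abstract; the other parameters are placeholders.
      E2_OVER_4PI_EPS0 = 1.44       # e^2 / (4*pi*eps0) in eV * nm
      Z_IMAGE_NM = 0.18             # image plane ~1.8 Angstrom above the Li surface

      def charge_transfer_energy(d_nm, ionisation_eV=9.2, workfunction_eV=2.9):
          """Energy to move an electron from the molecule to the metal at distance d (nm)."""
          z = d_nm - Z_IMAGE_NM                              # charge height above the image plane
          image_stabilisation = E2_OVER_4PI_EPS0 / (4.0 * z) # e^2 / (16*pi*eps0*z), in eV
          return (ionisation_eV - workfunction_eV) - image_stabilisation

      for d in (0.35, 0.45, 0.60, 0.80):                     # molecule-surface distances in nm
          print(f"d = {d:.2f} nm  ->  E_CT = {charge_transfer_energy(d):.2f} eV")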

  3. Constrained-DFT method for accurate energy-level alignment of metal/molecule interfaces

    KAUST Repository

    Souza, A. M.; Rungger, I.; Pemmaraju, C. D.; Schwingenschlögl, Udo; Sanvito, S.

    2013-01-01

    We present a computational scheme for extracting the energy-level alignment of a metal/molecule interface, based on constrained density functional theory and local exchange and correlation functionals. The method, applied here to benzene on Li(100), allows us to evaluate charge-transfer energies, as well as the spatial distribution of the image charge induced on the metal surface. We systematically study the energies for charge transfer from the molecule to the substrate as function of the molecule-substrate distance, and investigate the effects arising from image-charge confinement and local charge neutrality violation. For benzene on Li(100) we find that the image-charge plane is located at about 1.8 Å above the Li surface, and that our calculated charge-transfer energies compare perfectly with those obtained with a classical electrostatic model having the image plane located at the same position. The methodology outlined here can be applied to study any metal/organic interface in the weak coupling limit at the computational cost of a total energy calculation. Most importantly, as the scheme is based on total energies and not on correcting the Kohn-Sham quasiparticle spectrum, accurate results can be obtained with local/semilocal exchange and correlation functionals. This enables a systematic approach to convergence.

  4. An asymptotic preserving unified gas kinetic scheme for frequency-dependent radiative transfer equations

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Wenjun, E-mail: sun_wenjun@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088 (China); Jiang, Song, E-mail: jiang@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088 (China); Xu, Kun, E-mail: makxu@ust.hk [Department of Mathematics and Department of Mechanical and Aerospace Engineering, Hong Kong University of Science and Technology, Hong Kong (China); Li, Shu, E-mail: li_shu@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088 (China)

    2015-12-01

    This paper presents an extension of previous work (Sun et al., 2015 [22]) of the unified gas kinetic scheme (UGKS) for the gray radiative transfer equations to the frequency-dependent (multi-group) radiative transfer system. Different from the gray radiative transfer equations, where the optical opacity is only a function of local material temperature, the simulation of frequency-dependent radiative transfer is associated with additional difficulties from the frequency-dependent opacity. For the multiple frequency radiation, the opacity depends on both the spatial location and the frequency. For example, the opacity is typically a decreasing function of frequency. At the same spatial region the transport physics can be optically thick for the low frequency photons, and optically thin for high frequency ones. Therefore, the optical thickness is not a simple function of space location. In this paper, the UGKS for frequency-dependent radiative system is developed. The UGKS is a finite volume method and the transport physics is modeled according to the ratio of the cell size to the photon's frequency-dependent mean free path. When the cell size is much larger than the photon's mean free path, a diffusion solution for such a frequency radiation will be obtained. On the other hand, when the cell size is much smaller than the photon's mean free path, a free transport mechanism will be recovered. In the regime between the above two limits, with the variation of the ratio between the local cell size and photon's mean free path, the UGKS provides a smooth transition in the physical and frequency space to capture the corresponding transport physics accurately. The seemingly straightforward extension of the UGKS from the gray to multiple frequency radiation system is due to its intrinsic consistent multiple scale transport modeling, but it still involves considerable work to properly discretize the multiple groups in order to design an asymptotic preserving (AP) scheme.

  5. Reconciling pairs of concurrently used clinical practice guidelines using Constraint Logic Programming.

    Science.gov (United States)

    Wilk, Szymon; Michalowski, Martin; Michalowski, Wojtek; Hing, Marisela Mainegra; Farion, Ken

    2011-01-01

    This paper describes a new methodological approach to reconciling adverse and contradictory activities (called points of contention) occurring when a patient is managed according to two or more concurrently used clinical practice guidelines (CPGs). The need to address these inconsistencies occurs when a patient with more than one disease, each of which is a comorbid condition, has to be managed according to different treatment regimens. We propose an automatic procedure that constructs a mathematical guideline model using the Constraint Logic Programming (CLP) methodology, uses this model to identify and mitigate encountered points of contention, and revises the considered CPGs accordingly. The proposed procedure is used as an alerting mechanism and coupled with a guideline execution engine warns the physician about potential problems with the concurrent application of two or more guidelines. We illustrate the operation of our procedure in a clinical scenario describing simultaneous use of CPGs for duodenal ulcer and transient ischemic attack.
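
    A much-simplified illustration of what a constraint model can catch (the treatment options and the interaction rule below are hypothetical, not taken from the actual CPGs): enumerate the treatment choices allowed by each guideline, filter out combinations that violate an interaction constraint, and report a point of contention if nothing survives.

      from itertools import product

      # Much-simplified illustration of detecting a point of contention between two
      # concurrently applied guidelines.  Options and constraints are hypothetical.
      ulcer_options = ["PPI + antibiotics"]                 # duodenal ulcer guideline
      tia_options = ["aspirin", "clopidogrel"]              # transient ischemic attack guideline

      def admissible(ulcer_rx, tia_rx):
          # Assumed interaction constraint: avoid aspirin while an active ulcer is treated.
          return tia_rx != "aspirin"

      plans = [p for p in product(ulcer_options, tia_options) if admissible(*p)]

      if not plans:
          print("Point of contention: no jointly admissible treatment plan.")
      else:
          for ulcer_rx, tia_rx in plans:
              print("admissible combined plan -> ulcer:", ulcer_rx, "| TIA:", tia_rx)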

  6. Sensitivity of the weather research and forecasting model to parameterization schemes for regional climate of Nile River Basin

    Science.gov (United States)

    Tariku, Tebikachew Betru; Gan, Thian Yew

    2018-06-01

    Regional climate models (RCMs) have been used to simulate rainfall at relatively high spatial and temporal resolutions useful for sustainable water resources planning, design and management. In this study, the sensitivity of the RCM, weather research and forecasting (WRF), in modeling the regional climate of the Nile River Basin (NRB) was investigated using 31 combinations of different physical parameterization schemes which include cumulus (Cu), microphysics (MP), planetary boundary layer (PBL), land-surface model (LSM) and radiation (Ra) schemes. Using the European Centre for Medium-Range Weather Forecast (ECMWF) ERA-Interim reanalysis data as initial and lateral boundary conditions, WRF was configured to model the climate of NRB at a resolution of 36 km with 30 vertical levels. The 1999-2001 simulations using WRF were compared with satellite data combined with ground observation and the NCEP reanalysis data for 2 m surface air temperature (T2), rainfall, short- and longwave downward radiation at the surface (SWRAD, LWRAD). Overall, WRF simulated more accurate T2 and LWRAD (with correlation coefficients >0.8 and low root-mean-square error) than SWRAD and rainfall for the NRB. Further, the simulation of rainfall is more sensitive to PBL, Cu and MP schemes than other schemes of WRF. For example, WRF simulated less biased rainfall with Kain-Fritsch combined with MYJ than with YSU as the PBL scheme. The simulation of T2 is more sensitive to LSM and Ra than to Cu, PBL and MP schemes selected, SWRAD is more sensitive to MP and Ra than to Cu, LSM and PBL schemes, and LWRAD is more sensitive to LSM, Ra and PBL than Cu, and MP schemes. In summary, the following combination of schemes simulated the most representative regional climate of NRB: WSM3 microphysics, KF cumulus, MYJ PBL, RRTM longwave radiation and Dudhia shortwave radiation schemes, and Noah LSM. The above configuration of WRF coupled to the Noah LSM has also been shown to simulate representative regional

  7. Sensitivity of the weather research and forecasting model to parameterization schemes for regional climate of Nile River Basin

    Science.gov (United States)

    Tariku, Tebikachew Betru; Gan, Thian Yew

    2017-08-01

    Regional climate models (RCMs) have been used to simulate rainfall at relatively high spatial and temporal resolutions useful for sustainable water resources planning, design and management. In this study, the sensitivity of the RCM, weather research and forecasting (WRF), in modeling the regional climate of the Nile River Basin (NRB) was investigated using 31 combinations of different physical parameterization schemes which include cumulus (Cu), microphysics (MP), planetary boundary layer (PBL), land-surface model (LSM) and radiation (Ra) schemes. Using the European Centre for Medium-Range Weather Forecast (ECMWF) ERA-Interim reanalysis data as initial and lateral boundary conditions, WRF was configured to model the climate of NRB at a resolution of 36 km with 30 vertical levels. The 1999-2001 simulations using WRF were compared with satellite data combined with ground observation and the NCEP reanalysis data for 2 m surface air temperature (T2), rainfall, short- and longwave downward radiation at the surface (SWRAD, LWRAD). Overall, WRF simulated more accurate T2 and LWRAD (with correlation coefficients >0.8 and low root-mean-square error) than SWRAD and rainfall for the NRB. Further, the simulation of rainfall is more sensitive to PBL, Cu and MP schemes than other schemes of WRF. For example, WRF simulated less biased rainfall with Kain-Fritsch combined with MYJ than with YSU as the PBL scheme. The simulation of T2 is more sensitive to LSM and Ra than to Cu, PBL and MP schemes selected, SWRAD is more sensitive to MP and Ra than to Cu, LSM and PBL schemes, and LWRAD is more sensitive to LSM, Ra and PBL than Cu, and MP schemes. In summary, the following combination of schemes simulated the most representative regional climate of NRB: WSM3 microphysics, KF cumulus, MYJ PBL, RRTM longwave radiation and Dudhia shortwave radiation schemes, and Noah LSM. The above configuration of WRF coupled to the Noah LSM has also been shown to simulate representative regional

  8. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim

    2012-01-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.
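
    A small Monte Carlo sketch (not the paper's analysis) comparing threshold-based switched scheduling with full-feedback max-SNR scheduling in i.i.d. Rayleigh fading; the threshold, user count, and mean SNR are arbitrary values chosen for illustration only. numpy is assumed available.

      import numpy as np

      # Monte Carlo sketch: switched scheduling vs. full-feedback max-SNR scheduling.
      rng = np.random.default_rng(0)
      n_users, n_slots, mean_snr, threshold = 8, 200_000, 1.0, 1.2

      snr = rng.exponential(mean_snr, size=(n_slots, n_users))   # Rayleigh fading -> exponential SNR

      # Full feedback: every user reports its SNR, the best one is scheduled.
      rate_full = np.mean(np.log2(1.0 + snr.max(axis=1)))

      # Switched diversity: probe users in order, stop at the first SNR above the
      # threshold; if nobody qualifies, keep the last probed user.
      above = snr >= threshold
      first_hit = np.argmax(above, axis=1)                        # index of first True (0 if none)
      any_hit = above.any(axis=1)
      chosen = np.where(any_hit, first_hit, n_users - 1)
      rate_switched = np.mean(np.log2(1.0 + snr[np.arange(n_slots), chosen]))
      feedback_msgs = np.mean(np.where(any_hit, first_hit + 1, n_users))

      print(f"full feedback : {rate_full:.3f} bit/s/Hz, {n_users} feedback messages/slot")
      print(f"switched      : {rate_switched:.3f} bit/s/Hz, {feedback_msgs:.2f} feedback messages/slot")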

  9. Short-Term Saved Leave Scheme

    CERN Multimedia

    2007-01-01

    As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme a...

  10. Short-Term Saved Leave Scheme

    CERN Multimedia

    HR Department

    2007-01-01

    As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme ...

  11. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad

    2012-09-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  12. Numerical schemes for explosion hazards

    International Nuclear Information System (INIS)

    Therme, Nicolas

    2015-01-01

    In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High order methods of MUSCL type are used in the discrete convective operators, based solely on material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called

  13. Compact Spreader Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.

    2014-07-25

    This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, offer more modest financial involvements both in construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.

  14. Stabilized linear semi-implicit schemes for the nonlocal Cahn-Hilliard equation

    Science.gov (United States)

    Du, Qiang; Ju, Lili; Li, Xiao; Qiao, Zhonghua

    2018-06-01

    Comparing with the well-known classic Cahn-Hilliard equation, the nonlocal Cahn-Hilliard equation is equipped with a nonlocal diffusion operator and can describe more practical phenomena for modeling phase transitions of microstructures in materials. On the other hand, it evidently brings more computational costs in numerical simulations, thus efficient and accurate time integration schemes are highly desired. In this paper, we propose two energy-stable linear semi-implicit methods with first and second order temporal accuracies respectively for solving the nonlocal Cahn-Hilliard equation. The temporal discretization is done by using the stabilization technique with the nonlocal diffusion term treated implicitly, while the spatial discretization is carried out by the Fourier collocation method with FFT-based fast implementations. The energy stabilities are rigorously established for both methods in the fully discrete sense. Numerical experiments are conducted for a typical case involving Gaussian kernels. We test the temporal convergence rates of the proposed schemes and make a comparison of the nonlocal phase transition process with the corresponding local one. In addition, long-time simulations of the coarsening dynamics are also performed to predict the power law of the energy decay.
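
    For reference, the first-order stabilized linear semi-implicit step is sketched below for the local Cahn-Hilliard equation u_t = Δ(u³ - u - ε²Δu) with a Fourier pseudo-spectral discretization; the paper applies the same stabilization idea with the nonlocal diffusion operator treated implicitly. Parameter values are illustrative only.

      import numpy as np

      # One first-order stabilised, linear semi-implicit step for the *local*
      # Cahn-Hilliard equation, Fourier pseudo-spectral in space.
      N, L, eps, dt, S = 128, 2.0 * np.pi, 0.05, 1.0e-3, 2.0   # S = stabilisation constant

      k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
      kx, ky = np.meshgrid(k, k, indexing="ij")
      k2 = kx**2 + ky**2                                       # |k|^2, so Laplacian -> -k2

      rng = np.random.default_rng(1)
      u = 0.05 * rng.standard_normal((N, N))                   # small random initial data
      mass0 = u.mean()

      def step(u):
          f_hat = np.fft.fft2(u**3 - u)                        # explicit nonlinear term
          u_hat = np.fft.fft2(u)
          num = (1.0 + dt * S * k2) * u_hat - dt * k2 * f_hat
          den = 1.0 + dt * eps**2 * k2**2 + dt * S * k2        # stiff part + stabiliser, implicit
          return np.real(np.fft.ifft2(num / den))

      for _ in range(2000):
          u = step(u)
      print("mass conserved:", np.isclose(u.mean(), mass0))
      print("solution range:", u.min(), u.max())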

  15. Quantum signature scheme for known quantum messages

    International Nuclear Information System (INIS)

    Kim, Taewan; Lee, Hyang-Sook

    2015-01-01

    When we want to sign a quantum message that we create, we can use arbitrated quantum signature schemes, which can sign not only known but also unknown quantum messages. However, since arbitrated quantum signature schemes need the help of a trusted arbitrator in each verification of the signature, they are known to be inconvenient in practical use. If we consider only known quantum messages, as in the above situation, a quantum signature scheme with a more efficient structure can exist. In this paper, we present a new quantum signature scheme for known quantum messages without the help of an arbitrator. Differing from arbitrated quantum signature schemes based on the quantum one-time pad with a symmetric key, our scheme is based on quantum public-key cryptosystems, so the validity of the signature can be verified by a receiver without the help of an arbitrator. Moreover, we show that our scheme provides the functions of quantum message integrity, user authentication and non-repudiation of the origin, as in digital signature schemes. (paper)

  16. Two-level schemes for the advection equation

    Science.gov (United States)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of advection operators in conservative (divergent) and non-conservative (characteristic) forms. The advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed; on the basis of the general theory of stability (well-posedness) of operator-difference schemes, the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are implicit schemes of the second (Crank-Nicolson scheme) and fourth order. The conditionally stable implicit Lax-Wendroff scheme is constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by the numerical results of a model two-dimensional problem.
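
    As a concrete reminder of the explicit scheme mentioned above, here is a finite-difference Lax-Wendroff step for 1-D constant-coefficient advection on a periodic grid (the paper itself works with finite-element approximations of such schemes); the CFL number is kept below 1.

      import numpy as np

      # Explicit Lax-Wendroff scheme for u_t + a u_x = 0 on a periodic grid.
      a, L, N, T = 1.0, 1.0, 200, 0.5
      h = L / N
      c = 0.8                                   # CFL number a*tau/h, kept <= 1
      tau = c * h / a
      steps = round(T / tau)

      x = np.arange(N) * h
      u = np.exp(-200.0 * (x - 0.3) ** 2)       # smooth initial pulse

      for _ in range(steps):
          up, um = np.roll(u, -1), np.roll(u, 1)          # periodic u_{j+1}, u_{j-1}
          u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

      exact = np.exp(-200.0 * (((x - a * steps * tau) % L) - 0.3) ** 2)
      print("max error vs exact solution:", np.abs(u - exact).max())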

  17. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...

  18. Idealized Simulations of a Squall Line from the MC3E Field Campaign Applying Three Bin Microphysics Schemes: Dynamic and Thermodynamic Structure

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Lulin [National Center for Atmospheric Research, Boulder, Colorado; Fan, Jiwen [Pacific Northwest National Laboratory, Richland, Washington; Lebo, Zachary J. [University of Wyoming, Laramie, Wyoming; Wu, Wei [National Center for Atmospheric Research, Boulder, Colorado; University of Illinois at Urbana–Champaign, Urbana, Illinois; Morrison, Hugh [National Center for Atmospheric Research, Boulder, Colorado; Grabowski, Wojciech W. [National Center for Atmospheric Research, Boulder, Colorado; Chu, Xia [University of Wyoming, Laramie, Wyoming; Geresdi, István [University of Pécs, Pécs, Hungary; North, Kirk [McGill University, Montréal, Québec, Canada; Stenz, Ronald [University of North Dakota, Grand Forks, North Dakota; Gao, Yang [Pacific Northwest National Laboratory, Richland, Washington; Lou, Xiaofeng [Chinese Academy of Meteorological Sciences, Beijing, China; Bansemer, Aaron [National Center for Atmospheric Research, Boulder, Colorado; Heymsfield, Andrew J. [National Center for Atmospheric Research, Boulder, Colorado; McFarquhar, Greg M. [National Center for Atmospheric Research, Boulder, Colorado; University of Illinois at Urbana–Champaign, Urbana, Illinois; Rasmussen, Roy M. [National Center for Atmospheric Research, Boulder, Colorado

    2017-12-01

    The squall line event on May 20, 2011, during the Midlatitude Continental Convective Clouds (MC3E) field campaign has been simulated by three bin (spectral) microphysics schemes coupled into the Weather Research and Forecasting (WRF) model. Semi-idealized three-dimensional simulations driven by temperature and moisture profiles acquired by a radiosonde released in the pre-convection environment at 1200 UTC in Morris, Oklahoma show that each scheme produced a squall line with features broadly consistent with the observed storm characteristics. However, substantial differences in the details of the simulated dynamic and thermodynamic structure are evident. These differences are attributed to different algorithms and numerical representations of microphysical processes, assumptions of the hydrometeor processes and properties, especially ice particle mass, density, and terminal velocity relationships with size, and the resulting interactions between the microphysics, cold pool, and dynamics. This study shows that different bin microphysics schemes, designed to be conceptually more realistic and thus arguably more accurate than bulk microphysics schemes, still simulate a wide spread of microphysical, thermodynamic, and dynamic characteristics of a squall line, qualitatively similar to the spread of squall line characteristics using various bulk schemes. Future work may focus on improving the representation of ice particle properties in bin schemes to reduce this uncertainty and using the similar assumptions for all schemes to isolate the impact of physics from numerics.

  19. A fully-automated multiscale kernel graph cuts based particle localization scheme for temporal focusing two-photon microscopy

    Science.gov (United States)

    Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei

    2017-03-01

    The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic estimation based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.

  20. A magnet lattice for a tau-charm factory suitable for both standard scheme and monochromatization scheme

    International Nuclear Information System (INIS)

    Beloshitsky, P.

    1992-06-01

    A versatile magnet lattice for a tau-charm factory is considered in this report. The main feature of this lattice is that it can be used for both the standard flat-beam scheme and the beam monochromatization scheme. A detailed description of the lattice is given. The restrictions that follow from requiring compatibility of the two schemes are discussed.

  1. 6 Ma age of carving Westernmost Grand Canyon: Reconciling geologic data with combined AFT, (U-Th)/He, and 4He/3He thermochronologic data

    Science.gov (United States)

    Winn, Carmen; Karlstrom, Karl E.; Shuster, David L.; Kelley, Shari; Fox, Matthew

    2017-09-01

    Conflicting hypotheses about the timing of carving of the westernmost Grand Canyon involve either a 70 Ma ("old") or a 6 Ma ("young") canyon. Some apatite thermochronologic interpretations appear to conflict with the geologic evidence for young carving, but they are reconciled in this paper via the integration of three methods of analyses on the same sample: apatite (U-Th)/He ages (AHe), 4He/3He thermochronometry (4He/3He), and apatite fission-track ages and lengths (AFT). HeFTy software was used to generate time-temperature (t-T) paths that predict all new and published 4He/3He, AHe, and AFT data to within assumed uncertainties. These t-T paths show cooling from ∼100 °C to 40-60 °C in the Laramide (70-50 Ma), long-term residence at 40-60 °C in the mid-Tertiary (50-10 Ma), and cooling to near-surface temperatures after 10 Ma, and thus support young incision of the westernmost Grand Canyon. A subset of AHe data, when interpreted alone (i.e. without 4He/3He or AFT data), are better predicted by t-T paths that cool to surface temperatures during the Laramide, consistent with an "old" Grand Canyon. However, the combined AFT, AHe, and 4He/3He analysis of a key sample from Separation Canyon can only be reconciled by a "young" Canyon. Additional new AFT (5 samples) and AHe data (3 samples) in several locations along the canyon corridor also support a "young" Canyon. This inconsistency, which mimics the overall controversy of the age of the Grand Canyon, is reconciled here by optimizing cooling paths so they are most consistent with multiple thermochronometers from the same rocks. To do this, we adjusted model parameters and uncertainties to account for uncertainty in the rate of radiation damage annealing in these apatites during sedimentary burial and the resulting variations in He retentivity. In westernmost Grand Canyon, peak burial conditions (temperature and duration) during the Laramide were likely insufficient to fully anneal radiation damage that accumulated during prolonged, near-surface residence since the Proterozoic. We conclude that application of multiple

  2. Medication reconciliation in acute care: ensuring an accurate drug regimen on admission and discharge.

    Science.gov (United States)

    Rodehaver, Claire; Fearing, Deb

    2005-07-01

    Several factors contribute to the potential for patient confusion regarding his or her medication regimen, including multiple names for a single drug and formulary variations when the patient receives medications from more than one pharmacy. A 68-year-old woman was discharged from the hospital on a HMG-CoA reductase inhibitor (statin) and resumed her home statin. Eleven days later she returned to the hospital with a diagnosis of severe rhabdomyolysis due to statin overdose. IMPLEMENTING SOLUTIONS: Miami Valley Hospital, Dayton, Ohio, implemented a reconciliation process and order form at admission and discharge to reduce the likelihood that this miscommunication would recur. Initial efforts were trialed on a 44-bed orthopedic unit, with spread of the initiative to the cardiac units and finally to the remaining 22 nursing units. The team successfully implemented initiation of the order sheet, yet audits indicated the need for improvement in reconciling the medications within 24 hours of admission and in reconciling the home medications at the point of discharge. Successful implementation of the order sheet to drive reconciliation takes communication, perseverance, and a multidisciplinary team approach.

  3. THROUGHPUT ANALYSIS OF EXTENDED ARQ SCHEMES

    African Journals Online (AJOL)

    PUBLICATIONS1

    ABSTRACT. Various Automatic Repeat Request (ARQ) schemes have been used to combat errors that befall information transmitted in digital communication systems. Such schemes include simple ARQ, mixed mode ARQ and Hybrid ARQ (HARQ). In this study we introduce extended ARQ schemes and derive their throughput.

  4. Ponzi scheme diffusion in complex networks

    Science.gov (United States)

    Zhu, Anding; Fu, Peihua; Zhang, Qinghe; Chen, Zhenyue

    2017-08-01

    Ponzi schemes taking the form of Internet-based financial schemes have been negatively affecting China's economy for the last two years. Because there is currently a lack of modeling research on Ponzi scheme diffusion within social networks, we develop a potential-investor-divestor (PID) model to investigate the diffusion dynamics of Ponzi schemes in both homogeneous and inhomogeneous networks. Our simulation study of artificial and real Facebook social networks shows that the structure of investor networks does indeed affect the characteristics of the dynamics. Both the average degree of the distribution and a power-law degree distribution will reduce the critical spreading threshold and will speed up the rate of diffusion. A high speed of diffusion is the key to alleviating the interest burden and improving the financial outcomes for the Ponzi scheme operator. The zero-crossing point of the fund flux function we introduce proves to be a feasible index for reflecting the fast-worsening situation of fiscal instability and predicting the forthcoming collapse. The faster the scheme diffuses, the higher a peak it will reach and the sooner it will collapse. We should keep a vigilant eye on the harm of Ponzi scheme diffusion through modern social networks.
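
    The PID equations are not given in the abstract; the mean-field sketch below is only a guess at the structure (well-mixed rather than networked, with assumed rates and an assumed fund-flux definition), shown to illustrate how the zero crossing of the fund flux can serve as a collapse indicator.

      # Rough mean-field sketch of a potential-investor-divestor (PID) style model.
      # Rates, contact structure, and the fund-flux definition are all assumptions.
      beta, gamma = 0.8, 0.25           # recruitment and divestment rates (assumed)
      deposit, payout = 1.0, 1.3        # average deposit vs. promised payout per person
      P, I, D = 0.99, 0.01, 0.0         # fractions: potential, investor, divestor
      dt, steps, t_cross = 0.05, 2000, None

      for n in range(steps):
          new_inv = beta * P * I * dt                   # new investments this step
          new_div = gamma * I * dt                      # divestments this step
          flux = deposit * new_inv - payout * new_div   # operator's net cash flow
          if t_cross is None and flux < 0.0:
              t_cross = n * dt                          # fund flux crosses zero
          P, I, D = P - new_inv, I + new_inv - new_div, D + new_div

      print("fund flux first turns negative at t =", t_cross)
      print(f"final fractions  P={P:.2f}  I={I:.2f}  D={D:.2f}")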

  5. Free will: A case study in reconciling phenomenological philosophy with reductionist sciences.

    Science.gov (United States)

    Hong, Felix T

    2015-12-01

    Phenomenology aspires to philosophical analysis of humans' subjective experience while it strives to avoid pitfalls of subjectivity. The first step towards naturalizing phenomenology - making phenomenology scientific - is to reconcile phenomenology with modern physics, on the one hand, and with modern cellular and molecular neuroscience, on the other hand. In this paper, free will is chosen for a case study to demonstrate the feasibility. Special attention is paid to maintain analysis with mathematical precision, if possible, and to evade the inherent deceptive power of natural language. Laplace's determinism is re-evaluated along with the concept of microscopic reversibility. A simple and transparent version of proof demonstrates that microscopic reversibility is irreconcilably incompatible with macroscopic irreversibility, contrary to Boltzmann's claim. But the verdict also exalts Boltzmann's statistical mechanics to the new height of a genuine paradigm shift, thus cutting the umbilical cord linking it to Newtonian mechanics. Laplace's absolute determinism must then be replaced with a weaker form of causality called quasi-determinism. Biological indeterminism is also affirmed with numerous lines of evidence. The strongest evidence is furnished by ion channel fluctuations, which obey an indeterministic stochastic phenomenological law. Furthermore, quantum indeterminacy is shown to be relevant in biology, contrary to the opinion of Erwin Schrödinger. In reconciling phenomenology of free will with modern sciences, three issues - alternativism, intelligibility and origination - of free will must be accounted for. Alternativism and intelligibility can readily be accounted for by quasi-determinism. In order to account for origination of free will, the concept of downward causation must be invoked. However, unlike what is commonly believed, there is no evidence that downward causation can influence, shield off, or overpower low-level physical forces already known to

  6. The Performance-based Funding Scheme of Universities

    Directory of Open Access Journals (Sweden)

    Juha KETTUNEN

    2016-05-01

    Full Text Available The purpose of this study is to analyse the effectiveness of the performance-based funding scheme of the Finnish universities that was adopted at the beginning of 2013. The political decision-makers expect that the funding scheme will create incentives for the universities to improve performance, but these funding schemes have largely failed in many other countries, primarily because public funding is only a small share of the total funding of universities. This study is interesting because Finnish universities have no tuition fees, unlike in many other countries, and the state allocates funding based on the objectives achieved. The empirical evidence of the graduation rates indicates that graduation rates increased when a new scheme was adopted, especially among male students, who have more room for improvement than female students. The new performance-based funding scheme allocates the funding according to the output-based indicators and limits the scope of strategic planning and the autonomy of the university. The performance-based funding scheme is transformed to the strategy map of the balanced scorecard. The new funding scheme steers universities in many respects but leaves the research and teaching skills to the discretion of the universities. The new scheme has also diminished the importance of the performance agreements between the university and the Ministry. The scheme increases the incentives for universities to improve the processes and structures in order to attain as much public funding as possible. It is optimal for the central administration of the university to allocate resources to faculties and other organisational units following the criteria of the performance-based funding scheme. The new funding scheme has made the universities compete with each other, because the total funding to the universities is allocated to each university according to the funding scheme. There is a tendency that the funding schemes are occasionally

  7. A Classification Scheme for Literary Characters

    Directory of Open Access Journals (Sweden)

    Matthew Berry

    2017-10-01

    Full Text Available There is no established classification scheme for literary characters in narrative theory short of generic categories like protagonist vs. antagonist or round vs. flat. This is so despite the ubiquity of stock characters that recur across media, cultures, and historical time periods. We present here a proposal of a systematic psychological scheme for classifying characters from the literary and dramatic fields based on a modification of the Thomas-Kilmann (TK) Conflict Mode Instrument used in applied studies of personality. The TK scheme classifies personality along the two orthogonal dimensions of assertiveness and cooperativeness. To examine the validity of a modified version of this scheme, we had 142 participants provide personality ratings for 40 characters using two of the Big Five personality traits as well as assertiveness and cooperativeness from the TK scheme. The results showed that assertiveness and cooperativeness were orthogonal dimensions, thereby supporting the validity of using a modified version of TK’s two-dimensional scheme for classifying characters.

  8. How can conceptual schemes change teaching?

    Science.gov (United States)

    Wickman, Per-Olof

    2012-03-01

    Lundqvist, Almqvist and Östman describe a teacher's manner of teaching and the possible consequences it may have for students' meaning making. In doing this the article examines a teacher's classroom practice by systematizing the teacher's transactions with the students in terms of certain conceptual schemes, namely the epistemological moves, educational philosophies and the selective traditions of this practice. In connection to their study one may ask how conceptual schemes could change teaching. This article examines how the relationship of the conceptual schemes produced by educational researchers to educational praxis has developed from the middle of the last century to today. The relationship is described as having been transformed in three steps: (1) teacher deficit and social engineering, where conceptual schemes are little acknowledged, (2) reflecting practitioners, where conceptual schemes are mangled through teacher practice to aid the choices of already knowledgeable teachers, and (3) the mangling of the conceptual schemes by researchers through practice with the purpose of revising theory.

  9. An Arbitrated Quantum Signature Scheme without Entanglement*

    International Nuclear Information System (INIS)

    Li Hui-Ran; Luo Ming-Xing; Peng Dai-Yuan; Wang Xiao-Jun

    2017-01-01

    Several quantum signature schemes have recently been proposed to realize secure signatures of quantum or classical messages. Arbitrated quantum signature, as one nontrivial scheme, has attracted great interest because of its usefulness and efficiency. Unfortunately, previous schemes cannot resist Trojan horse and DoS attacks and lack unforgeability and non-repudiation. In this paper, we propose an improved arbitrated quantum signature to address these security issues with an honest arbitrator. Our scheme makes use of qubit states rather than entangled states. More importantly, the qubit scheme can achieve unforgeability and non-repudiation. Our scheme is also secure against other known quantum attacks. (paper)

  10. An Integrated H-G Scheme Identifying Areas for Soil Remediation and Primary Heavy Metal Contributors: A Risk Perspective

    OpenAIRE

    Bin Zou; Xiaolu Jiang; Xiaoli Duan; Xiuge Zhao; Jing Zhang; Jingwen Tang; Guoqing Sun

    2017-01-01

    Traditional sampling for soil pollution evaluation is cost intensive and has limited representativeness. Therefore, developing methods that can accurately and rapidly identify at-risk areas and the contributing pollutants is imperative for soil remediation. In this study, we propose an innovative integrated H-G scheme combining human health risk assessment and geographical detector methods that was based on geographical information system technology and validated its feasibility in a renewabl...

  11. New PDE-based methods for image enhancement using SOM and Bayesian inference in various discretization schemes

    International Nuclear Information System (INIS)

    Karras, D A; Mertzios, G B

    2009-01-01

    A novel approach is presented in this paper for improving anisotropic diffusion PDE models, based on the Perona–Malik equation. A solution is proposed from an engineering perspective to adaptively estimate the parameters of the regularizing function in this equation. The goal of such a new adaptive diffusion scheme is to better preserve edges when the anisotropic diffusion PDE models are applied to image enhancement tasks. The proposed adaptive parameter estimation in the anisotropic diffusion PDE model involves self-organizing maps and Bayesian inference to define edge probabilities accurately. The proposed modifications attempt to capture not only simple edges but also difficult textural edges and incorporate their probability in the anisotropic diffusion model. In the context of the application of PDE models to image processing such adaptive schemes are closely related to the discrete image representation problem and the investigation of more suitable discretization algorithms using constraints derived from image processing theory. The proposed adaptive anisotropic diffusion model illustrates these concepts when it is numerically approximated by various discretization schemes in a database of magnetic resonance images (MRI), where it is shown to be efficient in image filtering and restoration applications
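
    For orientation, here is the plain Perona-Malik diffusion that the proposed SOM/Bayesian scheme adapts; the conductance parameter kappa is fixed here, whereas the paper estimates the regularizing function adaptively from edge probabilities. Periodic borders (via np.roll) are used for brevity; the test image is synthetic.

      import numpy as np

      # Baseline Perona-Malik anisotropic diffusion (explicit scheme, fixed kappa).
      def perona_malik(img, n_iter=30, dt=0.15, kappa=15.0):
          u = img.astype(float).copy()
          for _ in range(n_iter):
              # differences to the four neighbours (periodic borders via roll, for brevity)
              dN = np.roll(u, 1, axis=0) - u
              dS = np.roll(u, -1, axis=0) - u
              dE = np.roll(u, -1, axis=1) - u
              dW = np.roll(u, 1, axis=1) - u
              # edge-stopping function g(s) = exp(-(s/kappa)^2) applied per direction
              u += dt * (np.exp(-(dN / kappa) ** 2) * dN + np.exp(-(dS / kappa) ** 2) * dS
                         + np.exp(-(dE / kappa) ** 2) * dE + np.exp(-(dW / kappa) ** 2) * dW)
          return u

      rng = np.random.default_rng(0)
      step_img = 100.0 * (np.arange(64)[:, None] > 32) + 10.0 * rng.standard_normal((64, 64))
      smoothed = perona_malik(step_img)
      flat = (slice(0, 30), slice(None))                   # a region away from the edge
      print("flat-region std before/after:", step_img[flat].std(), smoothed[flat].std())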

  12. Breeding schemes in reindeer husbandry

    Directory of Open Access Journals (Sweden)

    Lars Rönnegård

    2003-04-01

    Full Text Available The objective of the paper was to investigate annual genetic gain from selection (G), and the influence of selection on the inbreeding effective population size (Ne), for different possible breeding schemes within a reindeer herding district. The breeding schemes were analysed for different proportions of the population within a herding district included in the selection programme. Two different breeding schemes were analysed: an open nucleus scheme where males mix and mate between owner flocks, and a closed nucleus scheme where the males in non-selected owner flocks are culled to maximise G in the whole population. The theory of expected long-term genetic contributions was used and maternal effects were included in the analyses. Realistic parameter values were used for the population, modelled with 5000 reindeer in the population and a sex ratio of 14 adult females per male. The standard deviation of calf weights was 4.1 kg. Four different situations were explored and the results showed: 1. When the population was randomly culled, Ne equalled 2400. 2. When the whole population was selected on calf weights, Ne equalled 1700 and the total annual genetic gain (direct + maternal) in calf weight was 0.42 kg. 3. For the open nucleus scheme, G increased monotonically from 0 to 0.42 kg as the proportion of the population included in the selection programme increased from 0 to 1.0, and Ne decreased correspondingly from 2400 to 1700. 4. In the closed nucleus scheme the lowest value of Ne was 1300. For a given proportion of the population included in the selection programme, the difference in G between a closed nucleus scheme and an open one was up to 0.13 kg. We conclude that for mass selection based on calf weights in herding districts with 2000 animals or more, there are no risks of inbreeding effects caused by selection.

  13. Quantum Secure Communication Scheme with W State

    International Nuclear Information System (INIS)

    Wang Jian; Zhang Quan; Tang Chaojng

    2007-01-01

    We present a quantum secure communication scheme using three-qubit W state. It is unnecessary for the present scheme to use alternative measurement or Bell basis measurement. Compared with the quantum secure direct communication scheme proposed by Cao et al. [H.J. Cao and H.S. Song, Chin. Phys. Lett. 23 (2006) 290], in our scheme, the detection probability for an eavesdropper's attack increases from 8.3% to 25%. We also show that our scheme is secure for a noise quantum channel.

  14. Optimum RA reactor fuelling scheme

    International Nuclear Information System (INIS)

    Strugar, P.; Nikolic, V.

    1965-10-01

    An ideal reactor refueling scheme can be achieved only by continuous movement of fuel elements in the core, which is not possible, and thus approximations are applied. One possible approximation is discontinuous movement of groups of fuel elements in the radial direction. This enables higher burnup, especially if axial exchange is also possible. Analysis of refueling schemes in the RA reactor core and of schemes that mix fresh and used fuel elements shows that 30% higher burnup can be achieved by applying mixing, and even 40% if the reactivity gained by reducing the experimental space is taken into account. Up to now, a mean burnup of 4400 MWd/t has been achieved, and the proposed fueling scheme with reduced experimental space could achieve a mean burnup of 6300 MWd/t, which means about 25 MWd/t per fuel channel.

  15. Reconciling the expectations of community participants with the requirements of non-fossil fuel obligation: the experience of Harlock Hill windfarm

    International Nuclear Information System (INIS)

    Harrop, J.

    1998-01-01

    Is it possible to reconcile the aspirations of community participants in a wind energy project with the requirements imposed by the Non-Fossil Fuel Obligation legislation and procedure? This paper considers the practical experience of the framework that was adopted at Harlock Hill wind farm for community participation and the legal structures that were required to ensure that the project retained the full benefit of the premium price arrangements with the Non-Fossil Purchasing Agency Limited. (Author)

  16. Accurately bi-orthogonal direct and adjoint lambda modes via two-sided Eigen-solvers

    International Nuclear Information System (INIS)

    Roman, J.E.; Vidal, V.; Verdu, G.

    2005-01-01

    This work is concerned with the accurate computation of the dominant λ-modes (Lambda modes) of the reactor core in order to approximate the solution of the neutron diffusion equation in different situations such as transient modal analysis. In a previous work, the problem was already addressed by implementing a parallel program based on SLEPc (Scalable Library for Eigenvalue Problem Computations), a public domain software library for the solution of eigenvalue problems. Now, the proposed solution is extended by also incorporating the computation of the adjoint λ-modes in such a way that the bi-orthogonality condition is enforced very accurately. This feature is very desirable in some types of analyses, and in the proposed scheme it is achieved by making use of two-sided eigenvalue solving software. Current implementations of some of this software, while still open to improvement, show that they can be competitive in terms of response time and accuracy with respect to other types of eigenvalue solving software. The code developed by the authors has parallel capabilities in order to be able to analyze reactors with a great level of detail in a short time. (authors)
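
    The bi-orthogonality condition referred to above can be illustrated outside SLEPc with a small dense example. The sketch below, which assumes NumPy and SciPy and uses a random matrix as a stand-in for the discretized lambda-mode operator, computes direct (right) and adjoint (left) eigenvectors with scipy.linalg.eig and checks that they are bi-orthogonal after pairing and normalization.

        # Minimal NumPy/SciPy sketch (not the SLEPc-based parallel solver of the record):
        # compute the direct (right) and adjoint (left) eigenvectors of a small
        # non-symmetric matrix and check the bi-orthogonality condition
        # vl_i^H . vr_j ~ 0 for i != j, then normalize so that vl_i^H . vr_i = 1.
        import numpy as np
        from scipy.linalg import eig

        rng = np.random.default_rng(0)
        A = rng.standard_normal((6, 6))             # stand-in for a discretized lambda-mode operator

        w, vl, vr = eig(A, left=True, right=True)   # vl: adjoint modes, vr: direct modes

        G = vl.conj().T @ vr                        # should be (numerically) diagonal
        print("max off-diagonal:", np.abs(G - np.diag(np.diag(G))).max())

        vr = vr / np.diag(G)                        # enforce vl_i^H . vr_i = 1
        print("bi-orthogonality error:", np.abs(vl.conj().T @ vr - np.eye(6)).max())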

  17. Accurately bi-orthogonal direct and adjoint lambda modes via two-sided Eigen-solvers

    Energy Technology Data Exchange (ETDEWEB)

    Roman, J.E.; Vidal, V. [Valencia Univ. Politecnica, D. Sistemas Informaticos y Computacion (Spain); Verdu, G. [Valencia Univ. Politecnica, D. Ingenieria Quimica y Nuclear (Spain)

    2005-07-01

    This work is concerned with the accurate computation of the dominant λ-modes (Lambda modes) of the reactor core in order to approximate the solution of the neutron diffusion equation in different situations such as transient modal analysis. In a previous work, the problem was already addressed by implementing a parallel program based on SLEPc (Scalable Library for Eigenvalue Problem Computations), a public domain software library for the solution of eigenvalue problems. Now, the proposed solution is extended by also incorporating the computation of the adjoint λ-modes in such a way that the bi-orthogonality condition is enforced very accurately. This feature is very desirable in some types of analyses, and in the proposed scheme it is achieved by making use of two-sided eigenvalue solving software. Current implementations of some of this software, while still open to improvement, show that they can be competitive in terms of response time and accuracy with respect to other types of eigenvalue solving software. The code developed by the authors has parallel capabilities in order to be able to analyze reactors with a great level of detail in a short time. (authors)

  18. Student’s scheme in solving mathematics problems

    Science.gov (United States)

    Setyaningsih, Nining; Juniati, Dwi; Suwarsono

    2018-03-01

    The purpose of this study was to investigate students’ schemes in solving mathematics problems. Schemes are data structures for representing the concepts stored in memory. In this study, we used them in solving mathematics problems, especially on ratio and proportion topics. A scheme is related to problem solving in that it assumes a system is developed in the human mind by acquiring a structure in which problem solving procedures are integrated with some concepts. The data were collected by interview and from students’ written work. The results of this study revealed the following students’ schemes in solving ratio and proportion problems: (1) the content scheme, where students can describe the selected components of the problem according to their prior knowledge, (2) the formal scheme, where students can construct a mental model based on the components selected from the problem and can use existing schemes to build planning steps and create what will be used to solve the problem, and (3) the language scheme, where students can identify terms or symbols of the components of the problem. Therefore, by using different strategies to solve the problems, the students’ schemes in solving ratio and proportion problems will also differ.

  19. Hybrid modulation scheme for cascaded H-bridge inverter cells

    African Journals Online (AJOL)

    eobe

    Abstract excerpt: verification of the proposed control technique for cascaded H-bridge inverter cells (C. I. Odeh) is carried out through simulations in the MATLAB/SIMULINK environment.

  20. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case
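
    As a rough illustration of the conservative level set ingredient described above (not the full two-phase solver of the record), the following NumPy sketch relaxes a 1D hyperbolic-tangent profile with the Olsson-Kreiss re-initialization equation and checks that the integral of the level set function is essentially conserved. Grid size, interface thickness and time step are arbitrary choices, and the simple central differencing stands in for the high-order conservative schemes used in the paper.

        # 1D sketch of conservative level set re-initialization (Olsson & Kreiss).
        # The profile psi = 0.5*(1 + tanh((x - x0)/(2*eps))) is the steady state of
        #   d(psi)/dtau + d/dx[ psi*(1 - psi)*n ] = eps * d2(psi)/dx2,   n = +1 in 1D.
        import numpy as np

        N, L, eps = 200, 1.0, 0.01
        dx = L / N
        x = (np.arange(N) + 0.5) * dx
        x0 = 0.5

        psi = 0.5 * (1.0 + np.tanh((x - x0) / (8.0 * eps)))   # over-smeared initial interface
        mass0 = psi.sum() * dx

        dtau = 0.1 * dx**2 / eps                               # within explicit diffusion limit
        for _ in range(2000):
            flux = psi * (1.0 - psi) - eps * np.gradient(psi, dx)   # compressive - diffusive flux
            psi -= dtau * np.gradient(flux, dx)

        target = 0.5 * (1.0 + np.tanh((x - x0) / (2.0 * eps)))
        print("mass change:", abs(psi.sum() * dx - mass0))
        print("max deviation from tanh profile:", np.abs(psi - target).max())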

  1. Towards Symbolic Encryption Schemes

    DEFF Research Database (Denmark)

    Ahmed, Naveed; Jensen, Christian D.; Zenner, Erik

    2012-01-01

    Symbolic encryption, in the style of Dolev-Yao models, is ubiquitous in formal security models. In its common use, encryption on a whole message is specified as a single monolithic block. From a cryptographic perspective, however, this may require a resource-intensive cryptographic algorithm, namely an authenticated encryption scheme that is secure under chosen ciphertext attack. Therefore, many reasonable encryption schemes, such as AES in the CBC or CFB mode, are not among the implementation options. In this paper, we report new attacks on CBC and CFB based implementations of the well-known Needham-Schroeder and Denning-Sacco protocols. To avoid such problems, we advocate the use of refined notions of symbolic encryption that have natural correspondence to standard cryptographic encryption schemes.
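
    The cryptographic point above, that a mode such as AES-CBC is malleable and therefore not an authenticated encryption scheme, can be illustrated with the classic bit-flipping property of CBC. The sketch below assumes a recent version of the pyca/cryptography package; it is a generic illustration, not a reproduction of the reported protocol attacks.

        # CBC malleability: flipping one ciphertext bit flips the same bit in the
        # *next* plaintext block after decryption, and no integrity error is raised.
        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key, iv = os.urandom(16), os.urandom(16)
        pt = b"block0_16bytes!!" + b"transfer $0001.."          # two 16-byte blocks

        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ct = enc.update(pt) + enc.finalize()

        tampered = bytearray(ct)
        tampered[9] ^= 0x01                                      # flip a bit in ciphertext block 0

        dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        pt2 = dec.update(bytes(tampered)) + dec.finalize()
        print(pt2[16:])   # plaintext block 1 has byte 9 altered; block 0 decrypts to garbage instead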

  2. Reconciling Conflicting Geologic and Thermochronologic Interpretations Via Multiple Apatite Thermochronometers (AHe, AFT, and 4He/3He): 6 Ma Incision of the Westernmost Grand Canyon

    Science.gov (United States)

    Winn, C.; Karlstrom, K. E.; Shuster, D. L.; Kelley, S.; Fox, M.

    2017-12-01

    The application of low-temperature apatite thermochronology to the incision history of the Grand Canyon has led to conflicting hypotheses of either a 70 Ma ("old") or a 6 Ma ("young") westernmost Grand Canyon; some published thermochronologic interpretations conflict with other lines of evidence and indicate a much older (~70 Ma) westernmost Grand Canyon. We reconcile this conflict by applying apatite (U-Th)/He ages (AHe), 4He/3He thermochronometry, and apatite fission track ages and lengths (AFT) to the same sample at a key location. Using HeFTy, t-T paths that predict these data show cooling from ~100 °C to 40-60 °C at 70-50 Ma, long-term residence at 40-60 °C from 50-10 Ma, and cooling to surface temperatures after 10 Ma, indicating young incision. New AFT (5) and AHe (3) datasets are also presented here. When datasets are examined separately, AHe data show t-T paths that cool to surface temperatures during the Laramide, consistent with an "old" Canyon. When multiple methods are applied, t-T paths instead show young incision. This inconsistency illustrates the age-of-the-Grand-Canyon controversy. Here we reconcile the difference in t-T paths by adjusting model parameters to account for uncertainty in the rate of radiation damage annealing in apatite during burial heating and the resulting variations in He retentivity. In this area, peak burial conditions during the Laramide were likely insufficient to fully anneal radiation damage that accumulated during prolonged near-surface residence prior to burial. We conclude that application of multiple thermochronometers from common rocks reconciles conflicting thermochronologic interpretations and these data are best explained by a "young" westernmost Grand Canyon.

  3. Improvement of Modeling Scheme of the Safety Injection Tank with Fluidic Device for Realistic LBLOCA Calculation

    International Nuclear Information System (INIS)

    Bang, Young Seok; Cheong, Aeju; Woo, Sweng Woong

    2014-01-01

    Confirmation of the performance of the SIT with FD should be based on thermal-hydraulic analysis of LBLOCA, and an adequate physical model simulating the SIT/FD should be used in the LBLOCA calculation. To develop such a physical model of the SIT/FD, simulation of the major phenomena, including the flow distribution by the standpipe and FD, should be justified by full scale experiment and/or plant preoperational testing. The authors' previous study indicated that an approximation of SIT/FD phenomena could be obtained with a typical system transient code, MARS-KS, using the 'accumulator' component model, but that additional improvement of the modeling scheme for the FD and standpipe flow paths was needed for a reasonable prediction. One problem was the depressurizing behavior after switchover to the low flow injection phase. Also, the potential release of nitrogen gas from the SIT to the downstream pipe, and then to the reactor core through the flow paths of the FD and standpipe, has been a concern. The intrusion of noncondensible gas may have an effect on the LBLOCA thermal response. Therefore, a more reliable SIT/FD model has been requested to obtain a more accurate prediction and confidence in the evaluation of LBLOCA. The present paper discusses an improvement of the modeling scheme relative to the previous study. Compared to the existing modeling, the effect of the present modeling scheme on the LBLOCA cladding thermal response is discussed. The present study discussed the modeling scheme of the SIT with FD for a realistic simulation of the LBLOCA of APR1400. Currently, the SIT blowdown test can be best simulated by the modeling scheme using a 'pipe' component with dynamic area reduction. The LBLOCA analysis adopting this modeling scheme showed a PCT increase of 23 K when compared to the case of the 'accumulator' component model, which was due to the flow rate decrease at the transition to the low flow injection phase and the intrusion of nitrogen gas into the core. Accordingly, the effect of SIT/FD modeling

  4. Setting aside transactions from pyramid schemes as impeachable ...

    African Journals Online (AJOL)

    These schemes, which are often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find ...

  5. Assessment of Planetary-Boundary-Layer Schemes in the Weather Research and Forecasting Model Within and Above an Urban Canopy Layer

    Science.gov (United States)

    Ferrero, Enrico; Alessandrini, Stefano; Vandenberghe, Francois

    2018-03-01

    We tested several planetary-boundary-layer (PBL) schemes available in the Weather Research and Forecasting (WRF) model against measured wind speed and direction, temperature and turbulent kinetic energy (TKE) at three levels (5, 9, 25 m). The Urban Turbulence Project dataset, gathered from the outskirts of Turin, Italy and used for the comparison, provides measurements made by sonic anemometers for more than 1 year. In contrast to other similar studies, which have mainly focused on short-time periods, we considered 2 months of measurements (January and July) representing both the seasonal and the daily variabilities. To understand how the WRF-model PBL schemes perform in an urban environment, often characterized by low wind-speed conditions, we first compared six PBL schemes against observations taken by the highest anemometer located in the inertial sub-layer. The availability of the TKE measurements allows us to directly evaluate the performances of the model; results of the model evaluation are presented in terms of quantile versus quantile plots and statistical indices. Secondly, we considered WRF-model PBL schemes that can be coupled to the urban-surface exchange parametrizations and compared the simulation results with measurements from the two lower anemometers located inside the canopy layer. We find that the PBL schemes accounting for TKE are more accurate and the model representation of the roughness sub-layer improves when the urban model is coupled to each PBL scheme.

  6. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin

    2015-05-05

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert-Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations cannot be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.
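
    A toy version of the extrapolation idea described above (not the authors' MOT solver) can be written in a few lines of NumPy: extrapolation coefficients are trained by least squares so that they predict the next sample of a family of decaying and oscillating exponentials, i.e. signals whose exponents lie in the left half of the complex frequency plane, and are then applied to an unseen damped oscillation. The exponents, sample spacing and filter length below are arbitrary.

        # Train extrapolation coefficients c so that x[n] ~ sum_k c[k]*x[n-1-k] holds
        # for a family of damped/oscillating exponentials, then predict a "future"
        # sample of a mode that was not in the training set.
        import numpy as np

        dt, p = 0.05, 8                                    # time step, number of past samples
        t = dt * np.arange(p + 1)                          # p past samples + one future sample

        rows, rhs = [], []
        for sigma in (-0.5, -2.0, -5.0):                   # decay rates (left half-plane)
            for omega in np.linspace(0.0, 20.0, 30):       # oscillation frequencies
                for phase in (0.0, np.pi / 3):
                    x = np.exp(sigma * t) * np.cos(omega * t + phase)
                    rows.append(x[:p][::-1])               # most recent past sample first
                    rhs.append(x[p])
        c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

        x = np.exp(-1.1 * t) * np.cos(7.3 * t + 0.4)       # unseen decaying oscillation
        print("predicted:", np.dot(c, x[:p][::-1]), " true:", x[p])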

  7. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin; Bagci, Hakan

    2015-01-01

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert-Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations cannot be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.

  8. Renormalization scheme-invariant perturbation theory

    International Nuclear Information System (INIS)

    Dhar, A.

    1983-01-01

    A complete solution to the problem of the renormalization scheme dependence of perturbative approximants to physical quantities is presented. An equation is derived which determines any physical quantity implicitly as a function of only scheme independent variables. (orig.)

  9. Nonlinear secret image sharing scheme.

    Science.gov (United States)

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although Shamir's technique based secret image sharing schemes are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic in order to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.
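
    Two of the evaluation ingredients named above, LSB embedding combined with an XOR mask and the PSNR between cover and stego images, are easy to sketch. The NumPy fragment below is only a generic illustration of these ingredients, not the authors' (t, n)-threshold sharing procedure; the image and keystream are random stand-ins.

        # Generic LSB embedding with an XOR mask, plus the PSNR quality metric.
        import numpy as np

        rng = np.random.default_rng(1)
        cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # toy cover image
        share_bits = rng.integers(0, 2, size=cover.size, dtype=np.uint8)
        keystream = rng.integers(0, 2, size=cover.size, dtype=np.uint8)

        stego = cover.reshape(-1).copy()
        stego = (stego & 0xFE) | (share_bits ^ keystream)             # replace LSB with masked bit
        stego = stego.reshape(cover.shape)

        mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
        psnr = 10 * np.log10(255.0 ** 2 / mse)
        print(f"embedding capacity: 1 bpp, PSNR = {psnr:.2f} dB")     # ~51 dB for 1-bit LSB changes

        recovered = (stego.reshape(-1) & 1) ^ keystream               # extraction with the same keystream
        assert np.array_equal(recovered, share_bits)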

  10. Good governance for pension schemes

    CERN Document Server

    Thornton, Paul

    2011-01-01

    Regulatory and market developments have transformed the way in which UK private sector pension schemes operate. This has increased demands on trustees and advisors and the trusteeship governance model must evolve in order to remain fit for purpose. This volume brings together leading practitioners to provide an overview of what today constitutes good governance for pension schemes, from both a legal and a practical perspective. It provides the reader with an appreciation of the distinctive characteristics of UK occupational pension schemes, how they sit within the capital markets and their social and fiduciary responsibilities. Providing a holistic analysis of pension risk, both from the trustee and the corporate perspective, the essays cover the crucial role of the employer covenant, financing and investment risk, developments in longevity risk hedging and insurance de-risking, and best practice scheme administration.

  11. A Method for Capturing and Reconciling Stakeholder Intentions Based on the Formal Concept Analysis

    Science.gov (United States)

    Aoyama, Mikio

    Information systems are ubiquitous in our daily life. Thus, information systems need to work appropriately anywhere at any time for everybody. Conventional information systems engineering tends to engineer systems from the viewpoint of system functionality. However, the diversity of usage contexts requires a fundamental change in our current thinking on information systems: from the functionality the systems provide to the goals the systems should achieve. The intentional approach embraces the goals and related aspects of information systems. This chapter presents a method for capturing, structuring and reconciling the diverse goals of multiple stakeholders. The heart of the method lies in the hierarchical structuring of goals by a goal lattice based on formal concept analysis, a semantic extension of lattice theory. We illustrate the effectiveness of the presented method through application to self-checkout systems for large-scale supermarkets.
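
    The goal lattice mentioned above rests on formal concept analysis, where every pair (extent, intent) closed under the derivation operators is a formal concept. The following sketch enumerates the concepts of a tiny, invented stakeholder/goal context by brute force; the stakeholder and goal names are illustrative only.

        # Brute-force enumeration of formal concepts (extent, intent) for a made-up
        # stakeholder/goal context, illustrating the lattice the method builds on.
        from itertools import combinations

        goals = {                                   # stakeholder -> goals it cares about
            "customer": {"fast checkout", "privacy"},
            "cashier":  {"fast checkout", "simple UI"},
            "manager":  {"fast checkout", "privacy", "audit trail"},
        }
        attrs = set().union(*goals.values())

        def extent(B):                              # stakeholders sharing all goals in B
            return {s for s, g in goals.items() if B <= g}

        def intent(A):                              # goals shared by all stakeholders in A
            return set.intersection(*(goals[s] for s in A)) if A else set(attrs)

        concepts = set()
        for r in range(len(attrs) + 1):
            for B in combinations(sorted(attrs), r):
                A = extent(set(B))
                concepts.add((frozenset(A), frozenset(intent(A))))

        for A, B in sorted(concepts, key=lambda c: len(c[0])):
            print(sorted(A), "<->", sorted(B))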

  12. Symmetric weak ternary quantum homomorphic encryption schemes

    Science.gov (United States)

    Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao

    2016-03-01

    Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, the two-qutrit QHE scheme about generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that the attacker can correctly guess the encryption key with a maximum probability pk = 1/3^(3n), thus it can better protect the privacy of users’ data. Moreover, these schemes can be well integrated into the future quantum remote server architecture, and thus the computational security of the users’ private quantum information can be well protected in a distributed computing environment.

  13. Labelling schemes: From a consumer perspective

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Stacey, Julia

    2000-01-01

    Labelling of food products attracts a lot of political attention these days. As a result of a number of food scandals, most European countries have acknowledged the need for more information and better protection of consumers. Labelling schemes are one way of informing and guiding consumers. However, initiatives in relation to labelling schemes seldom take their point of departure in consumers' needs and expectations; and in many cases, the schemes are defined by the institutions guaranteeing the label. It is therefore interesting to study how consumers actually value labelling schemes. A recent MAPP study has investigated the value consumers attach to the Government-controlled labels 'Ø-mærket' and 'Den Blå Lup' and the private supermarket label 'Mesterhakket' when they purchase minced meat. The results reveal four consumer segments that use labelling schemes for food products very...

  14. A study of upwind schemes on the laminar hypersonic heating predictions for the reusable space vehicle

    Science.gov (United States)

    Qu, Feng; Sun, Di; Zuo, Guang

    2018-06-01

    With the rapid development of Computational Fluid Dynamics (CFD), accurate computation of hypersonic heating is in high demand for the design of the new generation of reusable space vehicles for deep space exploration. In past years, most researchers have tried to solve this problem by concentrating on the choice of the upwind scheme or the definition of the cell Reynolds number. However, the cell Reynolds number dependencies and limiter dependencies of the upwind schemes, which are of great importance to their performance in hypersonic heating computations, have received little attention. In this paper, we conduct a systematic study of these properties. Results in our test cases show that SLAU (Simple Low-dissipation AUSM-family) achieves a much higher level of accuracy and robustness in hypersonic heating predictions. It also performs much better in terms of the limiter dependency and the cell Reynolds number dependency.

  15. LPTA: location predictive and time adaptive data gathering scheme with mobile sink for wireless sensor networks.

    Science.gov (United States)

    Zhu, Chuan; Wang, Yao; Han, Guangjie; Rodrigues, Joel J P C; Lloret, Jaime

    2014-01-01

    This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets in a timely manner toward the mobile sink by multihop relay. Considering that the data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less data transmission time delay and balances energy consumption among nodes.
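
    The prediction step described above can be pictured with a minimal sketch: if the sink follows a known path at constant speed, a node can turn its loosely synchronized local clock into the sink's predicted position through the published time-location formula. The path geometry, speed and clock offset below are invented for illustration and are not taken from the record.

        # A node predicts the mobile sink's position from its local clock, assuming a
        # published time-location formula for a sink traversing a square path.
        def sink_position(t, side=100.0, speed=2.0):
            """Position of a sink that traverses a square of given side at constant speed."""
            perim = 4 * side
            d = (speed * t) % perim                  # distance travelled along the path
            if d < side:          return (d, 0.0)
            if d < 2 * side:      return (side, d - side)
            if d < 3 * side:      return (side - (d - 2 * side), side)
            return (0.0, side - (d - 3 * side))

        local_clock = 137.4        # node's local time (s), loosely synchronized with the sink
        clock_offset = 0.3         # residual synchronization error the scheme must tolerate
        predicted = sink_position(local_clock)
        actual = sink_position(local_clock + clock_offset)
        print("predicted:", predicted, "actual:", actual)   # small offset -> small position error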

  16. Analysis of central and upwind compact schemes

    International Nuclear Information System (INIS)

    Sengupta, T.K.; Ganeriwal, G.; De, S.

    2003-01-01

    Central and upwind compact schemes for spatial discretization have been analyzed with respect to accuracy in spectral space, numerical stability and dispersion relation preservation. A von Neumann matrix spectral analysis is developed here to analyze spatial discretization schemes for any explicit and implicit schemes to investigate the full domain simultaneously. This allows one to evaluate various boundary closures and their effects on the domain interior. The same method can be used for stability analysis performed for the semi-discrete initial boundary value problems (IBVP). This analysis tells one about the stability for every resolved length scale. Some well-known compact schemes that were found to be G-K-S and time stable are shown here to be unstable for selective length scales by this analysis. This is attributed to boundary closure and we suggest special boundary treatment to remove this shortcoming. To demonstrate the asymptotic stability of the resultant schemes, numerical solution of the wave equation is compared with analytical solution. Furthermore, some of these schemes are used to solve two-dimensional Navier-Stokes equation and a computational acoustic problem to check their ability to solve problems for long time. It is found that those schemes, that were found unstable for the wave equation, are unsuitable for solving incompressible Navier-Stokes equation. In contrast, the proposed compact schemes with improved boundary closure and an explicit higher-order upwind scheme produced correct results. The numerical solution for the acoustic problem is compared with the exact solution and the quality of the match shows that the used compact scheme has the requisite DRP property
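
    The simplest form of the spectral-accuracy analysis discussed above is the modified wavenumber of a periodic scheme; the sketch below evaluates it for the classical fourth-order Padé compact first-derivative scheme and for a second-order central difference. The full-domain matrix analysis with boundary closures described in the record goes well beyond this single-scheme check.

        # Modified wavenumber of the classical 4th-order Pade compact scheme
        #   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2) (f_{i+1} - f_{i-1}) / (2h),
        # which gives k'h = a*sin(kh) / (1 + 2*alpha*cos(kh)), compared with exact
        # differentiation (k'h = kh) and with a 2nd-order central difference.
        import numpy as np

        kh = np.linspace(0.01, np.pi, 200)
        alpha, a = 0.25, 1.5
        kh_compact = a * np.sin(kh) / (1.0 + 2.0 * alpha * np.cos(kh))
        kh_central2 = np.sin(kh)

        for name, kmod in [("compact-4", kh_compact), ("central-2", kh_central2)]:
            ok = kh[np.abs(kmod - kh) / kh < 0.01]          # resolved to within 1%
            print(f"{name}: resolves up to kh ~ {ok.max():.2f} (of pi = {np.pi:.2f})")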

  17. Reconciling the self and morality: an empirical model of moral centrality development.

    Science.gov (United States)

    Frimer, Jeremy A; Walker, Lawrence J

    2009-11-01

    Self-interest and moral sensibilities generally compete with one another, but for moral exemplars, this tension appears to not be in play. This study advances the reconciliation model, which explains this anomaly within a developmental framework by positing that the relationship between the self's interests and moral concerns ideally transforms from one of mutual competition to one of synergy. The degree to which morality is central to an individual's identity-or moral centrality-was operationalized in terms of values advanced implicitly in self-understanding narratives; a measure was developed and then validated. Participants were 97 university students who responded to a self-understanding interview and to several measures of morally relevant behaviors. Results indicated that communal values (centered on concerns for others) positively predicted and agentic (self-interested) values negatively predicted moral behavior. At the same time, the tendency to coordinate both agentic and communal values within narrative thought segments positively predicted moral behavior, indicating that the 2 motives can be adaptively reconciled. Moral centrality holds considerable promise in explaining moral motivation and its development.

  18. The Improved NRL Tropical Cyclone Monitoring System with a Unified Microwave Brightness Temperature Calibration Scheme

    Directory of Open Access Journals (Sweden)

    Song Yang

    2014-05-01

    Full Text Available The near real-time NRL global tropical cyclone (TC) monitoring system based on multiple satellite passive microwave (PMW) sensors is improved with a new inter-sensor calibration scheme to correct the biases caused by differences in these sensors’ high frequency channels. Since the PMW sensor 89 GHz channel is used in multiple current and near future operational and research satellites, a unified scheme to calibrate all satellite PMW sensors’ ice scattering channels to a common 89 GHz is created so that their brightness temperatures (TBs) will be consistent and permit more accurate manual and automated analyses. In order to develop a physically consistent calibration scheme, cloud resolving model simulations of a squall line system over the west Pacific coast and hurricane Bonnie in the Atlantic Ocean are applied to simulate the views from different PMW sensors. To clarify the complicated TB biases due to the competing nature of scattering and emission effects, a four-cloud based calibration scheme is developed (rain, non-rain, light rain, and cloudy). This new physically consistent inter-sensor calibration scheme is then evaluated with the synthetic TBs of hurricane Bonnie and a squall line as well as observed TCs. Results demonstrate that the large TB biases, up to 13 K for heavy rain situations before calibration between TMI and AMSR-E, are reduced to less than 3 K after calibration. The comparison statistics show that the overall bias and RMSE are reduced by 74% and 66% for hurricane Bonnie, and 98% and 85% for squall lines, respectively. For the observed hurricane Igor, the bias and RMSE decrease 41% and 25%, respectively. This study demonstrates the importance of TB calibrations between PMW sensors in order to systematically monitor the global TC life cycles in terms of intensity, inner core structure and convective organization. A physics-based calibration scheme for TC TB corrections developed in this study is able to significantly reduce the

  19. Analysis of Program Obfuscation Schemes with Variable Encoding Technique

    Science.gov (United States)

    Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi

    Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
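
    The basic mechanism behind variable encoding with XOR can be shown in a few lines: the variable is stored only in masked form and unmasked at each use. The toy fragment below illustrates the mechanism only; it says nothing about the resistance properties analyzed in the record, and the key and values are arbitrary.

        # Toy XOR-encoding of a program variable: the value lives only in encoded
        # form, and every use decodes it on the fly.
        KEY = 0xA5A5

        def encode(x: int) -> int:
            return x ^ KEY

        def decode(x_enc: int) -> int:
            return x_enc ^ KEY

        # Original program:  total = price * qty
        price_enc, qty_enc = encode(250), encode(3)          # variables held only in encoded form
        total_enc = encode(decode(price_enc) * decode(qty_enc))
        print(decode(total_enc))                             # 750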

  20. Efficient multiparty quantum-secret-sharing schemes

    International Nuclear Information System (INIS)

    Xiao Li; Deng Fuguo; Long Guilu; Pan Jianwei

    2004-01-01

    In this work, we generalize the quantum-secret-sharing scheme of Hillery, Buzek, and Berthiaume [Phys. Rev. A 59, 1829 (1999)] to arbitrary multiparties. Explicit expressions for the shared secret bit are given. It is shown that in the Hillery-Buzek-Berthiaume quantum-secret-sharing scheme the secret information is shared in the parity of binary strings formed by the measured outcomes of the participants. In addition, we have increased the efficiency of the quantum-secret-sharing scheme by generalizing two techniques from quantum key distribution. The favored-measuring-basis quantum-secret-sharing scheme is developed from the Lo-Chau-Ardehali technique [H. K. Lo, H. F. Chau, and M. Ardehali, e-print quant-ph/0011056] where all the participants choose their measuring basis asymmetrically, and the measuring-basis-encrypted quantum-secret-sharing scheme is developed from the Hwang-Koh-Han technique [W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A 244, 489 (1998)] where all participants choose their measuring basis according to a control key. Both schemes are asymptotically 100% efficient; hence nearly all the Greenberger-Horne-Zeilinger states in a quantum-secret-sharing process are used to generate shared secret information

  1. Winners and losers of national and global efforts to reconcile agricultural intensification and biodiversity conservation.

    Science.gov (United States)

    Egli, Lukas; Meyer, Carsten; Scherber, Christoph; Kreft, Holger; Tscharntke, Teja

    2018-05-01

    Closing yield gaps within existing croplands, and thereby avoiding further habitat conversions, is a prominently and controversially discussed strategy to meet the rising demand for agricultural products, while minimizing biodiversity impacts. The agricultural intensification associated with such a strategy poses additional threats to biodiversity within agricultural landscapes. The uneven spatial distribution of both yield gaps and biodiversity provides opportunities for reconciling agricultural intensification and biodiversity conservation through spatially optimized intensification. Here, we integrate distribution and habitat information for almost 20,000 vertebrate species with land-cover and land-use datasets. We estimate that projected agricultural intensification between 2000 and 2040 would reduce the global biodiversity value of agricultural lands by 11%, relative to 2000. Contrasting these projections with spatial land-use optimization scenarios reveals that 88% of projected biodiversity loss could be avoided through globally coordinated land-use planning, implying huge efficiency gains through international cooperation. However, global-scale optimization also implies a highly uneven distribution of costs and benefits, resulting in distinct "winners and losers" in terms of national economic development, food security, food sovereignty or conservation. Given conflicting national interests and lacking effective governance mechanisms to guarantee equitable compensation of losers, multinational land-use optimization seems politically unlikely. In turn, 61% of projected biodiversity loss could be avoided through nationally focused optimization, and 33% through optimization within just 10 countries. Targeted efforts to improve the capacity for integrated land-use planning for sustainable intensification especially in these countries, including the strengthening of institutions that can arbitrate subnational land-use conflicts, may offer an effective, yet

  2. An Accurate Estimate of the Free Energy and Phase Diagram of All-DNA Bulk Fluids

    Directory of Open Access Journals (Sweden)

    Emanuele Locatelli

    2018-04-01

    Full Text Available We present a numerical study in which large-scale bulk simulations of self-assembled DNA constructs have been carried out with a realistic coarse-grained model. The investigation aims at obtaining a precise, albeit numerically demanding, estimate of the free energy for such systems. We then, in turn, use these accurate results to validate a recently proposed theoretical approach that builds on a liquid-state theory, the Wertheim theory, to compute the phase diagram of all-DNA fluids. This hybrid theoretical/numerical approach, based on the lowest-order virial expansion and on a nearest-neighbor DNA model, can provide, in an undemanding way, a parameter-free thermodynamic description of DNA associating fluids that is in semi-quantitative agreement with experiments. We show that the predictions of the scheme are as accurate as those obtained with more sophisticated methods. We also demonstrate the flexibility of the approach by incorporating non-trivial additional contributions that go beyond the nearest-neighbor model to compute the DNA hybridization free energy.

  3. Gamma spectrometry; level schemes

    International Nuclear Information System (INIS)

    Blachot, J.; Bocquet, J.P.; Monnand, E.; Schussler, F.

    1977-01-01

    The research presented dealt with: a new beta emitter, an isomer of 131Sn; the 136I levels fed through the radioactive decay of 136Te (20.9 s); the A=145 chain (β decay of Ba, La and Ce, and level schemes for 145La, 145Ce, 145Pr); and the A=147 chain (La and Ce β decay, and the level schemes of 147Ce and 147Pr) [fr

  4. Multi-symplectic integrators: numerical schemes for Hamiltonian PDEs that conserve symplecticity

    Science.gov (United States)

    Bridges, Thomas J.; Reich, Sebastian

    2001-06-01

    The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well established subject and has led to a deeper understanding of existing methods as well as to the development of new very efficient and accurate schemes, e.g., for rigid body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R2: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension are also discussed.

  5. Coordinated renewable energy support schemes

    DEFF Research Database (Denmark)

    Morthorst, P.E.; Jensen, S.G.

    2006-01-01

    The first example covers countries with regional power markets that also regionalise their support schemes; the second, countries with separate national power markets that regionalise their support schemes. The main findings indicate that the almost ideal situation exists if the region prior to regionalising...

  6. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    Science.gov (United States)

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software tool providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability: http://bigwww.epfl.ch/algorithms/spotcaliper/. Contact: zsuzsanna.puspoki@epfl.ch. Supplementary data are available at Bioinformatics online.

  7. Asynchronous Channel-Hopping Scheme under Jamming Attacks

    Directory of Open Access Journals (Sweden)

    Yongchul Kim

    2018-01-01

    Full Text Available Cognitive radio networks (CRNs) are considered an attractive technology to mitigate inefficiency in the usage of licensed spectrum. CRNs allow the secondary users (SUs) to access the unused licensed spectrum and use a blind rendezvous process to establish communication links between SUs. In particular, quorum-based channel-hopping (CH) schemes have been studied recently to provide guaranteed blind rendezvous in decentralized CRNs without using global time synchronization. However, these schemes remain vulnerable to jamming attacks. In this paper, we first analyze the limitations of quorum-based rendezvous schemes called asynchronous channel hopping (ACH). Then, we introduce a novel sequence sensing jamming attack (SSJA) model in which a sophisticated jammer can dramatically reduce the rendezvous success rates of ACH schemes. In addition, we propose a fast and robust asynchronous rendezvous scheme (FRARS) that can significantly enhance robustness under jamming attacks. Our numerical results demonstrate that the performance of the proposed scheme vastly outperforms the ACH scheme when there are security concerns about a sequence sensing jammer.

  8. Functional renormalization group and Kohn-Sham scheme in density functional theory

    Science.gov (United States)

    Liang, Haozhao; Niu, Yifei; Hatsuda, Tetsuo

    2018-04-01

    Deriving accurate energy density functionals is one of the central problems in condensed matter physics, nuclear physics, and quantum chemistry. We propose a novel method to deduce the energy density functional by combining the idea of the functional renormalization group and the Kohn-Sham scheme in density functional theory. The key idea is to solve the renormalization group flow for the effective action decomposed into the mean-field part and the correlation part. Also, we propose a simple practical method to quantify the uncertainty associated with the truncation of the correlation part. By taking the φ4 theory in zero dimensions as a benchmark, we demonstrate that our method shows extremely fast convergence to the exact result even in the strongly coupled regime.

  9. A first-passage scheme for determination of overall rate constants for non-diffusion-limited suspensions

    Science.gov (United States)

    Lu, Shih-Yuan; Yen, Yi-Ming

    2002-02-01

    A first-passage scheme is devised to determine the overall rate constant of suspensions under the non-diffusion-limited condition. The original first-passage scheme developed for diffusion-limited processes is modified to account for the finite incorporation rate at the inclusion surface by using a concept of the nonzero survival probability of the diffusing entity at entity-inclusion encounters. This nonzero survival probability is obtained from solving a relevant boundary value problem. The new first-passage scheme is validated by an excellent agreement between overall rate constant results from the present development and from an accurate boundary collocation calculation for the three common spherical arrays [J. Chem. Phys. 109, 4985 (1998)], namely simple cubic, body-centered cubic, and face-centered cubic arrays, for a wide range of P and f. Here, P is a dimensionless quantity characterizing the relative rate of diffusion versus surface incorporation, and f is the volume fraction of the inclusion. The scheme is further applied to random spherical suspensions and to investigate the effect of inclusion coagulation on overall rate constants. It is found that randomness in inclusion arrangement tends to lower the overall rate constant for f up to the near close-packing value of the regular arrays because of the inclusion screening effect. This screening effect turns stronger for regular arrays when f is near and above the close-packing value of the regular arrays, and consequently the overall rate constant of the random array exceeds that of the regular array. Inclusion coagulation too induces the inclusion screening effect, and leads to lower overall rate constants.
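
    A useful reference point for such overall rate constants is the dilute (single-inclusion) limit, where the finite incorporation rate enters through the classical Collins-Kimball combination of the diffusion-limited and surface-limited rates. The sketch below computes only this baseline; it is not the record's first-passage scheme for interacting arrays, and the dimensionless parameter P used here (kappa*a/D) may be defined differently in the paper.

        # Dilute-limit overall rate constant for a single partially absorbing sphere:
        #   1/k = 1/k_D + 1/k_R,  k_D = 4*pi*D*a (Smoluchowski),  k_R = 4*pi*a^2*kappa.
        import math

        def overall_rate(D, a, kappa):
            k_D = 4 * math.pi * D * a            # diffusion-limited rate
            k_R = 4 * math.pi * a**2 * kappa     # surface-incorporation-limited rate
            return k_D * k_R / (k_D + k_R)

        D, a = 1.0, 1.0
        for P in (0.1, 1.0, 10.0, 1e6):          # here P = kappa*a/D (illustrative definition)
            kappa = P * D / a
            print(f"P={P:g}:  k/k_Smoluchowski = {overall_rate(D, a, kappa) / (4*math.pi*D*a):.3f}")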

  10. A fast resonance interference treatment scheme with subgroup method

    International Nuclear Information System (INIS)

    Cao, L.; He, Q.; Wu, H.; Zu, T.; Shen, W.

    2015-01-01

    A fast Resonance Interference Factor (RIF) scheme is proposed to treat the resonance interference effects between different resonance nuclides. This scheme utilizes the conventional subgroup method to evaluate the self-shielded cross sections of the dominant resonance nuclide in the heterogeneous system and the hyper-fine energy group method to represent the resonance interference effects in a simplified homogeneous model. In this paper, the newly implemented scheme is compared to the background iteration scheme, the Resonance Nuclide Group (RNG) scheme and the conventional RIF scheme. The numerical results show that the errors of the effective self-shielded cross sections are significantly reduced by the fast RIF scheme compared with the background iteration scheme and the RNG scheme. Besides, the fast RIF scheme consumes less computation time than the conventional RIF schemes. The speed-up ratio is ~4.5 for MOX pin cell problems. (author)

  11. Arbitrated quantum signature scheme with message recovery

    International Nuclear Information System (INIS)

    Lee, Hwayean; Hong, Changho; Kim, Hyunsang; Lim, Jongin; Yang, Hyung Jin

    2004-01-01

    Two quantum signature schemes with message recovery relying on the availability of an arbitrator are proposed. One scheme uses a public board and the other does not. However both schemes provide confidentiality of the message and a higher efficiency in transmission

  12. CANONICAL BACKWARD DIFFERENTIATION SCHEMES FOR ...

    African Journals Online (AJOL)

    This paper describes new nonlinear backward differentiation schemes for the numerical solution of nonlinear initial value problems of first order ordinary differential equations. The schemes are based on rational interpolation obtained from canonical polynomials. They are A-stable. The test problems show that they give ...

  13. A simple angular transmit diversity scheme using a single RF frontend for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama Nafeth Saleem; Papadias, Constantinos B.; Kalis, Antonis

    2009-01-01

    array (SPA) with a single transceiver, and an array area of 0.0625 square wavelengths. The scheme which requires no channel state information (CSI) at the transmitter, provides mainly a diversity gain to combat against multipath fading. The performance/capacity of the proposed diversity scheme...

  14. Evaluating statistical cloud schemes

    OpenAIRE

    Grützun, Verena; Quaas, Johannes; Morcrette, Cyril J.; Ament, Felix

    2015-01-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...

  15. Reconciling medical expenditure estimates from the MEPS and NHEA, 2007.

    Science.gov (United States)

    Bernard, Didem; Cowan, Cathy; Selden, Thomas; Cai, Liming; Catlin, Aaron; Heffler, Stephen

    2012-01-01

    Provide a comparison of health care expenditure estimates for 2007 from the Medical Expenditure Panel Survey (MEPS) and the National Health Expenditure Accounts (NHEA). Reconciling these estimates serves two important purposes. First, it is an important quality assurance exercise for improving and ensuring the integrity of each source's estimates. Second, the reconciliation provides a consistent baseline of health expenditure data for policy simulations. Our results assist researchers to adjust MEPS to be consistent with the NHEA so that the projected costs as well as budgetary and tax implications of any policy change are consistent with national health spending estimates. The data sources are the Medical Expenditure Panel Survey, produced by the Agency for Healthcare Research and Quality and the National Center for Health Statistics, and the National Health Expenditures, produced by the Centers for Medicare & Medicaid Services' Office of the Actuary. In this study, we focus on the personal health care (PHC) sector, which includes the goods and services rendered to treat or prevent a specific disease or condition in an individual. The official 2007 NHEA estimate for PHC spending is $1,915 billion and the MEPS estimate is $1,126 billion. Adjusting the NHEA estimates for differences in underlying populations, covered services, and other measurement concepts reduces the NHEA estimate for 2007 to $1,366 billion. As a result, MEPS is $240 billion, or 17.6 percent, less than the adjusted NHEA total.

  16. Development of a universal dual-bolus injection scheme for the quantitative assessment of myocardial perfusion cardiovascular magnetic resonance

    Directory of Open Access Journals (Sweden)

    Alfakih Khaled

    2011-05-01

    Full Text Available Abstract Background The dual-bolus protocol enables accurate quantification of myocardial blood flow (MBF) by first-pass perfusion cardiovascular magnetic resonance (CMR). However, despite the advantages and increasing demand for the dual-bolus method for accurate quantification of MBF, thus far, it has not been widely used in the field of quantitative perfusion CMR. The main reasons for this are that the setup for the dual-bolus method is complex and requires a state-of-the-art injector and there is also a lack of post processing software. As a solution to one of these problems, we have devised a universal dual-bolus injection scheme for use in a clinical setting. The purpose of this study is to show the setup and feasibility of the universal dual-bolus injection scheme. Methods The universal dual-bolus injection scheme was tested using multiple combinations of different contrast agents, contrast agent dose, power injectors, perfusion sequences, and CMR scanners. This included 3 different contrast agents (Gd-DO3A-butrol, Gd-DTPA and Gd-DOTA), 4 different doses (0.025 mmol/kg, 0.05 mmol/kg, 0.075 mmol/kg and 0.1 mmol/kg), 2 different types of injectors (with and without "pause" function), 5 different sequences (turbo field echo (TFE), balanced TFE, k-space and time (k-t) accelerated TFE, k-t accelerated balanced TFE, and turbo fast low-angle shot), and 3 different CMR scanners from 2 different manufacturers. The relation between the time width of the dilute contrast agent bolus curve and cardiac output was obtained to determine the optimal predefined pause duration between dilute and neat contrast agent injection. Results 161 dual-bolus perfusion scans were performed. Three non-injector-related technical errors were observed (1.9%). No injector-related errors were observed. The dual-bolus scheme worked well in all the combinations of parameters if the optimal predefined pause was used. Linear regression analysis showed that the optimal duration for the predefined

  17. LDPC-PPM Coding Scheme for Optical Communication

    Science.gov (United States)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
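
    The inner PPM mapping and the Poisson channel model mentioned above are easy to sketch: a block of k bits selects one pulsed slot out of M = 2^k, and a hard-decision receiver picks the slot with the largest photon count. The signal and background rates below are invented, and the LDPC outer code of the scheme is not modeled.

        # PPM over a Poisson optical channel: map bit blocks to pulse positions,
        # add Poisson photon counts, and demodulate by choosing the busiest slot.
        import numpy as np

        rng = np.random.default_rng(2)
        k, n_sym = 4, 10_000
        M = 2 ** k
        lam_signal, lam_background = 6.0, 0.2          # mean photon counts per slot

        bits = rng.integers(0, 2, size=(n_sym, k))
        symbols = bits.dot(1 << np.arange(k)[::-1])    # bit block -> pulse position (0..M-1)

        counts = rng.poisson(lam_background, size=(n_sym, M))
        counts[np.arange(n_sym), symbols] += rng.poisson(lam_signal, size=n_sym)

        decided = counts.argmax(axis=1)                # hard-decision PPM demodulation
        print("symbol error rate:", np.mean(decided != symbols))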

  18. Multidimensional flux-limited advection schemes

    International Nuclear Information System (INIS)

    Thuburn, J.

    1996-01-01

    A general method for building multidimensional shape preserving advection schemes using flux limiters is presented. The method works for advected passive scalars in either compressible or incompressible flow and on arbitrary grids. With a minor modification it can be applied to the equation for fluid density. Schemes using the simplest form of the flux limiter can cause distortion of the advected profile, particularly sideways spreading, depending on the orientation of the flow relative to the grid. This is partly because the simple limiter is too restrictive. However, some straightforward refinements lead to a shape-preserving scheme that gives satisfactory results, with negligible grid-flow angle-dependent distortion
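
    A one-dimensional, single-limiter version of this construction conveys the idea: a Lax-Wendroff correction to the upwind flux is scaled by a minmod limiter so that the advected profile develops no new extrema. The sketch below uses a periodic domain and arbitrary parameters; the record's construction for multiple dimensions and arbitrary grids is considerably more general.

        # 1D flux-limited advection of a passive scalar (u_t + a*u_x = 0, a > 0):
        # upwind flux plus a minmod-limited Lax-Wendroff correction keeps the
        # advected square wave monotone.
        import numpy as np

        N, a, cfl = 200, 1.0, 0.5
        dx = 1.0 / N
        dt = cfl * dx / a
        x = (np.arange(N) + 0.5) * dx
        u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)        # square wave, periodic domain

        def minmod(r):
            return np.maximum(0.0, np.minimum(1.0, r))

        for _ in range(200):
            du = np.roll(u, -1) - u                          # u_{i+1} - u_i
            dd = u - np.roll(u, 1)                           # u_i - u_{i-1}
            r = dd / np.where(np.abs(du) > 1e-12, du, 1e-12) # smoothness ratio
            flux = a * u + 0.5 * a * (1.0 - cfl) * minmod(r) * du   # F_{i+1/2}
            u = u - (dt / dx) * (flux - np.roll(flux, 1))
        print("min/max after advection:", u.min(), u.max())  # no new over/undershoots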

  19. Tightly Secure Signatures From Lossy Identification Schemes

    OpenAIRE

    Abdalla, Michel; Fouque, Pierre-Alain; Lyubashevsky, Vadim; Tibouchi, Mehdi

    2015-01-01

    In this paper, we present three digital signature schemes with tight security reductions in the random oracle model. Our first signature scheme is a particularly efficient version of the short exponent discrete log-based scheme of Girault et al. (J Cryptol 19(4):463–487, 2006). Our scheme has a tight reduction to the decisional short discrete logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the or...

  20. LPTA: Location Predictive and Time Adaptive Data Gathering Scheme with Mobile Sink for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chuan Zhu

    2014-01-01

    Full Text Available This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets in a timely manner toward the mobile sink by multihop relay. Considering that the data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less data transmission time delay and balances energy consumption among nodes.

  1. Scheme of energy utilities

    International Nuclear Information System (INIS)

    2002-04-01

    This scheme defines the objectives relative to renewable energies and the rational use of energy within the framework of the national energy policy. It evaluates the needs and the potential of the regions and recommends joint actions between the government and the territorial organizations. The document is presented in four parts: the situation, the stakes and forecasts; the possible actions for new measures; the scheme management and the regional contributions analysis. (A.L.B.)

  2. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
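
    The basic packet-combining idea that the letter builds on can be sketched as follows (a toy illustration with an assumed integrity check standing in for a real CRC; the PRPC, MPC and forecasting refinements are not shown).

```python
# Minimal sketch of the basic packet-combining idea: XOR two erroneous copies to
# locate the bit positions where they disagree, then try flipping those bits until
# an integrity check passes.
from itertools import product

def crc_ok(bits, reference):
    """Stand-in integrity check; a real receiver would verify a CRC field."""
    return bits == reference

def packet_combine(copy1, copy2, reference):
    differing = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
    for choice in product([0, 1], repeat=len(differing)):       # try all corrections
        candidate = list(copy1)
        for pos, bit in zip(differing, choice):
            candidate[pos] = bit
        if crc_ok(candidate, reference):
            return candidate
    return None   # fails when both copies are wrong in the same positions

sent = [1, 0, 1, 1, 0, 0, 1, 0]
copy1 = sent.copy(); copy1[2] ^= 1       # each copy corrupted in a different bit
copy2 = sent.copy(); copy2[5] ^= 1
assert packet_combine(copy1, copy2, sent) == sent
```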

  3. Estimating plume dispersion: a comparison of several sigma schemes

    International Nuclear Information System (INIS)

    Irwin, J.S.

    1983-01-01

    The lateral and vertical Gaussian plume dispersion parameters are estimated and compared with field tracer data collected at 11 sites. The dispersion parameter schemes used in this analysis include Cramer's scheme, suggested for tall stack dispersion estimates, Draxler's scheme, suggested for elevated and surface releases, Pasquill's scheme, suggested for interim use in dispersion estimates, and the Pasquill–Gifford scheme using Turner's technique for assigning stability categories. The schemes suggested by Cramer, Draxler and Pasquill estimate the dispersion parameters using onsite measurements of the vertical and lateral wind-velocity variances at the effective release height. The performances of these schemes in estimating the dispersion parameters are compared with that of the Pasquill–Gifford scheme, using the Prairie Grass and Karlsruhe data. For these two experiments, the estimates of the dispersion parameters using Draxler's scheme correlate better with the measurements than did estimates using the Pasquill–Gifford scheme. Comparison of the dispersion parameter estimates with the measurements suggests that Draxler's scheme for characterizing the dispersion results in the smallest mean fractional error in the estimated dispersion parameters and the smallest variance of the fractional errors.
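
    For orientation, the quantities being compared can be illustrated with a minimal Gaussian plume sketch. The Draxler-style universal function and the Lagrangian time scale used below are assumptions chosen for the example, not the exact formulations evaluated in the study.

```python
import numpy as np

def draxler_sigma(sigma_turb, t, Ti=300.0):
    """Dispersion parameter from an onsite velocity variance: sigma = sigma_turb * t * f(t/Ti).

    f follows a commonly quoted Draxler-type form 1/(1 + 0.9*sqrt(t/Ti)); Ti is an
    assumed Lagrangian time scale, chosen here for illustration only.
    """
    return sigma_turb * t / (1.0 + 0.9 * np.sqrt(t / Ti))

def gaussian_plume(Q, u, x, y, z, H, sigma_v, sigma_w):
    """Ground-reflected Gaussian plume concentration for an elevated point source."""
    t = x / u                                    # travel time to downwind distance x
    sy, sz = draxler_sigma(sigma_v, t), draxler_sigma(sigma_w, t)
    vertical = (np.exp(-(z - H) ** 2 / (2 * sz ** 2)) +
                np.exp(-(z + H) ** 2 / (2 * sz ** 2)))    # image source for ground reflection
    return Q / (2 * np.pi * u * sy * sz) * np.exp(-y ** 2 / (2 * sy ** 2)) * vertical

# Centerline ground-level concentration 1 km downwind of a 50 m stack emitting 10 g/s
c = gaussian_plume(Q=10.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0, sigma_v=0.5, sigma_w=0.3)
```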

  4. An authentication scheme for secure access to healthcare services.

    Science.gov (United States)

    Khan, Muhammad Khurram; Kumari, Saru

    2013-08-01

    The last few decades have witnessed a boom in the development of information and communication technologies, and the health sector has also benefitted from this advancement. To ensure secure access to healthcare services, some user authentication mechanisms have been proposed. In 2012, Wei et al. proposed a user authentication scheme for the telecare medical information system (TMIS). Recently, Zhu pointed out an offline password guessing attack on Wei et al.'s scheme and proposed an improved scheme. In this article, we analyze both of these schemes for their effectiveness in TMIS. We show that Wei et al.'s scheme and its improvement proposed by Zhu fail to achieve some important characteristics necessary for secure user authentication. We find that the security problems of Wei et al.'s scheme persist in Zhu's scheme, such as an undetectable online password guessing attack, an ineffective password change phase, traceability of a user's stolen/lost smart card and a denial-of-service threat. We also identify that Wei et al.'s scheme lacks forward secrecy and Zhu's scheme lacks a session key between user and healthcare server. We therefore propose an authentication scheme for TMIS with forward secrecy, which preserves the confidentiality of over-the-air messages even if the master secret key of the healthcare server is compromised. Our scheme retains the advantages of Wei et al.'s scheme and Zhu's scheme, and offers additional security. The security analysis and comparison results show the enhanced suitability of our scheme for TMIS.

  5. One-Stage and Two-Stage Schemes of High Performance Synchronous PWM with Smooth Pulses-Ratio Changing

    DEFF Research Database (Denmark)

    Oleschuk, V.; Blaabjerg, Frede

    2002-01-01

    This paper presents a detailed description of one-stage and two-stage schemes of a novel method of synchronous pulsewidth modulation (PWM) for voltage source inverters for ac drive applications. The proposed control functions provide accurate realization of different versions of voltage space vector...... modulation with synchronization of the voltage waveform of the inverter and with smooth pulse-ratio changing. Voltage spectra do not contain even harmonics or sub-harmonics (combined harmonics) over the whole control range, including the zone of overmodulation. Examples of determination of the basic control...

  6. A digital data acquisition scheme for SPECT and PET small animal imaging detectors for Theranostic applications

    Science.gov (United States)

    Georgiou, M.; Fysikopoulos, E.; Loudos, G.

    2017-11-01

    Nanoparticle based drug delivery is considered as a new, promising technology for the efficient treatment of various diseases. When nanoparticles are radiolabelled it is possible to image them, using molecular imaging techniques. The use of magnetic nanoparticles in hyperthermia is one of the most promising nanomedicine directions and requires the accurate, non-invasive, monitoring of temperature increase and drug release. The combination of imaging and therapy has opened the very promising Theranostics domain. In this work, we present a digital data acquisition scheme for nuclear medicine dedicated detectors for Theranostic applications.

  7. Cost-based droop scheme for DC microgrid

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Wang, Peng; Loh, Poh Chiang

    2014-01-01

    DC microgrids are gaining interest due to higher efficiencies of DC distribution compared with AC. The benefits of DC systems have been widely researched for data centers, IT facilities and residential applications. The research focus, however, has been more on system architecture and optimal...... voltage level, less on optimized operation and control of generation sources. The latter theme is pursued in this paper, where a cost-based droop scheme is proposed for distributed generators (DGs) in DC microgrids. Unlike the traditional proportional power sharing based droop scheme, the proposed scheme......-connected operation. Most importantly, the proposed scheme can reduce overall total generation cost in DC microgrids without centralized controller and communication links. The performance of the proposed scheme has been verified under different load conditions.
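
    The contrast between rating-proportional and cost-weighted droop gains can be illustrated with a minimal sketch (the gain formulas and cost figures below are invented for the example and are not the paper's control law).

```python
# Illustrative contrast between a conventional rating-proportional DC droop and a
# cost-weighted variant (names and cost model are assumptions, not the paper's).
def droop_voltage(v_nominal, power, gain):
    """Simple DC P-V droop: output voltage sags linearly with delivered power."""
    return v_nominal - gain * power

def proportional_gains(ratings, dv_max=0.05 * 400):
    """Conventional droop: gain inversely proportional to rating -> sharing by rating."""
    return {dg: dv_max / rating for dg, rating in ratings.items()}

def cost_weighted_gains(ratings, cost_per_kwh, dv_max=0.05 * 400):
    """Cost-based variant: cheaper DGs get smaller gains, hence a larger load share."""
    return {dg: dv_max * cost_per_kwh[dg] / rating
            for dg, rating in ratings.items()}

ratings = {"diesel": 10.0, "pv": 10.0}           # kW
cost = {"diesel": 0.30, "pv": 0.05}              # $/kWh (illustrative)
print(proportional_gains(ratings))
print(cost_weighted_gains(ratings, cost))
```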

  8. Resonance ionization scheme development for europium

    Energy Technology Data Exchange (ETDEWEB)

    Chrysalidis, K., E-mail: katerina.chrysalidis@cern.ch; Goodacre, T. Day; Fedosseev, V. N.; Marsh, B. A. [CERN (Switzerland); Naubereit, P. [Johannes Gutenberg-Universität, Institiut für Physik (Germany); Rothe, S.; Seiffert, C. [CERN (Switzerland); Kron, T.; Wendt, K. [Johannes Gutenberg-Universität, Institiut für Physik (Germany)

    2017-11-15

    Odd-parity autoionizing states of europium have been investigated by resonance ionization spectroscopy via two-step, two-resonance excitations. The aim of this work was to establish ionization schemes specifically suited for europium ion beam production using the ISOLDE Resonance Ionization Laser Ion Source (RILIS). 13 new RILIS-compatible ionization schemes are proposed. The scheme development was the first application of the Photo Ionization Spectroscopy Apparatus (PISA) which has recently been integrated into the RILIS setup.

  9. Secure RAID Schemes for Distributed Storage

    OpenAIRE

    Huang, Wentao; Bruck, Jehoshua

    2016-01-01

    We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...
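
    A toy XOR-based layout conveys the flavour of combining secrecy with erasure tolerance (this is an illustration in the spirit of secure RAID, not one of the optimized constructions in the paper): four nodes hold two data blocks, one random key block and one parity block; any single failed node can be rebuilt, and any single eavesdropped node sees only a uniformly random block.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(d1: bytes, d2: bytes):
    k = os.urandom(len(d1))
    c1, c2, c3 = k, xor(d1, k), xor(d2, k)       # key node and two keyed data nodes
    c4 = xor(xor(c1, c2), c3)                    # parity node = c1 ^ c2 ^ c3
    return [c1, c2, c3, c4]

def rebuild(chunks, lost):
    present = [c for i, c in enumerate(chunks) if i != lost and c is not None]
    return xor(xor(present[0], present[1]), present[2])   # XOR of the other three

def decode(chunks):
    return xor(chunks[1], chunks[0]), xor(chunks[2], chunks[0])   # d1, d2

chunks = encode(b"hello", b"world")
chunks[2] = None                                 # simulate a failed node
chunks[2] = rebuild(chunks, lost=2)              # recover it, then read the data back
assert decode(chunks) == (b"hello", b"world")
```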

  10. Wireless Broadband Access and Accounting Schemes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, we propose two wireless broadband access and accounting schemes. In both schemes, the accounting system adopts the RADIUS protocol, while the access systems adopt the SSH and SSL protocols, respectively.

  11. Security analysis and improvements of arbitrated quantum signature schemes

    International Nuclear Information System (INIS)

    Zou Xiangfu; Qiu Daowen

    2010-01-01

    A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. For signing quantum messages, some arbitrated quantum signature (AQS) schemes have been proposed. It was claimed that these AQS schemes could guarantee unconditional security. However, we show that they can be repudiated by the receiver Bob. To conquer this shortcoming, we construct an AQS scheme using a public board. The AQS scheme not only avoids being disavowed by the receiver but also preserves all merits in the existing schemes. Furthermore, we discover that entanglement is not necessary while all these existing AQS schemes depend on entanglement. Therefore, we present another AQS scheme without utilizing entangled states in the signing phase and the verifying phase. This scheme has three advantages: it does not utilize entangled states and it preserves all merits in the existing schemes; the signature can avoid being disavowed by the receiver; and it provides a higher efficiency in transmission and reduces the complexity of implementation.

  12. 75 FR 67453 - Identity Theft Red Flags and Address Discrepancies Under the Fair and Accurate Credit...

    Science.gov (United States)

    2010-11-02

    ... reasonable policies and procedures that a user of consumer reports must employ when a user receives a notice... policies and procedures for users of consumer reports to enable a user to form a reasonable belief that it knows the identity of the person for whom it has obtained a consumer report, and reconcile the address...

  13. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields

    Science.gov (United States)

    Zhu, Wuming; Trickey, S. B.

    2017-12-01

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
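
    For reference, the generic functional form of an anisotropic Gaussian type orbital, as used in the broader strong-B-field literature (this is background context; the specific exponent formulae and occupation-dependent adjustments devised in the paper are not reproduced here), is

    \[
      \chi_{m,k}(\rho,\varphi,z) = N\,\rho^{|m|}\, z^{k}\, e^{i m \varphi}\, \exp\!\left(-\alpha\,\rho^{2} - \beta\,z^{2}\right),
    \]

    where the separate exponents \(\alpha\) and \(\beta\) act perpendicular and parallel to the B field, and the isotropic GTO limit is recovered for \(\alpha = \beta\).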

  14. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields.

    Science.gov (United States)

    Zhu, Wuming; Trickey, S B

    2017-12-28

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li + , Be + , and B + , in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B

  15. Capacity-achieving CPM schemes

    OpenAIRE

    Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido

    2008-01-01

    The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...

  16. Improvement of a Quantum Proxy Blind Signature Scheme

    Science.gov (United States)

    Zhang, Jia-Lei; Zhang, Jian-Zhong; Xie, Shu-Cui

    2018-06-01

    An improvement of a quantum proxy blind signature scheme is proposed in this paper. A six-qubit entangled state functions as the quantum channel. In our scheme, a trusted party, Trent, is introduced so as to prevent David's dishonest behavior. The receiver David verifies the signature with the help of Trent in our scheme. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, delegation, signature and verification. Security analysis proves that our scheme has the properties of undeniability, unforgeability and anonymity, and can resist some common attacks.

  17. A group signature scheme based on quantum teleportation

    International Nuclear Information System (INIS)

    Wen Xiaojun; Tian Yuan; Ji Liping; Niu Xiamu

    2010-01-01

    In this paper, we present a group signature scheme using quantum teleportation. Different from classical group signature and current quantum signature schemes, which could only deliver either group signature or unconditional security, our scheme guarantees both by adopting quantum key preparation, quantum encryption algorithm and quantum teleportation. Security analysis proved that our scheme has the characteristics of group signature, non-counterfeit, non-disavowal, blindness and traceability. Our quantum group signature scheme has a foreseeable application in the e-payment system, e-government, e-business, etc.

  18. A group signature scheme based on quantum teleportation

    Energy Technology Data Exchange (ETDEWEB)

    Wen Xiaojun; Tian Yuan; Ji Liping; Niu Xiamu, E-mail: wxjun36@gmail.co [Information Countermeasure Technique Research Institute, Harbin Institute of Technology, Harbin 150001 (China)

    2010-05-01

    In this paper, we present a group signature scheme using quantum teleportation. Different from classical group signature and current quantum signature schemes, which could only deliver either group signature or unconditional security, our scheme guarantees both by adopting quantum key preparation, quantum encryption algorithm and quantum teleportation. Security analysis proved that our scheme has the characteristics of group signature, non-counterfeit, non-disavowal, blindness and traceability. Our quantum group signature scheme has a foreseeable application in the e-payment system, e-government, e-business, etc.

  19. A new access scheme in OFDMA systems

    Institute of Scientific and Technical Information of China (English)

    GU Xue-lin; YAN Wei; TIAN Hui; ZHANG Ping

    2006-01-01

    This article presents a dynamic random access scheme for orthogonal frequency division multiple access (OFDMA) systems. The key features of the proposed scheme are: it is a combination of both the distributed and the centralized schemes, it can accommodate several delay sensitivity classes, and it can adjust the number of random access channels in a media access control (MAC) frame and the access probability according to the outcome of Mobile Terminals' access attempts in previous MAC frames. For packet-based networks with a fluctuating population, the proposed scheme possibly leads to high average user satisfaction.

  20. Adaptive transmission schemes for MISO spectrum sharing systems

    KAUST Repository

    Bouida, Zied

    2013-06-01

    We propose three adaptive transmission techniques aiming to maximize the capacity of a multiple-input-single-output (MISO) secondary system under the scenario of an underlay cognitive radio network. In the first scheme, namely the best antenna selection (BAS) scheme, the antenna maximizing the capacity of the secondary link is used for transmission. We then propose an orthogonal space time block code (OSTBC) transmission scheme using the Alamouti scheme with transmit antenna selection (TAS), namely the TAS/STBC scheme. The performance improvement offered by this scheme comes at the expense of an increased complexity and delay when compared to the BAS scheme. As a compromise between these schemes, we propose a hybrid scheme using BAS when only one antenna satisfies the interference condition and TAS/STBC when two or more antennas are eligible for communication. We first derive closed-form expressions of the statistics of the received signal-to-interference-and-noise ratio (SINR) at the secondary receiver (SR). These results are then used to analyze the performance of the proposed techniques in terms of the average spectral efficiency, the average number of transmit antennas, and the average bit error rate (BER). This performance is then illustrated via selected numerical examples. © 2013 IEEE.
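
    The decision rule among the modes can be sketched compactly (an illustration only; the channel model, interference threshold and eligibility test below are assumptions, and the paper's closed-form SINR statistics are not reproduced).

```python
# Illustrative selection logic for an underlay MISO secondary transmitter: antennas
# are "eligible" when their interference to the primary receiver stays below a threshold.
import numpy as np

rng = np.random.default_rng(1)

def choose_mode(h_secondary, h_primary, interference_limit, tx_power=1.0):
    eligible = np.flatnonzero(tx_power * np.abs(h_primary) ** 2 <= interference_limit)
    if eligible.size == 0:
        return "no transmission", None
    if eligible.size == 1:                        # BAS: single eligible antenna
        return "BAS", eligible[0]
    # TAS/STBC: pick the two eligible antennas with the strongest secondary links
    best_two = eligible[np.argsort(np.abs(h_secondary[eligible]) ** 2)[-2:]]
    return "TAS/STBC (Alamouti)", best_two

h_s = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)   # secondary link gains
h_p = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)   # links to primary receiver
print(choose_mode(h_s, h_p, interference_limit=0.5))
```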

  1. Minimal gain marching schemes: searching for unstable steady-states with unsteady solvers

    Science.gov (United States)

    de S. Teixeira, Renan; S. de B. Alves, Leonardo

    2017-12-01

    Reference solutions are important in several applications. They are used as base states in linear stability analyses as well as initial conditions and reference states for sponge zones in numerical simulations, just to name a few examples. Their accuracy is also paramount in both fields, leading to more reliable analyses and efficient simulations, respectively. Hence, steady-states usually make the best reference solutions. Unfortunately, standard marching schemes utilized for accurate unsteady simulations almost never reach steady-states of unstable flows. Steady governing equations could be solved instead, by employing Newton-type methods often coupled with continuation techniques. However, such iterative approaches do require large computational resources and very good initial guesses to converge. These difficulties motivated the development of a technique known as selective frequency damping (SFD) (Åkervik et al. in Phys Fluids 18(6):068102, 2006). It adds a source term to the unsteady governing equations that filters out the unstable frequencies, allowing a steady-state to be reached. This approach does not require a good initial condition and works well for self-excited flows, where a single nonzero excitation frequency is selected by either absolute or global instability mechanisms. On the other hand, it seems unable to damp stationary disturbances. Furthermore, flows with a broad unstable frequency spectrum might require the use of multiple filters, which delays convergence significantly. Both scenarios appear in convectively, absolutely or globally unstable flows. An alternative approach is proposed in the present paper. It modifies the coefficients of a marching scheme in such a way that makes the absolute value of its linear gain smaller than one within the required unstable frequency spectra, allowing the respective disturbance amplitudes to decay given enough time. These ideas are applied here to implicit multi-step schemes. A few chosen test cases
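
    For comparison, the selective frequency damping idea discussed above admits a very small sketch: the state is relaxed towards a low-pass-filtered copy of itself, which damps the unstable oscillation and lets the march settle on the (unstable) steady state. The toy right-hand side and the filter parameters below are assumptions for illustration, not the modified multi-step coefficients proposed in the paper.

```python
import numpy as np

def rhs(q):
    """Toy 'flow': a linearly unstable oscillator around the steady state (0, 0)."""
    growth, omega = 0.05, 1.0
    x, y = q
    return np.array([growth * x - omega * y, omega * x + growth * y])

def sfd_march(q0, chi=0.5, delta=5.0, dt=0.01, steps=20000):
    q, q_bar = np.array(q0, float), np.array(q0, float)
    for _ in range(steps):                        # forward Euler, for clarity only
        q = q + dt * (rhs(q) - chi * (q - q_bar))
        q_bar = q_bar + dt * (q - q_bar) / delta  # first-order low-pass filter
    return q

print(sfd_march([1.0, 0.0]))   # approaches the unstable steady state (0, 0)
```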

  2. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    Science.gov (United States)

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for the transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of sizes 4×4 and 8×8, which are applied to the WHT, are newly designed in a butterfly structure with only addition and shift operations. By applying the integer DCT based on the WHT and the newly designed transforms to each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation using a pseudo-entropy code is proposed to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of
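
    The add/shift-only transform at the heart of the estimation can be illustrated with an order-4 Walsh-Hadamard transform (a simplified sketch: the quantizer and the rate/distortion proxies below are placeholders, not HEVC's exact RDOQ or the paper's estimators).

```python
import numpy as np

H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]])        # rows of the order-4 Hadamard matrix

def wht_4x4(block):
    return H4 @ block @ H4.T            # separable 2-D WHT (additions/subtractions only)

def estimate_rate_distortion(residual, qstep=8.0):
    coeffs = wht_4x4(residual) / 4.0    # orthonormal scaling for a 4x4 block
    quant = np.round(coeffs / qstep)
    rate_proxy = np.count_nonzero(quant)                        # texture-rate proxy
    distortion = float(np.sum((coeffs - quant * qstep) ** 2))   # transform-domain SSD
    return rate_proxy, distortion

residual = np.random.default_rng(2).integers(-10, 10, size=(4, 4)).astype(float)
print(estimate_rate_distortion(residual))
```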

  3. Scheme-Independent Predictions in QCD: Commensurate Scale Relations and Physical Renormalization Schemes

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    1998-01-01

    Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scale, such as the ''generalized Crewther relation'', which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e + e - annihilation cross section. All non-conformal effects are absorbed by fixing the ratio of the respective momentum transfer and energy scales. In the case of fixed-point theories, commensurate scale relations relate both the ratio of couplings and the ratio of scales as the fixed point is approached. The relations between the observables are independent of the choice of intermediate renormalization scheme or other theoretical conventions. Commensurate scale relations also provide an extension of the standard minimal subtraction scheme, which is analytic in the quark masses, has non-ambiguous scale-setting properties, and inherits the physical properties of the effective charge α V (Q 2 ) defined from the heavy quark potential. The application of the analytic scheme to the calculation of quark-mass-dependent QCD corrections to the Z width is also reviewed

  4. Quantum attack-resistent certificateless multi-receiver signcryption scheme.

    Directory of Open Access Journals (Sweden)

    Huixian Li

    The existing certificateless signcryption schemes were designed mainly based on traditional public key cryptography, in which the security relies on hard problems such as factorization and the discrete logarithm. However, these problems can be solved efficiently by quantum computing, so the existing certificateless signcryption schemes are vulnerable to quantum attack. Multivariate public key cryptography (MPKC), which can resist quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we propose a new construction of a certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC, which can withstand quantum attack. Multivariate quadratic polynomial operations, which have lower computational complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computational complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computational capacity such as smart cards.

  5. Birkhoffian Symplectic Scheme for a Quantum System

    International Nuclear Information System (INIS)

    Su Hongling

    2010-01-01

    In this paper, a classical system of ordinary differential equations is built to describe a kind of n-dimensional quantum systems. The absorption spectrum and the density of the states for the system are defined from the points of quantum view and classical view. From the Birkhoffian form of the equations, a Birkhoffian symplectic scheme is derived for solving n-dimensional equations by using the generating function method. Besides the Birkhoffian structure-preserving, the new scheme is proven to preserve the discrete local energy conservation law of the system with zero vector f. Some numerical experiments for a 3-dimensional example show that the new scheme can simulate the general Birkhoffian system better than the implicit midpoint scheme, which is well known to be symplectic scheme for Hamiltonian system. (general)

  6. Autonomous droop scheme with reduced generation cost

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Loh, Poh Chiang; Blaabjerg, Frede

    2013-01-01

    Droop scheme has been widely applied to the control of Distributed Generators (DGs) in microgrids for proportional power sharing based on their ratings. For standalone microgrid, where centralized management system is not viable, the proportional power sharing based droop might not suit well since...... DGs are usually of different types unlike synchronous generators. This paper presents an autonomous droop scheme that takes into consideration the operating cost, efficiency and emission penalty of each DG since all these factors directly or indirectly contributes to the Total Generation Cost (TGC......) of the overall microgrid. Comparing it with the traditional scheme, the proposed scheme has retained its simplicity, which certainly is a feature preferred by the industry. The overall performance of the proposed scheme has been verified through simulation and experiment....

  7. An Accurate and Impartial Expert Assignment Method for Scientific Project Review

    Directory of Open Access Journals (Sweden)

    Mingliang Yue

    2017-12-01

    Purpose: This paper proposes an expert assignment method for scientific project review that considers both accuracy and impartiality. As impartial and accurate peer review is extremely important to ensure the quality and feasibility of scientific projects, enhanced methods for managing the process are needed. Design/methodology/approach: To ensure both accuracy and impartiality, we design four criteria, the reviewers’ fitness degree, research intensity, academic association, and potential conflict of interest, to express the characteristics of an appropriate peer review expert. We first formalize the expert assignment problem as an optimization problem based on the designed criteria, and then propose a randomized algorithm to solve the expert assignment problem of identifying reviewer adequacy. Findings: Simulation results show that the proposed method is quite accurate and impartial during expert assignment. Research limitations: Although the criteria used in this paper can properly show the characteristics of a good and appropriate peer review expert, more criteria/conditions can be included in the proposed scheme to further enhance the accuracy and impartiality of the expert assignment. Practical implications: The proposed method can help project funding agencies (e.g. the National Natural Science Foundation of China) find better experts for project peer review. Originality/value: To the authors’ knowledge, this is the first publication that proposes an algorithm that applies an impartial approach to the project review expert assignment process. The simulation results show the effectiveness of the proposed method.
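
    A randomized assignment in this spirit can be sketched briefly (the scoring weights, data fields and sampling rule are invented for the example and are not the paper's criteria values or algorithm).

```python
# Experts with a conflict of interest are excluded, the rest are scored, and one
# reviewer per proposal is drawn with probability proportional to the score.
import random

def assign_reviewers(proposals, experts, seed=0):
    rng = random.Random(seed)
    assignment = {}
    for proposal in proposals:
        candidates = [e for e in experts
                      if proposal["institution"] not in e["conflicts"]]
        weights = [e["fitness"][proposal["topic"]] * e["research_intensity"]
                   for e in candidates]
        assignment[proposal["id"]] = rng.choices(candidates, weights=weights, k=1)[0]["name"]
    return assignment

experts = [
    {"name": "A", "fitness": {"ml": 0.9, "bio": 0.2}, "research_intensity": 0.8, "conflicts": {"Univ-X"}},
    {"name": "B", "fitness": {"ml": 0.4, "bio": 0.9}, "research_intensity": 0.6, "conflicts": set()},
]
proposals = [{"id": "P1", "topic": "ml", "institution": "Univ-X"},
             {"id": "P2", "topic": "bio", "institution": "Univ-Y"}]
print(assign_reviewers(proposals, experts))
```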

  8. Enhanced arbitrated quantum signature scheme using Bell states

    International Nuclear Information System (INIS)

    Wang Chao; Liu Jian-Wei; Shang Tao

    2014-01-01

    We investigate the existing arbitrated quantum signature schemes as well as their cryptanalysis, including intercept-resend attack and denial-of-service attack. By exploring the loopholes of these schemes, a malicious signatory may successfully disavow signed messages, or the receiver may actively negate the signature from the signatory without being detected. By modifying the existing schemes, we develop counter-measures to these attacks using Bell states. The newly proposed scheme puts forward the security of arbitrated quantum signature. Furthermore, several valuable topics are also presented for further research of the quantum signature scheme

  9. Decoupling schemes for the SSC Collider

    International Nuclear Information System (INIS)

    Cai, Y.; Bourianoff, G.; Cole, B.; Meinke, R.; Peterson, J.; Pilat, F.; Stampke, S.; Syphers, M.; Talman, R.

    1993-05-01

    A decoupling system is designed for the SSC Collider. This system can accommodate three decoupling schemes by using 44 skew quadrupoles in the different configurations. Several decoupling schemes are studied and compared in this paper

  10. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

    A time- and ID-based proxy reencryption scheme is proposed in this paper, in which type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time within which the data was sampled or collected is very critical. In such applications, for example healthcare and criminal investigations, the delegatee may be interested in only some of the messages of certain types sampled within some time bound, instead of the entire subset. Hence, in order to cater for such situations, in this paper we propose a time-and-identity-based proxy reencryption scheme that takes into account the time within which the data was collected as a factor to consider when categorizing data, in addition to its type. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo’s proxy reencryption scheme for identity-based encryption (IBE) to IBE. We prove that our scheme is semantically secure in the standard model.

  11. Cancelable remote quantum fingerprint templates protection scheme

    International Nuclear Information System (INIS)

    Liao Qin; Guo Ying; Huang Duan

    2017-01-01

    With the increasing popularity of fingerprint identification technology, its security and privacy have received much attention. Only if the security and privacy of biological information are ensured can the technology be well accepted and used by the public. In this paper, we propose a novel quantum bit (qbit)-based scheme to solve the security and privacy problems existing in traditional fingerprint identification systems. By exploiting the properties of quantum mechanics, our proposed cancelable remote quantum fingerprint templates protection scheme can achieve unconditional security in an information-theoretical sense. Moreover, this novel quantum scheme can invalidate most of the attacks aimed at fingerprint identification systems. In addition, the proposed scheme is suitable for remote communication, with no need to worry about security and privacy during transmission, which is a clear advantage over traditional methods. Security analysis shows that the proposed scheme can effectively ensure communication security and the privacy of users’ information for fingerprint identification. (paper)

  12. Reconciling conflicting electrophysiological findings on the guidance of attention by working memory.

    Science.gov (United States)

    Carlisle, Nancy B; Woodman, Geoffrey F

    2013-10-01

    Maintaining a representation in working memory has been proposed to be sufficient for the execution of top-down attentional control. Two recent electrophysiological studies that recorded event-related potentials (ERPs) during similar paradigms have tested this proposal, but have reported contradictory findings. The goal of the present study was to reconcile these previous reports. To this end, we used the stimuli from one study (Kumar, Soto, & Humphreys, 2009) combined with the task manipulations from the other (Carlisle & Woodman, 2011b). We found that when an item matching a working memory representation was presented in a visual search array, we could use ERPs to quantify the size of the covert attention effect. When the working memory matches were consistently task-irrelevant, we observed a weak attentional bias to these items. However, when the same item indicated the location of the search target, we found that the covert attention effect was approximately four times larger. This shows that simply maintaining a representation in working memory is not equivalent to having a top-down attentional set for that item. Our findings indicate that high-level goals mediate the relationship between the contents of working memory and perceptual attention.

  13. Reconciling international human rights and cultural relativism: the case of female circumcision.

    Science.gov (United States)

    James, Stephen A

    1994-01-01

    How can we reconcile, in a non-ethnocentric fashion, the enforcement of international, universal human rights standards with the protection of cultural diversity? Examining this question, taking the controversy over female circumcision as a case study, this article will try to bridge the gap between the traditional anthropological view that human rights are non-existent -- or completely relativised to particular cultures -- and the view of Western naturalistic philosophers (including Lockeian philosophers in the natural rights tradition, and Aquinas and neo-Thomists in the natural law tradition) that they are universal -- simply derived from a basic human nature we all share. After briefly defending a universalist conception of human rights, the article will provide a critique of female circumcision as a human rights violation by three principal means: by an internal critique of the practice using the condoning cultures' own functionalist criteria; by identifying supra-national norms the cultures subscribe to which conflict with the practice; and by the identification of traditional and novel values in the cultures, conducive to those norms. Through this analysis, it will be seen that cultural survival, diversity and flourishing need not be incompatible with upholding international, universal human rights standards.

  14. High-resolution crystal structures of protein helices reconciled with three-centered hydrogen bonds and multipole electrostatics.

    Science.gov (United States)

    Kuster, Daniel J; Liu, Chengyu; Fang, Zheng; Ponder, Jay W; Marshall, Garland R

    2015-01-01

    Theoretical and experimental evidence for non-linear hydrogen bonds in protein helices is ubiquitous. In particular, amide three-centered hydrogen bonds are common features of helices in high-resolution crystal structures of proteins. These high-resolution structures (1.0 to 1.5 Å nominal crystallographic resolution) position backbone atoms without significant bias from modeling constraints and identify Φ = -62°, ψ = -43 as the consensus backbone torsional angles of protein helices. These torsional angles preserve the atomic positions of α-β carbons of the classic Pauling α-helix while allowing the amide carbonyls to form bifurcated hydrogen bonds as first suggested by Némethy et al. in 1967. Molecular dynamics simulations of a capped 12-residue oligoalanine in water with AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications), a second-generation force field that includes multipole electrostatics and polarizability, reproduces the experimentally observed high-resolution helical conformation and correctly reorients the amide-bond carbonyls into bifurcated hydrogen bonds. This simple modification of backbone torsional angles reconciles experimental and theoretical views to provide a unified view of amide three-centered hydrogen bonds as crucial components of protein helices. The reason why they have been overlooked by structural biologists depends on the small crankshaft-like changes in orientation of the amide bond that allows maintenance of the overall helical parameters (helix pitch (p) and residues per turn (n)). The Pauling 3.6(13) α-helix fits the high-resolution experimental data with the minor exception of the amide-carbonyl electron density, but the previously associated backbone torsional angles (Φ, Ψ) needed slight modification to be reconciled with three-atom centered H-bonds and multipole electrostatics. Thus, a new standard helix, the 3.6(13/10)-, Némethy- or N-helix, is proposed. Due to the use of constraints from

  15. A New Adaptive Hungarian Mating Scheme in Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Chanju Jung

    2016-01-01

    In genetic algorithms, the selection or mating scheme is one of the important operations. In this paper, we suggest an adaptive mating scheme using previously suggested Hungarian mating schemes. Hungarian mating schemes consist of maximizing the sum of mating distances, minimizing the sum, and random matching. We propose an algorithm to elect one of these Hungarian mating schemes. Every mated pair of solutions has to vote for the next generation's mating scheme. The distance between parents and the distance between parent and offspring are considered when they vote. Well-known combinatorial optimization problems, the traveling salesperson problem and the graph bisection problem, are used as the test bed for our method. Our adaptive strategy showed better results than not only pure and previous hybrid schemes but also existing distance-based mating schemes.
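
    The distance-maximizing Hungarian mating step can be sketched with an off-the-shelf assignment solver (a simplified illustration using Hamming distance as the genotype distance; the adaptive voting between the three schemes is omitted).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_mating(group_a, group_b, maximize=True):
    """Pair each parent in group_a with one in group_b via the Hungarian algorithm."""
    dist = (group_a[:, None, :] != group_b[None, :, :]).sum(axis=2)  # Hamming distances
    rows, cols = linear_sum_assignment(dist, maximize=maximize)
    return list(zip(rows, cols))           # index pairs (i in group_a, j in group_b)

rng = np.random.default_rng(3)
pop = rng.integers(0, 2, size=(8, 20))     # 8 binary genotypes of length 20
pairs = hungarian_mating(pop[:4], pop[4:])
print(pairs)
```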

  16. A hybrid Eulerian–Lagrangian numerical scheme for solving prognostic equations in fluid dynamics

    Directory of Open Access Journals (Sweden)

    E. Kaas

    2013-11-01

    A new hybrid Eulerian–Lagrangian numerical scheme (HEL) for solving prognostic equations in fluid dynamics is proposed. The basic idea is to use an Eulerian as well as a fully Lagrangian representation of all prognostic variables. The time step in Lagrangian space is obtained as a translation of irregularly spaced Lagrangian parcels along downstream trajectories. Tendencies due to other physical processes than advection are calculated in Eulerian space, interpolated, and added to the Lagrangian parcel values. A directionally biased mixing amongst neighboring Lagrangian parcels is introduced. The rate of mixing is proportional to the local deformation rate of the flow. The time stepping in Eulerian representation is achieved in two steps: first a mass-conserving Eulerian or semi-Lagrangian scheme is used to obtain a provisional forecast. This forecast is then nudged towards target values defined from the irregularly spaced Lagrangian parcel values. The nudging procedure is defined in such a way that mass conservation and shape preservation is ensured in Eulerian space. The HEL scheme has been designed to be accurate, multi-tracer efficient, mass conserving, and shape preserving. In Lagrangian space only physically based mixing takes place; i.e., the problem of artificial numerical mixing is avoided. This property is desirable in atmospheric chemical transport models since spurious numerical mixing can impact chemical concentrations severely. The properties of HEL are here verified in two-dimensional tests. These include deformational passive transport on the sphere, and simulations with a semi-implicit shallow water model including topography.

  17. Reconciling results of LSND, MiniBooNE and other experiments with soft decoherence

    CERN Document Server

    Farzan, Yasaman; Smirnov, Alexei Yu

    2008-01-01

    We propose an explanation of the LSND signal via quantum-decoherence of the mass states, which leads to damping of the interference terms in the oscillation probabilities. The decoherence parameters as well as their energy dependence are chosen in such a way that the damping affects only oscillations with the large (atmospheric) $\Delta m^2$ and rapidly decreases with the neutrino energy. This allows us to reconcile the positive LSND signal with MiniBooNE and other null-result experiments. The standard explanations of solar, atmospheric, KamLAND and MINOS data are not affected. No new particles, and in particular, no sterile neutrinos are needed. The LSND signal is controlled by the 1-3 mixing angle $\theta_{13}$ and, depending on the degree of damping, yields $0.0014 < \sin^2\theta_{13} < 0.034$ at $3\sigma$. The scenario can be tested at upcoming $\theta_{13}$ searches: while the comparison of near and far detector measurements at reactors should lead to a null-result a positive signal for $\theta_{13...

  18. Contrasting microbial community assembly hypotheses: a reconciling tale from the Río Tinto.

    Science.gov (United States)

    Palacios, Carmen; Zettler, Erik; Amils, Ricardo; Amaral-Zettler, Linda

    2008-01-01

    The Río Tinto (RT) is distinguished from other acid mine drainage systems by its natural and ancient origins. Microbial life from all three domains flourishes in this ecosystem, but bacteria dominate metabolic processes that perpetuate environmental extremes. While the patchy geochemistry of the RT likely influences the dynamics of bacterial populations, demonstrating which environmental variables shape microbial diversity and unveiling the mechanisms underlying observed patterns, remain major challenges in microbial ecology whose answers rely upon detailed assessments of community structures coupled with fine-scale measurements of physico-chemical parameters. By using high-throughput environmental tag sequencing we achieved saturation of richness estimators for the first time in the RT. We found that environmental factors dictate the distribution of the most abundant taxa in this system, but stochastic niche differentiation processes, such as mutation and dispersal, also contribute to observed diversity patterns. We predict that studies providing clues to the evolutionary and ecological processes underlying microbial distributions will reconcile the ongoing debate between the Baas Becking vs. Hubbell community assembly hypotheses.

  19. Contrasting microbial community assembly hypotheses: a reconciling tale from the Río Tinto.

    Directory of Open Access Journals (Sweden)

    Carmen Palacios

    The Río Tinto (RT) is distinguished from other acid mine drainage systems by its natural and ancient origins. Microbial life from all three domains flourishes in this ecosystem, but bacteria dominate metabolic processes that perpetuate environmental extremes. While the patchy geochemistry of the RT likely influences the dynamics of bacterial populations, demonstrating which environmental variables shape microbial diversity and unveiling the mechanisms underlying observed patterns remain major challenges in microbial ecology whose answers rely upon detailed assessments of community structures coupled with fine-scale measurements of physico-chemical parameters. By using high-throughput environmental tag sequencing we achieved saturation of richness estimators for the first time in the RT. We found that environmental factors dictate the distribution of the most abundant taxa in this system, but stochastic niche differentiation processes, such as mutation and dispersal, also contribute to observed diversity patterns. We predict that studies providing clues to the evolutionary and ecological processes underlying microbial distributions will reconcile the ongoing debate between the Baas Becking vs. Hubbell community assembly hypotheses.

  20. Renormalization scheme invariant predictions for deep-inelastic scattering and determination of ΛQCD

    International Nuclear Information System (INIS)

    Vovk, V.I.

    1989-01-01

    Theoretical aspects of the renormalization scheme (RS) ambiguity problem and the approaches to its solution are discussed from the point of view of QCD phenomenology and the scale Λ determination. The method of RS-invariant perturbation theory (RSIPT) as a sound basis for describing experiment in QCD is advocated. To this end the method is developed for the non-singlet structure functions (SF) of deep-inelastic scattering and recent high precision data on SF's are analyzed in a RS-invariant way. It is shown that RSIPT leads to a more accurate and reliable determination of the QCD scale Λ, which is consistent with the theoretical assumption of a better convergence of RS-invariant perturbative series. 24 refs.; 1 tab

  1. Quantum Communication Scheme Using Non-symmetric Quantum Channel

    International Nuclear Information System (INIS)

    Cao Haijing; Chen Zhonghua; Song Heshan

    2008-01-01

    A theoretical quantum communication scheme based on entanglement swapping and superdense coding is proposed, with a 3-dimensional Bell state and a 2-dimensional Bell state functioning as the quantum channel. Quantum key distribution and quantum secure direct communication can be simultaneously accomplished in the scheme. The scheme is secure and has a high source capacity. Finally, we generalize the quantum communication scheme to a d-dimensional quantum channel.

  2. An interactive ocean surface albedo scheme (OSAv1.0): formulation and evaluation in ARPEGE-Climat (V6.1) and LMDZ (V5A)

    Science.gov (United States)

    Séférian, Roland; Baek, Sunghye; Boucher, Olivier; Dufresne, Jean-Louis; Decharme, Bertrand; Saint-Martin, David; Roehrig, Romain

    2018-01-01

    Ocean surface represents roughly 70 % of the Earth's surface, playing a large role in the partitioning of the energy flow within the climate system. The ocean surface albedo (OSA) is an important parameter in this partitioning because it governs the amount of energy penetrating into the ocean or reflected towards space. The old OSA schemes in the ARPEGE-Climat and LMDZ models only resolve the latitudinal dependence in an ad hoc way without an accurate representation of the solar zenith angle dependence. Here, we propose a new interactive OSA scheme suited for Earth system models, which enables coupling between Earth system model components like surface ocean waves and marine biogeochemistry. This scheme resolves spectrally the various contributions of the surface for direct and diffuse solar radiation. The implementation of this scheme in two Earth system models leads to substantial improvements in simulated OSA. At the local scale, models using the interactive OSA scheme better replicate the day-to-day distribution of OSA derived from ground-based observations in contrast to old schemes. At global scale, the improved representation of OSA for diffuse radiation reduces model biases by up to 80 % over the tropical oceans, reducing annual-mean model-data error in surface upwelling shortwave radiation by up to 7 W m-2 over this domain. The spatial correlation coefficient between modeled and observed OSA at monthly resolution has been increased from 0.1 to 0.8. Despite its complexity, this interactive OSA scheme is computationally efficient for enabling precise OSA calculation without penalizing the elapsed model time.

  3. A universal encoding scheme for MIMO transmission using a single active element for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama; Papadias, C.B.; Kalis, A.

    2009-01-01

    A universal scheme for encoding multiple symbol streams using a single driven element (and consequently a single radio frequency (RF) frontend) surrounded by parasitic elements (PE) loaded with variable reactive loads, is proposed in this paper. The proposed scheme is based on creating a MIMO sys...

  4. Accurate Recovery of H i Velocity Dispersion from Radio Interferometers

    Energy Technology Data Exchange (ETDEWEB)

    Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.

  5. A multi-standard active-RC filter with accurate tuning system

    International Nuclear Information System (INIS)

    Ma Heping; Yuan Fang; Shi Yin; Dai, F F

    2009-01-01

    A low-power, highly linear, multi-standard, active-RC filter with an accurate and novel tuning architecture is presented. It supports IEEE 802.11a/b/g (9.5 MHz) and DVB-H (3 MHz, 4 MHz) applications. The filter exploits digitally-controlled polysilicon resistor banks and a phase-locked-loop-type automatic tuning system. The novel automatic frequency calibration scheme provides better than 4% corner frequency accuracy, and it can be powered down after calibration to save power and avoid digital signal interference. The filter achieves an OIP3 of 26 dBm, and the measured group delay variation of the receiver filter is 50 ns (WLAN mode). Its current consumption is 3.4 mA in RX mode and 2.3 mA (for one path only) in TX mode from a 2.85 V supply; the calibration circuit consumes 2 mA. The circuit has been fabricated in a 0.35 μm 47 GHz SiGe BiCMOS technology; the receiver and transmitter filters occupy 0.21 mm² and 0.11 mm² (calibration circuit excluded), respectively.

  6. Critical analysis of the Bennett-Riedel attack on secure cryptographic key distributions via the Kirchhoff-Law-Johnson-noise scheme.

    Science.gov (United States)

    Kish, Laszlo B; Abbott, Derek; Granqvist, Claes G

    2013-01-01

    Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law-Johnson-noise (KLJN) classical physical cryptographic exchange method in an effort to disprove the security of the KLJN scheme. They attempted to demonstrate this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR's scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. All our analyses are based on a technically unlimited Eve with infinitely accurate and fast measurements limited only by the laws of physics and statistics. For non-ideal situations and active (invasive) attacks, the uncertainty relation between measurement duration and statistical errors makes it impossible for Eve to extract the key regardless of the accuracy or speed of her measurements. To show that thermodynamics and noise are essential for the security, we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not function for the KLJN scheme that employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations for describing zero security do not apply to the KLJN scheme. Finally we give mathematical security proofs for each BR-attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.

  7. Proper use of colour schemes for image data visualization

    Science.gov (United States)

    Vozenilek, Vit; Vondrakova, Alena

    2018-04-01

    With the development of information and communication technologies, new technologies are leading to an exponential increase in the volume and types of data available. In today's information society, data is one of the most important inputs for policy making, crisis management, research and education, and many other fields. An essential task for experts is to share high-quality data providing the right information at the right time. The design of data presentation can largely influence user perception and the cognitive aspects of data interpretation. Much of this data can be visualised in some way, and one image can thus replace a considerable number of numeric tables and texts. The paper focuses on the accurate visualisation of data from the point of view of the colour schemes used. A bad choice of colours can easily confuse the user and lead to misinterpretation of the data. By contrast, correctly designed visualisations can make information transfer much simpler and more efficient.

  8. Schemes for fibre-based entanglement generation in the telecom band

    International Nuclear Information System (INIS)

    Chen, Jun; Lee, Kim Fook; Li Xiaoying; Voss, Paul L; Kumar, Prem

    2007-01-01

    We investigate schemes for generating polarization-entangled photon pairs in standard optical fibres. The advantages of a double-loop scheme are explored through comparison with two other schemes, namely, the Sagnac-loop scheme and the counter-propagating scheme. Experimental measurements with the double-loop scheme verify the predicted advantages

  9. Tradable schemes

    NARCIS (Netherlands)

    J.K. Hoogland (Jiri); C.D.D. Neumann

    2000-01-01

    textabstractIn this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing

  10. Finite-volume scheme for anisotropic diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]; Koren, Barry [Eindhoven University of Technology (Netherlands)]; Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]

    2016-02-01

    In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.

  11. Time-dependent density functional theory for open systems with a positivity-preserving decomposition scheme for environment spectral functions

    International Nuclear Information System (INIS)

    Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung

    2015-01-01

    Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulation is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene

  12. Time-dependent density functional theory for open systems with a positivity-preserving decomposition scheme for environment spectral functions.

    Science.gov (United States)

    Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung

    2015-04-14

    Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulation is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene.

  13. Reconciling water harvesting and soil erosion control by thoughtful implementation of SWC measures

    Science.gov (United States)

    Bellin, N.; Vanacker, V.; van Wesemael, B.

    2012-04-01

    -agricultural catchments have been found only partially filled with sediments. Extensive reforestation programs, recovery of natural vegetation (dense matorral) and abandonment of agricultural fields in the Sierras led to a strong reduction of the sediment transport towards the river system. Although the effect of the check dams on the transport of sediment has not been important, the check dams have played a major role in flood control in the area. Our data indicate that thoughtful design of SWC schemes is necessary to reconcile water harvesting, erosion mitigation and flood control. Currently, the erosion hotspots are clearly localized in the agricultural fields, and not in the marginal lands in the Sierras. The combination of on-site and off-site SWC measures in the agricultural areas is highly efficient to reduce fluxes of sediment and surface water.

  14. Computing with high-resolution upwind schemes for hyperbolic equations

    International Nuclear Information System (INIS)

    Chakravarthy, S.R.; Osher, S. (California Univ., Los Angeles)

    1985-01-01

    Computational aspects of modern high-resolution upwind finite-difference schemes for hyperbolic systems of conservation laws are examined. An operational unification is demonstrated for constructing a wide class of flux-difference-split and flux-split schemes based on the design principles underlying total variation diminishing (TVD) schemes. Consideration is also given to TVD scheme design by preprocessing, the extension of preprocessing and postprocessing approaches to general control volumes, the removal of expansion shocks and glitches, relaxation methods for implicit TVD schemes, and a new family of high-accuracy TVD schemes. 21 references
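
    As a concrete, self-contained illustration of the TVD design principle mentioned above (not one of the specific schemes surveyed in the record), the sketch below advances the 1D linear advection equation with a minmod-limited second-order upwind scheme on a periodic grid; the grid size, CFL number and step initial condition are assumptions for the example.

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod slope limiter: zero at extrema, which keeps the update TVD."""
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def tvd_advect(u, c, dx, dt, steps):
        """Minmod-limited second-order upwind scheme for u_t + c u_x = 0 (c > 0, periodic)."""
        nu = c * dt / dx
        for _ in range(steps):
            du_l = u - np.roll(u, 1)                 # backward differences
            du_r = np.roll(u, -1) - u                # forward differences
            slope = minmod(du_l, du_r)
            u_face = u + 0.5 * (1.0 - nu) * slope    # reconstructed value at the i+1/2 face
            flux = c * u_face
            u = u - dt / dx * (flux - np.roll(flux, 1))
        return u

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # step profile to test oscillation control
    u1 = tvd_advect(u0.copy(), c=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), steps=250)
    ```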

  15. Optimum Electrode Configurations for Two-Probe, Four-Probe and Multi-Probe Schemes in Electrical Resistance Tomography for Delamination Identification in Carbon Fiber Reinforced Composites

    Directory of Open Access Journals (Sweden)

    Luis Waldo Escalona-Galvis

    2018-04-01

    Full Text Available Internal damage in Carbon Fiber Reinforced Polymer (CFRP) composites modifies the internal electrical conductivity of the composite material. Electrical Resistance Tomography (ERT) is a non-destructive evaluation (NDE) technique that determines the extent of damage based on electrical conductivity changes. Implementation of ERT for damage identification in CFRP composites requires the optimal selection of the sensing sites for accurate results. This selection depends on the measuring scheme used. The present work uses an effective independence (EI) measure for selecting the minimum set of measurements for ERT damage identification using three measuring schemes: two-probe, four-probe and multi-probe. The electrical potential field in two CFRP laminate layups with 14 electrodes is calculated using finite element analyses (FEA) for a set of specified delamination damage cases. The measuring schemes consider the cases of 14 electrodes distributed on both sides and seven electrodes on only one side of the laminate for each layup. The effectiveness of EI reduction is demonstrated by comparing the inverse identification results of delamination cases for the full and the reduced sets using the measuring schemes and electrode sets. This work shows that the EI measure optimally reduces electrodes and electrode combinations in ERT-based damage identification for different measuring schemes.

  16. Mixed ultrasoft/norm-conserved pseudopotential scheme

    DEFF Research Database (Denmark)

    Stokbro, Kurt

    1996-01-01

    A variant of the Vanderbilt ultrasoft pseudopotential scheme, where the norm conservation is released for only one or a few angular channels, is presented. Within this scheme some difficulties of the truly ultrasoft pseudopotentials are overcome without sacrificing the pseudopotential softness. (...

  17. New practicable Siberian Snake schemes

    International Nuclear Information System (INIS)

    Steffen, K.

    1983-07-01

    Siberian Snake schemes can be inserted in ring accelerators for making the spin tune almost independent of energy. Two such schemes are suggested here which lend themselves particularly well to practical application over a wide energy range. Being composed of horizontal and vertical bending magnets, the proposed snakes are designed to have a small maximum beam excursion in one plane. By applying in this plane a bending correction that varies with energy, they can be operated at fixed geometry in the other plane where most of the bending occurs, thus avoiding complicated magnet motion or excessively large magnet apertures that would otherwise be needed for large energy variations. The first of the proposed schemes employs a pair of standard-type Siberian Snakes, i.e. of the usual 1st and 2nd kind which rotate the spin about the longitudinal and the transverse horizontal axis, respectively. The second scheme employs a pair of novel-type snakes which rotate the spin about either one of the horizontal axes that are at 45° to the beam direction. In obvious reference to these axes, they are called left-pointed and right-pointed snakes. (orig.)

  18. Reconciling Gases With Glasses: Magma Degassing, Overturn and Mixing at Kilauea Volcano, Hawai`i

    Science.gov (United States)

    Edmonds, M.; Gerlach, T. M.

    2006-12-01

    well as between them; this has important implications for volcano monitoring. Application of this new, remote and accurate technique to measure volcanic gases allows data concerning the volatile budget, both from glasses and from gases, to be reconciled and used in tandem to provide more detailed and complete models for magma migration, storage and transport at Kilauea Volcano.

  19. Robust and Efficient Authentication Scheme for Session Initiation Protocol

    Directory of Open Access Journals (Sweden)

    Yanrong Lu

    2015-01-01

    Full Text Available The session initiation protocol (SIP) is a powerful application-layer protocol which is used as a signaling one for establishing, modifying, and terminating sessions among participants. Authentication is becoming an increasingly crucial issue when a user asks to access SIP services. Hitherto, many authentication schemes have been proposed to enhance the security of SIP. In 2014, Arshad and Nikooghadam proposed an enhanced authentication and key agreement scheme for SIP and claimed that their scheme could withstand various attacks. However, in this paper, we show that Arshad and Nikooghadam’s authentication scheme is still susceptible to key-compromise impersonation and trace attacks and does not provide proper mutual authentication. To conquer the flaws, we propose a secure and efficient ECC-based authentication scheme for SIP. Through the informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks including the attacks found in Arshad et al.’s scheme. In addition, the performance analysis shows that our scheme has similar or better efficiency in comparison with other existing ECC-based authentication schemes for SIP.

  20. Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings

    Directory of Open Access Journals (Sweden)

    Caixue Zhou

    2017-01-01

    Full Text Available Generalized signcryption (GSC) can be applied as an encryption scheme, a signature scheme, or a signcryption scheme with only one algorithm and one key pair. A key-insulated mechanism can resolve the private key exposure problem. To ensure the security of cloud storage, we introduce the key-insulated mechanism into GSC and propose a concrete scheme without bilinear pairings in the certificateless cryptosystem setting. We provide a formal definition and a security model of certificateless key-insulated GSC. Then, we prove that our scheme is confidential under the computational Diffie-Hellman (CDH) assumption and unforgeable under the elliptic curve discrete logarithm (EC-DL) assumption. Our scheme also supports both random-access key update and secure key update. Finally, we evaluate the efficiency of our scheme and demonstrate that it is highly efficient. Thus, our scheme is more suitable for users who communicate with the cloud using mobile devices.

  1. Anonymous Credential Schemes with Encrypted Attributes

    NARCIS (Netherlands)

    Guajardo Merchan, J.; Mennink, B.; Schoenmakers, B.

    2011-01-01

    In anonymous credential schemes, users obtain credentials on certain attributes from an issuer, and later show these credentials to a relying party anonymously and without fully disclosing the attributes. In this paper, we introduce the notion of (anonymous) credential schemes with encrypted

  2. Simple Numerical Schemes for the Korteweg-deVries Equation

    International Nuclear Information System (INIS)

    McKinstrie, C. J.; Kozlov, M.V.

    2000-01-01

    Two numerical schemes, which simulate the propagation of dispersive non-linear waves, are described. The first is a split-step Fourier scheme for the Korteweg-de Vries (KdV) equation. The second is a finite-difference scheme for the modified KdV equation. The stability and accuracy of both schemes are discussed. These simple schemes can be used to study a wide variety of physical processes that involve dispersive nonlinear waves

  3. Simple Numerical Schemes for the Korteweg-deVries Equation

    Energy Technology Data Exchange (ETDEWEB)

    C. J. McKinstrie; M. V. Kozlov

    2000-12-01

    Two numerical schemes, which simulate the propagation of dispersive non-linear waves, are described. The first is a split-step Fourier scheme for the Korteweg-de Vries (KdV) equation. The second is a finite-difference scheme for the modified KdV equation. The stability and accuracy of both schemes are discussed. These simple schemes can be used to study a wide variety of physical processes that involve dispersive nonlinear waves.
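
    For readers who want a feel for the first of the two schemes, the following is a minimal split-step Fourier sketch for the KdV equation u_t + 6 u u_x + u_xxx = 0 on a periodic domain; the dispersive part is integrated exactly in Fourier space and the nonlinear part with a simple explicit step. The domain size, resolution and single-soliton initial condition are assumptions for the example, not parameters from the report.

    ```python
    import numpy as np

    # Split-step Fourier sketch for u_t + 6 u u_x + u_xxx = 0 on a periodic domain.
    N, L = 512, 50.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

    c, x0 = 4.0, -10.0
    u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2   # single-soliton initial condition

    dt, steps = 1e-4, 5000
    dispersion = np.exp(1j * k**3 * dt)   # exact solution operator of u_t + u_xxx = 0 in Fourier space

    for _ in range(steps):
        # Linear (dispersive) substep, solved exactly in Fourier space.
        u = np.real(np.fft.ifft(dispersion * np.fft.fft(u)))
        # Nonlinear substep u_t + 6 u u_x = 0, i.e. u_t = -3 (u^2)_x, one explicit Euler step.
        ux2 = np.real(np.fft.ifft(1j * k * np.fft.fft(u**2)))
        u = u - 3.0 * dt * ux2
    ```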

  4. Performance comparison of renewable incentive schemes using optimal control

    International Nuclear Information System (INIS)

    Oak, Neeraj; Lawson, Daniel; Champneys, Alan

    2014-01-01

    Many governments worldwide have instituted incentive schemes for renewable electricity producers in order to meet carbon emissions targets. These schemes aim to boost investment and hence growth in renewable energy industries. This paper examines four such schemes: premium feed-in tariffs, fixed feed-in tariffs, feed-in tariffs with contract for difference and the renewable obligations scheme. A generalised mathematical model of industry growth is presented and fitted with data from the UK onshore wind industry. The model responds to subsidy from each of the four incentive schemes. A utility or ‘fitness’ function that maximises installed capacity at some fixed time in the future while minimising total cost of subsidy is postulated. Using this function, the optimal strategy for provision and timing of subsidy for each scheme is calculated. Finally, a comparison of the performance of each scheme, given that they use their optimal control strategy, is presented. This model indicates that the premium feed-in tariff and renewable obligation scheme produce the joint best results. - Highlights: • Stochastic differential equation model of renewable energy industry growth and prices, using UK onshore wind data 1992–2010. • Cost of production reduces as cumulative installed capacity of wind energy increases, consistent with the theory of learning. • Studies the effect of subsidy using feed-in tariff schemes, and the ‘renewable obligations’ scheme. • We determine the optimal timing and quantity of subsidy required to maximise industry growth and minimise costs. • The premium feed-in tariff scheme and the renewable obligations scheme produce the best results under optimal control

  5. A rational function based scheme for solving advection equation

    International Nuclear Information System (INIS)

    Xiao, Feng; Yabe, Takashi.

    1995-07-01

    A numerical scheme for solving advection equations is presented. The scheme is derived from a rational interpolation function. Some properties of the scheme with respect to convex-concave preservation and monotonicity preservation are discussed. We find that the scheme is attractive in suppressing overshoots and undershoots even in the vicinity of discontinuities. The scheme can also easily be switched to the CIP (Cubic Interpolated Pseudo-particle) method to obtain third-order accuracy in smooth regions. A number of numerical tests are carried out to show the non-oscillatory and less diffusive nature of the scheme. (author)

  6. Reconciling the good patient persona with problematic and non-problematic humour: a grounded theory.

    Science.gov (United States)

    McCreaddie, May; Wiggins, Sally

    2009-08-01

    Humour is a complex phenomenon, incorporating cognitive, emotional, behavioural, physiological and social aspects. Research to date has concentrated on reviewing (rehearsed) humour and 'healthy' individuals via correlation studies using personality-trait based measurements, principally on psychology students in laboratory conditions. Nurses are key participants in modern healthcare interactions; however, little is known about their (spontaneous) humour use. A middle-range theory that accounted for humour use in CNS-patient interactions was the aim of the study. The study reviewed the antecedents of humour, exploring the use of humour in relation to (motivational) humour theories. The setting was twenty Clinical Nurse Specialist-patient interactions and their respective peer groups in a country of the United Kingdom. An evolved constructivist grounded theory approach investigated a complex and dynamic phenomenon in situated contexts. Naturally occurring interactions provided the basis of the data corpus, with follow-up interviews, focus groups, observation and field notes. A constant comparative approach to data collection and analysis was applied until theoretical sufficiency was reached, incorporating an innovative interpretative and illustrative framework. This paper reports the grounded theory and is principally based upon 20 CNS-patient interactions and follow-up data. The negative case analysis and peer group interactions will be reported in separate publications. The theory purports that patients use humour to reconcile a good patient persona. The core category of the good patient persona, two of its constituent elements (compliance, sycophancy), conditions under which it emerges and how this relates to the use of humour are outlined and discussed. In seeking to establish and maintain a meaningful and therapeutic interaction with the CNS, patients enact a good patient persona to varying degrees depending upon the situated context. The good patient persona needs to be maintained within the

  7. Algebraic K-theory of generalized schemes

    DEFF Research Database (Denmark)

    Anevski, Stella Victoria Desiree

    Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry, and geometry over the field with one element. It also permits the construction of important Arakelov theoretical objects, such as the completion \Spec Z of Spec Z. In this thesis, we prove a projective bundle theorem for the field with one element and compute the Chow rings of the generalized schemes Sp\ec ZN, appearing in the construction of \Spec Z.

  8. A modified symplectic PRK scheme for seismic wave modeling

    Science.gov (United States)

    Liu, Shaolin; Yang, Dinghui; Ma, Jian

    2017-02-01

    A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
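
    The modified PRK scheme itself is not reproduced here, but the basic idea of symplectic time stepping for the semi-discrete wave equation can be illustrated with a standard second-order Stoermer-Verlet step; the 1D grid, wave velocity and Gaussian initial pulse below are assumptions for the example.

    ```python
    import numpy as np

    # Standard Stoermer-Verlet (symplectic) time stepping for the semi-discrete 1D
    # wave equation u_tt = c^2 u_xx, written as u_t = v, v_t = c^2 L u.  This is a
    # plain symplectic integrator, not the modified PRK scheme of the paper.
    N, dx, cvel = 400, 10.0, 3000.0
    x = np.arange(N) * dx
    u = np.exp(-((x - 2000.0) / 100.0) ** 2)    # initial displacement pulse
    v = np.zeros(N)                             # initial velocity

    def laplacian(u, dx):
        """Second-order central difference with fixed (zero) end points."""
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        return lap

    dt = 0.5 * dx / cvel                         # CFL-limited time step
    for _ in range(1000):
        u = u + 0.5 * dt * v                     # drift half-step
        v = v + dt * cvel**2 * laplacian(u, dx)  # kick with the discrete Laplacian
        u = u + 0.5 * dt * v                     # drift half-step
    ```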

  9. A staggered conservative scheme for every Froude number in rapidly varied shallow water flows

    Science.gov (United States)

    Stelling, G. S.; Duinmeijer, S. P. A.

    2003-12-01

    This paper proposes a numerical technique that in essence is based upon the classical staggered grids and implicit numerical integration schemes, but that can be applied to problems that include rapidly varied flows as well. Rapidly varied flows occur, for instance, in hydraulic jumps and bores. Inundation of dry land implies sudden flow transitions due to obstacles such as road banks. Near such transitions the grid resolution is often low compared to the gradients of the bathymetry. In combination with the local invalidity of the hydrostatic pressure assumption, conservation properties become crucial. The scheme described here, combines the efficiency of staggered grids with conservation properties so as to ensure accurate results for rapidly varied flows, as well as in expansions as in contractions. In flow expansions, a numerical approximation is applied that is consistent with the momentum principle. In flow contractions, a numerical approximation is applied that is consistent with the Bernoulli equation. Both approximations are consistent with the shallow water equations, so under sufficiently smooth conditions they converge to the same solution. The resulting method is very efficient for the simulation of large-scale inundations.
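
    A very reduced sketch of the staggered-grid layout is given below: water level at cell centres, velocity at cell faces, with mass conserved exactly in a closed basin. It deliberately omits the momentum- and Bernoulli-consistent advection treatment that is the paper's actual contribution, and all grid and flow parameters are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal 1D staggered-grid shallow-water sketch (linearized, closed basin):
    # water level zeta at cell centres, velocity u at cell faces.
    g, depth = 9.81, 10.0
    N, dx, dt = 200, 100.0, 2.0          # dt satisfies sqrt(g*depth)*dt/dx < 1
    zeta = np.zeros(N)
    zeta[90:110] = 0.5                   # initial hump of water
    u = np.zeros(N + 1)                  # face velocities; end faces stay closed (u = 0)

    vol0 = dx * zeta.sum()
    for _ in range(500):
        # Momentum: du/dt = -g d(zeta)/dx at the interior faces.
        u[1:-1] -= g * dt / dx * (zeta[1:] - zeta[:-1])
        # Continuity: d(zeta)/dt = -dq/dx with flux q = depth * u (mass conserving).
        q = depth * u
        zeta -= dt / dx * (q[1:] - q[:-1])

    print("volume drift:", dx * zeta.sum() - vol0)   # zero up to round-off
    ```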

  10. Finite Difference Schemes as Algebraic Correspondences between Layers

    Science.gov (United States)

    Malykh, Mikhail; Sevastianov, Leonid

    2018-02-01

    For some differential equations, especially for the Riccati equation, new finite difference schemes are suggested. These schemes define projective correspondences between the layers. Calculation using these schemes can be extended to the area beyond movable singularities of the exact solution without any error accumulation.
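
    One simple example of such a correspondence, given here as an illustration rather than the authors' general construction, is the fractional-linear (Moebius) step for the Riccati-type equation y' = y^2: it reproduces the exact solution and can be continued straight through the movable pole at t = 1 without error accumulation.

    ```python
    # Illustrative sketch for y' = y^2, y(0) = 1, whose exact solution
    # y(t) = 1/(1 - t) has a movable pole at t = 1.
    def moebius_step(y, h):
        # Exact flow map over a step h; a fractional-linear (projective) map.
        return y / (1.0 - h * y)

    h, y, t = 0.003, 1.0, 0.0
    for _ in range(500):                 # integrates through t = 1.5, beyond the pole
        y, t = moebius_step(y, h), t + h

    print(f"t = {t:.2f}: scheme {y:.6f} vs exact {1.0 / (1.0 - t):.6f}")
    ```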

  11. Financial incentive schemes in primary care

    Directory of Open Access Journals (Sweden)

    Gillam S

    2015-09-01

    Full Text Available Stephen Gillam, Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, UK. Abstract: Pay-for-performance (P4P) schemes have become increasingly common in primary care, and this article reviews their impact. It is based primarily on existing systematic reviews. The evidence suggests that P4P schemes can change health professionals' behavior and improve recorded disease management of those clinical processes that are incentivized. P4P may narrow inequalities in performance when comparing deprived with nondeprived areas. However, such schemes have unintended consequences. Whether P4P improves the patient experience, the outcomes of care or population health is less clear. These practical uncertainties mirror the ethical concerns of many clinicians that a reductionist approach to managing markers of chronic disease runs counter to the humanitarian values of family practice. The variation in P4P schemes between countries reflects different historical and organizational contexts. With so much uncertainty regarding the effects of P4P, policy makers are well advised to proceed carefully with the implementation of such schemes until and unless clearer evidence for their cost–benefit emerges. Keywords: financial incentives, pay for performance, quality improvement, primary care

  12. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable, i.e., they prohibit a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation, and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy-conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as to the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287]
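
    A minimal illustration of the property at stake, assuming constant-velocity advection on a periodic grid (and therefore much simpler than the scheme derived in the paper), is sketched below: the central-difference convection operator is skew-symmetric, and combining it with the implicit midpoint rule gives a fully discrete update that conserves the variance sum(u**2) exactly.

    ```python
    import numpy as np

    # Skew-symmetric central-difference convection operator for du/dt = -c du/dx
    # on a periodic grid, advanced with the implicit midpoint rule.
    N, c, dx, dt = 64, 1.0, 1.0, 0.5
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = -c / (2 * dx)
        A[i, (i - 1) % N] = +c / (2 * dx)      # A.T == -A, i.e. skew-symmetric

    I = np.eye(N)
    step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)    # Cayley map, orthogonal

    u = np.exp(-0.5 * ((np.arange(N) - N / 2) / 4.0) ** 2)         # smooth initial bump
    var0 = np.sum(u ** 2)
    for _ in range(1000):
        u = step @ u
    print("relative variance drift:", abs(np.sum(u ** 2) - var0) / var0)  # close to round-off
    ```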

  13. Effect of climate change on the irrigation and discharge scheme for winter wheat in Huaibei Plain, China

    Science.gov (United States)

    Zhu, Y.; Ren, L.; Lü, H.

    2017-12-01

    On the Huaibei Plain of Anhui Province, China, winter wheat (WW) is the most prominent crop. The study area has a transitional climate and a shallow water table. The regional climate is already complex, and global warming makes it more so. The winter wheat growing period, from October to June, falls largely in the dry season, so growth always depends partly on irrigation water. Under such complex climate change, rainfall varies between growing seasons and water table elevations vary as well. The water table therefore supplies a variable moisture exchange between soil water and groundwater, which affects the irrigation and discharge scheme for plant growth and yield. On the Huaibei Plain, environmental pollution is serious because of the agricultural use of chemical fertilizers, pesticides, herbicides, etc. To protect river water and groundwater from pollution, the irrigation and discharge scheme should be estimated accurately. Therefore, determining the irrigation and discharge scheme for winter wheat under climate change is important for plant growth management decision-making. Based on field observations and local weather data of 2004-2005 and 2005-2006, the numerical model HYDRUS-1D was validated and calibrated by comparing simulated and measured root-zone soil water contents. The validated model was used to estimate the irrigation and discharge scheme for 2010-2090 under the scenarios described by HadCM3 (the 1970 to 2000 climate states are taken as baselines), with winter wheat growth kept in an optimum state as indicated by growth height and LAI.

  14. Generalization of binary tensor product schemes depends upon four parameters

    International Nuclear Information System (INIS)

    Bashir, R.; Bari, M.; Mustafa, G.

    2018-01-01

    This article deals with general formulae for parametric and non-parametric bivariate subdivision schemes with four parameters. By assigning specific values to those parameters we obtain some special cases of existing tensor product schemes as well as a new proposed scheme. The schemes produced by the general formulae can be interpolating, approximating or relaxed. Approximating bivariate subdivision schemes produce different surfaces compared to interpolating bivariate subdivision schemes. Polynomial reproduction and polynomial generation are desirable properties of subdivision schemes; the capability for polynomial reproduction and generation is strongly connected with smoothness, sum rules, convergence and approximation order. We also calculate the polynomial generation and polynomial reproduction of a 9-point bivariate approximating subdivision scheme. A comparison of polynomial reproduction, polynomial generation and continuity of existing and proposed schemes is also established. Some numerical examples are presented to show the behavior of the bivariate schemes. (author)
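
    To make the tensor-product construction concrete, the sketch below refines a control grid by applying a univariate binary mask first along rows and then along columns; the Chaikin corner-cutting mask is used as an assumed example of an approximating scheme and is not the 9-point scheme analysed in the article.

    ```python
    import numpy as np

    def chaikin_refine_1d(P):
        """One binary corner-cutting (Chaikin) subdivision step along the first axis."""
        Q = np.empty((2 * (P.shape[0] - 1),) + P.shape[1:])
        Q[0::2] = 0.75 * P[:-1] + 0.25 * P[1:]
        Q[1::2] = 0.25 * P[:-1] + 0.75 * P[1:]
        return Q

    def tensor_product_refine(grid, levels=3):
        """Binary tensor-product subdivision: refine rows, then columns, at each level."""
        for _ in range(levels):
            grid = chaikin_refine_1d(grid)                                  # along axis 0
            grid = chaikin_refine_1d(grid.swapaxes(0, 1)).swapaxes(0, 1)    # along axis 1
        return grid

    control = np.random.rand(5, 5)                  # illustrative control net of z-values
    surface = tensor_product_refine(control, levels=4)
    print(control.shape, "->", surface.shape)
    ```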

  15. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finitie Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    equation in the PCICE-FEM scheme is provided with sufficient internal energy information to avoid iteration. The ability of the PCICE-FEM scheme to accurately and efficiently simulate a wide variety of inviscid and viscous compressible flows is demonstrated here.

  16. Latest Developments on Obtaining Accurate Measurements with Pitot Tubes in ZPG Turbulent Boundary Layers

    Science.gov (United States)

    Nagib, Hassan; Vinuesa, Ricardo

    2013-11-01

    The ability of available Pitot tube corrections to provide accurate mean velocity profiles in ZPG boundary layers is re-examined following the recent work by Bailey et al. Measurements by Bailey et al., carried out with probes of diameters ranging from 0.2 to 1.89 mm, together with new data taken with larger diameters up to 12.82 mm, show deviations with respect to available high-quality datasets and hot-wire measurements in the same Reynolds number range. These deviations are significant in the buffer region around y+ = 30 - 40, and lead to disagreement in the von Kármán coefficient κ extracted from profiles. New forms for the shear, near-wall and turbulence corrections are proposed, highlighting the importance of the last one. Improved agreement in mean velocity profiles is obtained with the new forms, where the shear and near-wall corrections contribute around 85%, and the remaining 15% of the total correction comes from the turbulence correction. Finally, available algorithms to correct the wall position in profile measurements of wall-bounded flows are tested, using as a benchmark the corrected Pitot measurements with artificially simulated probe shifts and blockage effects. We develop a new scheme, κB - Musker, which is able to accurately locate the wall position.

  17. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    Science.gov (United States)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of WS generated features is quite large, SRKDA is required to reduce the extracted features to enhance the discrimination. The results, based on two public databases (the PolyU Hyper Spectral Palmprint database and the PolyU Multi Spectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER)=0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER=0.019%) for the multispectral database.

  18. Signature Schemes Secure against Hard-to-Invert Leakage

    DEFF Research Database (Denmark)

    Faust, Sebastian; Hazay, Carmit; Nielsen, Jesper Buus

    2012-01-01

    In this work, we propose the first constructions of digital signature schemes that are secure in the auxiliary input model, where the leakage may information-theoretically reveal the entire secret key. Our main contribution is a digital signature scheme that is secure against chosen message attacks when given an exponentially hard-to-invert function of the secret key. As a second contribution, we construct a signature scheme that achieves security for random messages assuming that the adversary is given a polynomial-time hard-to-invert function. Here, polynomial-hardness is required even when given the entire public key – so-called weak auxiliary input security. We show that such signature schemes readily give us auxiliary input secure identification schemes.

  19. ONU Power Saving Scheme for EPON System

    Science.gov (United States)

    Mukai, Hiroaki; Tano, Fumihiko; Tanaka, Masaki; Kozaki, Seiji; Yamanaka, Hideaki

    PON (Passive Optical Network) achieves FTTH (Fiber To The Home) economically by sharing an optical fiber among multiple subscribers. Recently, global climate change has been recognized as a serious near-term problem, and power saving techniques for electronic devices are important. In PON systems, the ONU (Optical Network Unit) power saving scheme has been studied and defined in XG-PON. In this paper, we propose an ONU power saving scheme for EPON. We then present an analysis of the power reduction effect and the data transmission delay caused by the ONU power saving scheme. Based on the analysis, we propose an efficient provisioning method for the ONU power saving scheme which is applicable to both XG-PON and EPON.

  20. A survey of Strong Convergent Schemes for the Simulation of ...

    African Journals Online (AJOL)

    We considered strongly convergent stochastic schemes for the simulation of stochastic differential equations. The stochastic Taylor expansion, which is the main tool used for the derivation of strongly convergent schemes; the Euler-Maruyama scheme, the Milstein scheme, stochastic multistep schemes, and implicit and explicit schemes were ...

  1. Highly accurate Michelson type wavelength meter that uses a rubidium stabilized 1560 nm diode laser as a wavelength reference

    International Nuclear Information System (INIS)

    Masuda, Shin; Kanoh, Eiji; Irisawa, Akiyoshi; Niki, Shoji

    2009-01-01

    We investigated the accuracy limitation of a wavelength meter installed in a vacuum chamber in order to develop a highly accurate meter based on a Michelson interferometer for the 1550 nm optical communication bands. We found that an error on the order of parts per million could not be avoided using the well-known wavelength compensation equations. Chromatic dispersion of the refractive index in air can almost be disregarded when a 1560 nm wavelength produced by a rubidium (Rb) stabilized distributed feedback (DFB) diode laser is used as a reference wavelength. We describe a novel dual-wavelength self-calibration scheme that maintains high accuracy of the wavelength meter. The method uses the fundamental and second-harmonic wavelengths of an Rb-stabilized DFB diode laser. Consequently, a highly accurate Michelson type wavelength meter with an absolute accuracy of 5×10⁻⁸ (10 MHz, 0.08 pm) over a wide wavelength range including the optical communication bands was achieved without the need for a vacuum chamber.

  2. A Fuzzy Commitment Scheme with McEliece's Cipher

    Directory of Open Access Journals (Sweden)

    Deo Brat Ojha

    2010-04-01

    Full Text Available In this paper an attempt has been made to explain a fuzzy commitment scheme with the McEliece scheme. The efficiency and security of this cryptosystem are comparatively better than those of other cryptosystems. This scheme is one of the interesting candidates for post-quantum cryptography; hence our interest in combining this system with a fuzzy commitment scheme. The concept itself is illustrated with the help of a simple situation, and validation through mathematical experimental verification is provided.

  3. Feasible Teleportation Schemes with Five-Atom Entangled State

    Institute of Scientific and Technical Information of China (English)

    XUE Zheng-Yuan; YI You-Min; CAO Zhuo-Liang

    2006-01-01

    Teleportation schemes with a five-atom entangled state are investigated. In such teleportation schemes, Bell state measurements (BSMs) are difficult to realize physically, so we investigate another strategy using separate measurements instead of BSMs, based on cavity quantum electrodynamics techniques. The scheme for two-atom entangled state teleportation is a controlled and probabilistic one. For the teleportation of the three-atom entangled state, the scheme is a probabilistic one. The fidelity and the probability of successful teleportation are also obtained.

  4. Homogenization scheme for acoustic metamaterials

    KAUST Repository

    Yang, Min

    2014-02-26

    We present a homogenization scheme for acoustic metamaterials that is based on reproducing the lowest orders of scattering amplitudes from a finite volume of metamaterials. This approach is noted to differ significantly from that of coherent potential approximation, which is based on adjusting the effective-medium parameters to minimize scatterings in the long-wavelength limit. With the aid of metamaterials' eigenstates, the effective parameters, such as mass density and elastic modulus, can be obtained by matching the surface responses of a metamaterial's structural unit cell with a piece of homogenized material. From the Green's theorem applied to the exterior domain problem, matching the surface responses is noted to be the same as reproducing the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost exactly with numerical simulations and experiments, and that the scheme's validity is constrained by the number of dominant surface multipoles instead of the usual long-wavelength assumption. In particular, the validity extends to the full band in one dimension and to regimes near the boundaries of the Brillouin zone in two dimensions.

  5. A simple extension of Roe's scheme for real gases

    Energy Technology Data Exchange (ETDEWEB)

    Arabi, Sina, E-mail: sina.arabi@polymtl.ca; Trépanier, Jean-Yves; Camarero, Ricardo

    2017-01-15

    The purpose of this paper is to develop a highly accurate numerical algorithm to model real gas flows in local thermodynamic equilibrium (LTE). The Euler equations are solved using a finite volume method based on Roe's flux difference splitting scheme including real gas effects. A novel algorithm is proposed to calculate the Jacobian matrix which satisfies the flux difference splitting exactly in the average state for a general equation of state. This algorithm increases the robustness and accuracy of the method, especially around the contact discontinuities and shock waves where the gas properties jump appreciably. The results are compared with an exact solution of the Riemann problem for the shock tube which considers the real gas effects. In addition, the method is applied to a blunt cone to illustrate the capability of the proposed extension in solving two dimensional flows.

  6. The Use of Meteosat Second Generation Satellite Data Within A New Type of Solar Irradiance Calculation Scheme

    Science.gov (United States)

    Mueller, R. W.; Beyer, H. G.; Cros, S.; Dagestad, K. F.; Dumortier, D.; Ineichen, P.; Hammer, A.; Heinemann, D.; Kuhlemann, R.; Olseth, J. A.; Piernavieja, G.; Reise, C.; Schroedter, M.; Skartveit, A.; Wald, L.

    1-University of Oldenburg, 2-University of Appl. Sciences Magdeburg, 3-Ecole des Mines de Paris, 4-University of Bergen, 5-Ecole Nationale des Travaux Publics de l'Etat, 6-University of Geneva, 7-Instituto Tecnologico de Canarias, 8-Fraunhofer Institute for Solar Energy Systems, 9-German Aerospace Center. Geostationary satellites such as Meteosat provide cloud information with a high spatial and temporal resolution. Such satellites are therefore not only useful for weather forecasting, but also for the estimation of solar irradiance, since the knowledge of the light reflected by clouds is the basis for the calculation of the transmitted light. Additionally, knowledge of the atmospheric parameters involved in scattering and absorption of the sunlight is necessary for an accurate calculation of the solar irradiance. An accurate estimation of the downward solar irradiance is not only of particular importance for the assessment of the radiative forcing of the climate system, but also necessary for efficient planning and operation of solar energy systems. Currently, most of the operational calculation schemes for solar irradiance are semi-empirical. They use cloud information from the current Meteosat satellite and climatologies of atmospheric parameters, e.g. turbidity (aerosols and water vapor). The Meteosat Second Generation satellites (MSG, to be launched in 2002) will provide not only a higher spatial and temporal resolution, but also the potential for the retrieval of atmospheric parameters such as ozone, water vapor and, with restrictions, aerosols. With this more detailed knowledge of atmospheric parameters it is natural to set up a new calculation scheme based on radiative transfer models using the retrieved atmospheric parameters as input. Unfortunately the possibility of deriving aerosol information from MSG data is limited. As a consequence the use of data from additional satellite instruments (e.g. GOME/ATSR-2) is needed. Within this

  7. A hybrid pi control scheme for airship hovering

    International Nuclear Information System (INIS)

    Ashraf, Z.; Choudhry, M.A.; Hanif, A.

    2012-01-01

    Airships provide many attractive applications in the aerospace industry, including transportation of heavy payloads, tourism, emergency management, communication, and hover and vision based applications. Hovering control of airships has many uses in different engineering fields. However, it is a difficult problem to sustain the hover condition while maintaining controllability. So far, different solutions have been proposed in the literature, but most of them are difficult to analyse and implement. In this paper, we present a simple and efficient way to design a multi-input multi-output hybrid PI control scheme for an airship. It can maintain stability of the plant by rejecting disturbance inputs to ensure robustness. A control scheme based on feedback theory is proposed that uses principles of optimality with integral action for hovering applications. Simulations are carried out in MATLAB to examine the proposed control scheme for hovering in different wind conditions. A comparison of the technique with an existing scheme is performed, demonstrating the effectiveness of the control scheme. (author)

  8. Privacy Preserving Mapping Schemes Supporting Comparison

    NARCIS (Netherlands)

    Tang, Qiang

    2010-01-01

    To cater to the privacy requirements in cloud computing, we introduce a new primitive, namely Privacy Preserving Mapping (PPM) schemes supporting comparison. An PPM scheme enables a user to map data items into images in such a way that, with a set of images, any entity can determine the <, =, >

  9. Consolidation of the health insurance scheme

    CERN Document Server

    Association du personnel

    2009-01-01

    In the last issue of Echo, we highlighted CERN’s obligation to guarantee a social security scheme for all employees, pensioners and their families. In that issue we talked about the first component: pensions. This time we shall discuss the other component: the CERN Health Insurance Scheme (CHIS).

  10. A numerical scheme for the generalized Burgers–Huxley equation

    Directory of Open Access Journals (Sweden)

    Brajesh K. Singh

    2016-10-01

    Full Text Available In this article, a numerical solution of the generalized Burgers–Huxley (gBH) equation is approximated by using a new scheme: the modified cubic B-spline differential quadrature method (MCB-DQM). The scheme is based on the differential quadrature method in which the weighting coefficients are obtained by using modified cubic B-splines as a set of basis functions. This scheme reduces the equation into a system of first-order ordinary differential equations (ODEs) which is solved by adopting the SSP-RK43 scheme. Further, it is shown that the proposed scheme is stable. The efficiency of the proposed method is illustrated by four numerical experiments, which confirm that the obtained results are in good agreement with earlier studies. This scheme is an easy, economical and efficient technique for finding numerical solutions for various kinds of nonlinear physical models as compared to the earlier schemes.

  11. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    Science.gov (United States)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  12. An Integrated H-G Scheme Identifying Areas for Soil Remediation and Primary Heavy Metal Contributors: A Risk Perspective.

    Science.gov (United States)

    Zou, Bin; Jiang, Xiaolu; Duan, Xiaoli; Zhao, Xiuge; Zhang, Jing; Tang, Jingwen; Sun, Guoqing

    2017-03-23

    Traditional sampling for soil pollution evaluation is cost intensive and has limited representativeness. Therefore, developing methods that can accurately and rapidly identify at-risk areas and the contributing pollutants is imperative for soil remediation. In this study, we propose an innovative integrated H-G scheme combining human health risk assessment and geographical detector methods, based on geographical information system technology, and validate its feasibility in a renewable resource industrial park in mainland China. With a discrete site investigation of cadmium (Cd), arsenic (As), copper (Cu), mercury (Hg) and zinc (Zn) concentrations, the continuous surfaces of carcinogenic risk and non-carcinogenic risk caused by these heavy metals were estimated and mapped. Source apportionment analysis using geographical detector methods further revealed that these risks were primarily attributed to As, according to the power of the determinant and its associated synergic actions with other heavy metals. Concentrations of the critical As and Cd, and the associated carcinogenic risks (CRs), are close to the safe thresholds after remediating the risk areas identified by the integrated H-G scheme. Therefore, the integrated H-G scheme provides an effective approach to support decision-making for regional contaminated soil remediation at fine spatial resolution with limited sampling data over a large geographical extent.
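
    The record gives no formulas, but the non-carcinogenic part of such an assessment is commonly built from an average daily dose and a hazard quotient; the sketch below shows that generic soil-ingestion calculation. All exposure parameters and the reference dose are illustrative placeholders, not the values used in the study.

    ```python
    # Generic soil-ingestion hazard quotient (HQ); all parameters are placeholders.
    def hazard_quotient(conc_mg_per_kg, rfd_mg_per_kg_day,
                        ingestion_mg_per_day=100.0, exposure_freq_days=350.0,
                        exposure_years=24.0, body_weight_kg=60.0):
        averaging_days = exposure_years * 365.0
        # Average daily dose via soil ingestion (mg/kg/day); 1e-6 converts mg of soil to kg.
        add = (conc_mg_per_kg * ingestion_mg_per_day * 1e-6
               * exposure_freq_days * exposure_years) / (body_weight_kg * averaging_days)
        return add / rfd_mg_per_kg_day    # HQ > 1 flags potential non-carcinogenic risk

    # Example: arsenic at 30 mg/kg against an assumed oral reference dose of 3e-4 mg/kg/day.
    print(f"HQ = {hazard_quotient(30.0, 3e-4):.3f}")
    ```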

  13. Digital Signature Schemes with Complementary Functionality and Applications

    OpenAIRE

    S. N. Kyazhin

    2012-01-01

    Digital signature schemes with additional functionality (an undeniable signature, a signature of the designated confirmee, a signature blind, a group signature, a signature of the additional protection) and examples of their application are considered. These schemes are more practical, effective and useful than schemes of ordinary digital signature.

  14. A combined spectrum sensing and OFDM demodulation scheme

    NARCIS (Netherlands)

    Heskamp, M.; Slump, Cornelis H.

    2009-01-01

    In this paper we propose a combined signaling and spectrum sensing scheme for cognitive radio that can detect in-band primary users while the networks own signal is active. The signaling scheme uses OFDM with phase shift keying modulated sub-carriers, and the detection scheme measures the deviation

  15. Development of a Blood Pressure Measurement Instrument with Active Cuff Pressure Control Schemes

    Directory of Open Access Journals (Sweden)

    Chung-Hsien Kuo

    2017-01-01

    Full Text Available This paper presents an oscillometric blood pressure (BP) measurement approach based on active control schemes of the cuff pressure. Compared with conventional electronic BP instruments, the novelty of the proposed BP measurement approach is to utilize a variable volume chamber which actively and stably alters the cuff pressure during inflating or deflating cycles. The variable volume chamber is operated with a closed-loop pressure control scheme, and it is activated by controlling the piston position of a single-acting cylinder driven by a screw motor. Therefore, the variable volume chamber could significantly eliminate the air turbulence disturbance during the air injection stage when compared to an air pump mechanism. Furthermore, the proposed active BP measurement approach is capable of measuring BP characteristics, including systolic blood pressure (SBP) and diastolic blood pressure (DBP), during the inflating cycle. Two modes, air injection measurement (AIM) and accurate dual-way measurement (ADM), were proposed. According to the experiments on healthy subjects, AIM reduced the measurement time by 34.21% and ADM by 15.78% when compared to a commercial BP monitor. Furthermore, the ADM performed much more consistently (i.e., with less standard deviation in the measurements) when compared to a commercial BP monitor.
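
    Independently of the cuff-control hardware, SBP and DBP are usually extracted from the envelope of the cuff-pressure oscillations; the sketch below shows the common maximum-amplitude/fixed-ratio method applied to a synthetic deflation. The characteristic ratios and the synthetic envelope are assumptions for the example, not the paper's calibration.

    ```python
    import numpy as np

    def fixed_ratio_bp(cuff_pressure, oscillation_amplitude,
                       systolic_ratio=0.55, diastolic_ratio=0.85):
        """Estimate SBP/DBP from an oscillometric envelope with the fixed-ratio method.

        The characteristic ratios are common textbook values used here as placeholders;
        a real device calibrates them empirically.
        """
        i_map = int(np.argmax(oscillation_amplitude))   # index of maximum amplitude (MAP)
        a_max = oscillation_amplitude[i_map]
        # SBP: first point above MAP pressure where the envelope reaches systolic_ratio * a_max.
        above = np.where(oscillation_amplitude[:i_map] >= systolic_ratio * a_max)[0]
        sbp = cuff_pressure[above[0]] if above.size else cuff_pressure[i_map]
        # DBP: first point below MAP pressure where the envelope falls to diastolic_ratio * a_max.
        below = np.where(oscillation_amplitude[i_map:] <= diastolic_ratio * a_max)[0]
        dbp = cuff_pressure[i_map + below[0]] if below.size else cuff_pressure[i_map]
        return sbp, dbp

    # Synthetic deflation from 180 to 40 mmHg with a bell-shaped envelope peaking near 95 mmHg.
    p = np.linspace(180.0, 40.0, 300)
    env = np.exp(-0.5 * ((p - 95.0) / 20.0) ** 2)
    print(fixed_ratio_bp(p, env))
    ```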

  16. The new WAGR data acquisition scheme

    International Nuclear Information System (INIS)

    Ellis, W.E.; Leng, J.H.; Smith, I.C.; Smith, M.R.

    1976-06-01

    The existing WAGR data acquisition equipment was inadequate to meet the requirements introduced by the installation of two additional experimental loops and was in any case due for replacement. A completely new scheme was planned and implemented based on mini-computers, which while preserving all the useful features of the old scheme provided additional flexibility and improved data display. Both the initial objectives of the design and the final implementation are discussed without introducing detailed descriptions of hardware or the programming techniques employed. Although the scheme solves a specific problem the general principles are more widely applicable and could readily be adapted to other data checking and display problems. (author)

  17. An Efficient Homomorphic Aggregate Signature Scheme Based on Lattice

    Directory of Open Access Journals (Sweden)

    Zhengjun Jing

    2014-01-01

    Full Text Available Homomorphic aggregate signature (HAS) is a linearly homomorphic signature (LHS) for multiple users, which can be applied for a variety of purposes, such as multi-source network coding and sensor data aggregation. In order to design an efficient postquantum secure HAS scheme, we borrow the idea of the lattice-based LHS scheme over binary field in the single-user case, and develop it into a new lattice-based HAS scheme in this paper. The security of the proposed scheme is proved by showing a reduction to the single-user case and the signature length remains invariant. Compared with the existing lattice-based homomorphic aggregate signature scheme, our new scheme enjoys shorter signature length and high efficiency.

  18. Performance analysis of best relay selection scheme for amplify-and-forward cooperative networks in identical Nakagami-m channels

    KAUST Repository

    Hussain, Syed Imtiaz

    2010-06-01

    In cooperative communication networks, the use of multiple relays between the source and the destination was proposed to increase the diversity gain. Since the source and all the relays must transmit on orthogonal channels, multiple relay cooperation is considered inefficient in terms of channel resources and bandwidth utilization. To overcome this problem, the concept of best relay selection was recently proposed. In this paper, we analyze the performance of the best relay selection scheme for a cooperative network with multiple relays operating in amplify-and-forward (AF) mode over identical Nakagami-m channels using the exact source-relay-destination signal-to-noise ratio (SNR) expression. We derive accurate closed form expressions for various system parameters including the probability density function (pdf) of the end-to-end SNR, average output SNR, average probability of bit error and average channel capacity. The analytical results are verified through extensive simulations. It is shown that the best relay selection scheme performs better than regular all-relay cooperation.
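
    The selection rule itself is easy to reproduce numerically. The sketch below is a Monte Carlo illustration, assuming Rayleigh fading (the Nakagami-m case with m = 1) and the exact two-hop AF end-to-end SNR; it is meant only to show the best-relay rule, not to reproduce the closed-form analysis of the paper.

    ```python
    import numpy as np

    # Monte Carlo sketch of best-relay selection for two-hop amplify-and-forward relaying.
    rng = np.random.default_rng(0)
    n_relays, n_trials, gamma_bar = 4, 200_000, 10.0    # gamma_bar: average per-hop SNR (linear)

    # Exponential per-hop instantaneous SNRs (Rayleigh fading, i.e. Nakagami-m with m = 1).
    g1 = rng.exponential(gamma_bar, size=(n_trials, n_relays))    # source -> relay hops
    g2 = rng.exponential(gamma_bar, size=(n_trials, n_relays))    # relay -> destination hops

    g_e2e = g1 * g2 / (g1 + g2 + 1.0)     # exact end-to-end SNR of each AF relay path

    best = g_e2e.max(axis=1)              # best-relay selection
    single = g_e2e[:, 0]                  # a single fixed relay, for comparison

    thr = 2.0                             # outage threshold (linear SNR)
    print("average SNR  best / single :", best.mean(), single.mean())
    print("outage prob  best / single :", np.mean(best < thr), np.mean(single < thr))
    ```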

  19. Improvement of burnup analysis for pebble bed reactors with an accumulative fuel loading scheme

    International Nuclear Information System (INIS)

    Simanullang, Irwan Liapto; Obara, Toru

    2015-01-01

    Given the limitations of natural uranium resources, innovative nuclear power plant concepts that increase the efficiency of nuclear fuel utilization are needed. The Pebble Bed Reactor (PBR) shows some potential to achieve high efficiency in natural uranium utilization. To simplify the PBR concept, a PBR with an accumulative fuel loading scheme was introduced and the Fuel Handling System (FHS) removed. In this concept, pebble balls are added little by little into the reactor core until they reach the top of the core, and all pebble balls are discharged from the core at the end of the operation period. A code based on the MVP/MVP-BURN method has been developed to analyze a PBR with the accumulative fuel loading scheme, and the optimum fuel composition for high burnup performance was found using the code. Previous efforts provided several motivations to improve the burnup performance. First, some errors in the input code were corrected; this correction, together with an overall simplification of the input code, makes the analysis of a PBR with the accumulative fuel loading scheme easier. Second, the optimum fuel design had previously been obtained only in infinite geometry. To improve the optimum fuel composition, a parametric survey was performed by varying the amount of heavy metal (HM) uranium per pebble and the degree of uranium enrichment, and the entire parametric survey was carried out in finite geometry. The results show that these improvements in the fuel composition can lead to a more accurate analysis with the code. (author)

  20. PET/CT detectability and classification of simulated pulmonary lesions using an SUV correction scheme

    Science.gov (United States)

    Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven

    2008-03-01

    Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effect of PET acquisition mode, reconstruction method, and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images of an anthropomorphic phantom. The scheme accounts for the partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor used to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch between the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer-drawn ROIs, scaled tumor-background ratios (TBRs) represented actual TBRs more accurately than unscaled TBRs did. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes, at the cost of a small decrease in specificity.
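    As a rough illustration of the correction scheme described above, the sketch below blurs a homogeneous lesion with an assumed Gaussian PSF to obtain a recovery coefficient and rescales a measured SUV with it. The PSF width, voxel size, ROI shape, and SUV value are hypothetical; the actual scheme uses the scanner's measured PSF and the CT-drawn ROI.

```python
# Sketch of the partial-volume scaling factor described in the abstract:
# a homogeneous lesion with the CT-drawn ROI shape is blurred with the PET
# point spread function, and the resulting recovery coefficient rescales the
# measured SUV.  A Gaussian PSF and a spherical ROI are assumed for brevity.
import numpy as np
from scipy.ndimage import gaussian_filter

voxel_mm = 2.0
psf_fwhm_mm = 6.0                           # assumed scanner PSF (FWHM)
sigma_vox = psf_fwhm_mm / (2.355 * voxel_mm)

# Binary mask of the CT-drawn ROI: a 10 mm radius sphere on a 64^3 grid.
grid = np.indices((64, 64, 64)) - 32
roi = (np.sqrt((grid ** 2).sum(axis=0)) * voxel_mm) <= 10.0

# Blur a unit-intensity lesion with the PSF and average inside the ROI.
blurred = gaussian_filter(roi.astype(float), sigma_vox)
recovery_coeff = blurred[roi].mean()        # fraction of true uptake recovered

measured_suv = 3.1                          # hypothetical ROI-mean SUV
corrected_suv = measured_suv / recovery_coeff
print(f"recovery coefficient {recovery_coeff:.2f}, corrected SUV {corrected_suv:.2f}")
```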

  1. A general approach to the construction of 'very accurate' or 'definitive' methods by radiochemical NAA and the role of these methods in QA

    International Nuclear Information System (INIS)

    Dybczynski, R.

    1998-01-01

    Constant progress in the instrumentation and methodology of inorganic trace analysis is not always paralleled by an improvement in the reliability of analytical results. Our approach to the construction of 'very accurate' methods for the determination of selected trace elements in biological materials by RNAA is based on the following assumptions: (i) the radionuclide in question should be selectively and quantitatively isolated from the irradiated sample by a suitable radiochemical scheme, optimized with respect to this particular radionuclide and finally yielding the analyte in a state of high radiochemical purity, which assures interference-free measurement by gamma-ray spectrometry; (ii) the radiochemical scheme should be based on ion exchange and/or extraction column chromatography, allowing easy automatic repetition of the elementary act of distribution of the analyte and accompanying radionuclides between the stationary and mobile phases; (iii) the method should have intrinsic mechanisms incorporated into the procedure that prevent gross errors. Based on these general assumptions, several more specific rules for devising 'very accurate' methods were formulated and applied when elaborating our methods for the determination of copper, cobalt, nickel, cadmium, molybdenum and uranium in biological materials. The significance of such methods for Quality Assurance is pointed out and illustrated by their use in the certification campaign of the new Polish biological CRMs based on tobacco.

  2. A combinatorial characterization scheme for high-throughput investigations of hydrogen storage materials

    International Nuclear Information System (INIS)

    Hattrick-Simpers, Jason R; Chiu, Chun; Bendersky, Leonid A; Tan Zhuopeng; Oguchi, Hiroyuki; Heilweil, Edwin J; Maslar, James E

    2011-01-01

    In order to increase measurement throughput, a characterization scheme has been developed that accurately measures the hydrogen storage properties of materials in quantities ranging from 10 ng to 1 g. Initial identification of promising materials is realized by rapidly screening thin-film composition-spread and thickness-wedge samples using normalized IR emissivity imaging. The hydrogen storage properties of promising samples are confirmed through measurements on single-composition films with a high-sensitivity (resolution <0.3 μg) Sieverts-type apparatus. For selected samples, larger quantities of up to ∼100 mg may be prepared and their (de)hydrogenation and microstructural properties probed via parallel in situ Raman spectroscopy. Final confirmation of the hydrogen storage properties is obtained on ∼1 g powder samples using a combined Raman spectroscopy/Sieverts apparatus.

  3. Quantum election scheme based on anonymous quantum key distribution

    International Nuclear Information System (INIS)

    Zhou Rui-Rui; Yang Li

    2012-01-01

    An unconditionally secure, authority-certified anonymous quantum key distribution scheme using conjugate coding is presented, based on which we construct a quantum election scheme without the help of an entangled state. We show that this election scheme ensures the completeness, soundness, privacy, eligibility, unreusability, fairness, and verifiability of a large-scale election in which the administrator and counter are semi-honest. The election scheme can work even if there are losses and errors in the quantum channels. In addition, any irregularity in this scheme is detectable. (general)

  4. WENO schemes for balance laws with spatially varying flux

    International Nuclear Information System (INIS)

    Vukovic, Senka; Crnjaric-Zic, Nelida; Sopta, Luka

    2004-01-01

    In this paper we construct numerical schemes of high order of accuracy for hyperbolic balance law systems with a spatially variable flux function and a source term of the geometrical type. We start with the original finite difference characteristicwise weighted essentially nonoscillatory (WENO) schemes and then create new schemes by modifying the flux formulations (locally Lax-Friedrichs and Roe with entropy fix) in order to account for the spatially variable flux, and by decomposing the source term in order to obtain balance between the numerical approximations of the flux gradient and of the source term. We apply the extended WENO schemes to the one-dimensional open channel flow equations and to the one-dimensional elastic wave equations. In particular, we prove that in these applications the new schemes are exactly consistent with steady-state solutions from an appropriately chosen subset. Experimentally obtained orders of accuracy of the extended and original WENO schemes are almost identical on a convergence test. Other test problems illustrate the improvement of the proposed schemes relative to the original WENO schemes combined with pointwise source term evaluation. As expected, increasing the formal order of accuracy of the applied WENO reconstructions in all the tests visibly improves the high-resolution properties of the schemes.
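    The extended schemes in this record build on the classical fifth-order WENO reconstruction. The sketch below shows only that standard building block (Jiang-Shu smoothness indicators and nonlinear weights); the spatially variable flux formulations and the balanced source-term decomposition introduced in the paper are not reproduced here.

```python
# Classical fifth-order (Jiang-Shu) WENO reconstruction of the left-biased
# interface value from five cell averages.  This is only the standard
# building block; the paper's flux modifications and source-term balancing
# are not reproduced.
import numpy as np

def weno5_left(v, eps=1e-6):
    """Left-biased value at x_{i+1/2} from cell averages v[i-2..i+2]."""
    vm2, vm1, v0, vp1, vp2 = v

    # Third-order candidate reconstructions on the three substencils.
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0

    # Jiang-Shu smoothness indicators.
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2

    # Nonlinear weights from the optimal linear weights (1/10, 6/10, 3/10).
    a = np.array([0.1 / (eps + b0)**2, 0.6 / (eps + b1)**2, 0.3 / (eps + b2)**2])
    w = a / a.sum()
    return w[0] * p0 + w[1] * p1 + w[2] * p2

# Smooth data reproduces the local trend; a jump suppresses oscillations.
print(weno5_left(np.sin(0.1 * np.arange(5))))
print(weno5_left(np.array([0.0, 0.0, 0.0, 1.0, 1.0])))
```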

  5. Critical analysis of the Bennett-Riedel attack on secure cryptographic key distributions via the Kirchhoff-Law-Johnson-noise scheme.

    Directory of Open Access Journals (Sweden)

    Laszlo B Kish

    Full Text Available Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law-Johnson-noise (KLJN) classical physical cryptographic exchange method, in an effort to disprove the security of the KLJN scheme. They attempted to demonstrate this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR's scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. All our analyses are based on a technically unlimited Eve with infinitely accurate and fast measurements limited only by the laws of physics and statistics. For non-ideal situations and under active (invasive) attacks, the uncertainty principle between measurement duration and statistical errors makes it impossible for Eve to extract the key regardless of the accuracy or speed of her measurements. To show that thermodynamics and noise are essential for the security, we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not work against the KLJN scheme, which employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations describing zero security do not apply to the KLJN scheme. Finally, we give mathematical security proofs for each BR attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.

  6. A repeat-until-success quantum computing scheme

    Energy Technology Data Exchange (ETDEWEB)

    Beige, A [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom); Lim, Y L [DSO National Laboratories, 20 Science Park Drive, Singapore 118230, Singapore (Singapore); Kwek, L C [Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542, Singapore (Singapore)

    2007-06-15

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes.

  7. A repeat-until-success quantum computing scheme

    International Nuclear Information System (INIS)

    Beige, A; Lim, Y L; Kwek, L C

    2007-01-01

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes

  8. Robustly stable adaptive control of a tandem of master-slave robotic manipulators with force reflection by using a multiestimation scheme.

    Science.gov (United States)

    Ibeas, Asier; de la Sen, Manuel

    2006-10-01

    The problem of controlling a tandem of robotic manipulators composing a teleoperation system with force reflection is addressed in this paper. The final objective of this paper is twofold: 1) to design a robust control law capable of ensuring closed-loop stability for robots with uncertainties and 2) to use the so-obtained control law to improve the tracking of each robot to its corresponding reference model, in comparison with previously existing controllers, when the slave is interacting with an obstacle. In this way, a multiestimation-based adaptive controller is proposed. Thus, the master robot is able to follow the constrained motion defined by the slave when interacting with an obstacle more accurately than when a single-estimation-based controller is used, improving the transparency property of the teleoperation scheme. Closed-loop stability is guaranteed if a minimum residence time between different controller parameterizations, which might be updated online when unknown, is respected. Furthermore, an analysis of the teleoperation and stability capabilities of the overall scheme is carried out. Finally, some simulation examples showing the working of the multiestimation scheme complete this paper.
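    As a toy illustration of the multiestimation idea (not the paper's controller), the sketch below runs a bank of candidate models in parallel, selects the one with the smallest filtered identification error, and only switches when a minimum residence time has elapsed. The scalar plant, gains, and residence time are made-up values.

```python
# Minimal sketch of the multiestimation idea: several estimators run in
# parallel, the supervisor picks the one with the smallest recent
# identification error, and switching is only allowed after a minimum
# residence time.  A scalar plant y = theta*u + noise stands in for the robot.
import numpy as np

rng = np.random.default_rng(2)
true_theta, dt, t_res = 1.7, 0.01, 0.5          # t_res: minimum residence time
candidates = np.array([0.5, 1.0, 1.5, 2.0])     # fixed candidate models
errors = np.zeros_like(candidates)              # filtered identification errors
active, last_switch = 0, 0.0

for k in range(1, 1001):
    t = k * dt
    u = np.sin(2 * np.pi * t)                   # excitation input
    y = true_theta * u + 0.01 * rng.standard_normal()
    # Exponentially filtered squared prediction error of each candidate.
    errors = 0.99 * errors + 0.01 * (y - candidates * u) ** 2
    best = int(np.argmin(errors))
    if best != active and (t - last_switch) >= t_res:
        active, last_switch = best, t           # respect the residence time

print("selected model parameter:", candidates[active])
```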

  9. Reconciling the Reynolds number dependence of scalar roughness length and laminar resistance

    Science.gov (United States)

    Li, D.; Rigden, A. J.; Salvucci, G.; Liu, H.

    2017-12-01

    The scalar roughness length and laminar resistance are necessary for computing scalar fluxes in numerical simulations and experimental studies. Their dependence on flow properties such as the Reynolds number remains controversial. In particular, two important power laws (1/4 and 1/2), proposed by Brutsaert and Zilitinkevich, respectively, are commonly seen in various parameterizations and models. Building on a previously proposed phenomenological model for interactions between the viscous sublayer and the turbulent flow, it is shown here that the two scaling laws can be reconciled. The "1/4" power law corresponds to the situation where the vertical diffusion is balanced by the temporal change or advection due to a constant velocity in the viscous sublayer, while the "1/2" power law scaling corresponds to the situation where the vertical diffusion is balanced by the advection due to a linear velocity profile in the viscous sublayer. In addition, the recently proposed "1" power law scaling is also recovered, which corresponds to the situation where molecular diffusion dominates the scalar budget in the viscous sublayer. The formulation proposed here provides a unified framework for understanding the onset of these different scaling laws and offers a new perspective on how to evaluate them experimentally.
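    For orientation, the scaling families discussed above are often written in terms of the roughness Reynolds number and the sublayer parameter relating the momentum and scalar roughness lengths; the sketch below lists only the generic exponents, with prefactors and the Schmidt/Prandtl-number dependence omitted, and conventions varying across the literature.

```latex
% Sketch of the scaling families discussed in the abstract (conventions and
% prefactors vary across the literature).  Re_* = u_* z_0 / \nu is the
% roughness Reynolds number, z_{0s} the scalar roughness length, and
% kB^{-1} = \ln(z_0/z_{0s}).
\begin{align*}
  kB^{-1} &\;\propto\; Re_*^{1/4} && \text{(``1/4'' scaling, Brutsaert-type)} \\
  kB^{-1} &\;\propto\; Re_*^{1/2} && \text{(``1/2'' scaling, Zilitinkevich-type)} \\
  kB^{-1} &\;\propto\; Re_*^{1}   && \text{(the recently proposed ``1'' scaling)}
\end{align*}
```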

  10. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
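    For reference, the uniprocessor baseline the paper compares against is the Thomas algorithm for tridiagonal systems; a minimal sketch is given below. The parallel partitioning/substructuring solver and the CRWENO discretization themselves are not reproduced, and the test system is an arbitrary diagonally dominant example.

```python
# Compact schemes such as CRWENO yield tridiagonal systems.  This is the
# serial Thomas algorithm used as the single-processor baseline; the parallel
# partitioning/substructuring solver itself is not reproduced here.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant test system (as in compact schemes): verify the solve.
n = 6
a = np.full(n, 1.0); a[0] = 0.0              # sub-diagonal (a[0] unused)
c = np.full(n, 1.0); c[-1] = 0.0             # super-diagonal (c[-1] unused)
b = np.full(n, 4.0)                          # main diagonal
x_true = np.arange(1.0, n + 1.0)
d = b * x_true + a * np.r_[0.0, x_true[:-1]] + c * np.r_[x_true[1:], 0.0]
print(np.allclose(thomas(a, b, c, d), x_true))
```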

  11. MIMO transmit scheme based on morphological perceptron with competitive learning.

    Science.gov (United States)

    Valente, Raul Ambrozio; Abrão, Taufik

    2016-08-01

    This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as a decision rule in the MIMO detection stage. The proposed MIMO transmission scheme is able to achieve double spectral efficiency; hence, in each time-slot the receiver decodes two symbols at a time instead of one as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its polynomial complexity in the modulation order, which becomes linear when the data stream length is greater than the modulation order. The performance of the proposed scheme is compared to traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. The proposed scheme is also evaluated in a scenario with variable channel information along the frame. Numerical results have shown that the diversity gain of the space-time-coded Alamouti scheme is partially lost, which slightly degrades the bit-error rate (BER) performance of the proposed MP/CL-NN MIMO scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.
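    For context, the sketch below shows the classical 2x1 Alamouti space-time block code that the proposed scheme is compared against: two symbols are encoded over two channel uses and decoupled by linear combining at the receiver. The MP/CL neural detector itself is not reproduced, and the constellation, channel, and noise values are illustrative.

```python
# Baseline for comparison: the classical 2x1 Alamouti space-time block code
# mentioned in the record.  The proposed MP/CL neural detector is not
# reproduced; this only illustrates the scheme it is compared against.
import numpy as np

rng = np.random.default_rng(1)
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk, 2)                  # two symbols per Alamouti block

h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

# Two channel uses: (s1, s2) then (-s2*, s1*) from the two transmit antennas.
r1 = h[0] * s1 + h[1] * s2 + noise[0]
r2 = -h[0] * np.conj(s2) + h[1] * np.conj(s1) + noise[1]

# Linear combining decouples the two symbols (scaled by the channel gain).
gain = np.abs(h[0])**2 + np.abs(h[1])**2
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / gain
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / gain
print(np.round(s1_hat - s1, 3), np.round(s2_hat - s2, 3))
```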

  12. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may

  13. On doublet composite schemes of leptons and quarks

    International Nuclear Information System (INIS)

    Pirogov, Yu.F.

    1981-01-01

    All of the simplest doublet composite schemes are classified. Four different doublet schemes are shown to be available. A new scheme with the charge doublet Q = (2/3, -1/3), rather advantageous compared with the previous ones, is considered. Some difficulties in interpreting the colour as an effective symmetry are pointed out [ru]

  14. New analytic unitarization schemes

    International Nuclear Information System (INIS)

    Cudell, J.-R.; Predazzi, E.; Selyugin, O. V.

    2009-01-01

    We consider two well-known classes of unitarization of Born amplitudes of hadron elastic scattering. The standard class, which saturates at the black-disk limit, includes the standard eikonal representation, while the other class, which goes beyond the black-disk limit to reach the full unitarity circle, includes the U-matrix. It is shown that the basic properties of these schemes are independent of the functional form used for the unitarization, and that the U-matrix and eikonal schemes can be extended to have similar properties. A common form of unitarization interpolating between both classes is proposed. The correspondence with different nonlinear equations is also briefly examined.

  15. Canonical, stable, general mapping using context schemes.

    Science.gov (United States)

    Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict

    2015-11-15

    Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrarily complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. anovak@soe.ucsc.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
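    A toy sketch of the underlying idea (not the authors' algorithm or data structures): a query base is mapped only if some context substring around it occurs exactly once in the reference, mirroring the "unique best mapping" criterion. The sequences and the context-growth strategy are invented for illustration.

```python
# Illustrative toy (not the authors' algorithm): map a query base to the
# reference only if some context substring around it occurs exactly once in
# the reference, which is the spirit of mapping "only when there is a unique
# best mapping".
def map_base(reference, query, q_pos, max_context=10):
    for w in range(1, max_context + 1):               # grow context symmetrically
        lo, hi = max(0, q_pos - w), min(len(query), q_pos + w + 1)
        context = query[lo:hi]
        hits = [i for i in range(len(reference) - len(context) + 1)
                if reference[i:i + len(context)] == context]
        if len(hits) == 1:                             # unique: map unambiguously
            return hits[0] + (q_pos - lo)
        if not hits:                                   # no match: give up
            return None
    return None                                        # still ambiguous

reference = "ACGTACGGTTACGA"
query = "CGGTTA"
print([map_base(reference, query, i) for i in range(len(query))])
```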

  16. Reconciling estimates of the ratio of heat and salt fluxes at the ice-ocean interface

    Science.gov (United States)

    Keitzl, T.; Mellado, J. P.; Notz, D.

    2016-12-01

    The heat exchange between floating ice and the underlying ocean is determined by the interplay of diffusive fluxes directly at the ice-ocean interface and turbulent fluxes away from it. In this study, we examine this interplay through direct numerical simulations of free convection. Our results show that an estimation of the interface flux ratio based on direct measurements of the turbulent fluxes can be difficult because the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across the boundary layer. This approach allows us to reconcile previous estimates of the ice-ocean interface conditions. We find that the ratio of heat and salt fluxes directly at the interface is 83-100, rather than 33 as determined by previous turbulence measurements in the outer layer. This can cause errors of up to 40% in ice-ablation rates estimated from field measurements if they are based on the three-equation formulation.

  17. Robust Model Predictive Control Schemes for Tracking Setpoints

    Directory of Open Access Journals (Sweden)

    Vu Trieu Minh

    2010-01-01

    Full Text Available This paper briefly reviews the development of non-tracking robust model predictive control (RMPC) schemes for uncertain systems using linear matrix inequalities (LMIs), subject to input-saturation and softened state constraints. We then develop two new tracking-setpoint RMPC schemes, one with a common Lyapunov function and one with a zero terminal equality, subject to input-saturation and softened state constraints. The novel tracking-setpoint RMPC schemes are able to stabilize uncertain systems even when the output setpoints lead to violation of the state constraints. The state violation can be regulated by changing the value of the weighting factor. A brief comparative simulation study of the two tracking-setpoint RMPC schemes is carried out via simple examples to demonstrate the ability of the softened-state-constraint schemes. Finally, some directions for future research arising from this study are discussed.

  18. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    Science.gov (United States)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L.; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached-eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds and as much as 1.4 seconds of simulated time. Data were collected at 480 pressure tap locations, 139 of which matched a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of the location of peak RMS levels, and within 20% for the frequency and magnitude of the power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, evaluating the frequency content of the unsteady pressures, and evaluating oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.

  19. Brillouin-zone integration schemes: an efficiency study for the phonon frequency moments of the harmonic, solid, one-component plasma

    International Nuclear Information System (INIS)

    Albers, R.C.; Gubernatis, J.E.

    1981-01-01

    The efficiency of four different Brillouin-zone integration schemes, including the uniform mesh, the special-point method, the special-directions method, and the Holas method, is compared for calculating moments of the harmonic phonon frequencies of the solid one-component plasma. Very accurate values for the moments are also presented. The Holas method, for which weights and integration points can easily be generated, has roughly the same efficiency as the special-directions method, which is much superior to the uniform mesh and special-point methods for this problem.
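    As a simple illustration of what such an efficiency comparison involves, the sketch below evaluates phonon-frequency moments by uniform-mesh Brillouin-zone integration for a toy two-dimensional dispersion (not the one-component plasma of the paper) and shows their convergence as the mesh is refined.

```python
# Toy illustration of a uniform-mesh Brillouin-zone integration: phonon
# frequency moments <omega^n> for a simple 2D model dispersion (not the
# one-component plasma of the paper) computed on successively finer meshes.
import numpy as np

def dispersion(kx, ky):
    # Nearest-neighbour square-lattice model: omega^2 = 4 - 2cos(kx) - 2cos(ky)
    return np.sqrt(4.0 - 2.0 * np.cos(kx) - 2.0 * np.cos(ky))

for n_mesh in (4, 8, 16, 32):
    k = (np.arange(n_mesh) + 0.5) / n_mesh * 2.0 * np.pi   # shifted uniform mesh
    kx, ky = np.meshgrid(k, k)
    omega = dispersion(kx, ky)
    moments = {n: np.mean(omega ** n) for n in (1, 2, 4)}
    print(n_mesh, {n: round(m, 4) for n, m in moments.items()})
```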

  20. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
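    A minimal sketch of the partition-of-unity ingredient mentioned above: two overlapping subdomains on a one-dimensional grid with weights that sum to one, which is the structure on which the vector regionally additive schemes are built. The grid, overlap, and weight shape are illustrative choices, and the time-stepping schemes themselves are not reproduced.

```python
# Sketch of the partition-of-unity ingredient: two overlapping subdomains on a
# 1D grid with weight functions eta1 + eta2 = 1.  The regionally additive
# time-stepping schemes of the paper are built on such partitions; they are
# not reproduced here.
import numpy as np

x = np.linspace(0.0, 1.0, 101)
a, b = 0.45, 0.55                       # overlap region of the two subdomains

eta1 = np.clip((b - x) / (b - a), 0.0, 1.0)   # 1 on [0,a], linear decay on [a,b]
eta2 = 1.0 - eta1                              # 1 on [b,1]

assert np.allclose(eta1 + eta2, 1.0)           # partition of unity
# A field can then be split as u = eta1*u + eta2*u and each part advanced by a
# subdomain problem, which is the idea behind the vector schemes in the paper.
```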