WorldWideScience

Sample records for level set scheme

  1. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to 'visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  2. LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica

    Science.gov (United States)

    Caprio, M. A.

    2005-09-01

    LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots.

    Program summary
    Title of program: LevelScheme
    Catalogue identifier: ADVZ
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ
    Operating systems: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux
    Programming language used: Mathematica 4
    Number of bytes in distributed program, including test and documentation: 3 051 807
    Distribution format: tar.gz
    Nature of problem: Creation of level scheme diagrams. Creation of publication-quality multipart figures incorporating diagrams and plots.
    Method of solution: A set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.

  3. An efficient quantum scheme for Private Set Intersection

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server; it is one of the most fundamental problems in privacy-preserving multiparty collaborative computation. In this paper, we present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in the Cloud or in large-scale client-server networks.

  4. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
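
    As an illustration of the two routes compared above, the sketch below implements a generic two-point A + B/L^α extrapolation and an MP2-based additivity correction. It is a minimal sketch assuming hypothetical correlation energies and an illustrative global exponent α = 3; it is not the authors' code and reproduces none of their data.

```python
# Illustrative sketch of two-point basis-set extrapolation (E_L = E_CBS + B/L**alpha)
# and an MP2-based additivity correction of the kind described in the abstract.
# All energies below are hypothetical placeholders, not data from the paper.

def two_point_cbs(e_lo, e_hi, l_lo, l_hi, alpha):
    """Extrapolate a correlation energy to the basis-set limit from two
    cardinal numbers (e.g. DZ/TZ) using E_L = E_CBS + B / L**alpha."""
    b = (e_lo - e_hi) / (l_lo**-alpha - l_hi**-alpha)
    return e_hi - b * l_hi**-alpha

def mp2_additivity(e_ccsd_small, e_mp2_small, e_mp2_large):
    """Additivity scheme: correct a small-basis CCSD correlation energy with
    the MP2 basis-set increment obtained in larger (or extrapolated) bases."""
    return e_ccsd_small + (e_mp2_large - e_mp2_small)

# Hypothetical correlation energies in hartree (DZ/TZ CCSD, TZ/QZ MP2)
e_ccsd_dz, e_ccsd_tz = -0.3501, -0.3702
e_mp2_tz, e_mp2_qz = -0.3555, -0.3618

print(two_point_cbs(e_ccsd_dz, e_ccsd_tz, 2, 3, alpha=3.0))   # global-alpha extrapolation
print(mp2_additivity(e_ccsd_tz, e_mp2_tz, e_mp2_qz))          # MP2-based additivity
```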

  5. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    International Nuclear Information System (INIS)

    Spackman, Peter R.; Karton, Amir

    2015-01-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.

  6. Gamma spectrometry; level schemes

    International Nuclear Information System (INIS)

    Blachot, J.; Bocquet, J.P.; Monnand, E.; Schussler, F.

    1977-01-01

    The research presented dealt with: a new beta emitter, an isomer of 131Sn; the 136I levels fed through the radioactive decay of 136Te (20.9 s); the A=145 chain (β decay of Ba, La and Ce, and level schemes for 145La, 145Ce, 145Pr); and the A=147 chain (β decay of La and Ce, and the level schemes of 147Ce and 147Pr). [fr]

  7. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights:
    • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT.
    • The multi-phase level set model for conductivity is represented by two level set functions.
    • LSM handles topological merging and breaking naturally during the evolution process.
    • To reduce the computational time, a narrowband technique was applied.
    • Use of the narrowband technique and an optimization approach results in an efficient and fast method.

    Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between the regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance when compared to the pixel-based method.

  8. Numerical simulation of interface movement in gas-liquid two-phase flows with Level Set method

    International Nuclear Information System (INIS)

    Li Huixiong; Chinese Academy of Sciences, Beijing; Deng Sheng; Chen Tingkuan; Zhao Jianfu; Wang Fei

    2005-01-01

    Numerical simulation of gas-liquid two-phase flow and heat transfer has been an attractive research topic for quite a long time, but it remains a knotty difficulty due to the inherent complexities of gas-liquid two-phase flow resulting from the existence of moving interfaces with topology changes. This paper reports the effort and the latest advances made by the authors, with special emphasis on methods for computing solutions to the advection equation of the level set function, which is utilized to capture the moving interfaces in gas-liquid two-phase flows. Three different schemes, i.e. a simple finite difference scheme, the Superbee-TVD scheme and the fifth-order WENO scheme, in combination with the Runge-Kutta method, are applied to solve the advection equation of the level set. A numerical procedure based on the well-verified SIMPLER method is employed to numerically solve the momentum equations of the two-phase flow. The above-mentioned three schemes are employed to simulate the movement of four typical interfaces under five typical flow conditions. Analysis of the numerical results shows that the fifth-order WENO scheme and the Superbee-TVD scheme are much better than the simple finite difference scheme, and the fifth-order WENO scheme is the best for computing solutions to the advection equation of the level set. The fifth-order WENO scheme will be employed as the main scheme for solving the advection equation of the level set when gas-liquid two-phase flows are numerically studied in the future. (authors)
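
    As a minimal illustration of the kind of solver compared above, the sketch below advects a one-dimensional level set function with a first-order upwind scheme and explicit Euler stepping; the Superbee-TVD and fifth-order WENO/Runge-Kutta variants studied in the paper differ only in how the spatial derivative and the time integration are evaluated. The grid, velocity and step count are arbitrary choices for the example.

```python
import numpy as np

# Minimal 1D sketch: advect a level set function phi with a constant velocity u
# using a first-order upwind difference (the simplest of the compared schemes)
# and explicit Euler time stepping.

nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
u = 1.0                      # advection velocity (arbitrary)
dt = 0.5 * dx / abs(u)       # CFL-limited time step
phi = x - 0.3                # signed distance to an interface at x = 0.3

for _ in range(100):
    if u > 0:
        dphidx = (phi - np.roll(phi, 1)) / dx    # backward difference
    else:
        dphidx = (np.roll(phi, -1) - phi) / dx   # forward difference
    phi = phi - dt * u * dphidx

# The zero crossing of phi marks the advected interface position.
interface = x[np.argmin(np.abs(phi))]
print(f"interface near x = {interface:.3f}")     # expected near 0.3 + u * 100 * dt
```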

  9. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Aerospace Engineering

    2014-01-08

    We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
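
    The pairwise sign-voting step described above can be sketched as follows; the sign convention, the dictionary keyed by material pairs and the argmax tie-break are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# Sketch of a sign-voting step for pairwise level sets.
# Convention assumed here: phi[(i, j)] > 0 means "material i side",
# phi[(i, j)] < 0 means "material j side".

def vote_materials(phi_pairs, n_materials, shape):
    votes = np.zeros((n_materials,) + shape)
    for (i, j), phi in phi_pairs.items():
        votes[i] += (phi > 0)
        votes[j] += (phi < 0)
    # Each cell is assigned to the material collecting the most votes.
    return np.argmax(votes, axis=0)

# Three materials on a 1D grid need at most 3*(3-1)/2 = 3 pairwise level sets.
x = np.linspace(0.0, 1.0, 11)
phi_pairs = {
    (0, 1): 0.4 - x,   # material 0 left of x = 0.4, material 1 to the right
    (1, 2): 0.7 - x,   # material 1 left of x = 0.7, material 2 to the right
    (0, 2): 0.4 - x,   # pairs that never share an interface may be redundant
}
print(vote_materials(phi_pairs, 3, x.shape))
```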

  10. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case
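
    For orientation, the conservative level set formulation that this work builds on (Olsson and Kreiss, with the interface-normal re-initialization used in later accurate conservative level set variants) can be summarized as follows; the notation is generic rather than copied from the paper.

```latex
% Hyperbolic-tangent level set profile with interface half-thickness \epsilon
\psi = \frac{1}{2}\left[\tanh\!\left(\frac{\phi}{2\epsilon}\right) + 1\right],
\qquad \phi = \text{signed distance to the interface}
% Conservative transport by a divergence-free velocity field u
\frac{\partial \psi}{\partial t} + \nabla\cdot(\mathbf{u}\,\psi) = 0
% Re-initialization (pseudo-time \tau) restoring the profile while conserving \int\psi
\frac{\partial \psi}{\partial \tau}
  + \nabla\cdot\bigl[\psi(1-\psi)\,\hat{\mathbf{n}}\bigr]
  = \nabla\cdot\bigl[\epsilon\,(\nabla\psi\cdot\hat{\mathbf{n}})\,\hat{\mathbf{n}}\bigr]
```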

  11. Two-level schemes for the advection equation

    Science.gov (United States)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to preserve the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in symmetric form, where the advection operator is the half-sum of the advection operators in conservative (divergent) and non-conservative (characteristic) forms. The advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by the numerical results of a model two-dimensional problem.
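
    For readers unfamiliar with the symmetric form referred to above, it can be restated as follows; the symbols (velocity v, unknown u, time step τ, weight σ) are ours, not the paper's.

```latex
% Advection operator as the half-sum of its divergent and characteristic forms
\mathcal{A}u = \tfrac{1}{2}\bigl[\nabla\cdot(\mathbf{v}\,u) + \mathbf{v}\cdot\nabla u\bigr],
\qquad (\mathcal{A}u,\,u) = 0
\quad\text{(skew-symmetry, for suitable boundary conditions)}
% Generic two-level scheme with weight \sigma
% (\sigma = 0: explicit; \sigma = 1/2: Crank--Nicolson; \sigma = 1: fully implicit)
\frac{u^{n+1} - u^{n}}{\tau}
  + \mathcal{A}\bigl[\sigma\,u^{n+1} + (1-\sigma)\,u^{n}\bigr] = 0
```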

  12. A MEPS is a MEPS is a MEPS. Comparing Ecodesign and Top Runner schemes for setting product efficiency standards

    Energy Technology Data Exchange (ETDEWEB)

    Siderius, P.J.S. [NL Agency, Croeselaan 15, P.O. Box 8242, 3503 RE Utrecht (Netherlands); Nakagami, H. [Jyukankyo Research Institute, 3-29, Kioi-cho, Chiyoda-ku Tokyo, 102-0094 (Japan)

    2013-02-15

    Both Top Runner in Japan and Ecodesign in the European Union are schemes that set requirements on the energy efficiency (minimum efficiency performance standards, MEPS) of a variety of products. This article provides an overview of the main characteristics and results of both schemes and gives recommendations for improving them. Both schemes contribute significantly to the energy efficiency targets set by the European Commission and the Japanese government. Although it is difficult to compare the absolute levels of the requirements, comparison of the relative improvements and of the savings on household electricity consumption (11% in Japan, 16% in the EU) suggests they are in the same range. Furthermore, the time needed to set or review requirements is considerable in both schemes (between 5 and 6 years on average), and manageability will increasingly become a challenge. The appeal of the Top Runner approach is that the most efficient product (the Top Runner) sets the standard for all products at the next target year. Although the Ecodesign scheme includes the elements for a Top Runner approach, it could exploit this principle more explicitly. On the other hand, the Top Runner scheme could benefit from using a real minimum efficiency performance standard instead of a fleet average. This would make monitoring and enforcement simpler and more transparent, and would open the scheme to products where the market situation is less clear.

  13. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  14. Statistical interpretation of low energy nuclear level schemes

    Energy Technology Data Exchange (ETDEWEB)

    Egidy, T von; Schmidt, H H; Behkami, A N

    1988-01-01

    Nuclear level schemes and neutron resonance spacings yield information on level densities and level spacing distributions. A total of 75 nuclear level schemes with 1761 levels of known spins and parities was investigated. The A-dependence of the level density parameters is discussed. The spacing distributions of levels near the ground state indicate a transitional character between regular and chaotic properties, while chaos dominates near the neutron binding energy.
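
    For orientation, the reference spacing distributions conventionally used in such analyses are the Poisson (regular) and Wigner (chaotic) forms, with the Brody distribution interpolating between them; they are quoted here from standard random-matrix practice, not from the paper.

```latex
% Nearest-neighbour level-spacing distributions (unit mean spacing s)
P_{\mathrm{Poisson}}(s) = e^{-s}
\qquad\qquad
P_{\mathrm{Wigner}}(s) = \frac{\pi s}{2}\,e^{-\pi s^{2}/4}
% Brody interpolation: \omega = 0 gives Poisson (regular), \omega = 1 gives Wigner (chaotic)
P_{\omega}(s) = (\omega + 1)\,a\,s^{\omega}\,e^{-a\,s^{\omega+1}},
\qquad
a = \left[\Gamma\!\left(\frac{\omega + 2}{\omega + 1}\right)\right]^{\omega + 1}
```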

  15. On 165Ho level scheme

    International Nuclear Information System (INIS)

    Ardisson, Claire; Ardisson, Gerard.

    1976-01-01

    A 165Ho level scheme was constructed which led to the interpretation of sixty γ rays belonging to the decay of 165Dy. A new 702.9 keV level was identified as the 5/2⁻ member of the 1/2⁻[541] Nilsson orbit. [fr]

  16. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and enough robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions due to the use of a level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson's model. Some experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.

  17. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

    Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of interfaces in multi-phase flow. This paper presents a general formulation of the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow with/without phase change, as encountered in fusion science and engineering. The third-order ENO scheme and the second-order semi-implicit Crank-Nicolson scheme are used to update the convective and diffusion terms. The numerical results show that this method can handle complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work.

  18. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
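
    The generic level-set update underlying such shape-optimization iterations has the form below; here V is the normal velocity built from the shape derivative of the misfit functional J, and the notation is ours rather than the paper's.

```latex
% Level-set evolution of the facies region \Omega(t) = \{x : \phi(x,t) < 0\},
% driven by a normal velocity V derived from the shape derivative of the misfit J
\frac{\partial \phi}{\partial t} + V\,\lvert\nabla\phi\rvert = 0
% If the shape derivative is dJ(\Omega)[V] = \int_{\partial\Omega} g\,V\,\mathrm{d}s,
% the gradient-based choice V = -g makes J non-increasing along the evolution:
\frac{\mathrm{d}}{\mathrm{d}t} J(\Omega(t))
  = \int_{\partial\Omega} g\,V\,\mathrm{d}s
  = -\int_{\partial\Omega} g^{2}\,\mathrm{d}s \;\le\; 0
```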

  19. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  20. GRAP, Gamma-Ray Level-Scheme Assignment

    International Nuclear Information System (INIS)

    Franklyn, C.B.

    2002-01-01

    1 - Description of program or function: An interactive program for allocating gamma-rays to an energy level scheme. The procedure allows searching for new candidate levels of the form: 1) L1 + G(A) + G(B) = L2; 2) G(A) + G(B) = G(C); 3) G(A) + G(B) = C (C is a user-defined number); 4) L1 + G(A) + G(B) + G(C) = L2. The procedure indicates the intensity balance of feed and decay of each energy level, and provides for optimization of a level energy (and its associated error). The overall procedure allows for pre-defining certain gamma-rays as belonging to particular regions of the level scheme, for example high-energy transition levels, or those due to beta⁻ decay. 2 - Method of solution: Search for cases in which the energy difference between two energy levels is equal to a gamma-ray energy within user-defined limits. 3 - Restrictions on the complexity of the problem: Maximum number of gamma-rays: 999; Maximum gamma-ray energy: 32000 units; Minimum gamma-ray energy: 10 units; Maximum gamma-ray intensity: 32000 units; Minimum gamma-ray intensity: 0.001 units; Maximum number of levels: 255; Maximum level energy: 32000 units; Minimum level energy: 10 units; Maximum error on energy, intensity: 32 units; Minimum error on energy, intensity: 0.001 units; Maximum number of combinations: ca. 6400; Maximum number of gamma-ray types: 127
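
    The placement search described under "Method of solution" amounts to flagging every pair of levels whose energy difference matches a gamma-ray energy within a user-defined limit; a minimal sketch with invented energies and tolerance is given below.

```python
from itertools import combinations

# Minimal sketch of the placement search: flag every pair of levels whose
# energy difference matches a gamma-ray energy within a user-defined tolerance.
# Energies and tolerance below are invented numbers for illustration.

levels = [0.0, 121.8, 244.7, 344.3]       # level energies (keV), hypothetical
gammas = [121.8, 122.9, 222.5, 344.3]     # gamma-ray energies (keV), hypothetical
tolerance = 0.5                           # matching limit (keV), user-defined

placements = []
for lo, hi in combinations(sorted(levels), 2):
    for eg in gammas:
        if abs((hi - lo) - eg) <= tolerance:
            placements.append((eg, lo, hi))

for eg, lo, hi in placements:
    print(f"gamma {eg:7.1f} keV fits transition {hi:7.1f} -> {lo:7.1f} keV")
```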

  1. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.

  2. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies

  3. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

    Several methods exist to estimate evaporation (E) and transpiration (T). The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day. A paired t-test of the mean difference (p-value = 0.042, t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R² value of 0.82.

  4. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  5. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level set approach is efficient, robust, and easy to implement.

  6. Setting aside transactions from pyramid schemes as impeachable ...

    African Journals Online (AJOL)

    These schemes, which are often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find ...

  7. Healthy incentive scheme in the Irish full-day-care pre-school setting.

    LENUS (Irish Health Repository)

    Molloy, C Johnston

    2013-12-16

    A pre-school offering a full-day-care service provides for children aged 0-5 years for more than 4 h/d. Researchers have called for studies that will provide an understanding of nutrition and physical activity practices in this setting. Obesity prevention in pre-schools, through the development of healthy associations with food and health-related practices, has been advocated. While guidelines for the promotion of best nutrition and health-related practice in the early years' setting exist in a number of jurisdictions, associated regulations have been noted to be poor, with the environment of the child-care facility mainly evaluated for safety. Much cross-sectional research outlines poor nutrition and physical activity practice in this setting. However, there are few published environmental and policy-level interventions targeting the child-care provider with, to our knowledge, no evidence of such interventions in Ireland. The aim of the present paper is to review international guidelines and recommendations relating to health promotion best practice in the pre-school setting: service and resource provision; food service and food availability; and the role and involvement of parents in pre-schools. Intervention programmes and assessment tools available to measure such practice are outlined; and insight is provided into an intervention scheme, formulated from available best practice, that was introduced into the Irish full-day-care pre-school setting.

  8. Schemes for Probabilistic Teleportation of an Unknown Three-Particle Three-Level Entangled State

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, two schemes for teleporting an unknown three-particle three-level entangled state are proposed. In the first scheme, two partial three-particle three-level entangled states are used as the quantum channels, while in the second scheme, three two-particle three-level non-maximally entangled states are employed as quantum channels. It is shown that the teleportation can be successfully realized with a certain probability, for both schemes, if the receiver adopts appropriate unitary transformations. It is also shown that the success probabilities of the two schemes are different.

  9. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  10. Four-level conservative finite-difference schemes for Boussinesq paradigm equation

    Science.gov (United States)

    Kolkovska, N.

    2013-10-01

    In this paper a two-parameter family of four-level conservative finite difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed for evaluation of the numerical solution. The preservation of the discrete energy with this method is proved. The schemes have been numerically tested on a one-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second order of convergence in the space and time steps in the discrete maximum norm.

  11. Adopting the EU sustainable performance scheme Level(s) in the Danish building sector

    DEFF Research Database (Denmark)

    Kanafani, Kai; Rasmussen, Freja Nygaard; Zimmermann, Regitze Kjær

    2018-01-01

    to life cycle assessment (LCA) requirements within the Level(s) scheme. As a measure of the Danish building sector's LCA practice, the specifications for LCAbyg, the official Danish building LCA tool, are used. In 2017, the European Commission's Joint Research Centre launched Level(s) as a vo...

  12. ESCL8R and LEVIT8R: interactive graphical analysis of γ-γ and γ-γ-γ coincidence data for level schemes

    Energy Technology Data Exchange (ETDEWEB)

    Radford, D C [Atomic Energy of Canada Ltd., Chalk River, ON (Canada). Chalk River Nuclear Labs.

    1992-08-01

    The extraction of complete and consistent nuclear level schemes from high-fold coincidence data will require intelligent computer programs. These will need to present the relevant data in an easily assimilated manner, keep track of all γ-ray assignments and expected coincidence intensities, and quickly find significant discrepancies between a proposed level scheme and the data. Some steps in this direction have been made at Chalk River. The programs ESCL8R and LEVIT8R, for analysis of two-fold and three-fold data sets respectively, allow fast and easy inspection of the data, and compare the results to expectations calculated on the basis of a proposed level scheme. Least-squares fits directly to the 2D and/or 3D data, with the intensities and energies of the level scheme transitions as parameters, allow fast and easy extraction of the optimum physics results. (author). 4 refs., 3 figs.

  13. Revisiting the level scheme of the proton emitter 151Lu

    International Nuclear Information System (INIS)

    Wang, F.; Sun, B.H.; Liu, Z.; Scholey, C.; Eeckhaudt, S.; Grahn, T.; Greenlees, P.T.; Jones, P.; Julin, R.; Juutinen, S.; Kettelhut, S.; Leino, M.; Nyman, M.; Rahkila, P.; Saren, J.; Sorri, J.; Uusitalo, J.; Ashley, S.F.; Cullen, I.J.; Garnsworthy, A.B.; Gelletly, W.; Jones, G.A.; Pietri, S.; Podolyak, Z.; Steer, S.; Thompson, N.J.; Walker, P.M.; Williams, S.; Bianco, L.; Darby, I.G.; Joss, D.T.; Page, R.D.; Pakarinen, J.; Rigby, S.; Cullen, D.M.; Khan, S.; Kishada, A.; Gomez-Hornillos, M.B.; Simpson, J.; Jenkins, D.G.; Niikura, M.; Seweryniak, D.; Shizuma, Toshiyuki

    2015-01-01

    An experiment aiming to search for new isomers in the region of the proton emitter 151Lu was performed at the Accelerator Laboratory of the University of Jyväskylä (JYFL), combining the high-resolution γ-ray array JUROGAM, the gas-filled RITU separator and the GREAT detectors with the triggerless total data readout (TDR) acquisition system. In this proceeding, we revisit the level scheme of 151Lu using the proton-tagging technique. A level scheme consistent with the latest experimental results is obtained, and three additional levels are identified at high excitation energies. (author)

  14. Performance of a Two-Level Call Admission Control Scheme for DS-CDMA Wireless Networks

    Directory of Open Access Journals (Sweden)

    Fapojuwo Abraham O

    2007-01-01

    We propose a two-level call admission control (CAC) scheme for direct sequence code division multiple access (DS-CDMA) wireless networks supporting multimedia traffic and evaluate its performance. The first-level admission control assigns higher priority to real-time calls (also referred to as class 0 calls) in gaining access to the system resources. The second level admits non-real-time calls (or class 1 calls) based on the resources remaining after meeting the resource needs for real-time calls. However, to ensure some minimum level of performance for non-real-time calls, the scheme reserves some resources for such calls. The proposed two-level CAC scheme utilizes the delay-tolerant characteristic of non-real-time calls by incorporating a queue to temporarily store those that cannot be assigned resources at the time of initial access. We analyze and evaluate the call blocking, outage probability, throughput, and average queuing delay performance of the proposed two-level CAC scheme using Markov chain theory. The analytic results are validated by simulation results. The numerical results show that the proposed two-level CAC scheme provides better performance than the single-level CAC scheme. Based on these results, it is concluded that the proposed two-level CAC scheme serves as a good solution for supporting multimedia applications in DS-CDMA wireless communication systems.
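
    The admission logic described above can be paraphrased in a few lines; the single pool of abstract resource units, the reservation size and the queueing rule below are illustrative assumptions, not the resource model analyzed in the paper.

```python
from collections import deque

# Illustrative sketch of a two-level call admission control decision.
# TOTAL, RESERVED_NRT and the per-call costs are hypothetical resource units;
# the queue for delayed class 1 (non-real-time) calls mirrors the scheme's
# exploitation of their delay tolerance.

TOTAL = 100          # total system resource units (assumed)
RESERVED_NRT = 10    # units reserved for non-real-time (class 1) calls (assumed)

used = 0
nrt_queue = deque()

def admit(call_class, cost):
    """Return True if the call is admitted immediately."""
    global used
    if call_class == 0:
        # Real-time calls may use everything except the class 1 reservation.
        if used + cost <= TOTAL - RESERVED_NRT:
            used += cost
            return True
        return False                    # blocked
    else:
        # Non-real-time calls use whatever remains, else wait in the queue.
        if used + cost <= TOTAL:
            used += cost
            return True
        nrt_queue.append(cost)          # delayed, served when resources free up
        return False

print(admit(0, 30), admit(1, 60), admit(1, 20))   # -> True True False (queued)
```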

  15. Performance of a Two-Level Call Admission Control Scheme for DS-CDMA Wireless Networks

    Directory of Open Access Journals (Sweden)

    Abraham O. Fapojuwo

    2007-11-01

    We propose a two-level call admission control (CAC) scheme for direct sequence code division multiple access (DS-CDMA) wireless networks supporting multimedia traffic and evaluate its performance. The first-level admission control assigns higher priority to real-time calls (also referred to as class 0 calls) in gaining access to the system resources. The second level admits non-real-time calls (or class 1 calls) based on the resources remaining after meeting the resource needs for real-time calls. However, to ensure some minimum level of performance for non-real-time calls, the scheme reserves some resources for such calls. The proposed two-level CAC scheme utilizes the delay-tolerant characteristic of non-real-time calls by incorporating a queue to temporarily store those that cannot be assigned resources at the time of initial access. We analyze and evaluate the call blocking, outage probability, throughput, and average queuing delay performance of the proposed two-level CAC scheme using Markov chain theory. The analytic results are validated by simulation results. The numerical results show that the proposed two-level CAC scheme provides better performance than the single-level CAC scheme. Based on these results, it is concluded that the proposed two-level CAC scheme serves as a good solution for supporting multimedia applications in DS-CDMA wireless communication systems.

  16. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    We use a variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme consists of two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional-order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional-order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.

  17. Sensor data security level estimation scheme for wireless sensor networks.

    Science.gov (United States)

    Ramos, Alex; Filho, Raimir Holanda

    2015-01-19

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.

  18. Carrier-based modulation schemes for various three-level matrix converters

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Loh, P.C.; Rong, R.C.

    2008-01-01

    different performance merits. To avoid confusion and hence hasten the converter applications in the industry, it would surely be better for modulation schemes to be developed from a common set of modulation principles that unfortunately has not yet been thoroughly defined. Contributing to that area... a limited set of switching vectors because of its lower semiconductor count. Through simulation and experimental testing, all the evaluated matrix converters are shown to produce satisfactory sinusoidal input and output quantities using the same set of generic modulation principles, which can conveniently

  19. Multi-domain, higher order level set scheme for 3D image segmentation on the GPU

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2010-01-01

    to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...

  20. Sensor Data Security Level Estimation Scheme for Wireless Sensor Networks

    Science.gov (United States)

    Ramos, Alex; Filho, Raimir Holanda

    2015-01-01

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates. PMID:25608215

  1. Sensor Data Security Level Estimation Scheme for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Alex Ramos

    2015-01-01

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.

  2. Numerical simulations of natural or mixed convection in vertical channels: comparisons of level-set numerical schemes for the modeling of immiscible incompressible fluid flows

    International Nuclear Information System (INIS)

    Li, R.

    2012-01-01

    The aim of this research dissertation is to study natural and mixed convection of fluid flows, and to develop and validate numerical schemes for interface tracking in order to later treat incompressible and immiscible fluid flows. In a first step, an original numerical method, based on Finite Volume discretizations, is developed for modeling low Mach number flows with large temperature differences. Three physical applications of air flowing through heated vertical parallel plates were investigated. We showed that the optimum spacing corresponding to the peak heat flux transferred from an array of isothermal parallel plates cooled by mixed convection is smaller than that for natural or forced convection when the pressure drop at the outlet is kept constant. We also proved that mixed convection flows resulting from an imposed flow rate may exhibit unexpected physical solutions; an alternative model based on a prescribed total pressure at the inlet and a fixed pressure at the outlet sections gives more realistic results. For channels heated by a heat flux on one wall only, surface radiation tends to suppress the onset of re-circulations at the outlet and to unify the wall temperatures. In a second step, the mathematical model coupling the incompressible Navier-Stokes equations and the level set method for interface tracking is derived. Improvements in fluid volume conservation by using high-order discretization (ENO-WENO) schemes for the transport equation and variants of the signed distance equation are discussed. (author)

  3. A progressive diagonalization scheme for the Rabi Hamiltonian

    International Nuclear Information System (INIS)

    Pan, Feng; Guan, Xin; Wang, Yin; Draayer, J P

    2010-01-01

    A diagonalization scheme for the Rabi Hamiltonian, which describes a qubit interacting with a single-mode radiation field via a dipole interaction, is proposed. It is shown that the Rabi Hamiltonian can be solved almost exactly using a progressive scheme that involves a finite set of one-variable polynomial equations. The scheme is especially efficient for the lower part of the spectrum. Some low-lying energy levels of the model with several sets of parameters are calculated and compared to those provided by the recently proposed generalized rotating-wave approximation and a full matrix diagonalization.

  4. Riemann-problem and level-set approaches for two-fluid flow computations I. Linearized Godunov scheme

    NARCIS (Netherlands)

    B. Koren (Barry); M.R. Lewis; E.H. van Brummelen (Harald); B. van Leer

    2001-01-01

    A finite-volume method is presented for the computation of compressible flows of two immiscible fluids at very different densities. The novel ingredient in the method is a two-fluid linearized Godunov scheme, allowing for flux computations in case of different fluids (e.g., water and

  5. Single particle level scheme for alpha decay

    International Nuclear Information System (INIS)

    Mirea, M.

    1998-01-01

    The fine structure phenomenon in alpha decay was evidenced by Rosenblum. In this process the kinetic energy of the emitted particle has several determined values related to the structure of the parent and the daughter nucleus. The probability of finding the daughter in a low-lying state was considered to depend strongly on the spectroscopic factor, defined as the square of the overlap between the wave function of the parent in the ground state and the wave functions of the specific excited states of the daughter. This treatment provides a qualitative agreement with the experimental results if the variations of the penetrability between different excited states are neglected. Based on the single-particle structure during fission, a new formalism explained quantitatively the fine structure of cluster decay. It was suggested that this formalism can also be applied to alpha decay. For this purpose, the first step is to construct the level scheme of this type of decay. Such a scheme, obtained with the super-asymmetric two-center potential, is plotted for the alpha decay of 223 Ra. It is interesting to note that, diabatically, the level with spin 3/2 emerging from 1i 11/2 (ground state of the parent) reaches an excited state of the daughter in agreement with the experiment. (author)

  6. A Robust Control Scheme for Medium-Voltage-Level DVR Implementation

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Loh, Poh Chiang; Li, Yun Wei

    2007-01-01

    of H∞ controller weighting function selection, inner current loop tuning, and system disturbance rejection capability is presented. Finally, the designed control scheme is extensively tested on a laboratory 10-kV MV-level DVR system with varying voltage sag (balanced and unbalanced) and loading (linear....../nonlinear load and induction motor load) conditions. It is shown that the proposed control scheme is effective in both balanced and unbalanced sag compensation and load disturbance rejection, as its robustness is explicitly specified....

  7. Macro-level integrated renewable energy production schemes for sustainable development

    International Nuclear Information System (INIS)

    Subhadra, Bobban G.

    2011-01-01

    The production of renewable clean energy is a prime necessity for the sustainable future existence of our planet. However, because of the resource-intensive nature, and other challenges associated with these new generation renewable energy sources, novel industrial frameworks need to be co-developed. Integrated renewable energy production schemes with foundations on resource sharing, carbon neutrality, energy-efficient design, source reduction, green processing plans, and the anthropogenic use of waste resources for the production of green energy, along with the production of raw material for allied food and chemical industries, are imperative for the sustainable development of this sector, especially in an emission-constrained future industrial scenario. To attain these objectives, the scope of hybrid renewable production systems and integrated renewable energy industrial ecology is briefly described. Further, the principles of the Integrated Renewable Energy Park (IREP) approach, an example of macro-level energy production, and its benefits and global applications are also explored. - Research highlights: → Discusses the need for macro-level renewable energy production schemes. → Scope of hybrid and integrated industrial ecology for renewable energy production. → Integrated Renewable Energy Parks (IREPs): A macro-level energy production scheme. → Discusses the principal foundations and global applications of IREPs. → Describes the significance of IREPs in the carbon-neutral future business arena.

  8. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    Full Text Available In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical representations (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradients and Hessians) across multiple terms are eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a
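
    The container-based idea described above can be illustrated with a few lines of Python; this is a hedged sketch of the design pattern only, not the ITK v4 API, and the two terms below are crude stand-ins for real curvature and propagation terms.

```python
# Sketch of a "linked container" of level-set terms: the update is the sum of
# interchangeable term callables, so terms can be added or removed freely.
import numpy as np

def curvature_term(phi, weight=0.2):
    # crude Laplacian smoothing as a stand-in for a curvature term
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    return weight * lap

def propagation_term(phi, speed=1.0):
    gx, gy = np.gradient(phi)
    return -speed * np.sqrt(gx**2 + gy**2)   # outward front propagation

terms = [curvature_term, propagation_term]   # the pluggable container of PDE terms

def evolve(phi, dt=0.1, steps=50):
    for _ in range(steps):
        phi = phi + dt * sum(term(phi) for term in terms)
    return phi

y, x = np.mgrid[0:64, 0:64]
phi = evolve(np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0)  # circle of radius 10
```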

  9. High-spin level scheme of odd-odd 142Pm

    International Nuclear Information System (INIS)

    Liu Minliang; Zhang Yuhu; Zhou Xiaohong; He Jianjun; Guo Yingxiang; Lei Xiangguo; Huang Wenxue; Liu Zhong; Luo Yixiao; Feng Xichen; Zhang Shuangquan; Xu Xiao; Zheng Yong; Luo Wanju

    2002-01-01

    The level structure of the doubly odd nucleus 142 Pm has been studied via the 128 Te( 19 F, 5nγ) 142 Pm reaction in the energy region from 75 to 95 MeV. In-beam γ rays were measured in the experiment, including the excitation function, γ-ray singles and γ-γ coincidences. The level scheme of 142 Pm has been extended up to an excitation energy of 7030.0 keV, including 25 new γ rays and 13 new levels. Based on the measured γ-ray anisotropies, the level spins in 142 Pm have been suggested.

  10. Soft rotator model and 246Cm low-lying level scheme

    Energy Technology Data Exchange (ETDEWEB)

    Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)

    1997-03-01

    A non-axial soft rotator nuclear model is suggested as a self-consistent approach for the interpretation of level schemes, γ-transition probabilities and neutron interactions with even-even nuclei. (author)

  11. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of a dielectric cylinder, based on combining the level set technique and the conjugate gradient algorithm. By measuring the scattered field, we try to retrieve the shape, localisation and permittivity of the object. The forward problem is solved by the moment method, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. It is found that the proposed method is able to give good reconstruction quality in terms of the reconstructed shape and permittivity.

  12. Voltage protection scheme for MG sets used to drive inductive energy storage systems

    International Nuclear Information System (INIS)

    Campen, G.L.; Easter, R.B.

    1977-01-01

    A recent tokamak proposal at ORNL called for MG (motor-generator) sets to drive the ohmic heating (OH) coil, which was to be subjected to 20 kV immediately after coil charge-up to initiate the experiment. Since most rotating machinery is inherently low voltage, including the machines available at ORNL, a mechanism was necessary to isolate the generators from the high voltage portions of the circuit before the appearance of this voltage. It is not the expected 20 kV at the coil that causes difficulty, because the main interrupting switch handles this. The voltage induced in the armature due to di/dt and the possibility of faults are the greatest causes for concern and are responsible for the complexity of the voltage protection scheme, which must accommodate any possible combination of fault time and location. Such a protection scheme is presented in this paper.

  13. New data on excited level scheme of 73Ge nucleus

    International Nuclear Information System (INIS)

    Kosyak, Yu.G.; Kaipov, D.K.; Chekushina, L.V.

    1990-01-01

    New data on the level scheme of 73 Ge, obtained by the method of reactor fast neutron inelastic scattering, are presented. γ-Spectra from the reaction 73 Ge(n, n'γ) 73 Ge at angles of 90 and 124 deg relative to the incident neutron beam have been measured. Experimental populations of the levels are studied. 29 new γ-transitions have been identified and two new levels have been introduced.

  14. A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.

    Science.gov (United States)

    Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing

    2017-08-23

    Security analysis is a very important issue for digital watermarking. Several years ago, according to Kerckhoffs' principle, the famous four security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of watermarked-only attack. However, up to now there has been little application of the definition of these security levels to the theoretical analysis of the security of SS embedding schemes, due to the difficulty of the theoretical analysis. In this paper, based on the security definition, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes, namely the classical SS, the improved SS (ISS), the circular extension of ISS, and the nonrobust and robust natural watermarking. The theoretical analysis of these typical SS schemes is successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.
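
    For orientation, the baseline among the analysed schemes, classical additive spread-spectrum embedding with correlation decoding, can be sketched as follows; the ISS and natural-watermarking variants modify this embedding rule and are not reproduced here, and all parameters below are illustrative only.

```python
# Minimal sketch of classical spread-spectrum (SS) embedding, the simplest of
# the five schemes discussed above.
import numpy as np

rng = np.random.default_rng(42)
host = rng.normal(0.0, 1.0, 1024)          # host signal (e.g., transform coefficients)
carrier = rng.choice([-1.0, 1.0], 1024)    # secret spreading pattern (the key)
bit = 1                                    # message bit in {-1, +1}
alpha = 0.5                                # embedding strength

watermarked = host + alpha * bit * carrier            # classical SS embedding
decoded_bit = np.sign(np.dot(watermarked, carrier))   # correlation detector
```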

  15. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    Science.gov (United States)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a 5-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F≤f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
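
    A much-simplified sketch of this kind of pipeline (smoothing, an edge-based speed function, then a geodesic active contour) can be written with scikit-image; the example below runs on a synthetic 2D image and is not the authors' implementation or parameter set.

```python
# Hedged sketch: denoise, build an edge-based speed image, then evolve a
# (morphological) geodesic active contour from a rough initial region.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import (inverse_gaussian_gradient,
                                   morphological_geodesic_active_contour)

y, x = np.mgrid[0:128, 0:128]
organ = (((x - 64) / 40.0)**2 + ((y - 64) / 25.0)**2 < 1.0).astype(float)
image = gaussian(organ + 0.1 * np.random.rand(128, 128), sigma=2)  # smoothed, noisy

speed = inverse_gaussian_gradient(image)   # small near edges, close to 1 elsewhere
init = np.zeros(image.shape, dtype=np.int8)
init[20:108, 16:112] = 1                   # generous initial region around the organ

segmentation = morphological_geodesic_active_contour(
    speed, 200, init_level_set=init, smoothing=2, balloon=-1)  # shrink onto edges
volume_proxy = segmentation.sum()          # pixel (voxel) count as a volume proxy
```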

  16. Scheme of 2-dimensional atom localization for a three-level atom via quantum coherence

    OpenAIRE

    Zafar, Sajjad; Ahmed, Rizwan; Khan, M. Khalid

    2013-01-01

    We present a scheme for two-dimensional (2D) atom localization in a three-level atomic system. The scheme is based on quantum coherence via classical standing wave fields between the two excited levels. Our results show that conditional position probability is significantly phase dependent of the applied field and frequency detuning of spontaneously emitted photons. We obtain a single localization peak having probability close to unity by manipulating the control parameters. The effect of ato...

  17. Abolition of set-aside schemes and its impacts on habitat structure in Denmark from 2007-2008

    DEFF Research Database (Denmark)

    Levin, Gregor

    2010-01-01

    Agriculture accounts for 65% of the Danish land area. Habitats for wild species are characterized by small patches, surrounded by intensive agriculture. Due to extensive management, set-aside land can, if located close to habitats, improve habitat structure in terms of patch size and connectivity.... In 2008 set-aside schemes were abolished, leading to a decline in the area of set-aside land from 6% of all agricultural land in 2007 to 3% in 2008. We developed an indicator aiming to measure the effect of the reduced area of set-aside land on habitat structure. The indicator combines distance...... to habitats, potential corridors between habitats and area percentage of set-aside land. Analyses show that the halving of the area of set-aside land has led to a 55% reduction of the effect of set-aside land on habitat structure....

  18. Set of difference splitting schemes for solving the Navier-Stokes incompressible equations in natural variables

    International Nuclear Information System (INIS)

    Koleshko, S.B.

    1989-01-01

    A three-parametric set of difference schemes is suggested to solve Navier-Stokes equations with the use of the relaxation form of the continuity equation. The initial equations are stated for time increments. Use is made of splitting the operator into one-dimensional forms that reduce calculations to scalar factorizations. Calculated results for steady- and unsteady-state flows in a cavity are presented

  19. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time.

  20. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time.

  1. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design is becoming more and more multifaceted, integrated, and complex, the traditional single-objective optimization approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set independent of the problem instance through a multiobjective scheme. Another objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics in order to increase the certainty of reaching a global optimum solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of conflicting and multiple objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
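
    The high-level idea, a stochastic controller choosing among low-level meta-heuristic moves while a Pareto archive collects non-dominated solutions, can be sketched as follows; this toy two-objective example is not the author's scheme and uses invented operators.

```python
# Hedged sketch of a hyper-heuristic: randomly select a low-level move, apply
# it, and keep a simple archive of non-dominated (Pareto) solutions.
import random

def objectives(x):
    return (x**2, (x - 2.0)**2)            # two conflicting objectives

def random_restart(x):  return random.uniform(-4.0, 4.0)
def small_mutation(x):  return x + random.gauss(0.0, 0.1)
def large_mutation(x):  return x + random.gauss(0.0, 1.0)

low_level_moves = [random_restart, small_mutation, large_mutation]

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

archive, x = [], 0.0                        # non-dominated (x, objectives) pairs
for _ in range(2000):
    x = random.choice(low_level_moves)(x)   # stochastic high-level selection
    f = objectives(x)
    if not any(dominates(g, f) for _, g in archive):
        archive = [(xa, g) for xa, g in archive if not dominates(f, g)]
        archive.append((x, f))
# archive now approximates the Pareto front between x = 0 and x = 2.
```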

  2. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level radioactive waste (HLW) as: (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel...that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission...determines...requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph (B). The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). A basic waste classification scheme results from the quantitative definitions

  3. Abolition of set-aside schemes and its impact on habitat connectivity in Denmark from 2007 - 2008

    DEFF Research Database (Denmark)

    Levin, Gregor

    In Denmark, agriculture occupies 28,000 km² or 65% of the land. As a consequence, habitats for wild species are mainly characterized by small patches, surrounded by intensive agriculture. Due to extensive agricultural management, set-aside land can spatially connect habitats and thus positively...... affect habitat connectivity, which is of importance to the survival of wild species. In 2008 set-aside schemes were abolished, leading to a considerable re-cultivation of former set-aside land and consequently to a decline in the area of set-aside land from 6% of all agricultural land in 2007 to 3......% in 2008. The main argument against regulations of the re-cultivation of set-aside land with the aim to minimize declines in habitat-connectivity was that re-cultivation would primarily occur on highly productive land at a long distance from habitats, while set-aside land located on marginal land, close...

  4. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  5. Comparative Study on Feature Selection and Fusion Schemes for Emotion Recognition from Speech

    Directory of Open Access Journals (Sweden)

    Santiago Planet

    2012-09-01

    Full Text Available The automatic analysis of speech to detect affective states may improve the way users interact with electronic devices. However, analysis at the acoustic level alone may not be enough to determine the emotion of a user in a realistic scenario. In this paper we analyzed the spontaneous speech recordings of the FAU Aibo Corpus at the acoustic and linguistic levels to extract two sets of features. The acoustic set was reduced by a greedy procedure selecting the most relevant features to optimize the learning stage. We compared two versions of this greedy selection algorithm by performing the search of the relevant features forwards and backwards. We experimented with three classification approaches: Naïve-Bayes, a support vector machine and a logistic model tree, and two fusion schemes: decision-level fusion, merging the hard decisions of the acoustic and linguistic classifiers by means of a decision tree; and feature-level fusion, concatenating both sets of features before the learning stage. Despite the low performance achieved by the linguistic data, a dramatic improvement was obtained after its combination with the acoustic information, improving the results achieved by this second modality on its own. The results achieved by the classifiers using the parameters merged at the feature level outperformed the classification results of the decision-level fusion scheme, despite the simplicity of the scheme. Moreover, the extremely reduced set of acoustic features obtained by the greedy forward search selection algorithm improved the results provided by the full set.
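
    The two fusion schemes compared above can be mimicked on toy data in a few lines; the sketch below uses invented features and logistic-regression classifiers, with simple probability averaging standing in for the paper's decision-tree combination, so it only illustrates the difference between the two schemes.

```python
# Hedged sketch: feature-level fusion (concatenate before learning) versus
# decision-level fusion (combine the unimodal classifiers' outputs).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)
acoustic = rng.normal(labels[:, None], 1.0, (n, 20))     # stronger "acoustic" modality
linguistic = rng.normal(labels[:, None], 2.0, (n, 5))    # weaker "linguistic" modality

Xa_tr, Xa_te, Xl_tr, Xl_te, y_tr, y_te = train_test_split(
    acoustic, linguistic, labels, test_size=0.25, random_state=0)

# Feature-level fusion: concatenate both feature sets before the learning stage.
early = LogisticRegression(max_iter=1000).fit(np.hstack([Xa_tr, Xl_tr]), y_tr)
print("feature-level fusion acc:", early.score(np.hstack([Xa_te, Xl_te]), y_te))

# Decision-level fusion: merge the decisions of two unimodal classifiers.
clf_a = LogisticRegression(max_iter=1000).fit(Xa_tr, y_tr)
clf_l = LogisticRegression(max_iter=1000).fit(Xl_tr, y_tr)
p = (clf_a.predict_proba(Xa_te)[:, 1] + clf_l.predict_proba(Xl_te)[:, 1]) / 2
print("decision-level fusion acc:", ((p > 0.5).astype(int) == y_te).mean())
```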

  6. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions thus improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
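
    The conversion from gene-level to set-level features that underlies this approach is simple to illustrate; in the sketch below the expression matrix and gene-set definitions are invented placeholders, and the per-set summary is just the mean expression of member genes.

```python
# Hedged sketch of set-level feature construction: one feature per gene set,
# computed here as the mean expression of that set's member genes.
import numpy as np

expression = np.random.rand(30, 1000)      # 30 samples x 1000 genes (placeholder data)
gene_sets = {                              # hypothetical regulon-style gene sets
    "regulon_A": [0, 5, 17, 42],
    "regulon_B": [7, 8, 99, 500, 730],
}

set_features = np.column_stack(
    [expression[:, idx].mean(axis=1) for idx in gene_sets.values()])
# set_features has shape (30, number_of_gene_sets) and feeds any standard classifier.
```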

  7. An Enhanced Three-Level Voltage Switching State Scheme for Direct Torque Controlled Open End Winding Induction Motor

    Science.gov (United States)

    Kunisetti, V. Praveen Kumar; Thippiripati, Vinay Kumar

    2018-01-01

    Open End Winding Induction Motors (OEWIM) are popular for electric vehicle and ship propulsion applications due to the lower DC-link voltage required. These applications demand ripple-free torque. In this article, an enhanced three-level voltage switching state scheme for a direct torque controlled OEWIM drive is implemented to reduce torque and flux ripples. The limitations of conventional Direct Torque Control (DTC) are possible problems during low speeds and starting, operation at variable switching frequency due to hysteresis controllers, and higher torque and flux ripple. The proposed DTC scheme can abate these problems with an enhanced voltage switching state scheme. Three-level inversion was obtained by operating the inverters with equal DC-link voltages, producing 18 voltage space vectors. These 18 vectors are divided into low and high frequencies of operation based on rotor speed. The hardware results prove the validity of the proposed DTC scheme during steady state and transients. From simulation and experimental results, the proposed DTC scheme gives lower torque and flux ripples in comparison to two-level DTC. The proposed DTC is implemented using a dSPACE DS-1104 control board interfaced with a MATLAB/SIMULINK-RTI model.

  8. A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Arvind Selwal

    2016-09-01

    Full Text Available Biometrics is the science of human recognition based upon biological, chemical or behavioural traits. These systems are used in many real-life applications, ranging from simple biometric-based attendance systems to security provision at a very sophisticated level. A biometric system deals with raw data captured using a sensor and feature templates extracted from the raw image. One of the challenges faced by the designers of these systems is to secure the template data extracted from the biometric modalities of the user and to protect the raw images. To minimize spoof attacks on biometric systems by unauthorised users, one solution is to use multi-biometric systems. A multi-modal biometric system works by using a fusion technique to merge feature templates generated from different modalities of the human. In this work a new scheme is proposed to secure templates during feature-level fusion. The scheme is based on the union operation of fuzzy relations of the templates of the modalities during the fusion process of multimodal biometric systems. This approach serves the dual purpose of feature fusion as well as transformation of the templates into a single secured non-invertible template. The proposed technique is cancelable and was experimentally tested on a bimodal biometric system comprising fingerprint and hand geometry. The developed scheme removes the problem of an attacker learning the original minutiae positions in the fingerprint and the various measurements of hand geometry. The given scheme provides improved performance of the system, with a reduction in false accept rate and an improvement in genuine accept rate.
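
    The core fusion operation, a fuzzy union, amounts to taking element-wise maxima of membership values; the sketch below shows the idea on two made-up feature vectors, whereas the paper itself works with fuzzy relations built from fingerprint and hand-geometry templates.

```python
# Hedged sketch of fuzzy-union fusion of two normalised biometric templates.
import numpy as np

def normalise(v):
    """Map a raw feature vector to [0, 1] so it can act as fuzzy memberships."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

def fuse_templates(fingerprint_features, hand_geometry_features):
    """Fuzzy union (element-wise max) of the two normalised templates."""
    a, b = normalise(fingerprint_features), normalise(hand_geometry_features)
    m = min(a.size, b.size)
    return np.maximum(a[:m], b[:m])        # standard fuzzy-set union

fp = np.random.rand(64)                    # stand-in fingerprint template
hg = np.random.rand(64)                    # stand-in hand-geometry template
fused = fuse_templates(fp, hg)             # single fused template; a and b cannot be recovered
```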

  9. Synchronised PWM Schemes for Three-level Inverters with Zero Common-mode Voltage

    DEFF Research Database (Denmark)

    Oleschuk, Valentin; Blaabjerg, Frede

    2002-01-01

    This paper presents results of analysis and comparison of novel synchronised schemes of pulsewidth modulation (PWM), applied to three-level voltage source inverters with control algorithms providing elimination of the common-mode voltage. The proposed approach is based on a new strategy of digital...

  10. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
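
    To make the setting concrete, the sketch below propagates a fire front with the level-set equation φ_t + R|∇φ| = 0 using first-order Godunov upwinding in space and the third-order TVD Runge-Kutta integrator mentioned above; WRF-Fire's actual spatial discretization is fifth-order WENO, so this is only an illustrative stand-in with made-up parameters.

```python
# Hedged sketch: outward fire-front propagation at a constant rate of spread,
# first-order Godunov in space, Shu-Osher TVD RK3 in time.
import numpy as np

def grad_mag_godunov(phi, dx):
    """Upwind |grad phi| for an outward-moving front (non-negative speed)."""
    dmx = (phi - np.roll(phi, 1, 0)) / dx
    dpx = (np.roll(phi, -1, 0) - phi) / dx
    dmy = (phi - np.roll(phi, 1, 1)) / dx
    dpy = (np.roll(phi, -1, 1) - phi) / dx
    return np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                   np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)

def rhs(phi, rate, dx):
    return -rate * grad_mag_godunov(phi, dx)

def rk3_step(phi, rate, dx, dt):
    phi1 = phi + dt * rhs(phi, rate, dx)
    phi2 = 0.75 * phi + 0.25 * (phi1 + dt * rhs(phi1, rate, dx))
    return phi / 3.0 + 2.0 / 3.0 * (phi2 + dt * rhs(phi2, rate, dx))

dx, rate = 12.5, 0.5                            # grid spacing [m], spread rate [m/s]
y, x = np.mgrid[0:200, 0:200] * dx
phi = np.sqrt((x - 1250.0)**2 + (y - 1250.0)**2) - 100.0   # ignition circle
for _ in range(300):                            # 1500 s of spread
    phi = rk3_step(phi, rate, dx, dt=5.0)
burned_area = (phi <= 0).sum() * dx * dx        # fire area (phi <= 0 is burned)
```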

  11. Adaptive protection coordination scheme for distribution network with distributed generation using ABC

    Directory of Open Access Journals (Sweden)

    A.M. Ibrahim

    2016-09-01

    Full Text Available This paper presents an adaptive protection coordination scheme for optimal coordination of DOCRs in interconnected power networks under the impact of DG; the coordination technique used is the Artificial Bee Colony (ABC) algorithm. The scheme adapts to system changes; new relay settings are obtained as the generation level or system topology changes. The developed adaptive scheme is applied to the IEEE 30-bus test system for both single- and multi-DG existence, where the results are shown and discussed.

  12. Joint multiuser switched diversity and adaptive modulation schemes for spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa

    2012-12-01

    In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, we devise two schemes for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality. In order to alleviate the high feedback load associated with the first scheme, we develop a second scheme based on the concept of switched diversity where the base station scans the users in a sequential manner until an acceptable user is found. In addition to these two selection schemes, we consider two power adaptive settings at the secondary users based on the amount of interference available at the secondary transmitter. In the On/Off power setting, users are allowed to transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users are allowed to vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms where we show the trade-off between the average spectral efficiency and average feedback load for both schemes. © 2012 IEEE.
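
    The difference between the two selection schemes can be illustrated with a toy simulation; the admissibility test, fading model and thresholds below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: "best user" selection needs feedback from every admissible
# user, while switched diversity scans sequentially and stops at the first
# acceptable user, reducing the feedback load.
import numpy as np

rng = np.random.default_rng(1)

def admissible(snr, interference, snr_min=1.0, interference_max=0.5):
    return (snr >= snr_min) and (interference <= interference_max)

def best_user(snrs, interferences):
    ok = [i for i in range(len(snrs)) if admissible(snrs[i], interferences[i])]
    return max(ok, key=lambda i: snrs[i]) if ok else None    # full feedback

def switched_user(snrs, interferences):
    for i in range(len(snrs)):                               # sequential scan
        if admissible(snrs[i], interferences[i]):
            return i, i + 1            # selected user, number of users probed
    return None, len(snrs)

snrs = rng.exponential(2.0, 8)             # fading SNRs of 8 secondary users
interferences = rng.exponential(0.3, 8)    # interference caused at the primary receiver
print(best_user(snrs, interferences), switched_user(snrs, interferences))
```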

  13. Joint multiuser switched diversity and adaptive modulation schemes for spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa; Abdallah, Mohamed M.; Serpedin, Erchin; Alouini, Mohamed-Slim; Alnuweiri, Hussein M.

    2012-01-01

    In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, we devise two schemes for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality. In order to alleviate the high feedback load associated with the first scheme, we develop a second scheme based on the concept of switched diversity where the base station scans the users in a sequential manner until an acceptable user is found. In addition to these two selection schemes, we consider two power adaptive settings at the secondary users based on the amount of interference available at the secondary transmitter. In the On/Off power setting, users are allowed to transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users are allowed to vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms where we show the trade-off between the average spectral efficiency and average feedback load for both schemes. © 2012 IEEE.

  14. Two-level MOC calculation scheme in APOLLO2 for cross-section library generation for LWR hexagonal assemblies

    International Nuclear Information System (INIS)

    Petrov, Nikolay; Todorova, Galina; Kolev, Nikola; Damian, Frederic

    2011-01-01

    The accurate and efficient MOC calculation scheme in APOLLO2, developed by CEA for generating multi-parameterized cross-section libraries for PWR assemblies, has been adapted to hexagonal assemblies. The neutronic part of this scheme is based on a two-level calculation methodology. At the first level, a multi-cell method is used in 281 energy groups for cross-section definition and self-shielding. At the second level, precise MOC calculations are performed in a collapsed energy mesh (30-40 groups). In this paper, the application and validation of the two-level scheme for hexagonal assemblies is described. Solutions for a VVER assembly are compared with TRIPOLI4® calculations and direct 281g MOC solutions. The results show that the accuracy is close to that of the 281g MOC calculation while the CPU time is substantially reduced. Compared to the multi-cell method, the accuracy is markedly improved. (author)

  15. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation....... A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system....

  16. An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited

    International Nuclear Information System (INIS)

    Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P

    2017-01-01

    An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly using a scheme that only involves a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example, the ground state. Some low-lying level energies of the model for several sets of parameters are calculated, of which one set of results is compared to that obtained from Braak's recently proposed exact solution. It is shown that the derivative of the entanglement measure, defined in terms of the reduced von Neumann entropy, with respect to the coupling parameter does reach a maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as the quantum Rabi model. (paper)

  17. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive

  18. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Full Text Available Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking areas in office and shopping mall complexes during the peak time of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough set. Rough set is used for the extraction of uncertain rules that exist in the databases of parking situations. The inclusion of rough set theory provides measures of accuracy and roughness, which are used to characterize the uncertainty of the parking lot. Approximation accuracy is employed to depict the accuracy of a rough classification [1] according to different dynamic parking scenarios. As such, the proposed hybrid metaphor comprising Tabu Search and rough set could provide substantial research directions for other similar hard optimization problems.

  19. First UHF Implementation of the Incremental Scheme for Open-Shell Systems.

    Science.gov (United States)

    Anacker, Tony; Tew, David P; Friedrich, Joachim

    2016-01-12

    The incremental scheme makes it possible to compute CCSD(T) correlation energies to high accuracy for large systems. We present the first extension of this fully automated black-box approach to open-shell systems using an Unrestricted Hartree-Fock (UHF) wave function, extending the efficient domain-specific basis set approach to handle open-shell references. We test our approach on a set of organic and metal organic structures and molecular clusters and demonstrate standard deviations from canonical CCSD(T) values of only 1.35 kJ/mol using a triple ζ basis set. We find that the incremental scheme is significantly more cost-effective than the canonical implementation even for relatively small systems and that the ease of parallelization makes it possible to perform high-level calculations on large systems in a few hours on inexpensive computers. We show that the approximations that make our approach widely applicable are significantly smaller than both the basis set incompleteness error and the intrinsic error of the CCSD(T) method, and we further demonstrate that incremental energies can be reliably used in extrapolation schemes to obtain near complete basis set limit CCSD(T) reaction energies for large systems.

  20. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    Science.gov (United States)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation for coupling the solid-earth to an ice-sheet compartment in an earth-system model, the dependency of initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit.: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x

  1. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score-level, feature-level and decision-level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on the face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, the CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of the schemes by reducing the number of features and selecting optimized weights for feature-level and score-level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of the proposed fusion schemes over unimodal and multimodal fusion methods.

  2. A comparative study of reinitialization approaches of the level set method for simulating free-surface flows

    Energy Technology Data Exchange (ETDEWEB)

    Sufyan, Muhammad; Ngo, Long Cu; Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)

    2016-04-15

    Unstructured grids were used to compare the performance of a direct reinitialization scheme with those of two reinitialization approaches based on the solution of a hyperbolic partial differential equation (PDE). The problems of moving interface were solved in the context of a finite element method. A least-square weighted residual method was used to discretize the advection equation of the level set method. The benchmark problems of rotating Zalesak's disk, time-reversed single vortex, and two-dimensional sloshing were examined. Numerical results showed that the direct reinitialization scheme performed better than the PDE-based reinitialization approaches in terms of mass conservation, dissipation and dispersion error, and computational time. In the case of sloshing, numerical results were found to be in good agreement with existing experimental data. The direct reinitialization approach consumed considerably less CPU time than the PDE-based simulations for 20 time periods of sloshing. This approach was stable, accurate, and efficient for all the problems considered in this study.
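
    The "direct" alternative to PDE-based reinitialization, rebuilding a signed distance function geometrically from the current interface, can be sketched on a Cartesian grid as below; this uses SciPy's Euclidean distance transform and is only an approximate stand-in, since the paper works on unstructured finite-element grids.

```python
# Hedged sketch of direct reinitialization: recover a signed distance function
# with (approximately) the same zero level set, instead of solving a PDE.
import numpy as np
from scipy.ndimage import distance_transform_edt

def reinitialize(phi, dx=1.0):
    """Return an approximate signed distance function built from the sign of phi."""
    inside = phi < 0
    dist_outside = distance_transform_edt(~inside, sampling=dx)  # nonzero for outside points
    dist_inside = distance_transform_edt(inside, sampling=dx)    # nonzero for inside points
    return dist_outside - dist_inside

y, x = np.mgrid[0:100, 0:100]
phi = ((x - 50.0)**2 + (y - 50.0)**2) / 400.0 - 1.0   # distorted circle of radius 20
phi_sdf = reinitialize(phi)    # |grad phi_sdf| is close to 1 away from the interface
```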

  3. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level (radioactive) waste (HLW) as (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel...that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission...determines...requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph B. The approach also results in definitions of other wastes classes, i.e., transuranic (TRU) and low-level waste (LLW). The basic waste classification scheme that results from the quantitative definitions of highly radioactive and requires permanent isolation is depicted. The concentrations of radionuclides that correspond to these two boundaries, and that may be used to classify radioactive wastes, are given

  4. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
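
    The harmonic/percussive split at the heart of such a preprocessing scheme can be approximated with off-the-shelf tools; the sketch below remixes a mono file with librosa and is only a rough analogue, since the published scheme additionally exploits the stereo placement of vocals and bass, and the file name here is a placeholder.

```python
# Hedged sketch: rebalance a song's percussive and harmonic components to
# emphasize the beat, as a crude analogue of the preprocessing described above.
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None, mono=True)    # placeholder input file
harmonic, percussive = librosa.effects.hpss(y)           # harmonic/percussive separation

drum_gain, rest_gain = 2.0, 0.7                          # emphasize drums, soften the rest
remix = drum_gain * percussive + rest_gain * harmonic
sf.write("song_remixed.wav", remix, sr)
```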

  5. Evaluation and interconversion of various indicator PCB schemes for ∑PCB and dioxin-like PCB toxic equivalent levels in fish.

    Science.gov (United States)

    Gandhi, Nilima; Bhavsar, Satyendra P; Reiner, Eric J; Chen, Tony; Morse, Dave; Arhonditsis, George B; Drouillard, Ken G

    2015-01-06

    Polychlorinated biphenyls (PCBs) remain chemicals of concern more than three decades after the ban on their production. Technical mixture-based total PCB measurements are unreliable due to weathering and degradation, while detailed full congener specific measurements can be time-consuming and costly for large studies. Measurements using a subset of indicator PCBs (iPCBs) have been considered appropriate; however, inclusion of different PCB congeners in various iPCB schemes makes it challenging to readily compare data. Here, using an extensive data set, we examine the performance of existing iPCB3 (PCB 138, 153, and 180), iPCB6 (iPCB3 plus 28, 52, and 101) and iPCB7 (iPCB6 plus 118) schemes, and new iPCB schemes in estimating total of PCB congeners (∑PCB) and dioxin-like PCB toxic equivalent (dlPCB-TEQ) concentrations in sport fish fillets and the whole body of juvenile fish. The coefficients of determination (R²) for regressions conducted using logarithmically transformed data suggest that inclusion of an increased number of PCBs in an iPCB improves relationship with ∑PCB but not dlPCB-TEQs. Overall, novel iPCB3 (PCB 95, 118, and 153), iPCB4 (iPCB3 plus 138) and iPCB5 (iPCB4 plus 110) presented in this study and existing iPCB6 and iPCB7 are the most optimal indicators, while the current iPCB3 should be avoided. Measurement of ∑PCB based on a more detailed analysis (50+ congeners) is also overall a good approach for assessing PCB contamination and to track PCB origin in fish. Relationships among the existing and new iPCB schemes have been presented to facilitate their interconversion. The iPCB6 equiv levels for the 6.5 and 10 pg/g benchmarks of dlPCB-TEQ05 are about 50 and 120 ng/g ww, respectively, which are lower than the corresponding iPCB6 limits of 125 and 300 ng/g ww set by the European Union.
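
    The interconversion relationships referred to above are log-log regressions between schemes; the sketch below shows the general form with invented numbers, not the study's fitted coefficients.

```python
# Hedged sketch of a log-log regression linking an indicator-PCB sum to total
# PCB; the paired values are placeholders, not data from the study.
import numpy as np

ipcb6 = np.array([20.0, 45.0, 80.0, 150.0, 300.0, 600.0])         # ng/g ww (hypothetical)
total_pcb = np.array([55.0, 120.0, 230.0, 400.0, 820.0, 1700.0])  # ng/g ww (hypothetical)

slope, intercept = np.polyfit(np.log10(ipcb6), np.log10(total_pcb), 1)

def ipcb6_to_total(x):
    """Estimate total PCB from an iPCB6 measurement via the fitted power law."""
    return 10.0 ** (intercept + slope * np.log10(x))

r2 = np.corrcoef(np.log10(ipcb6), np.log10(total_pcb))[0, 1] ** 2
print(f"slope={slope:.2f}, R^2={r2:.3f}, 100 ng/g iPCB6 -> {ipcb6_to_total(100.0):.0f} ng/g")
```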

  6. Setting aside Transactions from Pyramid Schemes as Impeachable Dispositions under South African Insolvency Legislation

    Directory of Open Access Journals (Sweden)

    Zingapi Mabe

    2016-10-01

    Full Text Available South African courts have experienced a rise in the number of cases involving schemes that promise a return on investment with interest rates which are considerably above the maximum amount allowed by law, or schemes which promise compensation from the active recruitment of participants. These schemes, which are often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find themselves having to defend against the trustee's claims for the return of the pay-outs to the insolvent estate. As the schemes are illegal and the pay-outs are often made in terms of void agreements, the question arises whether they can be returned to the insolvent estate. A similar situation arose in Griffiths v Janse van Rensburg 2015 ZASCA 158 (26 October 2015). The point of contention in this case was whether the illegality of the business of the scheme was a relevant consideration in determining whether the pay-outs were made in the ordinary course of business of the scheme. This paper discusses pyramid schemes in the context of impeachable dispositions in terms of the Insolvency Act 24 of 1936.

  7. Exploring the level sets of quantum control landscapes

    International Nuclear Information System (INIS)

    Rothman, Adam; Ho, Tak-San; Rabitz, Herschel

    2006-01-01

    A quantum control landscape is defined by the value of a physical observable as a functional of the time-dependent control field E(t) for a given quantum-mechanical system. Level sets through this landscape are prescribed by a particular value of the target observable at the final dynamical time T, regardless of the intervening dynamics. We present a technique for exploring a landscape level set, where a scalar variable s is introduced to characterize trajectories along these level sets. The control fields E(s,t) accomplishing this exploration (i.e., that produce the same value of the target observable for a given system) are determined by solving a differential equation over s in conjunction with the time-dependent Schroedinger equation. There is full freedom to traverse a level set, and a particular trajectory is realized by making an a priori choice for a continuous function f(s,t) that appears in the differential equation for the control field. The continuous function f(s,t) can assume an arbitrary form, and thus a level set generally contains a family of controls, where each control takes the quantum system to the same final target value, but produces a distinct control mechanism. In addition, although the observable value remains invariant over the level set, other dynamical properties (e.g., the degree of robustness to control noise) are not specifically preserved and can vary greatly. Examples are presented to illustrate the continuous nature of level-set controls and their associated induced dynamical features, including continuously morphing mechanisms for population control in model quantum systems
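
    In symbols, holding the observable value fixed while moving along s amounts to requiring dJ/ds = 0; one natural construction consistent with this description (a sketch, not necessarily the authors' exact formula) projects the free function f(s,t) onto the field variations that leave J unchanged:

      \frac{dJ}{ds} = \int_0^T \frac{\delta J}{\delta E(s,t)}\,\frac{\partial E(s,t)}{\partial s}\,dt = 0,
      \qquad
      \frac{\partial E(s,t)}{\partial s} = f(s,t) - \frac{\int_0^T f(s,t')\,\frac{\delta J}{\delta E(s,t')}\,dt'}{\int_0^T \left(\frac{\delta J}{\delta E(s,t')}\right)^{2} dt'}\;\frac{\delta J}{\delta E(s,t)}.

    With this choice the component of f along the functional gradient is removed, so the observable value is preserved to first order at every s, while the residual freedom in f generates the family of distinct control mechanisms described above.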

  8. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
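
    The constraint aggregation mentioned for the skin buckling constraints is commonly done with a Kreisselmeier-Steinhauser (KS) function; the sketch below shows a generic KS aggregate (an assumption about the general technique, not the paper's exact formulation) that collapses many buckling margins g_i <= 0 into one smooth constraint.

      # Generic Kreisselmeier-Steinhauser aggregation of constraints g_i(x) <= 0.
      import numpy as np

      def ks_aggregate(g, rho=50.0):
          """Smooth, conservative surrogate for max(g); tightens as rho grows."""
          g = np.asarray(g, dtype=float)
          g_max = g.max()                    # shifting keeps the exponentials bounded
          return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

      # Example: many skin-panel buckling margins collapsed into one constraint value.
      buckling_margins = np.array([-0.30, -0.05, -0.12, -0.01])
      print(ks_aggregate(buckling_margins))   # slightly above the most critical margin, -0.01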

  9. Bit-level quantum color image encryption scheme with quantum cross-exchange operation and hyper-chaotic system

    Science.gov (United States)

    Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian

    2018-06-01

    To obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting a quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, a quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security, since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness and less predictability than low-dimensional chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.

  10. The QKD network: model and routing scheme

    Science.gov (United States)

    Yang, Chao; Zhang, Hongqi; Su, Jinhai

    2017-11-01

    Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as the distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trust-relaying QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trust-relaying QKD network in detail. This paper focuses on the routing issues, builds a model of the trust-relaying QKD network for easily analysing and understanding this network, and proposes a dynamical routing scheme for this network. From the viewpoint of designing a dynamical routing scheme in classical networks, the proposed scheme consists of three components: a Hello protocol that shares the network topology information, a routing algorithm that selects a set of suitable paths and establishes the routing table, and a link-state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
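
    As a rough illustration of the path-selection component, the sketch below runs a plain Dijkstra search over a hypothetical trusted-relay topology, with link costs that penalise relays holding little secret-key material; the topology, key pools and cost model are invented for illustration and are not the scheme's actual metric.

      # Toy path selection for a trusted-relay QKD network (illustrative only).
      import heapq

      def best_path(graph, src, dst):
          """Dijkstra over a dict {node: {neighbour: cost}}; returns (cost, path)."""
          queue, seen = [(0.0, src, [src])], set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == dst:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, c in graph.get(node, {}).items():
                  if nxt not in seen:
                      heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
          return float("inf"), []

      # Cost of a relay link = 1 / available key material (hypothetical pools, in keys).
      key_pool = {("A", "B"): 400, ("B", "C"): 50, ("A", "D"): 300, ("D", "C"): 250}
      graph = {}
      for (u, v), keys in key_pool.items():
          graph.setdefault(u, {})[v] = 1.0 / keys
          graph.setdefault(v, {})[u] = 1.0 / keys

      print(best_path(graph, "A", "C"))   # prefers A-D-C, which has more key material en route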

  11. Modulation Schemes of Multi-phase Three-Level Z-Source Inverters

    DEFF Research Database (Denmark)

    Gao, F.; Loh, P.C.; Blaabjerg, Frede

    2007-01-01

    This paper investigates the modulation schemes of three-level multiphase Z-source inverters with either two Z-source networks or a single Z-source network connected between the dc sources and inverter circuitry. With the proper offset added for achieving both desired four-leg operation and optimized ... different modulation requirement and output performance. For clearly illustrating the detailed modulation process, time-domain analysis instead of the traditional multi-dimensional space vector demonstration is assumed, which reveals the right way to insert shoot-through durations in the switching sequence ... with minimal commutation count. Lastly, the theoretical findings are verified in Matlab/PLECS simulation and experimentally using constructed laboratory prototypes.

  12. Renormalization in self-consistent approximation schemes at finite temperature I: theory

    International Nuclear Information System (INIS)

    Hees, H. van; Knoll, J.

    2001-07-01

    Within finite temperature field theory, we show that truncated non-perturbative self-consistent Dyson resummation schemes can be renormalized with local counter-terms defined at the vacuum level. The requirements are that the underlying theory is renormalizable and that the self-consistent scheme follows Baym's Φ-derivable concept. The scheme generates both the renormalized self-consistent equations of motion and the closed equations for the infinite set of counter-terms. At the same time the corresponding 2PI-generating functional and the thermodynamic potential can be renormalized, in consistency with the equations of motion. This guarantees that the standard Φ-derivable properties, such as thermodynamic consistency and exact conservation laws, also hold for the renormalized approximation scheme. The proof uses the techniques of BPHZ-renormalization to cope with the explicit and the hidden overlapping vacuum divergences. (orig.)

  13. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed

    2012-10-19

    Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined/acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select a beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.
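
    A minimal sketch of the selection rule described above: draw random unit-norm beams, discard those whose interference at the primary receiver exceeds the cap, and keep the beam with the largest secondary SINR. The channel model, noise figures and constraint value are invented for illustration.

      # Interference-aware random beam selection (toy model, arbitrary numbers).
      import numpy as np

      rng = np.random.default_rng(0)
      n_tx, n_beams = 4, 8
      noise = 1e-3                       # noise power at the secondary receiver
      primary_interference = 2e-3        # interference from the primary transmitter
      interference_cap = 5e-3            # maximum tolerable interference at the primary receiver

      h_sec = rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)   # secondary Tx -> secondary Rx
      h_pri = rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)   # secondary Tx -> primary Rx

      # Random unit-norm beams (the paper's beams are additionally power-optimized).
      beams = rng.normal(size=(n_beams, n_tx)) + 1j * rng.normal(size=(n_beams, n_tx))
      beams /= np.linalg.norm(beams, axis=1, keepdims=True)

      best_beam, best_sinr = None, -np.inf
      for w in beams:
          if np.abs(h_pri @ w) ** 2 > interference_cap:            # would violate the primary constraint
              continue
          sinr = np.abs(h_sec @ w) ** 2 / (noise + primary_interference)
          if sinr > best_sinr:
              best_beam, best_sinr = w, sinr

      print(best_sinr)   # -inf would mean no beam satisfied the interference constraint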

  14. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network.

    Science.gov (United States)

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP and the corresponding discrete-time recurrent neural network.

  15. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.

  16. Privacy Preserving Mapping Schemes Supporting Comparison

    NARCIS (Netherlands)

    Tang, Qiang

    2010-01-01

    To cater to the privacy requirements in cloud computing, we introduce a new primitive, namely Privacy Preserving Mapping (PPM) schemes supporting comparison. A PPM scheme enables a user to map data items into images in such a way that, with a set of images, any entity can determine the <, =, >

  17. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  18. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to separate MS lesions, the normal tissue region (including GM and WM), the CSF, and the background in FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy.

  19. Numerical schemes for explosion hazards

    International Nuclear Information System (INIS)

    Therme, Nicolas

    2015-01-01

    In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High-order methods of MUSCL type are used in the discrete convective operators, based solely on the material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow, and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called
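
    As an aside on the MUSCL-type reconstruction mentioned in the abstract, the sketch below shows a generic 1D minmod-limited reconstruction (a textbook building block under simplifying assumptions, not the thesis's staggered scheme).

      # Minmod-limited MUSCL reconstruction of face values for a 1D scalar field.
      import numpy as np

      def minmod(a, b):
          """Pick the smaller slope when the signs agree, otherwise zero."""
          return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def muscl_faces(u):
          """Limited left/right extrapolations at the faces of the interior cells."""
          du_minus = u[1:-1] - u[:-2]          # backward differences
          du_plus = u[2:] - u[1:-1]            # forward differences
          slope = minmod(du_minus, du_plus)    # limited slope in each interior cell
          return u[1:-1] + 0.5 * slope, u[1:-1] - 0.5 * slope

      u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # a step: the limiter prevents overshoot
      print(muscl_faces(u))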

  20. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  1. Tabled Execution in Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Willcock, J J; Lumsdaine, A; Quinlan, D J

    2008-08-19

    Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs, and making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
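
    To convey the idea outside Scheme, the Python sketch below memoizes results and tracks the set of active calls, returning a provisional "no" when a call re-enters itself; this is enough to compute reachability over a cyclic graph, though it simplifies real tabling, which also revisits provisional answers.

      def tabled(fn):
          """Memoize fn and break cycles: an in-progress call returns False."""
          table, active = {}, set()

          def wrapper(*args):
              if args in table:
                  return table[args]
              if args in active:          # the same call is already being evaluated
                  return False            # provisional answer for the cycle
              active.add(args)
              try:
                  result = fn(*args)
              finally:
                  active.discard(args)
              table[args] = result
              return result

          return wrapper

      edges = {"a": ["b"], "b": ["a", "c"], "c": []}   # note the a -> b -> a cycle

      @tabled
      def reaches(x, y):
          return x == y or any(reaches(n, y) for n in edges[x])

      print(reaches("a", "c"))   # True, and the cyclic call terminates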

  2. Hitting emissions targets with (statistical) confidence in multi-instrument Emissions Trading Schemes

    International Nuclear Information System (INIS)

    Shipworth, David

    2003-12-01

    A means of assessing, monitoring and controlling aggregate emissions from multi-instrument Emissions Trading Schemes is proposed. The approach allows contributions from different instruments with different forms of emissions targets to be integrated. Where Emissions Trading Schemes are helping to meet specific national targets, the approach allows the entry requirements of new participants to be calculated and set at a level that will achieve these targets. The approach is multi-levelled, and may be extended downwards to support pooling of participants within instruments, or upwards to embed Emissions Trading Schemes within a wider suite of policies and measures with hard and soft targets. Aggregate emissions from each instrument are treated stochastically. Emissions from the scheme as a whole are then the joint probability distribution formed by integrating the emissions from its instruments. Because a Bayesian approach is adopted, qualitative and semi-qualitative data from expert opinion can be used where quantitative data is not currently available, or is incomplete. This approach helps government retain sufficient control over emissions trading scheme targets to allow them to meet their emissions reduction obligations, while minimising the need for retrospectively adjusting existing participants' conditions of entry. This maintains participant confidence, while providing the necessary policy levers for good governance
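
    A minimal sketch of the aggregation step: represent each instrument's emissions as a probability distribution and form the scheme-wide distribution by Monte Carlo summation. The distributions, parameters and target below are invented for illustration only.

      # Aggregate emissions from three instruments and check a target probabilistically.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000                                                   # Monte Carlo draws

      # Hypothetical per-instrument emission distributions (e.g. elicited from experts).
      permit_trading = rng.normal(loc=50.0, scale=2.0, size=n)
      credit_trading = rng.lognormal(mean=np.log(30.0), sigma=0.15, size=n)
      government_trading = rng.triangular(left=15.0, mode=20.0, right=28.0, size=n)

      total = permit_trading + credit_trading + government_trading  # scheme-wide emissions
      target = 105.0
      print(f"P(total <= target) = {np.mean(total <= target):.2f}")
      print("95% interval:", np.percentile(total, [2.5, 97.5]))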

  3. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Schanen, Michel; Marin, Oana; Zhang, Hong; Anitescu, Mihai

    2016-01-01

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.

  4. The effect of hearing aid signal-processing schemes on acceptable noise levels: perception and prediction.

    Science.gov (United States)

    Wu, Yu-Hsiang; Stangl, Elizabeth

    2013-01-01

    The acceptable noise level (ANL) test determines the maximum noise level that an individual is willing to accept while listening to speech. The first objective of the present study was to systematically investigate the effect of wide dynamic range compression processing (WDRC), and its combined effect with digital noise reduction (DNR) and directional processing (DIR), on ANL. Because ANL represents the lowest signal-to-noise ratio (SNR) that a listener is willing to accept, the second objective was to examine whether the hearing aid output SNR could predict aided ANL across different combinations of hearing aid signal-processing schemes. Twenty-five adults with sensorineural hearing loss participated in the study. ANL was measured monaurally in two unaided and seven aided conditions, in which the status of the hearing aid processing schemes (enabled or disabled) and the location of noise (front or rear) were manipulated. The hearing aid output SNR was measured for each listener in each condition using a phase-inversion technique. The aided ANL was predicted by unaided ANL and hearing aid output SNR, under the assumption that the lowest acceptable SNR at the listener's eardrum is a constant across different ANL test conditions. Study results revealed that, on average, WDRC increased (worsened) ANL by 1.5 dB, while DNR and DIR decreased (improved) ANL by 1.1 and 2.8 dB, respectively. Because the effects of WDRC and DNR on ANL were opposite in direction but similar in magnitude, the ANL of linear/DNR-off was not significantly different from that of WDRC/DNR-on. The results further indicated that the pattern of ANL change across different aided conditions was consistent with the pattern of hearing aid output SNR change created by processing schemes. Compared with linear processing, WDRC creates a noisier sound image and makes listeners less willing to accept noise. However, this negative effect on noise acceptance can be offset by DNR, regardless of microphone mode

  5. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging taking into account the high complexity of the breast tissue. We only use data of a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and to have an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage more details about the anatomical structure of the breast interior is incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D
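
    As a small illustration of the color level set idea, the sketch below labels four regions on a grid from the sign pattern of two level set functions and maps each region to a material property; the geometry and property values are arbitrary examples, not the realistic breast model used in the paper.

      # Sign pattern of two level set functions -> four region labels -> tissue properties.
      import numpy as np

      x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
      phi1 = np.sqrt(x**2 + y**2) - 0.8               # outer interface (e.g. a tissue outline)
      phi2 = np.sqrt((x - 0.75)**2 + y**2) - 0.2      # second interface crossing the first

      region = 2 * (phi1 < 0).astype(int) + (phi2 < 0).astype(int)      # labels 0, 1, 2, 3
      relative_permittivity = np.choose(region, [1.0, 4.0, 9.0, 50.0])  # illustrative values

      print(np.unique(region))                         # all four sign patterns are present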

  6. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  7. Hilbert schemes of points on some classes of surface singularities

    OpenAIRE

    Gyenge, Ádám

    2016-01-01

    We study the geometry and topology of Hilbert schemes of points on the orbifold surface [C^2/G], respectively the singular quotient surface C^2/G, where G is a finite subgroup of SL(2,C) of type A or D. We give a decomposition of the (equivariant) Hilbert scheme of the orbifold into affine space strata indexed by a certain combinatorial set, the set of Young walls. The generating series of Euler characteristics of Hilbert schemes of points of the singular surface of type A or D is computed in...

  8. A Transactional Asynchronous Replication Scheme for Mobile Database Systems

    Institute of Scientific and Technical Information of China (English)

    丁治明; 孟小峰; 王珊

    2002-01-01

    In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.

  9. A repeat-until-success quantum computing scheme

    Energy Technology Data Exchange (ETDEWEB)

    Beige, A [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom); Lim, Y L [DSO National Laboratories, 20 Science Park Drive, Singapore 118230, Singapore (Singapore); Kwek, L C [Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542, Singapore (Singapore)

    2007-06-15

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes.

  10. A repeat-until-success quantum computing scheme

    International Nuclear Information System (INIS)

    Beige, A; Lim, Y L; Kwek, L C

    2007-01-01

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes

  11. Generalized quantization scheme for two-person non-zero sum games

    International Nuclear Information System (INIS)

    Nawaz, Ahmad; Toor, A H

    2004-01-01

    We propose a generalized quantization scheme for non-zero-sum games which can be reduced to the two existing quantization schemes under an appropriate set of parameters. Some other important situations are identified which are not apparent in the two existing quantization schemes.

  12. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in the subsurface environment using the level set method. The slides start with the motivation, explain the level set method (LSM) and its algorithms, present several examples, and close with future work.
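
    For context, a minimal sketch of the basic LSM update such work builds on: first-order upwind evolution of a signed distance function under a constant outward speed (a textbook toy, not the subsurface identification algorithm itself).

      # Evolve phi_t + F |grad(phi)| = 0 with F = 1 on a unit square.
      import numpy as np

      n, steps, speed = 100, 50, 1.0
      h, dt = 1.0 / (n - 1), 0.004
      x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
      phi = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2) - 0.2    # zero level set: a circle

      for _ in range(steps):
          dxm = (phi - np.roll(phi, 1, axis=1)) / h            # backward difference in x
          dxp = (np.roll(phi, -1, axis=1) - phi) / h           # forward difference in x
          dym = (phi - np.roll(phi, 1, axis=0)) / h
          dyp = (np.roll(phi, -1, axis=0) - phi) / h
          grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                         np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)   # Godunov, F > 0
          phi -= dt * speed * grad

      print((phi < 0).mean())   # interior area fraction grows as the front expands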

  13. Economic sustainability, water security and multi-level governance of local water schemes in Nepal

    Directory of Open Access Journals (Sweden)

    Emma Hakala

    2017-07-01

    Full Text Available This article explores the role of multi-level governance and power structures in local water security through a case study of the Nawalparasi district in Nepal. It focuses on economic sustainability as a measure to address water security, placing this thematic in the context of a complicated power structure consisting of local, district and national administration as well as external development cooperation actors. The study aims to find out whether efforts to improve the economic sustainability of water schemes have contributed to water security at the local level. In addition, it will consider the interactions between water security, power structures and local equality and justice. The research builds upon survey data from the Nepalese districts of Nawalparasi and Palpa, and a case study based on interviews and observation in Nawalparasi. The survey was performed in water schemes built within a Finnish development cooperation programme spanning from 1990 to 2004, allowing a consideration of the long-term sustainability of water management projects. This adds a crucial external influence into the intra-state power structures shaping water management in Nepal. The article thus provides an alternative perspective to cross-regional water security through a discussion combining transnational involvement with national and local points of view.

  14. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  15. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  16. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given, which could reduce the level of counterfeit electronic documents signed by a group of users.
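
    The Lagrange interpolation mentioned above is the building block of Shamir-style (t, n) threshold constructions; the sketch below splits a secret into n shares and reconstructs it from any t of them over a small prime field (a toy illustration, not a production threshold signature; requires Python 3.8+ for modular inverses via pow).

      import random

      P = 2**61 - 1                      # a prime; real schemes work in the signature group order

      def share(secret, t, n):
          """Split `secret` into n shares; any t of them reconstruct it."""
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange interpolation at x = 0 over the prime field."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, -1, P)) % P
          return secret

      shares = share(123456789, t=3, n=5)
      print(reconstruct(shares[:3]) == 123456789)   # any 3 of the 5 shares suffice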

  17. REFERQUAL: a pilot study of a new service quality assessment instrument in the GP exercise referral scheme setting

    Science.gov (United States)

    Cock, Don; Adams, Iain C; Ibbetson, Adrian B; Baugh, Phil

    2006-01-01

    Background The development of an instrument accurately assessing service quality in the GP Exercise Referral Scheme (ERS) industry could potentially inform scheme organisers of the factors that affect adherence rates, leading to the implementation of strategic interventions aimed at reducing client drop-out. Methods A modified version of the SERVQUAL instrument was designed for use in the ERS setting and subsequently piloted amongst 27 ERS clients. Results Test re-test correlations were calculated via Pearson's 'r' or Spearman's 'rho', depending on whether the variables were normally distributed, and showed a significant (mean r = 0.957, SD = 0.02, p < 0.05; mean rho = 0.934, SD = 0.03, p < 0.05) relationship between all items within the questionnaire. In addition, satisfactory internal consistency was demonstrated via Cronbach's 'α'. Furthermore, clients responded favourably towards the usability, wording and applicability of the instrument's items. Conclusion REFERQUAL shows promise as a suitable tool for future evaluation of service quality within the ERS community. Future research should further assess the validity and reliability of this instrument through the use of a confirmatory factor analysis to scrutinise the proposed dimensional structure. PMID:16725021

  18. Level Set Structure of an Integrable Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Taichiro Takagi

    2010-03-01

    Full Text Available Based on a group-theoretical setting, a sort of discrete dynamical system is constructed and applied to a combinatorial dynamical system defined on the set of certain Bethe-ansatz-related objects known as rigged configurations. This system is then used to study a one-dimensional periodic cellular automaton related to the discrete Toda lattice. It is shown for the first time that the level set of this cellular automaton decomposes into connected components and that every such component is a torus.

  19. A hybrid Lagrangian Voronoi-SPH scheme

    Science.gov (United States)

    Fernandez-Gutierrez, D.; Souto-Iglesias, A.; Zohdi, T. I.

    2017-11-01

    A hybrid Lagrangian Voronoi-SPH scheme, with an explicit weakly compressible formulation for both the Voronoi and SPH sub-domains, has been developed. The SPH discretization is substituted by Voronoi elements close to solid boundaries, where SPH consistency and boundary conditions implementation become problematic. A buffer zone to couple the dynamics of both sub-domains is used. This zone is formed by a set of particles where fields are interpolated taking into account SPH particles and Voronoi elements. A particle may move in or out of the buffer zone depending on its proximity to a solid boundary. The accuracy of the coupled scheme is discussed by means of a set of well-known verification benchmarks.

  20. Decentralising Zimbabwe’s water management: The case of Guyu-Chelesa irrigation scheme

    Science.gov (United States)

    Tambudzai, Rashirayi; Everisto, Mapedza; Gideon, Zhou

    Smallholder irrigation schemes are largely supply-driven, excluding the beneficiaries from management decisions and from the choice of the irrigation schemes that would best suit their local needs. It is against this background that the decentralisation framework and the Dublin Principles on Integrated Water Resource Management (IWRM) emphasise the need for a participatory approach to water management. The Zimbabwean government has gone a step further in decentralising the management of irrigation schemes, that is, promoting farmer-managed irrigation schemes so as to ensure effective management of scarce community-based land and water resources. The study set out to investigate the way in which the Guyu-Chelesa irrigation scheme is managed, with specific emphasis on the role of the Irrigation Management Committee (IMC), the level of accountability and the powers devolved to the IMC. Merrey's 2008 critique of IWRM also informs this study, which views irrigation as going beyond infrastructure by looking at how institutions and decision-making processes play out at various levels, including at the irrigation scheme level. The study was positioned on the hypothesis that 'decentralised or autonomous irrigation management enhances the sustainability and effectiveness of irrigation schemes'. To validate or falsify this hypothesis, data were gathered through desk research (reviewing articles and documents from within the scheme) and field research (questionnaire surveys, key informant interviews and field observation). The Statistical Package for the Social Sciences was used to analyse data quantitatively, whilst qualitative data were analysed thematically using content analysis. Comparative analysis was carried out by comparing the Guyu-Chelesa irrigation scheme with the experiences of other smallholder irrigation schemes within Zimbabwe and the Sub-Saharan African region at large. The findings were that whilst the

  1. A classification scheme for risk assessment methods.

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects--level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. Each cell in the table represents a different arrangement of strengths and weaknesses, and those arrangements shift gradually as one moves through the table, each cell being optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation given. The report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method', though often we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In

  2. Propagation of frequency-chirped laser pulses in a medium of atoms with a Λ-level scheme

    International Nuclear Information System (INIS)

    Demeter, G.; Dzsotjan, D.; Djotyan, G. P.

    2007-01-01

    We study the propagation of frequency-chirped laser pulses in optically thick media. We consider a medium of atoms with a Λ level-scheme (Lambda atoms) and also, for comparison, a medium of two-level atoms. Frequency-chirped laser pulses that induce adiabatic population transfer between the atomic levels are considered. They induce transitions between the two lower (metastable) levels of the Λ-atoms and between the ground and excited states of the two-level atoms. We show that associated with this adiabatic population transfer in Λ-atoms, there is a regime of enhanced transparency of the medium--the pulses are distorted much less than in the medium of two-level atoms and retain their ability to transfer the atomic population much longer during propagation

  3. Visual privacy by context: proposal and evaluation of a level-based visualisation scheme.

    Science.gov (United States)

    Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco

    2015-06-04

    Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as at telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users.

  4. A Prediction Packetizing Scheme for Reducing Channel Traffic in Transaction-Level Hardware/Software Co-Emulation

    OpenAIRE

    Lee , Jae-Gon; Chung , Moo-Kyoung; Ahn , Ki-Yong; Lee , Sang-Heon; Kyung , Chong-Min

    2005-01-01

    This paper presents a scheme for efficient channel usage between simulator and accelerator where the accelerator models some RTL sub-blocks in the accelerator-based hardware/software co-simulation while the simulator runs transaction-level model of the remaining part of the whole chip being verified. With conventional simulation accelerator, evaluations of simulator and accelerator alternate at every valid simulation ...

  5. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    Reactor trips on low steam generator water level account for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible for plant safety reasons, but that relaxation of the trip set point by 9% is feasible. The set point relaxation requires drilling new holes for level measurement in the operating steam generators. Characteristics of the negative neutron flux rate trip and the reactor trip were also reviewed as additional work. Since the purpose of modifying the trip system to reduce the reactor scram frequency is not to satisfy legal requirements but to improve plant performance, and since the modification has both positive and negative aspects, the decision on actual modification needs to be made based on the results of this study and the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  6. EU Action against Climate Change. EU emissions trading. An open scheme promoting global innovation

    International Nuclear Information System (INIS)

    2005-01-01

    The European Union is committed to global efforts to reduce the greenhouse gas emissions from human activities that threaten to cause serious disruption to the world's climate. Building on the innovative mechanisms set up under the Kyoto Protocol to the 1992 United Nations Framework Convention on Climate Change (UNFCCC) - joint implementation, the clean development mechanism and international emissions trading - the EU has developed the largest company-level scheme for trading in emissions of carbon dioxide (CO2), making it the world leader in this emerging market. The emissions trading scheme started in the 25 EU Member States on 1 January 2005

  7. Properties of 112Cd from the (n,n'γ) reaction: Levels and level densities

    International Nuclear Information System (INIS)

    Garrett, P. E.; Lehmann, H.; Jolie, J.; McGrath, C. A.; Yeh, Minfang; Younes, W.; Yates, S. W.

    2001-01-01

    Levels in 112 Cd have been studied through the (n,n'γ) reaction with monoenergetic neutrons. An extended set of experiments that included excitation functions, γ-ray angular distributions, and γγ coincidence measurements was performed. A total of 375 γ rays were placed in a level scheme comprising 200 levels (of which 238 γ-ray assignments and 58 levels are newly established) up to 4 MeV in excitation. No evidence to support the existence of 47 levels as suggested in previous studies was found, and these have been removed from the level scheme. From the results, a comparison of the level density is made with the constant temperature and back-shifted Fermi gas models. The back-shifted Fermi gas model with the Gilbert-Cameron spin cutoff parameter provided the best overall fit. Without using the neutron resonance information and only fitting the cumulative number of low-lying levels, the level density parameters extracted are a sensitive function of the maximum energy used in the fit

  8. Additive operator-difference schemes splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy

  9. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  10. Mapping topographic structure in white matter pathways with level set trees.

    Directory of Open Access Journals (Sweden)

    Brian P Kent

    Full Text Available Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees--which provide a concise representation of the hierarchical mode structure of probability density functions--offer a statistically-principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.
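
    A minimal one-dimensional sketch of the level set tree idea: estimate a density, threshold it at decreasing levels, and count the connected components of each super-level set; how components appear and split across levels traces the tree. The data are synthetic and the example only illustrates the concept, not the streamline-endpoint analysis of the paper.

      # Count modes of a 1D density estimate at a sequence of levels.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      data = np.concatenate([rng.normal(-2, 0.4, 300), rng.normal(2, 0.6, 300)])

      grid = np.linspace(-5, 5, 500)
      density = gaussian_kde(data)(grid)

      def components(mask):
          """Connected runs of True in a 1D boolean mask."""
          idx = np.flatnonzero(mask)
          return np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1) if idx.size else []

      # Walk down the levels: the number of components traces the tree's branching.
      for level in np.linspace(0.9 * density.max(), 0.1 * density.max(), 5):
          print(f"level {level:.3f}: {len(components(density >= level))} mode(s)")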

  11. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape ... of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit ...

  12. Pixel detector readout electronics with two-level discriminator scheme

    International Nuclear Information System (INIS)

    Pengg, F.

    1998-01-01

    In preparation for a silicon pixel detector with more than 3,000 readout channels per chip for operation at the future Large Hadron Collider (LHC) at CERN, the analog front end of the readout electronics has been designed and measured on several test arrays with 16 by 4 cells. They are implemented in the HP 0.8 μm process but are compatible with the design rules of the radiation-hard Honeywell 0.8 μm bulk process. Each cell contains a bump bonding pad, preamplifier, discriminator and control logic for masking and testing within a layout area of only 50 μm by 140 μm. A new two-level discriminator scheme has been implemented to cope with the problems of time-walk and interpixel cross-coupling. The measured gain of the preamplifier is 900 mV for a minimum ionizing particle (MIP, about 24,000 e⁻ for a 300 μm thick Si detector) with a return to baseline within 750 ns for a 1 MIP input signal. The full readout chain (without detector) shows an equivalent noise charge of 60 e⁻ r.m.s. The time-walk, a function of the separation between the two threshold levels, is measured to be 22 ns at a separation of 1,500 e⁻, which is adequate for the 40 MHz beam-crossing frequency at the LHC. The interpixel cross-coupling, measured with a 40 fF coupling capacitance, is less than 3%. A single cell consumes 35 μW at a 3.5 V supply voltage.

  13. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that the mass is well conserved by the present method.
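
    In LaTeX form, the idea can be sketched as follows (a generic form used only for illustration; the paper derives the specific source term from the mass balance, which is not reproduced here):

        \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = S(t),
        \qquad
        S \ \text{chosen so that}\ \frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega} H(-\phi)\,\mathrm{d}V = 0,

    with H the Heaviside function, so that the volume (and hence, at constant density, the mass) enclosed by the zero level set is preserved.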

  14. The Political Economy of International Emissions Trading Scheme Choice

    DEFF Research Database (Denmark)

    Boom, Jan-Tjeerd; Svendsen, Jan Tinggard

    2000-01-01

    The Kyoto Protocol allows emission trade between the Annex B countries. We consider three schemes of emissions trading: government trading, permit trading and credit trading. The schemes are compared in a public choice setting focusing on group size and rent-seeking from interest groups. We find ...

  15. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments on both noiseless and noisy data illustrate how the proposed method behaves.

  16. Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings

    Directory of Open Access Journals (Sweden)

    Caixue Zhou

    2017-01-01

    Full Text Available Generalized signcryption (GSC) can be applied as an encryption scheme, a signature scheme, or a signcryption scheme with only one algorithm and one key pair. A key-insulated mechanism can resolve the private key exposure problem. To ensure the security of cloud storage, we introduce the key-insulated mechanism into GSC and propose a concrete scheme without bilinear pairings in the certificateless cryptosystem setting. We provide a formal definition and a security model of certificateless key-insulated GSC. Then, we prove that our scheme is confidential under the computational Diffie-Hellman (CDH) assumption and unforgeable under the elliptic curve discrete logarithm (EC-DL) assumption. Our scheme also supports both random-access key update and secure key update. Finally, we evaluate the efficiency of our scheme and demonstrate that it is highly efficient. Thus, our scheme is more suitable for users who communicate with the cloud using mobile devices.

  17. A novel two-level dynamic parallel data scheme for large 3-D SN calculations

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Shedlock, D.; Haghighat, A.; Yi, C.

    2005-01-01

    We introduce a new dynamic parallel memory optimization scheme for executing large scale 3-D discrete ordinates (Sn) simulations on distributed memory parallel computers. In order for parallel transport codes to be truly scalable, they must use parallel data storage, where only the variables that are locally computed are locally stored. Even with parallel data storage for the angular variables, cumulative storage requirements for large discrete ordinates calculations can be prohibitive. To address this problem, Memory Tuning has been implemented into the PENTRAN 3-D parallel discrete ordinates code as an optimized, two-level ('large' array, 'small' array) parallel data storage scheme. Memory Tuning can be described as the process of parallel data memory optimization. Memory Tuning dynamically minimizes the amount of required parallel data in allocated memory on each processor using a statistical sampling algorithm. This algorithm is based on the integral average and standard deviation of the number of fine meshes contained in each coarse mesh in the global problem. Because PENTRAN only stores the locally computed problem phase space, optimal two-level memory assignments can be unique on each node, depending upon the parallel decomposition used (hybrid combinations of angular, energy, or spatial). As demonstrated in the two large discrete ordinates models presented (a storage cask and an OECD MOX Benchmark), Memory Tuning can save a substantial amount of memory per parallel processor, allowing one to accomplish very large scale Sn computations. (authors)
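
    A minimal Python sketch of the flavour of such a statistics-driven, two-level storage split is given below; the threshold rule, the function name and the mean-plus-one-standard-deviation criterion are illustrative assumptions, not PENTRAN's actual algorithm.

        import statistics

        def two_level_array_sizes(fine_meshes_per_coarse, k=1.0):
            """Assign each coarse mesh a 'small' or 'large' storage size based on the
            mean and standard deviation of its fine-mesh count (illustrative rule)."""
            mean = statistics.mean(fine_meshes_per_coarse)
            stdev = statistics.pstdev(fine_meshes_per_coarse)
            threshold = mean + k * stdev
            small_size = int(round(threshold))
            large_size = max(fine_meshes_per_coarse)
            return [large_size if n > threshold else small_size
                    for n in fine_meshes_per_coarse]

        # example: most coarse meshes get the small allocation, a few the large one
        print(two_level_array_sizes([40, 45, 42, 300, 44, 41, 280]))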

  18. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and capable of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  19. The scheme machine: A case study in progress in design derivation at system levels

    Science.gov (United States)

    Johnson, Steven D.

    1995-01-01

    The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980's, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection

  20. Insights on different participation schemes to meet climate goals

    International Nuclear Information System (INIS)

    Russ, Peter; Ierland, Tom van

    2009-01-01

    Models and scenarios to assess greenhouse gas mitigation action have become more diversified and detailed, allowing the simulation of more realistic global climate policy set-ups. In this paper, different participation schemes to meet different levels of radiative forcing are analysed. The focus is on scenarios that are in line with the 2 deg. C target. Typical stylised participation schemes are based either on a perfect global carbon market or delayed participation with targets only for developed countries, no actions by developing countries and no access to credits from offsetting mechanisms in developing countries. This paper adds an intermediate policy scenario assuming a gradual incorporation of all countries, including a gradually developing carbon market, and taking into account the ability to contribute of different parties. Perfect participation by all parties would be optimal, but it is shown that participation schemes involving a gradual and differentiated participation by all parties can substantially decrease global costs and still meet the 2 deg. C target. Carbon markets can compensate in part for those costs incurred by developing countries' own, autonomous mitigation actions that do not generate tradable emission credits.

  1. Scheme-Independent Predictions in QCD: Commensurate Scale Relations and Physical Renormalization Schemes

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    1998-01-01

    Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scale, such as the "generalized Crewther relation", which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e+e- annihilation cross section. All non-conformal effects are absorbed by fixing the ratio of the respective momentum transfer and energy scales. In the case of fixed-point theories, commensurate scale relations relate both the ratio of couplings and the ratio of scales as the fixed point is approached. The relations between the observables are independent of the choice of intermediate renormalization scheme or other theoretical conventions. Commensurate scale relations also provide an extension of the standard minimal subtraction scheme, which is analytic in the quark masses, has non-ambiguous scale-setting properties, and inherits the physical properties of the effective charge α_V(Q^2) defined from the heavy quark potential. The application of the analytic scheme to the calculation of quark-mass-dependent QCD corrections to the Z width is also reviewed.

  2. A numerical scheme for the generalized Burgers–Huxley equation

    Directory of Open Access Journals (Sweden)

    Brajesh K. Singh

    2016-10-01

    Full Text Available In this article, a numerical solution of the generalized Burgers–Huxley (gBH) equation is approximated by using a new scheme: the modified cubic B-spline differential quadrature method (MCB-DQM). The scheme is based on the differential quadrature method, in which the weighting coefficients are obtained by using modified cubic B-splines as a set of basis functions. This scheme reduces the equation into a system of first-order ordinary differential equations (ODEs), which is solved by adopting the SSP-RK43 scheme. Further, it is shown that the proposed scheme is stable. The efficiency of the proposed method is illustrated by four numerical experiments, which confirm that the obtained results are in good agreement with earlier studies. This scheme is an easy, economical and efficient technique for finding numerical solutions for various kinds of (nonlinear) physical models as compared to the earlier schemes.
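
    For reference, one form of the generalized Burgers–Huxley equation commonly used in the literature is (the parameter conventions shown here are an assumption, since the abstract does not spell them out):

        u_t + \alpha\, u^{\delta} u_x - u_{xx} = \beta\, u \bigl(1 - u^{\delta}\bigr)\bigl(u^{\delta} - \gamma\bigr),
        \qquad 0 \le x \le 1,\ t \ge 0,

    which reduces to the Burgers or Huxley equation for particular choices of the parameters α, β, γ and δ.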

  3. Score level fusion scheme based on adaptive local Gabor features for face-iris-fingerprint multimodal biometric

    Science.gov (United States)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying

    2014-05-01

    A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding and fusion method for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basic functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced by the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single-scalar score via a trained, supported, vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that as well as achieving further powerful local Gabor features of multimodalities and obtaining better recognition performance by their fusion strategy, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
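
    The final fusion step can be pictured with the short Python sketch below (purely illustrative: the feature layout, labels and threshold are assumptions, and scikit-learn's SVR merely stands in for the trained support vector regression model mentioned above).

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        # each row: matching scores of the two local Gabor features for face, iris
        # and fingerprint (6 scores in total); y: genuine (1) / impostor (0) labels
        X_train = rng.random((200, 6))
        y_train = (X_train.mean(axis=1) > 0.5).astype(float)

        fusion = SVR(kernel="rbf").fit(X_train, y_train)   # train the score-fusion regressor
        fused = fusion.predict(rng.random((3, 6)))          # one scalar score per probe
        decisions = fused > 0.5                             # accept/reject threshold (assumption)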

  4. Electromagnetically induced transparency and retrieval of light pulses in a Λ-type and a V-type level scheme in Pr3+:Y2SiO5

    International Nuclear Information System (INIS)

    Beil, Fabian; Klein, Jens; Halfmann, Thomas; Nikoghosyan, Gor

    2008-01-01

    We examine electromagnetically induced transparency (EIT), the optical preparation of persistent nuclear spin coherences and the retrieval of light pulses both in a Λ-type and a V-type coupling scheme in a Pr3+:Y2SiO5 crystal, cooled to cryogenic temperatures. The medium is prepared by optical pumping and spectral hole burning, creating a spectrally isolated Λ-type and a V-type system within the inhomogeneous bandwidth of the 3H4 ↔ 1D2 transition of the Pr3+ ions. By EIT, in the Λ-type scheme we drive a nuclear spin coherence between the ground-state hyperfine levels, while in the V-type scheme we drive a coherence between the excited-state hyperfine levels. We observe the cancellation of absorption due to EIT and the retrieval of light pulses in both level schemes. This also permits the determination of dephasing times of the nuclear spin coherence, either in the ground state or the optically excited state.

  5. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    Science.gov (United States)

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

    In this paper, an adaptive fault detection scheme based on a recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms due to normal process changes in a real process. A further part of the study is dedicated to developing a fault isolation approach based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), one of the general techniques of PCA, with which offset and scaling faults can be easily isolated, with explicit offset fault direction and scaling fault classification. The identification of offset and scaling faults is also addressed. The complete scheme of the PCA-based fault diagnosis procedure is proposed. The proposed scheme is first applied to the Imperial Smelting Process, and the results show that the proposed strategies are able to mitigate false alarms and isolate faults efficiently. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
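
    A compact, non-recursive Python sketch of the basic PCA monitoring idea (fit on normal data, flag samples whose squared prediction error exceeds an empirical limit) is shown below; it is a simplification for illustration and not the adaptive/recursive scheme of the paper.

        import numpy as np

        def pca_spe_monitor(X_normal, x_new, n_components=2, quantile=99):
            mu = X_normal.mean(axis=0)
            Xc = X_normal - mu
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            P = Vt[:n_components].T                       # retained principal directions
            resid = (x_new - mu) - P @ (P.T @ (x_new - mu))
            spe = float(resid @ resid)                    # squared prediction error of the new sample
            train_spe = np.sum((Xc - Xc @ P @ P.T) ** 2, axis=1)
            limit = np.percentile(train_spe, quantile)    # empirical control limit (assumption)
            return spe, spe > limit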

  6. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    Full Text Available In this paper a methodology for the three dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon based on the level set method is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values for the etching rates along thirteen principal and high index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (ITK library, developed in the medical image processing community), extended for the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures), are also shown. The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in the simulations of the wet etching process.
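
    In level set form, orientation-dependent etching of this kind is typically written as motion in the normal direction with an anisotropic rate (a generic statement for illustration, not the paper's exact notation):

        \frac{\partial \phi}{\partial t} + R(\mathbf{n})\,\lvert\nabla\phi\rvert = 0,
        \qquad
        \mathbf{n} = \frac{\nabla\phi}{\lvert\nabla\phi\rvert},

    where R(n) is the crystallographic-direction-dependent etch rate interpolated from the measured values.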

  7. The behaviour of the lande factor and effective exchange parameter in a group of Pr intermetallics observed through reduced level scheme models

    International Nuclear Information System (INIS)

    Ranke, P.J. von; Caldas, A.; Palermo, L.

    1993-01-01

    The present work constitutes a portion of a continuing series of studies dealing with models, in which we retain only the two lowest levels of the crystal field splitting scheme of rare-earth ion in rare-earth intermetallics. In these reduced level scheme models, the crystal field and the magnetic Hamiltonians are represented in matrix notation. These two matrices constitute the model Hamiltonian proposed in this paper, from which we derive the magnetic state equations of interest for this work. Putting into these equations a group of adequate experimental data found in the literature for a particular rare-earth intermetallic we obtain the Lande factor and effective exchange parameter related to this rare-earth intermetallic. This study will be applied to a group of Pr intermetallics, in cubic symmetry, in which the ground level may be a non-magnetic singlet level or a non-magnetic doublet level. In both cases, the first excited level is a triplet one. (orig.)

  8. New Imaging Operation Scheme at VLTI

    Science.gov (United States)

    Haubois, Xavier

    2018-04-01

    After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.

  9. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface tracking method. In both models, the surface tension is represented as a body force, concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented into the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two. (author)
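
    The first (smoothed Delta function) option corresponds to the familiar continuum-surface-force form, sketched here in generic notation as an illustration rather than a quotation from the paper:

        \mathbf{F}_{\sigma} = \sigma\,\kappa(\phi)\,\delta_{\varepsilon}(\phi)\,\nabla\phi,
        \qquad
        \kappa(\phi) = \nabla\cdot\!\left(\frac{\nabla\phi}{\lvert\nabla\phi\rvert}\right),

    where σ is the surface tension coefficient and δ_ε a Delta function smoothed over a few cells around the interface.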

  10. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; Martin Jagersand

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with a FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, different than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  11. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  12. Which factors engage women in deprived neighbourhoods to participate in exercise referral schemes?

    Directory of Open Access Journals (Sweden)

    Nierkens Vera

    2008-10-01

    Full Text Available Abstract Background Exercise referral schemes (ERS) have become a popular way of promoting physical activity. The aim of these schemes is to encourage high risk patients to exercise. In evaluating these schemes, little attention has been paid to lower socio-economic groups in a multi-ethnic urban setting. This study aimed to explore the socio-demographic and psychosocial characteristics of female participants in ERS located in deprived neighbourhoods. The second aim was to determine which elements of the intervention make it appealing to participate in the scheme. Methods A mixed method approach was utilized, combining a cross-sectional descriptive study and a qualitative component. In the quantitative part of the study, all female participants (n = 523) filled out a registration form containing questions about socio-demographic and psychosocial characteristics. Height and weight were also measured. In the qualitative part of the study, 38 of these 523 participants were interviewed. Results The majority of the participants had a migrant background, a low level of education, no paid job and a high body mass index. Although most participants were living sedentary lives, at intake they were quite motivated to start exercising. The ERS appealed to them because of its specific elements: facilitating role of the health professional, supportive environment, financial incentive, supervision and neighbourhood setting. Conclusion This study supports the idea that ERS interventions appeal to women from lower socio-economic groups, including ethnic minorities. The ERS seems to meet their contextual, economic and cultural needs. Since the elements that enabled the women to start exercising are specific to this ERS, we should become aware of whether this population continues to exercise after the end of the scheme.

  13. Design of Rate-Compatible Parallel Concatenated Punctured Polar Codes for IR-HARQ Transmission Schemes

    Directory of Open Access Journals (Sweden)

    Jian Jiao

    2017-11-01

    Full Text Available In this paper, we propose rate-compatible (RC) parallel concatenated punctured polar (PCPP) codes for incremental redundancy hybrid automatic repeat request (IR-HARQ) transmission schemes, which can transmit multiple data blocks over a time-varying channel. The PCPP coding scheme can provide RC polar coding blocks in order to adapt to channel variations. First, we investigate an improved random puncturing (IRP) pattern for the PCPP coding scheme due to the code-rate and block length limitations of conventional polar codes. The proposed IRP algorithm only selects puncturing bits from the frozen bits set and keeps the information bits unchanged during puncturing, which improves decoding performance by 0.2–1 dB over the existing random puncturing (RP) algorithm. Then, we develop a RC IR-HARQ transmission scheme based on PCPP codes. By analyzing the overhead of the previously successfully decoded PCPP coding block in our IR-HARQ scheme, the optimal initial code-rate can be determined for each new PCPP coding block over time-varying channels. Simulation results show that the average number of transmissions is about 1.8 for each PCPP coding block in our RC IR-HARQ scheme with a 2-level PCPP encoding construction, which reduces the average number of transmissions by about half compared with the existing RC polar coding schemes.
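
    The key restriction of the IRP idea, drawing puncturing positions only from the frozen set, can be sketched in a few lines of Python (an illustrative toy, not the authors' algorithm; the uniform random choice and fixed seed are assumptions):

        import random

        def punctured_positions(frozen_set, num_punctured, seed=0):
            """Pick puncturing positions only among frozen-bit indices,
            leaving every information bit untouched."""
            rng = random.Random(seed)
            return sorted(rng.sample(sorted(frozen_set), num_punctured))

        # toy example with block length 8: indices 0-4 frozen, 5-7 information bits
        print(punctured_positions({0, 1, 2, 3, 4}, num_punctured=2))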

  14. Casemix and rehabilitation: evaluation of an early discharge scheme.

    Science.gov (United States)

    Brandis, S

    2000-01-01

    This paper presents a case study of an early discharge scheme funded by casemix incentives and discusses limitations of a casemix model of funding whereby hospital inpatient care is funded separately from care in other settings. The POSITIVE Rehabilitation program received 151 patients discharged early from hospital in a twelve-month period. Program evaluation demonstrates a 40.9% drop in the average length of stay of rehabilitation patients and a 42.6% drop in average length of stay for patients with stroke. Other benefits of the program include a high level of patient satisfaction, improved carer support and increased continuity of care. The challenge under the Australian interpretation of a casemix model of funding is ensuring the viability of services that extend across acute hospital, non-acute care, and community and home settings.

  15. Histogram plots and cutoff energies for nuclear discrete levels

    International Nuclear Information System (INIS)

    Belgya, T.; Molnar, G.; Fazekas, B.; Oestoer, J.

    1997-05-01

    Discrete level schemes for 1277 nuclei, from 6Li through 251Es, extracted from the Evaluated Nuclear Structure Data File were analyzed. Cutoff energies (Umax), indicating the upper limit of level scheme completeness, were deduced from the inspection of histograms of the cumulative number of levels. Parameters of the constant-temperature level density formula (nuclear temperature T and energy shift U0) were obtained by means of the least square fit of the formula to the known levels below the cutoff energy. The results are tabulated for all 1277 nuclei, allowing for an easy and reliable application of the constant-temperature level density approach. A complete set of cumulative plots of discrete levels is also provided. (author). 5 figs, 2 tabs
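
    The constant-temperature form referred to here is conventionally written as follows (standard textbook form, quoted as background rather than from the report):

        N(U) = \exp\!\left(\frac{U - U_0}{T}\right),

    where N(U) is the cumulative number of levels up to excitation energy U, T the nuclear temperature and U_0 the energy shift; T and U_0 are the two parameters fitted to the known levels below the cutoff energy Umax.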

  16. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  17. The EU Emissions Trading Scheme and Biomass. Final Report

    International Nuclear Information System (INIS)

    Schwaiger, H.; Tuerk, A.; Arasto, A.; Vehlow, J.; Kautto, N.; Sijm, J.; Hunder, M.; Brammer, J.

    2009-02-01

    Within its Energy and Climate Package, adopted by the European Parliament in December 2008, the European Commission set a 10% minimum for the market share of renewables in the transport sector in 2020. The main questions of this project are which instruments are appropriate to reach this target, and with which instrument mix biomass use in general could be best stimulated. An important instrument of European climate policy is the European Emissions Trading Scheme (EU-ETS), which started operation in 2005. Previous work done within Bioenergy NoE showed that only a high share of auctioning of allowances and a high CO2 price provide the necessary incentives for a higher biomass use. According to the Energy and Climate Package, all allowances will be auctioned in the energy sector from 2013 on, with exceptions for a few CEE countries. Based on work done within the project, a model has been developed to analyse at which CO2 price biomass becomes competitive in the case of 100 per cent auctioning or at a lower level. The European Commission furthermore decided not to include the road transport sector in the EU-ETS until 2020. Whether the inclusion of the road transport sector in the EU-ETS could help introduce biofuels, whether a separate trading scheme for biofuels should be set up, or whether biofuels should be addressed with other policy instruments, was another main question of this project. The first result shows that an integrated scheme would hardly have any effect on the use of liquid biofuels in the transportation sector, but might cause higher CO2 prices for the energy and industry sector. A separate trading scheme has been implemented in the UK in 2008, and California is planning such a scheme in addition to including the road transport sector in its future ETS. Within this project the design of such a system has been elaborated, based on a comparison of several policy instruments to increase the use of liquid biofuels in the transportation sector. Policy interaction

  18. Accuracy of spectral and finite difference schemes in 2D advection problems

    DEFF Research Database (Denmark)

    Naulin, V.; Nielsen, A.H.

    2003-01-01

    In this paper we investigate the accuracy of two numerical procedures commonly used to solve 2D advection problems: spectral and finite difference (FD) schemes. These schemes are widely used, simulating, e.g., neutral and plasma flows. FD schemes have long been considered fast, relatively easy...... that the accuracy of FD schemes can be significantly improved if one is careful in choosing an appropriate FD scheme that reflects conservation properties of the nonlinear terms and in setting up the grid in accordance with the problem....

  19. Level and decay schemes of even-A Se and Ge isotopes from (n,n'γ) reaction studies

    Energy Technology Data Exchange (ETDEWEB)

    Sigaud, J.; Patin, Y.; McEllistrem, M. T.; Haouat, G.; Lachkar, J.

    1975-06-01

    The energy levels and the decay schemes of 76Se, 78Se, 80Se, 82Se and 76Ge have been studied through measurements of (n,n'γ) differential cross sections. Gamma-ray excitation functions have been measured between 2.0- and 4.1-MeV incident neutron energy, and angular distributions have been observed for all of these isotopes.

  20. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  1. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.

  2. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time
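
    The two steps performed inside the narrow band can be summarized in generic level set notation (an illustration of the standard equations, not the authors' discrete formulation):

        \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0
        \quad\text{(advection)},
        \qquad
        \lvert\nabla\phi\rvert = 1
        \quad\text{(signed-distance property restored by re-initialization)},

    with both operations applied only to nodes inside the narrow band around the zero level set.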

  3. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  4. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. М. Levin

    2016-01-01

    Full Text Available A mathematical model and a method for the problem of optimization of aggregation and of sequential- parallel execution modes of intersecting operation sets are proposed. The proposed method is based on the two-level decomposition scheme. At the top level the variant of aggregation for groups of operations is selected, and at the lower level the execution modes of operations are optimized for a fixed version of aggregation.

  5. A segmentation and classification scheme for single tooth in MicroCT images based on 3D level set and k-means+.

    Science.gov (United States)

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2017-04-01

    Accurate classification of the different anatomical structures of teeth from medical images provides crucial information for stress analysis in dentistry. Usually, the anatomical structures of teeth are manually labeled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing three-dimensional (3D) information, and to classify the tooth by employing unsupervised learning, i.e., the k-means++ method. In order to evaluate the proposed method, experiments are conducted on sufficient and extensive datasets of mandibular molars. The experimental results show that our method can achieve higher accuracy and robustness compared to three other clustering methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
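
    The classification stage can be illustrated with scikit-learn's k-means++ seeding as below; the feature array, cluster count and random seed are placeholders, not values from the paper.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        voxel_features = rng.random((500, 3))      # stand-in for descriptors of segmented tooth voxels

        km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0)
        labels = km.fit_predict(voxel_features)    # one anatomical-structure label per voxel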

  6. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling. Previous studies were mainly related to code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. A group of principles set for the development of this new sampling scheme includes completeness, appropriate stratification, optimum allocation, practicability and so on. In this report, discussions are made about the procedures of the new sampling scheme and its application. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of a complementary cumulative distribution function can remain relatively static in different trials of the probabilistic consequence assessment code. (author)

  7. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov—type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows to accurately predict the evolution of the free surface even in the presence of violent breaking waves phenomena, maintaining the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. Flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  8. A Secure and Efficient Certificateless Short Signature Scheme

    Directory of Open Access Journals (Sweden)

    Lin Cheng

    2013-07-01

    Full Text Available Certificateless public key cryptography combines the advantages of traditional public key cryptography and identity-based public key cryptography, as it avoids the usage of certificates and resolves the key escrow problem. In 2007, Huang et al. classified adversaries against certificateless signatures according to their attack power into normal, strong and super adversaries (ordered by their attack power). In this paper, we propose a new certificateless short signature scheme and prove that it is secure against both the super type I and the super type II adversaries. Our new scheme not only achieves the strongest security level but also has the shortest signature length (one group element). Compared with other short certificateless signature schemes which have a similar security level, our new scheme has a lower operation cost.

  9. Mammography image assessment; validity and reliability of current scheme

    International Nuclear Information System (INIS)

    Hill, C.; Robinson, L.

    2015-01-01

    Mammographers currently score their own images according to criteria set out by Regional Quality Assurance. The criteria used are based on the ‘Perfect, Good, Moderate, Inadequate’ (PGMI) marking criteria established by the National Health Service Breast Screening Programme (NHSBSP) in their Quality Assurance Guidelines of 2006 1 . This document discusses the validity and reliability of the current mammography image assessment scheme. Commencing with a critical review of the literature this document sets out to highlight problems with the national approach to the use of marking schemes. The findings suggest that ‘PGMI’ scheme is flawed in terms of reliability and validity and is not universally applied across the UK. There also appear to be differences in schemes used by trainees and qualified mammographers. Initial recommendations are to be made in collaboration with colleagues within the National Health Service Breast Screening Programme (NHSBSP), Higher Education Centres, College of Radiographers and the Royal College of Radiologists in order to identify a mammography image appraisal scheme that is fit for purpose. - Highlights: • Currently no robust evidence based marking tools in use for the assessment of images in mammography. • Is current system valid, reliable and robust? • How can the current image assessment tool be improved? • Should students and qualified mammographers use the same tool? • What marking criteria are available for image assessment?

  10. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  11. Cost-based droop scheme for DC microgrid

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Wang, Peng; Loh, Poh Chiang

    2014-01-01

    DC microgrids are gaining interest due to higher efficiencies of DC distribution compared with AC. The benefits of DC systems have been widely researched for data centers, IT facilities and residential applications. The research focus, however, has been more on system architecture and optimal voltage level, less on optimized operation and control of generation sources. The latter theme is pursued in this paper, where a cost-based droop scheme is proposed for distributed generators (DGs) in DC microgrids. Unlike the traditional proportional power sharing based droop scheme, the proposed scheme ...-connected operation. Most importantly, the proposed scheme can reduce the overall total generation cost in DC microgrids without a centralized controller and communication links. The performance of the proposed scheme has been verified under different load conditions.
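
    For orientation, a conventional DC droop characteristic has the form below; the cost-based scheme described above replaces the fixed proportional gain with one derived from each DG's generation cost (the notation here is a generic assumption, not taken from the paper):

        v_{dc} = v_{ref} - m_i\, p_i ,

    where v_ref is the nominal bus voltage, p_i the output power of the i-th DG and m_i its droop gain.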

  12. Decentralized Economic Dispatch Scheme With Online Power Reserve for Microgrids

    DEFF Research Database (Denmark)

    Nutkani, I. U.; Loh, Poh Chiang; Wang, P.

    2017-01-01

    Decentralized economic operation schemes have several advantages when compared with the traditional centralized management system for microgrids. Specifically, decentralized schemes are more flexible, less computationally intensive, and easier to implement without relying on communication infrastructure. Economic operation of existing decentralized schemes is also usually achieved by either tuning the droop characteristics of distributed generators (DGs) or prioritizing their dispatch order. For the latter, an earlier scheme has tried to prioritize the DG dispatch based on their no... costs, their power ratings, and other necessary constraints, before deciding the DG dispatch priorities and droop characteristics. The proposed scheme also allows online power reserve to be set and regulated within the microgrid. This, together with the generation cost saved, has been verified...

  13. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    Science.gov (United States)

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for the various transform block sizes of 4-, 8-, 16-, and 32-orders and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of size 4×4 and 8×8, which are applied to the WHT, are newly designed in a butterfly structure with only addition and shift operations. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudoentropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of
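
    As a small illustration of a transform built only from additions and a final shift, the 4x4 Walsh-Hadamard transform can be written as below; this is a generic WHT in Python, not the paper's newly designed matrices, and the 1/4 normalization is an assumption.

        import numpy as np

        H4 = np.array([[1,  1,  1,  1],
                       [1,  1, -1, -1],
                       [1, -1, -1,  1],
                       [1, -1,  1, -1]])

        def wht4(block):
            """Forward 4x4 Walsh-Hadamard transform of a residual block."""
            return H4 @ np.asarray(block) @ H4.T / 4   # /4 corresponds to two right-shifts in fixed point

        residual = np.arange(16).reshape(4, 4)          # toy residual block
        coeffs = wht4(residual)
        nonzero = np.count_nonzero(np.round(coeffs))    # count of non-zero (here unquantized) coefficients,
                                                        # a stand-in for the quantized count used in rate estimation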

  14. A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods

    Science.gov (United States)

    Luo, Guangchun; Qin, Ke

    2014-01-01

    Searchable encryption technique enables the users to securely store and search their documents over the remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders the functional extensions. We prove that asymmetric searchable structure could be converted to symmetric structure, and functions could be modeled separately apart from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric scheme) and range query (previously only available in asymmetric scheme). PMID:24719565

  15. Building Secure Public Key Encryption Scheme from Hidden Field Equations

    Directory of Open Access Journals (Sweden)

    Yuan Ping

    2017-01-01

    Full Text Available Multivariate public key cryptography is a set of cryptographic schemes built from the NP-hardness of solving quadratic equations over finite fields, amongst which the hidden field equations (HFE) family of schemes remains the most famous. However, the original HFE scheme was insecure, and the follow-up modifications were shown to be still vulnerable to attacks. In this paper, we propose a new variant of the HFE scheme by considering the special equation x^2 = x, defined over the finite field F3 when x = 0, 1. We observe that the equation can be used to further destroy the special structure of the underlying central map of the HFE scheme. It is shown that the proposed public key encryption scheme is secure against known attacks including the MinRank attack, the algebraic attacks, and the linearization equations attacks. The proposal gains some advantages over the original HFE scheme with respect to the encryption speed and public key size.

  16. New level schemes with high-spin states of 105,107,109Tc

    International Nuclear Information System (INIS)

    Luo, Y.X.; Rasmussen, J.O.; Lee, I.Y.; Fallon, P.; Hamilton, J.H.; Ramayya, A.V.; Hwang, J.K.; Gore, P.M.; Zhu, S.J.; Wu, S.C.; Ginter, T.N.; Ter-Akopian, G.M.; Daniel, A.V.; Stoyer, M.A.; Donangelo, R.; Gelberg, A.

    2004-01-01

    New level schemes of odd-Z 105,107,109Tc are proposed based on the 252Cf spontaneous-fission-gamma data taken with Gammasphere in 2000. Bands of levels are considerably extended and expanded to show rich spectroscopic information. Spin/parity and configuration assignments are made based on determinations of multipolarities of low-lying transitions and the level analogies to the previously reported levels, and to those of the neighboring Rh isotopes. A non-yrast negative-parity band built on the 3/2-[301] orbital is observed for the first time in 105Tc. A positive-parity band built on the 1/2+[431] intruder orbital originating from the π(g7/2/d5/2) subshells and having a strong deformation-driving effect is observed for the first time in 105Tc, and assigned in 107Tc. A positive-parity band built on the excited 11/2+ level, which has rather low excitation energy and predominantly decays into the 9/2+ level of the ground state band, provides evidence of triaxiality in 107,109Tc, and probably also in 105Tc. Rotational constants are calculated and discussed for the K=1/2 intruder bands using the Bohr-Mottelson formula. Level systematics are discussed in terms of the locations of proton Fermi levels and deformations. The band crossings of yrast positive-parity bands are observed, most likely related to h11/2 neutron alignment. Triaxial-rotor-plus-particle model calculations performed with ε=0.32 and γ=-22.5 deg. on the prolate side of maximum triaxiality yielded the best reproduction of the excitation energies, signature splittings, and branching ratios of the positive-parity bands (except for the intruder bands) of these Tc isotopes. The significant discrepancies between the triaxial-rotor-plus-particle model calculations and experiment for the K=1/2 intruder bands in 105,107Tc need further theoretical studies.

  17. A New Quantum Secure Direct Communication Scheme with Authentication

    International Nuclear Information System (INIS)

    Dan, Liu; Chang-Xing, Pei; Dong-Xiao, Quan; Nan, Zhao

    2010-01-01

    A new quantum secure direct communication (QSDC) scheme with authentication is proposed based on polarized photons and EPR pairs. EPR pairs are used to transmit information, while polarized photons are used to detect Eve and their encoding bases are used to transmit authentication information. Alice and Bob have their own identity number which is shared by legal users only. The identity number is encoded on the bases of polarized photons and distilled if there is no Eve. Compared with other QSDC schemes with authentication, this new scheme is considerably easier and less expensive to implement in a practical setting

  18. AUS - the Australian modular scheme for reactor neutronics computations

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1975-12-01

    A general description is given of the AUS modular scheme for reactor neutronics calculations. The scheme currently includes modules which provide the capacity for lattice calculations, 1D transport calculations, 1 and 2D diffusion calculations (with feedback-free kinetics), and burnup calculations. Details are provided of all system aspects of AUS, but individual modules are only outlined. A complete specification is given of that part of user input which controls the calculation sequence. The report also provides sufficient details of the supervisor program and of the interface data sets to enable additional modules to be incorporated in the scheme. (author)

  19. An integrated extended Kalman filter–implicit level set algorithm for monitoring planar hydraulic fractures

    International Nuclear Information System (INIS)

    Peirce, A; Rochinha, F

    2012-01-01

    We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture-free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters to enable measurement information to compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry by altering the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)
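
    The predict/update cycle that the EKF contributes to the EKF-ILSA combination can be sketched compactly; the snippet below is a generic extended Kalman filter step in which the fracture-state propagator, the tiltmeter measurement model and their Jacobians are stand-in callables (f, F_jac, h, H_jac), not the paper's actual elasto-hydrodynamic model.

```python
# Generic EKF predict/update step; f/h and their Jacobians are user-supplied
# stand-ins for the fracture propagation and tiltmeter measurement models.
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle: predict the state with model f, then update with measurement z."""
    x_pred = f(x)                              # predicted state
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q                   # predicted covariance
    H = H_jac(x_pred)
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Minimal usage: a scalar random walk observed directly (illustrative only).
f = lambda x: x
F_jac = lambda x: np.eye(1)
h = lambda x: x
H_jac = lambda x: np.eye(1)
x, P = np.zeros(1), np.eye(1)
for z in [0.2, 0.1, 0.35, 0.3]:
    x, P = ekf_step(x, P, np.array([z]), f, F_jac, h, H_jac,
                    Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print(x)
```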

  20. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to detect objects effectively. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are encoded in a covariance matrix used as region descriptor. The method is fully automated, without the need to manually specify the initial contour of the level set; it is based on combined person detection and background subtraction. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow-band technique. Experiments performed on challenging video sequences show the effectiveness of the proposed method.
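
    The covariance region descriptor mentioned in the abstract can be sketched as follows: per-pixel feature vectors (position, colour channels, gradient magnitudes) are pooled into a single covariance matrix for the tracked region. The particular feature list below is an assumption for illustration, not necessarily the one used by the authors.

```python
# Region-covariance descriptor sketch: covariance of per-pixel feature vectors
# [x, y, R, G, B, |Ix|, |Iy|] over a masked region (feature choice is illustrative).
import numpy as np

def region_covariance(image, mask):
    """image: HxWx3 float array, mask: HxW boolean region of interest."""
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)                 # gradients along rows (y) and columns (x)
    ys, xs = np.nonzero(mask)
    feats = np.stack([xs, ys,
                      image[ys, xs, 0], image[ys, xs, 1], image[ys, xs, 2],
                      np.abs(gx[ys, xs]), np.abs(gy[ys, xs])], axis=1)
    return np.cov(feats, rowvar=False)         # 7x7 descriptor

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(region_covariance(img, mask).shape)      # (7, 7)
```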

  1. Comparative Evaluation of Cash Benefit Scheme of Janani Suraksha Yojana for Beneficiary Mothers from Different Health Care Settings of Rewa District, Madhya Pradesh, India.

    Directory of Open Access Journals (Sweden)

    Trivedi R

    2014-05-01

    Full Text Available Introduction: For better outcomes in mother and child health, the Government of India launched the National Rural Health Mission (NRHM) in 2005 with a major objective of providing accessible, affordable and quality health care to the rural population, especially the vulnerable. Reduction in MMR to 100/100,000 is one of its goals and the Janani Suraksha Yojana (JSY) is the key strategy of NRHM to achieve this reduction. The JSY, as a safe motherhood intervention and modified alternative of the National Maternity Benefit Scheme (NMBS), has been implemented in all states and Union territories with special focus on low performing states. The main objective and vision of JSY is to reduce maternal and neo-natal mortality and promote institutional delivery among poor pregnant women of rural and urban areas. This scheme is 100% centrally sponsored and integrates delivery and post-delivery care with the help of a key person, the ASHA (Accredited Social Health Activist), followed by cash monetary help to the women. Objectives: (1) To evaluate the cash benefit service provided under JSY at different health care settings. (2) To know the perception and elicit suggestions of beneficiaries on the quality of the cash benefit scheme of JSY. Methodology: This is a health care institution based observational cross sectional study including randomly selected 200 JSY beneficiary mothers from the different health care settings, i.e., Primary Health Centres, Community Health Centres, District Hospital and Medical College Hospital of Rewa District of Madhya Pradesh state. Data were collected with the help of a set pro forma and then analysed with Epi Info 2000. Chi square test was applied appropriately. Results: 60% and 80% beneficiaries from PHC and CHC received cash within 1 week after discharge whereas 100% beneficiaries of District Hospital and Medical College Hospital received cash at the time of discharge; the overall distribution of time of cash disbursement among beneficiaries of

  2. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
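
    The two ingredients named in the abstract, advection of the level set function and reinitialisation towards a signed distance function, can be illustrated with a deliberately simplified one-dimensional, first-order finite-difference sketch (the paper itself uses high-order discontinuous Galerkin elements); the grid, speed and iteration counts below are arbitrary choices.

```python
# 1-D sketch of level-set advection plus reinitialisation toward a signed
# distance function (first-order finite differences; illustrative only).
import numpy as np

def advect_upwind(phi, u, dx, dt):
    """First-order upwind advection of phi with constant speed u >= 0."""
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx       # backward difference
    dphi[0] = dphi[1]                          # crude inflow boundary
    return phi - dt * u * dphi

def reinitialize(phi, dx, iterations=200):
    """Relax phi toward |grad phi| = 1 while approximately keeping its zero level set."""
    s = phi / np.sqrt(phi**2 + dx**2)          # smoothed sign of the initial phi
    for _ in range(iterations):
        grad = np.gradient(phi, dx)
        phi = phi - 0.5 * dx * s * (np.abs(grad) - 1.0)
    return phi

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
phi = 5.0 * (x - 0.3)                          # correct zero level set, but |grad phi| = 5
for _ in range(60):
    phi = advect_upwind(phi, u=1.0, dx=dx, dt=0.5 * dx)
phi = reinitialize(phi, dx)
i = int(np.argmin(np.abs(phi)))
print(x[i], np.gradient(phi, dx)[i])           # front near 0.45; slope relaxed toward 1
```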

  3. An investigation of children's levels of inquiry in an informal science setting

    Science.gov (United States)

    Clark-Thomas, Beth Anne

    Elementary school students' understanding of both science content and processes is enhanced by the higher level thinking associated with inquiry-based science investigations. Informal science setting personnel, elementary school teachers, and curriculum specialists charged with designing inquiry-based investigations would be well served by an understanding of the varying influence of certain present factors upon the students' willingness and ability to delve into such higher level inquiries. This study examined young children's use of inquiry-based materials and factors which may influence the level of inquiry they engaged in during informal science activities. An informal science setting was selected as the context for the examination of student inquiry behaviors because of the rich inquiry-based environment present at the site and the benefits previously noted in the research regarding the impact of informal science settings upon the construction of knowledge in science. The study revealed several patterns of behavior among children when they are engaged in inquiry-based activities at informal science exhibits. These repeated behaviors varied in the children's apparent purposeful use of the materials at the exhibits. These levels of inquiry behavior were taxonomically defined as high/medium/low within this study utilizing a researcher-developed tool. Furthermore, in this study adult interventions, questions, or prompting were found to impact the level of inquiry engaged in by the children. This study revealed that higher levels of inquiry were preceded by task-directed and physical-feature prompts. Moreover, the levels of inquiry behaviors were halted, even lowered, when preceded by a prompt that focused on a science content or concept question. Results of this study have implications for the enhancement of inquiry-based science activities in elementary schools as well as in informal science settings. These findings have significance for all science educators

  4. Interband cascade laser-based ppbv-level mid-infrared methane detection using two digital lock-in amplifier schemes

    Science.gov (United States)

    Song, Fang; Zheng, Chuantao; Yu, Di; Zhou, Yanwen; Yan, Wanhong; Ye, Weilin; Zhang, Yu; Wang, Yiding; Tittel, Frank K.

    2018-03-01

    A parts-per-billion in volume (ppbv) level mid-infrared methane (CH4) sensor system was demonstrated using second-harmonic wavelength modulation spectroscopy (2 f-WMS). A 3291 nm interband cascade laser (ICL) and a multi-pass gas cell (MPGC) with a 16 m optical path length were adopted in the reported sensor system. Two digital lock-in amplifier (DLIA) schemes, a digital signal processor (DSP)-based DLIA and a LabVIEW-based DLIA, were used for harmonic signal extraction. A limit of detection (LoD) of 13.07 ppbv with an averaging time of 2 s was achieved using the DSP-based DLIA and a LoD of 5.84 ppbv was obtained using the LabVIEW-based DLIA with the same averaging time. A rise time of 0→2 parts-per-million in volume (ppmv) and fall time of 2→0 ppmv were observed. Outdoor atmospheric CH4 concentration measurements were carried out to evaluate the sensor performance using the two DLIA schemes.
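
    The digital lock-in step used to extract the second-harmonic (2f) component in 2f-WMS can be sketched with a simple in-phase/quadrature demodulation followed by a low-pass filter; the sampling rate, modulation frequency and synthetic detector signal below are illustrative assumptions, not the sensor's actual parameters.

```python
# Toy digital lock-in amplifier: demodulate a detector signal at 2f and low-pass
# filter to recover the second-harmonic amplitude (all parameters illustrative).
import numpy as np

fs = 200_000.0                                  # sampling rate [Hz]
f_mod = 5_000.0                                 # modulation frequency [Hz]
t = np.arange(0.0, 0.05, 1.0 / fs)

# Synthetic signal: strong 1f residual amplitude modulation, weak 2f absorption term, noise.
signal = (0.5 * np.cos(2 * np.pi * f_mod * t)
          + 0.02 * np.cos(2 * np.pi * 2 * f_mod * t)
          + 0.01 * np.random.randn(t.size))

ref_i = np.cos(2 * np.pi * 2 * f_mod * t)       # in-phase 2f reference
ref_q = np.sin(2 * np.pi * 2 * f_mod * t)       # quadrature 2f reference
win = np.ones(2000) / 2000.0                    # ~10 ms moving-average low-pass filter
x = np.convolve(signal * ref_i, win, mode="valid")
y = np.convolve(signal * ref_q, win, mode="valid")
amplitude_2f = 2.0 * np.sqrt(x**2 + y**2)
print(amplitude_2f.mean())                      # close to the injected 0.02
```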

  5. Emissions trading and competitiveness: pros and cons of relative and absolute schemes

    International Nuclear Information System (INIS)

    Kuik, Onno; Mulder, Machiel

    2004-01-01

    Emissions trading is a hot issue. At national as well as supranational levels, proposals for the introduction of emissions trading schemes have been made. This paper assesses alternative emissions trading schemes at the domestic level: (1) schemes where the total level of emissions is fixed (absolute cap-and-trade), (2) schemes where the allowable level of emissions per firm is related to some firm-specific indicator (relative cap-and-trade), and (3) mixed schemes which combine elements of the above alternatives. We present a quantitative assessment of these alternatives for climate change policy in the Netherlands. It is concluded that while relative cap-and-trade would avoid negative effects on competitiveness, it would not reduce emissions at the lowest costs. Besides, the addition of a trade system to existing relative standards does not result in additional emission reduction; it should be combined with other policy measures, such as energy taxes, in order to realise further reduction. Absolute cap-and-trade leads to efficient emissions reduction, but, implemented at the national level, its overall macroeconomic costs may be significant. The mixed scheme has the drawback that it treats firms unequally, which leads to high administrative costs. We conclude that none of the trading schemes is an advisable instrument for domestic climate policy

  6. Reinforcement Learning Based Data Self-Destruction Scheme for Secured Data Management

    Directory of Open Access Journals (Sweden)

    Young Ki Kim

    2018-04-01

    Full Text Available As technologies and services that leverage cloud computing have evolved, the number of businesses and individuals who use them is increasing rapidly. In the course of using cloud services, as users store and use data that include personal information, research on privacy protection models to protect sensitive information in the cloud environment is becoming more important. As a solution to this problem, a self-destructing scheme has been proposed that prevents the decryption of encrypted user data after a certain period of time using a Distributed Hash Table (DHT) network. However, the existing self-destructing scheme does not mention how to set the number of key shares and the threshold value considering the environment of the dynamic DHT network. This paper proposes a method to set the parameters to generate the key shares needed for the self-destructing scheme considering the availability and security of data. The proposed method defines the state, action, and reward of the reinforcement learning model based on the similarity of the graph, and applies the self-destructing scheme process by updating the parameter based on the reinforcement learning model. Through the proposed technique, key sharing parameters can be set in consideration of data availability and security in dynamic DHT network environments.

  7. Energy mesh optimization for multi-level calculation schemes

    International Nuclear Information System (INIS)

    Mosca, P.; Taofiki, A.; Bellier, P.; Prevost, A.

    2011-01-01

    The industrial calculations of third generation nuclear reactors are based on sophisticated strategies of homogenization and collapsing at different spatial and energetic levels. An important issue to ensure the quality of these calculation models is the choice of the collapsing energy mesh. In this work, we show a new approach to generate optimized energy meshes starting from the SHEM 281-group library. The optimization model is applied on 1D cylindrical cells and consists of finding an energy mesh which minimizes the errors between two successive collision probability calculations. The former is realized over the fine SHEM mesh with Livolant-Jeanpierre self-shielded cross sections and the latter is performed with collapsed cross sections over the energy mesh being optimized. The optimization is done by the particle swarm algorithm implemented in the code AEMC and multigroup flux solutions are obtained from standard APOLLO2 solvers. By this new approach, a set of new optimized meshes which encompass from 10 to 50 groups has been defined for PWR and BWR calculations. This set will allow users to adapt the energy detail of the solution to the complexity of the calculation (assembly, multi-assembly, two-dimensional whole core). Some preliminary verifications, in which the accuracy of the new meshes is measured compared to a direct 281-group calculation, show that the 30-group optimized mesh offers a good compromise between simulation time and accuracy for a standard 17 x 17 UO 2 assembly with and without control rods. (author)

  8. On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Wang Daoshun

    2010-01-01

    Full Text Available Abstract Traditional Secret Sharing (SS) schemes reconstruct secret exactly the same as the original one but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on Boolean XOR operation. The secret image can be reconstructed directly by performing Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale -SS scheme to a more general case of greyscale -VSS scheme.
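
    The Boolean-XOR share generation mentioned above can be illustrated in its simplest (n, n) form: n-1 random shares plus a final share that XORs to the secret, with exact recovery by XOR-ing all shares. This is only a sketch of the share-generation idea; the paper's general construction and its OR-based reconstruction are more involved.

```python
# Minimal (n, n) XOR secret sharing of a binary image (illustrative sketch only).
import numpy as np

def make_shares(secret, n, rng):
    """Return n shares whose bitwise XOR equals the secret binary image."""
    shares = [rng.integers(0, 2, size=secret.shape, dtype=np.uint8) for _ in range(n - 1)]
    last = secret.copy()
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shares):
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
assert np.array_equal(reconstruct(make_shares(secret, n=3, rng=rng)), secret)
```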

  9. Intercomparison between BATS and LSPM surface schemes, using point micrometeorological data set

    Energy Technology Data Exchange (ETDEWEB)

    Ruti, P.M.; Cacciamani, C.; Paccagnella, T. [Servizio Meteorologico Regionale, Bologna (Italy); Cassardo, C. [Turin Univ., Alessandria (Italy). Dipt. di Scienze e Technologie Avanzate; Longhetto, A. [Turin Univ. (Italy). Ist. di Fisica Generale; Bargagli, A. [ENEA, Roma (Italy). Gruppo di Dinamica dell'Atmosfera e dell'Oceano

    1997-08-01

    This work has been developed with the aim of creating an archive of climatological values of sensible, latent and ground-atmosphere heat fluxes in the Po valley (CLIPS experiment); due to the unavailability of climatological archives of turbulent fluxes at synoptic scale, we have used the outputs of "stand-alone" runs of biospheric models; this archive could be used to check the parametrizations of large- and mesoscale models in the surface layer. We started to check the reliability of our proposal by comparing the model outputs with observed data. We selected a flat, rural area in the middle-east Po valley (San Pietro Capofiume, Italy) and used the data gathered in the experimental campaign SPCFLUX93 carried out there. The models adopted for the intercomparison were the biosphere-atmosphere transfer scheme (BATS) of Dickinson et al. (1986 version) and the land surface process model (LSPM) of Cassardo et al. (1996 version). An improved version of BATS has been implemented, substantially changing the soil thermal and hydrological subroutines. The upper boundary conditions used for all models were taken by interpolating the synoptic observations carried out at the San Pietro Capofiume (Italy) station; the algorithm used for the interpolations was tested with the data acquired in a fortnight campaign (SPCFLUX93) carried out at the same location during June 1993, showing a good agreement between interpolated and observed variables. Two experiments have been carried out; in the first one, the vegetation parameter set used by BATS has been used to force all models, while in the second one a vegetation cover value closest to the observations at the site has been used. 30 refs.

  10. Awareness and Coverage of the National Health Insurance Scheme ...

    African Journals Online (AJOL)

    Sub-national levels possess a high degree of autonomy in a number of sectors including health. It is important to assess the level of coverage of the scheme among the formal sector workers in Nigeria as a proxy to gauge the extent of coverage of the scheme and derive suitable lessons that could be used in its expansion.

  11. Canonical, stable, general mapping using context schemes.

    Science.gov (United States)

    Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict

    2015-11-15

    Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrary complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. anovak@soe.ucsc.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Surface-to-surface registration using level sets

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Erbou, Søren G.; Vester-Christensen, Martin

    2007-01-01

    This paper presents a general approach for surface-to-surface registration (S2SR) with the Euclidean metric using signed distance maps. In addition, the method is symmetric such that the registration of a shape A to a shape B is identical to the registration of the shape B to the shape A. The S2SR...... problem can be approximated by the image registration (IR) problem of the signed distance maps (SDMs) of the surfaces confined to some narrow band. By shrinking the narrow bands around the zero level sets the solution to the IR problem converges towards the S2SR problem. It is our hypothesis...... that this approach is more robust and less prone to fall into local minima than ordinary surface-to-surface registration. The IR problem is solved using the inverse compositional algorithm. In this paper, a set of 40 pelvic bones of Duroc pigs are registered to each other w.r.t. the Euclidean transformation...

  13. Multi-criteria decision aid approach for the selection of the best compromise management scheme for ELVs: the case of Cyprus.

    Science.gov (United States)

    Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M

    2007-08-25

    Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternative schemes, the selection of a list of relevant criteria to evaluate these alternative schemes and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method, which belongs to the well-known family of multiple criteria outranking methods. For this purpose, level, linear and Gaussian functions are used as preference functions.
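
    The level, linear and Gaussian preference functions named in the last sentence are standard PROMETHEE generalized criteria; a minimal rendering is given below, where d is the pairwise difference on a criterion and the thresholds q (indifference), p (preference) and s (Gaussian spread) are analyst-chosen parameters.

```python
# Standard PROMETHEE preference functions (thresholds are analyst-chosen).
import math

def pref_level(d, q, p):
    """Level criterion: 0 below q, 0.5 between q and p, 1 above p."""
    return 0.0 if d <= q else (0.5 if d <= p else 1.0)

def pref_linear(d, q, p):
    """Linear criterion with indifference threshold q and preference threshold p."""
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def pref_gaussian(d, s):
    """Gaussian criterion with spread parameter s."""
    return 0.0 if d <= 0 else 1.0 - math.exp(-d * d / (2 * s * s))

print(pref_level(3, 1, 5), pref_linear(3, 1, 5), round(pref_gaussian(3, 2), 3))
```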

  14. Age-of-Air, Tape Recorder, and Vertical Transport Schemes

    Science.gov (United States)

    Lin, S.-J.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A numerical-analytic investigation of the impacts of vertical transport schemes on the model simulated age-of-air and the so-called 'tape recorder' will be presented using an idealized 1-D column transport model as well as a more realistic 3-D dynamical model. By comparing to the 'exact' solutions of 'age-of-air' and the 'tape recorder' obtainable in the 1-D setting, useful insight is gained on the impacts of numerical diffusion and dispersion of numerical schemes used in global models. Advantages and disadvantages of Eulerian, semi-Lagrangian, and Lagrangian transport schemes will be discussed. Vertical resolution requirement for numerical schemes as well as observing systems for capturing the fine details of the 'tape recorder' or any upward propagating wave-like structures can potentially be derived from the 1-D analytic model.
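
    The effect alluded to above, numerical diffusion of an upward-propagating 'tape recorder' signal in a 1-D column, can be reproduced in a few lines; the grid, ascent rate and forcing period below are illustrative choices, and the first-order upwind update merely stands in for the transport schemes under discussion.

```python
# 1-D column transport toy: a sinusoidal lower-boundary tracer signal advected
# upward with first-order upwind differences is damped by numerical diffusion.
import numpy as np

nz, w, dz = 100, 0.2, 1.0                  # levels, ascent rate, layer depth (arbitrary units)
dt = 0.5 * dz / w                          # CFL number of 0.5
q = np.zeros(nz)
top_amplitude = 0.0
for n in range(6000):
    q[0] = np.sin(2 * np.pi * n * dt / 180.0)         # "tape recorder" boundary forcing
    q[1:] = q[1:] - w * dt / dz * (q[1:] - q[:-1])    # upwind update
    top_amplitude = max(top_amplitude, abs(q[-1]))
print(top_amplitude)   # noticeably below the unit forcing amplitude: signal damped aloft
```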

  15. Numerical schemes for dynamically orthogonal equations of stochastic fluid and ocean flows

    International Nuclear Information System (INIS)

    Ueckermann, M.P.; Lermusiaux, P.F.J.; Sapsis, T.P.

    2013-01-01

    The quantification of uncertainties is critical when systems are nonlinear and have uncertain terms in their governing equations or are constrained by limited knowledge of initial and boundary conditions. Such situations are common in multiscale, intermittent and non-homogeneous fluid and ocean flows. The dynamically orthogonal (DO) field equations provide an adaptive methodology to predict the probability density functions of such flows. The present work derives efficient computational schemes for the DO methodology applied to unsteady stochastic Navier–Stokes and Boussinesq equations, and illustrates and studies the numerical aspects of these schemes. Semi-implicit projection methods are developed for the mean and for the DO modes, and time-marching schemes of first to fourth order are used for the stochastic coefficients. Conservative second-order finite-volumes are employed in physical space with new advection schemes based on total variation diminishing methods. Other results include: (i) the definition of pseudo-stochastic pressures to obtain a number of pressure equations that is linear in the subspace size instead of quadratic; (ii) symmetric advection schemes for the stochastic velocities; (iii) the use of generalized inversion to deal with singular subspace covariances or deterministic modes; and (iv) schemes to maintain orthonormal modes at the numerical level. To verify our implementation and study the properties of our schemes and their variations, a set of stochastic flow benchmarks are defined including asymmetric Dirac and symmetric lock-exchange flows, lid-driven cavity flows, and flows past objects in a confined channel. Different Reynolds number and Grashof number regimes are employed to illustrate robustness. Optimal convergence under both time and space refinements is shown as well as the convergence of the probability density functions with the number of stochastic realizations.

  16. Estimation of the common cause failure probabilities on the component group with mixed testing scheme

    International Nuclear Information System (INIS)

    Hwang, Meejeong; Kang, Dae Il

    2011-01-01

    Highlights: ► This paper presents a method to estimate the common cause failure probabilities on the common cause component group with mixed testing schemes. ► The CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing. ► There are many CCCGs with specific mixed testing schemes in real plant operation. ► Therefore, a general formula which is applicable to both alternate periodic testing scheme and train level mixed testing scheme was derived. - Abstract: This paper presents a method to estimate the common cause failure (CCF) probabilities on the common cause component group (CCCG) with mixed testing schemes such as the train level mixed testing scheme or the alternate periodic testing scheme. In the train level mixed testing scheme, the components are tested in a non-staggered way within the same train, but the components are tested in a staggered way between the trains. The alternate periodic testing scheme indicates that all components in the same CCCG are tested in a non-staggered way during the planned maintenance period, but they are tested in a staggered way during normal plant operation. Since the CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing, CCF estimators have two kinds of formulas in accordance with the testing schemes. Thus, there are general formulas to estimate the CCF probability on the staggered testing scheme and non-staggered testing scheme. However, in real plant operation, there are many CCCGs with specific mixed testing schemes. Recently, Barros () and Kang () proposed a CCF factor estimation method to reflect the alternate periodic testing scheme and the train level mixed testing scheme. In this paper, a general formula which is applicable to both the alternate periodic testing scheme and the train level mixed testing scheme was derived.

  17. Conservative numerical schemes for Euler-Lagrange equations

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, L. [Universidad Complutense, Madrid (Spain). Dept. de Matematica Aplicada; Jimenez, S. [Universidad Alfonso X El Sabio, Madrid (Spain). Dept. de Matematica Aplicada

    1999-05-01

    As a preliminary step to study magnetic field lines, the authors seek numerical schemes that reproduce at the discrete level the significant features of the continuous model, based on an underlying Lagrangian structure. The resulting schemes give discrete counterparts of the variation law for the energy as well as of the Euler-Lagrange equations and their symmetries.
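
    As a toy illustration of the idea, not the authors' scheme, the central-difference discretisation of the harmonic-oscillator Euler-Lagrange equation x'' = -x admits a discrete invariant that is conserved exactly from step to step, mirroring the continuous energy law.

```python
# Discrete energy-like invariant of the recurrence x_{n+1} = (2 - dt^2) x_n - x_{n-1}
# (central-difference discretisation of x'' = -x); conserved exactly by the scheme.
import numpy as np

dt = 0.1
c = 2.0 - dt**2
x = np.empty(1000)
x[0], x[1] = 1.0, np.cos(dt)               # initial data sampled from the exact solution

def invariant(a, b):
    return a * a + b * b - c * a * b

for n in range(1, len(x) - 1):
    x[n + 1] = c * x[n] - x[n - 1]

print(invariant(x[0], x[1]), invariant(x[-2], x[-1]))   # equal up to rounding error
```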

  18. A Regev-Type Fully Homomorphic Encryption Scheme Using Modulus Switching

    Science.gov (United States)

    Chen, Zhigang; Wang, Jian; Song, Xinxia

    2014-01-01

    A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using modulus switching to design and implement an FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge this step has drawn very little attention in the existing FHE literature. The contributions of this paper are twofold. On the one hand, we propose a function giving the lower bound on the dimension used in the switching technique as a function of the specific LWE security level. On the other hand, as a case study, we modify the Brakerski FHE scheme (Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our result shows that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level. PMID:25093212
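
    The modulus-switching operation at the heart of such schemes can be sketched per coefficient: rescale from the large modulus q to a smaller modulus p by rounding to the nearest integer of the same parity, which preserves the plaintext bit (the value mod 2) while shrinking the noise. The parameters below are toy values, and the snippet shows only this single coefficient-level step, not a full FHE scheme.

```python
# Toy BGV/Brakerski-style modulus switching for one ciphertext coefficient
# (plaintext modulus 2); all parameter values are illustrative.
def mod_switch_coeff(c, q, p):
    target = c * p / q
    r = round(target)
    if (r - c) % 2 != 0:                   # enforce c' = c (mod 2)
        r += 1 if target > r else -1
    return r

q, p = 2**20, 2**10
c = 823_451
c_new = mod_switch_coeff(c, q, p)
assert (c_new - c) % 2 == 0
print(c, c_new, abs(c_new - c * p / q))    # scaled value preserved up to a small rounding error
```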

  19. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  20. Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate

    Directory of Open Access Journals (Sweden)

    Seokhoon Kim

    2015-01-01

    Full Text Available This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of the wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also sets a transmittable group value for each sensor device by using the preamble signal of the sink node. The primary difference between the previous and the proposed approach is that existing state-of-the-art schemes use duty cycle and sleep mode to save energy consumption of individual sensor devices, whereas the proposed scheme employs the group management MAC scheme for sensor devices to maximize the overall energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms the previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregates.

  1. Axisymmetric pumping scheme for the thermal barrier in a tandem mirror

    International Nuclear Information System (INIS)

    Li, X.Z.

    1985-09-01

    An axisymmetric pumping scheme is proposed to pump the particles that become trapped in a thermal barrier without invoking the neutral beam or geodesic curvature. In this scheme a magnetic scraper is moved uni-directionally over the barrier peak to push the barely trapped particles into the central cell. We utilize a potential jump that forms at the peak field for sufficiently strong pumping. The non-collisional catching effect has to be limited by setting an upper limit on the scraping frequency of the magnetic bump. On the other hand, the dynamic stability of the pumping scheme sets a lower limit on the scraping frequency. Using the variational method, we are able to estimate the window between these two limits, which seems feasible for the Tara reactor parameter set. A preliminary calculation shows that the magnetic bump is ΔB/B ≈ 10^-4 and the scraping frequency is ν_sc ≈ 10^5 s^-1, which are similar to the parameters required for drift pumping

  2. Influence of the turbulence typing scheme upon the cumulative frequency distribution of the calculated relative concentrations for different averaging times

    Energy Technology Data Exchange (ETDEWEB)

    Kretzschmar, J.G.; Mertens, I.

    1984-01-01

    Over the period 1977-1979, hourly meteorological measurements at the Nuclear Energy Research Centre, Mol, Belgium and simultaneous synoptic observations at the nearby military airport of Kleine Brogel have been compiled as input data for a bi-Gaussian dispersion model. The available information has first of all been used to determine hourly stability classes in ten widely used turbulent diffusion typing schemes. Systematic correlations between different systems were rare. Twelve different combinations of diffusion typing scheme-dispersion parameters were then used for calculating cumulative frequency distributions of 1 h, 8 h, 16 h, 3 d, and 26 d average ground-level concentrations at receptors at 500 m, 1 km, 2 km, 4 km and 8 km, respectively, from a continuous ground-level release and an elevated release at 100 m height. Major differences were noted in the extreme values and higher percentiles as well as in the annual mean concentrations. These differences are almost entirely due to the differences in the numerical values (as a function of distance) of the various sets of dispersion parameters actually in use for impact assessment studies. Dispersion parameter sets giving the lowest normalized ground-level concentration values for ground-level releases give the highest results for elevated releases and vice versa. While it was illustrated once again that the applicability of a given set of dispersion parameters is restricted due to the specific conditions under which the given set was derived, it was also concluded that systematic experimental work to validate certain assumptions is urgently needed.
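
    The ground-level concentrations being compared come from the standard bi-Gaussian (Gaussian plume with ground reflection) expression C(x, y, 0) = Q/(π σ_y σ_z u) · exp(-y²/2σ_y²) · exp(-H²/2σ_z²); the sketch below evaluates it with purely hypothetical power-law dispersion parameters standing in for a stability-class-dependent parameter set.

```python
# Ground-level Gaussian plume concentration with total ground reflection; the
# sigma_y/sigma_z power laws below are hypothetical stand-ins for a real
# stability-dependent dispersion-parameter set.
import numpy as np

def ground_level_concentration(x, y, Q, u, H, a_y=0.08, b_y=0.9, a_z=0.06, b_z=0.85):
    sigma_y = a_y * x**b_y                 # horizontal dispersion parameter [m]
    sigma_z = a_z * x**b_z                 # vertical dispersion parameter [m]
    return (Q / (np.pi * sigma_y * sigma_z * u)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Relative concentration C/Q on the plume axis, 2 km downwind of a 100 m release in 5 m/s wind.
print(ground_level_concentration(x=2000.0, y=0.0, Q=1.0, u=5.0, H=100.0))
```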

  3. An Optimization Scheme for ProdMod

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1999-01-01

    A general purpose dynamic optimization scheme has been devised in conjunction with the ProdMod simulator. The optimization scheme is suitable for the Savannah River Site (SRS) High Level Waste (HLW) complex operations and is able to handle different types of optimization, such as linear and nonlinear. The optimization is performed in a stand-alone FORTRAN-based optimization driver, which is interfaced with the ProdMod simulator for the flow of information between the two

  4. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  5. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model that takes into account both global region information, for robust optimization, and local edge information, for accuracy, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull prosthesis that matched the defect well, with excellent individual adaptation.

  6. 76 FR 9004 - Public Comment on Setting Achievement Levels in Writing

    Science.gov (United States)

    2011-02-16

    ... DEPARTMENT OF EDUCATION Public Comment on Setting Achievement Levels in Writing AGENCY: U.S... Achievement Levels in Writing. SUMMARY: The National Assessment Governing Board (Governing Board) is... for NAEP in writing. This notice provides opportunity for public comment and submitting...

  7. Chooz-B1, the new Electricite de France PWR: calculation scheme of neutron leakages from the reactor cavity

    International Nuclear Information System (INIS)

    Champion, G.; Thiriet, A.; Vergnaud, T.; Bourdet, L.; Nimal, J.C.; Brandicourt, G.

    1987-04-01

    A new calculation scheme has been set up to assess the neutron field characteristics inside French PWRs. In order to take into account multiple neutron scattering and the complexity of the reactor geometry, the use of Monte-Carlo methods has been heavily increased; they are coupled with classical SN methods. The main goal was to determine the neutron field characteristics at the level of the reactor pit openings. These radiation reference sources will be used to check the neutron shielding efficiencies. The new calculation scheme has been applied to CHOOZ-B1, the first unit of the new N4 program. These results have been compared with the measurement results related to the PALUEL-I and II PWRs, two units of the previous P4 program. Although the core and the geometry are not entirely similar, it is possible to compare with confidence the calculation results along the vessel and at the core midplane level with the measurement results at the same locations. It appears that they are in good agreement. Consequently, the new calculation scheme appears reliable

  8. Sound classification of dwellings - Comparison of schemes in Europe

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2009-01-01

    National sound classification schemes for dwellings exist in nine countries in Europe, and proposals are under preparation in more countries. The schemes specify class criteria concerning several acoustic aspects, the main criteria being about airborne and impact sound insulation between dwellings......, facade sound insulation and installation noise. The quality classes reflect different levels of acoustical comfort. The paper presents and compares the sound classification schemes in Europe. The schemes have been implemented and revised gradually since the 1990s. However, due to lack of coordination...

  9. Appropriate criteria set for personnel promotion across organizational levels using analytic hierarchy process (AHP

    Directory of Open Access Journals (Sweden)

    Charles Noven Castillo

    2017-01-01

    Full Text Available Currently, few specific sets of criteria have been established for personnel promotion to each level of the organization. This study is conducted in order to develop a personnel promotion strategy by identifying specific sets of criteria for each level of the organization. The complexity of identifying the criteria set, along with the subjectivity of these criteria, requires the use of a multi-criteria decision-making approach, particularly the analytic hierarchy process (AHP). Results show different sets of criteria for each management level which are consistent with several frameworks in the literature. These criteria sets would help avoid a mismatch between employee skills and competencies and their job, and at the same time eliminate issues in personnel promotion such as favouritism, glass ceiling, and gender and physical attractiveness preference. This work also shows that personality and traits, job satisfaction and experience and skills are more critical than social capital across different organizational levels. The contribution of this work is in identifying relevant criteria in developing a personnel promotion strategy across organizational levels.
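
    The AHP step behind such a study, deriving criterion weights from a pairwise-comparison matrix via its principal eigenvector and checking Saaty's consistency ratio, is sketched below; the comparison matrix is hypothetical and not taken from the paper.

```python
# AHP priority weights from a (hypothetical) pairwise-comparison matrix, with
# Saaty's consistency check (commonly tabulated random-index values).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # random index
print(w, ci / ri)                          # weights and consistency ratio (accept if < 0.10)
```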

  10. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    Science.gov (United States)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically

  11. A novel grain cluster-based homogenization scheme

    International Nuclear Information System (INIS)

    Tjahjanto, D D; Eisenlohr, P; Roters, F

    2010-01-01

    An efficient homogenization scheme, termed the relaxed grain cluster (RGC), for elasto-plastic deformations of polycrystals is presented. The scheme is based on a generalization of the grain cluster concept. A volume element consisting of eight (= 2 × 2 × 2) hexahedral grains is considered. The kinematics of the RGC scheme is formulated within a finite deformation framework, where the relaxation of the local deformation gradient of each individual grain is connected to the overall deformation gradient by the so-called interface relaxation vectors. The set of relaxation vectors is determined by the minimization of the constitutive energy (or work) density of the overall cluster. An additional energy density associated with the mismatch at the grain boundaries due to relaxations is incorporated as a penalty term into the energy minimization formulation. Effectively, this penalty term represents the kinematical condition of deformation compatibility at the grain boundaries. Simulations have been performed for a dual-phase grain cluster loaded in uniaxial tension. The results of the simulations are presented and discussed in terms of the effective stress–strain response and the overall deformation anisotropy as functions of the penalty energy parameters. In addition, the prediction of the RGC scheme is compared with predictions using other averaging schemes, as well as with the result of a direct finite element (FE) simulation. The comparison indicates that the present RGC scheme is able to approximate FE simulation results of relatively fine discretization at about three orders of magnitude lower computational cost

  12. PDF fit in the fixed-flavour-number scheme

    International Nuclear Information System (INIS)

    Alekhin, S.; Bluemlein, J.; Moch, S.

    2012-02-01

    We discuss the heavy-quark contribution to deep inelastic scattering in the scheme with n_f = 3, 4, 5 fixed flavours. Based on the recent ABM11 PDF analysis of world data for deep-inelastic scattering and fixed-target data for the Drell-Yan process, with the running-mass definition for heavy quarks, we show that the fixed-flavour-number scheme is sufficient for describing the deep-inelastic-scattering data in the entire kinematic range. We compare with other PDF sets and comment on the implications for measuring the strong coupling constant α_s(M_Z).

  13. Fourier analysis of finite element preconditioned collocation schemes

    Science.gov (United States)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  14. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....

  15. An extrapolation scheme for solid-state NMR chemical shift calculations

    Science.gov (United States)

    Nakajima, Takahito

    2017-06-01

    Conventional quantum chemical and solid-state physics approaches face several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. Within the extrapolation scheme, the estimated values depend only weakly on the low-level density functional theory calculation. Thus, our approach is efficient because the rough calculation can be performed within the extrapolation scheme.
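
    One common way to realise such an extrapolation, offered here only as an ONIOM-like additivity sketch and not necessarily the exact scheme of the paper, is to correct a cheap periodic result with the high-level/low-level difference computed on a small fragment.

```python
# Additivity-style extrapolation sketch: sigma ~ sigma_low(periodic)
#   + [sigma_high(cluster) - sigma_low(cluster)]. Values are hypothetical (ppm).
def extrapolated_shielding(low_periodic, low_cluster, high_cluster):
    return low_periodic + (high_cluster - low_cluster)

print(extrapolated_shielding(low_periodic=112.4, low_cluster=110.1, high_cluster=118.7))
```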

  16. On usage of CABARET scheme for tracer transport in INM ocean model

    International Nuclear Information System (INIS)

    Diansky, Nikolay; Kostrykin, Sergey; Gusev, Anatoly; Salnikov, Nikolay

    2010-01-01

    The contemporary state of ocean numerical modelling sets some requirements for the numerical advection schemes used in ocean general circulation models (OGCMs). The most important requirements are conservation, monotonicity and numerical efficiency, including good parallelization properties. Investigation of several advection schemes shows that one of the best schemes satisfying these criteria is the CABARET scheme. A 3D modification of the CABARET scheme was used to develop a new transport module (for temperature and salinity) for the Institute of Numerical Mathematics ocean model (INMOM). Testing of this module on some common benchmarks shows high accuracy in comparison with the second-order advection scheme used in the INMOM. This new module was incorporated into the INMOM, and experiments with the modified model showed a better simulation of oceanic circulation than its previous version.

  17. Market-based support schemes for renewable energy sources

    NARCIS (Netherlands)

    Fagiani, R.

    2014-01-01

    The European Union set ambitious goals regarding the production of electricity from renewable energy sources and the majority of European governments have implemented policies stimulating investments in such technologies. Support schemes differ in many aspects, not only in their effectivity and

  18. Discrete level schemes and their gamma radiation branching ratios (CENPL-DLS): Pt.2

    Energy Technology Data Exchange (ETDEWEB)

    Limin, Zhang; Zongdi, Su; Zhengjun, Sun [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    The DLS data files contain the data and information on nuclear discrete levels and gamma rays. At present, they hold 79461 levels and 93177 gamma rays for 1908 nuclides. The DLS sub-library has been set up at the CNDC and is widely used for nuclear model calculations and other fields. The DLS management retrieval code DLS is introduced and an example is given for 56Fe. (1 tab.).

  19. Ulam's scheme revisited digital modeling of chaotic attractors via micro-perturbations

    CERN Document Server

    Domokos, Gabor K

    2002-01-01

    We consider discretizations $f_N$ of expanding maps $f:I \to I$ in the strict sense: i.e. we assume that the only information available on the map is a finite set of integers. Using this definition of computability, we show that by adding a random perturbation of order $1/N$, the invariant measure corresponding to $f$ can be approximated and we can also give estimates of the error term. We prove that the randomized discrete scheme is equivalent to Ulam's scheme applied to the polygonal approximation of $f$, thus providing a new interpretation of Ulam's scheme. We also compare the efficiency of the randomized iterative scheme to the direct solution of the $N \times N$ linear system.
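
    Ulam's scheme itself is easy to state in code: partition the interval into N cells, estimate the cell-to-cell transition matrix of the map by sampling, and take its stationary vector as an approximation of the invariant measure. The example below uses the expanding doubling map, whose invariant measure is Lebesgue measure; the sample counts and N are arbitrary choices.

```python
# Ulam's method for f(x) = 2x mod 1: estimated transition matrix on N cells,
# stationary vector by power iteration (illustrative parameters).
import numpy as np

def ulam_matrix(f, N, samples_per_cell=200, seed=0):
    rng = np.random.default_rng(seed)
    P = np.zeros((N, N))
    for i in range(N):
        xs = (i + rng.random(samples_per_cell)) / N         # sample points in cell i
        js = np.minimum((f(xs) * N).astype(int), N - 1)      # cells hit by their images
        for j in js:
            P[i, j] += 1.0 / samples_per_cell
    return P

f = lambda x: (2.0 * x) % 1.0
N = 50
P = ulam_matrix(f, N)
mu = np.full(N, 1.0 / N)
for _ in range(500):                        # power iteration for the stationary vector
    mu = mu @ P
print(mu.min(), mu.max())                   # both close to 1/N: nearly uniform invariant measure
```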

  20. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    Science.gov (United States)

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  1. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus, preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.

  2. Feasibility of megavoltage portal CT using an electronic portal imaging device (EPID) and a multi-level scheme algebraic reconstruction technique (MLS-ART)

    International Nuclear Information System (INIS)

    Guan, Huaiqun; Zhu, Yunping

    1998-01-01

    Although electronic portal imaging devices (EPIDs) are efficient tools for radiation therapy verification, they only provide images of overlapped anatomic structures. We investigated using a fluorescent screen/CCD-based EPID, coupled with a novel multi-level scheme algebraic reconstruction technique (MLS-ART), for a feasibility study of portal computed tomography (CT) reconstructions. The CT images might be useful for radiation treatment planning and verification. We used an EPID, set it to work in the linear dynamic range, and collimated 6 MV photons from a linear accelerator into a slit beam 1 cm wide and 25 cm long. We performed scans under a total of ∼200 monitor units (MUs) for several phantoms, in which we varied the number of projections and MUs per projection. The reconstructed images demonstrated that using the new MLS-ART technique megavoltage portal CT with a total of 200 MUs can achieve a contrast detectability of ∼2.5% (object size 5 mm × 5 mm) and a spatial resolution of 2.5 mm. (author)
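
    The algebraic reconstruction technique that the multi-level scheme reorders can be reduced to its core Kaczmarz-style update, x <- x + λ (b_i - a_i·x)/||a_i||² a_i, swept over the measured rays; the tiny 3x3 system below is an illustrative stand-in for a real portal-CT projection matrix.

```python
# Core ART/Kaczmarz sweep on a toy linear system (stand-in for CT projections).
import numpy as np

def art_sweep(A, b, x, relax=0.5):
    """One pass over all rays: x <- x + relax * (b_i - a_i.x) / ||a_i||^2 * a_i."""
    for a_i, b_i in zip(A, b):
        x = x + relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = np.zeros(3)
for _ in range(200):
    x = art_sweep(A, b, x)
print(x)                                    # converges toward [1, 2, 3]
```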

  3. Some numerical studies of interface advection properties of level set ...

    Indian Academy of Sciences (India)

    explicit computational elements moving through an Eulerian grid. ... location. The interface is implicitly defined (captured) as the location of the discontinuity in the ... This level set function is advected with the background flow field and thus ...

  4. Verifying atom entanglement schemes by testing Bell's inequality

    International Nuclear Information System (INIS)

    Angelakis, D.G.; Knight, P.L.; Tregenna, B.; Munro, W.J.

    2001-01-01

    Recent experiments to test Bell's inequality using entangled photons and ions aimed at tests of basic quantum mechanical principles. Interesting results have been obtained and many loopholes could be closed. In this paper we want to point out that tests of Bell's inequality also play an important role in verifying atom entanglement schemes. We describe as an example a scheme to prepare arbitrary entangled states of N two-level atoms using a leaky optical cavity and a scheme to entangle atoms inside a photonic crystal. During the state preparation no photons are emitted, and observing a violation of Bell's inequality is the only way to test whether a scheme works with a high precision or not. (orig.)

  5. A New Graph Drawing Scheme for Social Network

    Directory of Open Access Journals (Sweden)

    Eric Ke Wang

    2014-01-01

    Visualization is employed to extract the potential information from large-scale social network data and present it briefly as visualized graphs. In the process of information visualization, graph drawing is a crucial part. In this paper, we study graph layout algorithms and propose a new graph drawing scheme combining multilevel and single-level drawing approaches, including a graph division method based on communities and a refining approach based on a partitioning strategy. In addition, we compare the effectiveness of our scheme and FM3 in experiments. The experimental results show that our scheme can achieve a clearer diagram and effectively extract the community structure of the social network for use in drawing schemes.

  6. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
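
    The core operation shared by level set free-surface methods such as this one is the advection of the level set function by the flow field. The following is a minimal first-order upwind advection in one dimension, given only for orientation; it is not the anisotropically adapted Cartesian discretization of the paper, and all values are illustrative (Python).

      import numpy as np

      def advect_level_set(phi, u, dx, dt, steps):
          # First-order upwind advection of a level set function phi by a
          # prescribed velocity field u (both 1D arrays): dphi/dt + u * dphi/dx = 0.
          for _ in range(steps):
              dphi_m = (phi - np.roll(phi, 1)) / dx     # backward difference
              dphi_p = (np.roll(phi, -1) - phi) / dx    # forward difference
              dphi = np.where(u > 0.0, dphi_m, dphi_p)  # upwind selection
              phi = phi - dt * u * dphi
          return phi

      # usage: a signed-distance-like profile translated to the right
      x = np.linspace(0.0, 1.0, 200)
      phi = x - 0.3                       # zero level set initially at x = 0.3
      u = np.full_like(x, 0.5)            # uniform velocity
      phi = advect_level_set(phi, u, dx=x[1] - x[0], dt=0.004, steps=100)
      # the zero crossing has moved to roughly x = 0.3 + 0.5 * 0.4 = 0.5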

  7. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    Science.gov (United States)

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. © 2015

  8. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Science.gov (United States)

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these

  9. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Directory of Open Access Journals (Sweden)

    Edwine W. Barasa

    2015-11-01

    Full Text Available Background Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from

  10. Learning to Voice? The Evolving Roles of Family Farmers in the Coordination of Large-Scale Irrigation Schemes in Morocco

    Directory of Open Access Journals (Sweden)

    Nicolas Faysse

    2010-02-01

    Full Text Available In Morocco, large-scale irrigation schemes have evolved over the past twenty years from the centralised management of irrigation and agricultural production into more complex multi-actor systems. This study analysed whether, and how, in the context of state withdrawal, increased farmer autonomy and political liberalisation, family farmers currently participate in the coordination and negotiation of issues that affect them and involve scheme-level organisations. Issues related to water management, the sugar industry and the dairy sector were analysed in five large-scale irrigation schemes. Farmer organisations that were set up to intervene in water management and sugar production were seen to be either inactive or to have weak links with their constituency; hence, the irrigation administration and the sugar industry continue to interact directly with farmers in a centralised way. Given their inability to voice their interests, when farmers have the opportunity, many choose exit strategies, for instance by resorting to the use of groundwater. In contrast, many community-based milk collection cooperatives were seen to function as accountable intermediaries between smallholders and dairy firms. While, as in the past, family farmers are still generally not involved in decision making at scheme level, in the milk collection cooperatives studied, farmers learn to coordinate and negotiate for the development of their communities.

  11. PDF fit in the fixed-flavour-number scheme

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute for High Energy Physics, Moscow (Russian Federation); Bluemlein, J.; Moch, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-02-15

    We discuss the heavy-quark contribution to deep inelastic scattering in the scheme with n_f = 3, 4, 5 fixed flavors. Based on the recent ABM11 PDF analysis of world data for deep-inelastic scattering and fixed-target data for the Drell-Yan process, with the running-mass definition for heavy quarks, we show that the fixed-flavour-number scheme is sufficient for describing the deep-inelastic-scattering data in the entire kinematic range. We compare with other PDF sets and comment on the implications for measuring the strong coupling constant α_s(M_Z).

  12. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers...

  13. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voron...

  14. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    Science.gov (United States)

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searching for relevant studies, based on the reference lists of selected papers. We only included empirical studies that described and evaluated, or that only evaluated, priority setting practice at the meso level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority setting; however, informal criteria appear to be more pervasive in LMICs than in HICs. The priority setting process at the meso level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, as it was used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso level, and there is a likelihood that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved through incorporating and adapting systematic priority setting processes and frameworks to the context where used, and making considerations of both process

  15. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

    In the application of oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way of monitoring the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small-diameter pipe and a thin oil film, however, the thickness of the oil film is hard to observe visually, since the interface of oil and air is not obvious in the reconstructed images. Moreover, the artifacts present in the reconstructions seriously degrade the effectiveness of image segmentation techniques such as the level set method, and the standard level set method is too slow for online monitoring. To address these problems, a modified level set method is developed: a distance-regularized level set evolution formulation is extended to image two-phase flow online using an ECT system, a narrowband image filter is defined to eliminate the influence of artifacts, and, exploiting the continuity of the oil distribution over time, the detected oil-air interface of a former image is used as the initial contour for the detection of the subsequent frame; thus, the propagation from the initial contour to the boundary is greatly accelerated, making real-time tracking possible. To test the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner-diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experiment results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online
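
    Much of the acceleration described above comes from warm-starting: the contour converged on one frame initializes the next. The sketch below illustrates only that idea, using a very simple region-based (Chan-Vese style) evolution without the distance regularization or the narrowband filter of the paper; all parameters and the synthetic frames are illustrative (Python).

      import numpy as np

      def evolve_region_based(phi, image, dt=0.5, iters=20, eps=1.0):
          # Minimal region-based level set evolution (no curvature term, no
          # reinitialization): phi grows where the pixel matches the mean of the
          # region phi > 0, and shrinks where it matches the outside mean.
          for _ in range(iters):
              inside = phi > 0.0
              c1 = image[inside].mean() if inside.any() else 0.0
              c2 = image[~inside].mean() if (~inside).any() else 0.0
              delta = (eps / np.pi) / (eps ** 2 + phi ** 2)    # smeared delta function
              force = (image - c2) ** 2 - (image - c1) ** 2
              phi = phi + dt * delta * force
          return phi

      def track_sequence(frames, phi0):
          # Warm start: the level set from one frame initializes the next, so only
          # a few iterations per frame are needed for a slowly moving interface.
          phi, masks = phi0, []
          for frame in frames:
              phi = evolve_region_based(phi, frame)
              masks.append(phi > 0.0)
          return masks

      # usage: a bright disc drifting across three synthetic frames
      yy, xx = np.mgrid[0:64, 0:64]
      frames = [1.0 * ((xx - 30 - 2 * k) ** 2 + (yy - 32) ** 2 < 100) for k in range(3)]
      phi0 = 10.0 - np.sqrt((xx - 32) ** 2 + (yy - 32) ** 2)
      masks = track_sequence(frames, phi0)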

  16. Study of the scheme of two-beam accelerator driver with accompanying electromagnetic wave

    International Nuclear Information System (INIS)

    Elzhov, A.V.; Kaminskij, A.K.; Kazacha, V.I.; Perel'shtejn, E.A.; Sedykh, S.N.; Sergeev, A.P.

    2000-01-01

    A novel scheme of a two-beam accelerator (TBA) driver based on a linear induction accelerator is considered. In this scheme the bunched beam propagates in an accompanying enhanced microwave that provides steady longitudinal beam bunching along the whole driver. A travelling wave tube (TWT) is used as the wave-slowing periodic structure. The major merits of the driver scheme at hand are the possibility of providing microwave phase and amplitude stability and of preliminary beam bunching at a rather low initial energy (∼ 1 MeV). Numerical simulation has shown that a steady state can be found in which electron bunches accompanied by an amplified microwave are simultaneously accelerated in the external electric field. In the steady state, the total power inserted into the beam by the accelerating field is transformed into microwave power. The first set of experiments was carried out with the buncher based on the JINR LIU-3000 linac (electron beam energy ∼ 600 keV, electron current ∼ 150 A). A considerable level of amplified microwave power (∼ 5 MW) and a high enough bunching parameter (∼ 0.4) were obtained. The electron beam bunching at the frequency of 36.4 GHz was registered by means of the Cherenkov radiation emitted by the electron bunches as they passed through a special target. The beam keeps a high bunching level at a distance of ∼ 10 cm from the TWT exit while accompanied by the amplified microwave

  17. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The whole numerical algorithm is easy to implement. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
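
    The mass-compensating re-scaling mentioned above admits a very small illustration: multiply both concentration fields by a common factor so that the discrete total mass returns to its initial value. The sketch below is one plausible way to do this and is not taken from the paper; the cell-volume and interface-area weights are illustrative (Python).

      import numpy as np

      def rescale_surfactant(c_bulk, c_surf, dV, dA, total_mass_0):
          # Rescale bulk and interfacial surfactant concentrations by a common
          # factor so that the discrete total mass matches its initial value,
          # compensating for losses due to numerical diffusion (illustrative only).
          mass_now = np.sum(c_bulk * dV) + np.sum(c_surf * dA)
          if mass_now <= 0.0:
              return c_bulk, c_surf
          factor = total_mass_0 / mass_now
          return c_bulk * factor, c_surf * factor

      # usage: 2% of the mass has leaked numerically and is restored
      c_bulk = np.full(1000, 0.98); c_surf = np.full(200, 0.98)
      dV = np.full(1000, 1e-3);     dA = np.full(200, 5e-3)
      c_bulk, c_surf = rescale_surfactant(c_bulk, c_surf, dV, dA, total_mass_0=2.0)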

  18. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, completing in that matter the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art

  19. Explicit TE/TM scheme for particle beam simulations

    International Nuclear Information System (INIS)

    Dohlus, M.; Zagorodnov, I.

    2008-10-01

    In this paper we propose an explicit two-level conservative scheme based on a TE/TM-like splitting of the field components in time. Its dispersion properties are adjusted to accelerator problems. It is simpler and faster than the implicit version. It has no dispersion in the longitudinal direction, and the dispersion properties in the transverse plane are improved. The explicit character of the new scheme allows a uniformly stable conformal method without iterations, and the scheme can be parallelized easily. It ensures energy and charge conservation. A version of this explicit scheme for rotationally symmetric structures is free from the progressive time-step reduction for higher-order azimuthal modes that occurs with Yee's explicit method, which is used in the most popular electrodynamics codes. (orig.)

  20. Investigation of optimal photoionization schemes for Sm by multi-step resonance ionization

    International Nuclear Information System (INIS)

    Cha, H.; Song, K.; Lee, J.

    1997-01-01

    Excited states of Sm atoms are investigated by using multi-color resonance-enhanced multiphoton ionization spectroscopy. Among the ionization signals, the one observed at 577.86 nm is regarded as the most efficient excited state if a 1-color 3-photon scheme is applied, whereas an observed level located at 587.42 nm is regarded as the most efficient state if a 2-color scheme is used. For the 2-color scheme, a level reached at 573.50 nm from this first excited state is one of the best second excited states for the optimal photoionization scheme. Based on this ionization scheme, various concentrations of standard solutions of samarium are determined. The minimum amount of sample that can be detected by the 2-color scheme is determined to be 200 fg. The detection sensitivity is limited mainly by contamination of the graphite atomizer. copyright 1997 American Institute of Physics

  1. An evaluation scheme for nanotechnology policies

    International Nuclear Information System (INIS)

    Soltani, Ali M.; Tabatabaeian, Seyed H.; Hanafizadeh, Payam; Bamdad Soofi, Jahanyar

    2011-01-01

    Dozens of countries are executing national nanotechnology plans. No rigorous evaluation scheme for these plans exists, although stakeholders—especially policy makers, top-level agencies and councils, as well as the society at large—are eager to learn the outcome of these policies. In this article, we recommend an evaluation scheme for national nanotechnology policies that would be used to review the whole or any component part of a national nanotechnology plan. In this scheme, a component at any level of aggregation is evaluated. The component may be part of the plan’s overarching policy goal, which for most countries is to create wealth and improve the quality of life of their nation with nanotechnology. Alternatively, the component may be a programme or an activity related to a programme. The evaluation could be executed at different times in the policy’s life cycle, i.e., before the policy is formulated, during its execution or after its completion. The three criteria for policy evaluation are appropriateness, efficiency and effectiveness. The evaluator should select the appropriate qualitative or quantitative methods to evaluate the various components of national nanotechnology plans.

  2. A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Eric J. Nava

    2012-03-01

    This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into Epochs. Vehicle assignment is performed sequentially in each Epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal Epoch setting during iterations, regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.

  3. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...
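
    For reference, the counterpoise approach that SNOOP is compared against corrects the interaction energy by evaluating all three fragments in the full dimer basis (the Boys-Bernardi recipe); this is standard textbook material, not the SNOOP working equations:

      \Delta E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}(AB) \;-\; E_{A}(AB) \;-\; E_{B}(AB)

    where the argument in parentheses denotes the basis set used (the full dimer basis AB in all three terms). SNOOP, as its name indicates, instead enforces the same number of optimized parameters in the calculations being compared.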

  4. An Employer of Last Resort Scheme which Resembles a Free Labour Market

    OpenAIRE

    MUSGRAVE, Ralph S.

    2017-01-01

    Abstract. The idea that government should act as employer of last resort (ELR) is an old one. That idea is often referred to nowadays as “job guarantee”. Many ELR schemes to date have been confined to the public sector. There is no good reason for that limitation: i.e. the private sector should use ELR labour as well.  A second common characteristic of ELR schemes has been that (like the WPA in the US in the 1930s) they involve specially set up projects or schemes as distinct from subsidising...

  5. The renormalization scale-setting problem in QCD

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xing-Gang [Chongqing Univ. (China); Brodsky, Stanley J. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Mojaza, Matin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Univ. of Southern Denmark, Odense (Denmark)

    2013-09-01

    A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of the scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both the renormalization scheme and scale-parameter transformations. The RG equation provides a convenient way for estimating the scheme- and scale-dependence of a physical process. We then discuss self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale setting methods suggested in the literature, i.e., the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky–Lepage–Mackenzie method (BLM), and the Principle of Maximum Conformality (PMC), are introduced. Basic properties and their applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance. Using the PMC, all non-conformal terms associated with the β-function in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC provides the principle underlying the BLM method, since it gives the general rule for extending

  6. Matching soil salinization and cropping systems in communally managed irrigation schemes

    Science.gov (United States)

    Malota, Mphatso; Mchenga, Joshua

    2018-03-01

    Occurrence of soil salinization in irrigation schemes can be a good indicator of the need to introduce highly salt-tolerant crops. This study assessed the level of soil salinization in the communally managed 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. The salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that, even for crops with very low salinity tolerance (in terms of ECi), the water was suitable for irrigation purposes. However, root-zone soil salinity profiles showed that leaching of salts was not adequate and that the leaching requirement for the scheme needs to be revisited and always adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match the prevailing soil and irrigation water salinity levels.
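
    For orientation, the leaching requirement referred to above is commonly estimated with the FAO-style relation below; this is a standard textbook expression, not necessarily the exact procedure used at the Nkhate scheme.

      \mathrm{LR} \;=\; \frac{EC_w}{5\,EC_e - EC_w}

    where EC_w is the electrical conductivity of the irrigation water and EC_e is the soil-saturation-extract salinity the crop can tolerate; the closer EC_w gets to the crop's tolerance, the larger the fraction of applied water that must pass below the root zone.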

  7. [Occlusal schemes of complete dentures--a review of the literature].

    Science.gov (United States)

    Tarazi, E; Ticotsky-Zadok, N

    2007-01-01

    Occlusal scheme is defined as the form and the arrangement of the occlusal contacts in natural and artificial dentition. The choice of an occlusal scheme determines the pattern of occlusal contacts between opposing teeth during centric relation and functional movement of the mandible. With dentures, the quantity and the intensity of these contacts determine the amount and the direction of the forces that are transmitted through the bases of the denture to the residual ridges. That is why the occlusal scheme is an important factor in the design of complete dentures. Three occlusal schemes are examined in this review: bilateral balanced occlusion, monoplane occlusion, and the linear occlusion scheme. Each scheme represents a different concept of occlusion. Comparisons between these schemes are also reviewed and analyzed. The reasoning underlying the bilateral balanced occlusion scheme is that stability of the dentures is attained when bilateral contacts exist throughout all dynamic and static states of the denture during function. Anatomic teeth are used: the upper anterior teeth are set to satisfy aesthetics, and the posterior teeth are arranged in a compensatory curve and a medial curve. This scheme is adequate for well-developed residual ridges with a skeletal class I relation. With highly resorbed residual ridges, the vectors of force that are transmitted through anatomic cusps will dislodge the lower denture and thus impair the comfort and efficiency of mastication experienced by the patient. In order to accommodate the special needs posed by highly resorbed residual ridges and skeletal relations that are not class I, the monoplane scheme of occlusion was designed. This scheme consists of non-anatomic (cuspless) teeth, which are set so that the anterior teeth provide the aesthetics, the premolars and the first molars are used for chewing, and the second molars do not occlude (although sometimes they are specifically used to establish bilateral contacts in lateral

  8. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  9. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

    Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles is studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including its shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  10. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation...

  11. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    Science.gov (United States)

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness; so as to identify lessons of good practices. We carried out case studies involving key informant interviews, with 184 health practitioners and health planners from the macro-level, meso-level and micro-level from Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness", relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practices; areas of non-adherence were identified as opportunities for improvement. (i) at the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions and they are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation

  12. A Modified Computational Scheme for the Stochastic Perturbation Finite Element Method

    Directory of Open Access Journals (Sweden)

    Feng Wu

    Full Text Available A modified computational scheme of the stochastic perturbation finite element method (SPFEM) is developed for structures with low-level uncertainties. The proposed scheme can provide second-order estimates of the mean and variance without differentiating the system matrices with respect to the random variables. When the proposed scheme is used, it involves only a finite number of analyses of deterministic systems. In the case of one random variable with a symmetric probability density function, the proposed computational scheme can even provide a result with fifth-order accuracy. Compared with the traditional computational scheme of SPFEM, the proposed scheme is more convenient for numerical implementation. Four numerical examples demonstrate that the proposed scheme can be used for linear or nonlinear structures with correlated or uncorrelated random variables.
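
    The idea of recovering second-order statistics from a handful of deterministic analyses, rather than from derivatives of the system matrices, can be illustrated with a one-random-variable sketch. The sampling points and weights of the actual modified SPFEM scheme may differ; everything below, including the toy structural model, is an assumption made for illustration (Python).

      import numpy as np

      def second_order_stats(solve, mu, sigma, h=None):
          # Estimate mean and variance of a scalar response u(x), with x a single
          # random variable of mean mu and standard deviation sigma, from three
          # deterministic analyses u(mu), u(mu - h), u(mu + h).  Second-order Taylor:
          #   E[u]   ~ u(mu) + 0.5 * u''(mu) * sigma^2
          #   Var[u] ~ (u'(mu))^2 * sigma^2
          # The derivatives of the response are replaced by central differences, so
          # no differentiation of the underlying system matrices is needed.
          h = sigma if h is None else h
          u0, um, up = solve(mu), solve(mu - h), solve(mu + h)
          du = (up - um) / (2.0 * h)
          d2u = (up - 2.0 * u0 + um) / h ** 2
          mean = u0 + 0.5 * d2u * sigma ** 2
          var = du ** 2 * sigma ** 2
          return mean, var

      # usage with a toy "structural analysis": axial deflection u = F*L/(E*A),
      # with Young's modulus E as the random variable
      solve = lambda E: 1000.0 * 2.0 / (E * 0.01)     # F = 1 kN, L = 2 m, A = 0.01 m^2
      mean, var = second_order_stats(solve, mu=2.0e11, sigma=1.0e10)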

  13. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Full Text Available Methods described by partial differential equations have gained considerable interest because of undeniable advantages such as an easy mathematical description of the underlying physics phenomena, subpixel precision, isotropy, and direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their wide industrial deployment on embedded systems is slowed down by their considerable computational cost. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  14. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).
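
    A single-field formulation like the one above needs material properties assembled from the two level set functions. The sketch below shows one plausible blending for the density field using smoothed Heaviside functions; the exact combination used in the paper is not reproduced, and the ordering convention (fluid 1 wherever phi1 > 0) is an assumption (Python).

      import numpy as np

      def smoothed_heaviside(phi, eps):
          # Standard smoothed Heaviside over a transition band of half-width eps.
          H = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
          return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, H))

      def blended_density(phi1, phi2, rho1, rho2, rho3, eps):
          # Single density field for three immiscible fluids: fluid 1 where phi1 > 0,
          # fluid 2 where phi2 > 0 (outside fluid 1), fluid 3 elsewhere.
          H1 = smoothed_heaviside(phi1, eps)
          H2 = smoothed_heaviside(phi2, eps) * (1.0 - H1)   # exclude the fluid-1 region
          return rho1 * H1 + rho2 * H2 + rho3 * (1.0 - H1 - H2)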

  15. Patterns of agri-environmental scheme participation in Europe

    DEFF Research Database (Denmark)

    Pavlis, Evangelos S.; Terkenli, Theano S.; Kristensen, Søren Bech Pilgaard

    2016-01-01

    This paper investigates the personal and property characteristics of landowners who use EU Rural Development agri-environmental schemes (AES), as well as their motives for participation or non-participation in such schemes. The study is based on a questionnaire survey with landowners in selected ... areas with marginal potential for agriculture. Motives for non-participation were also found to be dependent on the level of farming engagement and on case-area landscape types. ... geographical particularities and on subjective factors, farmers' individualities, different rural cultures, landscape types, EU and national policies and special needs of the study areas—all areas where agricultural production is increasingly marginalized, for different reasons. Subsidy scheme participation...

  16. Progress with multigrid schemes for hypersonic flow problems

    International Nuclear Information System (INIS)

    Radespiel, R.; Swanson, R.C.

    1995-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25. 32 refs., 31 figs., 1 tab

  17. A light weight secure image encryption scheme based on chaos & DNA computing

    Directory of Open Access Journals (Sweden)

    Bhaskar Mondal

    2017-10-01

    Full Text Available This paper proposes a new lightweight secure cryptographic scheme for secure image communication. In this scheme the plain image is first permuted using a sequence of pseudo-random numbers (PRN) and then encrypted by deoxyribonucleic acid (DNA) computation. Two PRN sequences are generated by a pseudo-random number generator (PRNG) based on a cross-coupled chaotic logistic map using two sets of keys. The first PRN sequence is used for permuting the plain image, whereas the second PRN sequence is used for generating a random DNA sequence. The number of rounds of permutation and encryption may be varied to increase security. The scheme is proposed for gray-level images, but it may be extended to color images and text data. Simulation results show that the proposed scheme can withstand a variety of attacks.
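
    A minimal sketch of the permutation stage driven by a cross-coupled logistic-map key stream is given below; the DNA encoding stage, the key schedule and the exact coupling used in the paper are not reproduced, and all seeds and control parameters are illustrative (Python).

      import numpy as np

      def cross_coupled_logistic(n, x0=0.31, y0=0.72, r1=3.99, r2=3.97):
          # Generate n pseudo-random values in (0, 1) from two cross-coupled
          # logistic maps: each map is iterated with the other map's previous state.
          x, y = x0, y0
          out = np.empty(n)
          for i in range(n):
              x, y = r1 * y * (1.0 - y), r2 * x * (1.0 - x)   # cross coupling
              out[i] = 0.5 * (x + y)
          return out

      def permute_image(img, key_stream):
          # Permute pixels by sorting the key stream (a common chaotic-permutation
          # trick); the stored order allows exact inversion.
          flat = img.ravel()
          order = np.argsort(key_stream[: flat.size])
          return flat[order].reshape(img.shape), order

      def unpermute_image(scrambled, order):
          flat = np.empty_like(scrambled.ravel())
          flat[order] = scrambled.ravel()
          return flat.reshape(scrambled.shape)

      # usage on a toy 4x4 gray-level image
      img = np.arange(16, dtype=np.uint8).reshape(4, 4)
      ks = cross_coupled_logistic(img.size)
      scrambled, order = permute_image(img, ks)
      assert np.array_equal(unpermute_image(scrambled, order), img)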

  18. Gradual and Cumulative Improvements to the Classical Differential Evolution Scheme through Experiments

    Directory of Open Access Journals (Sweden)

    Anescu George

    2016-12-01

    Full Text Available The paper presents the experimental results of tests conducted with the purpose of gradually and cumulatively improving the classical DE scheme in both efficiency and success rate. The modifications consisted of randomization of the scaling factor (a simple jitter scheme), a more efficient Random Greedy Selection scheme, an adaptive scheme for the crossover probability, and a resetting mechanism for the agents. After each modification step, experiments were conducted on a set of 11 scalable, multimodal, continuous optimization functions in order to analyze the improvements and decide the next improvement direction. Finally, only the initial classical scheme and the constructed Fast Self-Adaptive DE (FSA-DE) variant were compared, with the purpose of testing their performance degradation with the increase of the search space dimension. The experimental results demonstrated the superiority of the proposed FSA-DE variant.
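
    The starting point of the experiments above is the classical DE/rand/1/bin scheme with the simple scaling-factor jitter; that baseline is sketched below. The Random Greedy Selection, adaptive crossover and resetting mechanisms of FSA-DE are not reproduced, and all control parameters are illustrative (Python).

      import numpy as np

      def de_rand_1_bin(f, bounds, pop_size=30, gens=200, F=0.5, jitter=0.1, CR=0.9, seed=0):
          # Classical differential evolution (DE/rand/1/bin) with a jittered scaling
          # factor: each mutation uses F * (1 + jitter * (rand - 0.5)).
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = lo.size
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          fit = np.array([f(x) for x in pop])
          for _ in range(gens):
              for i in range(pop_size):
                  a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                  Fj = F * (1.0 + jitter * (rng.random() - 0.5))            # jittered scaling factor
                  mutant = np.clip(pop[a] + Fj * (pop[b] - pop[c]), lo, hi)
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True                           # ensure at least one gene crosses
                  trial = np.where(cross, mutant, pop[i])
                  f_trial = f(trial)
                  if f_trial <= fit[i]:                                     # greedy one-to-one selection
                      pop[i], fit[i] = trial, f_trial
          best = np.argmin(fit)
          return pop[best], fit[best]

      # usage: 5-dimensional sphere function
      x_best, f_best = de_rand_1_bin(lambda x: float(np.sum(x * x)), bounds=[(-5, 5)] * 5)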

  19. Traffic calming schemes : opportunities and implementation strategies.

    NARCIS (Netherlands)

    Schagen, I.N.L.G. van (ed.)

    2003-01-01

    Commissioned by the Swedish National Road Authority, this report aims to provide a concise overview of knowledge of and experiences with traffic calming schemes in urban areas, both on a technical level and on a policy level. Traffic calming refers to a combination of network planning and

  20. Alternative health insurance schemes

    DEFF Research Database (Denmark)

    Keiding, Hans; Hansen, Bodil O.

    2002-01-01

    In this paper, we present a simple model of health insurance with asymmetric information, where we compare two alternative ways of organizing the insurance market: either as a competitive insurance market, where some risks remain uninsured, or as a compulsory scheme, where, however, the level ... competitive insurance; this situation turns out to be at least as good as either of the alternatives...

  1. Implementation and analysis of trajectory schemes for informate: a serial link robot manipulator

    International Nuclear Information System (INIS)

    Rauf, A.; Ahmed, S.M.; Asif, M.; Ahmad, M.

    1997-01-01

    Trajectory planning schemes generally interpolate or approximate the desired path by a class of polynomial functions and generate a sequence of time-based control set points for moving the manipulator from a given initial configuration to a final configuration. Trajectory generation schemes can be implemented in joint space or in Cartesian space. This paper describes joint-space and Cartesian-space trajectory schemes and their implementation for Infomate, a six-degrees-of-freedom serial link robot manipulator. LSPBs (linear segments with parabolic blends) and cubic splines are chosen as the interpolating functions of time for each type of scheme. The modules developed have been incorporated into an OLP system for Infomate. The trajectory planning schemes discussed in this paper incorporate constraints on the velocities and accelerations of the actuators. A comparison with respect to computation and motion time is presented for the above-mentioned trajectory schemes. Algorithms have been developed that enable the end effector to follow a straight line; other paths such as circles and ellipses can be approximated by straight line segments. (author)
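
    As an illustration of the joint-space interpolation such schemes are built from, the following is a minimal cubic polynomial segment with zero start and end velocities; the Infomate kinematics, the LSPB variant and the velocity/acceleration constraint checks are not included, and all numbers are illustrative (Python).

      import numpy as np

      def cubic_joint_trajectory(q0, qf, T, n_points=50):
          # Joint-space cubic trajectory between configurations q0 and qf in time T
          # with zero boundary velocities:
          #   q(t) = q0 + 3*(qf - q0)*(t/T)^2 - 2*(qf - q0)*(t/T)^3
          # Returns time samples and position, velocity and acceleration profiles.
          q0, qf = np.asarray(q0, float), np.asarray(qf, float)
          t = np.linspace(0.0, T, n_points)[:, None]
          s = t / T
          dq = qf - q0
          q = q0 + dq * (3 * s ** 2 - 2 * s ** 3)
          qd = dq * (6 * s - 6 * s ** 2) / T
          qdd = dq * (6 - 12 * s) / T ** 2
          return t.ravel(), q, qd, qdd

      # usage: a 6-joint move (radians) over 2 s; peak joint speed occurs mid-trajectory
      t, q, qd, qdd = cubic_joint_trajectory([0, 0, 0, 0, 0, 0],
                                             [0.5, -0.3, 1.0, 0.0, 0.2, -0.1], T=2.0)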

  2. Global and local level density models

    International Nuclear Information System (INIS)

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exists, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exists. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation with experimental discrete level schemes and experimental cross sections and neutron emission spectra for various different reaction channels has been performed
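
    For orientation, the Back-shifted Fermi gas model mentioned above is usually quoted in the form below; the level density parameter a, the back-shift Δ and the spin cut-off parameter σ are exactly the quantities that the local and global parameterizations in such analyses determine.

      \rho_{\mathrm{BSFG}}(E_x) \;=\; \frac{1}{12\sqrt{2}\,\sigma}\,\frac{\exp\!\left(2\sqrt{a\,U}\right)}{a^{1/4}\,U^{5/4}}, \qquad U = E_x - \Delta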

  3. Exact analysis of Packet Reversed Packet Combining Scheme and Modified Packet Combining Scheme; and a combined scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-07-01

    The packet combining scheme is a well-defined, simple error correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput compared to other basic ARQ protocols, the packet combining (PC) scheme fails to correct errors when the errors occur in the same bit locations of both copies. In a previous work, a scheme known as the Packet Reversed Packet Combining (PRPC) scheme, which corrects errors that occur at the same bit location in the erroneous copies, was studied; however, PRPC does not handle a situation where a packet has more than one error bit. The Modified Packet Combining (MPC) scheme, which can correct double or higher bit errors, was studied elsewhere. Both the PRPC and MPC schemes were believed to offer higher throughput in previous studies; however, neither an adequate investigation nor an exact analysis was done to substantiate this claim of higher throughput. In this work, an exact analysis of both PRPC and MPC is carried out and the results reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error correction capability at high bit error rate (BER) and larger packet size. (author)
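
    A toy illustration of why plain packet combining fails on coinciding errors: XOR-ing two erroneous copies flags only the bit positions where the copies disagree, so an error hitting the same position in both copies stays invisible to the search. The PRPC and MPC refinements are not reproduced here, and the example packet is arbitrary (Python).

      import random

      def candidate_error_positions(copy1, copy2):
          # Bit positions where two received copies of the same packet disagree;
          # plain packet combining searches for corrections only over these positions.
          return [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]

      def corrupt(bits, error_positions):
          return [b ^ 1 if i in error_positions else b for i, b in enumerate(bits)]

      random.seed(1)
      packet = [random.randint(0, 1) for _ in range(16)]
      copy1 = corrupt(packet, {3})        # error in bit 3 of the first copy
      copy2 = corrupt(packet, {3, 9})     # errors in bits 3 and 9 of the second copy
      # bit 3 is hit in both copies, so the XOR comparison only exposes bit 9:
      print(candidate_error_positions(copy1, copy2))   # -> [9]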

  4. Mimicking Natural Photosynthesis: Solar to Renewable H2 Fuel Synthesis by Z-Scheme Water Splitting Systems.

    Science.gov (United States)

    Wang, Yiou; Suzuki, Hajime; Xie, Jijia; Tomita, Osamu; Martin, David James; Higashi, Masanobu; Kong, Dan; Abe, Ryu; Tang, Junwang

    2018-05-23

    Visible light-driven water splitting using cheap and robust photocatalysts is one of the most exciting ways to produce clean and renewable energy for future generations. Cutting edge research within the field focuses on so-called "Z-scheme" systems, which are inspired by the photosystem II-photosystem I (PSII/PSI) coupling from natural photosynthesis. A Z-scheme system comprises two photocatalysts and generates two sets of charge carriers, splitting water into its constituent parts, hydrogen and oxygen, at separate locations. This is not only more efficient than using a single photocatalyst, but practically it could also be safer. Researchers within the field are constantly aiming to bring systems toward industrial level efficiencies by maximizing light absorption of the materials, engineering more stable redox couples, and also searching for new hydrogen and oxygen evolution cocatalysts. This review provides an in-depth survey of relevant Z-schemes from past to present, with particular focus on mechanistic breakthroughs, and highlights current state of the art systems which are at the forefront of the field.

  5. 197Au(d,3He)196Pt reaction and the supersymmetry scheme

    International Nuclear Information System (INIS)

    Vergnes, M.; Berrier-Ronsin, G.; Rotbard, G.; Vernotte, J.; Langevin- Joliot, H.; Gerlic, E.; Wiele, J. van de; Guillot, J.

    1981-01-01

    The 197Au(d,3He)196Pt reaction has been studied at E_d = 108 MeV. An important breakdown of the selection rules of the supersymmetry scheme is observed for the 2_2^+ level. The generally strong excitation of the 2_2^+ level by transfer reactions in the Pt region leads one to question the validity of the supersymmetry scheme, at least for this level

  6. Assessment Schemes for Sustainability Design through BIM: Lessons Learnt

    Directory of Open Access Journals (Sweden)

    Kamaruzzaman Syahrul Nizam

    2016-01-01

    Full Text Available There is increasing demand for sustainability-led design to reduce the negative impacts brought by construction development. The capability of Building Information Modeling (BIM) to achieve sustainability is widely acknowledged. Various sustainability analyses and calculations can be performed at early stages to help designers in decision making. However, implementation is still not widespread in the construction industry, and many industry players still rely on traditional 2D methods for design and analysis. Hence, this study aims to demonstrate a proof of concept for using BIM in sustainability design. The first phase of this study conducted a critical review of existing assessment schemes: BREEAM, LEED, SBTool, CASBEE, BEAM Plus, Green Star, Green Mark and GBI, to develop a set of main criteria to be considered for sustainability design. The findings revealed that fourteen criteria should be considered: management, sustainable site, transport, indoor environmental quality, energy, waste, water, material, pollution, innovation, economics, social, culture and quality of services. It was found that most of the existing schemes emphasize the environmental aspect as compared to economics, social and culture, except SBTool. The next phase of this study will conduct a case study to demonstrate sustainability design through BIM using the criteria developed in the first phase.

  7. Topological Hausdorff dimension and level sets of generic continuous functions on fractals

    International Nuclear Information System (INIS)

    Balka, Richárd; Buczolich, Zoltán; Elekes, Márton

    2012-01-01

    Highlights: ► We examine a new fractal dimension, the so-called topological Hausdorff dimension. ► The generic continuous function has a level set of maximal Hausdorff dimension. ► This maximal dimension is the topological Hausdorff dimension minus one. ► Homogeneity implies that “most” level sets are of this dimension. ► We calculate the various dimensions of the graph of the generic function. - Abstract: In an earlier paper we introduced a new concept of dimension for metric spaces, the so-called topological Hausdorff dimension. For a compact metric space K, let dim_H K and dim_tH K denote its Hausdorff and topological Hausdorff dimension, respectively. We proved that this new dimension describes the Hausdorff dimension of the level sets of the generic continuous function on K, namely sup{dim_H f^{-1}(y) : y ∈ R} = dim_tH K − 1 for the generic f ∈ C(K), provided that K is not totally disconnected; otherwise every non-empty level set is a singleton. We also proved that if K is not totally disconnected and sufficiently homogeneous then dim_H f^{-1}(y) = dim_tH K − 1 for the generic f ∈ C(K) and the generic y ∈ f(K). The most important goal of this paper is to make these theorems more precise. As for the first result, we prove that the supremum is actually attained on the left hand side of the first equation above, and also show that there may only be a unique level set of maximal Hausdorff dimension. As for the second result, we characterize those compact metric spaces for which, for the generic f ∈ C(K) and the generic y ∈ f(K), we have dim_H f^{-1}(y) = dim_tH K − 1. We also generalize a result of B. Kirchheim by showing that if K is self-similar then for the generic f ∈ C(K), for every y ∈ int f(K), we have dim_H f^{-1}(y) = dim_tH K − 1. Finally, we prove that the graph of the generic f ∈ C(K) has the same Hausdorff and topological Hausdorff dimension as K.

  8. Simplification of a dust emission scheme and comparison with data

    Science.gov (United States)

    Shao, Yaping

    2004-05-01

    A simplification of a dust emission scheme is proposed which takes into account saltation bombardment and aggregate disintegration. The statement of the scheme is that dust emission is proportional to the streamwise saltation flux, but the proportionality depends on soil texture and soil plastic pressure p. For small p values (loose soils), the dust emission rate is proportional to u*^4 (u* is the friction velocity), but not necessarily so in general. The dust emission predictions using the scheme are compared with several data sets published in the literature. The comparison enables the estimation of a model parameter and the soil plastic pressure for various soils. While more data are needed for further verification, a general guideline for choosing model parameters is recommended.
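
    For orientation, the streamwise saltation flux Q to which the dust flux is taken to be proportional is often approximated by an Owen-type expression of the form below; this is a common textbook form and not necessarily the exact expression used in the simplified scheme.

      Q \;=\; c\,\frac{\rho_a}{g}\,u_*^3\left(1 - \frac{u_{*t}^2}{u_*^2}\right), \qquad F \propto Q

    where ρ_a is the air density, g the gravitational acceleration, u_{*t} the threshold friction velocity, c an empirical coefficient of order one, and F the dust emission flux whose proportionality to Q carries the dependence on soil texture and plastic pressure p described above.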

  9. Energy Design Advice Scheme (EDAS): operations and achievements 1992-1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    The Energy Design Advice Scheme (EDAS) was launched in 1992 under the DTI's Passive Solar Programme to help improve the energy performance of the UK's building stock. It aimed to do this through direct advice and guidance on passive solar design and energy efficient technologies and processes given to the designers of real building projects. Furthermore, the scheme aimed to raise the awareness and take-up of definitive guidance produced under government programmes such as the Passive Solar programme and the Energy Efficiency Best Practice programme. A target energy saving worth Pound 19.3m was set to be achieved by the end of the scheme. This energy saving is equivalent to a reduction in carbon dioxide emission of 220,000 tonnes per year. (author)

  10. Relationships between college settings and student alcohol use before, during and after events: a multi-level study.

    Science.gov (United States)

    Paschall, Mallie J; Saltz, Robert F

    2007-11-01

    We examined how alcohol risk is distributed based on college students' drinking before, during and after they go to certain settings. Students attending 14 California public universities (N=10,152) completed a web-based or mailed survey in the fall 2003 semester, which included questions about how many drinks they consumed before, during and after the last time they went to six settings/events: fraternity or sorority party, residence hall party, campus event (e.g. football game), off-campus party, bar/restaurant and outdoor setting (referent). Multi-level analyses were conducted in hierarchical linear modeling (HLM) to examine relationships between type of setting and level of alcohol use before, during and after going to the setting, and possible age and gender differences in these relationships. Drinking episodes (N=24,207) were level 1 units, students were level 2 units and colleges were level 3 units. The highest drinking levels were observed during all settings/events except campus events, with the highest number of drinks being consumed at off-campus parties, followed by residence hall and fraternity/sorority parties. The number of drinks consumed before a fraternity/sorority party was higher than other settings/events. Age group and gender differences in relationships between type of setting/event and 'before,''during' and 'after' drinking levels also were observed. For example, going to a bar/restaurant (relative to an outdoor setting) was positively associated with 'during' drinks among students of legal drinking age while no relationship was observed for underage students. Findings of this study indicate differences in the extent to which college settings are associated with student drinking levels before, during and after related events, and may have implications for intervention strategies targeting different types of settings.

  11. Individual-and Setting-Level Correlates of Secondary Traumatic Stress in Rape Crisis Center Staff.

    Science.gov (United States)

    Dworkin, Emily R; Sorell, Nicole R; Allen, Nicole E

    2016-02-01

    Secondary traumatic stress (STS) is an issue of significant concern among providers who work with survivors of sexual assault. Although STS has been studied in relation to individual-level characteristics of a variety of types of trauma responders, less research has focused specifically on rape crisis centers as environments that might convey risk or protection from STS, and no research to our knowledge has modeled setting-level variation in correlates of STS. The current study uses a sample of 164 staff members representing 40 rape crisis centers across a single Midwestern state to investigate the staff member- and agency-level correlates of STS. Results suggest that correlates exist at both levels of analysis. Younger age and greater severity of sexual assault history were statistically significant individual-level predictors of increased STS. Greater frequency of supervision was more strongly related to secondary stress for non-advocates than for advocates. At the setting level, lower levels of supervision and higher client loads agency-wide accounted for unique variance in staff members' STS. These findings suggest that characteristics of both providers and their settings are important to consider when understanding their STS. © The Author(s) 2014.

  12. Fully discrete Galerkin schemes for the nonlinear and nonlocal Hartree equation

    Directory of Open Access Journals (Sweden)

    Walter H. Aschbacher

    2009-01-01

    Full Text Available We study the time dependent Hartree equation in the continuum, the semidiscrete, and the fully discrete setting. We prove existence-uniqueness, regularity, and approximation properties for the respective schemes, and set the stage for a controlled numerical computation of delicate nonlinear and nonlocal features of the Hartree dynamics in various physical applications.

  13. TE/TM scheme for computation of electromagnetic fields in accelerators

    International Nuclear Information System (INIS)

    Zagorodnov, Igor; Weiland, Thomas

    2005-01-01

    We propose a new two-level economical conservative scheme for short-range wake field calculation in three dimensions. The scheme does not have dispersion in the longitudinal direction and is staircase free (second order convergent). Unlike the finite-difference time domain method (FDTD), it is based on a TE/TM like splitting of the field components in time. Additionally, it uses an enhanced alternating direction splitting of the transverse space operator that makes the scheme computationally as effective as the conventional FDTD method. Unlike the FDTD ADI and low-order Strang methods, the splitting error in our scheme is only of fourth order. As numerical examples show, the new scheme is much more accurate on the long-time scale than the conventional FDTD approach

  14. Sources of funding for community schemes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-11-01

    There is an increasing level of interest amongst community groups in the UK to become involved in the development of renewable energy schemes. Often however these community groups have only limited funds of their own, so any additional funds that can be identified to help fund their renewable energy scheme can be very useful. There are a range of funding sources available that provide grants or loans for which community groups are eligible to apply. Few of these funding sources are targeted towards renewable energy specifically, nevertheless the funds may be applicable to renewable energy schemes under appropriate circumstances. To date, however, few of these funds have been accessed by community groups for renewable energy initiatives. One of the reasons for this low take-up of funds on offer could be that the funding sources may be difficult and time-consuming to identify, especially where the energy component of the fund is not readily apparent. This directory draws together details about many of the principal funding sources available in the UK that may consider providing funds to community groups wanting to develop a renewable energy scheme. (author)

  15. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation and to lead to a more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.
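
    As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below clusters pixel intensities with a two-class fuzzy C-means and turns the resulting membership map into a signed initial level-set function; the FCM update formulas are the standard ones, and all parameters are arbitrary.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fcm_intensity(img, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy C-means on pixel intensities (1-D features)."""
    x = img.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.abs(x - centers.T) + 1e-12        # distances to cluster centers
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return u.reshape(img.shape + (n_clusters,)), centers.ravel()

def initial_level_set(img):
    """Signed-distance initialization built from the brightest FCM class."""
    u, centers = fcm_intensity(img)
    mask = u[..., np.argmax(centers)] > 0.5      # crisp mask from memberships
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

if __name__ == "__main__":
    demo = np.zeros((64, 64)); demo[20:40, 20:40] = 1.0
    demo += 0.1 * np.random.default_rng(1).standard_normal(demo.shape)
    print(initial_level_set(demo).shape)
```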

  16. A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation

    Science.gov (United States)

    Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein

    2018-02-01

    The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes has been put forward, mathematically based on the Kac stochastic model. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in the above-mentioned schemes varies between N^(l)(N^(l) − 1)/2 in BT, 1 in BB, and (N^(l) − 1) in SBT or ISBT, where N^(l) is the instantaneous number of particles in the lth cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N^(l) − 1), i.e., N_sel < (N^(l) − 1), keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme; both formulas recover the BB and SBT limits when N_sel is set to 1 and N^(l) − 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no time counter (NTC) and nearest neighbor (NN) collision models.
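
    The pair-selection logic can be sketched schematically as follows. This is a generic Bernoulli-Trial-style loop, not the authors' GBT derivation: the acceptance probability p_collide is left as a caller-supplied placeholder, and the (N − 1)/N_sel rescaling used here to keep the expected collision count unchanged when fewer pairs are examined is an assumption made purely for illustration.

```python
import numpy as np

def bt_style_collisions(cell_particles, n_sel, p_collide, rng=None):
    """Schematic Bernoulli-Trial-style collision selection for one DSMC cell.

    cell_particles : sequence of particle indices located in the cell
    n_sel          : number of candidate pairs to examine (1 = BB-like,
                     N-1 = SBT-like, anything in between = GBT-like)
    p_collide(i,j) : caller-supplied collision probability for a pair over the
                     time step (cross-section, relative speed, cell volume, ...)
    """
    if rng is None:
        rng = np.random.default_rng()
    parts = list(cell_particles)
    rng.shuffle(parts)
    n = len(parts)
    if n < 2:
        return []
    n_sel = max(1, min(int(n_sel), n - 1))
    scale = (n - 1) / n_sel   # assumed rescaling so that examining fewer pairs
                              # keeps the expected number of collisions the same
    accepted = []
    for k in range(n_sel):
        i = parts[k]
        j = parts[rng.integers(k + 1, n)]          # partner drawn from the rest
        if rng.random() < min(1.0, scale * p_collide(i, j)):
            accepted.append((i, j))
    return accepted

if __name__ == "__main__":
    # toy usage: constant pair probability just to exercise the loop
    pairs = bt_style_collisions(range(20), n_sel=5, p_collide=lambda i, j: 0.05)
    print(pairs)
```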

  17. Third Order Reconstruction of the KP Scheme for Model of River Tinnelva

    Directory of Open Access Journals (Sweden)

    Susantha Dissanayake

    2017-01-01

    Full Text Available The Saint-Venant equations (shallow water equations) are used to simulate river flow, flow of liquid in an open channel, tsunamis, etc. The Kurganov-Petrova (KP) scheme, which was developed based on the local speed of discontinuity propagation, can be used to solve hyperbolic partial differential equations (PDEs) and hence the Saint-Venant equations. The KP scheme is semi-discrete: the PDEs are discretized in the spatial domain, resulting in a set of Ordinary Differential Equations (ODEs). In this study, the common 2nd order KP scheme is extended to a 3rd order scheme following the Weighted Essentially Non-Oscillatory (WENO) and Central WENO (CWENO) reconstruction steps. Both the 2nd order and 3rd order schemes have been used in simulation in order to check the suitability of the KP schemes for solving hyperbolic PDEs. The simulation results indicate that the 3rd order KP scheme shows somewhat better stability than the 2nd order scheme. The computational time of the 3rd order KP scheme with variable step-length ODE solvers in MATLAB is lower than that of the 2nd order KP scheme. In addition, it was confirmed that the order of the time integrator should essentially be lower than the order of the spatial discretization. However, for the computation of abrupt step changes, the 2nd order KP scheme gives a more accurate solution.
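
    For orientation, a minimal semi-discrete central-upwind scheme of the Kurganov type for a scalar conservation law u_t + f(u)_x = 0 is sketched below, using second-order minmod reconstruction and a plain forward-Euler step. It only illustrates the ingredients of the KP family; it is neither the river Tinnelva model nor the 3rd-order CWENO variant, and the Burgers flux and CFL number are illustrative choices.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def central_upwind_rhs(u, dx, f, df):
    """Semi-discrete 2nd-order central-upwind RHS for u_t + f(u)_x = 0, periodic BCs."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
    u_minus = u + 0.5 * s                               # state at x_{i+1/2} from cell i
    u_plus = np.roll(u - 0.5 * s, -1)                   # state at x_{i+1/2} from cell i+1
    a_plus = np.maximum(np.maximum(df(u_minus), df(u_plus)), 0.0)   # local one-sided speeds
    a_minus = np.minimum(np.minimum(df(u_minus), df(u_plus)), 0.0)
    denom = a_plus - a_minus
    safe = np.where(denom > 1e-12, denom, 1.0)
    flux = np.where(
        denom > 1e-12,
        (a_plus * f(u_minus) - a_minus * f(u_plus)) / safe
        + a_plus * a_minus / safe * (u_plus - u_minus),
        0.5 * (f(u_minus) + f(u_plus)),
    )
    return -(flux - np.roll(flux, 1)) / dx

if __name__ == "__main__":
    # Burgers' equation as a stand-in scalar conservation law
    n = 200
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    u = np.sin(2.0 * np.pi * x)
    f = lambda q: 0.5 * q ** 2
    df = lambda q: q
    dt = 0.2 * dx                       # CFL-limited step for forward Euler
    for _ in range(300):
        u = u + dt * central_upwind_rhs(u, dx, f, df)
    print(float(u.min()), float(u.max()))
```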

  18. Study of the decay scheme of 159Tm

    International Nuclear Information System (INIS)

    Aguer, Pierre; Bastin, Genevieve; Chin Fan Liang; Libert, Jean; Paris, Pierre; Peghaire, Alain

    1975-01-01

    The energy levels of 159Er have been investigated from the decay of 159Tm (T1/2 = 9 min). Samples were obtained by (p,xn) reaction and on-line separation through the Isocele facility. A level scheme is proposed with 24 levels between 0 and 1.3 MeV [fr]

  19. Oxygen- and nitrogen-chemisorbed carbon nanostructures for Z-scheme photocatalysis applications

    International Nuclear Information System (INIS)

    Qian Zhao; Pathak, Biswarup; Nisar, Jawad; Ahuja, Rajeev

    2012-01-01

    Motivated by very recent experimental findings on carbon nanomaterials for solid-state electron mediator applications in Z-scheme photocatalysis, we have investigated different graphene-based nanostructures chemisorbed by various types and amounts of species such as oxygen (O), nitrogen (N) and hydroxyl (OH), and their electronic structures, using density functional theory. The work functions of the different nanostructures have also been investigated to evaluate their potential applications in Z-scheme photocatalysis for water splitting. The N-, O–N-, and N–N-chemisorbed graphene-based nanostructures (32-carbon-atom supercell, corresponding to a lattice parameter of about 1 nm) are found to be promising as electron mediators between the reduction level and the oxidation level of water splitting. The O- or OH-chemisorbed nanostructures have the potential to be used as electron conductors between H2-evolving photocatalysts and the reduction level (H+/H2). This systematic study is intended to clarify the properties of graphene-based carbon nanostructures in Z-scheme photocatalysis and to guide experimentalists in developing better carbon-based nanomaterials for more efficient Z-scheme photocatalysis applications in the future.

  20. Finite Boltzmann schemes

    NARCIS (Netherlands)

    Sman, van der R.G.M.

    2006-01-01

    In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the
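
    A compact way to see this equivalence (a toy demonstration of the general idea, not taken from the paper) is a D1Q2 lattice Boltzmann scheme for pure diffusion: with relaxation parameter equal to 1 the post-collision populations equal the equilibrium ρ/2, and streaming then reproduces the explicit central finite difference update with diffusion number 1/2.

```python
import numpy as np

def lbm_d1q2_diffusion(rho, steps):
    """D1Q2 lattice Boltzmann for diffusion with relaxation parameter 1 (periodic domain).
    Populations relax fully to the equilibrium rho/2 and then stream one cell."""
    f_plus = 0.5 * rho.copy()    # population moving right
    f_minus = 0.5 * rho.copy()   # population moving left
    for _ in range(steps):
        rho = f_plus + f_minus
        f_plus, f_minus = 0.5 * rho, 0.5 * rho                      # collision
        f_plus, f_minus = np.roll(f_plus, 1), np.roll(f_minus, -1)  # streaming
    return f_plus + f_minus

def ftcs_diffusion(rho, steps):
    """Explicit central difference with diffusion number D*dt/dx^2 = 1/2."""
    for _ in range(steps):
        rho = 0.5 * (np.roll(rho, 1) + np.roll(rho, -1))
    return rho

if __name__ == "__main__":
    rho0 = np.zeros(64); rho0[32] = 1.0
    # the two updates agree to machine precision
    print(np.max(np.abs(lbm_d1q2_diffusion(rho0, 50) - ftcs_diffusion(rho0, 50))))
```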

  1. Emissions trading to combat climate change: The impact of scheme design on transaction costs

    OpenAIRE

    Betz, Regina

    2006-01-01

    This paper explores the likely impact of emissions trading design on transaction costs. Transaction costs include both the costs for the private sector to comply with the scheme rules and the costs of scheme administration. In economic theory transaction costs are often assumed to be zero. But transaction costs are real costs and there is no reason for treating them differently to other costs. Thus, in setting up an emissions trading scheme, transaction costs have to be taken into account in ...

  2. Evaluation of the Norwegian R&D Tax Credit Scheme

    Directory of Open Access Journals (Sweden)

    Ådne Cappelen

    2010-11-01

    Full Text Available We find that the Norwegian R&D tax credit scheme introduced in 2002 mainly works as intended. The scheme is cost-effective and it is used by a large number of firms. It stimulates these firms to invest more in R&D, and, in particular, the effect is positive for small firms with little R&D experience. The returns on the R&D investments supported by the scheme are positive and generally not different from the returns to other R&D investments. We have found examples of what can be interpreted as tax motivated adjustments to the scheme, but to some extent this must be accepted as a cost to subsidy and support schemes intended for use by a large number of economic agents. This is particularly so when attempts are made to keep administrative expenditures and control routines at a low level.

  3. Evaluation of PBL schemes in WRF for high Arctic conditions

    DEFF Research Database (Denmark)

    Kirova-Galabova, Hristina; Batchvarova, Ekaterina; Gryning, Sven-Erik

    2015-01-01

    was examined through two configurations (25 vertical levels and 4 km grid step, 42 vertical levels and 1.33 km grid step). WRF was run with two planetary boundary layer schemes: the Mellor-Yamada-Janjic scheme with local vertical closure and the non-local Yonsei University scheme. Temporal evolution of planetary boundary...... for temperature, above 150 m for relative humidity and for all levels for wind speed. Direct comparison of model and measured data showed that vertical profiles of the studied parameters were reconstructed by the model relatively better in cloudy-sky conditions, compared to clear skies....

  4. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design and in image-guided surgery applications. The complex morphology of thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from a large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.

  5. Meeting stroke survivors' perceived needs: a qualitative study of a community-based exercise and education scheme.

    Science.gov (United States)

    Reed, Mary; Harrington, Rachel; Duggan, Aine; Wood, Victorine A

    2010-01-01

    A qualitative study using a phenomenological approach, to explore stroke survivors' needs and their perceptions of whether a community stroke scheme met these needs. Semi-structured in-depth interviews of 12 stroke survivors, purposively selected from participants attending a new community stroke scheme. Interpretative phenomenological analysis of interviews by two researchers independently. Participants attending the community stroke scheme sought to reconstruct their lives in the aftermath of their stroke. To enable this they needed internal resources of confidence and sense of purpose to 'create their social self', and external resources of 'responsive services' and an 'informal support network', to provide direction and encouragement. Participants felt the community stroke scheme met some of these needs through exercise, goal setting and peer group interaction, which included social support and knowledge acquisition. Stroke survivors need a variety of internal and external resources so that they can rebuild their lives positively post stroke. A stroke-specific community scheme, based on exercise, life-centred goal setting, peer support and knowledge acquisition, is an external resource that can help with meeting some of the stroke survivor's needs.

  7. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan

  8. Efficient and Provable Secure Pairing-Free Security-Mediated Identity-Based Identification Schemes

    Directory of Open Access Journals (Sweden)

    Ji-Jian Chin

    2014-01-01

    Full Text Available Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user’s secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.

  9. Efficient and provable secure pairing-free security-mediated identity-based identification schemes.

    Science.gov (United States)

    Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C-W

    2014-01-01

    Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.

  10. Developing a contributing factor classification scheme for Rasmussen's AcciMap: Reliability and validity evaluation.

    Science.gov (United States)

    Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F

    2017-10-01

    One factor potentially limiting the uptake of Rasmussen's (1997) AcciMap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of AcciMap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M_T1 = 68.8%; M_T2 = 73.9%), and were poor at the descriptor level (M_T1 = 58.5%; M_T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M_T1 = 73.9%; M_T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M_T1 = 67.6%; M_T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factors classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images (background, cytoplasm and nucleolus) according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
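
    As a hedged illustration of that initialization idea (not the authors' model), the snippet below computes an Otsu threshold from the image histogram and uses the resulting foreground mask as a binary initial level set.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's threshold from the intensity histogram (maximizes between-class variance)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                        # class-0 weight for each candidate cut
    w1 = hist.sum() - w0
    cum_mean = np.cumsum(hist * centers)
    m0 = cum_mean / np.maximum(w0, 1e-12)
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (m0 - m1) ** 2
    between[w1 == 0] = 0.0                      # ignore degenerate cuts
    return centers[np.argmax(between)]

def initial_phi(img):
    """Binary initial level set: +1 inside the Otsu foreground, -1 outside."""
    return np.where(img > otsu_threshold(img), 1.0, -1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(0.2, 0.05, (64, 64)); img[16:48, 16:48] += 0.6
    print(initial_phi(img).mean())
```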

  12. Mutual influences of rated currents, short circuit levels, fault durations and integrated protective schemes for industrial distribution MV switchgears

    Energy Technology Data Exchange (ETDEWEB)

    Gaidano, G. (FIAT Engineering, Torino, Italy); Lionetto, P.F.; Pelizza, C.; Tommazzolli, F.

    1979-01-01

    This paper deals with the problem of integrated and coordinated design of distribution systems, as regards the definition of system structure and parameters together with protection criteria and schemes. Advantages in system operation, dynamic response, heavier loads with reduced machinery rating margins and overall cost reduction, can be achieved. It must be noted that MV switchgears installed in industrial main distribution substations are the vital nodes of the distribution system. Very large amounts of power (up to 100 MW and more) are conveyed through MV busbars, coming from Utility and from in-plant generators and outgoing to subdistribution substations, to step-down transformers and to main concentrated loads (big drivers, furnaces etc.). Criteria and methods already studied and applied to public distribution are examined to assess service continuity and economics by means of the reduction of thermal stresses, minimization of disturbances and improvement of system stability. The life of network components depends on sizing, on fault energy levels and on probability of fault occurrence. Constructional measures and protection schemes, which reduce probability and duration of faults, are the most important tools to improve overall reliability. The introduction of advanced techniques, mainly based on computer application, not only allows drastic reduction of fault duration, but also permits the system to operate, under any possible contingency, in the optimal conditions, as the computer provides adaptive control. This mode of system management makes it possible to size network components with reference to the true magnitude of system quantities, avoiding expensive oversizing connected to the unflexibility of conventional protection and control schemes.

  13. Two-Factor User Authentication with Key Agreement Scheme Based on Elliptic Curve Cryptosystem

    Directory of Open Access Journals (Sweden)

    Juan Qu

    2014-01-01

    Full Text Available A password authentication scheme using a smart card is called a two-factor authentication scheme. Two-factor authentication is the most accepted and commonly used mechanism that provides authorized users with a secure and efficient method for accessing resources over an insecure communication channel. Up to now, various two-factor user authentication schemes have been proposed. However, most of them are vulnerable to smart card loss attacks, offline password guessing attacks, impersonation attacks, and so on. In this paper, we design a remote user password authentication scheme with key agreement using an elliptic curve cryptosystem. Security analysis shows that the proposed scheme has a high level of security. Moreover, the proposed scheme is more practical and secure in comparison with some related schemes.
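
    For readers unfamiliar with the key-agreement ingredient, the snippet below shows a plain ECDH exchange with the pyca/cryptography package. It illustrates only the elliptic-curve key-agreement building block, not the paper's password/smart-card protocol; the curve, KDF and info label are arbitrary choices.

```python
# Requires the 'cryptography' package (pyca/cryptography).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral EC key pair on P-256.
server_priv = ec.generate_private_key(ec.SECP256R1())
client_priv = ec.generate_private_key(ec.SECP256R1())

def session_key(own_priv, peer_pub, info=b"illustrative-2fa-session"):
    """ECDH shared secret stretched into a 256-bit session key with HKDF."""
    shared = own_priv.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=info).derive(shared)

k_client = session_key(client_priv, server_priv.public_key())
k_server = session_key(server_priv, client_priv.public_key())
assert k_client == k_server   # both ends now hold the same session key
```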

  14. Boosting flood warning schemes with fast emulator of detailed hydrodynamic models

    Science.gov (United States)

    Bellos, V.; Carbajal, J. P.; Leitao, J. P.

    2017-12-01

    Floods are among the most destructive catastrophic events and their frequency has increased over the last decades. To reduce flood impacts and risks, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or on floodplains hinder the use of detailed simulators. This sets up the difficulty: we need fast predictions that meet the accuracy requirements. Most physics-based detailed simulators, although accurate, will not fulfill the speed demand, even if High Performance Computing techniques are used (the required simulation time is of the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian-process-based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator is used in parallel to verify key predictions of the emulator. The speed gain provided by the emulator also allows uncertainty in the predictions to be quantified using ensemble methods. The above methodology is applied in real
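
    A minimal sketch of the two stages, using scikit-learn's Gaussian process regression as a stand-in emulator, is shown below. The training data are synthetic placeholders rather than outputs of a hydrodynamic model, and the warning threshold is hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Offline stage (illustrative): train an emulator on detailed simulator runs.
# X would hold run parameters (e.g. inflow, rainfall intensity); y would hold
# the simulated observable at a critical site (e.g. peak water depth).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))            # placeholder inputs
y = 1.5 * X[:, 0] + 0.5 * np.sin(6.0 * X[:, 1])    # placeholder response

emulator = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2]),
    normalize_y=True,
)
emulator.fit(X, y)

# Online stage: fast prediction with uncertainty, checked against a threshold.
x_new = np.array([[0.8, 0.3]])
mean, std = emulator.predict(x_new, return_std=True)
m, s = float(mean[0]), float(std[0])
if m + 2.0 * s > 1.2:                              # hypothetical warning level
    print("issue flood warning")
print(m, s)
```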

  15. ESCAP mobile training scheme.

    Science.gov (United States)

    Yasas, F M

    1977-01-01

    In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in its being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report their problems in fulfilling their roles. From these grass roots experiences, they made an analysis of the job, determining what knowledge, attitude and skills it required. Analysis of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. How to consider the problems encountered through government structures for policy making and decisions was also learned. Tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries and project histories; to analyze the particular community by village profiles; to produce indigenous teaching materials; and to practice the role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries.

  16. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  17. PPM-based relay communication schemes for wireless body area networks

    NARCIS (Netherlands)

    Zhang, P.; Willems, F.M.J.; Huang, Li

    2012-01-01

    This paper investigates cooperative communication schemes based on a single relay with pulse-position modulation (PPM) signaling, for enhancing energy efficiency of wireless body area networks (WBANs) in noncoherent channel settings. We explore cooperation between the source and the relay such that

  18. QoS Support Polling Scheme for Multimedia Traffic in Wireless LAN MAC Protocol

    Institute of Scientific and Technical Information of China (English)

    YANG Zhijun; ZHAO Dongfeng

    2008-01-01

    Quality of service (QoS) support is a key attribute for multimedia traffic including video, voice, and data in wireless local area networks (LANs) but is limited in 802.11-based wireless LANs. A polling-based scheme called the point coordination function (PCF) was developed for 802.11 LANs to support the transmission of multimedia traffic. However, the PCF is not able to meet the desired practical traffic differentiation requirements for real-time data. This paper describes a QoS support polling scheme based on the IEEE 802.11 medium access control (MAC) protocol. The scheme uses a two-level polling mechanism with the QoS classes differentiated by two different access policies. Stations with higher priority traffic such as key or real-time data form the first level and can access the common channel through an exhaustive access policy. Other stations with lower priority traffic form the second level and can access the channel through a gated access policy. A system model based on embedded Markov chain theory and a generating function was set up to explicitly analyze the mean information packet waiting time of the two-level polling scheme. Theoretical and simulation results show that the new scheme efficiently differentiates services to guarantee better QoS and system stability.

  19. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  20. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  1. Evaluation and decision of products conceptual design schemes based on customer requirements

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Hong Zhong; Li, Yan Feng; Liu, Yu; Wang, Zhonglai [University of Electronic Science and Technology of China, Sichuan (China)]; Liu, Wenhai [China Science Patent Trademark Agents Ltd., Beijing (China)]

    2011-09-15

    Within the competitive market environment, understanding customer requirements is crucial for all corporations to obtain market share and survive competition. Only the products exactly meeting customer requirements can win in the market place. Therefore, customer requirements play a very important role in the evaluation and decision process of conceptual design schemes of products. In this paper, an evaluation and decision method based on customer requirements is presented. It utilizes the importance of customer requirements, the satisfaction degree of each evaluation metric to the specification, and an evaluation metric which models customer requirements to evaluate the satisfaction degree of each design scheme to specific customer requirements via the proposed BP neural networks. In the evaluation and decision process, fuzzy sets are used to describe the importance of customer requirements, the relationship between customer requirements and evaluation metrics, the satisfaction degree of each scheme to customer requirements, and the crisp set is used to describe the satisfaction degree of each metric to specifications. The effectiveness of the proposed method is demonstrated by an example of front suspension fork design of mountain bikes.

  2. Evaluation and decision of products conceptual design schemes based on customer requirements

    International Nuclear Information System (INIS)

    Huang, Hong Zhong; Li, Yan Feng; Liu, Yu; Wang, Zhonglai; Liu, Wenhai

    2011-01-01

    Within the competitive market environment, understanding customer requirements is crucial for all corporations to obtain market share and survive competition. Only the products exactly meeting customer requirements can win in the market place. Therefore, customer requirements play a very important role in the evaluation and decision process of conceptual design schemes of products. In this paper, an evaluation and decision method based on customer requirements is presented. It utilizes the importance of customer requirements, the satisfaction degree of each evaluation metric to the specification, and an evaluation metric which models customer requirements to evaluate the satisfaction degree of each design scheme to specific customer requirements via the proposed BP neural networks. In the evaluation and decision process, fuzzy sets are used to describe the importance of customer requirements, the relationship between customer requirements and evaluation metrics, the satisfaction degree of each scheme to customer requirements, and the crisp set is used to describe the satisfaction degree of each metric to specifications. The effectiveness of the proposed method is demonstrated by an example of front suspension fork design of mountain bikes

  3. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    Full Text Available Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early. Conclusion The
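
    For reference, these boundary-agreement metrics are straightforward to compute from two sampled contours. The sketch below is my own illustration rather than the paper's code; it uses nearest-point distances for MAD and RMSD and SciPy's directed Hausdorff distance for HD.

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def boundary_metrics(contour_a, contour_b):
    """MAD, RMSD and Hausdorff distance between two (N, 2) point sets sampled
    along the algorithm's and the expert's contours (same units, e.g. mm)."""
    d = cdist(contour_a, contour_b)          # all pairwise point distances
    nearest_ab = d.min(axis=1)               # each A point to nearest B point
    nearest_ba = d.min(axis=0)               # each B point to nearest A point
    both = np.concatenate([nearest_ab, nearest_ba])
    mad = both.mean()
    rmsd = np.sqrt((both ** 2).mean())
    hd = max(directed_hausdorff(contour_a, contour_b)[0],
             directed_hausdorff(contour_b, contour_a)[0])
    return mad, rmsd, hd

if __name__ == "__main__":
    t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    circle = np.c_[np.cos(t), np.sin(t)]
    wobbly = np.c_[1.05 * np.cos(t), 0.95 * np.sin(t)]
    print(boundary_metrics(circle, wobbly))
```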

  4. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

    A second order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We used a rectilinear grid with collocated cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicolson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations of an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.

  5. Discontinuous nodal schemes applied to the bidimensional neutron transport equation

    International Nuclear Information System (INIS)

    Delfin L, A.; Valle G, E. Del; Hennart B, J.P.

    1996-01-01

    In this paper several strongly discontinuous nodal schemes are described, from the one that has only two interpolation parameters per cell to the one having ten. Their application to the spatial discretization of the neutron transport equation in X-Y geometry is also described, giving, for each of the nodal schemes, the approximation for the angular neutron flux that includes the set of interpolation parameters and the corresponding polynomial space. Numerical results were obtained for several test problems; we present here the problem with the highest degree of difficulty and its comparison with published results [1,2]. (Author)

  6. A chaotic cryptography scheme for generating short ciphertext

    International Nuclear Information System (INIS)

    Wong, Kwok-Wo; Ho, Sun-Wah; Yung, Ching-Ki

    2003-01-01

    Recently, we have proposed a chaotic cryptographic scheme based on iterating the logistic map and updating the look-up table dynamically. The encryption and decryption processes become faster as the number of iterations required is reduced. However, the length of the ciphertext is still at least twice that of the original message. This may result in huge ciphertext files and hence long transmission time when encrypting large multimedia files. In this Letter, we modify the chaotic cryptographic scheme proposed previously so as to reduce the length of the ciphertext to a level only slightly longer than that of the original message. Moreover, a session key is introduced into the cryptographic scheme so that the ciphertext length for a given message is not fixed
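
    As background on how logistic-map keystreams are typically built, consider the toy below. It is a generic illustration only, not the Letter's look-up-table scheme, and it is not secure for real use; the seed, control parameter and byte quantization are arbitrary.

```python
def logistic_keystream(n_bytes, x0=0.3141592, r=3.99, burn_in=1000):
    """Toy keystream from the logistic map x <- r*x*(1-x); NOT cryptographically secure."""
    x = x0
    for _ in range(burn_in):             # discard transient iterates
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)   # quantize one iterate to one byte
    return bytes(out)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

if __name__ == "__main__":
    msg = b"chaotic cryptography demo"
    ks = logistic_keystream(len(msg))
    ct = xor_cipher(msg, ks)             # ciphertext has the same length as the message
    assert xor_cipher(ct, ks) == msg
    print(ct.hex())
```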

  7. A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation

    Directory of Open Access Journals (Sweden)

    Jinsong Hu

    2013-01-01

    Full Text Available We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has a theoretical accuracy of O(τ^2 + h^4). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.

  8. Multi-Agent System Based Special Protection and Emergency Control Scheme against Cascading Events in Power System

    DEFF Research Database (Denmark)

    Liu, Zhou

    relay operations due to low voltage or overload state in the post stage of N-1 (or N-k) contingency. If such state could be sensed and adjusted appropriately before those relay actions, the system stability might be sustained. So it is of great significance to develop a suitable protection scheme...... the proposed protection strategy in this thesis, a real time simulation platform based on Real Time Digital Simulator (RTDS) and LabVIEW is built. In this platform, the cases of cascaded blackouts are simulated on the test system simplified from the East Denmark power system. For the MAS based control system......, the distributed power system agents are set up in RTDS, while the agents in higher level are designed by LabVIEW toolkits. The case studies and simulation results demonstrate the effectiveness of real time application of the proposed MAS based special protection and emergency control scheme against the cascaded...

  9. Adaptive protection scheme

    Directory of Open Access Journals (Sweden)

    R. Sitharthan

    2016-09-01

    Full Text Available This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid, namely grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto-reclosers, through which it recovers faster from faults and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through time domain simulations carried out in the PSCAD/EMTDC software environment.

  10. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization.

    Science.gov (United States)

    Dai, Hong-Jie; Lai, Po-Ting; Chang, Yung-Chun; Tsai, Richard Tzong-Han

    2015-01-01

    The functions of chemical compounds and drugs that affect biological processes and their particular effects on the onset and treatment of diseases have attracted increasing interest with the advancement of research in the life sciences. To extract knowledge from the extensive literature on such compounds and drugs, the organizers of BioCreative IV administered the CHEMical Compound and Drug Named Entity Recognition (CHEMDNER) task to establish a standard dataset for evaluating state-of-the-art chemical entity recognition methods. This study introduces the approach of our CHEMDNER system. Instead of emphasizing the development of novel feature sets for machine learning, this study investigates the effect of various tag schemes on the recognition of the names of chemicals and drugs by using conditional random fields. Experiments were conducted using combinations of different tokenization strategies and tag schemes to investigate the effects of tag set selection and tokenization method on the CHEMDNER task. This study presents the CHEMDNER performance of three more representative tag schemes (IOBE, IOBES, and IOB12E) when applied to the widely utilized IOB tag set and combined with coarse-/fine-grained tokenization methods. The experimental results reveal that the fine-grained tokenization strategy performs best in terms of precision, recall and F-score when the IOBES tag set is utilized. The IOBES model with fine-grained tokenization yielded the best F-scores in the six chemical entity categories other than the "Multiple" entity category. Nonetheless, no significant improvement was observed when the more representative tag schemes were used with the coarse- or fine-grained tokenization rules. The best F-scores achieved using the developed system on the test dataset of the CHEMDNER task were 0.833 and 0.815 for the chemical document indexing and the chemical entity mention recognition tasks, respectively. The results herein highlight the importance
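
    To make the tag-scheme comparison concrete, the helper below converts a sentence tagged with the plain IOB2 scheme into IOBES, adding S- for single-token entities and E- for entity-final tokens. This is a generic illustration, not the study's code.

```python
def iob_to_iobes(tags):
    """Convert IOB2 tags (B-X, I-X, O) to IOBES tags (B-, I-, E-, S-, O)."""
    out = []
    for i, tag in enumerate(tags):
        if tag == "O":
            out.append(tag)
            continue
        prefix, label = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        entity_continues = nxt == f"I-{label}"
        if prefix == "B":
            out.append(f"B-{label}" if entity_continues else f"S-{label}")
        else:  # prefix == "I"
            out.append(f"I-{label}" if entity_continues else f"E-{label}")
    return out

if __name__ == "__main__":
    print(iob_to_iobes(["O", "B-CHEM", "I-CHEM", "O", "B-CHEM", "O"]))
    # -> ['O', 'B-CHEM', 'E-CHEM', 'O', 'S-CHEM', 'O']
```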

  11. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th order Runge-Kutta time stepping and 4th order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM scheme show good resolution for category 1 and category 2, respectively.

  12. Scheme (in?) dependence in perturbative Lagrangian quantum field theory

    International Nuclear Information System (INIS)

    Slavnov, D.A.

    1995-01-01

    The problem of renormalization-scheme ambiguity in perturbative quantum field theory is investigated. A procedure is described that makes it possible to express all observable quantities uniquely in terms of a set of base observables. Renormalization group equations for the base observables are constructed. The case of a massive theory is treated. 9 refs.

  13. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding process. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence to the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  14. Analytical reconstruction schemes for coarse-mesh spectral nodal solution of slab-geometry SN transport problems

    International Nuclear Information System (INIS)

    Barros, R. C.; Filho, H. A.; Platt, G. M.; Oliveira, F. B. S.; Militao, D. S.

    2009-01-01

    Coarse-mesh numerical methods are very efficient in the sense that they generate accurate results in short computational time, as the number of floating point operations generally decreases as a result of the reduced number of mesh points. On the other hand, they generate numerical solutions that do not give detailed information on the problem solution profile, as the grid points can be located considerably far apart. In this paper we describe two analytical reconstruction schemes for the coarse-mesh solution generated by the spectral nodal method for the neutral particle discrete ordinates (SN) transport model in slab geometry. The first scheme we describe is based on the analytical reconstruction of the coarse-mesh solution within each discretization cell of the spatial grid set up on the slab. The second scheme is based on the angular reconstruction of the discrete ordinates solution between two contiguous ordinates of the angular quadrature set used in the SN model. Numerical results are given to illustrate the accuracy of the two reconstruction schemes, as described in this paper. (authors)

  15. Angular quadrature sets for the streaming ray method in x-y geometry

    International Nuclear Information System (INIS)

    England, R.; Filippone, W.L.

    1983-01-01

    Streaming ray (SR) computations normally employ a set of specially selected ray directions. For x-y geometry, these directions are not uniformly spaced in the azimuthal angle, nor do they conform to any of the standard quadrature sets in current use. For simplicity, all previous SR computations used uniform angular weights. This note investigates two methods--a bisection scheme and a Fourier scheme--for selecting more appropriate azimuthal angular weights. In the bisection scheme, the azimuthal weight assigned to an SR direction is half the angular spread (in the x-y plane) between its two adjacent ray directions. In the Fourier method, the weights are chosen such that the number of terms in a Fourier series exactly integrable on the interval (0, 2π) is maximized. Several sample calculations have been performed. While both the Fourier and bisection weights showed a significant advantage over the uniform weights used previously, the Fourier scheme appears to be the best method. Lists of bisection and Fourier weights are given for quadrature sets containing 4, 8, 12, ..., 60 azimuthal SR directions
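
    The bisection rule described above (each direction receives half the azimuthal spread between its two neighbouring ray directions) can be sketched in a few lines; the Fourier weights would instead be obtained by solving a small linear system enforcing exact integration of low-order trigonometric terms. The angles in the example are arbitrary, not an actual SR direction set.

```python
import numpy as np

def bisection_weights(theta):
    """Azimuthal weights: half the angular spread between the two neighbouring
    ray directions on the circle (angles in radians, any order)."""
    th = np.sort(np.asarray(theta) % (2.0 * np.pi))
    gap_fwd = (np.roll(th, -1) - th) % (2.0 * np.pi)
    gap_bwd = (th - np.roll(th, 1)) % (2.0 * np.pi)
    return 0.5 * (gap_fwd + gap_bwd)          # weights sum to 2*pi

# example with 8 non-uniformly spaced directions
angles = np.sort(np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 8))
w = bisection_weights(angles)
print(w, w.sum())                              # the sum equals 2*pi
```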

  16. Development of new NDT certification scheme in Singapore

    International Nuclear Information System (INIS)

    Wong, B.S.; Prabhakaran, K.G.; Babu, S.K.; Kuppuswamy, N.

    2009-01-01

    Nondestructive testing plays a vital role in Singapore industry, whether in construction or in oil and gas. To cope with future demand for nondestructive testing personnel and to cater to the local industry's need for qualified and certified NDT operators, the Nondestructive Testing Society (Singapore), NDTSS, launched the SGNDT Certification Scheme. The aim of the organization is to promote and standardize the quality of NDT through education and training, based on a scheme that is on par with internationally recognized third-party certifications. The certification also provides greater confidence to the clients and end users who utilize the NDT test results provided by the certified operators. NDE certification in Singapore varies between industries and currently relies on in-house certification schemes based on SNT-TC-1A, under which organizations find it difficult to standardize the skill and reliability of operators. NDE certification systems also vary globally from country to country. A proper certification system is required to produce successful NDT practitioners suited to the local industry. This paper outlines the development of the Singapore NDT Certification Scheme (SGNDT), its operation, the levels of qualification, the method of operation and the control measures. The Training and Certification committee, the quality management system within the certification scheme and the current system practised in Singapore are discussed in this paper. The paper also highlights the importance of a third-party certification scheme. (author)

  17. A computerized scheme for lung nodule detection in multiprojection chest radiography

    International Nuclear Information System (INIS)

    Guo Wei; Li Qiang; Boyce, Sarah J.; McAdams, H. Page; Shiraishi, Junji; Doi, Kunio; Samei, Ehsan

    2012-01-01

    Purpose: Our previous study indicated that multiprojection chest radiography could significantly improve radiologists' performance for lung nodule detection in clinical practice. In this study, the authors further verify that multiprojection chest radiography can greatly improve the performance of a computer-aided diagnostic (CAD) scheme. Methods: Our database consisted of 59 subjects, including 43 subjects with 45 nodules and 16 subjects without nodules. The 45 nodules included 7 real and 38 simulated ones. The authors developed a conventional CAD scheme and a new fusion CAD scheme to detect lung nodules. The conventional CAD scheme consisted of four steps for (1) identification of initial nodule candidates inside lungs, (2) nodule candidate segmentation based on dynamic programming, (3) extraction of 33 features from nodule candidates, and (4) false positive reduction using a piecewise linear classifier. The conventional CAD scheme processed each of the three projection images of a subject independently and discarded the correlation information between the three images. The fusion CAD scheme included the four steps in the conventional CAD scheme and two additional steps for (5) registration of all candidates in the three images of a subject, and (6) integration of correlation information between the registered candidates in the three images. The integration step retained all candidates detected at least twice in the three images of a subject and removed those detected only once in the three images as false positives. A leave-one-subject-out testing method was used for evaluation of the performance levels of the two CAD schemes. Results: At the sensitivities of 70%, 65%, and 60%, our conventional CAD scheme reported 14.7, 11.3, and 8.6 false positives per image, respectively, whereas our fusion CAD scheme reported 3.9, 1.9, and 1.2 false positives per image, and 5.5, 2.8, and 1.7 false positives per patient, respectively. The low performance of the conventional
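
    The integration step of the fusion scheme (keep only candidates detected in at least two of the three registered projection images) can be sketched as below. The registration itself is not reproduced; `group_key` stands for whatever identifier the registration step assigns to candidates that map to the same location, and the data layout is hypothetical.

```python
from collections import defaultdict

def integrate_candidates(registered, min_views=2):
    """registered: iterable of (view_id, candidate, group_key) tuples, where
    group_key links candidates that register to the same location across the
    three projections. Groups seen in fewer than `min_views` distinct views
    are discarded as false positives."""
    views_per_group = defaultdict(set)
    members = defaultdict(list)
    for view_id, candidate, key in registered:
        views_per_group[key].add(view_id)
        members[key].append(candidate)
    kept = []
    for key, views in views_per_group.items():
        if len(views) >= min_views:
            kept.extend(members[key])
    return kept
```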

  18. Specific features of two diffraction schemes for a widely divergent X-ray beam

    Energy Technology Data Exchange (ETDEWEB)

    Avetyan, K. T.; Levonyan, L. V.; Semerjian, H. S.; Arakelyan, M. M., E-mail: marakelyan@ysu.am; Badalyan, O. M. [Yerevan State University (Armenia)

    2015-03-15

    We investigated the specific features of two diffraction schemes for a widely divergent X-ray beam that use a circular diaphragm 30–50 μm in diameter as a point source of characteristic radiation. In one of the schemes, the diaphragm was set in front of the crystal (the diaphragm-crystal (d-c) scheme); in the other, it was installed behind the crystal (the crystal-diaphragm (c-d) scheme). It was established that the diffraction image in the c-d scheme is a topographic map of the investigated crystal area. In the d-c scheme at L = 2l (l and L are the distances between the crystal and the diaphragm and between the photographic plate and the diaphragm, respectively), the branches of hyperbolas formed in this family of planes (hkl) by the characteristic Kα and Kβ radiations, including higher order reflections, converge into one straight line. It is experimentally demonstrated that this convergence is very sensitive to structural inhomogeneities in the crystal under study.

  19. Evaluating Labour Market Effects of Wage Subsidies for the Disabled -The Danish Flexjobs Scheme

    DEFF Research Database (Denmark)

    Datta Gupta, Nabanita; Larsen, Mona

    2010-01-01

    We evaluate the employment and disability exit effects of a wage subsidy program for the disabled in a setting characterized by universal health insurance and little employment protection. We focus on the Danish Flexjob scheme that was introduced in 1998 and targeted towards improving the employment prospects of the long-term disabled with partial working capacity. We find a substantial, positive employment effect of the scheme in the 1994-2001 period within the target group compared to a control group of closely matched ineligibles, but no discernible effects on the probability of disability exit. For the target group the employment probability is raised by 33 pct. points after the scheme is introduced, relative to a mean employment rate at baseline of 44%. One explanation for a strong employment entry effect concomitant with a non-existent disability exit effect could be that subsidized jobs …

  20. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, information about the cell images obtained in preprocessing with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)
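
    A minimal sketch of the initialization idea (an Otsu threshold used to seed the level set function) is given below; it builds a plain signed distance to the thresholded foreground and is not the authors' three-phase variational model.

```python
import numpy as np
from skimage.filters import threshold_otsu
from scipy.ndimage import distance_transform_edt

def otsu_initial_level_set(image):
    """Signed-distance-like initialisation: positive inside the Otsu
    foreground, negative outside."""
    mask = image > threshold_otsu(image)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)
```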

  1. Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ponciroli, R.; Passerini, S.; Vilim, R. B.

    2016-04-17

    In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. In contrast to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy is presented for operating the reactor at different power levels while respecting the system's physical constraints. In order to achieve higher operational flexibility while ensuring that the implemented control loops do not influence the system's inherent passive safety features, a dedicated supervisory control scheme is designed for the dynamic definition of the set-points to be supplied to the PID controllers. In particular, the traditional approach based on tabulated lookup tables for set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows the reference signals to be optimized according to the system operating conditions, is proposed.

  2. An Evaluation of Interference Mitigation Schemes for HAPS Systems

    Directory of Open Access Journals (Sweden)

    Nam Kim

    2008-07-01

    The International Telecommunication Union-Radiocommunication sector (ITU-R) has conducted frequency sharing studies between fixed services (FSs) using a high altitude platform station (HAPS) and fixed-satellite services (FSSs). In particular, ITU-R has investigated the power limitations related to HAPS user terminals (HUTs) to facilitate frequency sharing with space station receivers. To reduce the level of interference from the HUTs that can harm a geostationary earth orbit (GEO) satellite receiver in a space station, previous studies have taken two approaches: frequency sharing using a separation distance (FSSD) and frequency sharing using power control (FSPC). In this paper, various performance evaluation results of interference mitigation schemes are presented. The results include performance evaluations using a new interference mitigation approach as well as conventional approaches. An adaptive beamforming scheme (ABS) is introduced as a new scheme for efficient frequency sharing, and the interference mitigation effect of the ABS is examined considering pointing mismatch errors. The results confirm that the application of ABS enables frequency sharing between the two systems with a smaller power reduction of HUTs in a co-coverage area compared to the reduction required when conventional schemes are utilized. In addition, the analysis results provide the proper amount of modification of the transmitting power level of the HUT required for suitable frequency sharing.

  3. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
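
    A much-simplified sketch of the local-region idea (local means computed in a Gaussian window on either side of the contour drive the evolution) is shown below; it omits the variances, the MAP weighting, the bias field and the curvature regularisation of the full model. The image is assumed scaled to [0, 1], and the window width and step size are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_means(image, phi, sigma=3.0, eps=1e-8):
    """Gaussian-window means of the image inside (phi > 0) and outside the contour."""
    H = (phi > 0).astype(float)
    m_in = gaussian_filter(image * H, sigma) / (gaussian_filter(H, sigma) + eps)
    m_out = gaussian_filter(image * (1.0 - H), sigma) / (gaussian_filter(1.0 - H, sigma) + eps)
    return m_in, m_out

def evolve(image, phi, steps=200, dt=0.2, sigma=3.0):
    for _ in range(steps):
        m_in, m_out = local_means(image, phi, sigma)
        # move the contour towards the better-fitting local mean
        phi = phi + dt * ((image - m_out) ** 2 - (image - m_in) ** 2)
    return phi
```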

  4. Asynchronous schemes for CFD at extreme scales

    Science.gov (United States)

    Konduri, Aditya; Donzis, Diego

    2013-11-01

    Recent advances in computing hardware and software have made simulations an indispensable research tool for understanding fluid flow phenomena in complex conditions at great detail. Due to the nonlinear nature of the governing NS equations, simulations of high Re turbulent flows are computationally very expensive and demand extreme levels of parallelism. Current large simulations are being done on hundreds of thousands of processing elements (PEs). Benchmarks from these simulations show that communication between PEs takes a substantial amount of time, overwhelming the compute time and resulting in substantial waste of compute cycles as PEs remain idle. We investigate a novel approach based on widely used finite-difference schemes in which computations are carried out asynchronously, i.e. synchronization of data among PEs is not enforced and computations proceed regardless of the status of messages. This drastically reduces PE idle time and results in much larger computation rates. We show that while these schemes remain stable, their accuracy is significantly affected. We present new schemes that maintain accuracy under asynchronous conditions and provide a viable path towards exascale computing. Performance of these schemes will be shown for simple models like Burgers' equation.

  5. Scalability of Direct Solver for Non-stationary Cahn-Hilliard Simulations with Linearized time Integration Scheme

    KAUST Repository

    Woźniak, M.; Smołka, M.; Cortes, Adriano Mauricio; Paszyński, M.; Schaefer, R.

    2016-01-01

    We study the features of a new mixed integration scheme dedicated to solving non-stationary variational problems. The scheme is composed of an FEM approximation with respect to the space variable coupled with a three-level time integration scheme

  6. Integration of Fault Detection and Isolation with Control Using Neuro-fuzzy Scheme

    Directory of Open Access Journals (Sweden)

    A. Asokan

    2009-10-01

    In this paper an algorithm is developed for fault diagnosis and a fault-tolerant control strategy for nonlinear systems subjected to an unknown time-varying fault. First, the fault diagnosis scheme is designed using a model-based fault detection technique. The neuro-fuzzy chi-square scheme is applied for fault detection and isolation. The fault magnitude and the time of occurrence of the fault are obtained through the neuro-fuzzy chi-square scheme. The estimated fault magnitude is normalized and used by the feed-forward control algorithm to make appropriate changes in the manipulated variable to keep the controlled variable near its set value. The feed-forward controller acts along with the feedback controller to control the multivariable system. The proposed scheme is applied to a three-tank process for various types of fault inputs to show the effectiveness of the proposed approach.

  7. The generalized scheme-independent Crewther relation in QCD

    Science.gov (United States)

    Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.

    2017-07-01

    The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any process calculable in perturbative QCD. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton-nucleon scattering times the Adler function, defined from the cross section for electron-positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (Dns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (CBjp) at leading twist. A scheme-dependent ΔCSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to an unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both Dns and the inverse coefficient CBjp^-1 have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, α̂_d(Q) = Σ_{i≥1} α̂_{g1}^i(Q_i), at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Similar

  8. Alternative Scheme for Teleportation of Two-Atom Entangled State in Cavity QED

    Institute of Scientific and Technical Information of China (English)

    YANG Zhen-Biao

    2006-01-01

    We have proposed an alternative scheme for teleportation of a two-atom entangled state in cavity QED. It is based on the degenerate Raman interaction of a single-mode cavity field with a Λ-type three-level atom. The prominent feature of the scheme is that only one cavity is required, which is an advantage over the previous scheme. Moreover, the number of atoms that need to be detected is reduced compared with the previous scheme. The experimental feasibility of the scheme is discussed. The scheme can easily be generalized for teleportation of N-atom GHZ entangled states. The number of atoms that need to be detected does not increase as the number of atoms in the GHZ state increases.

  9. Level Scheme of 223Fr

    International Nuclear Information System (INIS)

    Gaeta, R.; Gonzalez, J.A.; Gonzalez, L.; Roldan, C.

    1972-01-01

    A study has been made of the decay of 227Ac to levels of 223Fr by means of alpha spectrometers with Si barrier detectors and gamma spectrometers with Ge(Li) detectors. The rotational bands 1/2-(541↓), 1/2-(530↑) and 3/2-(532↓) have been identified, as well as two octupolar bands associated with the ground-state band. The results obtained indicate that the unified model is applicable in this intermediate zone of the nuclide chart. (Author) 150 refs

  10. Remote unambiguous discrimination of linearly independent symmetric d-level quantum states

    International Nuclear Information System (INIS)

    Chen Libing; Liu Yuhua; Tan Peng; Lu Hong

    2009-01-01

    A set of linearly independent nonorthogonal symmetric d-level quantum states can be discriminated remotely and unambiguously with the aid of two-level Einstein-Podolsky-Rosen (EPR) states. We present a scheme for such a kind of remote unambiguous quantum state discrimination (UD). The probability of discrimination is in agreement with the optimal probability for local unambiguous discrimination among d symmetric states (Chefles and Barnett 1998 Phys. Lett. A 250 223). This scheme consists of a remote generalized measurement described by a positive operator valued measurement (POVM). This remote POVM can be realized by performing a nonlocal 2d x 2d unitary operation on two spatially separated systems, one being the qudit which is encoded with one of the d symmetric nonorthogonal states to be distinguished and the other an ancillary qubit, followed by a conventional local von Neumann orthogonal measurement on the ancilla. By decomposing the evolution process from the initial state to the final state, we construct a quantum network for realizing the remote POVM with a set of two-level nonlocal controlled-rotation gates, and thus provide a feasible physical means to realize the remote UD. A two-level nonlocal controlled-rotation gate can be implemented by using a two-level EPR pair in addition to local operations and classical communication (LOCC)

  11. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    Science.gov (United States)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
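
    The flavour of such schemes can be conveyed with a toy 1D heat-equation update in which neighbour values may only be available with a random delay; here a delayed value is linearly extrapolated in time from the two most recent levels available, a simple asynchrony-tolerant choice rather than the general AT schemes derived in the paper. The delay model and parameters are illustrative.

```python
import numpy as np

def async_ftcs_step(history, delays, alpha):
    """One FTCS step of u_t = u_xx on a periodic grid.
    history: list of past solutions, history[0] = newest available locally.
    delays:  per-point integer delay of the data received from neighbours.
    Delayed neighbour values are extrapolated in time to reduce the
    asynchrony error."""
    u = history[0]
    n = len(u)
    u_new = np.empty(n)
    for i in range(n):
        nb = []
        for j in ((i - 1) % n, (i + 1) % n):
            k = int(delays[j])
            newest = history[k][j]
            older = history[min(k + 1, len(history) - 1)][j]
            nb.append(newest + k * (newest - older))   # extrapolate to "now"
        u_new[i] = u[i] + alpha * (nb[0] - 2.0 * u[i] + nb[1])
    return u_new

# usage: random delays of 0-2 steps, alpha = nu*dt/dx**2 <= 0.5
rng = np.random.default_rng(0)
n = 64
hist = [np.sin(2 * np.pi * np.arange(n) / n)] * 4
for _ in range(100):
    delays = rng.integers(0, 3, size=n)
    hist = [async_ftcs_step(hist, delays, alpha=0.25)] + hist[:3]
```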

  12. A Generalized Weight-Based Particle-In-Cell Simulation Scheme

    International Nuclear Information System (INIS)

    Lee, W.W.; Jenkins, T.G.; Ethier, S.

    2010-01-01

    A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.

  13. International proposal for an acoustic classification scheme for dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2014-01-01

    Acoustic classification schemes specify different quality levels for acoustic conditions. Regulations and classification schemes for dwellings typically include criteria for airborne and impact sound insulation, façade sound insulation and service equipment noise. However, although important for quality of life, information about acoustic conditions is rarely available, neither for new nor for existing housing. Regulatory acoustic requirements will, if enforced, ensure a corresponding quality for new dwellings, but satisfactory conditions for occupants are not guaranteed. Consequently, several … classes, implying also trade barriers. Thus, a harmonized classification scheme would be useful, and the European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", running 2009-2013 with members from 32 countries, including three overseas …

  14. Atomic mirrors for a Λ-type three-level atom

    International Nuclear Information System (INIS)

    Felemban, Nuha; Aldossary, Omar M; Lembessis, Vassilis E

    2014-01-01

    We propose atom mirror schemes for a three-level atom of Λ-type interacting with two evanescent fields, which are generated as a result of the total internal reflection of two coherent Gaussian laser beams at the interface of a dielectric prism with vacuum. The forces acting on the atom are derived by means of optical Bloch equations, based on the atomic density matrix elements. The theory is illustrated by setting up the equations of motion for the 23Na atom. Two types of excitation schemes are examined, namely the cases in which the evanescent fields have polarizations of σ+−σ− and σ+−π. The equations are solved numerically and we obtain results for atomic trajectories for different parameters. The performance of the mirror for the two types of polarization schemes is quantified and discussed. The possibility of reflecting atoms in pre-determined directions is also discussed. (paper)

  15. National health insurance scheme: Are the artisans benefitting in Lagos state, Nigeria?

    Directory of Open Access Journals (Sweden)

    Princess C Campbell

    2016-01-01

    Background: Health insurance (HI) can serve as a vital risk protection for families and small businesses and also increase access to priority health services. This study determined the knowledge and attitude of artisans toward HI as well as their health-seeking pattern and willingness to join the HI scheme. Methodology: This descriptive cross-sectional survey used a multistage sampling technique to recruit 260 participants, using a self-designed, pretested, interviewer-administered questionnaire. Data were analyzed using Epi-info version 7.0. Chi-square test, Fisher's exact test, and logistic regression were used for associations; the level of significance was set at 5%. Results: The respondents were predominantly male, i.e., 195 (75.0%), with a mean age of 32.36 ± 6.20 years and a mean income of N29,000 ± 5,798.5 ($1 ≈ N161). The majority of the respondents, i.e., 226 (86.9%), were not aware of HI. The overall knowledge was poor (6.5%) and the main source of information was radio/television (41.2%). Nearly half of the respondents (33 out of 67) identified the concept of HI as a pool of contributors' funds for healthcare services only. A high proportion of the respondents (27 out of 34) were aware of the benefits of HI, although the majority, i.e., 27 (79.4%), identified access to medication as the benefit. The majority of the respondents, i.e., 228 (87.7%), expressed a negative attitude toward the scheme; however, 76.5% were willing to join the HI scheme. Conclusion: The artisans had low awareness/poor knowledge of HI, which translated into a negative attitude toward the scheme. There is a need for an aggressive stakeholders' enlightenment campaign to increase coverage.

  16. Sound classification schemes in Europe - Quality classes intended for renovated housing

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2010-01-01

    … exposure in the home included in the proposed main objectives for a housing policy. In most countries in Europe, building regulations specify minimum requirements concerning acoustical conditions for new dwellings. In addition, several countries have introduced sound classification schemes with classes intended to reflect different levels of acoustical comfort. Consequently, acoustic requirements for a dwelling can be specified as the legal minimum requirements or as a specific class in a classification scheme. Most schemes have both higher classes than corresponding to the regulatory requirements …

  17. Energy level schemes of f^N electronic configurations for the di-, tri-, and tetravalent lanthanides and actinides in a free state

    Energy Technology Data Exchange (ETDEWEB)

    Ma, C.-G. [College of Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Brik, M.G., E-mail: mikhail.brik@ut.ee [College of Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Institute of Physics, University of Tartu, Ravila 14C, Tartu 50411 (Estonia); Institute of Physics, Jan Dlugosz University, Armii Krajowej 13/15, PL-42200 Czestochowa (Poland); Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw (Poland); Liu, D.-X.; Feng, B.; Tian, Ya [College of Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Suchocki, A. [Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw (Poland)

    2016-02-15

    The energy level diagrams are theoretically constructed for the di-, tri-, and tetravalent lanthanide and actinide ions, using the Hartree–Fock calculated parameters of the Coulomb and spin–orbit interactions within the f^N (N = 1…13) electron configurations. These diagrams are analogous to Dieke's diagram, which was obtained experimentally. They can be used for an analysis of the optical spectra of all considered groups of ions in various environments. Systematic variation of some prominent energy levels (especially those with a potential for emission transitions) along the isoelectronic 4f/5f ions is considered. - Highlights: • Energy level schemes for di-, tri-, and tetravalent lanthanides/actinides are calculated. • Systematic variation of the characteristic energy levels across the series is considered. • Potentially interesting emission transitions are identified.

  18. Some free boundary problems in potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case of potential flow models with moving boundaries. Moreover, the fluid front will possibly be carrying some material substance which will diffuse in the front and be advected by the front velocity, as for example the use of surfactants to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and we compare the level set based algorithm with previous front tracking models.

  19. The social security scheme in Thailand: what lessons can be drawn?

    Science.gov (United States)

    Tangcharoensathien, V; Supachutikul, A; Lertiendumrong, J

    1999-04-01

    The Social Security Scheme was launched in 1990, covering formal sector private employees for non-work related sickness, maternity and invalidity including cash benefits and funeral grants. The scheme is financed by tripartite contributions from government, employers and employees, each of 1.5% of payroll (total of 4.5%). The scheme decided to pay health care providers, whether public or private, on a flat rate capitation basis to cover both ambulatory and inpatient care. Registration of the insured with a contractor hospital was a necessary consequence of the chosen capitation payment system. The aim of this paper is to review the operation of the scheme, and to explore the implications of capitation payment and registration for utilisation levels and provider behaviour. A key weakness of the scheme's design is suggested to be the initial decision to give employers not employees the responsibility for choosing the registered hospitals. This was done for administrative reasons, but it contributed to low levels of use of the contractor hospitals. In addition, low levels of use were also probably the result of the potential for cream skimming, cost shifting from inpatient to ambulatory care and under-provision of patient care, though since monitoring mechanisms by the Social Security Office were weak, these effects are difficult to detect conclusively. Mechanisms to improve utilisation levels were gradually introduced, such as employee choice of registered hospitals and the formation of sub-contractor networks to improve access to care. A beneficial effect of the capitation payment system was that the Social Security Fund generated substantial reserves and expenditures on sickness benefits were well stabilised. The paper ends by recommending that future policy amendments should be guided by research and empirical findings and that tougher monitoring and enforcement of quality of care standards are required.

  20. Sound classification of dwellings – A diversity of national schemes in Europe

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Sound classification schemes for dwellings exist in ten countries in Europe, typically prepared and published as national standards. The schemes define quality classes intended to reflect different levels of acoustical comfort. The main criteria concern airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. This paper presents the sound classification schemes in Europe and compares the class criteria for sound insulation between dwellings. The schemes have been implemented and revised gradually since the early 1990s. However, due to lack … constructions fulfilling different classes. The current variety of descriptors and classes also causes trade barriers. Thus, there is a need to harmonize characteristics of the schemes, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing …

  1. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    Science.gov (United States)

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid excessive numerical diffusion. The second-order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one- and two-dimensional test problems is carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from more sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
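
    A central-upwind flux of the kind used above can be sketched for a scalar stand-in (Burgers' equation) as follows; the one-sided local speeds a± replace the single maximal speed of a Lax-Friedrichs flux and reduce the numerical diffusion. The sketch is first order and omits the MUSCL reconstruction, the Runge-Kutta stepping and the SRHD conserved-to-primitive conversion of the paper.

```python
import numpy as np

def f(u):                       # Burgers flux, f'(u) = u
    return 0.5 * u ** 2

def central_upwind_step(u, dx, dt):
    """First-order central-upwind (Kurganov-type) update with periodic BCs."""
    uL, uR = u, np.roll(u, -1)                      # states at interface j+1/2
    ap = np.maximum(np.maximum(uL, uR), 0.0)        # one-sided local speeds
    am = np.minimum(np.minimum(uL, uR), 0.0)
    denom = ap - am
    safe = np.where(denom > 1e-14, denom, 1.0)
    flux = np.where(denom > 1e-14,
                    (ap * f(uL) - am * f(uR)) / safe + ap * am / safe * (uR - uL),
                    0.0)
    return u - dt / dx * (flux - np.roll(flux, 1))

# usage: a sine wave steepening into a shock
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(150):
    u = central_upwind_step(u, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]))
```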

  2. Classification schemes for knowledge translation interventions: a practical resource for researchers.

    Science.gov (United States)

    Slaughter, Susan E; Zimmermann, Gabrielle L; Nuspl, Megan; Hanson, Heather M; Albrecht, Lauren; Esmail, Rosmin; Sauro, Khara; Newton, Amanda S; Donald, Maoliosa; Dyson, Michele P; Thomson, Denise; Hartling, Lisa

    2017-12-06

    As implementation science advances, the number of interventions to promote the translation of evidence into healthcare, health systems, or health policy is growing. Accordingly, classification schemes for these knowledge translation (KT) interventions have emerged. A recent scoping review identified 51 classification schemes of KT interventions to integrate evidence into healthcare practice; however, the review did not evaluate the quality of the classification schemes or provide detailed information to assist researchers in selecting a scheme for their context and purpose. This study aimed to further examine and assess the quality of these classification schemes of KT interventions, and provide information to aid researchers when selecting a classification scheme. We abstracted the following information from each of the original 51 classification scheme articles: authors' objectives; purpose of the scheme and field of application; socioecologic level (individual, organizational, community, system); adaptability (broad versus specific); target group (patients, providers, policy-makers), intent (policy, education, practice), and purpose (dissemination versus implementation). Two reviewers independently evaluated the methodological quality of the development of each classification scheme using an adapted version of the AGREE II tool. Based on these assessments, two independent reviewers reached consensus about whether to recommend each scheme for researcher use, or not. Of the 51 original classification schemes, we excluded seven that were not specific classification schemes, not accessible or duplicates. Of the remaining 44 classification schemes, nine were not recommended. Of the 35 recommended classification schemes, ten focused on behaviour change and six focused on population health. Many schemes (n = 29) addressed practice considerations. Fewer schemes addressed educational or policy objectives. Twenty-five classification schemes had broad applicability

  3. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
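
    The 2D-in-3D idea can be sketched as follows: a Chan-Vese-like data term is evaluated slice by slice, while a 3D smoothing term (here a simple Laplacian standing in for the mean-curvature regularisation) couples the slices so that gaps along the stack axis can be bridged. The image stack is assumed scaled to [0, 1]; parameters and the regulariser choice are illustrative, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import laplace

def cv2d_in_3d_step(stack, phi, dt=0.1, mu=0.5):
    """One explicit update of a 3D level-set function phi over an image stack,
    with per-slice (2D) Chan-Vese means and 3D regularisation."""
    force = np.zeros_like(phi)
    for z in range(stack.shape[0]):
        inside = phi[z] > 0
        c_in = stack[z][inside].mean() if inside.any() else 0.0
        c_out = stack[z][~inside].mean() if (~inside).any() else 0.0
        force[z] = (stack[z] - c_out) ** 2 - (stack[z] - c_in) ** 2
    return phi + dt * (force + mu * laplace(phi))
```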

  4. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  5. An Evaluation of Interference Mitigation Schemes for HAPS Systems

    Directory of Open Access Journals (Sweden)

    Kim Nam

    2008-01-01

    The International Telecommunication Union-Radiocommunication sector (ITU-R) has conducted frequency sharing studies between fixed services (FSs) using a high altitude platform station (HAPS) and fixed-satellite services (FSSs). In particular, ITU-R has investigated the power limitations related to HAPS user terminals (HUTs) to facilitate frequency sharing with space station receivers. To reduce the level of interference from the HUTs that can harm a geostationary earth orbit (GEO) satellite receiver in a space station, previous studies have taken two approaches: frequency sharing using a separation distance (FSSD) and frequency sharing using power control (FSPC). In this paper, various performance evaluation results of interference mitigation schemes are presented. The results include performance evaluations using a new interference mitigation approach as well as conventional approaches. An adaptive beamforming scheme (ABS) is introduced as a new scheme for efficient frequency sharing, and the interference mitigation effect of the ABS is examined considering pointing mismatch errors. The results confirm that the application of ABS enables frequency sharing between the two systems with a smaller power reduction of HUTs in a co-coverage area compared to the reduction required when conventional schemes are utilized. In addition, the analysis results provide the proper amount of modification of the transmitting power level of the HUT required for suitable frequency sharing.

  6. Evaluating Labour Market Effects of Wage Subsidies for the Disabled – the Danish Flexjob Scheme

    DEFF Research Database (Denmark)

    Datta Gupta, Nabanita; Larsen, Mona

    We evaluate the employment and disability exit effects of a wage subsidy program for the disabled in a setting characterized by universal health insurance and little employment protection. We focus on the Danish Flexjob scheme that was introduced in 1998 and targeted towards improving the employment prospects of the long-term disabled with partial working capacity. We find a substantial, positive employment effect of the scheme in the 1994-2001 period within the target group compared to a control group of closely matched ineligibles, but no discernible effects on the probability of disability exit. For the target group the employment probability is raised by 33 pct. points after the scheme is introduced, relative to a mean employment rate at baseline of 44%. One explanation for a strong employment entry effect concomitant with a non-existent disability exit effect could be that subsidized jobs …

  7. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    Science.gov (United States)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an interesting property: they are fast and stable. Tests show that the method works up to the parameter value CFL, and so any value of the parameter between zero and this value is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give desirable accuracy. The errors and the CPU time of these schemes and of the Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
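
    For a scalar stand-in (Burgers' equation) the second-stage flux blending can be sketched as below: a convex combination of a Lax-Friedrichs-type flux and a Richtmyer-type (two-step Lax-Wendroff) flux, with beta = 0 recovering the Lax-Friedrichs-type scheme. The first-stage absorption of the nonconservative source term into equilibria, and the particular parameter values named in the abstract, are not reproduced here.

```python
import numpy as np

def f(u):                                   # Burgers flux as a scalar stand-in
    return 0.5 * u ** 2

def blended_flux(uL, uR, dx, dt, beta):
    """Convex combination of Lax-Friedrichs-type and Richtmyer-type fluxes."""
    lf = 0.5 * (f(uL) + f(uR)) - 0.5 * dx / dt * (uR - uL)
    u_star = 0.5 * (uL + uR) - 0.5 * dt / dx * (f(uR) - f(uL))
    return (1.0 - beta) * lf + beta * f(u_star)

def step(u, dx, dt, beta=0.5):
    """One conservative update on a periodic grid."""
    uR = np.roll(u, -1)
    flux = blended_flux(u, uR, dx, dt, beta)
    return u - dt / dx * (flux - np.roll(flux, 1))
```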

  8. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  9. An early separation scheme for the LHC luminosity upgrade

    CERN Document Server

    Sterbini, G

    2010-01-01

    The present document is organized in five chapters. In the first chapter the framework of the study is described, developing the motivations, the goals and the requirements for the LHC Luminosity Upgrade. We analyze the need for the crossing angle and its impact on the peak luminosity of the collider. After having introduced the Early Separation Scheme, we explain how it may overcome some limitations of the present machine. We compare the nominal LHC crossing scheme with the proposed one, underlining its potential in terms of performance and its issues with respect to the integration in the detectors. An analysis of the integrated magnetic field required is given. In the second chapter we introduce one of the most powerful aspects of the scheme: luminosity leveling. After a description of the physical model adopted, we compare the results of its analytical and numerical solutions. All the potential improvements due to the Early Separation Scheme are shown on the luminosity plane (peak luminosity versus int...

  10. Implications of sea-level rise in a modern carbonate ramp setting

    Science.gov (United States)

    Lokier, Stephen W.; Court, Wesley M.; Onuma, Takumi; Paul, Andreas

    2018-03-01

    This study addresses a gap in our understanding of the effects of sea-level rise on the sedimentary systems and morphological development of recent and ancient carbonate ramp settings. Many ancient carbonate sequences are interpreted as having been deposited in carbonate ramp settings. These settings are poorly represented in the Recent. The study documents the present-day transgressive flooding of the Abu Dhabi coastline at the southern shoreline of the Arabian/Persian Gulf, a carbonate ramp depositional system that is widely employed as a Recent analogue for numerous ancient carbonate systems. Fourteen years of field-based observations are integrated with historical and recent high-resolution satellite imagery in order to document and assess the onset of flooding. Predicted rates of transgression (i.e. landward movement of the shoreline) of 2.5 m yr-1 (±0.2 m yr-1) based on global sea-level rise alone were far exceeded by the flooding rate calculated from the back-stepping of coastal features (10-29 m yr-1). This discrepancy results from the dynamic nature of the flooding, with increased water depth exposing the coastline to increased erosion and thereby enhancing back-stepping. A non-accretionary transgressive shoreline trajectory results from relatively rapid sea-level rise coupled with a low-angle ramp geometry and a paucity of sediments. The flooding is represented by the landward migration of facies belts, a range of erosive features and the onset of bioturbation. Employing Intergovernmental Panel on Climate Change (Church et al., 2013) predictions for 21st century sea-level rise, and allowing for the post-flooding lag time that is typical for the start-up of carbonate factories, it is calculated that the coastline will continue to retrograde for the foreseeable future. Total passive flooding (without considering feedback in the modification of the shoreline) by the year 2100 is calculated to likely be between 340 and 571 m with a flooding rate of 3
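
    The passive (geometric) part of the predicted transgression can be illustrated with a one-line calculation: on a low-angle planar ramp the horizontal shoreline retreat is the vertical sea-level rise divided by the ramp gradient. The gradient and rise rate used below are placeholder values, not the ones adopted in the study.

```python
def shoreline_retreat(rise_rate_m_per_yr, gradient):
    """Horizontal retreat (m/yr) of the shoreline for a given sea-level rise
    rate (m/yr) on a planar ramp with the given gradient (rise/run)."""
    return rise_rate_m_per_yr / gradient

# placeholder numbers: 3.2 mm/yr rise on a 1:800 ramp -> ~2.6 m/yr retreat
print(shoreline_retreat(0.0032, 1.0 / 800.0))
```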

  11. Tag-KEM from Set Partial Domain One-Way Permutations

    Science.gov (United States)

    Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru

    Recently a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext-secure public-key encryption also applies to secure Tag-KEM construction, and some known encoding methods like OAEP can be used for this purpose, it is worth pursuing more efficient encoding methods dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial domain one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type. We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.

  12. Level densities

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.

    1998-01-01

    For any application of the statistical theory of nuclear reactions it is very important to obtain the parameters of the level density description from reliable experimental data. The cumulative numbers of low-lying levels and the average spacings between neutron resonances are usually used as such data. The level density parameters fitted to such data are compiled in the RIPL Starter File for the three models most frequently used in practical calculations: i) For the Gilbert-Cameron model the parameters of the Beijing group, based on rather recent compilations of the neutron resonance and low-lying level densities and included in the beijing-gc.dat file, are chosen as recommended. As alternative versions the parameters provided by other groups are given in the files jaeri-gc.dat, bombay-gc.dat and obninsk-gc.dat. Additionally, the iljinov-gc.dat and mengoni-gc.dat files include sets of level density parameters that take into account the damping of shell effects at high energies. ii) For the back-shifted Fermi gas model the beijing-bs.dat file is selected as the recommended one. Alternative parameters of the Obninsk group are given in the obninsk-bs.dat file and those of Bombay in bombay-bs.dat. iii) For the generalized superfluid model the Obninsk group parameters included in the obninsk-bcs.dat file are chosen as the recommended ones and the beijing-bcs.dat file is included as an alternative set of parameters. iv) For the microscopic approach to the level densities the files are: obninsk-micro.for - FORTRAN 77 source for the microscopic statistical level density code developed in Obninsk by Ignatyuk and coworkers; moller-levels.gz - Moeller single-particle level and ground state deformation data base; moller-levels.for - retrieval code for the Moeller single-particle level scheme. (author)
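
    For orientation, the back-shifted Fermi-gas form mentioned above can be evaluated with a few lines of code; the expression below is the standard textbook formula for the total level density, and the parameter values in the example are made up rather than taken from any of the RIPL files listed.

```python
import numpy as np

def bsfg_level_density(E, a, delta, sigma):
    """Back-shifted Fermi-gas total level density (standard form):
    rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**0.25*U**1.25), U = E - delta."""
    U = np.asarray(E, dtype=float) - delta
    U = np.where(U > 0.0, U, np.nan)            # defined only above the back-shift
    return np.exp(2.0 * np.sqrt(a * U)) / (12.0 * np.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

# illustrative call (energies in MeV, a in 1/MeV; values are placeholders)
print(bsfg_level_density(E=8.0, a=12.0, delta=0.5, sigma=4.0))
```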

  13. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used for the segmentation of left ventricular (LV) boundaries on echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. It improves the accuracy of boundary detection and makes the evolution more efficient. The experiments conducted on real cardiac ultrasound image sequences show a positive and promising result.

  14. European insurance scheme to cover geological risk related to geothermal operations

    Energy Technology Data Exchange (ETDEWEB)

    Tiberi, U [European Community, General Directorate XVII, ALTERNER Program, Bruxelles (Belgium); Demange, J [BRGM, Orleans (France)

    1997-12-01

    The development of geothermal energy can contribute significantly to the growth of NRE (new and renewable energies that are non-nuclear and non-combustible) within the European Community and within Europe as a whole. However, the 'mining risk' related to this type of operation still constitutes a major obstacle to its development. Operators find it difficult to raise the necessary financing without a guarantee against the risk of failure during the drilling stage. Standard insurance companies will not cover this type of risk, due to its very nature. We must therefore find a specific solution. As a result of the oil crises during the 1970s, the French Government decided to promote the use of renewable energies in France. The support provided to these energies, or at least to geothermal energy, was to set up a scheme whereby the resource is guaranteed. Thus the operator, by subscribing to the scheme, benefits from a guarantee of the resource. The insurance works at two levels: - in the first place, it covers the mining risk during the drilling stage, i.e. should the resource prove to be insufficient, whether in discharge or temperature, for an economically viable operation, then the totality of the costs is reimbursed, apart from the premium and any government subsidy that might have been received. - A second level of guarantee covers the risk of change in the resource's parameters over a period of 15 years (a study is in progress to consider the possibility of extending this period to 25 years). (orig.)

  15. Economic Droop Scheme for Decentralized Power Management in DC Microgrids

    Directory of Open Access Journals (Sweden)

    E. Alizadeh

    2016-12-01

    Full Text Available This paper proposes an autonomous and economic droop control scheme for DC microgrid applications. In this method, a cost-effective power sharing technique among various types of DG units is adopted. The droop settings are determined by an algorithm that manages power sharing individually, without the complicated optimization methods commonly applied in centralized control. In the proposed scheme, the system retains all the advantages of the traditional droop method while minimizing the generation costs of the DC microgrid. All DGs are sorted according to their total generation cost and the reference voltage of their droop equations is then determined. The proposed scheme is applied to a typical DC microgrid consisting of four different types of DGs and a controllable load. Simulation results obtained with MATLAB/SIMULINK are presented to verify the effectiveness of the proposed method.
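
    For reference, the droop relation underlying such schemes ties each DG's DC output voltage reference to its output current; a minimal hedged form (our notation, not the paper's) is

        V_{o,i} = V_{\mathrm{ref},i} - R_{d,i}\, I_{o,i},

    where $R_{d,i}$ is the virtual droop resistance and the economic variant described above would assign the reference voltages $V_{\mathrm{ref},i}$ according to the cost ordering of the DGs, so that cheaper units pick up load first.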

  16. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
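
    A minimal sketch (ours, not the authors' code) of the underlying construction: a greedily built maximal independent set is, by maximality, also a dominating set, which is the property the study exploits.

        import random

        def greedy_mis(adjacency):
            """Greedy maximal independent set; adjacency maps node -> set of neighbours."""
            nodes = list(adjacency)
            random.shuffle(nodes)            # a random order; degree-based orders also work
            mis, blocked = set(), set()
            for v in nodes:
                if v not in blocked:
                    mis.add(v)
                    blocked.add(v)
                    blocked |= adjacency[v]  # neighbours may no longer enter the set
            return mis

        def is_dominating(adjacency, subset):
            """Every node is in the subset or adjacent to a member of it."""
            return all(v in subset or adjacency[v] & subset for v in adjacency)

        # toy example
        g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
        assert is_dominating(g, greedy_mis(g))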

  17. Exact density functional and wave function embedding schemes based on orbital localization

    International Nuclear Information System (INIS)

    Hégely, Bence; Nagy, Péter R.; Kállay, Mihály; Ferenczy, György G.

    2016-01-01

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  18. Exact density functional and wave function embedding schemes based on orbital localization

    Science.gov (United States)

    Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály

    2016-08-01

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  19. Exact density functional and wave function embedding schemes based on orbital localization

    Energy Technology Data Exchange (ETDEWEB)

    Hégely, Bence; Nagy, Péter R.; Kállay, Mihály, E-mail: kallay@mail.bme.hu [MTA-BME Lendület Quantum Chemistry Research Group, Department of Physical Chemistry and Materials Science, Budapest University of Technology and Economics, P.O. Box 91, H-1521 Budapest (Hungary); Ferenczy, György G. [Medicinal Chemistry Research Group, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok körútja 2, H-1117 Budapest (Hungary); Department of Biophysics and Radiation Biology, Semmelweis University, Tűzoltó u. 37-47, H-1094 Budapest (Hungary)

    2016-08-14

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  20. Scheme for Quantum Computing Immune to Decoherence

    Science.gov (United States)

    Williams, Colin; Vatan, Farrokh

    2008-01-01

    that the derivation provides explicit constructions for finding the exchange couplings in the physical basis needed to implement any arbitrary 1-qubit gate. These constructions lead to spintronic encodings of quantum logic that are more efficient than those of a previously published scheme that utilizes a universal but fixed set of gates.

  1. A numerical relativity scheme for cosmological simulations

    Science.gov (United States)

    Daverio, David; Dirian, Yves; Mitsou, Ermis

    2017-12-01

    Cosmological simulations involving the fully covariant gravitational dynamics may prove relevant in understanding relativistic/non-linear features and, therefore, in taking better advantage of the upcoming large-scale structure survey data. We propose a new 3+1 integration scheme for general relativity in the case where the matter sector contains a minimally-coupled perfect fluid field. The original feature is that we completely eliminate the fluid components through the constraint equations, thus remaining with a set of unconstrained evolution equations for the rest of the fields. This procedure does not constrain the lapse function and shift vector, so it holds in arbitrary gauge and also works for arbitrary equation of state. An important advantage of this scheme is that it allows one to define and pass an adaptation of the robustness test to the cosmological context, at least in the case of pressureless perfect fluid matter, which is the relevant one for late-time cosmology.

  2. An Energy Efficient Cooperative Hierarchical MIMO Clustering Scheme for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sungyoung Lee

    2011-12-01

    Full Text Available In this work, we present an energy efficient hierarchical cooperative clustering scheme for wireless sensor networks. Communication cost is a crucial factor in depleting the energy of sensor nodes. In the proposed scheme, nodes cooperate to form clusters at each level of network hierarchy ensuring maximal coverage and minimal energy expenditure with relatively uniform distribution of load within the network. Performance is enhanced by cooperative multiple-input multiple-output (MIMO) communication ensuring energy efficiency for WSN deployments over large geographical areas. We test our scheme using TOSSIM and compare the proposed scheme with the cooperative multiple-input multiple-output (CMIMO) clustering scheme and the traditional multihop Single-Input-Single-Output (SISO) routing approach. Performance is evaluated on the basis of number of clusters, number of hops, energy consumption and network lifetime. Experimental results show significant energy conservation and increase in network lifetime as compared to existing schemes.

  3. Interference Cancellation Schemes for Single-Carrier Block Transmission with Insufficient Cyclic Prefix

    Directory of Open Access Journals (Sweden)

    Hayashi Kazunori

    2008-01-01

    Full Text Available This paper proposes intersymbol interference (ISI) and interblock interference (IBI) cancellation schemes at the transmitter and the receiver for single-carrier block transmission with insufficient cyclic prefix (CP). The proposed scheme at the transmitter can eliminate the interference simply by setting some signals in the transmitted signal block to be the same as those of the previous transmitted signal block. On the other hand, the proposed schemes at the receiver can cancel the interference without any change in the transmitted signals compared to the conventional method. The IBI components are reduced by using previously detected data signals, while for the ISI cancellation, we first change the defective channel matrix into a circulant matrix by using tentative decisions, which are obtained by our newly derived frequency domain equalization (FDE), and then the conventional FDE is performed to compensate the ISI. Moreover, we propose a pilot signal configuration which enables us to estimate a channel impulse response whose order is greater than the guard interval (GI). Computer simulations show that the proposed interference cancellation schemes can significantly improve bit error rate (BER) performance, and the validity of the proposed channel estimation scheme is also demonstrated.
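
    A minimal numpy sketch of the conventional frequency-domain equalization step referred to above, assuming the effective channel matrix has already been made circulant (by a sufficient CP or by the tentative-decision correction the paper describes); variable names and the zero-forcing choice are ours.

        import numpy as np

        def fde_zero_forcing(rx_block, channel_ir, n):
            """One-tap frequency-domain equalization for a circulant channel."""
            H = np.fft.fft(channel_ir, n)        # channel frequency response
            R = np.fft.fft(rx_block, n)          # received block in the frequency domain
            return np.fft.ifft(R / H)            # zero-forcing; MMSE would add a noise term

        # toy example: a length-8 block through a 3-tap channel (circular convolution)
        block = np.random.choice([-1.0, 1.0], size=8)
        h = np.array([1.0, 0.5, 0.2])
        rx = np.fft.ifft(np.fft.fft(block) * np.fft.fft(h, 8))
        print(np.round(fde_zero_forcing(rx, h, 8).real, 3))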

  4. An Improved Overloading Scheme for Downlink CDMA

    Directory of Open Access Journals (Sweden)

    Marc Moeneclaey

    2005-04-01

    Full Text Available An improved overloading scheme is presented for single-user detection in the downlink of multiple-access systems based on OCDMA/OCDMA (O/O). By displacing in time the orthogonal signatures of the two user sets that make up the overloaded system, the cross-correlation between the users of the two sets is reduced. For random O/O with square-root cosine rolloff chip pulses, the multiuser interference can be decreased by up to 50% (depending on the chip pulse bandwidth) as compared to quasiorthogonal sequences (QOS) that are presently part of the downlink standard of Cdma2000. This reduction of the multiuser interference gives rise to an increase of the achievable signal-to-interference-plus-noise ratio for a particular channel load.

  5. Quality of Recovery Evaluation of the Protection Schemes for Fiber-Wireless Access Networks

    Science.gov (United States)

    Fu, Minglei; Chai, Zhicheng; Le, Zichun

    2016-03-01

    With the rapid development of fiber-wireless (FiWi) access networks, protection schemes have received more and more attention due to the risk of huge data loss when failures occur. However, there are few studies that evaluate the performance of FiWi protection schemes under a unified evaluation criterion. In this paper, the quality of recovery (QoR) method was adopted to evaluate the performance of three typical protection schemes (MPMC scheme, OBOF scheme and RPMF scheme) against segment-level failures in FiWi access networks. The QoR models of the three schemes were derived in terms of availability, quality of backup path, recovery time and redundancy. To compare the performance of the three protection schemes comprehensively, five different classes of network services, such as emergency service, prioritized elastic service and conversational service, were utilized by assigning different QoR weights. Simulation results showed that, for most service cases, the RPMF scheme proved to be the best solution to enhance survivability when planning the FiWi access network.

  6. Upwind differencing scheme for the equations of ideal magnetohydrodynamics

    International Nuclear Information System (INIS)

    Brio, M.; Wu, C.C.

    1988-01-01

    Recently, upwind differencing schemes have become very popular for solving hyperbolic partial differential equations, especially when discontinuities exist in the solutions. Among many upwind schemes successfully applied to the problems in gas dynamics, Roe's method stands out for its relative simplicity and clarity of the underlying physical model. In this paper, an upwind differencing scheme of Roe-type for the MHD equations is constructed. In each computational cell, the problem is first linearized around some averaged state which preserves the flux differences. Then the solution is advanced in time by computing the wave contributions to the flux at the cell interfaces. One crucial task of the linearization procedure is the construction of a Roe matrix. For the special case γ = 2, a Roe matrix in the form of a mean value Jacobian is found, and for the general case, a simple averaging procedure is introduced. All other necessary ingredients of the construction, which include eigenvalues, and a complete set of right eigenvectors of the Roe matrix and decomposition coefficients are presented. As a numerical example, we chose a coplanar MHD Riemann problem. The problem is solved by the newly constructed second-order upwind scheme as well as by the Lax-Friedrichs, the Lax-Wendroff, and the flux-corrected transport schemes. The results demonstrate several advantages of the upwind scheme. In this paper, we also show that the MHD equations are nonconvex. This is in contrast to the general belief that the fast and slow waves are like sound waves in the Euler equations. As a consequence, the wave structure becomes more complicated; for example, compound waves consisting of a shock and attached to it a rarefaction wave of the same family can exist in MHD. copyright 1988 Academic Press, Inc
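
    For orientation, the generic Roe-type numerical flux at a cell interface takes the standard form (independent of the MHD-specific Roe matrix constructed in the paper)

        F_{i+1/2} = \tfrac{1}{2}\left[F(U_L) + F(U_R)\right]
                    - \tfrac{1}{2}\sum_k \left|\hat\lambda_k\right| \alpha_k\, \hat r_k,

    where $\hat\lambda_k$ and $\hat r_k$ are the eigenvalues and right eigenvectors of the Roe matrix $\hat A(U_L, U_R)$ and the $\alpha_k$ are the coefficients of the decomposition $U_R - U_L = \sum_k \alpha_k \hat r_k$.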

  7. Note of the 20 December 2016 on the elaboration of regional biomass schemes

    International Nuclear Information System (INIS)

    Michel, Laurent; GESLAIN-LANEELLE, Catherine

    2016-01-01

    This official note informs regional prefects of the modalities for elaborating regional biomass schemes, and provides them with data on mobilizable biomass resources at the regional level. It briefly presents the legal framework of the scheme elaboration, the elaboration agenda, and the scope of biomass usages to be considered within the scheme. It discusses the articulation between regional and national schemes, both from a global point of view and in terms of the elaboration work. It briefly describes how the regional scheme is to be assessed. Appendices present planning and programming elements to be taken into account, explanatory elements for the regional biomass table, the institutions represented in the information and orientation committee of the national scheme, and regional biomass tables which contain detailed assessments of the various biomass resources for each French region.

  8. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.
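
    A minimal sketch (ours, and far simpler than the paper's parallel local level-set model) of the basic operation such a grain-growth code performs for each grain: one explicit upwind update of a level-set function under a prescribed normal speed.

        import numpy as np

        def level_set_step(phi, speed, dx, dt):
            """One explicit step of  d(phi)/dt + speed*|grad(phi)| = 0  on a periodic 2D grid."""
            dxm = (phi - np.roll(phi,  1, axis=0)) / dx   # backward differences
            dxp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward differences
            dym = (phi - np.roll(phi,  1, axis=1)) / dx
            dyp = (np.roll(phi, -1, axis=1) - phi) / dx
            # Godunov upwind gradient magnitude for positive / negative speed
            grad_pos = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                               np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
            grad_neg = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                               np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
            grad = np.where(speed > 0, grad_pos, grad_neg)
            return phi - dt * speed * grad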

  9. Time-dependent internal density functional theory formalism and Kohn-Sham scheme for self-bound systems

    International Nuclear Information System (INIS)

    Messud, Jeremie

    2009-01-01

    The stationary internal density functional theory (DFT) formalism and Kohn-Sham scheme are generalized to the time-dependent case. It is proven that, in the time-dependent case, the internal properties of a self-bound system (such as an atomic nucleus or a helium droplet) are all defined by the internal one-body density and the initial state. A time-dependent internal Kohn-Sham scheme is set up as a practical way to compute the internal density. The main difference from the traditional DFT formalism and Kohn-Sham scheme is the inclusion of the center-of-mass correlations in the functional.

  10. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  11. An Efficient and Privacy-Preserving Multiuser Cloud-Based LBS Query Scheme

    Directory of Open Access Journals (Sweden)

    Lu Ou

    2018-01-01

    Full Text Available Location-based services (LBSs) are increasingly popular in today’s society. People reveal their location information to LBS providers to obtain personalized services such as map directions, restaurant recommendations, and taxi reservations. Usually, LBS providers offer a user privacy protection statement to assure users that their private location information will not be given away. However, many LBSs run on third-party cloud infrastructures. It is challenging to guarantee user location privacy against curious cloud operators while still permitting users to query their own location information data. In this paper, we propose an efficient privacy-preserving cloud-based LBS query scheme for the multiuser setting. We encrypt LBS data and LBS queries with a hybrid encryption mechanism, which can efficiently implement privacy-preserving search over encrypted LBS data and is very suitable for the multiuser setting with secure and effective user enrollment and user revocation. This paper contains a security analysis and performance experiments to demonstrate the privacy-preserving properties and efficiency of our proposed scheme.
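
    The hybrid encryption mentioned above generally means encrypting the bulk LBS data with a symmetric key and protecting that key with public-key encryption. A generic sketch with the Python cryptography package (our illustration only; the paper's scheme additionally supports search over encrypted data and multiuser enrollment/revocation):

        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        # the data owner encrypts an LBS record with a fresh symmetric key
        record = b"restaurant,48.8584,2.2945"
        data_key = Fernet.generate_key()
        ciphertext = Fernet(data_key).encrypt(record)

        # the symmetric key is wrapped with an authorized user's public key
        user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        wrapped = user_key.public_key().encrypt(data_key, oaep)

        # the authorized user unwraps the key and decrypts the record
        assert Fernet(user_key.decrypt(wrapped, oaep)).decrypt(ciphertext) == record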

  12. Proposed criteria for the evaluation of an address assignment scheme in Botswana

    CSIR Research Space (South Africa)

    Ditsela, J

    2011-06-01

    Full Text Available propose criteria for an address assignment scheme in Botswana: a single set of place or area names; different address types for urban, rural and farm areas; principles for address numbering assignment; integration of different referencing systems; and a...

  13. Adaptive nonseparable vector lifting scheme for digital holographic data compression.

    Science.gov (United States)

    Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric

    2015-01-01

    Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction.
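
    For context, a single lifting step in its generic scalar form splits the signal into even and odd samples and applies predict/update operators; the vector, nonseparable scheme of the paper generalizes these two steps to vectors of hologram difference data (the expressions below are the textbook form, not the paper's operators):

        d_n = x_{2n+1} - P\!\left(\{x_{2n}\}\right), \qquad
        s_n = x_{2n} + U\!\left(\{d_n\}\right),

    where $P$ and $U$ are the predict and update filters, and the transform is inverted exactly by running the two steps in reverse order with the signs flipped.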

  14. Comparison in Schemes for Simulating Depositional Growth of Ice Crystal between Theoretical and Laboratory Data

    Science.gov (United States)

    Zhai, Guoqing; Li, Xiaofan

    2015-04-01

    The Bergeron-Findeisen process has been simulated using the parameterization scheme for the depositional growth of ice crystals with the temperature-dependent, theoretically predicted parameters in the past decades. Recently, Westbrook and Heymsfield (2011) calculated these parameters using the laboratory data from Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. There are three schemes that parameterize the depositional growth of ice crystals: Hsie et al. (1980), Krueger et al. (1995) and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments using the three parameterization schemes and the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated rainfall case in this study. The analysis of root-mean-squared difference and correlation coefficient between the simulation and observation of surface rain rate shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observational rain rate. The calculations of 5-day and model domain mean rain rates reveal that the three schemes with Takahashi laboratory-derived parameters tend to reduce the mean rain rate. The Krueger scheme together with the Takahashi laboratory-derived parameters generates the closest mean rain rate to the mean observational rain rate. The decrease in the mean rain rate caused by the Takahashi laboratory-derived parameters in the experiment with the Krueger scheme is associated with the reductions in the mean net condensation and the mean hydrometeor loss. These reductions correspond to the suppressed mean infrared radiative cooling due to the enhanced cloud ice and snow in the upper troposphere.

  15. One size fits all? An assessment tool for solid waste management at local and national levels

    Energy Technology Data Exchange (ETDEWEB)

    Broitman, Dani, E-mail: danib@techunix.technion.ac.il [Department of Natural Resources and Environment Management, Graduate school of Management, University of Haifa, Haifa 31905 (Israel); Ayalon, Ofira [Department of Natural Resources and Environment Management, Graduate school of Management, University of Haifa, Haifa 31905 (Israel); Kan, Iddo [Department of Agricultural Economics and Management, Faculty of Agricultural, Food and Environmental Quality Sciences, Rehovot 76100 (Israel)

    2012-10-15

    Highlights: • Waste management schemes are generally implemented at national or regional level. • Local conditions, characteristics and constraints are often neglected. • We developed an economic model able to compare multi-level waste management options. • A detailed test case with real economic data and a best-fit scenario is described. • Most efficient schemes combine clear National directives with local level flexibility. - Abstract: As environmental awareness rises, integrated solid waste management (WM) schemes are increasingly being implemented all over the world. The different WM schemes usually address issues such as landfilling restrictions (mainly due to methane emissions and competing land use), packaging directives and compulsory recycling goals. These schemes are, in general, designed at a national or regional level, whereas local conditions and constraints are sometimes neglected. When national WM top-down policies, in addition to setting goals, also dictate the methods by which they are to be achieved, local authorities lose their freedom to optimize their operational WM schemes according to their specific characteristics. There are a myriad of implementation options at the local level, and by carrying out a bottom-up approach the overall national WM system will be optimal on economic and environmental scales. This paper presents a model for optimizing waste strategies at a local level and evaluates this effect at a national level. This is achieved by using a waste assessment model which enables us to compare both the economic viability of several WM options at the local (single municipal authority) level, and aggregated results for regional or national levels. A test case based on various WM approaches in Israel (several implementations of mixed and separated waste) shows that local characteristics significantly

  16. Review and Analysis of Cryptographic Schemes Implementing Threshold Signature

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-03-01

    Full Text Available This work is devoted to the study of threshold signature schemes. A systematization of threshold signature schemes was carried out, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were investigated. Different methods of generation and verification of threshold signatures were explored, e.g. those used in mobile agents, Internet banking and e-currency. The significance of the work is determined by the reduction of the level of counterfeit electronic documents signed by a certain group of users.
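
    The Lagrange-polynomial constructions surveyed above rely on recombining t shares by evaluating the interpolating polynomial at zero; a minimal sketch of that reconstruction step over a prime field (generic Shamir-style sharing, not any particular scheme from the survey):

        def reconstruct(shares, p):
            """Recover f(0) from t points (x_i, y_i) of a degree-(t-1) polynomial mod p."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if j != i:
                        num = (num * -xj) % p          # product of (0 - x_j)
                        den = (den * (xi - xj)) % p    # product of (x_i - x_j)
                secret = (secret + yi * num * pow(den, -1, p)) % p
            return secret

        # toy example: secret 42 shared with f(x) = 42 + 7x mod 97, threshold t = 2
        assert reconstruct([(1, 49), (3, 63)], 97) == 42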

  17. Costs and effects of the Tanzanian national voucher scheme for insecticide-treated nets

    Directory of Open Access Journals (Sweden)

    Hanson Kara

    2008-02-01

    Full Text Available Abstract Background The cost-effectiveness of insecticide-treated nets (ITNs) in reducing morbidity and mortality is well established. International focus has now moved on to how best to scale up coverage and what financing mechanisms might be used to achieve this. The approach in Tanzania has been to deliver a targeted subsidy for those most vulnerable to the effects of malaria while at the same time providing support to the development of the commercial ITN distribution system. In October 2004, with funds from the Global Fund to Fight AIDS, Tuberculosis and Malaria, the government launched the Tanzania National Voucher Scheme (TNVS), a nationwide discounted voucher scheme for ITNs for pregnant women and their infants. This paper analyses the costs and effects of the scheme and compares it with other approaches to distribution. Methods Economic costs were estimated using the ingredients approach whereby all resources required in the delivery of the intervention (including the user contribution) are quantified and valued. Effects were measured in terms of number of vouchers used (and therefore nets delivered) and treated net-years. Estimates were also made for the cost per malaria case and death averted. Results and Conclusion The total financial cost of the programme represents around 5% of the Ministry of Health's total budget. The average economic cost of delivering an ITN using the voucher scheme, including the user contribution, was $7.57. The cost-effectiveness results are within the benchmarks set by other malaria prevention studies. The Government of Tanzania's approach to scaling up ITNs uses both the public and private sectors in order to achieve and sustain the level of coverage required to meet the Abuja targets. The results presented here suggest that the TNVS is a cost-effective strategy for delivering subsidized ITNs to targeted vulnerable groups.

  18. Time Reversal UWB Communication System: A Novel Modulation Scheme with Experimental Validation

    Directory of Open Access Journals (Sweden)

    Khaleghi A

    2010-01-01

    Full Text Available A new modulation scheme is proposed for a time reversal (TR) ultra wide-band (UWB) communication system. The new modulation scheme uses binary pulse amplitude modulation (BPAM) and adds a new level of modulation to increase the data rate of a TR UWB communication system. Multiple data bits can be transmitted simultaneously at the cost of little added interference. Bit error rate (BER) performance and the maximum achievable data rate of the new modulation scheme are theoretically analyzed. Two separate measurement campaigns are carried out to analyze the proposed modulation scheme. In the first campaign, the frequency responses of a typical indoor channel are measured and the performance is studied by simulations using the measured frequency responses. Theoretical and simulated performances are in strong agreement with each other. Furthermore, the BER performance of the proposed modulation scheme is compared with the performance of existing modulation schemes. It is shown that the proposed modulation scheme outperforms QAM and PAM in an AWGN channel. In the second campaign, an experimental validation of the proposed modulation scheme is performed. It is shown that the performances from the two measurement campaigns are in good agreement.

  19. Certificateless short sequential and broadcast multisignature schemes using elliptic curve bilinear pairings

    Directory of Open Access Journals (Sweden)

    SK Hafizul Islam

    2014-01-01

    Full Text Available Several certificateless short signature and multisignature schemes based on traditional public key infrastructure (PKI) or identity-based cryptosystem (IBC) have been proposed in the literature; however, no certificateless short sequential (or serial) multisignature (CL-SSMS) or short broadcast (or parallel) multisignature (CL-SBMS) schemes have been proposed. In this paper, we propose two such new CL-SSMS and CL-SBMS schemes based on elliptic curve bilinear pairing. Like any certificateless public key cryptosystem (CL-PKC), the proposed schemes are free from the public key certificate management burden and the private key escrow problem as found in PKI- and IBC-based cryptosystems, respectively. In addition, the requirements of the expected security level and the fixed length signature with constant verification time have been achieved in our schemes. The schemes are communication efficient as the length of the multisignature is equivalent to a single elliptic curve point and thus become the shortest possible multisignature scheme. The proposed schemes are then suitable for communication systems having resource constrained devices such as PDAs, mobile phones, RFID chips, and sensors where the communication bandwidth, battery life, computing power and storage space are limited.

  20. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
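
    A minimal sketch of the shell-thresholding idea (our simplification, with an Otsu threshold standing in for the histogram-based optimal threshold; the paper's speed function and stopping rule are more involved):

        import numpy as np
        from skimage.filters import threshold_otsu

        def shell_threshold(image, phi, shell_width):
            """Threshold computed only from voxels in a band around the level-set front."""
            shell = np.abs(phi) < shell_width      # the propagating shell around phi = 0
            t = threshold_otsu(image[shell])       # histogram-based threshold inside the shell
            object_fraction = (image[shell] > t).mean()
            return t, object_fraction              # evolution stops as the fraction nears 0.5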

  1. A risk-based classification scheme for genetically modified foods. III: Evaluation using a panel of reference foods.

    Science.gov (United States)

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.

  2. Exergame Grading Scheme: Concept Development and Preliminary Psychometric Evaluations in Cancer Survivors

    Directory of Open Access Journals (Sweden)

    Hsiao-Lan Wang

    2017-01-01

    Full Text Available The challenge of using exergames to promote physical activity among cancer survivors lies in the selection of the exergames that match their fitness level. There is a need for a standardized grading scheme by which to judge an exergame’s capacity to address specific physical fitness attributes with different levels of physical engagement. The study aimed to develop an Exergame Grading Scheme and preliminarily evaluate its psychometric properties. Fourteen (14) items were created from the human movement and exergame literature. The content validity index (CVI) was rated by content experts with two consecutive rounds (N=5 and N=3) independently. The interrater reliability (IRR) was determined by two raters who used the Exergame Grading Scheme to determine the grading score of the five exergames performed by two cancer survivors (N=10). Each item had a score of 1 for item-level CVI and 1 for k. For IRR, 9 items had rho values of 1, 1 item had 0.93, and 4 items had between 0.80 and 0.89. This valid and reliable Exergame Grading Scheme makes it possible to develop a personalized physical activity program using any type of exergame or fitness mobile application in rehabilitation practice to meet the needs of cancer survivors.

  3. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well defined, simple error correction scheme that works with erroneous copies at the receiver. When combined with ARQ protocols in networks, it offers higher throughput than basic ARQ protocols. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that will correct errors even if they occur at the same bit locations of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, will offer higher throughput. (author)
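
    A minimal sketch of the basic packet-combining step the note builds on: two erroneous copies are compared, and only the bit positions where they disagree are searched for a pattern that passes the integrity check. The proposed refinement transmits the second copy bit-reversed so that channel errors hitting the same positions in both transmissions no longer corrupt the same logical bits; names and the callback below are our illustration, not the note's notation.

        from itertools import product

        def combine(copy1, copy2, crc_ok):
            """Search the disagreeing bit positions of two received copies of a packet."""
            diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
            for flips in product([0, 1], repeat=len(diff)):
                candidate = list(copy1)
                for pos, take_second in zip(diff, flips):
                    if take_second:
                        candidate[pos] = copy2[pos]
                if crc_ok(candidate):
                    return candidate
            return None   # errors at identical bit positions defeat plain combining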

  4. Comparison of nutrient profiling schemes for restricting the marketing of food and drink to children.

    Science.gov (United States)

    Brinsden, H; Lobstein, T

    2013-08-01

    The food and beverage industry have made voluntary pledges to reduce children's exposure to the marketing of energy-dense foods and beverages, and in 2012 announced the replacement of company-specific nutrient profiling schemes with uniform sets of criteria from 2013 (in the USA) and 2014 (in the European Union [EU]). To compare the proposed USA and EU nutrient profiling schemes and three government-led schemes, paying particular attention to the differences in sugar criteria. Food and beverage products permitted to be advertised in the USA under pre-2013 criteria were examined using five nutrient profiling schemes: the forthcoming USA and EU schemes and three government-approved schemes: the US Interagency Working Group (IWG) proposals, the United Kingdom Office of Communications (OfCom) regulations and the Danish Forum co-regulatory Code. Under the new USA and EU nutrient profiling schemes, 88 (49%) and 73 (41%) of a total of 178 products would be permitted to be advertised, respectively. The US IWG permitted 25 (14%) products; the Ofcom regulations permitted 65 (37%) and the Danish Code permitted 13 (7%). Government-led schemes are significantly more restrictive than industry-led schemes, primarily due to their tougher sugar criteria. The Danish Forum (93%) and USA IWG scheme (86%) are the most restrictive of the five examined. Further harmonization of nutrient profiling schemes is needed to reduce children's exposure to the promotion of energy-dense foods. © 2013 The Authors. Pediatric Obesity © 2013 International Association for the Study of Obesity.

  5. Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther H.; Sergui, John S.

    2013-01-01

    A large-scale network that supports a large number of users can have an aggregate data rate of hundreds of Mbps at any time. High-fidelity simulation of a large-scale network might be too complicated and memory-intensive for typical commercial-off-the-shelf (COTS) tools. Unlike a large commercial wide-area-network (WAN) that shares diverse network resources among diverse users and has a complex topology that requires routing mechanism and flow control, the ground communication links of a space network operate under the assumption of a guaranteed dedicated bandwidth allocation between specific sparse endpoints in a star-like topology. This work solved the network design problem of estimating the bandwidths of a ground network architecture option that offer different service classes to meet the latency requirements of different user data types. In this work, a top-down analysis and simulation approach was created to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. These techniques were used to estimate the WAN bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network. A new analytical approach, called the "leveling scheme," was developed to model the store-and-forward mechanism of the network data flow. The term "leveling" refers to the spreading of data across a longer time horizon without violating the corresponding latency requirement of the data type. Two versions of the leveling scheme were developed: 1. A straightforward version that simply spreads the data of each data type across the time horizon and doesn't take into account the interactions among data types within a pass, or between data types across overlapping passes at a network node, and is inherently sub-optimal. 2. Two-state Markov leveling scheme that takes into account the second order behavior of
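
    A minimal sketch (ours) of the straightforward leveling variant: each data item is delivered at a constant rate over its latency window, and the link is sized for the peak of the summed rates.

        def required_bandwidth(items, horizon_s):
            """items: list of (start_s, volume_bits, latency_s); returns peak rate in bit/s."""
            rate = [0.0] * horizon_s
            for start, volume, latency in items:
                per_second = volume / latency            # level the item over its window
                for t in range(start, min(start + latency, horizon_s)):
                    rate[t] += per_second
            return max(rate)

        # toy example: two data types with different latency requirements
        print(required_bandwidth([(0, 8e6, 10), (5, 2e6, 4)], horizon_s=20))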

  6. A digital memories based user authentication scheme with privacy preservation.

    Directory of Open Access Journals (Sweden)

    JunLiang Liu

    Full Text Available The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attacks, key-loggers, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.

  7. The new Exponential Directional Iterative (EDI) 3-D Sn scheme for parallel adaptive differencing

    International Nuclear Information System (INIS)

    Sjoden, G.E.

    2005-01-01

    The new Exponential Directional Iterative (EDI) discrete ordinates (Sn) scheme for 3-D Cartesian Coordinates is presented. The EDI scheme is a logical extension of the positive, efficient Exponential Directional Weighted (EDW) Sn scheme currently used as the third level of the adaptive spatial differencing algorithm in the PENTRAN parallel discrete ordinates solver. Here, the derivation and advantages of the EDI scheme are presented; EDI uses EDW-rendered exponential coefficients as initial starting values to begin a fixed point iteration of the exponential coefficients. One issue that required evaluation was an iterative cutoff criterion to prevent the application of an unstable fixed point iteration; although this was needed in some cases, it was readily treated with a default to EDW. Iterative refinement of the exponential coefficients in EDI typically converged in fewer than four fixed point iterations. Moreover, EDI yielded more accurate angular fluxes compared to the other schemes tested, particularly in streaming conditions. Overall, it was found that the EDI scheme was up to an order of magnitude more accurate than the EDW scheme on a given mesh interval in streaming cases, and is potentially a good candidate as a fourth-level differencing scheme in the PENTRAN adaptive differencing sequence. The 3-D Cartesian computational cost of EDI was only about 20% more than the EDW scheme, and about 40% more than Diamond Zero (DZ). More evaluation and testing are required to determine suitable upgrade metrics for EDI to be fully integrated into the current adaptive spatial differencing sequence in PENTRAN. (author)

  8. Interference mitigation enhancement of switched-based scheme in over-loaded femtocells

    KAUST Repository

    Gaaloul, Fakhreddine

    2012-06-01

    This paper proposes adequate methods to improve the interference mitigation capability of a recently investigated switched-based interference reduction scheme in short-range open-access and over-loaded femtocells. It is assumed that the available orthogonal channels for the femtocell network are distributed among operating access points in close vicinity, each of which knows its allocated channels a priori. For the case when the feedback links are capacity-limited and the available channels can be universally shared and simultaneously used, the paper presents enhanced schemes to identify a channel to serve the desired scheduled user by maintaining the interference power level within a tolerable range. They attempt to either complement the switched-based scheme by minimum interference channel selection or adopt different interference thresholds on available channels, while aiming to reduce the channel examination load. The performance of the proposed schemes is quantified and then compared with that of the single-threshold switched-based scheme via numerical and simulation results. © 2012 IEEE.

  9. The Effect of a Monitoring Scheme on Tutorial Attendance and Assignment Submission

    Science.gov (United States)

    Burke, Grainne; Mac an Bhaird, Ciaran; O'Shea, Ann

    2013-01-01

    We report on the implementation of a monitoring scheme by the Department of Mathematics and Statistics at the National University of Ireland Maynooth. The scheme was introduced in an attempt to increase the level and quality of students' engagement with certain aspects of their undergraduate course. It is well documented that students with higher…

  10. Enhancing Community Detection By Affinity-based Edge Weighting Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Andy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sanders, Geoffrey [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Henson, Van [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-05

    Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs that offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and therefore, it is ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.
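
    A minimal sketch of the pre-weighting idea, with the Jaccard coefficient of neighbourhoods used as a simple stand-in for the vertex affinity actually proposed in the paper:

        import networkx as nx

        def weight_edges_by_affinity(G):
            """Attach a proximity-based weight to every edge before community detection."""
            for u, v in G.edges():
                nu, nv = set(G[u]), set(G[v])
                union = nu | nv
                G[u][v]["weight"] = len(nu & nv) / len(union) if union else 0.0
            return G

        G = weight_edges_by_affinity(nx.karate_club_graph())
        parts = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
        print([sorted(c) for c in parts])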

  11. National Disability Insurance Scheme, health, hospitals and adults with intellectual disability.

    Science.gov (United States)

    Wallace, Robyn A

    2018-03-01

    Preventable poor health outcomes for adults with intellectual disability in health settings have been known about for years. Subsequent analysis and the sorts of reasonable adjustments required in health and disability support settings to address these health gaps are well described, but have not really been embedded in practice in any significant way in either setting. As far as health is concerned, implementation of the National Disability Insurance Scheme (NDIS, the Scheme) affords an opportunity to recognise individual needs of people with intellectual disability to provide reasonable and necessary functional support for access to mainstream health services, to build capacity of mainstream health providers to supply services and to increase individual capacity to access services. Together these strands have potential to transform health outcomes. Success of the Scheme, however, rests on as yet incompletely defined operational interaction between NDIS and mainstream health services and inherently involves the disability sector. This interaction is especially relevant for adults with intellectual disability, known high users of hospitals and for whom hospital outcomes are particularly poor and preventable. Keys to better hospital outcomes are first, the receiving of quality person-centred healthcare from physicians and hospitals taking into account significance of intellectual disability and second, formulation of organised quality functional supports during hospitalisation. Achieving these require sophisticated engagement between consumers, the National Disability Insurance Agency, Commonwealth, State and Territory government leaders, senior hospital and disability administrators, NDIS service providers and clinicians and involves cross fertilisation of values, sharing of operational policies and procedures, determination of boundaries of fiscal responsibility for functional supports in hospital. © 2018 Royal Australasian College of Physicians.

  12. A circuit scheme to control current surge for RFID-NVM pumps

    Energy Technology Data Exchange (ETDEWEB)

    Li Ming; Kang Jinfeng; Wang Yangyuan [Institute of Microelectronics, Peking University, Beijing 100871 (China); Yang Liwu, E-mail: prettynecess@163.co [Semiconductor Manufacturing International Corporation, Shanghai 201203 (China)

    2010-02-15

    This paper presents a new circuit scheme to control the current surge in the boosting phase of a radio frequency identification nonvolatile memory (RFID-NVM) pump. By introducing a circuit block consisting of a current reference and a current mirror, the new circuit scheme can keep the period-average current of the pump constantly below the desired level, for example, 2.5 μA. Therefore, it can prevent the rectified supply of the RFID tag IC from collapsing in the boosting phase of the pump. The presented scheme can effectively reduce the voltage drop on the rectified supply from more than 50% to even zero, while requiring less area. Moreover, an analytical expression to calculate the boosting time of a pump in the new scheme is developed. (semiconductor integrated circuits)

  13. A circuit scheme to control current surge for RFID-NVM pumps

    International Nuclear Information System (INIS)

    Li Ming; Kang Jinfeng; Wang Yangyuan; Yang Liwu

    2010-01-01

    This paper presents a new circuit scheme to control the current surge in the boosting phase of a radio frequency identification nonvolatile memory (RFID-NVM) pump. By introducing a circuit block consisting of a current reference and a current mirror, the new circuit scheme can keep the period-average current of the pump constantly below the desired level, for example, 2.5 μA. Therefore, it can prevent the rectified supply of the RFID tag IC from collapsing in the boosting phase of the pump. The presented scheme can effectively reduce the voltage drop on the rectified supply from more than 50% to even zero, while requiring less area. Moreover, an analytical expression to calculate the boosting time of a pump in the new scheme is developed. (semiconductor integrated circuits)

  14. Application of a robust and efficient Lagrangian particle scheme to soot transport in turbulent flames

    KAUST Repository

    Attili, Antonio

    2013-09-01

    A Lagrangian particle scheme is applied to the solution of soot dynamics in turbulent nonpremixed flames. Soot particulate is described using a method of moments and the resulting set of continuum advection-reaction equations is solved using the Lagrangian particle scheme. The key property of the approach is the independence between advection, described by the movement of Lagrangian notional particles along pathlines, and internal aerosol processes, evolving on each notional particle via source terms. Consequently, the method overcomes the issues in Eulerian grid-based schemes for the advection of moments: errors in the advective fluxes pollute the moments compromising their realizability and the stiffness of source terms weakens the stability of the method. The proposed scheme exhibits superior properties with respect to conventional Eulerian schemes in terms of stability, accuracy, and grid convergence. Taking into account the quality of the solution, the Lagrangian approach can be computationally more economical than commonly used Eulerian schemes as it allows the resolution requirements dictated by the different physical phenomena to be independently optimized. Finally, the scheme possesses excellent scalability on massively parallel computers. © 2013 Elsevier Ltd.

  15. Improved inhalation technology for setting safe exposure levels for workplace chemicals

    Science.gov (United States)

    Stuart, Bruce O.

    1993-01-01

Threshold Limit Values recommended as allowable air concentrations of a chemical in the workplace are often based upon a no-observed-effect level (NOEL) determined by experimental inhalation studies using rodents. A 'safe level' for human exposure must then be estimated by the use of generalized safety factors in an attempt to extrapolate from experimental rodents to man. The recent development of chemical-specific, physiologically based toxicokinetics makes use of measured physiological, biochemical, and metabolic parameters to construct a validated model that is able to 'scale up' rodent response data to predict the behavior of the chemical in man. This procedure is made possible by recent advances in personal computer software and the emergence of appropriate biological data, and it provides an analytical tool for much more reliable risk evaluation and for setting airborne chemical exposure levels for humans.

  16. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

Yamamoto et al. developed a system for computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method of their approach, they employed a constant threshold value for the edge indicator function related to the speed function of the level set method. However, it would be appropriate to adjust the threshold value to each MS candidate region, because the edge magnitudes of MS candidates differ from each other. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). We constructed the ANN to adjust the threshold value of the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method so that TP regions can be segmented and FP regions can be removed. Our proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of MS candidate regions was 0.717 on average. (author)
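
    A common form of the edge indicator used in level set segmentation is g = 1/(1 + (|∇(G_σ * I)|/T)²), where the threshold T scales the sensitivity to edge magnitude. The sketch below is an illustration of how a per-candidate threshold, such as one predicted by an ANN, reshapes the indicator; it is not the authors' implementation, and the synthetic image is a placeholder.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def edge_indicator(image, threshold, sigma=1.0):
        """Edge indicator g in (0, 1]: close to 0 at strong edges, near 1 in flat
        regions. 'threshold' scales the gradient magnitude and could be supplied
        per candidate region (e.g., by an ANN) instead of being a global constant."""
        grad = gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
        return 1.0 / (1.0 + (grad / threshold) ** 2)

    # Synthetic example: a bright blob on a dark background
    y, x = np.mgrid[0:64, 0:64]
    image = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)

    g_strict = edge_indicator(image, threshold=0.01)   # small T: indicator drops sharply at edges
    g_loose = edge_indicator(image, threshold=0.10)    # larger T: weaker edges are ignored
    print(g_strict.min(), g_loose.min())
    ```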

  17. A full quantum network scheme

    International Nuclear Information System (INIS)

    Ma Hai-Qiang; Wei Ke-Jin; Yang Jian-Hui; Li Rui-Xue; Zhu Wu

    2014-01-01

We present a full quantum network scheme using a modified BB84 protocol. Unlike other quantum network schemes, it allows quantum keys to be distributed between two arbitrary users with the help of an intermediary detecting user. Moreover, it has good expansibility and prevents all potential attacks that exploit loopholes in detectors, so it is more practical to apply. Because fiber birefringence effects are automatically compensated, the scheme is distinctly stable both in principle and in experiment. The simple components required for each user make our scheme easy to adapt to many applications. The experimental results demonstrate the stability and feasibility of this scheme. (general)

  18. An Asset Protection Scheme for Banks Exposed to Troubled Loan Portfolios

    DEFF Research Database (Denmark)

    Grosen, Anders; Jessen, Pernille; Kokholm, Thomas

    2014-01-01

We examine a specific portfolio credit derivative, an Asset Protection Scheme (APS), and its applicability as a discretionary regulatory tool to reduce asymmetric information and help restore the capital base of troubled banks. The APS can be a fair-valued contract with an appropriate structure of incentives. We apply two alternative multivariate structural default risk models: the classical Gaussian Merton model and a model based on Normal Inverse Gaussian processes. Using annual farm-level data from 1996 to 2009, we take the Danish agricultural sector as a case study and price an APS on an agricultural loan portfolio. We compute the economic capital for this loan portfolio with and without an APS. Moreover, we illustrate how model risk in the form of parameter uncertainty is reduced when an APS is attached to the loan portfolio.
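
    In the classical Gaussian Merton framework mentioned above, a firm defaults when its asset value falls below the face value of its debt at the horizon, so the default probability follows from lognormal asset dynamics. The sketch below computes this single-firm probability; the numbers are illustrative and the snippet is not the paper's multivariate or Normal Inverse Gaussian model.

    ```python
    from math import log, sqrt
    from scipy.stats import norm

    def merton_default_probability(V0, D, mu, sigma, T):
        """Probability that the asset value falls below the debt face value D at
        horizon T, assuming geometric Brownian motion for assets (Merton model)."""
        d2 = (log(V0 / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        return norm.cdf(-d2)

    # Illustrative numbers only (not calibrated to the Danish farm data)
    print(merton_default_probability(V0=100.0, D=80.0, mu=0.03, sigma=0.25, T=1.0))
    ```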

  19. Transmission usage cost allocation schemes

    International Nuclear Information System (INIS)

    Abou El Ela, A.A.; El-Sehiemy, R.A.

    2009-01-01

This paper presents different suggested transmission usage cost allocation (TCA) schemes for the system individuals. Different independent system operator (ISO) visions are presented using the pro rata and flow-based TCA methods. Two flow-based TCA (FTCA) schemes are proposed. The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concept to lossy networks through a two-stage procedure. The second FTCA scheme is based on modified sensitivity factors (MSF). These factors are developed from the actual measurements of power flows in transmission lines and the power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)
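
    As a point of reference for the flow-based alternatives, the pro rata method simply splits the total transmission usage cost among participants in proportion to their injections or withdrawals, ignoring how the network is actually used. A minimal sketch under that simple reading; the bus names and cost figures are illustrative.

    ```python
    def pro_rata_allocation(total_cost, injections):
        """Allocate a transmission usage cost among participants in proportion to
        their power injections/withdrawals (MW), irrespective of network usage."""
        total_power = sum(injections.values())
        return {bus: total_cost * p / total_power for bus, p in injections.items()}

    # Illustrative: 1000 $/h of transmission cost shared among three generators
    print(pro_rata_allocation(1000.0, {"G1": 150.0, "G2": 250.0, "G3": 100.0}))
    ```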

  20. Research on the supercapacitor support schemes for LVRT of variable-frequency drive in the thermal power plant

    Science.gov (United States)

    Han, Qiguo; Zhu, Kai; Shi, Wenming; Wu, Kuayu; Chen, Kai

    2018-02-01

In order to solve the problem of low voltage ride-through (LVRT) for the variable-frequency drives (VFD) of major auxiliary equipment in thermal power plants, a scheme in which a supercapacitor is paralleled in the DC link of the VFD is put forward. Furthermore, two solutions, direct parallel support and voltage-boost parallel support of the supercapacitor, are proposed. The capacitor values for the relevant motor loads are calculated according to the law of energy conservation, and they are verified by Matlab simulation. Finally, a test prototype is set up, and the test results prove the feasibility of the proposed schemes.
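
    The energy-conservation sizing mentioned above amounts to requiring that the energy released by the supercapacitor between the nominal and the minimum allowed DC-link voltages covers the load energy over the ride-through interval. A hedged numeric sketch; the power, duration, and voltage levels are illustrative, not the paper's values.

    ```python
    def supercapacitor_size(p_load, t_ride, v_nom, v_min):
        """Capacitance [F] such that 0.5*C*(v_nom^2 - v_min^2) equals the energy
        drawn by the load (p_load [W]) during the ride-through interval t_ride [s]."""
        return 2.0 * p_load * t_ride / (v_nom**2 - v_min**2)

    # Illustrative example: 30 kW drive, 0.5 s ride-through, DC link sagging 540 V -> 430 V
    print(supercapacitor_size(p_load=30e3, t_ride=0.5, v_nom=540.0, v_min=430.0), "F")
    ```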

  1. Matroids and quantum-secret-sharing schemes

    International Nuclear Information System (INIS)

    Sarvepalli, Pradeep; Raussendorf, Robert

    2010-01-01

    A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.

  2. Reassessment of MLST schemes for Leptospira spp. typing worldwide.

    Science.gov (United States)

    Varni, Vanina; Ruybal, Paula; Lauthier, Juan José; Tomasini, Nicolás; Brihuega, Bibiana; Koval, Ariel; Caimi, Karina

    2014-03-01

Leptospirosis is a neglected zoonosis of global importance. Several multilocus sequence typing (MLST) methods have been developed for Leptospira spp., the causative agent of leptospirosis. In this study we reassessed the most commonly used MLST schemes on a set of worldwide isolates, in order to select the loci that achieve the maximum power of discrimination for typing Leptospira spp. The global eBURST algorithm was used to detect clonal complexes among STs, and phylogenetic relationships among concatenated and individual sequences were inferred through maximum likelihood (ML) analysis. The evaluation of 12 loci combined to type a subset of strains rendered 57 different STs. Seven of these loci were selected for a final scheme upon studying the number of alleles and polymorphisms, the typing efficiency, the discriminatory power and the dN/dS ratio per nucleotide site for each locus. This new 7-locus scheme was applied to a wider collection of worldwide strains. The ML tree constructed from concatenated sequences of the 7 loci identified 6 major clusters corresponding to 6 Leptospira species. Global eBURST established 8 CCs, which showed that genotypes were clearly related by geographic origin and host. ST52 and ST47, represented mostly by Argentinian isolates, grouped the largest numbers of isolates. These isolates were serotyped as serogroups Pomona and Icterohaemorrhagiae, showing a unidirectional correlation in which isolates with the same ST belong to the same serogroup. In summary, this scheme combines the best loci from the most widely used MLST schemes for Leptospira spp. and supports worldwide strain classification. The Argentinian isolates exhibited congruence between allelic profile and serogroup, providing an alternative to serological methods. Published by Elsevier B.V.
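
    The discriminatory power of a typing scheme is commonly quantified with the Hunter-Gaston (Simpson's) index, D = 1 - [1/(N(N-1))] Σ n_j(n_j - 1), where n_j is the number of isolates assigned to the j-th type. A generic sketch of the index; the ST labels below are made up and are not the study's data.

    ```python
    from collections import Counter

    def hunter_gaston_index(type_assignments):
        """Hunter-Gaston discriminatory index of a typing scheme: the probability
        that two isolates drawn at random belong to different types."""
        counts = Counter(type_assignments)
        n = len(type_assignments)
        return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    # Hypothetical sequence types for 10 isolates
    print(hunter_gaston_index(["ST52", "ST52", "ST47", "ST47", "ST47",
                               "ST17", "ST37", "ST37", "ST149", "ST52"]))
    ```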

  3. Modeling and Analysis of Hybrid Cellular/WLAN Systems with Integrated Service-Based Vertical Handoff Schemes

    Science.gov (United States)

    Xia, Weiwei; Shen, Lianfeng

    We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.

  4. The use of an integrated variable fuzzy sets in water resources management

    Science.gov (United States)

    Qiu, Qingtai; Liu, Jia; Li, Chuanzhe; Yu, Xinzhe; Wang, Yang

    2018-06-01

Based on the evaluation of the present situation of water resources and the development of water conservancy projects and the social economy, the optimal allocation of regional water resources is an increasing need in water resources management. It is also the most effective way to promote a harmonious relationship between humans and water. In view of the limitations of traditional evaluations, which usually rely on a single-index model for the optimal allocation of regional water resources, an integrated variable fuzzy sets model (IVFS), built on the theory of variable fuzzy sets (VFS) and system dynamics (SD), is proposed in this paper to address dynamically complex problems in regional water resources management. The model is applied to evaluate the level of the optimal allocation of regional water resources of Zoucheng in China. Results show that the evaluation levels of the water resources allocation schemes range from 2.5 to 3.5, generally trending toward the lower level. The model provides a means of assessing water resources management for achieving optimal regional allocation, and it noticeably improves the reliability of the assessment by using the eigenvector of level H.

  5. Optimum Electrode Configurations for Two-Probe, Four-Probe and Multi-Probe Schemes in Electrical Resistance Tomography for Delamination Identification in Carbon Fiber Reinforced Composites

    Directory of Open Access Journals (Sweden)

    Luis Waldo Escalona-Galvis

    2018-04-01

Internal damage in Carbon Fiber Reinforced Polymer (CFRP) composites modifies the internal electrical conductivity of the composite material. Electrical Resistance Tomography (ERT) is a non-destructive evaluation (NDE) technique that determines the extent of damage based on electrical conductivity changes. Implementation of ERT for damage identification in CFRP composites requires the optimal selection of the sensing sites for accurate results. This selection depends on the measuring scheme used. The present work uses an effective independence (EI) measure for selecting the minimum set of measurements for ERT damage identification using three measuring schemes: two-probe, four-probe and multi-probe. The electrical potential field in two CFRP laminate layups with 14 electrodes is calculated using finite element analyses (FEA) for a set of specified delamination damage cases. The measuring schemes consider the cases of 14 electrodes distributed on both sides and seven electrodes on only one side of the laminate for each layup. The effectiveness of EI reduction is demonstrated by comparing the inverse identification results of delamination cases for the full and the reduced sets using the measuring schemes and electrode sets. This work shows that the EI measure optimally reduces the electrodes and electrode combinations in ERT-based damage identification for different measuring schemes.

  6. Impact and Suggestion of Column-to-Surface Vertical Correction Scheme on the Relationship between Satellite AOD and Ground-Level PM2.5 in China

    Directory of Open Access Journals (Sweden)

    Wei Gong

    2017-10-01

As China is suffering from severe fine particle pollution from dense industrialization and urbanization, satellite-derived aerosol optical depth (AOD) has been widely used for estimating particulate matter with an aerodynamic diameter less than 2.5 μm (PM2.5). However, the correlation between satellite AOD and ground-level PM2.5 could be influenced by the aerosol vertical distribution, as satellite AOD represents the entire column rather than just the ground-level concentration. Here, a new column-to-surface vertical correction scheme is proposed to improve separation of the near-surface and elevated aerosol layers, based on the ratio of the integrated extinction coefficient within 200–500 m above ground level (AGL), using the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) aerosol profile products. There are distinct differences in climate, meteorology, terrain, and aerosol transmission throughout China, so comparisons between vertical correction via the CALIOP ratio and via the planetary boundary layer height (PBLH) were conducted in different regions from 2014 to 2015, combined with the original Pearson coefficient between satellite AOD and ground-level PM2.5 for reference. Furthermore, the best vertical correction scheme was suggested for different regions to achieve optimal correlation with PM2.5, based on the analysis and discussion of regional and seasonal characteristics of the aerosol vertical distribution. According to our results and discussions, vertical correction via PBLH is recommended in northwestern China, where the PBLH varies dramatically, stretching or compressing the surface aerosol layer; vertical correction via the CALIOP ratio is recommended in northeastern China, southwestern China, Central China (excluding summer), the North China Plain (excluding Beijing), and the southeast coast in spring, areas that are susceptible to exogenous aerosols and exhibit an elevated aerosol layer; and original AOD without vertical correction is

  7. A virtual network computer's optical storage virtualization scheme

    Science.gov (United States)

    Wang, Jianzong; Hu, Huaixiang; Wan, Jiguang; Wang, Peng

    2008-12-01

In this paper, we present the architecture and implementation of a virtual network computer (VNC) optical storage virtualization scheme called VOSV. Its task is to manage the mapping of virtual optical storage to physical optical storage, a technique known as optical storage virtualization. The design of VOSV is aimed at the optical storage resources of different clients and servers that have high read-sharing patterns. VOSV uses several schemes, such as a two-level cache mechanism, a VNC-server embedded module and the iSCSI protocol, to improve performance. The results measured on the prototype are encouraging, indicating that VOSV provides high I/O performance.

  8. Kir2.1 channels set two levels of resting membrane potential with inward rectification.

    Science.gov (United States)

    Chen, Kuihao; Zuo, Dongchuan; Liu, Zheng; Chen, Haijun

    2018-04-01

Strong inward rectifier K+ channels (Kir2.1) mediate background K+ currents primarily responsible for maintenance of the resting membrane potential. Multiple types of cells exhibit two levels of resting membrane potential. Kir2.1 and K2P1 currents counterbalance, partially accounting for the phenomenon in human cardiomyocytes in subphysiological extracellular K+ concentrations or pathological hypokalemic conditions. The mechanism by which Kir2.1 channels contribute to the two levels of resting membrane potential in different types of cells is not well understood. Here we test the hypothesis that Kir2.1 channels set two levels of resting membrane potential with inward rectification. Under hypokalemic conditions, Kir2.1 currents counterbalance HCN2 or HCN4 cation currents in CHO cells that heterologously express both channels, generating N-shaped current-voltage relationships that cross the voltage axis three times and reconstituting two levels of resting membrane potential. Blockade of HCN channels eliminated the phenomenon in K2P1-deficient Kir2.1-expressing human cardiomyocytes derived from induced pluripotent stem cells or in CHO cells expressing both Kir2.1 and HCN2 channels. Weakly inward rectifier Kir4.1 or inward rectification-deficient Kir2.1•E224G mutant channels do not set such two levels of resting membrane potential when co-expressed with HCN2 channels in CHO cells or when overexpressed in human cardiomyocytes derived from induced pluripotent stem cells. These findings demonstrate a common mechanism whereby Kir2.1 channels set two levels of resting membrane potential with inward rectification by balancing inward currents through different cation channels, such as hyperpolarization-activated HCN channels or hypokalemia-induced K2P1 leak channels.

  9. Comparison of actinides and fission products recycling scheme with the normal plutonium recycling scheme in fast reactors

    Directory of Open Access Journals (Sweden)

    Salahuddin Asif

    2013-01-01

Multiple recycling of actinides and non-volatile fission products in fast reactors through the dry re-fabrication/reprocessing Atomics International reduction oxidation process has been studied as a possible way to reduce the long-term potential hazard of nuclear waste compared to that resulting from reprocessing in a wet PUREX process. Calculations have been made to compare the actinides and fission products recycling scheme with the normal plutonium recycling scheme in a fast reactor. For this purpose, the Karlsruhe version of the isotope generation and depletion code, KORIGEN, has been modified accordingly. An entirely novel fission product yields library for fast reactors has been created, which has replaced the old KORIGEN fission products library. For the purposes of this study, the standard 26-group data set, KFKINR, developed at Forschungszentrum Karlsruhe, Germany, has been extended by the addition of the cross-sections of 13 important actinides and the 68 most important fission products. It has been confirmed that these 68 fission products constitute about 95% of the total fission product yield and about 99.5% of the total absorption due to fission products in fast reactors. The amount of fissile material required to guarantee the criticality of the reactor during the recycling schemes has also been investigated. Cumulative high active waste per ton of initial heavy metal is also calculated. Results show that the recycling of actinides and fission products in fast reactors through the Atomics International reduction oxidation process results in a reduction of the potential hazard of radioactive waste.

  10. Level of health care and services in a tertiary health setting in Nigeria

    African Journals Online (AJOL)

    Level of health care and services in a tertiary health setting in Nigeria. ... Background: There is a growing awareness and demand for quality health care across the world; hence the ... Doctors and nurses formed 64.3% of the study population.

  11. Hierarchical Markov Model in Life Insurance and Social Benefit Schemes

    Directory of Open Access Journals (Sweden)

    Jiwook Jang

    2018-06-01

We explored the effect of the jump-diffusion process on a social benefit scheme consisting of life insurance, unemployment/disability benefits, and retirement benefits. To do so, we used a four-state Markov chain with multiple decrements. Assuming independent state-wise intensities taking the form of a jump-diffusion process and deterministic interest rates, we evaluated the prospective reserves for this scheme in which the individual is employed at inception. We then numerically demonstrated the state of the reserves for the scheme under jump-diffusion and non-jump-diffusion settings. By decomposing the reserve equation into five components, our numerical illustration indicated that an extension of the retirement age has a spillover effect that would increase government expenses for other social insurance programs. We also conducted sensitivity analyses and examined the total-reserves components by changing the relevant parameters of the transition intensities, which are the average jump-size parameter, average jump frequency, and diffusion parameters of the chosen states, with figures provided. Our computation revealed that the total reserve is most sensitive to changes in average jump frequency.
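
    The notion of a state-wise prospective reserve can be illustrated in a much simpler setting than the paper's: a discrete-time four-state chain with constant transition probabilities and deterministic interest (no jump-diffusion intensities), using the backward recursion V_t(i) = v Σ_j p_ij [b_j + V_{t+1}(j)]. The states, probabilities, and benefits below are placeholders, not the paper's calibration.

    ```python
    import numpy as np

    def prospective_reserves(P, benefits, v, horizon):
        """Backward recursion for state-wise prospective reserves in a discrete-time
        Markov chain: V_t(i) = v * sum_j P[i, j] * (benefits[j] + V_{t+1}(j))."""
        V = np.zeros(P.shape[0])             # terminal reserves at the horizon
        for _ in range(horizon):
            V = v * P @ (benefits + V)       # benefits assumed paid at the end of each period
        return V

    # Placeholder 4-state model: employed, unemployed/disabled, retired, dead
    P = np.array([[0.90, 0.05, 0.04, 0.01],
                  [0.30, 0.60, 0.08, 0.02],
                  [0.00, 0.00, 0.97, 0.03],
                  [0.00, 0.00, 0.00, 1.00]])
    benefits = np.array([0.0, 0.8, 1.0, 0.0])   # annuity-type payments while in each state
    print(prospective_reserves(P, benefits, v=1 / 1.03, horizon=30))
    ```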

  12. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth and limited energy supply impose strong constraints on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are always optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG 2000 still image compression standard, implemented using MATLAB and C code from JasPer. JPEG 2000 provides a practical set of features not necessarily available in previous standards. These features are achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). Performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory by analyzing the functional influence of each parameter of this distributed image compression algorithm.

  13. Scheme Program Documentation Tools

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2004-01-01

Although the two tools are separate and intended for different documentation purposes, they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which, in a systematic way, makes an XML language available as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource.

  14. Geneflow and Cumulative discounted Revenues of Dairy Cattle Cross-Breeding Schemes

    International Nuclear Information System (INIS)

    Kosgey, I.S.; Bebe, B.O.; Kahi, A.K.; Arendonk, J.A.M.V.

    1999-01-01

A simulation study using matrix formulation was carried out to assess the flow of genes from the nucleus to the commercial population for three nucleus dairy cattle crossbreeding schemes involving indigenous (Zebu or native) and exotic (European) animals under Kenyan conditions: artificial insemination (A.I.) or natural mating F1 production, continuous crossbred (F2 inter se) production, and multiple ovulation and embryo transfer (MOET) F1 production. The latter two schemes used MOET in the nucleus. Cumulative discounted expressions (CDES) and cumulative discounted revenues (CDR) were calculated to rank these schemes. The pathways considered were sires and dams to produce sires and dams. The evaluation criterion was milk production measured in age classes 3 through 10 in the F1 and F2 cow populations. The schemes were evaluated over a 30-year projected period with assumed interest rates of 0% and 10%. Further, the genetic level between the indigenous nucleus animals, the F1 males and the commercial female population was calculated by defining the incidence vector h as the difference between the three groups. The F1 A.I. or natural mating scheme had higher CDES of 0.978 and 0.161 at 0% and 10% interest rates, respectively. The corresponding values for the F1 MOET scheme were 0.735 and 0.070, and those of F2 inter se were 0.676 and 0.079 at 0% and 10% interest rates, respectively. For a nucleus with 64 dams, CDR (US$) were 95.50 and 15.80 at 0% and 10% interest rates, respectively, for the F1 A.I. or natural mating scheme. The F1 MOET scheme had corresponding values of 62.05 and 6.90, while F2 inter se had 66.10 and 7.75. Under both interest rates, the F1 A.I. or natural mating scheme had higher CDES and CDR than the other two schemes and is faster in disseminating genes to the commercial population; F2 inter se was intermediate. The genetic level of the nucleus animals is higher than that of the F1 males and females because indigenous nucleus females contribute 50% of the genes. F2 cows are expected to

  15. BSEA: A Blind Sealed-Bid E-Auction Scheme for E-Commerce Applications

    Directory of Open Access Journals (Sweden)

    Rohit Kumar Das

    2016-12-01

Due to an increase in the number of internet users, electronic commerce has grown significantly during the last decade. Electronic auction (e-auction) is one of the well-known e-commerce applications. Even so, the security and robustness of e-auction schemes still remain a challenge. Requirements like anonymity and privacy of the bid value are under threat from attackers. Any auction protocol must not leak the anonymity or the privacy of the bid value of an honest bidder. Keeping these requirements in mind, we first propose a controlled traceable blind signature scheme (CTBSS), because e-auction schemes should be able to trace the bidders. Using CTBSS, a blind sealed-bid electronic auction scheme (BSEA) is proposed. We incorporate the notion of blind signatures into e-auction schemes. Moreover, both schemes are based upon elliptic curve cryptography (ECC), which provides a similar level of security with a comparatively smaller key size than discrete logarithm problem (DLP) based e-auction protocols. The analysis shows that BSEA fulfills all the requirements of an e-auction protocol, and the total computation overhead is lower than that of existing schemes.

  16. Heterogeneity in Wage Setting Behavior in a New-Keynesian Model

    NARCIS (Netherlands)

    Eijffinger, S.C.W.; Grajales Olarte, A.; Uras, R.B.

    2015-01-01

    In this paper we estimate a New-Keynesian DSGE model with heterogeneity in price and wage setting behavior. In a recent study, Coibion and Gorodnichenko (2011) develop a DSGE model, in which firms follow four different types of price setting schemes: sticky prices, sticky information, rule of thumb,

  17. Setting ozone critical levels for protecting horticultural Mediterranean crops: Case study of tomato

    International Nuclear Information System (INIS)

    González-Fernández, I.; Calvo, E.; Gerosa, G.; Bermejo, V.; Marzuoli, R.; Calatayud, V.; Alonso, R.

    2014-01-01

Seven experiments carried out in Italy and Spain have been used to parameterise a stomatal conductance model and to establish exposure- and dose-response relationships for the yield and quality of tomato, with the main goal of setting O3 critical levels (CLe). CLe, with confidence intervals given in brackets, were set at an accumulated hourly O3 exposure over 40 nl l-1 of AOT40 = 8.4 (1.2, 15.6) ppm h and a phytotoxic ozone dose above a threshold of 6 nmol m-2 s-1 of POD6 = 2.7 (0.8, 4.6) mmol m-2 for yield, and AOT40 = 18.7 (8.5, 28.8) ppm h and POD6 = 4.1 (2.0, 6.2) mmol m-2 for quality, both indices performing equally well. CLe confidence intervals provide information on the quality of the dataset and should be included in future calculations of O3 CLe to improve current methodologies. These CLe, derived for sensitive tomato cultivars, should not be applied for quantifying O3-induced losses, at the risk of making important overestimations of the economic losses associated with O3 pollution. -- Highlights: • Seven independent experiments from Italy and Spain were analysed. • O3 critical levels are proposed for the protection of summer horticultural crops. • Exposure- and flux-based O3 indices performed equally well. • Confidence intervals of the new O3 critical levels are calculated. • A new method to estimate the degree of risk of O3 damage is proposed. -- Critical levels for tomato yield were set at AOT40 = 8.4 ppm h and POD6 = 2.7 mmol m-2, and confidence intervals should be used to improve O3 risk assessment
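
    AOT40 accumulates the hourly exceedances of the ozone concentration above 40 ppb (nl l-1) over daylight hours, expressed in ppm h. A generic sketch of the index is shown below; the hourly series and daylight window are synthetic, not data from these experiments.

    ```python
    import numpy as np

    def aot40_ppm_h(hourly_o3_ppb, daylight_mask):
        """AOT40 in ppm h: sum of hourly exceedances above 40 ppb, daylight hours only."""
        exceedance_ppb = np.clip(np.asarray(hourly_o3_ppb) - 40.0, 0.0, None)
        return exceedance_ppb[np.asarray(daylight_mask)].sum() / 1000.0   # ppb h -> ppm h

    # Synthetic 3-day example: 72 hourly values, daylight taken as 08:00-19:59
    rng = np.random.default_rng(1)
    o3 = 35.0 + 25.0 * rng.random(72)
    daylight = np.tile((np.arange(24) >= 8) & (np.arange(24) < 20), 3)
    print(aot40_ppm_h(o3, daylight))
    ```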

  18. Assessing Intraseasonal Variability Produced by Several Deep Convection Schemes in the NCAR CCM3.6

    Science.gov (United States)

    Maloney, E. D.

    2001-05-01

    The Hack, Zhang/McFarlane, and McRAS convection schemes produce very different simulations of intraseasonal variability in the NCAR CCM3.6. A robust analysis of simulation performance requires an expanded set of diagnostics. The use of only one criterion to analyze model Madden-Julian oscillation (MJO) variability, such as equatorial zonal wind variability, may give a misleading impression of model performance. Schemes that produce strong variability in zonal winds may sometimes lack a corresponding coherent signal in precipitation, suggesting that model convection and the large-scale circulation are not as strongly coupled as observed. The McRAS scheme, which includes a parametrization of unsaturated convective downdrafts, produces the best simulation of intraseasonal variability of the three schemes used. Downdrafts in McRAS create a moister equatorial troposphere, which increases equatorial convection. Composite analysis indicates a strong dependence of model intraseasonal variability on the frictional convergence mechanism, which may also be important in nature. The McRAS simulation has limitations, however. Indian Ocean variability is weak, and anomalous convection extends too far east across the Pacific. The dependence of convection on surface friction is too strong, and causes enhanced MJO convection to be associated with low-level easterly wind perturbations, unlike observed MJO convection. Anomalous vertical advection associated with surface convergence influences model convection by moistening the lower troposphere. Based on the work of Hendon (2000), coupling to an interactive ocean is unlikely to change the performance of the CCM3 with McRAS, due to the phase relationship between anomalous convection and zonal winds. Use of the analysis tools presented here indicates areas for improvement in the parametrization of deep convection by atmospheric GCMs.

  19. A Memory Efficient Network Encryption Scheme

    Science.gov (United States)

    El-Fotouh, Mohamed Abo; Diepold, Klaus

In this paper, we studied the two widely used encryption schemes in network applications. Shortcomings have been found in both schemes, as they either consume more memory to gain high throughput or use little memory at the cost of low throughput. The need has arisen for a scheme that has low memory requirements and at the same time possesses high speed, as the number of internet users increases each day. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.

  20. Schemes of detecting nuclear spin correlations by dynamical decoupling based quantum sensing

    Science.gov (United States)

Ma, Wen-Long; Liu, Ren-Bao

Single-molecule sensitivity of nuclear magnetic resonance (NMR) and angstrom resolution of magnetic resonance imaging (MRI) are among the greatest challenges in magnetic microscopy. Recent developments in dynamical decoupling (DD) enhanced diamond quantum sensing have enabled NMR of single nuclear spins and nanoscale NMR. Similar to conventional NMR and MRI, current DD-based quantum sensing utilizes the frequency fingerprints of target nuclear spins. Such schemes, however, cannot resolve different nuclear spins that have the same noise frequency or differentiate different types of correlations in nuclear spin clusters. Here we show that the first limitation can be overcome by using wavefunction fingerprints of target nuclear spins, which are much more sensitive than the 'frequency fingerprints' to weak hyperfine interactions between the targets and a sensor, while the second can be overcome by a new design of two-dimensional DD sequences composed of two sets of periodic DD sequences with different periods, which can be independently set to match two different transition frequencies. Our schemes not only offer an approach to breaking the resolution limit set by 'frequency gradients' in conventional MRI, but also provide a standard approach to correlation spectroscopy for single-molecule NMR.

  1. County-level poverty is equally associated with unmet health care needs in rural and urban settings.

    Science.gov (United States)

    Peterson, Lars E; Litaker, David G

    2010-01-01

Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings, or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. We compare the association of regional poverty with self-reported unmet need, a marker of health care access, by rural/urban setting. Multilevel, cross-sectional analysis of a state-representative sample of 39,953 adults stratified by rural/urban status, linked at the county level to data describing contextual characteristics. Weighted random intercept models examined the independent association of regional poverty with unmet needs, controlling for a range of contextual and individual-level characteristics. The unadjusted association between regional poverty levels and unmet needs was similar in both rural (OR = 1.06 [95% CI, 1.04-1.08]) and urban (OR = 1.03 [1.02-1.05]) settings. Adjusting for other contextual characteristics increased the size of the association in both rural (OR = 1.11 [1.04-1.19]) and urban (OR = 1.11 [1.05-1.18]) settings. Further adjustment for individual characteristics had little additional effect in rural (OR = 1.10 [1.00-1.20]) or urban (OR = 1.11 [1.01-1.22]) settings. To better meet the health care needs of all Americans, health care systems in areas with high regional poverty should acknowledge the relationship between poverty and unmet health care needs. Investments, or other interventions, that reduce regional poverty may be useful strategies for improving health through better access to health care. © 2010 National Rural Health Association.

  2. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation methods for acquiring the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on the previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, for which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Keywords: Active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  3. Modified Aggressive Packet Combining Scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2010-06-01

In this letter, a few schemes are presented to improve the performance of the aggressive packet combining (APC) scheme. To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ have been suggested by recent research and are found in the literature. The important modifications are the majority packet combining scheme (MjPC, proposed by Wicker), the packet combining scheme (PC, proposed by Chakraborty), the modified packet combining scheme (MPC, proposed by Bhunia), and the packet reversed packet combining (PRPC, proposed by Bhunia) scheme. These modifications are appropriate for improving the throughput of conventional ARQ protocols. Leung proposed the idea of APC for error control in wireless networks, with the basic objective of error control in uplink wireless data networks. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)
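
    Packet combining schemes of this family exploit multiple erroneous copies of the same packet: with an odd number of copies, a bitwise majority vote corrects bit positions where only a minority of copies are in error. A minimal sketch of that core idea, not of any specific protocol from the letter; the example bytes are arbitrary.

    ```python
    def majority_combine(copies):
        """Bitwise majority vote over an odd number of received copies (byte strings)."""
        assert len(copies) % 2 == 1 and len(set(len(c) for c in copies)) == 1
        combined = bytearray(len(copies[0]))
        for i in range(len(combined)):
            for bit in range(8):
                votes = sum((c[i] >> bit) & 1 for c in copies)
                if votes > len(copies) // 2:
                    combined[i] |= 1 << bit
        return bytes(combined)

    original = b"\x5a\x3c"
    # Three copies, each corrupted in a different bit position
    copies = [b"\x5b\x3c", b"\x5a\x1c", b"\x5a\x3c"]
    print(majority_combine(copies) == original)
    ```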

  4. Canonical Naimark extension for generalized measurements involving sets of Pauli quantum observables chosen at random

    Science.gov (United States)

    Sparaciari, Carlo; Paris, Matteo G. A.

    2013-01-01

We address measurement schemes where certain observables X_k are chosen at random within a set of nondegenerate isospectral observables and then measured on repeated preparations of a physical system. Each observable has a probability z_k of being measured, with Σ_k z_k = 1, and the statistics of this generalized measurement is described by a positive operator-valued measure (POVM). This kind of scheme is referred to as a quantum roulette, since each observable X_k is chosen at random, e.g., according to the fluctuating value of an external parameter. Here we focus on quantum roulettes for qubits involving measurements of Pauli matrices, and we explicitly evaluate their canonical Naimark extensions, i.e., their implementation as indirect measurements involving an interaction scheme with a probe system. We thus provide a concrete model to realize the roulette without destroying the signal state, which can be measured again after the measurement or can be transmitted. Finally, we apply our results to the description of Stern-Gerlach-like experiments on a two-level system.
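
    For a qubit roulette over Pauli measurements, each outcome of the randomly chosen observable σ_k contributes a POVM element z_k(I ± σ_k)/2, and the elements over all k sum to the identity. The small numerical check below illustrates this structure; the probabilities z_k are arbitrary and the snippet does not construct the Naimark extension itself.

    ```python
    import numpy as np

    I = np.eye(2, dtype=complex)
    paulis = {
        "x": np.array([[0, 1], [1, 0]], dtype=complex),
        "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
        "z": np.array([[1, 0], [0, -1]], dtype=complex),
    }
    z = {"x": 0.5, "y": 0.3, "z": 0.2}   # arbitrary probabilities summing to 1

    # POVM of the roulette: elements z_k * (I +/- sigma_k) / 2 for each observable
    povm = [z[k] * (I + s * sigma) / 2 for k, sigma in paulis.items() for s in (+1, -1)]
    print(np.allclose(sum(povm), I))      # completeness: elements sum to the identity

    # Outcome probabilities for a given state rho, here |0><0|
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    print([float(np.real(np.trace(E @ rho))) for E in povm])
    ```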

  5. Revisiting support policies for RES-E adulthood: Towards market compatible schemes

    International Nuclear Information System (INIS)

    Huntington, Samuel C.; Rodilla, Pablo; Herrero, Ignacio; Batlle, Carlos

    2017-01-01

The past two decades of growth in renewable energy sources of electricity (RES-E) have been largely driven by out-of-market support policies. These schemes were designed to drive deployment on the basis of specific subsidies sustained over time to cover the higher costs as well as to limit investor risk. While these policies have proven to be effective, the way they have been designed to date has led to costly market distortions that are becoming more difficult to ignore as penetrations reach unprecedented levels. In the context of this growing concern, we provide a critical analysis of the design elements of RES-E support schemes, focusing on how they affect the trade-off between promoting and efficiently integrating RES-E. The emphasis is on the structure of the incentive payment, which in the end turns out to be the cornerstone of efficient integration. We conclude that, while needed, a well-designed and further developed capacity-based support mechanism complemented with ex-post compensations defined for reference benchmark plants, such as the mechanism currently implemented in Spain, is an alternative with good properties if the major goal is truly market integration. The approach is robust to future developments in technology cost, performance and market penetration of RES-E. - Highlights: • Market distortions due to RES support mechanisms are becoming difficult to ignore. • This paper provides a critical analysis of the design elements of RES support schemes. • The emphasis is on the structure of the incentive, key for an efficient integration. • We argue in favor of a further developed capacity-based support mechanism. • The incentive should be combined with the design of a set of reference plants.

  6. Scope of physician procedures independently billed by mid-level providers in the office setting.

    Science.gov (United States)

    Coldiron, Brett; Ratnarathorn, Mondhipa

    2014-11-01

    Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Outpatient office setting among health care providers. The scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may involve procedures for which they may not have formal training.

  7. A New Quantum Key Distribution Scheme Based on Frequency and Time Coding

    International Nuclear Information System (INIS)

    Chang-Hua, Zhu; Chang-Xing, Pei; Dong-Xiao, Quan; Jing-Liang, Gao; Nan, Chen; Yun-Hui, Yi

    2010-01-01

A new scheme of quantum key distribution (QKD) using frequency and time coding is proposed, in which the security is based on the frequency-time uncertainty relation. In this scheme, the binary information sequence is encoded randomly on either the central frequency or the time delay of the optical pulse at the sender. The central frequency of the single-photon pulse is set to ω1 for bit 0 and to ω2 for bit 1 when frequency coding is selected, whereas the single-photon pulse is not delayed for bit 0 and is delayed by τ for bit 1 when time coding is selected. At the receiver, either the frequency or the time delay of the pulse is measured randomly, and the final key is obtained after basis comparison, data reconciliation and privacy amplification. With the proposed method, the effect of noise in the fiber channel and the environment on the QKD system can be reduced effectively
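
    The sifting logic of the scheme, in which sender and receiver independently choose between the frequency and the time degree of freedom, can be illustrated with a toy classical simulation. This is only the basis-matching bookkeeping on a noiseless channel; no photon physics, uncertainty relation, reconciliation or privacy amplification is modeled.

    ```python
    import random

    def sift_key(n_pulses, seed=0):
        """Toy basis sifting: a bit is kept only when the receiver happens to measure
        the same degree of freedom (frequency or time) that the sender encoded on."""
        rng = random.Random(seed)
        sifted = []
        for _ in range(n_pulses):
            bit = rng.randint(0, 1)
            send_basis = rng.choice(["frequency", "time"])   # encode on w1/w2 or on delay 0/tau
            recv_basis = rng.choice(["frequency", "time"])   # measure frequency or arrival time
            if send_basis == recv_basis:
                sifted.append(bit)                           # noiseless toy channel: outcome = bit
        return sifted

    key = sift_key(1000)
    print(len(key))   # roughly half of the pulses survive sifting
    ```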

  8. Multi-level iteration optimization for diffusive critical calculation

    International Nuclear Information System (INIS)

    Li Yunzhao; Wu Hongchun; Cao Liangzhi; Zheng Youqi

    2013-01-01

In nuclear reactor core neutron diffusion calculations, there are usually at least three levels of iteration, namely the fission source iteration, the multi-group scattering source iteration and the within-group iteration. Unnecessary calculations occur if the inner iterations are converged extremely tightly, but the convergence of the outer iteration may be affected if the inner ones are converged insufficiently tightly. Thus, a common scheme suited to most problems was proposed in this work to automatically find the optimized settings. The basic idea is to optimize the relative error tolerance of the inner iteration based on the corresponding convergence rate of the outer iteration. Numerical results for a typical thermal neutron reactor core problem and a fast neutron reactor core problem demonstrate the effectiveness of this algorithm in the variational nodal method code NODAL with the Gauss-Seidel left-preconditioned multi-group GMRES algorithm. The multi-level iteration optimization scheme reduces the number of multi-group and within-group iterations by factors of about 1-2 and 5-21, respectively. (authors)
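
    The idea of tying the inner tolerance to the outer convergence behaviour can be illustrated on a generic two-level iteration: an outer eigenvalue (fission-source-like) iteration whose inner linear solves are only as tight as the current outer change warrants. The matrices, the Jacobi inner solver, and the scaling factor below are illustrative assumptions, not the NODAL/GMRES implementation.

    ```python
    import numpy as np

    def jacobi_solve(A, b, x0, tol, max_iter=500):
        """Inner solver: Jacobi iterations until the relative residual drops below tol."""
        D = np.diag(A)
        x = x0.copy()
        for _ in range(max_iter):
            x = x + (b - A @ x) / D
            if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
                break
        return x

    def power_iteration_adaptive(A, F, outer_tol=1e-8, c=0.1):
        """Outer eigenvalue (k) iteration; the inner tolerance follows the outer change."""
        x, k = np.ones(A.shape[0]), 1.0
        inner_tol = 1e-2                              # loose inner solves at the start
        for _ in range(200):
            x_new = jacobi_solve(A, F @ x / k, x, inner_tol)
            k_new = k * np.sum(F @ x_new) / np.sum(F @ x)
            change = abs(k_new - k) / abs(k_new)
            inner_tol = max(c * change, 1e-12)        # tighten inner solves as the outer converges
            x, k = x_new / np.linalg.norm(x_new), k_new
            if change < outer_tol:
                break
        return k

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])   # diagonally dominant
    F = np.array([[0.5, 0.2, 0.0], [0.2, 0.5, 0.2], [0.0, 0.2, 0.5]])
    print(power_iteration_adaptive(A, F))
    ```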

  9. Sanitizing sensitive association rules using fuzzy correlation scheme

    International Nuclear Information System (INIS)

    Hameed, S.; Shahzad, F.; Asghar, S.

    2013-01-01

Data mining is used to extract useful information hidden in the data. Sometimes this extraction of information leads to revealing sensitive information. Privacy preservation in data mining is a process of sanitizing sensitive information. This research focuses on sanitizing sensitive rules discovered in quantitative data. The proposed scheme, Privacy Preserving in Fuzzy Association Rules (PPFAR), is based on fuzzy correlation analysis. In this work, the fuzzy set concept is integrated with fuzzy correlation analysis and the Apriori algorithm to mark interesting fuzzy association rules. The identified rules are called sensitive. For sanitization, we use a modification technique in which we substitute the maximum value of the fuzzy items that occur most frequently with zero. Experiments demonstrate that the PPFAR method hides sensitive rules with minimum modifications. The technique also maintains the quality of the modified data. The PPFAR scheme has applications in various domains, e.g., temperature control, medical analysis, travel time prediction, genetic behavior prediction, etc. We have validated the results on a medical dataset. (author)

  10. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    International Nuclear Information System (INIS)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena

    2015-01-01

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the

  11. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    Energy Technology Data Exchange (ETDEWEB)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra [Department of Medical Physics, School of Medicine,University of Patras, Patras 26504 (Greece); Kalogeropoulou, Christina [Department of Radiology, School of Medicine, University of Patras, Patras 26504 (Greece); Pratikakis, Ioannis [Department of Electrical and Computer Engineering, Democritus University of Thrace, Xanthi 67100 (Greece); Costaridou, Lena, E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece)

    2015-08-15

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the
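
    The first selection stage described above, rejecting schemes whose deformation fields contain singularities, can be checked numerically from the determinant of the Jacobian of the mapping x -> x + u(x): non-positive values indicate folding. A generic 2D sketch; the displacement field below is synthetic and is not the output of any of the evaluated registration schemes.

    ```python
    import numpy as np

    def jacobian_determinant_2d(ux, uy, spacing=(1.0, 1.0)):
        """Determinant of the Jacobian of the deformation x -> x + u(x) on a 2D grid.
        Values <= 0 indicate folding (singularities) in the deformation field."""
        dux_dy, dux_dx = np.gradient(ux, *spacing)
        duy_dy, duy_dx = np.gradient(uy, *spacing)
        return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

    # Synthetic smooth displacement field on a 64x64 grid
    y, x = np.mgrid[0:64, 0:64].astype(float)
    ux = 2.0 * np.sin(2 * np.pi * y / 64)
    uy = 2.0 * np.cos(2 * np.pi * x / 64)
    detJ = jacobian_determinant_2d(ux, uy)
    print("folded grid points:", int(np.sum(detJ <= 0)))
    ```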

  12. An accurate anisotropic adaptation method for solving the level set advection equation

    International Nuclear Information System (INIS)

    Bui, C.; Dapogny, C.; Frey, P.

    2012-01-01

In the present paper, a mesh adaptation process for solving the advection equation on a fully unstructured computational mesh is introduced, with a particular interest in the case where it implicitly describes an evolving surface. This process mainly relies on a numerical scheme based on the method of characteristics. Although low order, this scheme lends itself to a thorough analysis on the theoretical side. It gives rise to an anisotropic error estimate which enjoys a very natural interpretation in terms of the Hausdorff distance between the exact and approximated surfaces. The computational mesh is then adapted according to the metric supplied by this estimate. The whole process achieves good accuracy as far as the interface resolution is concerned. Some numerical features are discussed, and several classical examples are presented and commented on in two and three dimensions. (authors)
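
    A basic first-order method-of-characteristics (semi-Lagrangian) update for the level set advection equation φ_t + u·∇φ = 0 traces each grid node backwards along the velocity and interpolates the previous field there. The sketch below shows this building block on a uniform Cartesian grid; the anisotropic unstructured-mesh adaptation of the paper is not reproduced, and the velocity field is illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def semi_lagrangian_step(phi, u, v, dt, spacing=1.0):
        """First-order characteristics update: phi^{n+1}(x) = phi^n(x - u dt),
        evaluated by linear interpolation of the previous level set field."""
        ny, nx = phi.shape
        jj, ii = np.meshgrid(np.arange(nx), np.arange(ny))
        # Foot of the characteristic, expressed in grid-index coordinates
        i_back = ii - dt * v / spacing
        j_back = jj - dt * u / spacing
        return map_coordinates(phi, [i_back, j_back], order=1, mode="nearest")

    # Example: rigid rotation of a circular interface (zero level set of phi)
    n, spacing = 128, 1.0 / 128
    y, x = (np.mgrid[0:128, 0:128] + 0.5) * spacing
    phi = np.sqrt((x - 0.5) ** 2 + (y - 0.75) ** 2) - 0.15     # signed distance to a circle
    u, v = -(y - 0.5), (x - 0.5)                                # rotational velocity field
    for _ in range(200):
        phi = semi_lagrangian_step(phi, u, v, dt=0.005, spacing=spacing)
    print(phi.min(), phi.max())
    ```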

  13. Non-state global environmental governance : the emergence and effectiveness of forest and fisheries certification schemes

    OpenAIRE

    Gulbrandsen, Lars H.

    2009-01-01

    There is growing scholarly interest in the role and function of non-state actors in global governance. A number of non-state governance schemes have been created in recent years to set environmental and social standards for the certification of private companies and producers. This thesis focuses on certification schemes in the forestry and fisheries sectors, as initiatives in these two sectors arguably represent the most advanced cases of non-state rulemaking and governance in the environmen...

  14. Bonus schemes and trading activity

    NARCIS (Netherlands)

    Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.

    2014-01-01

    Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of

  15. An Optimization Scheme for Water Pump Control in Smart Fish Farm with Efficient Energy Consumption

    Directory of Open Access Journals (Sweden)

    Israr Ullah

    2018-06-01

Healthy fish production requires intensive care, and ensuring a stable and healthy production environment inside the farm tank is a challenging task. An Internet of Things (IoT) based automated system that can continuously monitor the fish tanks with optimal resource utilization is highly desirable. Significant cost reduction can be achieved if farm equipment and water pumps are operated only when required, using optimization schemes. In this paper, we present a general system design for smart fish farms. We have developed an optimization scheme for water pump control to maintain the desired water level in the fish tank with efficient energy consumption through appropriate selection of the pumping flow rate and tank filling level. The proposed optimization scheme attempts to achieve a trade-off between pumping duration and flow rate through selection of an optimized water level. A Kalman filter algorithm is applied to remove error in the sensor readings. We observed through simulation results that the optimization scheme achieves a significant reduction in energy consumption as compared to the two alternate schemes, i.e., pumping with maximum and minimum flow rates. The proposed system can help in collecting data about the farm for long-term analysis and better decision making in the future for efficient resource utilization and overall profit maximization.
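
    The sensor-noise filtering step can be illustrated with a one-dimensional Kalman filter for the water level, where the prediction uses the expected level rise from pumping. The noise variances, tank geometry and the simulated readings below are illustrative, not values from the paper.

    ```python
    import random

    def kalman_level_filter(measurements, inflow_per_step, q=1e-4, r=0.05**2):
        """Scalar Kalman filter for a water-level sensor.
        State: tank level [m]; control input: expected level rise per step from pumping."""
        level, p = measurements[0], 1.0          # initial estimate and its variance
        estimates = []
        for z, u in zip(measurements, inflow_per_step):
            # Predict: level rises by the pumped volume divided by the tank area (folded into u)
            level, p = level + u, p + q
            # Update with the noisy sensor reading z
            gain = p / (p + r)                   # Kalman gain
            level, p = level + gain * (z - level), (1.0 - gain) * p
            estimates.append(level)
        return estimates

    # Illustrative run: constant pumping raises the level ~1 cm per step, noisy readings
    random.seed(0)
    true_level, readings, inflow = 0.5, [], []
    for _ in range(50):
        true_level += 0.01
        readings.append(true_level + random.gauss(0.0, 0.05))
        inflow.append(0.01)
    print(kalman_level_filter(readings, inflow)[-1])
    ```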

  16. Adverse selection in a voluntary Rural Mutual Health Care health insurance scheme in China.

    Science.gov (United States)

    Wang, Hong; Zhang, Licheng; Yip, Winnie; Hsiao, William

    2006-09-01

    This study examines adverse selection in a subsidized voluntary health insurance scheme, the Rural Mutual Health Care (RMHC) scheme, in a poor rural area of China. The study was made possible by a unique longitudinal data set: the total sample includes 3492 rural residents from 1020 households. Logistic regression was employed for the data analysis. The results show that although this subsidized scheme achieved a considerably high enrollment rate of 71% of rural residents, adverse selection still exists. In general, individuals with worse health status are more likely to enroll in RMHC than individuals with better health status. Although the household is set as the enrollment unit for the RMHC for the purpose of reducing adverse selection, nearly 1/3 of enrolled households are actually only partially enrolled. Furthermore, we found that adverse selection mainly occurs in partially enrolled households. The non-enrolled individuals in partially enrolled households have the best health status, while the enrolled individuals in partially enrolled households have the worst health status. Pre-RMHC, medical expenditure for enrolled individuals in partially enrolled households was 206.6 yuan per capita per year, which is 1.7 times as much as the pre-RMHC medical expenditure for non-enrolled individuals in partially enrolled households. The study also reveals that the pre-enrolled medical expenditure per capita per year of enrolled individuals was 9.6% higher than the pre-enrolled medical expenditure of all residents, including both enrolled and non-enrolled individuals. In conclusion, although the subsidized RMHC scheme reached a very high enrollment rate and the household is set as the enrollment unit for the purpose of reducing adverse selection, adverse selection still exists, especially within partially enrolled households. Voluntary RMHC will not be financially sustainable if the adverse selection is not fully taken into account.

  17. A more accurate scheme for calculating Earth's skin temperature

    Science.gov (United States)

    Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden

    2009-02-01

    The theoretical framework of the vertical discretization of a ground column for calculating Earth’s skin temperature is presented. The suggested discretization is derived from the evenly heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same number of levels, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme (“op(3,2,0)”) can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site significantly to 2% (or 0.9 W m-2), from 11% (or 5 W m-2) by a 5-layer scheme used in ECMWF, from 19% (or 8 W m-2) by a 5-layer scheme used in ECHAM, and from 74% (or 32 W m-2) by a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by including more layers in the vertical discretization. Similar improvements are expected for other locations with different land types, since the numerical error is inherent in the models for all land types. The proposed scheme can be easily implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.

  18. Use of Whole-Genus Genome Sequence Data To Develop a Multilocus Sequence Typing Tool That Accurately Identifies Yersinia Isolates to the Species and Subspecies Levels

    Science.gov (United States)

    Hall, Miquette; Chattaway, Marie A.; Reuter, Sandra; Savin, Cyril; Strauch, Eckhard; Carniel, Elisabeth; Connor, Thomas; Van Damme, Inge; Rajakaruna, Lakshani; Rajendram, Dunstan; Jenkins, Claire; Thomson, Nicholas R.

    2014-01-01

    The genus Yersinia is a large and diverse bacterial genus consisting of human-pathogenic species, a fish-pathogenic species, and a large number of environmental species. Recently, the phylogenetic and population structure of the entire genus was elucidated through the genome sequence data of 241 strains encompassing every known species in the genus. Here we report the mining of this enormous data set to create a multilocus sequence typing-based scheme that can identify Yersinia strains to the species level with a resolution equal to that of whole-genome sequencing. Our assay is designed to be able to accurately subtype the important human-pathogenic species Yersinia enterocolitica to whole-genome resolution levels. We also report the validation of the scheme on 386 strains from reference laboratory collections across Europe. We propose that the scheme is an important molecular typing system to allow accurate and reproducible identification of Yersinia isolates to the species level, a process often inconsistent in nonspecialist laboratories. Additionally, our assay is the most phylogenetically informative typing scheme available for Y. enterocolitica. PMID:25339391

  19. Essays on evaluating a community based health insurance scheme in rural Ethiopia

    NARCIS (Netherlands)

    A.D. Mebratie (Anagaw)

    2015-01-01

    Since the late 1990s, in a move away from user fees for health care and with the aim of creating universal access, several low and middle income countries have set up community-based health insurance (CBHI) schemes. Following this approach, in June 2011, with the

  20. Application of physiologically based pharmacokinetic modeling in setting acute exposure guideline levels for methylene chloride.

    NARCIS (Netherlands)

    Bos, Peter Martinus Jozef; Zeilmaker, Marco Jacob; Eijkeren, Jan Cornelis Henri van

    2006-01-01

    Acute exposure guideline levels (AEGLs) are derived to protect the human population from adverse health effects in case of single exposure due to an accidental release of chemicals into the atmosphere. AEGLs are set at three different levels of increasing toxicity for exposure durations ranging from

  1. A method to test the reproducibility and to improve performance of computer-aided detection schemes for digitized mammograms

    International Nuclear Information System (INIS)

    Zheng Bin; Gur, David; Good, Walter F.; Hardesty, Lara A.

    2004-01-01

    The purpose of this study is to develop a new method for assessment of the reproducibility of computer-aided detection (CAD) schemes for digitized mammograms and to evaluate the possibility of using the implemented approach for improving CAD performance. Two thousand digitized mammograms (representing 500 cases) with 300 depicted verified masses were selected for the study. A series of images was generated for each digitized image by resampling after a series of slight image rotations. A CAD scheme developed in our laboratory was applied to all images to detect suspicious mass regions. We evaluated the reproducibility of the scheme using the detection sensitivity and false-positive rates for the original and resampled images. We also explored the possibility of improving CAD performance using three methods of combining results from the original and resampled images, including simple grouping, averaging output scores, and averaging output scores after grouping. The CAD scheme generated a detection score (from 0 to 1) for each identified suspicious region. A region with a detection score >0.5 was considered positive. The CAD scheme detected 238 masses (79.3% case-based sensitivity) and identified 1093 false-positive regions (average 0.55 per image) in the original image dataset. In eleven repeated tests using the original and ten sets of rotated and resampled images, the scheme detected a maximum of 271 masses and identified as many as 2359 false-positive regions. Two hundred and eighteen masses (80.4%) and 618 false-positive regions (26.2%) were detected in all 11 sets of images. Combining detection results improved reproducibility and the overall CAD performance. In the range of an average false-positive detection rate between 0.5 and 1 per image, the sensitivity of the scheme could be increased by approximately 5% after averaging the scores of the regions detected in at least four images. At a low false-positive rate (e.g., ≤0.3 per image on average), the grouping method
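
    As a rough illustration of the score-combination step described above (averaging the scores of regions detected in at least four of the images, with a 0.5 decision threshold), the sketch below uses a simple dictionary-based region matching; the matching scheme and data layout are assumptions for illustration, not the authors' implementation.

        from collections import defaultdict

        # detections_per_image: one dict {region_id: score} per (original or resampled) image.
        def combine_detections(detections_per_image, min_hits=4, threshold=0.5):
            scores = defaultdict(list)
            for det in detections_per_image:
                for region, score in det.items():
                    scores[region].append(score)
            combined = {}
            for region, vals in scores.items():
                if len(vals) >= min_hits:               # region found in enough images
                    combined[region] = sum(vals) / len(vals)
            # keep only regions whose averaged score passes the decision threshold
            return {r: s for r, s in combined.items() if s > threshold}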

  2. Modeling and Analysis of DIPPM: A New Modulation Scheme for Visible Light Communications

    Directory of Open Access Journals (Sweden)

    Sana Ullah Jan

    2015-01-01

    Full Text Available Visible Light Communication (VLC) uses an Intensity-Modulation and Direct-Detection (IM/DD) scheme to transmit data. However, the light source used in VLC systems is continuously switched on and off quickly, resulting in flickering. In addition, recent illumination systems include dimming support to allow users to dim the light sources to the desired level. Therefore, the modulation scheme for data transmission in a VLC system must include flicker mitigation and dimming control capabilities. In this paper, the authors propose a Double Inverse Pulse Position Modulation (DIPPM) scheme that minimizes flickering and supports a high level of dimming for the illumination sources in VLC systems. To form DIPPM, some changes are made in the symbol structure of the IPPM scheme, and a detailed explanation and mathematical model of DIPPM are given in this paper. Furthermore, both analytical and simulation results for the error performance of 2-DIPPM are compared with the performance of VPPM. Also, the communication performance of DIPPM is analyzed in terms of the normalized required power.

  3. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
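
    For context on what the single-step algorithm replaces: classical reinitialization restores the signed-distance property $|\nabla\phi| = 1$ by iterating a pseudo-time PDE to steady state, for example

        $$ \partial_\tau \phi = \mathrm{sign}(\phi_0)\,\left(1 - |\nabla\phi|\right), \qquad \phi(\mathbf{x},0) = \phi_0(\mathbf{x}), $$

    which requires many iterations and, in a parallel setting, repeated block-boundary data exchanges; the "forward tracing" scheme of the record above avoids this iteration (the equation shown is the standard formulation, not the authors' algorithm).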

  4. Energy-preserving H1-Galerkin schemes for shallow water wave equations with peakon solutions

    International Nuclear Information System (INIS)

    Miyatake, Yuto; Matsuo, Takayasu

    2012-01-01

    New energy-preserving Galerkin schemes for the Camassa–Holm and the Degasperis–Procesi equations which model shallow water waves are presented. The schemes can be implemented only with cheap H1 elements, which is expected to be sufficient to catch the characteristic peakon solutions. The keys of the derivation are the Hamiltonian structures of the equations and an L2-projection technique newly employed in the present Letter to mimic the Hamiltonian structures in a discrete setting, so that the desired energy-preserving property rightly follows. Numerical examples confirm the effectiveness of the schemes. -- Highlights: ► Numerical integration of the Camassa–Holm and Degasperis–Procesi equation. ► New energy-preserving Galerkin schemes for these equations are proposed. ► They can be implemented only with P1 elements. ► They well capture the characteristic peakon solutions over long time. ► The keys are the Hamiltonian structures and the L2-projection technique.

  5. Range Sidelobe Suppression Using Complementary Sets in Distributed Multistatic Radar Networks

    Science.gov (United States)

    Wang, Xuezhi; Song, Yongping; Huang, Xiaotao; Moran, Bill

    2017-01-01

    We propose an alternative waveform scheme built on mutually-orthogonal complementary sets for a distributed multistatic radar. Our analysis and simulation show a reduced frequency band requirement for signal separation between antennas with centralized signal processing using the same carrier frequency. While the scheme can tolerate fluctuations of carrier frequencies and phases, range sidelobes arise when carrier frequencies between antennas are significantly different. PMID:29295566

  6. A SCHEDULING SCHEME WITH DYNAMIC FREQUENCY CLOCKING AND MULTIPLE VOLTAGES FOR LOW POWER DESIGNS

    Institute of Scientific and Technical Information of China (English)

    Wen Dongxin; Wang Ling; Yang Xiaozong

    2007-01-01

    In this letter, a scheduling scheme based on Dynamic Frequency Clocking (DFC) and multiple voltages is proposed for low power designs under timing and resource constraints. Unlike conventional methods in high-level synthesis, where only the voltages of nodes are considered, the scheme, based on a gain function, considers both voltage and frequency simultaneously to reduce energy consumption. Experiments with a number of DSP benchmarks show that the proposed scheme achieves an effective energy reduction.

  7. India : Note on Public Financial Management and Accountability in Centrally Sponsored Schemes

    OpenAIRE

    World Bank

    2006-01-01

    The budget outlay for Centrally Sponsored Schemes (CSS) for India in 2005-06 is significantly higher than the previous year's level of Rs. 395,000 million. This includes increased allocations for rural roads, rural employment, and education and nutritional support for pre-school children. At present there are over 200 such schemes in operation, of which a dozen accounts for more t...

  8. A thick level set interface model for simulating fatigue-driven delamination in composites

    NARCIS (Netherlands)

    Latifi, M.; Van der Meer, F.P.; Sluys, L.J.

    2015-01-01

    This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy

  9. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying cartesian computational grid. Here we present a novel, very memory-efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are...

  10. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    Science.gov (United States)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
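
    As an illustration of the kind of extrapolation discussed in this record, a commonly used two-point formula (a generic scheme, not necessarily the specific protocol advocated in the review) models the correlation energy obtained with basis sets of cardinal numbers $X$ and $Y$ as

        $$ E_X^{\mathrm{corr}} = E_\infty^{\mathrm{corr}} + A X^{-3}, \qquad E_\infty^{\mathrm{corr}} = \frac{X^3 E_X^{\mathrm{corr}} - Y^3 E_Y^{\mathrm{corr}}}{X^3 - Y^3}, $$

    so that two raw energies suffice to estimate the complete-basis-set limit; the single-level schemes mentioned above go further and infer the limit from one raw energy plus calibrated parameters.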

  11. Improvements and validation of the linear surface characteristics scheme

    International Nuclear Information System (INIS)

    Santandrea, S.; Jaboulay, J.C.; Bellier, P.; Fevotte, F.; Golfier, H.

    2009-01-01

    In this paper we present the latest improvements of the recently proposed linear surface (LS) characteristics scheme for unstructured meshes. First, we introduce a new numerical tracking technique, specifically adapted to the LS method, which tailors the transverse integration weights to take into account the geometrical discontinuities that appear along the pipe assigned to every trajectory in classical characteristics schemes. Another development allows the volumetric flux variation of the LS method to be used to re-compute step-wise constant fluxes for other parts of a computational scheme. This makes it possible to take greater advantage of the higher precision of the LS method without necessarily developing specialized theories for all the modular functionalities of a spectral code such as APOLLO2. Moreover, we present a multi-level domain decomposition method for solving the synthetic acceleration operator that is used to accelerate the free iterations of the LS method. We discuss all these new developments by illustrating benchmark results obtained with the LS method. This is done by detailed comparisons with Monte-Carlo calculations. In particular, we show that the new method can be used not only as a reference tool, but also inside a suitable industrial calculation scheme

  12. Expanding health insurance scheme in the informal sector in Nigeria: awareness as a potential demand-side tool.

    Science.gov (United States)

    Adewole, David Ayobami; Akanbi, Saidat Abisola; Osungbade, Kayode Omoniyi; Bello, Segun

    2017-01-01

    The implementation and expansion of a health insurance scheme in the informal sector, particularly in developing countries, is a challenge. With the aid of an innovative Information-Education and Communication model, titled 'Understanding the concept of health insurance: An innovative social marketing tool', an assessment of the awareness and perception of the scheme among market women was carried out. This is a cross-sectional descriptive survey, carried out among market women in Ibadan, Nigeria. In a multi-stage sampling technique, a total of 351 women were interviewed using an interviewer-administered, semi-structured questionnaire. The data was analysed using SPSS version 16. A Chi-square test was used to test associations between selected variables of interest. A logistic regression model was used to determine predictors of awareness of the National Health Insurance Scheme (NHIS). A model controlling for participants' enrolment status was built and the Adjusted Odds Ratio (AOR) reported. The level of statistical significance was set at p < 0.05. Market women aged 18 years and above participated in the study, with a response rate of 98.0%. Respondents' educational status was the only predictor significantly associated with awareness of the NHIS. Respondents with post-primary education had 10 times the odds of being aware of the NHIS compared with respondents with no education or only primary education (Adjusted Odds Ratio = 10.3; 95% CI = 4.1-26.0). Innovative models that enable potential beneficiaries, especially in the informal sector, to better comprehend and accept the concept of prepayment methods of financing healthcare costs are important in efforts to implement and expand a social health insurance scheme.

  13. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and, in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again demonstrates the advantages of mass conservation and the capability to represent the interface accurately.

  14. Transporter taxonomy - a comparison of different transport protein classification schemes.

    Science.gov (United States)

    Viereck, Michael; Gaulton, Anna; Digles, Daniela; Ecker, Gerhard F

    2014-06-01

    Currently, there are more than 800 well characterized human membrane transport proteins (including channels and transporters) and there are estimates that about 10% (approx. 2000) of all human genes are related to transport. Membrane transport proteins are of interest as potential drug targets, for drug delivery, and as a cause of side effects and drug–drug interactions. In light of the development of Open PHACTS, which provides an open pharmacological space, we analyzed selected membrane transport protein classification schemes (Transporter Classification Database, ChEMBL, IUPHAR/BPS Guide to Pharmacology, and Gene Ontology) for their ability to serve as a basis for pharmacology driven protein classification. A comparison of these membrane transport protein classification schemes by using a set of clinically relevant transporters as use-case reveals the strengths and weaknesses of the different taxonomy approaches.

  15. Effects of sparse sampling schemes on image quality in low-dose CT

    International Nuclear Information System (INIS)

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-01-01

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce a health-risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promises in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data property, and compare effects of the sampling schemes on the image quality.Methods: Data properties of several sampling schemes are analyzed with respect to the CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling has been realized numerically based on the CBCT data.Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with similar image quality compared to the reference image and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction.Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic
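
    The compressive-sensing reconstructions referred to in this record are commonly posed (in one generic form, not necessarily the exact formulation used by the authors) as a constrained total-variation minimization:

        $$ \min_{\mathbf{x}} \ \|\mathbf{x}\|_{\mathrm{TV}} \quad \text{subject to} \quad \|A\mathbf{x} - \mathbf{b}\|_2 \le \epsilon, $$

    where $A$ is the projection operator for the sampled views, $\mathbf{b}$ the measured data and $\epsilon$ a noise tolerance; the sampling density and incoherence measures discussed above characterize how well such a problem is conditioned for a given sparse-sampling scheme.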

  16. A first generation numerical geomagnetic storm prediction scheme

    International Nuclear Information System (INIS)

    Akasofu, S.-I.; Fry, C.F.

    1986-01-01

    Because geomagnetic and auroral disturbances cause significant interference on many electrical systems, it is essential to develop a reliable geomagnetic and auroral storm prediction scheme. A first generation numerical prediction scheme has been developed. The scheme consists of two major computer codes which in turn consist of a large number of subroutine codes and of empirical relationships. First of all, when a solar flare occurs, six flare parameters are determined as the input data set for the first code which is devised to show the simulated propagation of solar wind disturbances in the heliosphere to a distance of 2 a.u. Thus, one can determine the relative location of the propagating disturbances with the Earth's position. The solar wind speed and the three interplanetary magnetic field (IMF) components are then computed as a function of time at the Earth's location or any other desired (space probe) locations. These quantities in turn become the input parameters for the second major code which computes first the power of the solar wind-magnetosphere dynamo as a function of time. The power thus obtained and the three IMF components can be used to compute or infer: the predicted geometry of the auroral oval; the cross-polar cap potential; the two geomagnetic indices AE and Dst; the total energy injection rate into the polar ionosphere; and the atmospheric temperature, etc. (author)

  17. A Comparative Study on Metadata Scheme of Chinese and American Open Data Platforms

    Directory of Open Access Journals (Sweden)

    Yang Sinan

    2018-01-01

    Full Text Available [Purpose/significance] Open government data is conducive to the rational development and utilization of data resources. It can encourage social innovation and promote economic development. Moreover, to ensure the effective utilization and social value of open government data, high-quality metadata schemes are necessary. [Method/process] First, this paper analyzes the related research on open government data at home and abroad. Then, it investigates the metadata schemes of the data platforms of several major Chinese local governments and compares them with the metadata standard of American open government data. [Result/conclusion] The paper reveals several shortcomings of Chinese local government open data that limit its usefulness, including that different governments use different metadata schemes, that the descriptions of data sets are too simple for further utilization, and that data are usually presented in HTML Web page format with low machine-readability. Therefore, the government should develop a standardized metadata scheme, drawing on mature and effective international metadata standards, to meet the social need for high-quality, high-value data.

  18. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    Full Text Available An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  19. Robust space-time extraction of ventricular surface evolution using multiphase level sets

    Science.gov (United States)

    Drapaca, Corina S.; Cardenas, Valerie; Studholme, Colin

    2004-05-01

    This paper focuses on the problem of accurately extracting the CSF-tissue boundary, particularly around the ventricular surface, from serial structural MRI of the brain acquired in imaging studies of aging and dementia. This is a challenging problem because of the common occurrence of peri-ventricular lesions which locally alter the appearance of white matter. We examine a level set approach which evolves a four dimensional description of the ventricular surface over time. This has the advantage of allowing constraints on the contour in the temporal dimension, improving the consistency of the extracted object over time. We follow the approach proposed by Chan and Vese which is based on the Mumford and Shah model and implemented using the Osher and Sethian level set method. We have extended this to the 4 dimensional case to propagate a 4D contour toward the tissue boundaries through the evolution of a 5D implicit function. For convergence we use region-based information provided by the image rather than the gradient of the image. This is adapted to allow intensity contrast changes between time frames in the MRI sequence. Results on time sequences of 3D brain MR images are presented and discussed.

  20. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with intensity inhomogeneous problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well solved. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. The role of experience curves for setting MEPS for appliances

    International Nuclear Information System (INIS)

    Siderius, Hans-Paul

    2013-01-01

    Minimum efficiency performance standards (MEPS) are an important policy instrument to raise the efficiency of products. In most schemes the concept of life cycle costs (LCC) is used to guide setting the MEPS levels. Although a large body of literature shows that product cost is decreasing with increasing cumulative production, the experience curve, this is currently not used for setting MEPS. This article shows how to integrate the concept of the experience curve into LCC calculations for setting MEPS in the European Union and applies this to household laundry driers, refrigerator-freezers and televisions. The results indicate that for driers and refrigerator-freezers at least twice the energy savings compared to the current approach can be achieved. These products also show that energy label classes can successfully be used for setting MEPS. For televisions an experience curve is provided, showing a learning rate of 29%. However, television prices do not show a relation with energy efficiency but are to a large extent determined by the time the product is placed on the market. This suggests to policy makers that for televisions and other products with a short (re)design and market cycle timing is more important than the MEPS levels itself. - Highlights: • We integrate experience curves into life cycle cost calculations for MEPS. • For driers and refrigerators this results in at least twice the energy savings. • For flat panel televisions an experience curve is provided
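
    For reference, an experience (learning) curve of the kind used in this record is conventionally written as a power law in cumulative production; the notation below is the standard textbook form, not necessarily the exact parametrization of the article:

        $$ C(x) = C_1\, x^{-b}, \qquad \mathrm{LR} = 1 - 2^{-b}, $$

    where $C(x)$ is the unit cost after a cumulative production of $x$ units, $C_1$ the cost of the first unit, and $\mathrm{LR}$ the learning rate (the reported 29% for televisions means the cost falls by 29% with each doubling of cumulative production). Folding this cost decline into the life cycle cost calculation is what shifts the cost-optimal MEPS level.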

  2. Genetic progress in multistage dairy cattle breeding schemes using genetic markers.

    Science.gov (United States)

    Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P

    2005-04-01

    QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.

  3. The Higgs boson inclusive decay channels $H \\to b\\bar{b}$ and $H \\to gg$ up to four-loop level

    OpenAIRE

    Wang, Sheng-Quan; Wu, Xing-Gang; Zheng, Xu-Chang; Shen, Jian-Ming; Zhang, Qiong-Lian

    2013-01-01

    The principle of maximum conformality (PMC) has been suggested to eliminate the renormalization scheme and renormalization scale uncertainties, which are unavoidable in conventional scale setting and are usually important sources of error in theoretical estimates. In this paper, by applying PMC scale setting, we analyze two important inclusive Standard Model Higgs decay channels, $H \to b\bar{b}$ and $H \to gg$, up to the four-loop and three-loop levels, respectively. After PMC scale setting, it is found that the ...

  4. A Spatial Domain Quantum Watermarking Scheme

    International Nuclear Information System (INIS)

    Wei Zhan-Hong; Chen Xiu-Bo; Niu Xin-Xin; Yang Yi-Xian; Xu Shu-Jiang

    2016-01-01

    This paper presents a spatial domain quantum watermarking scheme. For a quantum watermarking scheme, a feasible quantum circuit is key to achieving it, and this paper gives such a circuit for the presented scheme. To construct the circuit, a new quantum multi-control rotation gate, which can be realized with basic quantum gates, is designed. With this quantum circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides applying the given quantum circuit in reverse, the paper gives another watermark extraction algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Unlike other quantum watermarking schemes, all the given quantum circuits can be implemented with basic quantum gates. Moreover, the scheme is a spatial domain watermarking scheme, and is not based on any transform algorithm on quantum images. Meanwhile, it can ensure that the watermark remains secure even if its presence is discovered. With the given quantum circuit, simulation experiments are carried out for the presented scheme. The experimental results show that the scheme performs well in terms of visual quality and embedding capacity. (paper)

  5. Public goods and voting on formal sanction schemes

    DEFF Research Database (Denmark)

    Putterman, Louis; Tyran, Jean-Robert Karl; Kamei, Kenju

    2011-01-01

    The burgeoning literature on the use of sanctions to support the provision of public goods has largely neglected the use of formal or centralized sanctions. We let subjects playing a linear public goods game vote on the parameters of a formal sanction scheme capable of either resolving or exacerbating the free-rider problem, depending on parameter settings. Most groups quickly learned to choose parameters inducing efficient outcomes. We find that cooperative orientation, political attitude, gender and intelligence have a small but sometimes significant influence on voting.

  6. Public Goods and Voting on Formal Sanction Schemes

    DEFF Research Database (Denmark)

    Putterman, Louis; Tyran, Jean-Robert; Kamei, Kenju

    The burgeoning literature on the use of sanctions to support public goods provision has largely neglected the use of formal or centralized sanctions. We let subjects playing a linear public goods game vote on the parameters of a formal sanction scheme capable both of resolving and of exacerbating the free-rider problem, depending on parameter settings. Most groups quickly learned to choose parameters inducing efficient outcomes. But despite uniform money payoffs implying common interest in those parameters, voting patterns suggest significant influence of cooperative orientation, political...

  7. Associated computational plasticity schemes for nonassociated frictional materials

    DEFF Research Database (Denmark)

    Krabbenhoft, K.; Karim, M. R.; Lyamin, A. V.

    2012-01-01

    A new methodology for computational plasticity of nonassociated frictional materials is presented. The new approach is inspired by the micromechanical origins of friction and results in a set of governing equations similar to those of standard associated plasticity. As such, procedures previously developed for associated plasticity are applicable with minor modification. This is illustrated by adaptation of the standard implicit scheme. Moreover, the governing equations can be cast in terms of a variational principle, which after discretization is solved by means of a newly developed second

  8. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

    Science.gov (United States)

    Zhang, Ru; Guo, Tao; Liu, Jianyi

    2018-03-01

    Within a system in which several IDS have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts from rough set theory, the importance of the attributes in alerts is calculated first. With the resulting attribute importance, we can compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated or not. The time interval must also be taken into consideration: the allowed time interval is computed individually for each type of alert, since different types of alerts may have different time gaps between two occurrences. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset; the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of the security system can avoid wasting time on useless alerts.
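
    A minimal sketch of the aggregation test described above: attribute matches are weighted by an importance score (supplied here as an input rather than derived from rough set theory), and two alerts merge only if the weighted similarity reaches a threshold and they fall within the allowed time window for that alert type. All names, weights and thresholds below are illustrative assumptions.

        # Illustrative alert-aggregation test; the attribute weights stand in for the
        # rough-set-derived importance values described in the abstract.
        def similarity(alert_a, alert_b, weights):
            total = sum(weights.values())
            matched = sum(w for attr, w in weights.items()
                          if alert_a.get(attr) == alert_b.get(attr))
            return matched / total if total else 0.0

        def can_aggregate(alert_a, alert_b, weights, threshold, max_gap_by_type):
            gap = abs(alert_a["time"] - alert_b["time"])
            allowed = max_gap_by_type.get(alert_a["type"], 60)   # seconds, assumed default
            return gap <= allowed and similarity(alert_a, alert_b, weights) >= threshold

        # Hypothetical example:
        weights = {"type": 0.4, "src_ip": 0.3, "dst_ip": 0.2, "dst_port": 0.1}
        a = {"type": "scan", "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "dst_port": 22, "time": 100}
        b = {"type": "scan", "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "dst_port": 23, "time": 130}
        print(can_aggregate(a, b, weights, threshold=0.8, max_gap_by_type={"scan": 60}))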

  9. Labeling schemes for bounded degree graphs

    DEFF Research Database (Denmark)

    Adjiashvili, David; Rotbart, Noy Galil

    2014-01-01

    We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...

  10. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  11. A Novel, Automatic Quality Control Scheme for Real Time Image Transmission

    Directory of Open Access Journals (Sweden)

    S. Ramachandran

    2002-01-01

    Full Text Available A novel scheme to compute energy on-the-fly and thereby control the quality of the image frames dynamically is presented along with its FPGA implementation. This scheme is suitable for incorporation in image compression systems such as video encoders. In this new scheme, processing is automatically stopped when the desired quality is achieved for the image being processed by using a concept called pruning. Pruning also increases the processing speed by a factor of more than two when compared to the conventional method of processing without pruning. An MPEG-2 encoder implemented using this scheme is capable of processing good quality monochrome and color images of sizes up to 1024 × 768 pixels at the rate of 42 and 28 frames per second, respectively, with a compression ratio of over 17:1. The encoder is also capable of working in the fixed pruning level mode with user programmable features.

  12. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulation, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for capturing solution to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamics Implicit Surface, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In an Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general the two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
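
    For reference, each of the level set functions mentioned in this record is advected by the standard transport equation of the cited Osher–Sethian formulation,

        $$ \frac{\partial \phi_i}{\partial t} + \mathbf{u} \cdot \nabla \phi_i = 0, $$

    with $\mathbf{u}$ the flow velocity and one function $\phi_i$ per interface; the difficulty addressed above is that, under numerical diffusion, the two functions needed to distinguish gas, liquid and solid may overlap or leave gaps, which the active-contour-inspired correction repairs.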

  13. Cluster synchronization for directed community networks via pinning partial schemes

    International Nuclear Information System (INIS)

    Hu Cheng; Jiang Haijun

    2012-01-01

    Highlights: ► Cluster synchronization for directed community networks is proposed by pinning partial schemes. ► Each community is considered as a whole. ► Several novel pinning criteria are derived based on the information of communities. ► A numerical example with simulation is provided. - Abstract: In this paper, we focus on driving a class of directed networks to achieve cluster synchronization by pinning schemes. The desired cluster synchronization states are no longer decoupled orbits but a set of un-decoupled trajectories. Each community is considered as a whole and the synchronization criteria are derived based on the information of communities. Several pinning schemes including feedback control and adaptive strategy are proposed to select controlled communities by analyzing the information of each community such as indegrees and outdegrees. In all, this paper answers several challenging problems in pinning control of directed community networks: (1) What communities should be chosen as controlled candidates? (2) How many communities are needed to be controlled? (3) How large should the control gains be used in a given community network to achieve cluster synchronization? Finally, an example with numerical simulations is given to demonstrate the effectiveness of the theoretical results.

  14. Integrated data lookup and replication scheme in mobile ad hoc networks

    Science.gov (United States)

    Chen, Kai; Nahrstedt, Klara

    2001-11-01

    Accessing remote data is a challenging task in mobile ad hoc networks. Two problems have to be solved: (1) how to learn about available data in the network; and (2) how to access desired data even when the original copy of the data is unreachable. In this paper, we develop an integrated data lookup and replication scheme to solve these problems. In our scheme, a group of mobile nodes collectively host a set of data to improve data accessibility for all members of the group. They exchange data availability information by broadcasting advertising (ad) messages to the group using an adaptive sending rate policy. The ad messages are used by other nodes to derive a local data lookup table, and to reduce data redundancy within a connected group. Our data replication scheme predicts group partitioning based on each node's current location and movement patterns, and replicates data to other partitions before partitioning occurs. Our simulations show that data availability information can quickly propagate throughout the network, and that the successful data access ratio of each node is significantly improved.

  15. On piecewise constant level-set (PCLS) methods for the identification of discontinuous parameters in ill-posed problems

    International Nuclear Information System (INIS)

    De Cezaro, A; Leitão, A; Tai, X-C

    2013-01-01

    We investigate level-set-type methods for solving ill-posed problems with discontinuous (piecewise constant) coefficients. The goal is to identify the level sets as well as the level values of an unknown parameter function on a model described by a nonlinear ill-posed operator equation. The PCLS approach is used here to parametrize the solution of a given operator equation in terms of an L2 level-set function, i.e. the level-set function itself is assumed to be a piecewise constant function. Two distinct methods are proposed for computing stable solutions of the resulting ill-posed problem: the first is based on Tikhonov regularization, while the second is based on the augmented Lagrangian approach with total variation penalization. Classical regularization results (Engl H W et al 1996 Mathematics and its Applications (Dordrecht: Kluwer)) are derived for the Tikhonov method. On the other hand, for the augmented Lagrangian method, we succeed in proving the existence of (generalized) Lagrangian multipliers in the sense of (Rockafellar R T and Wets R J-B 1998 Grundlehren der Mathematischen Wissenschaften (Berlin: Springer)). Numerical experiments are performed for a 2D inverse potential problem (Hettlich F and Rundell W 1996 Inverse Problems 12 251–66), demonstrating the capabilities of both methods for solving this ill-posed problem in a stable way (complicated inclusions are recovered without any a priori geometrical information on the unknown parameter). (paper)
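
    In generic form (written here for a nonlinear forward operator $F$, noisy data $y^{\delta}$ and regularization parameter $\alpha$; the authors' penalty acts on the piecewise constant level-set parametrization), the Tikhonov variant mentioned above minimizes

        $$ \mathcal{J}_\alpha(\phi) = \|F(u(\phi)) - y^{\delta}\|^2 + \alpha\,\|\phi - \phi_0\|_{L^2}^2, $$

    where $u(\phi)$ is the piecewise constant coefficient assembled from the level-set function $\phi$ and $\phi_0$ is an initial guess; the augmented Lagrangian alternative replaces the quadratic penalty by a total variation term with Lagrangian multipliers.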

  16. Local Economic Trading Schemes and their implications for marketing assumptions, concepts, and practices

    NARCIS (Netherlands)

    Crowther, D.; Greene, A-M.; Hosking, D.M.

    2002-01-01

    This paper focuses on the relationship between a particular social practice - local exchange trading systems or schemes (LETS) - and what we here call the "mainstream" marketing paradigm. It begins by discussing some of the key principles that are thought to set LETS apart from other, "more

  17. Elucidation of molecular kinetic schemes from macroscopic traces using system identification.

    Directory of Open Access Journals (Sweden)

    Miguel Fribourg

    2017-02-01

    Full Text Available Overall cellular responses to biologically-relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. Our results indicate that our methodology

  18. Feedback Compression Schemes for Downlink Carrier Aggregation in LTE-Advanced

    DEFF Research Database (Denmark)

    Nguyen, Hung Tuan; Kovac, Istvan; Wang, Yuanye

    2011-01-01

    With full channel state information (CSI) available, it has been shown that carrier aggregation (CA) in the downlink can significantly improve the data rate experienced at the user equipments (UE) [1], [2], [3], [4]. However, full CSI feedback in all component carriers (CCs) requires a large portion of the uplink bandwidth, and the feedback information increases linearly with the number of CCs. Therefore, the performance gain brought by deploying CA could be easily hindered if the amount of CSI feedback is not thoroughly controlled. In this paper we analyze several feedback overhead compression schemes in CA systems. To avoid a major re-design of the feedback schemes, only CSI compression schemes closely related to the ones specified in LTE-Release 8 and LTE-Release 9 are considered. Extensive simulations at system level were carried out to evaluate the performance of these feedback...

  19. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which makes it easy to handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. To address this, we propose a new level set algorithm that simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on NVIDIA graphics processing units to fully take advantage of the local nature of the LBM. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information make it possible to detect objects with and without edges, and the 2D histogram information ensures the effectiveness of the method in a noisy environment. Experimental results on synthetic and real images demonstrate, subjectively and objectively, the performance of the proposed method.

  20. Level-set dynamics and mixing efficiency of passive and active scalars in DNS and LES of turbulent mixing layers

    NARCIS (Netherlands)

    Geurts, Bernard J.; Vreman, Bert; Kuerten, Hans; Luo, Kai H.

    2001-01-01

    The mixing efficiency in a turbulent mixing layer is quantified by monitoring the surface-area of level-sets of scalar fields. The Laplace transform is applied to numerically calculate integrals over arbitrary level-sets. The analysis includes both direct and large-eddy simulation and is used to

  1. Sequential injection/bead injection lab-on-valve schemes for on-line solid phase extraction and preconcentration of ultra-trace levels of heavy metals with determination by ETAAS and ICPMS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald; Miró, Manuel

    2003-01-01

    are focused on the applications of SI-BI-LOV protocols for on-line microcolumn based solid phase extraction of ultra-trace levels of heavy metals, employing the so-called renewable surface separation and preconcentration manipulatory scheme. Two types of sorbents have been employed as packing material...

  2. HYBRID SYSTEM BASED FUZZY-PID CONTROL SCHEMES FOR UNPREDICTABLE PROCESS

    Directory of Open Access Journals (Sweden)

    M.K. Tan

    2011-07-01

    Full Text Available In general, the primary aim of the polymerization industry is to enhance process operation in order to obtain a product of high quality and purity. However, a large amount of heat is released rapidly and unexpectedly during the mixing of the two reactants, phenol and formalin, due to the exothermic behavior of the reaction. The unpredictable heat causes the process temperature to deviate and hence affects the quality of the product. Therefore, it is vital to control the process temperature during polymerization. In modern industry, fuzzy logic is commonly used to auto-tune PID controllers for controlling the process temperature. However, this method needs an experienced operator to fine-tune the fuzzy membership functions and the universe of discourse via a trial-and-error approach, so the setting of the fuzzy inference system might not be accurate due to human error. Besides that, control of the process can be challenging due to rapid changes in the plant parameters, which increase the process complexity. This paper proposes an optimization scheme using a hybrid of Q-learning (QL) and a genetic algorithm (GA) to optimize the fuzzy membership functions, allowing the conventional fuzzy-PID controller to control the process temperature more effectively. The performance of the proposed optimization scheme is compared with that of the existing fuzzy-PID scheme. The results show that the proposed optimization scheme is able to control the process temperature more effectively even when a disturbance is introduced.

  3. Assessing the groundwater recharge under various irrigation schemes in Central Taiwan

    Science.gov (United States)

    Chen, Shih-Kai; Jang, Cheng-Shin; Lin, Zih-Ciao; Tsai, Cheng-Bin

    2014-05-01

    Flooded paddy fields can be considered a major source of groundwater recharge in Central Taiwan. The risk to rice production has increased notably due to climate change in this area. To respond to agricultural water shortages caused by climate change without affecting future rice yields, water-saving irrigation is a practical solution. The System of Rice Intensification (SRI) was developed as a set of insights and practices for growing irrigated rice. Based on the water-saving irrigation practice of SRI, the impact of this methodology on the reduction of groundwater recharge was assessed in Central Taiwan. The three-dimensional finite element groundwater model FEMWATER, with variable boundary condition functions, was applied to simulate groundwater recharge under different irrigation schemes. According to local climatic and environmental characteristics associated with the SRI methodology, the change in infiltration rate was evaluated and compared with that of traditional irrigation schemes, namely continuous irrigation and a rotational irrigation scheme. The simulation results showed that the average infiltration rate in the rice growing season decreased when the SRI methodology was applied, and the total groundwater recharge of SRI with a 5-day irrigation interval was reduced by 12% and 9% compared with continuous irrigation (6 cm constant ponding water depth) and the rotational scheme (5-day irrigation interval with 6 cm initial ponding water depth), respectively. The results could be used as a basis for planning long-term water resource management strategies for adaptation to climate change in Central Taiwan. Keywords: SRI, Irrigation schemes, Groundwater recharge, Infiltration

  4. Steps to discern sustainability criteria for a certification scheme of bioethanol in Brazil: Approach and difficulties

    International Nuclear Information System (INIS)

    Delzeit, R.; Holm-Mueller, K.

    2009-01-01

    Taking Brazilian bioethanol as an example, this paper presents possible sustainability criteria for a certification scheme aimed at minimizing negative socio-ecological impacts and increasing the sustainable production of biomass. We describe the methods that led us to the identification of a first set of feasible sustainability criteria for Brazilian bioethanol and discuss issues to be considered when developing certification schemes for sustainability. General problems of a certification scheme lie in the inherent danger of introducing new non-tariff trade barriers and in the difficulty of including important larger-scale issues such as land conversion and food security. A certification system cannot replace a thorough analysis of policy impacts on sustainability issues. (author)

  5. SET overexpression in HEK293 cells regulates mitochondrial uncoupling proteins levels within a mitochondrial fission/reduced autophagic flux scenario

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Luciana O.; Goto, Renata N. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Neto, Marinaldo P.C. [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Sousa, Lucas O. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Curti, Carlos [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Leopoldino, Andréia M., E-mail: andreiaml@usp.br [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil)

    2015-03-06

    We hypothesized that SET, a protein that accumulates in some cancer types and in Alzheimer disease, is involved in cell death through mitochondrial mechanisms. We assessed the mRNA and protein levels of the mitochondrial uncoupling proteins UCP1, UCP2 and UCP3 (S and L isoforms) by quantitative real-time PCR and immunofluorescence, as well as other mitochondrial parameters, in HEK293 cells overexpressing the SET protein (HEK293/SET), either in the presence or absence of oxidative stress induced by the pro-oxidant t-butyl hydroperoxide (t-BHP). SET overexpression in HEK293 cells decreased UCP1 and increased UCP2 and UCP3 (S/L) mRNA and protein levels, whilst also preventing lipid peroxidation and decreasing the content of cellular ATP. SET overexpression also (i) decreased the area of mitochondria and increased the number of organelles and lysosomes, (ii) increased mitochondrial fission, as demonstrated by increased FIS1 mRNA and FIS-1 protein levels, an apparent accumulation of DRP-1 protein, and an increase in the VDAC protein level, and (iii) reduced autophagic flux, as demonstrated by a decrease in LC3B lipidation (LC3B-II) in the presence of chloroquine. Therefore, SET overexpression in HEK293 cells promotes mitochondrial fission and reduces autophagic flux in apparent association with up-regulation of UCP2 and UCP3; this implies a potential involvement in cellular processes that are deregulated in conditions such as Alzheimer's disease and cancer. - Highlights: • SET, UCPs and autophagy prevention are correlated. • SET action has mitochondrial involvement. • UCP2/3 may reduce ROS and prevent autophagy. • SET protects cells from ROS via UCP2/3.

  6. Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing

    Science.gov (United States)

    Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline

    2017-11-01

    Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
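    One simple flavor of the asynchrony-tolerant idea can be sketched as follows: a 1D heat equation is split across two notional processing elements whose halo exchange may lag by a random number of steps, and the AT variant extrapolates the stale halo values forward in time from two older levels to limit the error introduced by the delay. The delay model, the extrapolation formula, and all parameters are illustrative assumptions, not the specific schemes or combustion solver used in the work above.

      import numpy as np
      rng = np.random.default_rng(1)

      nx, nu = 64, 0.01
      dx = 1.0 / nx
      dt = 0.4 * dx * dx / nu                      # stable explicit time step
      nsteps = 400
      x = np.linspace(0.0, 1.0, nx, endpoint=False)

      def step(u, left_halo, right_halo):
          """One explicit FTCS update with externally supplied halo values."""
          ul = np.concatenate(([left_halo], u[:-1]))
          ur = np.concatenate((u[1:], [right_halo]))
          return u + nu * dt / dx**2 * (ul - 2.0 * u + ur)

      def run(asynchrony_tolerant, max_delay=2):
          u0 = np.sin(2 * np.pi * x)               # periodic initial condition
          a, b = u0[:nx // 2].copy(), u0[nx // 2:].copy()
          hist_a, hist_b = [a.copy()], [b.copy()]  # time levels the "network" has seen
          for _ in range(nsteps):
              k = int(rng.integers(0, max_delay + 1))   # staleness of the halo data
              k = min(k, len(hist_a) - 1)
              kk = min(k + 1, len(hist_a) - 1)
              a_old, a_older = hist_a[-1 - k], hist_a[-1 - kk]
              b_old, b_older = hist_b[-1 - k], hist_b[-1 - kk]
              if asynchrony_tolerant and k > 0:
                  # extrapolate the stale boundary values forward by k steps
                  a_first = a_old[0] + k * (a_old[0] - a_older[0])
                  a_last = a_old[-1] + k * (a_old[-1] - a_older[-1])
                  b_first = b_old[0] + k * (b_old[0] - b_older[0])
                  b_last = b_old[-1] + k * (b_old[-1] - b_older[-1])
              else:
                  a_first, a_last = a_old[0], a_old[-1]
                  b_first, b_last = b_old[0], b_old[-1]
              a = step(a, left_halo=b_last, right_halo=b_first)   # periodic coupling
              b = step(b, left_halo=a_last, right_halo=a_first)
              hist_a.append(a.copy()); hist_b.append(b.copy())
          return np.concatenate([a, b])

      exact = np.sin(2 * np.pi * x) * np.exp(-nu * (2 * np.pi)**2 * dt * nsteps)
      for at in (False, True):
          print(f"asynchrony-tolerant={at}: max error {np.abs(run(at) - exact).max():.3e}")

    The point of the comparison is that only the subdomain boundary points see stale data, and a time extrapolation of that data recovers most of the accuracy lost to the random delay without any additional synchronization.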

  7. Hierarchical Control Scheme for Voltage Harmonics Compensation in an Islanded Droop-Controlled Microgrid

    DEFF Research Database (Denmark)

    Savaghebi, Mehdi; Guerrero, Josep M.; Jalilian, Alireza

    2011-01-01

    In this paper, a microgrid hierarchical control scheme is proposed which includes primary and secondary control levels. The primary level comprises distributed generators (DGs) local controllers. The local controller mainly consists of active and reactive power controllers, voltage and current...... controllers, and virtual impedance loop. A novel virtual impedance structure is proposed to achieve proper sharing of non-fundamental power among the microgrid DGs. The secondary level is designed to manage compensation of voltage harmonics at the microgrid load bus (LB) to which the sensitive loads may...... be connected. Also, restoration of LB voltage amplitude and microgrid frequency to the rated values is directed by the secondary level. These functions are achieved by sending proper control signals to the local controllers. The simulation results show the effectiveness of the proposed control scheme....
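    A minimal sketch of the frequency/voltage restoration aspect of such a hierarchy (harmonic compensation and the virtual impedance loop are not modelled here): two droop-controlled DGs share a load at the primary level, and a secondary integral action shifts both droop references so that frequency and load-bus voltage return to rated values without disturbing the droop-determined power sharing. All gains, ratings, and the quasi-static network model are assumptions made for the sketch.

      import numpy as np

      f_nom, V_nom = 50.0, 230.0
      m = np.array([0.010, 0.020])     # P-f droop gains of DG1, DG2 (Hz per kW)
      n = np.array([0.040, 0.040])     # Q-V droop gains (V per kvar)
      P_load, Q_load = 40.0, 12.0      # total load (kW, kvar)
      ki_f, ki_v, dt = 0.5, 0.5, 0.1   # secondary integral gains and time step
      df = dV = 0.0                    # secondary correction signals

      for _ in range(300):
          # Primary level: a common frequency/voltage forces droop-consistent
          # sharing, with P_i proportional to 1/m_i and Q_i proportional to 1/n_i.
          P = (1.0 / m) / np.sum(1.0 / m) * P_load
          Q = (1.0 / n) / np.sum(1.0 / n) * Q_load
          f = f_nom + df - m[0] * P[0]          # same value seen by either DG
          V = V_nom + dV - n[0] * Q[0]
          # Secondary level: integrate the deviation and broadcast the correction
          # (df, dV) to both local controllers to restore the rated values.
          df += ki_f * (f_nom - f) * dt
          dV += ki_v * (V_nom - V) * dt

      print(f"P sharing = {P.round(1)} kW, Q sharing = {Q.round(1)} kvar")
      print(f"frequency = {f:.3f} Hz, load-bus voltage = {V:.1f} V")

    Because the secondary correction is added equally to both droop characteristics, the steady-state power sharing fixed by the droop gains is preserved while the frequency and voltage offsets introduced by the droop action are removed.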

  8. Development of the Latvian scheme for energy auditing of buildings and inspection of boilers and air-conditioning systems. Final report institutional set-up

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-12-01

    To implement EU directive 93/76/EEC on the reduction of carbon dioxide emissions by increasing energy efficiency and EU directive 2002/91/EC on building energy efficiency, Latvia must establish an institutional scheme and define all the organisations involved. From a general perspective, the institutional scheme must as a minimum include the following four key players: the administrator, the operating unit, the auditors or independent experts, and finally the client. Furthermore, institutions dealing with the financing of energy efficiency improvement activities, training and certification of experts, information about auditing and energy efficiency, etc. need to be involved. At present there is no governmental or private Latvian organisation that could fully take over the duties of an energy audit scheme secretariat. It is therefore recommended initially to place the secretariat as a separate, new unit within the Ministry of Economy, financed by the Ministry of Economy, with the intention of establishing at a later stage (after e.g. 5 years) a separate, new agency, an Energy Efficiency Agency, partly financed by the income from the energy audit and boiler inspection schemes. The Secretariat should, both in its initial phase and later, assign the tasks of training, information campaigns, quality assurance and evaluation to external organisations. (BA)

  9. Development of a moisture scheme for the explicit numerical simulation of moist convection

    CSIR Research Space (South Africa)

    Bopape, Mary-Jane M

    2010-09-01

    Full Text Available The aim of this study is to add a moisture scheme to the NSM. As a first step, a simple model is developed that is equivalent to the first pressure-coordinate nonhydrostatic model used to simulate cumulonimbus clouds in 1974. The equation set that includes...

  10. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term of the level set framework. Our method is able to capture bias fields with quite general profiles. Moreover, it is robust to initialization and thereby allows fully automated application. The proposed method has been used for images of various modalities with promising results.
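    A schematic form of the kind of local-intensity-clustering energy described above, with assumed notation (the paper's exact formulation may differ): the bias b(y) multiplies the cluster centers c_i inside a kernel-weighted neighborhood, and the membership functions M_i are built from the level set function phi.

      E(\phi, b, \{c_i\}) =
        \int_\Omega \Big( \sum_{i=1}^{N} \int_\Omega K_\sigma(y - x)\,
        \big| I(x) - b(y)\, c_i \big|^2 \, M_i(\phi(x)) \, dx \Big) \, dy,
      \qquad M_1(\phi) = H(\phi), \quad M_2(\phi) = 1 - H(\phi),

    where K_\sigma is a localizing (e.g., Gaussian) kernel, H is the Heaviside function, b is the bias field and the c_i are cluster centers. Minimizing alternately in c_i and b gives closed-form updates of the form

      c_i = \frac{\int_\Omega (K_\sigma * b)\, I\, M_i(\phi)\, dx}
                 {\int_\Omega (K_\sigma * b^{2})\, M_i(\phi)\, dx},
      \qquad
      b = \frac{K_\sigma * \big( I \sum_i c_i\, M_i(\phi) \big)}
               {K_\sigma * \big( \sum_i c_i^{2}\, M_i(\phi) \big)},

    while \phi is updated by the gradient flow of the integrated energy.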

  11. Predictors of Availing Maternal Health Schemes: A community based study in Gujarat, India

    Directory of Open Access Journals (Sweden)

    Kranti Suresh Vora

    2014-06-01

    Full Text Available Background: India continues to face challenges in improving key maternal health indicators, with about one-third of global maternal deaths occurring in India. Utilization of health care services is an important issue in India, with a significant proportion of home deliveries and a majority of mothers not receiving adequate antenatal care. Mortality is highest among poor rural women, who also have the lowest utilization of services. To make maternal healthcare more equitable, numerous schemes such as Janani Suraksha Yojana, Chiranjeevi Yojana and Kasturba Poshan Sahay Yojana have been introduced. Studies suggest that utilization of such schemes by the target population is low and that there is a need to understand the factors affecting maternal health care utilization in the context of these schemes. The current community-based study was conducted in rural Gujarat to understand the characteristics of women who utilize such schemes and the predictors of utilization. Methodology: Data collection was carried out in two districts of Gujarat from June to August 2013 as a pilot phase of the MATIND project. The community-based cross-sectional study included 827 households, and socio-demographic details of 1454 women aged 15-49 years were collected; 265 mothers who had delivered after 1 January 2013 were included in the regression analyses. Data analysis was carried out with R version 3.0.1. Results: The analysis indicates that socioeconomic variables such as caste, maternal variables such as education, and health system variables such as use of a government facility are important predictors of maternal health scheme utilization. The results suggest that socioeconomic and health system factors are the best predictors of availing a scheme. Conclusion: Health system variables, along with individual-level variables, are important predictors of availing maternal health schemes. The study indicates the need to examine all levels of predictors for utilizing government health schemes to maximize the benefit for underserved

  12. Gonioscopy in the dog: inter-examiner variability and the search for a grading scheme.

    Science.gov (United States)

    Oliver, J A C; Cottrell, B C; Newton, J R; Mellersh, C S

    2017-11-01

    To investigate inter-examiner variability in the gonioscopic evaluation of pectinate ligament abnormality in dogs and to assess the level of inter-examiner agreement for four different gonioscopy grading schemes. Two examiners performed gonioscopy in 98 eyes of 49 Welsh springer spaniel dogs and estimated the percentage circumference of the iridocorneal angle affected by pectinate ligament abnormality to the nearest 5%. The percentage scores assigned to each eye by the two examiners were compared. Inter-examiner agreement was assessed, after assignment of the percentage scores to each of four grading schemes, by Cohen's kappa statistic. There was a strong positive correlation between the results of the two examiners (R=0·91). In general, Examiner 1 scored individual eyes higher than Examiner 2, especially for eyes in which both examiners diagnosed pectinate ligament abnormality. A "good" level of agreement could only be achieved with a gonioscopy grading scheme of no more than three categories and with a relatively large intermediate bandwidth (κ=0·68). A three-tiered grading scheme might represent an improvement on hereditary eye disease schemes which simply classify dogs as either "affected" or "unaffected" for pectinate ligament abnormality. However, the large intermediate bandwidth of this scheme would only allow for the additional detection of those dogs with marked progression of pectinate ligament abnormality, which would be considered most at risk of primary closed-angle glaucoma. © 2017 British Small Animal Veterinary Association.
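    For illustration only (the paired scores below are hypothetical, not the study's data), the sketch bands two examiners' percentage scores into a three-tier grading scheme and computes Cohen's kappa; the band edges (below 20%, 20-59%, 60% and above) are assumptions chosen for the example.

      import numpy as np

      def to_grade(percent, edges=(20, 60)):
          """Map a 0-100% score onto grades 0 (low), 1 (intermediate), 2 (high)."""
          return int(np.digitize(percent, edges))

      def cohens_kappa(g1, g2, n_grades=3):
          counts = np.zeros((n_grades, n_grades))
          for a, b in zip(g1, g2):
              counts[a, b] += 1
          n = counts.sum()
          po = np.trace(counts) / n                        # observed agreement
          pe = (counts.sum(1) @ counts.sum(0)) / n**2      # chance agreement
          return (po - pe) / (1.0 - pe)

      # Hypothetical paired scores (% circumference affected) from two examiners
      examiner1 = [0, 5, 10, 25, 30, 45, 50, 65, 80, 95]
      examiner2 = [0, 5, 15, 20, 35, 40, 60, 70, 75, 90]
      g1 = [to_grade(p) for p in examiner1]
      g2 = [to_grade(p) for p in examiner2]
      print("kappa =", round(cohens_kappa(g1, g2), 2))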

  13. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim

    2012-01-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that a slight reduction in the prospective multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We also propose a novel proportional-fair multiuser switched-based scheduling scheme and demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz of the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.
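    A hedged Monte Carlo sketch of the trade-off being described (not the paper's rate-region analysis): a threshold-based switched scheduler polls users in a fixed order and serves the first one whose SNR clears the threshold, which costs a little throughput relative to full-feedback max-SNR scheduling but needs far fewer feedback messages. The user count, mean SNR, and threshold rule are assumptions made for the sketch.

      import numpy as np
      rng = np.random.default_rng(2)

      n_users, n_slots, mean_snr = 8, 200_000, 5.0          # assumed parameters
      threshold = mean_snr * np.log(n_users)                # a simple heuristic choice

      snr = rng.exponential(mean_snr, size=(n_slots, n_users))   # Rayleigh fading SNRs

      # Full feedback: every user reports its SNR and the best one transmits
      rate_full = np.log2(1.0 + snr.max(axis=1)).mean()
      feedback_full = n_users

      # Switched diversity: sequential polling against the threshold
      above = snr >= threshold
      first_hit = np.argmax(above, axis=1)                  # first user above threshold
      any_hit = above.any(axis=1)
      chosen = np.where(any_hit, first_hit, n_users - 1)    # else keep the last polled user
      polled = np.where(any_hit, first_hit + 1, n_users)    # feedback messages actually used
      rate_sw = np.log2(1.0 + snr[np.arange(n_slots), chosen]).mean()

      print(f"full feedback : {rate_full:.3f} bits/s/Hz, {feedback_full} messages/slot")
      print(f"switched      : {rate_sw:.3f} bits/s/Hz, {polled.mean():.2f} messages/slot")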

  14. Short-Term Saved Leave Scheme

    CERN Multimedia

    2007-01-01

    As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme a...

  17. Compact Spreader Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.

    2014-07-25

    This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and likely better amplitude reproducibility than FKs, which, in turn, involve more modest costs in both construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems, resulting in space and cost savings while preserving flexibility and beam quality.

  18. (t, n) Threshold d-Level Quantum Secret Sharing.

    Science.gov (United States)

    Song, Xiu-Li; Liu, Yan-Bing; Deng, Hong-Yao; Xiao, Yong-Gang

    2017-07-25

    Most Quantum Secret Sharing (QSS) schemes are (n, n) threshold 2-level schemes, in which the 2-level secret cannot be reconstructed until all n shares are collected. In this paper, we propose a (t, n) threshold d-level QSS scheme, in which the d-level secret can be reconstructed only if at least t shares are collected. Compared with (n, n) threshold 2-level QSS, the proposed QSS provides better universality, flexibility, and practicability. Moreover, in this scheme, none of the participants knows the other participants' shares; even the trusted reconstructor Bob 1 is no exception. The transformation of the particles includes some simple operations such as the d-level CNOT, the Quantum Fourier Transform (QFT), the Inverse Quantum Fourier Transform (IQFT), and generalized Pauli operators. The transformed particles need not be transmitted from one participant to another over the quantum channel. Security analysis shows that the proposed scheme can resist intercept-resend, entangle-measure, collusion, and forgery attacks. Performance comparison shows that it has lower computation and communication costs than other similar schemes when 2 < t < n - 1.
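    For orientation, the classical (t, n) threshold structure that d-level schemes of this kind build on can be sketched with Shamir secret sharing over Z_d for prime d: any t shares reconstruct the secret by Lagrange interpolation at zero, while fewer than t reveal nothing. This is only the classical backbone, not the quantum protocol (QFT, d-level CNOT, and Pauli operations) described above.

      import random

      def make_shares(secret, t, n, d):
          """Split `secret` in Z_d into n shares, any t of which reconstruct it."""
          coeffs = [secret] + [random.randrange(d) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, k, d) for k, c in enumerate(coeffs)) % d)
                  for x in range(1, n + 1)]

      def reconstruct(shares, d):
          """Lagrange interpolation at x = 0 over the prime field Z_d."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = (num * (-xj)) % d
                      den = (den * (xi - xj)) % d
              secret = (secret + yi * num * pow(den, -1, d)) % d
          return secret

      d, t, n, secret = 17, 3, 5, 11                  # d must be prime here
      shares = make_shares(secret, t, n, d)
      assert reconstruct(shares[:t], d) == secret      # any t shares suffice
      assert reconstruct(random.sample(shares, t), d) == secret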

  19. OUTLAWING AMNESTY: THE RETURN OF CRIMINAL JUSTICE IN TRANSITIONAL JUSTICE SCHEMES*

    Directory of Open Access Journals (Sweden)

    Lisa J. Laplante, University of Connecticut-School of Law, United States

    2012-11-01

    Full Text Available Abstract: This Article responds to an apparent gap in the scholarly literature which fails to merge the fields of human rights law and international criminal law—a step that would resolve the current debate as to whether any amnesty in transitional justice settings is lawful. More specifically, even though both fields are a subset of transitional justice in general, the discipline of international criminal law still supports the theory of “qualified amnesties” in transitional justice schemes, while international human rights law now stands for the proposition that no amnesty is lawful in those settings. This Article brings attention to this new development through a discussion of the Barrios Altos case. This Article seeks to reveal how an international human rights decision can dramatically impact state practice, thus also contributing to a pending question in international human rights law as to whether such jurisprudence is effective in increasing human rights protections. The Article concludes by looking at the implications of this new legal development in regard to amnesties in order to encourage future research regarding the role of criminal justice in transitional justice schemes. Keywords: Amnesty in the Americas. Transitional Justice. Human Rights Violations

  20. Developing and supporting coordinators of structured mentoring schemes in South Africa

    Directory of Open Access Journals (Sweden)

    Penny Abbott

    2010-10-01

    Research purpose: The aim of this research is to discover what the characteristics of coordinators of structured mentoring schemes in South Africa are, what is required of such coordinators and how they feel about their role, with a view to improving development and support for them. Motivation for the study: The limited amount of information about role requirements for coordinators which is available in the literature is not based on empirical research. This study aims to supply the empirical basis for improved development and support for coordinators. Research design and method: A purposive sample of 25 schemes was identified and both quantitative and qualitative data, obtained through questionnaires and interviews, were analysed using descriptive statistics and thematic analysis. Main findings: Functions of coordinators tend to be similar across different types of mentoring schemes. A passion for mentoring is important, as the role involves many frustrations. There is little formalised development and support for coordinators. Practical/managerial implications: The study clarifies the functions of the coordinator, offers a job description and profile and makes suggestions on how to improve the development of the coordinator’s skills. Contribution/value-add: An understanding of what is required from a coordinator, how the necessary knowledge and skills can be developed and how the coordinator can be supported, adds value to an organisation setting up or reviewing its structured mentoring schemes.