WorldWideScience

Sample records for level set scheme

  1. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to 'visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  2. Multi-domain, higher order level set scheme for 3D image segmentation on the GPU

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2010-01-01

    to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...

  3. Gamma spectrometry; level schemes

    International Nuclear Information System (INIS)

    Blachot, J.; Bocquet, J.P.; Monnand, E.; Schussler, F.

    1977-01-01

    The research presented dealt with: a new beta emitter, an isomer of 131Sn; the 136I levels fed through the radioactive decay of 136Te (20.9 s); the A=145 chain (β decay of Ba, La and Ce, and level schemes for 145La, 145Ce, 145Pr); and the A=147 chain (β decay of La and Ce, and the level schemes of 147Ce and 147Pr).

  4. Riemann-problem and level-set approaches for two-fluid flow computations I. Linearized Godunov scheme

    NARCIS (Netherlands)

    B. Koren (Barry); M.R. Lewis; E.H. van Brummelen (Harald); B. van Leer

    2001-01-01

    A finite-volume method is presented for the computation of compressible flows of two immiscible fluids at very different densities. The novel ingredient in the method is a two-fluid linearized Godunov scheme, allowing for flux computations in case of different fluids (e.g., water and

  5. On 165Ho level scheme

    International Nuclear Information System (INIS)

    Ardisson, Claire; Ardisson, Gerard.

    1976-01-01

    A 165Ho level scheme was constructed which led to the interpretation of sixty γ rays belonging to the decay of 165Dy. A new 702.9 keV level was identified as the 5/2- member of the 1/2-[541] Nilsson orbital.

  6. Numerical simulations of natural or mixed convection in vertical channels: comparisons of level-set numerical schemes for the modeling of immiscible incompressible fluid flows

    International Nuclear Information System (INIS)

    Li, R.

    2012-01-01

    The aim of this research dissertation is to study natural and mixed convection of fluid flows, and to develop and validate numerical schemes for interface tracking in order to later treat incompressible and immiscible fluid flows. In a first step, an original numerical method, based on finite-volume discretizations, is developed for modeling low Mach number flows with large temperature differences. Three physical applications on air flowing through vertical heated parallel plates were investigated. We showed that the optimum spacing corresponding to the peak heat flux transferred from an array of isothermal parallel plates cooled by mixed convection is smaller than that for natural or forced convection when the pressure drop at the outlet is kept constant. We also proved that mixed convection flows resulting from an imposed flow rate may exhibit unexpected physical solutions; an alternative model based on a prescribed total pressure at the inlet and a fixed pressure at the outlet sections gives more realistic results. For channels heated by a heat flux on one wall only, surface radiation tends to suppress the onset of re-circulations at the outlet and to equalize the wall temperatures. In a second step, the mathematical model coupling the incompressible Navier-Stokes equations and the level-set method for interface tracking is derived. Improvements in fluid volume conservation obtained by using high-order (ENO-WENO) discretization schemes for the transport equation and variants of the signed distance equation are discussed. (author)

  7. LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica

    Science.gov (United States)

    Caprio, M. A.

    2005-09-01

    LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots. Program summary: Title of program: LevelScheme. Catalogue identifier: ADVZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ. Operating systems: any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux. Programming language used: Mathematica 4. Number of bytes in distributed program, including test and documentation: 3 051 807. Distribution format: tar.gz. Nature of problem: creation of level scheme diagrams; creation of publication-quality multipart figures incorporating diagrams and plots. Method of solution: a set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.

  8. Level Scheme of 223Fr

    International Nuclear Information System (INIS)

    Gaeta, R.; Gonzalez, J.A.; Gonzalez, L.; Roldan, C.

    1972-01-01

    A study has been made of the decay of 227Ac to levels of 223Fr, by means of alpha spectrometers with Si barrier detectors and gamma spectrometers with Ge(Li) detectors. The rotational bands 1/2-[541↓], 1/2-[530↑] and 3/2-[532↓] have been identified, as well as two octupolar bands associated with the fundamental one. The results obtained indicate that the unified model is applicable in this intermediate zone of the nuclide chart. (Author) 150 refs

  9. An efficient quantum scheme for Private Set Intersection

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    Private Set Intersection allows a client to privately compute the intersection of its set with a server's set, one of the most fundamental problems in privacy-preserving multiparty computation. In this paper, we present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. It is therefore well suited to big data services in the Cloud and to large-scale client-server networks.

  10. A segmentation and classification scheme for single tooth in MicroCT images based on 3D level set and k-means++.

    Science.gov (United States)

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2017-04-01

    Accurate classification of different anatomical structures of teeth from medical images provides crucial information for stress analysis in dentistry. Usually, the anatomical structures of teeth are manually labeled by experienced clinical doctors, which is time-consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing 3-dimensional (3D) information, and to classify the tooth by employing unsupervised learning, i.e., the k-means++ method. In order to evaluate the proposed method, experiments are conducted on sufficient and extensive datasets of mandibular molars. The experimental results show that our method can achieve higher accuracy and robustness compared to three other clustering methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
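    The k-means++ clustering step named above can be sketched generically. The following is a minimal 1D illustration of k-means++ seeding followed by standard Lloyd iterations; the data and all names are my own illustrative assumptions, not the paper's implementation.

```python
import random

def kmeans_pp_seed(points, k, rng):
    """k-means++ seeding: each new center is drawn with probability
    proportional to its squared distance from the nearest chosen center."""
    centers = [rng.choice(points)]
    for _ in range(k - 1):
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm with k-means++ initialization (1D toy version)."""
    rng = random.Random(seed)
    centers = kmeans_pp_seed(points, k, rng)
    for _ in range(iters):
        # assignment step: index of the nearest center for each point
        labels = [min(range(k), key=lambda i: (p - centers[i]) ** 2)
                  for p in points]
        # update step: move each center to the mean of its cluster
        for i in range(k):
            members = [p for p, l in zip(points, labels) if l == i]
            if members:
                centers[i] = sum(members) / len(members)
    return sorted(centers)
```

    On two well-separated 1D clusters, e.g. intensities near 1.0 and 5.0, the returned centers converge to the cluster means.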

  11. Single particle level scheme for alpha decay

    International Nuclear Information System (INIS)

    Mirea, M.

    1998-01-01

    The fine structure phenomenon in alpha decay was evidenced by Rosenblum. In this process the kinetic energy of the emitted particle has several determined values related to the structure of the parent and the daughter nucleus. The probability to find the daughter in a low-lying state was considered strongly dependent on the spectroscopic factor, defined as the square of the overlap between the wave function of the parent in the ground state and the wave functions of the specific excited states of the daughter. This treatment provides a qualitative agreement with the experimental results if the variations of the penetrability between different excited states are neglected. Based on the single particle structure during fission, a new formalism explained quantitatively the fine structure of cluster decay. It was suggested that this formalism can also be applied to alpha decay. For this purpose, the first step is to construct the level scheme of this type of decay. Such a scheme, obtained with the super-asymmetric two-center potential, is plotted for the alpha decay of 223Ra. It is interesting to note that, diabatically, the level with spin 3/2 emerging from 1i11/2 (ground state of the parent) reaches an excited state of the daughter, in agreement with experiment. (author)

  12. Setting aside transactions from pyramid schemes as impeachable ...

    African Journals Online (AJOL)

    These schemes, which are often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find ...

  13. Statistical interpretation of low energy nuclear level schemes

    Energy Technology Data Exchange (ETDEWEB)

    Egidy, T von; Schmidt, H H; Behkami, A N

    1988-01-01

    Nuclear level schemes and neutron resonance spacings yield information on level densities and level spacing distributions. A total of 75 nuclear level schemes with 1761 levels of known spin and parity was investigated. The A-dependence of level density parameters is discussed. The spacing distributions of levels near the ground state indicate a transitional character between regular and chaotic properties, while chaos dominates near the neutron binding energy.

  14. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
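    The two-point formula E(L) = E_CBS + B/L^α admits a closed-form extrapolation: given energies at two cardinal numbers L1 < L2, eliminating B yields E_CBS = (L2^α·E(L2) - L1^α·E(L1)) / (L2^α - L1^α). A minimal sketch of this arithmetic, with illustrative numbers rather than values from the paper:

```python
def cbs_extrapolate(l_small, e_small, l_large, e_large, alpha=3.0):
    """Two-point basis-set extrapolation assuming E(L) = E_CBS + B / L**alpha.
    Solving the two equations for E_CBS eliminates the system constant B."""
    p_small = l_small ** alpha
    p_large = l_large ** alpha
    return (p_large * e_large - p_small * e_small) / (p_large - p_small)
```

    If the energies follow the model exactly, the limit is recovered exactly; in practice α may be fitted per system (the "system-dependent" variant described above).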

  15. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    International Nuclear Information System (INIS)

    Spackman, Peter R.; Karton, Amir

    2015-01-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.

  16. Two-level schemes for the advection equation

    Science.gov (United States)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of advection operators in conservative (divergent) and non-conservative (characteristic) forms; the advection operator is then skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed on the basis of the general theory of stability (well-posedness) of operator-difference schemes, and the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. A conditionally stable implicit Lax-Wendroff scheme is also constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by numerical results for a model two-dimensional problem.
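    For orientation, the classical explicit Lax-Wendroff scheme mentioned above can be written out in the simplest setting: 1D constant-coefficient advection u_t + a·u_x = 0 on a periodic grid with finite differences (a simplification of the paper's finite-element formulation, not its method):

```python
def lax_wendroff_step(u, c):
    """One explicit Lax-Wendroff step for u_t + a*u_x = 0 on a periodic
    grid, with Courant number c = a*dt/dx (stable for |c| <= 1):
    u_i^{n+1} = u_i - c/2 (u_{i+1} - u_{i-1}) + c^2/2 (u_{i+1} - 2u_i + u_{i-1})."""
    n = len(u)
    return [
        u[i]
        - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
        + 0.5 * c * c * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
        for i in range(n)
    ]
```

    At c = 1 the scheme reduces to an exact shift of the profile by one cell, which makes a convenient sanity check.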

  17. Revisiting the level scheme of the proton emitter 151Lu

    International Nuclear Information System (INIS)

    Wang, F.; Sun, B.H.; Liu, Z.; Scholey, C.; Eeckhaudt, S.; Grahn, T.; Greenlees, P.T.; Jones, P.; Julin, R.; Juutinen, S.; Kettelhut, S.; Leino, M.; Nyman, M.; Rahkila, P.; Saren, J.; Sorri, J.; Uusitalo, J.; Ashley, S.F.; Cullen, I.J.; Garnsworthy, A.B.; Gelletly, W.; Jones, G.A.; Pietri, S.; Podolyak, Z.; Steer, S.; Thompson, N.J.; Walker, P.M.; Williams, S.; Bianco, L.; Darby, I.G.; Joss, D.T.; Page, R.D.; Pakarinen, J.; Rigby, S.; Cullen, D.M.; Khan, S.; Kishada, A.; Gomez-Hornillos, M.B.; Simpson, J.; Jenkins, D.G.; Niikura, M.; Seweryniak, D.; Shizuma, Toshiyuki

    2015-01-01

    An experiment aiming to search for new isomers in the region of the proton emitter 151Lu was performed at the Accelerator Laboratory of the University of Jyväskylä (JYFL), combining the high-resolution γ-ray array JUROGAM, the gas-filled RITU separator and the GREAT detectors with the triggerless total data readout (TDR) acquisition system. In these proceedings, we revisit the level scheme of 151Lu using the proton-tagging technique. A level scheme consistent with the latest experimental results is obtained, and three additional levels are identified at high excitation energies. (author)

  18. A MEPS is a MEPS is a MEPS. Comparing Ecodesign and Top Runner schemes for setting product efficiency standards

    Energy Technology Data Exchange (ETDEWEB)

    Siderius, P.J.S. [NL Agency, Croeselaan 15, P.O. Box 8242, 3503 RE Utrecht (Netherlands); Nakagami, H. [Jyukankyo Research Institute, 3-29, Kioi-cho, Chiyoda-ku Tokyo, 102-0094 (Japan)

    2013-02-15

    Both Top Runner in Japan and Ecodesign in the European Union are schemes that set requirements on the energy efficiency (minimum efficiency performance standards, MEPS) of a variety of products. This article provides an overview of the main characteristics and results of both schemes and gives recommendations for improving them. Both schemes contribute significantly to the energy efficiency targets set by the European Commission and the Japanese government. Although it is difficult to compare the absolute levels of the requirements, comparison of the relative improvements and of the savings on household electricity consumption (11% in Japan, 16% in the EU) suggests they are in the same range. Furthermore, the time needed to set or review requirements is considerable in both schemes (between 5 and 6 years on average), and manageability will increasingly become a challenge. The appeal of the Top Runner approach is that the most efficient product (the Top Runner) sets the standard for all products at the next target year. Although the Ecodesign scheme includes the elements for a Top Runner approach, it could exploit this principle more explicitly. On the other hand, the Top Runner scheme could benefit from using a real minimum efficiency performance standard instead of a fleet average. This would make monitoring and enforcement simpler and more transparent, and would open the scheme to products where the market situation is less clear.

  19. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Aerospace Engineering

    2014-01-08

    We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M-1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e., regions claimed by more than one material (overlaps) or by no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
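    The voting idea described above can be sketched at a single grid point. The representation below (a dict of pairwise level-set values, positive meaning the first material's side) and the tie-handling are my own illustrative assumptions, not the paper's algorithm:

```python
def designate_material(phi, num_materials):
    """Given pairwise level-set values phi[(a, b)] at one grid point
    (value > 0 means material a's side of the (a, b) interface,
    value < 0 means material b's side), tally one vote per pairwise
    function and return the material with the most votes.
    A tie signals an indeterminate state and returns None."""
    votes = [0] * num_materials
    for (a, b), value in phi.items():
        votes[a if value > 0 else b] += 1
    best = max(votes)
    winners = [m for m, v in enumerate(votes) if v == best]
    return winners[0] if len(winners) == 1 else None
```

    With three materials and consistent signs, exactly one material wins all of its pairwise comparisons; inconsistent signs (an overlap or vacuum) produce a tie.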

  20. On reinitializing level set functions

    Science.gov (United States)

    Min, Chohong

    2010-04-01

    In this paper, we consider reinitializing level set functions through the equation ϕ_t + sgn(ϕ_0)(‖∇ϕ‖ - 1) = 0 [16]. The method of Russo and Smereka [11] is taken in the spatial discretization of the equation. The spatial discretization is, simply speaking, a second order ENO finite difference with subcell resolution near the interface. Our main interest is in the temporal discretization of the equation. We compare three temporal discretizations: the second order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. The fact that the time in the equation is fictitious suggests the hypothesis that all the temporal discretizations yield the same stationary state. The fact that the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second order ENO spatial discretization suggests another hypothesis, that the forward Euler temporal discretization should trigger numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods attain second order accuracy, and the forward Euler method converges with order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best of the three: compared to Runge-Kutta, it is twice as fast and requires half the memory at the same accuracy.
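    The reinitialization equation can be illustrated with a deliberately simplified 1D sketch: forward Euler in time and first-order Godunov upwinding in space, without the subcell-resolution fix or ENO accuracy the paper actually uses. All names are my own:

```python
def reinit_step(phi, phi0, dx, dt):
    """One forward-Euler step of phi_t + sgn(phi0)(|phi_x| - 1) = 0 on a
    1D grid (interior points only). Godunov upwinding selects the one-sided
    difference consistent with information flowing away from the interface."""
    n = len(phi)
    out = phi[:]
    for i in range(1, n - 1):
        s = 1.0 if phi0[i] > 0 else (-1.0 if phi0[i] < 0 else 0.0)
        a = (phi[i] - phi[i - 1]) / dx   # backward difference
        b = (phi[i + 1] - phi[i]) / dx   # forward difference
        if s > 0:
            grad2 = max(max(a, 0.0) ** 2, min(b, 0.0) ** 2)
        else:
            grad2 = max(min(a, 0.0) ** 2, max(b, 0.0) ** 2)
        out[i] = phi[i] - dt * s * (grad2 ** 0.5 - 1.0)
    return out
```

    Since the equation drives ϕ toward ‖∇ϕ‖ = 1, an exact signed distance function is a stationary state of the iteration, which is the fixed-point property the abstract's hypotheses turn on.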

  1. New data on excited level scheme of 73Ge nucleus

    International Nuclear Information System (INIS)

    Kosyak, Yu.G.; Kaipov, D.K.; Chekushina, L.V.

    1990-01-01

    New data on the level scheme of 73Ge, obtained by inelastic scattering of reactor fast neutrons, are presented. γ-spectra from the reaction 73Ge(n, n'γ)73Ge were measured at angles of 90 and 124 deg relative to the incident neutron beam. Experimental populations of the levels are studied. Twenty-nine new γ-transitions have been identified, and two new levels have been introduced.

  2. Sensor Data Security Level Estimation Scheme for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Alex Ramos

    2015-01-01

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.

  3. Sensor Data Security Level Estimation Scheme for Wireless Sensor Networks

    Science.gov (United States)

    Ramos, Alex; Filho, Raimir Holanda

    2015-01-01

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates. PMID:25608215

  4. Sensor data security level estimation scheme for wireless sensor networks.

    Science.gov (United States)

    Ramos, Alex; Filho, Raimir Holanda

    2015-01-19

    Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.

  5. GRAP, Gamma-Ray Level-Scheme Assignment

    International Nuclear Information System (INIS)

    Franklyn, C.B.

    2002-01-01

    1 - Description of program or function: An interactive program for allocating gamma-rays to an energy level scheme. The procedure allows for searching for new candidate levels of the form: 1) L1 + G(A) + G(B) = L2; 2) G(A) + G(B) = G(C); 3) G(A) + G(B) = C (C is a user-defined number); 4) L1 + G(A) + G(B) + G(C) = L2. The procedure indicates the intensity balance of feed and decay of each energy level, and provides for optimization of a level energy (and associated error). The overall procedure allows for pre-defining certain gamma-rays as belonging to particular regions of the level scheme, for example, high energy transition levels, or due to beta-decay. 2 - Method of solution: Search for cases in which the energy difference between two energy levels is equal to a gamma-ray energy within user-defined limits. 3 - Restrictions on the complexity of the problem: Maximum number of gamma-rays: 999; Maximum gamma-ray energy: 32000 units; Minimum gamma-ray energy: 10 units; Maximum gamma-ray intensity: 32000 units; Minimum gamma-ray intensity: 0.001 units; Maximum number of levels: 255; Maximum level energy: 32000 units; Minimum level energy: 10 units; Maximum error on energy, intensity: 32 units; Minimum error on energy, intensity: 0.001 units; Maximum number of combinations: ca. 6400; Maximum number of gamma-ray types: 127
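    The core of a search of type 1) above, L1 + G(A) + G(B) = L2, can be sketched as a brute-force scan over known levels and gamma-ray pairs. This is a generic illustration of the combinatorial idea, not GRAP's code; the energies and tolerance below are made up:

```python
def cascade_candidates(levels, gammas, tol):
    """GRAP-style case-1 search: find gamma pairs (ga, gb) with
    L1 + ga + gb = L2 within tolerance tol. Each match suggests a
    candidate intermediate level at energy L1 + ga (or L1 + gb)."""
    matches = []
    for l1 in levels:
        for l2 in levels:
            gap = l2 - l1
            if gap <= 0:
                continue
            for i, ga in enumerate(gammas):
                for gb in gammas[i + 1:]:
                    if abs(ga + gb - gap) <= tol:
                        matches.append((l1, l2, ga, gb, l1 + ga))
    return matches
```

    The tolerance plays the role of the user-defined energy limits; a real implementation would also weigh the individual energy errors and intensity balance described above.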

  6. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  7. Healthy incentive scheme in the Irish full-day-care pre-school setting.

    LENUS (Irish Health Repository)

    Molloy, C Johnston

    2013-12-16

    A pre-school offering a full-day-care service provides for children aged 0-5 years for more than 4 h/d. Researchers have called for studies that will provide an understanding of nutrition and physical activity practices in this setting. Obesity prevention in pre-schools, through the development of healthy associations with food and health-related practices, has been advocated. While guidelines for the promotion of best nutrition and health-related practice in the early years' setting exist in a number of jurisdictions, associated regulations have been noted to be poor, with the environment of the child-care facility mainly evaluated for safety. Much cross-sectional research outlines poor nutrition and physical activity practice in this setting. However, there are few published environmental and policy-level interventions targeting the child-care provider with, to our knowledge, no evidence of such interventions in Ireland. The aim of the present paper is to review international guidelines and recommendations relating to health promotion best practice in the pre-school setting: service and resource provision; food service and food availability; and the role and involvement of parents in pre-schools. Intervention programmes and assessment tools available to measure such practice are outlined; and insight is provided into an intervention scheme, formulated from available best practice, that was introduced into the Irish full-day-care pre-school setting.

  8. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive

  9. Adopting the EU sustainable performance scheme Level(s) in the Danish building sector

    DEFF Research Database (Denmark)

    Kanafani, Kai; Rasmussen, Freja Nygaard; Zimmermann, Regitze Kjær

    2018-01-01

    to life cycle assessment (LCA) requirements within the Level(s) scheme. As a measure for the Danish building sector's LCA practice, the specifications for LCAbyg, the official Danish building LCA tool, are used. In 2017, the European Commission's Joint Research Centre launched Level(s) as a vo...

  10. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • The multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during the evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of the narrowband and optimization approach results in an efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between the regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher-dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance than the pixel-based method.
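
    The core idea named in the abstract, representing the interface implicitly as the zero level set of a higher-dimensional function and restricting updates to a narrow band, can be sketched as follows. This is an illustrative NumPy fragment, not the authors' reconstruction code; the grid size, inclusion radius and conductivity values are arbitrary.

```python
import numpy as np

# Minimal sketch: a circular inclusion represented implicitly as the zero
# level set of a signed distance function phi, with a narrow band selecting
# the grid points near the interface where an evolution step would be applied.
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

phi = np.sqrt(X**2 + Y**2) - 0.5            # signed distance to a circle of radius 0.5
conductivity = np.where(phi < 0, 2.0, 1.0)  # two-phase conductivity map

band_width = 0.1
narrow_band = np.abs(phi) < band_width      # points near the interface only

print(conductivity.mean())                  # reflects the inclusion's area fraction
print(narrow_band.sum(), "of", n * n, "points lie in the band")
```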

  11. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces moment competition and weakly supervised information into the energy functional construction. Different from region-based level set methods which use force competition, moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed, in which three independent points (the weakly supervised information) are manually labeled on the image. The intensity differences between these three points and the unlabeled pixels are then used to construct a force arm for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods with respect to initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  12. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  13. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  14. Pixel detector readout electronics with two-level discriminator scheme

    International Nuclear Information System (INIS)

    Pengg, F.

    1998-01-01

    In preparation for a silicon pixel detector with more than 3,000 readout channels per chip for operation at the future Large Hadron Collider (LHC) at CERN, the analog front end of the readout electronics has been designed and measured on several test arrays with 16 by 4 cells. They are implemented in the HP 0.8 μm process but are compatible with the design rules of the radiation-hard Honeywell 0.8 μm bulk process. Each cell contains a bump bonding pad, preamplifier, discriminator and control logic for masking and testing within a layout area of only 50 μm by 140 μm. A new two-level discriminator scheme has been implemented to cope with the problems of time-walk and interpixel cross-coupling. The measured gain of the preamplifier is 900 mV for a minimum ionizing particle (MIP, about 24,000 e⁻ for a 300 μm thick Si detector) with a return to baseline within 750 ns for a 1 MIP input signal. The full readout chain (without detector) shows an equivalent noise charge of 60 e⁻ r.m.s. The time-walk, a function of the separation between the two threshold levels, is measured to be 22 ns at a separation of 1,500 e⁻, which is adequate for the 40 MHz beam-crossing frequency at the LHC. The interpixel cross-coupling, measured with a 40 fF coupling capacitance, is less than 3%. A single cell consumes 35 μW at 3.5 V supply voltage.
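
    Why a two-level scheme helps with time-walk can be illustrated with a toy model: for idealized linear-rise pulses, the spread in threshold-crossing times across pulse amplitudes is much smaller at a low timing threshold than at a high one. This is only a sketch of the effect, not the chip's actual circuit behavior; the pulse shape, rise time and threshold values are invented.

```python
import numpy as np

# Time-walk is the variation of the discriminator firing time with pulse
# amplitude. A low "timing" threshold is crossed earlier and with far less
# amplitude dependence than a high threshold.
def crossing_time(amplitude, threshold, rise_time=25.0):
    """Time (ns) at which a linear-rise pulse of given amplitude crosses
    the threshold; valid while threshold < amplitude."""
    return rise_time * threshold / amplitude

amplitudes = np.array([1.0, 2.0, 4.0])  # pulse heights in MIP units (toy values)
low_thr, high_thr = 0.1, 0.5            # two discriminator levels (MIP, toy values)

walk_low = np.ptp(crossing_time(amplitudes, low_thr))
walk_high = np.ptp(crossing_time(amplitudes, high_thr))
print(f"time-walk, low threshold:  {walk_low:.2f} ns")
print(f"time-walk, high threshold: {walk_high:.2f} ns")
```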

  15. Set of difference splitting schemes for solving the Navier-Stokes incompressible equations in natural variables

    International Nuclear Information System (INIS)

    Koleshko, S.B.

    1989-01-01

    A three-parameter set of difference schemes is suggested to solve the Navier-Stokes equations with the use of the relaxation form of the continuity equation. The initial equations are stated for time increments. Use is made of splitting the operator into one-dimensional forms that reduce calculations to scalar factorizations. Calculated results for steady- and unsteady-state flows in a cavity are presented

  16. Schemes for Probabilistic Teleportation of an Unknown Three-Particle Three-Level Entangled State

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, two schemes for teleporting an unknown three-particle three-level entangled state are proposed. In the first scheme, two partially entangled three-particle three-level states are used as the quantum channels, while in the second scheme, three non-maximally entangled two-particle three-level states are employed as quantum channels. It is shown that the teleportation can be successfully realized with a certain probability, for both schemes, if the receiver adopts appropriate unitary transformations. It is also shown that the success probabilities of these two schemes are different.

  17. Voltage protection scheme for MG sets used to drive inductive energy storage systems

    International Nuclear Information System (INIS)

    Campen, G.L.; Easter, R.B.

    1977-01-01

    A recent tokamak proposal at ORNL called for MG (motor-generator) sets to drive the ohmic heating (OH) coil, which was to be subjected to 20 kV immediately after coil charge-up to initiate the experiment. Since most rotating machinery is inherently low voltage, including the machines available at ORNL, a mechanism was necessary to isolate the generators from the high voltage portions of the circuit before the appearance of this voltage. It is not the expected 20 kV at the coil that causes difficulty, because the main interrupting switch handles this. The voltage induced in the armature due to di/dt and the possibility of faults are the greatest causes for concern and are responsible for the complexity of the voltage protection scheme, which must accommodate any possible combination of fault time and location. Such a protection scheme is presented in this paper

  18. Energy mesh optimization for multi-level calculation schemes

    International Nuclear Information System (INIS)

    Mosca, P.; Taofiki, A.; Bellier, P.; Prevost, A.

    2011-01-01

    The industrial calculations of third generation nuclear reactors are based on sophisticated strategies of homogenization and collapsing at different spatial and energetic levels. An important issue to ensure the quality of these calculation models is the choice of the collapsing energy mesh. In this work, we show a new approach to generate optimized energy meshes starting from the SHEM 281-group library. The optimization model is applied on 1D cylindrical cells and consists of finding an energy mesh which minimizes the errors between two successive collision probability calculations. The former is realized over the fine SHEM mesh with Livolant-Jeanpierre self-shielded cross sections and the latter is performed with collapsed cross sections over the energy mesh being optimized. The optimization is done by the particle swarm algorithm implemented in the code AEMC, and multigroup flux solutions are obtained from standard APOLLO2 solvers. By this new approach, a set of new optimized meshes ranging from 10 to 50 groups has been defined for PWR and BWR calculations. This set will allow users to adapt the energy detail of the solution to the complexity of the calculation (assembly, multi-assembly, two-dimensional whole core). Some preliminary verifications, in which the accuracy of the new meshes is measured against a direct 281-group calculation, show that the 30-group optimized mesh offers a good compromise between simulation time and accuracy for a standard 17 × 17 UO2 assembly with and without control rods. (author)
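
    The particle swarm step can be sketched generically. The fragment below is a plain PSO minimizer with a toy quadratic objective standing in for the mesh-error functional; the AEMC implementation itself is not reproduced here, and all parameter values are illustrative.

```python
import random

# Generic particle swarm optimization sketch: minimize an error function over
# candidate solutions, as one might score a collapsed energy mesh against a
# fine-group reference calculation. Parameters w, c1, c2 are common defaults.
def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[pbest_val.index(min(pbest_val))][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest, objective(gbest)

# Toy stand-in for the mesh-error objective: a simple quadratic bowl.
best, err = pso(lambda p: sum(x * x for x in p), dim=3)
print(err)  # err should be near zero after convergence
```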

  19. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking in office and shopping-mall complexes during the peak time of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough sets. Rough sets are used for the extraction of uncertain rules that exist in the databases of parking situations. The inclusion of rough set theory captures accuracy and roughness, which are used to characterize uncertainty of the parking lot. Approximation accuracy is employed to describe the accuracy of a rough classification [1] according to different dynamic parking scenarios. As such, the proposed hybrid metaphor, comprising Tabu Search and rough sets, could provide substantial research directions for other similar hard optimization problems.

  20. Performance of a Two-Level Call Admission Control Scheme for DS-CDMA Wireless Networks

    Directory of Open Access Journals (Sweden)

    Fapojuwo Abraham O

    2007-01-01

    We propose a two-level call admission control (CAC) scheme for direct sequence code division multiple access (DS-CDMA) wireless networks supporting multimedia traffic and evaluate its performance. The first-level admission control assigns higher priority to real-time calls (also referred to as class 0 calls) in gaining access to the system resources. The second level admits non-real-time calls (or class 1 calls) based on the resources remaining after meeting the resource needs for real-time calls. However, to ensure some minimum level of performance for non-real-time calls, the scheme reserves some resources for such calls. The proposed two-level CAC scheme utilizes the delay-tolerant characteristic of non-real-time calls by incorporating a queue to temporarily store those that cannot be assigned resources at the time of initial access. We analyze and evaluate the call blocking, outage probability, throughput, and average queuing delay performance of the proposed two-level CAC scheme using Markov chain theory. The analytic results are validated by simulation results. The numerical results show that the proposed two-level CAC scheme provides better performance than the single-level CAC scheme. Based on these results, it is concluded that the proposed two-level CAC scheme serves as a good solution for supporting multimedia applications in DS-CDMA wireless communication systems.
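
    A drastically simplified version of the resource-reservation idea can be worked out with a one-dimensional birth-death chain: class 1 calls are admitted only while some guard capacity remains free, so their blocking probability exceeds that of class 0. This sketch omits the paper's queueing of class 1 calls and its CDMA interference model; all rates and capacities are illustrative.

```python
# Illustrative birth-death sketch (much simpler than the paper's full model):
# total capacity C resource units, class 1 (non-real-time) calls admitted only
# while fewer than C - g units are busy, so g units stay reserved for class 0.
def blocking_probs(C, g, lam0, lam1, mu):
    """Stationary blocking probabilities for the two classes (toy model)."""
    # Unnormalized stationary distribution pi[n] of the number of busy units.
    pi = [1.0]
    for n in range(1, C + 1):
        rate_in = lam0 + (lam1 if n - 1 < C - g else 0.0)  # arrivals admitted in state n-1
        pi.append(pi[-1] * rate_in / (n * mu))             # detailed balance
    Z = sum(pi)
    p = [x / Z for x in pi]
    p_block0 = p[C]                # class 0 blocked only when the system is full
    p_block1 = sum(p[C - g:])      # class 1 blocked once the reservation is reached
    return p_block0, p_block1

p0, p1 = blocking_probs(C=10, g=2, lam0=3.0, lam1=3.0, mu=1.0)
print(f"class 0 blocking: {p0:.4f}, class 1 blocking: {p1:.4f}")
```

    As expected, reserving g units makes class 1 blocking strictly larger than class 0 blocking, which is the price paid for guaranteeing real-time calls priority access.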

  2. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system-adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely those containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.

  3. Four-level conservative finite-difference schemes for Boussinesq paradigm equation

    Science.gov (United States)

    Kolkovska, N.

    2013-10-01

    In this paper a two-parameter family of four-level conservative finite difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed for the evaluation of the numerical solution. The preservation of the discrete energy with this method is proved. The schemes have been numerically tested on a single-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second order of convergence in space and time steps in the discrete maximal norm.

  4. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.

  5. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies
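
    The upwind finite difference discretization mentioned above can be sketched in one dimension: for phi_t + u phi_x = 0 with u > 0, the spatial derivative is taken with a backward difference so that information flows from the upwind direction. This is a minimal first-order illustration, not the authors' scheme; the grid, speed and initial profile are arbitrary.

```python
import numpy as np

# First-order upwind transport of a 1D level set function phi_t + u phi_x = 0.
# The zero crossing of phi (the "interface") moves with speed u.
n = 200
x = np.linspace(-1.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
phi = x + 0.5               # signed distance; interface (phi = 0) starts at x = -0.5

u = 1.0                     # constant advection speed to the right
dt = 0.5 * dx / abs(u)      # CFL-stable time step
steps = 100                 # moves the interface by steps * u * dt = 0.5

for _ in range(steps):
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx   # backward difference (upwind for u > 0)
    dphi[0] = dphi[1]                      # simple inflow extrapolation at the boundary
    phi = phi - dt * u * dphi

interface = x[np.argmin(np.abs(phi))]
print(f"interface position after transport: {interface:.3f}")  # near 0.0
```

    On this linear profile the first-order upwind scheme is exact; on curved profiles it introduces numerical diffusion, which is why the higher order ENO/WENO variants are preferred in practice.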

  6. Discrete level schemes sublibrary. Progress report by Budapest group

    International Nuclear Information System (INIS)

    Oestor, J.; Belgya, T.; Molnar, G.L.

    1997-01-01

    An entirely new discrete levels file has been created by the Budapest group according to the recommended principles, using the Evaluated Nuclear Structure Data File (ENSDF) as a source. The resulting library contains 96,834 levels and 105,423 gamma rays for 2,585 nuclei, with their characteristics such as energy, spin, parity and half-life, as well as gamma-ray energy and branching percentage

  7. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

    Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of the interface in multi-phase flow. This paper presents a general formula for the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow, with or without phase change, encountered in fusion science and engineering. A third-order ENO scheme and a second-order semi-implicit Crank-Nicolson scheme are used to update the convective and diffusion terms. The numerical results show this method can handle the complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work

  8. Soft rotator model and ²⁴⁶Cm low-lying level scheme

    Energy Technology Data Exchange (ETDEWEB)

    Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)

    1997-03-01

    A non-axial soft-rotator nuclear model is suggested as a self-consistent approach for the interpretation of level schemes, γ-transition probabilities and neutron interaction with even-even nuclei. (author)

  9. Setting aside Transactions from Pyramid Schemes as Impeachable Dispositions under South African Insolvency Legislation

    Directory of Open Access Journals (Sweden)

    Zingapi Mabe

    2016-10-01

    South African courts have experienced a rise in the number of cases involving schemes that promise a return on investment with interest rates considerably above the maximum allowed by law, or schemes which promise compensation from the active recruitment of participants. These schemes, often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find themselves defending against the trustee's claims for the return of the pay-outs to the insolvent estate. As the schemes are illegal and the pay-outs are often made in terms of void agreements, the question arises whether they can be returned to the insolvent estate. A similar situation arose in Griffiths v Janse van Rensburg 2015 ZASCA 158 (26 October 2015). The point of contention in this case was whether the illegality of the business of the scheme was a relevant consideration in determining whether the pay-outs were made in the ordinary course of business of the scheme. This paper discusses pyramid schemes in the context of impeachable dispositions in terms of the Insolvency Act 24 of 1936.

  10. Scheme of 2-dimensional atom localization for a three-level atom via quantum coherence

    OpenAIRE

    Zafar, Sajjad; Ahmed, Rizwan; Khan, M. Khalid

    2013-01-01

    We present a scheme for two-dimensional (2D) atom localization in a three-level atomic system. The scheme is based on quantum coherence via classical standing-wave fields between the two excited levels. Our results show that the conditional position probability depends significantly on the phase of the applied field and on the frequency detuning of spontaneously emitted photons. We obtain a single localization peak with probability close to unity by manipulating the control parameters. The effect of ato...

  11. Setting the stage for master's level success

    Science.gov (United States)

    Roberts, Donna

    Comprehensive reading, writing, research, and study skills play a critical role in a graduate student's success and ability to contribute to a field of study effectively. The literature indicated a need to support graduate student success in the areas of mentoring and navigation, as well as research and writing. The purpose of this two-phase mixed-methods explanatory study was to examine factors that characterize student success at the Master's level in the fields of education, sociology and social work. The study was grounded in a transformational learning framework which focused on three levels of learning: technical knowledge, practical or communicative knowledge, and emancipatory knowledge. The study included two data collection points. Phase one consisted of a Master's Level Success questionnaire that was sent via Qualtrics to graduate-level students at three colleges and universities in the Central Valley of California: a California State University campus, a University of California campus, and a private college campus. The results of the chi-square test indicated that seven questionnaire items were significant, with p-values less than .05. Phase two of the data collection consisted of semi-structured interviews, from which three themes emerged (coded using Dedoose software): (1) the need for more language and writing support at the Master's level, (2) the need for mentoring, especially for second-language learners, and (3) utilizing the strong influence of faculty in student success. It is recommended that institutions continually assess and strengthen their programs to meet the full range of learners and to support students to degree completion.

  12. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

    Several methods exist to estimate E and T. The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day. A paired t-test of the mean difference (p-value = 0.042, t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and the micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R2 value of 0.82.

  13. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case.
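    The hyperbolic tangent level set function referenced above (from Olsson and Kreiss) replaces the signed distance d by a smoothed indicator; a minimal sketch, with the interface thickness ε treated as an assumed free parameter:

```python
import math

def tanh_level_set(signed_distance, eps):
    """Conservative level set profile: psi = 0.5 * (tanh(d / (2*eps)) + 1),
    a smoothed Heaviside whose 0.5-isocontour marks the interface (d = 0).
    Transporting psi conservatively preserves the enclosed "mass" far
    better than transporting the raw signed distance."""
    return 0.5 * (math.tanh(signed_distance / (2.0 * eps)) + 1.0)

eps = 0.1  # assumed interface half-thickness parameter
values = [tanh_level_set(d, eps) for d in (-1.0, 0.0, 1.0)]
```
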

  14. Numerical simulation of interface movement in gas-liquid two-phase flows with Level Set method

    International Nuclear Information System (INIS)

    Li Huixiong; Chinese Academy of Sciences, Beijing; Deng Sheng; Chen Tingkuan; Zhao Jianfu; Wang Fei

    2005-01-01

    Numerical simulation of gas-liquid two-phase flow and heat transfer has been an attractive research topic for quite a long time, but still remains a knotty difficulty due to the inherent complexities of gas-liquid two-phase flow resulting from the existence of moving interfaces with topology changes. This paper reports the effort and the latest advances that have been made by the authors, with special emphasis on methods for computing solutions to the advection equation of the level set function, which is utilized to capture the moving interfaces in gas-liquid two-phase flows. Three different schemes, i.e. a simple finite difference scheme, the Superbee-TVD scheme and the fifth-order WENO scheme, in combination with the Runge-Kutta method, are respectively applied to solve the advection equation of the level set. A numerical procedure based on the well-verified SIMPLER method is employed to numerically calculate the momentum equations of the two-phase flow. The above-mentioned three schemes are employed to simulate the movement of four typical interfaces under five typical flow conditions. Analysis of the numerical results shows that the fifth-order WENO scheme and the Superbee-TVD scheme are much better than the simple finite difference scheme, and the fifth-order WENO scheme is the best for computing solutions to the advection equation of the level set. The fifth-order WENO scheme will be employed as the main scheme for solving the advection equation of the level set when gas-liquid two-phase flows are numerically studied in the future. (authors)
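    As a sketch of one of the schemes compared above, here is a 1D Superbee-TVD update for linear advection with positive speed and periodic boundaries (a generic textbook form, not the authors' exact implementation):

```python
def superbee(r):
    """Superbee flux limiter: psi(r) = max(0, min(2r, 1), min(r, 2))."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def advect_step(phi, c):
    """One step of 1D linear advection (positive speed) with a
    Superbee-limited high-resolution flux, periodic boundaries.
    c is the CFL number (0 < c <= 1)."""
    n = len(phi)
    face = [0.0] * n  # face[i] approximates phi at cell face i+1/2
    for i in range(n):
        dm = phi[i] - phi[i - 1]          # upwind slope
        dp = phi[(i + 1) % n] - phi[i]    # downwind slope
        r = dm / dp if dp != 0 else 0.0
        # limited anti-diffusive correction to the first-order upwind flux
        face[i] = phi[i] + 0.5 * superbee(r) * (1.0 - c) * dp
    return [phi[i] - c * (face[i] - face[i - 1]) for i in range(n)]

# Advect a step profile by half a cell
phi = advect_step([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0], c=0.5)
```

    The TVD property guarantees the total variation of the profile never increases, which is what suppresses the spurious oscillations a simple finite difference scheme produces at the interface.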

  15. A Robust Control Scheme for Medium-Voltage-Level DVR Implementation

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Loh, Poh Chiang; Li, Yun Wei

    2007-01-01

    of H∞ controller weighting function selection, inner current loop tuning, and system disturbance rejection capability is presented. Finally, the designed control scheme is extensively tested on a laboratory 10-kV MV-level DVR system with varying voltage sag (balanced and unbalanced) and loading (linear/nonlinear load and induction motor load) conditions. It is shown that the proposed control scheme is effective in both balanced and unbalanced sag compensation and load disturbance rejection, as its robustness is explicitly specified.

  16. Intercomparison between BATS and LSPM surface schemes, using point micrometeorological data set

    Energy Technology Data Exchange (ETDEWEB)

    Ruti, P.M.; Cacciamani, C.; Paccagnella, T. [Servizio Meteorologico Regionale, Bologna (Italy); Cassardo, C. [Turin Univ., Alessandria (Italy). Dipt. di Scienze e Technologie Avanzate; Longhetto, A. [Turin Univ. (Italy). Ist. di Fisica Generale; Bargagli, A. [ENEA, Roma (Italy). Gruppo di Dinamica dell'Atmosfera e dell'Oceano

    1997-08-01

    This work was developed with the aim of creating an archive of climatological values of sensible, latent and ground-atmosphere heat fluxes in the Po valley (CLIPS experiment); due to the unavailability of climatological archives of turbulent fluxes at the synoptic scale, we used the outputs of "stand-alone" runs of biospheric models; this archive could be used to check the parametrizations of large- and mesoscale models in the surface layer. We began checking the reliability of our proposal by comparing the model outputs with observed data. We selected a flat, rural area in the middle-east Po valley (San Pietro Capofiume, Italy) and used the data gathered in the experimental campaign SPCFLUX93 carried out there. The models adopted for the intercomparison were the biosphere-atmosphere transfer scheme (BATS) of Dickinson et al. (1986 version) and the land surface process model (LSPM) of Cassardo et al. (1996 version). An improved version of BATS was implemented by changing the soil thermal and hydrological subroutines in a substantial way. The upper boundary conditions used for all models were obtained by interpolating the synoptic observations carried out at the San Pietro Capofiume (Italy) station; the interpolation algorithm was tested with the data acquired in a fortnight campaign (SPCFLUX93) carried out at the same location during June 1993, showing good agreement between interpolated and observed variables. Two experiments were carried out; in the first, the vegetation parameter set used by BATS was used to force all models, while in the second a vegetation cover value closest to the observations at the site was used. 30 refs.

  17. A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.

    Science.gov (United States)

    Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing

    2017-08-23

    Security analysis is a very important issue for digital watermarking. Several years ago, according to Kerckhoffs' principle, the famous four security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of the watermarked-only attack. However, up to now there has been little application of the definition of these security levels to the theoretical analysis of the security of SS embedding schemes, due to the difficulty of the theoretical analysis. In this paper, based on the security definition, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes, namely the classical SS, the improved SS (ISS), the circular extension of ISS, and the nonrobust and robust natural watermarking schemes. The theoretical analysis of these typical SS schemes is successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.
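    The classical additive SS scheme analyzed above embeds a bit by adding a scaled pseudorandom carrier to the host signal and detects it by correlation; a minimal sketch with hypothetical Gaussian host samples:

```python
import random

def ss_embed(host, carrier, bit, strength):
    """Classical additive spread-spectrum embedding: y = x + strength*bit*w,
    where w is a pseudorandom +/-1 carrier and bit is in {-1, +1}."""
    return [x + strength * bit * w for x, w in zip(host, carrier)]

def ss_detect(signal, carrier):
    """Correlation detector: sign of <y, w> recovers the bit when the
    host-carrier correlation is small relative to strength * len(w)."""
    corr = sum(y * w for y, w in zip(signal, carrier))
    return 1 if corr >= 0 else -1

rng = random.Random(42)
n = 1000
host = [rng.gauss(0.0, 1.0) for _ in range(n)]       # hypothetical host samples
carrier = [rng.choice((-1.0, 1.0)) for _ in range(n)]  # secret-key carrier
marked = ss_embed(host, carrier, bit=-1, strength=0.5)
```

    The security analyses above study exactly what such watermarked signals leak about the carrier when an attacker observes only the marked content.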

  18. ESCL8R and LEVIT8R: interactive graphical analysis of γ-γ and γ-γ-γ coincidence data for level schemes

    Energy Technology Data Exchange (ETDEWEB)

    Radford, D C [Atomic Energy of Canada Ltd., Chalk River, ON (Canada). Chalk River Nuclear Labs.

    1992-08-01

    The extraction of complete and consistent nuclear level schemes from high-fold coincidence data will require intelligent computer programs. These will need to present the relevant data in an easily assimilated manner, keep track of all γ-ray assignments and expected coincidence intensities, and quickly find significant discrepancies between a proposed level scheme and the data. Some steps in this direction have been made at Chalk River. The programs ESCL8R and LEVIT8R, for analysis of two-fold and three-fold data sets respectively, allow fast and easy inspection of the data, and compare the results to expectations calculated on the basis of a proposed level scheme. Least-squares fits directly to the 2D and/or 3D data, with the intensities and energies of the level scheme transitions as parameters, allow fast and easy extraction of the optimum physics results. (author). 4 refs., 3 figs.
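    The least-squares idea described above (fitting level energies directly to measured transition energies) can be sketched for a toy three-level scheme with hypothetical energies, solved via the normal equations:

```python
def fit_level_energies(n_levels, transitions):
    """Least-squares fit of excited-level energies from measured gamma-ray
    transition energies.  Level 0 (ground state) is fixed at 0; transitions
    are (initial_level, final_level, energy) tuples, each modeled as
    E_gamma ~ E_initial - E_final.  Solves the normal equations with plain
    Gaussian elimination (partial pivoting)."""
    k = n_levels - 1  # unknowns: E_1 .. E_{n_levels-1}
    ata = [[0.0] * k for _ in range(k)]
    atb = [0.0] * k
    for i, f, g in transitions:
        row = [0.0] * k
        if i > 0:
            row[i - 1] += 1.0
        if f > 0:
            row[f - 1] -= 1.0
        for a in range(k):
            for b in range(k):
                ata[a][b] += row[a] * row[b]
            atb[a] += row[a] * g
    # Forward elimination on the augmented system [ata | atb]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, k):
            factor = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= factor * ata[col][c]
            atb[r] -= factor * atb[col]
    # Back substitution
    energies = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = atb[r] - sum(ata[r][c] * energies[c] for c in range(r + 1, k))
        energies[r] = s / ata[r][r]
    return [0.0] + energies

# Hypothetical cascade 2 -> 1 -> 0 plus a crossover 2 -> 0 (energies in keV);
# the slightly inconsistent 300.1 keV forces a genuine least-squares compromise
levels = fit_level_energies(3, [(2, 1, 300.1), (1, 0, 200.0), (2, 0, 500.0)])
```
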

  19. Abolition of set-aside schemes and its impacts on habitat structure in Denmark from 2007 to 2008

    DEFF Research Database (Denmark)

    Levin, Gregor

    2010-01-01

    Agriculture accounts for 65% of the Danish land area. Habitats for wild species are characterized by small patches, surrounded by intensive agriculture. Due to extensive management, set-aside land can, if located close to habitats, improve habitat structure in terms of patch size and connectivity. In 2008 set-aside schemes were abolished, leading to a decline in the area of set-aside land from 6% of all agricultural land in 2007 to 3% in 2008. We developed an indicator aiming to measure the effect of the reduced area of set-aside land on habitat structure. The indicator combines distance to habitats, potential corridors between habitats and the area percentage of set-aside land. Analyses show that the halving of the area of set-aside land has led to a 55% reduction of the effect of set-aside land on habitat structure.
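    A hypothetical sketch of a distance-weighted indicator in the spirit described above (the actual indicator also accounts for corridors; the decay length and patch data here are invented):

```python
import math

def habitat_indicator(patches, decay=500.0):
    """Illustrative structure indicator: each set-aside patch is weighted by
    proximity to existing habitat, so nearby set-aside land contributes more
    to patch size and connectivity than distant land.

    patches: list of (area_ha, distance_to_habitat_m) tuples.
    Returns the distance-weighted set-aside area."""
    return sum(area * math.exp(-dist / decay) for area, dist in patches)

# Invented patch inventories before and after abolition of the schemes
before = [(2.0, 100.0), (1.5, 300.0), (3.0, 900.0)]  # 2007: more set-aside land
after = [(2.0, 100.0)]                                # 2008: much re-cultivated
change = 1.0 - habitat_indicator(after) / habitat_indicator(before)
```
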

  20. Carrier-based modulation schemes for various three-level matrix converters

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Loh, P.C.; Rong, R.C.

    2008-01-01

    different performance merits. To avoid confusion and hence hasten the converter applications in the industry, it would surely be better for modulation schemes to be developed from a common set of modulation principles that unfortunately has not yet been thoroughly defined. Contributing to that area... a limited set of switching vectors because of its lower semiconductor count. Through simulation and experimental testing, all the evaluated matrix converters are shown to produce satisfactory sinusoidal input and output quantities using the same set of generic modulation principles, which can conveniently...

  1. Abolition of set-aside schemes and its impact on habitat connectivity in Denmark from 2007 - 2008

    DEFF Research Database (Denmark)

    Levin, Gregor

    In Denmark, agriculture occupies 28,000 km² or 65% of the land. As a consequence, habitats for wild species are mainly characterized by small patches, surrounded by intensive agriculture. Due to extensive agricultural management, set-aside land can spatially connect habitats and thus positively affect habitat connectivity, which is of importance to the survival of wild species. In 2008 set-aside schemes were abolished, leading to a considerable re-cultivation of former set-aside land and consequently to a decline in the area of set-aside land from 6% of all agricultural land in 2007 to 3% in 2008. The main argument against regulating the re-cultivation of set-aside land with the aim of minimizing declines in habitat connectivity was that re-cultivation would primarily occur on highly productive land at a long distance from habitats, while set-aside land located on marginal land, close...

  2. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    textabstractThis paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  3. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in the subsurface environment using the level set method. The slides start with the motivation, then explain the Level Set Method (LSM) and its algorithms, present some examples, and finally outline future work.

  4. Synchronised PWM Schemes for Three-level Inverters with Zero Common-mode Voltage

    DEFF Research Database (Denmark)

    Oleschuk, Valentin; Blaabjerg, Frede

    2002-01-01

    This paper presents results of analysis and comparison of novel synchronised schemes of pulsewidth modulation (PWM), applied to three-level voltage source inverters with control algorithms providing elimination of the common-mode voltage. The proposed approach is based on a new strategy of digital...

  5. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4) architecture, we implemented a level-set software design that is flexible to different numerical representations (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computation of common intensity functions (e.g. gradients and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a
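    The container-based term design described above can be illustrated in miniature: the PDE is a weighted sum of interchangeable terms, so the solver loop never needs to know which terms are present. A simplified Python sketch (not ITK's actual API):

```python
class LevelSetEvolution:
    """Container-based sketch of the design described above: a level-set
    PDE is a weighted sum of interchangeable terms that can be added or
    removed without touching the solver loop.  Terms here are plain
    callables mapping phi to per-node contributions."""

    def __init__(self):
        self.terms = []  # container of (weight, term) pairs

    def add_term(self, weight, term):
        self.terms.append((weight, term))

    def step(self, phi, dt):
        """One explicit Euler step: phi += dt * sum_i w_i * term_i(phi)."""
        update = [0.0] * len(phi)
        for weight, term in self.terms:
            for i, v in enumerate(term(phi)):
                update[i] += weight * v
        return [p + dt * u for p, u in zip(phi, update)]

# Two toy terms: constant inward "propagation" and 1D curvature smoothing
def propagation(phi):
    return [-1.0] * len(phi)

def smoothing(phi):
    n = len(phi)
    return [phi[(i + 1) % n] - 2 * phi[i] + phi[i - 1] for i in range(n)]

solver = LevelSetEvolution()
solver.add_term(1.0, propagation)
solver.add_term(0.5, smoothing)
phi = solver.step([0.0, 1.0, 0.0, 1.0], dt=0.1)
```
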

  6. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions thus improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
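    The core set-level transformation described above (projecting gene-level features onto gene sets) can be sketched as simple within-set averaging; the gene sets and expression values below are hypothetical:

```python
def set_level_features(expression, gene_sets):
    """Convert a gene-level expression vector into set-level features by
    averaging expression over each (biologically interpretable) gene set,
    reducing dimensionality before classification.  Genes absent from the
    expression profile are skipped.

    expression: dict gene -> value; gene_sets: dict set_name -> gene list."""
    features = {}
    for name, genes in gene_sets.items():
        present = [expression[g] for g in genes if g in expression]
        features[name] = sum(present) / len(present) if present else 0.0
    return features

# Hypothetical regulon-style gene sets (genes sharing a common regulator)
expr = {"gA": 2.0, "gB": 4.0, "gC": 1.0, "gD": 3.0}
sets = {"regulon1": ["gA", "gB"], "regulon2": ["gC", "gD", "gX"]}
feats = set_level_features(expr, sets)
```

    A standard classifier is then trained on the set-level feature vectors instead of the raw gene-level ones.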

  7. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.
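    The role of a regularization energy in maintaining the signed distance property can be illustrated with a double-well-style diffusive flux (a generic sketch in the spirit of distance-regularized level sets, not the authors' exact functional):

```python
def drlse_flux(grad):
    """Diffusive flux from the double-well potential p(s) = 0.5*(s - 1)^2:
    flux = (1 - 1/|grad|) * grad.  It smooths where |grad phi| > 1 and
    sharpens where |grad phi| < 1, relaxing the level set toward the
    signed-distance profile |grad phi| = 1 without re-initialization."""
    s = abs(grad)
    return (1.0 - 1.0 / s) * grad if s > 0 else 0.0

# 1D relaxation sketch (unit grid spacing): interface pinned at phi[0] = 0,
# zero regularizing flux assumed past the right end.
phi = [0.0, 2.0, 4.0]  # initial slope 2: twice as steep as signed distance
dt = 0.2
for _ in range(200):
    f0 = drlse_flux(phi[1] - phi[0])
    f1 = drlse_flux(phi[2] - phi[1])
    phi[1] += dt * (f1 - f0)
    phi[2] += dt * (0.0 - f1)
```

    After relaxation both slopes approach 1, i.e. phi approaches the signed distance [0, 1, 2], which is exactly the property the embedded diffusive flux is meant to preserve during evolution.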

  8. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and sufficient robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson's model. Some experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
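    The fuzzy C-means update at the heart of FCMS (here without the spatial constraint term) can be sketched as follows; the intensities and initial centers are hypothetical:

```python
def fcm_step(values, centers, m=2.0):
    """One fuzzy C-means iteration (no spatial term): membership update
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)), followed by the weighted
    centroid update.  Points coinciding with a center get crisp membership.
    The FCMS variants discussed above add a spatial constraint on top."""
    memberships = []
    for x in values:
        d = [abs(x - c) for c in centers]
        row = []
        for i in range(len(centers)):
            if d[i] == 0.0:
                row = [1.0 if j == i else 0.0 for j in range(len(centers))]
                break
            row.append(1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                 for j in range(len(centers)) if d[j] > 0))
        memberships.append(row)
    new_centers = []
    for i in range(len(centers)):
        num = sum((u[i] ** m) * x for u, x in zip(memberships, values))
        den = sum(u[i] ** m for u in memberships)
        new_centers.append(num / den)
    return memberships, new_centers

# Hypothetical 1D intensities forming two clusters, with rough initial centers
values = [1.0, 2.0, 9.0, 10.0]
memberships, centers = fcm_step(values, [0.0, 10.0])
```
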

  9. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.

  10. A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Arvind Selwal

    2016-09-01

    Biometrics is the science of human recognition based upon biological, chemical or behavioural traits. These systems are used in many real-life applications, from biometric-based attendance systems to providing security at very sophisticated levels. A biometric system deals with raw data captured using a sensor and feature templates extracted from the raw image. One of the challenges faced by designers of these systems is to secure the template data extracted from the biometric modalities of the user and to protect the raw images. One solution to minimize spoof attacks on biometric systems by unauthorised users is to use multi-biometric systems. A multi-modal biometric system uses a fusion technique to merge feature templates generated from different modalities of the user. In this work a new scheme is proposed to secure templates at the feature fusion level. The scheme is based on the union operation of fuzzy relations of the templates of the modalities during the fusion process of multimodal biometric systems. This approach serves the dual purpose of feature fusion and transformation of the templates into a single secured non-invertible template. The proposed technique is cancelable and was experimentally tested on a bimodal biometric system comprising fingerprint and hand geometry. The developed scheme removes the problem of an attacker learning the original minutiae positions in the fingerprint and the various measurements of the hand geometry, and provides improved system performance with a reduction in the false accept rate and an improvement in the genuine accept rate.
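    A hedged sketch of feature-level fusion by fuzzy union: normalize each modality's features, apply a keyed non-invertible transform, and merge with max. The keyed modular folding used here is illustrative only, not the paper's exact transform:

```python
def fuse_templates(t1, t2, transform_keys):
    """Illustrative feature-level fusion: features from two modalities are
    normalized to [0, 1], passed through a keyed many-to-one (non-invertible)
    transform, and merged with the fuzzy union (max), so the original
    minutiae/geometry values cannot be read back and the template can be
    revoked by changing the keys (cancelability)."""
    def normalize(t):
        lo, hi = min(t), max(t)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in t]

    def transform(t, key):
        # keyed modular folding: many inputs map to one output
        return [(v * key[i % len(key)]) % 1.0 for i, v in enumerate(t)]

    a = transform(normalize(t1), transform_keys[0])
    b = transform(normalize(t2), transform_keys[1])
    n = min(len(a), len(b))
    return [max(a[i], b[i]) for i in range(n)]  # fuzzy union of the relations

finger = [12.0, 45.0, 30.0, 77.0]   # hypothetical minutiae-derived features
hand = [110.0, 95.0, 150.0, 60.0]   # hypothetical hand-geometry measurements
keys = ([3.7, 1.9], [2.3, 5.1])     # hypothetical user-specific keys
secure = fuse_templates(finger, hand, keys)
```

    Re-issuing the template is just a matter of choosing new keys, which yields a different fused template from the same biometric data.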

  11. High-spin level scheme of odd-odd 142Pm

    International Nuclear Information System (INIS)

    Liu Minliang; Zhang Yuhu; Zhou Xiaohong; He Jianjun; Guo Yingxiang; Lei Xiangguo; Huang Wenxue; Liu Zhong; Luo Yixiao; Feng Xichen; Zhang Shuangquan; Xu Xiao; Zheng Yong; Luo Wanju

    2002-01-01

    The level structure of the doubly odd nucleus 142Pm has been studied via the 128Te(19F, 5nγ)142Pm reaction in the energy region from 75 to 95 MeV. In-beam γ rays were measured, including the excitation function, γ-ray singles and γ-γ coincidences. The level scheme of 142Pm has been extended up to an excitation energy of 7030.0 keV, including 25 new γ rays and 13 new levels. Based on the measured γ-ray anisotropies, the level spins in 142Pm have been suggested

  12. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, the normal tissue region (including GM and WM), CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability to precisely locate object boundaries of the original level set method, which simultaneously performs segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
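    The data term of a generic two-phase intensity-based level set model (a simplified setting like the two-phase formulation above, here without the bias field) reduces to region means and a pointwise speed; a minimal sketch with hypothetical intensities:

```python
def two_phase_means(image, phi):
    """Region statistics for a two-phase level set model: c1 is the mean
    intensity where phi > 0 (inside), c2 where phi <= 0 (outside).  In the
    full method a bias field would multiply the means to model intensity
    inhomogeneity."""
    inside = [v for v, p in zip(image, phi) if p > 0]
    outside = [v for v, p in zip(image, phi) if p <= 0]
    c1 = sum(inside) / len(inside) if inside else 0.0
    c2 = sum(outside) / len(outside) if outside else 0.0
    return c1, c2

def data_speed(v, c1, c2):
    """Pointwise force on phi: positive (move into region 1) when the
    intensity v fits c1 better than c2, negative otherwise."""
    return (v - c2) ** 2 - (v - c1) ** 2

image = [10.0, 11.0, 9.0, 50.0, 52.0, 48.0]  # hypothetical FLAIR intensities
phi = [-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]      # current partition
c1, c2 = two_phase_means(image, phi)
```
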

  13. Some numerical studies of interface advection properties of level set ...

    Indian Academy of Sciences (India)

    explicit computational elements moving through an Eulerian grid. ... location. The interface is implicitly defined (captured) as the location of the discontinuity in the ... This level set function is advected with the background flow field and thus ...

  14. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast marching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  15. Macro-level integrated renewable energy production schemes for sustainable development

    International Nuclear Information System (INIS)

    Subhadra, Bobban G.

    2011-01-01

    The production of renewable clean energy is a prime necessity for the sustainable future existence of our planet. However, because of the resource-intensive nature of, and other challenges associated with, these new-generation renewable energy sources, novel industrial frameworks need to be co-developed. Integrated renewable energy production schemes founded on resource sharing, carbon neutrality, energy-efficient design, source reduction, a green processing plan, and the anthropogenic use of waste resources for the production of green energy, along with the production of raw material for allied food and chemical industries, are imperative for the sustainable development of this sector, especially in an emission-constrained future industrial scenario. To attain these objectives, the scope of hybrid renewable production systems and integrated renewable energy industrial ecology is briefly described. Further, the principles of the Integrated Renewable Energy Park (IREP) approach, an example of macro-level energy production, and its benefits and global applications are also explored. - Research highlights: → Discusses the need for macro-level renewable energy production schemes. → Scope of hybrid and integrated industrial ecology for renewable energy production. → Integrated Renewable Energy Parks (IREPs): a macro-level energy production scheme. → Discusses the principal foundations and global applications of IREPs. → Describes the significance of IREPs in the carbon-neutral future business arena.

  16. Exploring the level sets of quantum control landscapes

    International Nuclear Information System (INIS)

    Rothman, Adam; Ho, Tak-San; Rabitz, Herschel

    2006-01-01

    A quantum control landscape is defined by the value of a physical observable as a functional of the time-dependent control field E(t) for a given quantum-mechanical system. Level sets through this landscape are prescribed by a particular value of the target observable at the final dynamical time T, regardless of the intervening dynamics. We present a technique for exploring a landscape level set, where a scalar variable s is introduced to characterize trajectories along these level sets. The control fields E(s,t) accomplishing this exploration (i.e., that produce the same value of the target observable for a given system) are determined by solving a differential equation over s in conjunction with the time-dependent Schroedinger equation. There is full freedom to traverse a level set, and a particular trajectory is realized by making an a priori choice for a continuous function f(s,t) that appears in the differential equation for the control field. The continuous function f(s,t) can assume an arbitrary form, and thus a level set generally contains a family of controls, where each control takes the quantum system to the same final target value, but produces a distinct control mechanism. In addition, although the observable value remains invariant over the level set, other dynamical properties (e.g., the degree of robustness to control noise) are not specifically preserved and can vary greatly. Examples are presented to illustrate the continuous nature of level-set controls and their associated induced dynamical features, including continuously morphing mechanisms for population control in model quantum systems

  17. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV-H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  18. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level radioactive waste (HLW) as: (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel....that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission....determines....requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph (B). The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). A basic waste classification scheme results from the quantitative definitions

  19. Independent attacks in imperfect settings: A case for a two-way quantum key distribution scheme

    International Nuclear Information System (INIS)

    Shaari, J.S.; Bahari, Iskandar

    2010-01-01

    We review the study on a two-way quantum key distribution protocol given imperfect settings through a simple analysis of a toy model and show that it can outperform a BB84 setup. We provide the sufficient condition for this as a ratio of optimal intensities for the protocols.

  20. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  1. Level Set Structure of an Integrable Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Taichiro Takagi

    2010-03-01

    Full Text Available Based on a group theoretical setting a sort of discrete dynamical system is constructed and applied to a combinatorial dynamical system defined on the set of certain Bethe ansatz related objects known as the rigged configurations. This system is then used to study a one-dimensional periodic cellular automaton related to discrete Toda lattice. It is shown for the first time that the level set of this cellular automaton is decomposed into connected components and every such component is a torus.

  2. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use a variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme proceeds in two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. We then integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional-order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional-order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added to the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
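The morphological gradient used in the transition-region step is the difference between a grayscale dilation (local maximum) and erosion (local minimum). A minimal sketch follows, with a hypothetical `morphological_gradient` helper and a naive window loop rather than the paper's actual algorithm:

```python
import numpy as np

def morphological_gradient(img, size=3):
    """Grayscale morphological gradient: local max minus local min
    over a size x size window (a crude transition-region indicator)."""
    r = size // 2
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = float(win.max()) - float(win.min())
    return out

# A vertical step edge: the gradient is zero inside flat regions
# and large in the transition region around the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
g = morphological_gradient(img)
```

Thresholding `g` then yields the transition region that seeds the level-set evolution.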

  3. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; MartinJagersand

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with a FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  4. Poroelastic measurement schemes resulting in complete data sets for granular and other anisotropic porous media

    Energy Technology Data Exchange (ETDEWEB)

    Berryman, J.G.

    2009-11-20

    Poroelastic analysis usually progresses from assumed knowledge of dry or drained porous media to the predicted behavior of fluid-saturated and undrained porous media. Unfortunately, the experimental situation is often incompatible with these assumptions, especially when field data (from hydrological or oil/gas reservoirs) are involved. The present work considers several different experimental scenarios typified by one in which a set of undrained poroelastic (stiffness) constants has been measured using either ultrasound or seismic wave analysis, while some or all of the dry or drained constants are normally unknown. Drained constants for such a poroelastic system can be deduced for isotropic systems from available data if a complete set of undrained compliance data for the principal stresses are available - together with a few other commonly measured quantities such as porosity, fluid bulk modulus, and grain bulk modulus. Similar results are also developed here for anisotropic systems having up to orthotropic symmetry if the system is granular (i.e., composed of solid grains assembled into a solid matrix, either by a cementation process or by applied stress) and the grains are known to be elastically homogeneous. Finally, the analysis is also fully developed for anisotropic systems with nonhomogeneous (more than one mineral type), but still isotropic, grains - as well as for uniform collections of anisotropic grains as long as their axes of symmetry are either perfectly aligned or perfectly random.

  5. The effect of hearing aid signal-processing schemes on acceptable noise levels: perception and prediction.

    Science.gov (United States)

    Wu, Yu-Hsiang; Stangl, Elizabeth

    2013-01-01

    The acceptable noise level (ANL) test determines the maximum noise level that an individual is willing to accept while listening to speech. The first objective of the present study was to systematically investigate the effect of wide dynamic range compression processing (WDRC), and its combined effect with digital noise reduction (DNR) and directional processing (DIR), on ANL. Because ANL represents the lowest signal-to-noise ratio (SNR) that a listener is willing to accept, the second objective was to examine whether the hearing aid output SNR could predict aided ANL across different combinations of hearing aid signal-processing schemes. Twenty-five adults with sensorineural hearing loss participated in the study. ANL was measured monaurally in two unaided and seven aided conditions, in which the status of the hearing aid processing schemes (enabled or disabled) and the location of noise (front or rear) were manipulated. The hearing aid output SNR was measured for each listener in each condition using a phase-inversion technique. The aided ANL was predicted by unaided ANL and hearing aid output SNR, under the assumption that the lowest acceptable SNR at the listener's eardrum is a constant across different ANL test conditions. Study results revealed that, on average, WDRC increased (worsened) ANL by 1.5 dB, while DNR and DIR decreased (improved) ANL by 1.1 and 2.8 dB, respectively. Because the effects of WDRC and DNR on ANL were opposite in direction but similar in magnitude, the ANL of linear/DNR-off was not significantly different from that of WDRC/DNR-on. The results further indicated that the pattern of ANL change across different aided conditions was consistent with the pattern of hearing aid output SNR change created by processing schemes. Compared with linear processing, WDRC creates a noisier sound image and makes listeners less willing to accept noise. However, this negative effect on noise acceptance can be offset by DNR, regardless of microphone mode

  6. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation...

  7. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voron...

  8. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, completing in that matter the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art

  9. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging, taking into account the high complexity of the breast tissue. We use data at only a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage, more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D

  10. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and capable of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  11. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  12. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of a dielectric cylinder, based on combining the level set technique and the conjugate gradient algorithm. By measuring the scattered field, we try to retrieve the shape, localisation and permittivity of the object. The forward problem is solved by the method of moments, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. It is found that the proposed method is able to give good reconstruction quality in terms of the reconstructed shape and permittivity.
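The optimization step relies on a conjugate gradient iteration. Below is a generic CG solver for a symmetric positive-definite system, shown only as a sketch of that building block; the paper's forward solver (method of moments) and level-set shape representation are not reproduced.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Plain conjugate gradient for a symmetric positive-definite A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol ** 2:   # residual small enough: converged
            break
        p = r + (rs_new / rs) * p  # conjugate update of the direction
        rs = rs_new
    return x

# Small SPD test system.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
sol = conjugate_gradient(M, rhs)
```

In exact arithmetic CG converges in at most n iterations for an n x n system, which is why it pairs well with the repeated inverse solves an optimization loop requires.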

  13. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    Science.gov (United States)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step scheme. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on our automated scheme agreed excellently with "gold-standard" manual volumetry (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
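The fast-marching step propagates a front outward from seed points with a speed taken from the boundary-enhanced image. A crude stand-in is Dijkstra-style propagation on a grid, where the travel time across a cell is the reciprocal of its speed; true fast marching instead solves the eikonal equation with an upwind quadratic update. A minimal sketch with a hypothetical `arrival_times` helper:

```python
import heapq

def arrival_times(speed, source):
    """Dijkstra-style front propagation on a 4-connected grid:
    a simplified stand-in for fast marching, where the travel time
    across a cell is 1/speed at that cell."""
    h, w = len(speed), len(speed[0])
    INF = float("inf")
    t = [[INF] * w for _ in range(h)]
    t[source[0]][source[1]] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t[i][j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + 1.0 / speed[ni][nj]
                if nd < t[ni][nj]:
                    t[ni][nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return t

# Uniform speed 1: the arrival time equals Manhattan distance from the seed.
times = arrival_times([[1.0] * 5 for _ in range(5)], (0, 0))
```

Thresholding the arrival-time map at some T gives the rough initial surface that the geodesic active contour then refines.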

  14. Visual privacy by context: proposal and evaluation of a level-based visualisation scheme.

    Science.gov (United States)

    Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco

    2015-06-04

    Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as at telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users.

  15. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
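The interface-transport component common to both GALS and the standard level set method can be illustrated in one dimension: advecting a level-set function with an upwind scheme moves its zero crossing (the interface) at the advection speed. A minimal sketch, not the paper's two-phase solver:

```python
import numpy as np

# 1D upwind advection of a level-set function, phi_t + u phi_x = 0:
# the zero crossing of phi (the "interface") moves with speed u.
n, u = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / abs(u)            # CFL-stable explicit step
x = np.linspace(0.0, 1.0, n, endpoint=False)
phi = x - 0.3                      # interface initially at x = 0.3

steps = int(round(0.2 / (u * dt)))  # advance the interface by 0.2
for _ in range(steps):
    # Upwind (backward) difference, valid for u > 0.
    phi[1:] = phi[1:] - u * dt / dx * (phi[1:] - phi[:-1])

# Locate the zero crossing by linear interpolation between grid values.
k = int(np.argmax(phi > 0.0))
interface = x[k - 1] - phi[k - 1] * dx / (phi[k] - phi[k - 1])
```

Because the initial profile is linear and a signed distance, the transported interface lands at x = 0.5 essentially exactly; on curved or reinitialized profiles the first-order upwind error appears, which is what the gradient-augmented representation is designed to reduce.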

  16. REFERQUAL: a pilot study of a new service quality assessment instrument in the GP exercise referral scheme setting

    Science.gov (United States)

    Cock, Don; Adams, Iain C; Ibbetson, Adrian B; Baugh, Phil

    2006-01-01

    Background The development of an instrument accurately assessing service quality in the GP Exercise Referral Scheme (ERS) industry could potentially inform scheme organisers of the factors that affect adherence rates leading to the implementation of strategic interventions aimed at reducing client drop-out. Methods A modified version of the SERVQUAL instrument was designed for use in the ERS setting and subsequently piloted amongst 27 ERS clients. Results Test re-test correlations were calculated via Pearson's 'r' or Spearman's 'rho', depending on whether the variables were Normally Distributed, to show a significant (mean r = 0.957, SD = 0.02, p < 0.05; mean rho = 0.934, SD = 0.03, p < 0.05) relationship between all items within the questionnaire. In addition, satisfactory internal consistency was demonstrated via Cronbach's 'α'. Furthermore, clients responded favourably towards the usability, wording and applicability of the instrument's items. Conclusion REFERQUAL is considered to represent promise as a suitable tool for future evaluation of service quality within the ERS community. Future research should further assess the validity and reliability of this instrument through the use of a confirmatory factor analysis to scrutinise the proposed dimensional structure. PMID:16725021

  17. Economic sustainability, water security and multi-level governance of local water schemes in Nepal

    Directory of Open Access Journals (Sweden)

    Emma Hakala

    2017-07-01

    Full Text Available This article explores the role of multi-level governance and power structures in local water security through a case study of the Nawalparasi district in Nepal. It focuses on economic sustainability as a measure to address water security, placing this thematic in the context of a complicated power structure consisting of local, district and national administration as well as external development cooperation actors. The study aims to find out whether efforts to improve the economic sustainability of water schemes have contributed to water security at the local level. In addition, it will consider the interactions between water security, power structures and local equality and justice. The research builds upon survey data from the Nepalese districts of Nawalparasi and Palpa, and a case study based on interviews and observation in Nawalparasi. The survey was performed in water schemes built within a Finnish development cooperation programme spanning from 1990 to 2004, allowing a consideration of the long-term sustainability of water management projects. This adds a crucial external influence into the intra-state power structures shaping water management in Nepal. The article thus provides an alternative perspective to cross-regional water security through a discussion combining transnational involvement with national and local points of view.

  18. A novel two-level dynamic parallel data scheme for large 3-D SN calculations

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Shedlock, D.; Haghighat, A.; Yi, C.

    2005-01-01

    We introduce a new dynamic parallel memory optimization scheme for executing large scale 3-D discrete ordinates (Sn) simulations on distributed memory parallel computers. In order for parallel transport codes to be truly scalable, they must use parallel data storage, where only the variables that are locally computed are locally stored. Even with parallel data storage for the angular variables, cumulative storage requirements for large discrete ordinates calculations can be prohibitive. To address this problem, Memory Tuning has been implemented into the PENTRAN 3-D parallel discrete ordinates code as an optimized, two-level ('large' array, 'small' array) parallel data storage scheme. Memory Tuning can be described as the process of parallel data memory optimization. Memory Tuning dynamically minimizes the amount of required parallel data in allocated memory on each processor using a statistical sampling algorithm. This algorithm is based on the integral average and standard deviation of the number of fine meshes contained in each coarse mesh in the global problem. Because PENTRAN only stores the locally computed problem phase space, optimal two-level memory assignments can be unique on each node, depending upon the parallel decomposition used (hybrid combinations of angular, energy, or spatial). As demonstrated in the two large discrete ordinates models presented (a storage cask and an OECD MOX Benchmark), Memory Tuning can save a substantial amount of memory per parallel processor, allowing one to accomplish very large scale Sn computations. (authors)
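The statistical sampling idea (classifying coarse meshes into 'small' and 'large' storage bins from the mean and standard deviation of their fine-mesh counts) can be sketched as follows; the function name and the cutoff rule mean + k*stdev are illustrative assumptions, not PENTRAN's actual algorithm:

```python
import statistics

def two_level_assignment(fine_mesh_counts, k=1.0):
    """Toy two-level ('small'/'large') array assignment: coarse meshes
    whose fine-mesh count exceeds mean + k*stdev get the 'large' storage
    bin; all others share the smaller allocation."""
    mean = statistics.fmean(fine_mesh_counts)
    stdev = statistics.pstdev(fine_mesh_counts)
    cutoff = mean + k * stdev
    return ["large" if c > cutoff else "small" for c in fine_mesh_counts]

counts = [8, 9, 10, 9, 8, 64]   # one coarse mesh is much more refined
bins = two_level_assignment(counts)
```

Sizing the 'small' allocation for the typical case rather than the worst case is what saves memory per processor when refinement is highly non-uniform.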

  19. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    Science.gov (United States)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation of coupling the solid-earth to an ice-sheet compartment in an earth-system model, the dependency of initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure Appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x
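The explicit time-differencing strategy can be illustrated on the simplest relaxation analogue: forward-Euler stepping of d(phi)/dt = -phi/tau marches the solution forward without any per-step iteration, which is the character of the scheme described above. This is a toy analogue, not the sea-level equation or the VILMA field equations:

```python
import math

# Forward Euler for a Maxwell-type relaxation d(phi)/dt = -phi / tau.
# Explicit stepping requires dt << tau for stability and accuracy.
tau = 1000.0          # relaxation time (arbitrary units)
dt = 1.0              # explicit time step
phi = 1.0             # initial perturbation
t_end = 2000.0
for _ in range(int(t_end / dt)):
    phi += dt * (-phi / tau)

exact = math.exp(-t_end / tau)   # analytic solution exp(-t/tau)
```

The trade-off is the usual one for explicit schemes: no per-step solve or iteration, at the cost of a step-size restriction set by the shortest relaxation time in the model.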

  20. The scheme machine: A case study in progress in design derivation at system levels

    Science.gov (United States)

    Johnson, Steven D.

    1995-01-01

    The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980's, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before.
The machine has a custom dual-ported memory to support garbage collection

  1. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  2. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore, the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull defect prosthesis that well matched the skull defect with excellent individual adaptation.

  3. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. Meanwhile, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
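The Otsu algorithm used for initialization picks the threshold that maximizes the between-class variance of the intensity histogram. A self-contained sketch (a generic implementation, not the paper's preprocessing pipeline):

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold t maximizing the between-class
    variance w0*w1*(mu0 - mu1)^2 of the two classes (<= t and > t).
    Intensities are assumed to lie in [0, bins)."""
    hist = [0] * bins
    for v in values:
        hist[int(v)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(bins):
        w0 += hist[t]
        if w0 == 0:
            continue           # class 0 still empty
        w1 = total - w0
        if w1 == 0:
            break              # class 1 empty: no split remains
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two well-separated intensity clusters: background near 20, cells near 200.
pixels = [20, 21, 19, 22, 20] * 20 + [200, 201, 199, 202] * 20
thr = otsu_threshold(pixels)
```

The resulting binary mask provides a rough foreground region from which an initial level set function (e.g. a signed distance to its boundary) can be built.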

  4. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    Full Text Available In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon based on the level set method is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values for the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (the ITK library, developed in the medical image processing community), extended for the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures), are also shown. The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.

  5. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level (radioactive) waste (HLW) as (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel...that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission...determines...requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph B. The approach also yields definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). The basic waste classification scheme that results from the quantitative definitions of 'highly radioactive' and 'requires permanent isolation' is depicted. The concentrations of radionuclides that correspond to these two boundaries, and that may be used to classify radioactive wastes, are given.

  6. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

The reactor trip on low steam generator water level accounts for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible for plant safety reasons, but that relaxation of the trip set point by 9% is feasible. The set point relaxation requires drilling new holes for level measurement in the operating steam generators. Characteristics of the negative neutron flux rate trip and reactor trip were also reviewed as additional work. Since the purpose of modifying the trip system to reduce the reactor scram frequency is not to satisfy legal requirements but to improve plant performance, and the modification has both positive and negative aspects, the decision on actual modification should be based on the results of this study and the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  7. Modulation Schemes of Multi-phase Three-Level Z-Source Inverters

    DEFF Research Database (Denmark)

    Gao, F.; Loh, P.C.; Blaabjerg, Frede

    2007-01-01

This paper investigates the modulation schemes of three-level multiphase Z-source inverters with either two Z-source networks or a single Z-source network connected between the dc sources and the inverter circuitry. With the proper offset added for achieving both desired four-leg operation and optimized ... different modulation requirements and output performance. For clearly illustrating the detailed modulation process, time-domain analysis is used instead of the traditional multi-dimensional space vector demonstration, which reveals the right way to insert shoot-through durations in the switching sequence with minimal commutation count. Lastly, the theoretical findings are verified in Matlab/PLECS simulation and experimentally using constructed laboratory prototypes.

  8. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model based on these characteristics to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)
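The Otsu initialization step mentioned above can be sketched as follows: the Otsu threshold maximizes the between-class variance of the image histogram, and the resulting binary mask seeds the level set function. This is a generic illustration under assumed conventions (a binary step initialization with constant c), not the authors' code:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                                 # class-0 (dark) weight
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between[:-1])]           # skip degenerate last bin

def init_level_set(img, c=2.0):
    """Binary step initialization of phi from the Otsu mask:
    negative inside the bright objects, positive in the background."""
    t = otsu_threshold(img)
    return np.where(img > t, -c, c)

rng = np.random.default_rng(0)
img = rng.normal(50.0, 5.0, (64, 64))
img[16:48, 16:48] += 100.0                            # bright "cell" on dark background
phi0 = init_level_set(img)
```

The zero crossing of phi0 already sits near the cell boundary, so the subsequent variational evolution starts close to the final segmentation.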

  9. Discrete level schemes and their gamma radiation branching ratios (CENPL-DLS): Pt.2

    Energy Technology Data Exchange (ETDEWEB)

    Limin, Zhang; Zongdi, Su; Zhengjun, Sun [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

The DLS data files contain the data and information of nuclear discrete levels and gamma rays. At present, the library holds 79461 levels and 93177 gamma rays for 1908 nuclides. The DLS sub-library has been set up at the CNDC and is widely used for nuclear model calculations and in other fields. The DLS management and retrieval code is introduced, and an example is given for {sup 56}Fe. (1 tab.).

  10. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network.

    Science.gov (United States)

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and the robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulations based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network.
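The discrete-time recurrent neural networks used for such manipulator QPs are closely related to projection-type iterations. As a hedged stand-in that keeps only the bound constraints (the paper's QP also carries equality constraints and the manipulator model), a projected-gradient iteration illustrates the idea:

```python
import numpy as np

def qp_bound(Q, c, lo, hi, steps=2000, h=None):
    """Minimize 0.5 x^T Q x + c^T x subject to lo <= x <= hi via a
    projected-gradient iteration, a simple discrete-time analogue of
    the projection neural networks used for manipulator QPs."""
    if h is None:
        h = 1.0 / np.linalg.norm(Q, 2)     # step size from the spectral norm
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(steps):
        # gradient step on the objective, then projection onto the box
        x = np.clip(x - h * (Q @ x + c), lo, hi)
    return x

# toy problem: strictly convex objective, unit-box bounds
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
x = qp_bound(Q, c, lo=np.zeros(2), hi=np.ones(2))
```

For this toy data the unconstrained minimizer lies inside the box, so the iteration converges to the solution of Q x = -c.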

  11. Two-level MOC calculation scheme in APOLLO2 for cross-section library generation for LWR hexagonal assemblies

    International Nuclear Information System (INIS)

    Petrov, Nikolay; Todorova, Galina; Kolev, Nikola; Damian, Frederic

    2011-01-01

The accurate and efficient MOC calculation scheme in APOLLO2, developed by CEA for generating multi-parameterized cross-section libraries for PWR assemblies, has been adapted to hexagonal assemblies. The neutronic part of this scheme is based on a two-level calculation methodology. At the first level, a multi-cell method is used in 281 energy groups for cross-section definition and self-shielding. At the second level, precise MOC calculations are performed on a collapsed energy mesh (30-40 groups). In this paper, the application and validation of the two-level scheme for hexagonal assemblies are described. Solutions for a VVER assembly are compared with TRIPOLI4® calculations and direct 281-group MOC solutions. The results show that the accuracy is close to that of the 281-group MOC calculation while the CPU time is substantially reduced. Compared to the multi-cell method, the accuracy is markedly improved. (author)

  12. Surface-to-surface registration using level sets

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Erbou, Søren G.; Vester-Christensen, Martin

    2007-01-01

This paper presents a general approach for surface-to-surface registration (S2SR) with the Euclidean metric using signed distance maps. In addition, the method is symmetric, such that the registration of a shape A to a shape B is identical to the registration of the shape B to the shape A. The S2SR problem can be approximated by the image registration (IR) problem of the signed distance maps (SDMs) of the surfaces confined to some narrow band. By shrinking the narrow bands around the zero level sets, the solution to the IR problem converges towards the S2SR problem. It is our hypothesis that this approach is more robust and less prone to falling into local minima than ordinary surface-to-surface registration. The IR problem is solved using the inverse compositional algorithm. In this paper, a set of 40 pelvic bones of Duroc pigs are registered to each other w.r.t. the Euclidean transformation...
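The two ingredients named above, the signed distance map and the narrow band around its zero level set, can be illustrated with a brute-force numpy sketch (a real implementation would use a fast distance transform; the disc mask and band width are arbitrary test values):

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance map of a 2D binary mask: negative
    inside, positive outside. Quadratic cost -- illustration only."""
    idx = np.indices(mask.shape).reshape(2, -1).T.astype(float)
    def dist_to(points):
        d2 = ((idx[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.sqrt(d2.min(axis=1)).reshape(mask.shape)
    inside = np.argwhere(mask).astype(float)
    outside = np.argwhere(~mask).astype(float)
    return np.where(mask, -dist_to(outside), dist_to(inside))

def narrow_band(sdm, width):
    """Mask of the band |phi| <= width around the zero level set."""
    return np.abs(sdm) <= width

ii, jj = np.indices((32, 32))
mask = (ii - 16) ** 2 + (jj - 16) ** 2 <= 64      # disc of radius 8
sdm = signed_distance(mask)
band = narrow_band(sdm, 3.0)
```

Restricting the registration cost to `band` is what confines the IR problem to the neighbourhood of the surfaces, as described in the abstract.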

  13. New level schemes with high-spin states of 105,107,109Tc

    International Nuclear Information System (INIS)

    Luo, Y.X.; Rasmussen, J.O.; Lee, I.Y.; Fallon, P.; Hamilton, J.H.; Ramayya, A.V.; Hwang, J.K.; Gore, P.M.; Zhu, S.J.; Wu, S.C.; Ginter, T.N.; Ter-Akopian, G.M.; Daniel, A.V.; Stoyer, M.A.; Donangelo, R.; Gelberg, A.

    2004-01-01

    New level schemes of odd-Z 105,107,109 Tc are proposed based on the 252 Cf spontaneous-fission-gamma data taken with Gammasphere in 2000. Bands of levels are considerably extended and expanded to show rich spectroscopic information. Spin/parity and configuration assignments are made based on determinations of multipolarities of low-lying transitions and the level analogies to the previously reported levels, and to those of the neighboring Rh isotopes. A non-yrast negative-parity band built on the 3/2 - [301] orbital is observed for the first time in 105 Tc. A positive-parity band built on the 1/2 + [431] intruder orbital originating from the π(g 7/2 /d 5/2 ) subshells and having a strong deformation-driving effect is observed for the first time in 105 Tc, and assigned in 107 Tc. A positive-parity band built on the excited 11/2 + level, which has rather low excitation energy and predominantly decays into the 9/2 + level of the ground state band, provides evidence of triaxiality in 107,109 Tc, and probably also in 105 Tc. Rotational constants are calculated and discussed for the K=1/2 intruder bands using the Bohr-Mottelson formula. Level systematics are discussed in terms of the locations of proton Fermi levels and deformations. The band crossings of yrast positive-parity bands are observed, most likely related to h 11/2 neutron alignment. Triaxial-rotor-plus-particle model calculations performed with ε=0.32 and γ=-22.5 deg. on the prolate side of maximum triaxiality yielded the best reproduction of the excitation energies, signature splittings, and branching ratios of the positive-parity bands (except for the intruder bands) of these Tc isotopes. The significant discrepancies between the triaxial-rotor-plus-particle model calculations and experiment for the K=1/2 intruder bands in 105,107 Tc need further theoretical studies

  14. Social deterministic factors to participation in the National Health Insurance Scheme in the context of rural Ghanaian setting

    Directory of Open Access Journals (Sweden)

    Stephen Manortey

    2014-04-01

Full Text Available The primary purpose of this study is to identify predictors of complete household enrollment in the National Health Insurance Scheme (NHIS) among inhabitants of the Barekese sub-district in the Ashanti Region of Ghana. Heads of households in 20 communities from the Barekuma Collaborative Community Project site were interviewed to gather data on demographics, socioeconomic status (SES) indicators, and complete household subscription to the NHIS. A logistic regression model was used to predict enrollment in the NHIS. Of the 3228 heads of households interviewed, 60 percent reported having all members of their respective households enrolled in the NHIS. Residents in the Middle and High SES brackets had 1.47 (95% CI: 1.21-1.77) and 1.66 (95% CI: 1.27-2.16) times higher odds, respectively, of complete household enrollment compared to their counterparts in the Low SES category. The odds of enrolling in the program tend to increase progressively with the highest level of education attained by the head of the family unit. Eight years after the introduction of the national health insurance policy in Ghana, the reported subscription rate for complete households was about 60 percent in the 20 rural communities that participated in the study. This finding calls for further national strategies to help increase enrollment coverage, especially among the poor and less educated in rural communities.

  15. An Enhanced Three-Level Voltage Switching State Scheme for Direct Torque Controlled Open End Winding Induction Motor

    Science.gov (United States)

    Kunisetti, V. Praveen Kumar; Thippiripati, Vinay Kumar

    2018-01-01

Open End Winding Induction Motors (OEWIM) are popular for electric vehicle and ship propulsion applications due to the lower DC-link voltage required. These applications demand ripple-free torque. In this article, an enhanced three-level voltage switching state scheme for a direct torque controlled OEWIM drive is implemented to reduce torque and flux ripples. The limitations of conventional Direct Torque Control (DTC) are possible problems during low speeds and starting, variable switching frequency due to hysteresis controllers, and higher torque and flux ripple. The proposed DTC scheme can overcome the problems of conventional DTC with an enhanced voltage switching state scheme. Three-level inversion is obtained by operating the inverters with equal DC-link voltages, producing 18 voltage space vectors. These 18 vectors are divided into low and high frequencies of operation based on rotor speed. The hardware results prove the validity of the proposed DTC scheme during steady state and transients. From simulation and experimental results, the proposed DTC scheme gives lower torque and flux ripples in comparison to two-level DTC. The proposed DTC is implemented using a dSPACE DS-1104 control board interfaced with a MATLAB/SIMULINK-RTI model.

  16. Fluoroscopy in paediatric fractures - Setting a local diagnostic reference level

    International Nuclear Information System (INIS)

    Pillai, A.; McAuley, A.; McMurray, K.; Jain, M.

    2006-01-01

Background: The Ionising Radiation (Medical Exposure) Regulations 2000 have made it mandatory to establish diagnostic reference levels (DRLs) for all typical radiological examinations. Objectives: We attempt to provide dose data for some common fluoroscopic procedures used in orthopaedic trauma that may be used as the basis for setting DRLs for paediatric patients. Materials and methods: The dose area product (DAP) in 865 paediatric trauma examinations was analysed. Median DAP values and screening times for each procedure type, along with quartile values for each range, are presented. Results: In the upper limb, elbow examinations had the maximum exposure, with a median DAP value of 1.21 cGy cm². Median DAP values for forearm and wrist examinations were 0.708 and 0.538 cGy cm², respectively. In the lower limb, tibia and fibula examinations had a median DAP value of 3.23 cGy cm², followed by ankle examinations with a median DAP of 3.10 cGy cm². The rounded third quartile DAP value of each distribution can be used as a provisional DRL for the specific procedure type. (authors)
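The final step, taking the rounded third quartile of a DAP distribution as the provisional DRL, is straightforward to express; the DAP readings below are hypothetical, not the study's data:

```python
import numpy as np

def provisional_drl(dap_values, decimals=1):
    """Provisional diagnostic reference level: the rounded third
    quartile (75th percentile) of the dose-area-product sample."""
    return round(float(np.percentile(dap_values, 75)), decimals)

# hypothetical DAP readings (cGy cm^2) for one procedure type
dap = [0.9, 1.1, 1.2, 1.3, 1.5, 2.0, 2.4, 3.1]
drl = provisional_drl(dap)
```

Audit cycles would then compare future median doses for the procedure against this reference value.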

  17. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  18. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

The method was evaluated on a test set consisting of 8 corresponding data sets. The attained total volume Dice coefficient and mean Hausdorff distance were 0.61 ± 0.20 and 15.6 ± 7.4 mm, respectively. The method has potential for improvement that could be exploited before clinical introduction.

  19. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  20. Goal oriented Mathematics Survey at Preparatory Level- Revised set ...

    African Journals Online (AJOL)

This cross-sectional study of the mathematics syllabi at the preparatory level of high schools investigated the efficiency of the subject at preparatory level in serving as a basis for several streams found at the tertiary level, such as Natural Science, Technology, Computer Science, Health Science and Agriculture.

  1. Bit-level quantum color image encryption scheme with quantum cross-exchange operation and hyper-chaotic system

    Science.gov (United States)

    Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian

    2018-06-01

In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting a quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, a quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security, since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness and unpredictability than low-dimensional hyper-chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.

  2. A Prediction Packetizing Scheme for Reducing Channel Traffic in Transaction-Level Hardware/Software Co-Emulation

    OpenAIRE

    Lee , Jae-Gon; Chung , Moo-Kyoung; Ahn , Ki-Yong; Lee , Sang-Heon; Kyung , Chong-Min

    2005-01-01

This paper presents a scheme for efficient channel usage between a simulator and an accelerator, where the accelerator models some RTL sub-blocks in accelerator-based hardware/software co-simulation while the simulator runs a transaction-level model of the remaining part of the whole chip being verified. With a conventional simulation accelerator, evaluations of the simulator and accelerator alternate at every valid simulation ...

  3. Level and decay schemes of even-A Se and Ge isotopes from (n,n'γ) reaction studies

    Energy Technology Data Exchange (ETDEWEB)

    Sigaud, J.; Patin, Y.; McEllistrem, M. T.; Haouat, G.; Lachkar, J.

    1975-06-01

The energy levels and decay schemes of {sup 76}Se, {sup 78}Se, {sup 80}Se, {sup 82}Se and {sup 76}Ge have been studied through measurements of (n,n'γ) differential cross sections. Gamma-ray excitation functions have been measured between 2.0 and 4.1 MeV incident neutron energy, and angular distributions have been observed for all of these isotopes.

  4. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, Andrea [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Sen, Ramazan Sonat [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-07-01

The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach couples system simulator codes with stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model) and initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational requirements (compared with the presently used legacy codes, which were developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  5. Propagation of frequency-chirped laser pulses in a medium of atoms with a Λ-level scheme

    International Nuclear Information System (INIS)

    Demeter, G.; Dzsotjan, D.; Djotyan, G. P.

    2007-01-01

We study the propagation of frequency-chirped laser pulses in optically thick media. We consider a medium of atoms with a Λ level scheme (Lambda atoms) and also, for comparison, a medium of two-level atoms. Frequency-chirped laser pulses that induce adiabatic population transfer between the atomic levels are considered. They induce transitions between the two lower (metastable) levels of the Λ-atoms and between the ground and excited states of the two-level atoms. We show that, associated with this adiabatic population transfer in Λ-atoms, there is a regime of enhanced transparency of the medium: the pulses are distorted much less than in the medium of two-level atoms and retain their ability to transfer the atomic population much longer during propagation.

  6. Trusting Politicians and Institutions in a Multi-Level Setting

    DEFF Research Database (Denmark)

    Hansen, Sune Welling; Kjær, Ulrik

Trust in government and in politicians is a crucial prerequisite for democratic processes. This goes not only for the national level of government but also for the regional and local levels. We make use of a large-scale survey among citizens in Denmark to evaluate trust in politicians at different ... formation processes can negatively influence trust in the mayor and the councilors. Reaching out for local power by being disloyal to one's own party or by breaking deals already made can sometimes secure the mayoralty, but it comes with a price: lower trust among the electorate.

  7. High-level waste tank farm set point document

    International Nuclear Information System (INIS)

    Anthony, J.A. III.

    1995-01-01

Setpoints for nuclear safety-related instrumentation are required for actions determined by the design authorization basis. Minimum requirements need to be established for assuring that setpoints are established and held within specified limits. This document establishes the controlling methodology for changing setpoints of all classifications. The instrumentation under consideration involves the transfer, storage, and volume reduction of radioactive liquid waste in the F- and H-Area High-Level Radioactive Waste Tank Farms. The setpoint document encompasses the PROCESS AREA listed in the Safety Analysis Report (SAR) (DPSTSA-200-10 Sup 18), which includes the diversion box HDB-8 facility. In addition to the PROCESS AREAS listed in the SAR, Building 299-H and the Effluent Transfer Facility (ETF) are also included in the scope.

  8. High-level waste tank farm set point document

    Energy Technology Data Exchange (ETDEWEB)

    Anthony, J.A. III

    1995-01-15

Setpoints for nuclear safety-related instrumentation are required for actions determined by the design authorization basis. Minimum requirements need to be established for assuring that setpoints are established and held within specified limits. This document establishes the controlling methodology for changing setpoints of all classifications. The instrumentation under consideration involves the transfer, storage, and volume reduction of radioactive liquid waste in the F- and H-Area High-Level Radioactive Waste Tank Farms. The setpoint document encompasses the PROCESS AREA listed in the Safety Analysis Report (SAR) (DPSTSA-200-10 Sup 18), which includes the diversion box HDB-8 facility. In addition to the PROCESS AREAS listed in the SAR, Building 299-H and the Effluent Transfer Facility (ETF) are also included in the scope.

  9. An integrated extended Kalman filter–implicit level set algorithm for monitoring planar hydraulic fractures

    International Nuclear Information System (INIS)

    Peirce, A; Rochinha, F

    2012-01-01

We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters so that measurement information can compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry through alterations of the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use the synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to the placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)
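The EKF half of such a scheme follows the standard predict/update cycle. A generic sketch on a toy linear constant-velocity system (not the paper's coupled elasto-hydrodynamic model or tiltmeter geometry) looks like:

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One extended Kalman filter cycle: model prediction followed by a
    measurement update (f and h happen to be linear in this toy case)."""
    # predict with the dynamic model
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update with the measurement
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy system: constant-velocity motion, position measured at each step
A = np.array([[1.0, 1.0], [0.0, 1.0]])       # state transition (dt = 1)
H = np.array([[1.0, 0.0]])                   # observe position only
Q = 1e-6 * np.eye(2)                         # small process noise
R = np.array([[1e-2]])                       # measurement noise

x, P = np.zeros(2), np.eye(2)
for t in range(1, 21):
    z = np.array([float(t)])                 # true position moves at speed 1
    x, P = ekf_step(x, P, lambda v: A @ v, A, lambda v: H @ v, H, Q, R, z)
```

In the paper, f would be the ILSA forward model for the fracture state and h the elasto-static map from fracture opening to tilt measurements.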

  10. Mind-sets, low-level exposures, and research

    International Nuclear Information System (INIS)

    Sagan, L.A.

    1993-01-01

Much of our environmental policy is based on the notion that carcinogenic agents are harmful at even minuscule doses. Where does this thinking come from? What is the scientific evidence that supports such policy? Moreover, why is the public willing to buy into this? Or is it the other way around: has the scientific community bought into a paradigm that has its origins in public imagery? Or, most likely, are there interactions between the two? It is essential that we find out whether or not there are risks associated with low-level exposures to radiation. The author sees three obvious areas where the future depends on better information: the increasing radiation exposures resulting from the use of medical diagnostic and therapeutic practices need to be properly evaluated for safety; environmental policies, which direct enormous resources to the reduction of small radiation exposures, need to be put on a firmer scientific basis; and the future of nuclear energy, dependent as it is on public acceptance, may well rely upon a better understanding of low-dose effects. Nuclear energy could provide an important solution to global warming and other possible environmental hazards, but will probably not be implemented as long as fear of low-dose radiation persists. Although an established paradigm has great resilience, it cannot resist the onslaught of inconsistent scientific observations or of the social value system that supports it. Only new research will enable us to determine whether a paradigm shift is in order here.

  11. Energy-level scheme and transition probabilities of Si-like ions

    International Nuclear Information System (INIS)

    Huang, K.N.

    1984-01-01

Theoretical energy levels and transition probabilities are presented for 27 low-lying levels of silicon-like ions from Z = 15 to Z = 106. The multiconfiguration Dirac-Fock technique is used to calculate energy levels and wave functions. The Breit interaction and Lamb shift contributions are calculated perturbatively as corrections to the Dirac-Fock energy. The M1 and E2 transitions between the first nine levels and the E1 transitions between the excited levels and the ground level are presented.

  12. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Schanen, Michel; Marin, Oana; Zhang, Hong; Anitescu, Mihai

    2016-01-01

Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
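The storage/recomputation balance can be shown in miniature: keep sparse checkpoints on the slow level and replay each segment into fast memory during the reverse sweep. The toy sketch below does this for a scalar recurrence; binomial placement, disk bandwidth limits, asynchrony, and the Nek5000 integration are all omitted:

```python
import math

def two_level_adjoint(f, df, x0, n, K):
    """Gradient of x_n w.r.t. x0 for x_{i+1} = f(x_i), using two-level
    checkpointing: 'disk' checkpoints every K steps during the forward
    sweep, then each segment is replayed into memory while the adjoint
    runs backward through it."""
    disk = {}
    x = x0
    for i in range(n):                      # forward sweep, sparse level
        if i % K == 0:
            disk[i] = x
        x = f(x)
    lam = 1.0                               # d x_n / d x_n
    for seg in range(((n - 1) // K) * K, -1, -K):
        xs = [disk[seg]]                    # replay segment into memory
        for i in range(seg, min(seg + K, n) - 1):
            xs.append(f(xs[-1]))
        for i in range(min(seg + K, n) - 1, seg - 1, -1):
            lam *= df(xs[i - seg])          # chain rule, reverse order
    return lam

f, df = math.sin, math.cos
g = two_level_adjoint(f, df, x0=0.3, n=10, K=4)
```

Only ceil(n/K) states live on the slow level and at most K states in memory at once, at the cost of one extra forward pass through each segment.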

  13. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes the solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
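The reinitialization step referred to above solves a pseudo-time PDE that restores the signed-distance property of the level set function. A minimal one-dimensional sketch, using first-order Godunov upwinding rather than the paper's fifth-order WENO discretization (the grid and initial profile are made up):

```python
import numpy as np

def reinitialize_1d(phi0, dx, iters=200, dtau=None):
    """Pseudo-time iteration of phi_tau = S(phi0) * (1 - |phi_x|),
    driving phi toward a signed distance function while keeping the
    zero crossing of phi0 fixed."""
    if dtau is None:
        dtau = 0.5 * dx                       # CFL-like time step
    phi = phi0.copy()
    S = phi0 / np.sqrt(phi0 ** 2 + dx ** 2)   # smoothed sign function
    for _ in range(iters):
        # one-sided differences D- and D+
        dm = np.empty_like(phi)
        dp = np.empty_like(phi)
        dm[1:] = (phi[1:] - phi[:-1]) / dx
        dm[0] = dm[1]
        dp[:-1] = (phi[1:] - phi[:-1]) / dx
        dp[-1] = dp[-2]
        # Godunov upwinding: choose the one-sided slopes according to the
        # direction in which information flows away from the interface
        grad_pos = np.sqrt(np.maximum(np.maximum(dm, 0.0) ** 2,
                                      np.minimum(dp, 0.0) ** 2))
        grad_neg = np.sqrt(np.maximum(np.minimum(dm, 0.0) ** 2,
                                      np.maximum(dp, 0.0) ** 2))
        grad = np.where(S > 0, grad_pos, grad_neg)
        phi = phi + dtau * S * (1.0 - grad)
    return phi

x = np.linspace(-1.0, 1.0, 201)
phi0 = (x - 0.3) * (2.0 + np.sin(5.0 * x))    # correct zero crossing, wrong slope
phi = reinitialize_1d(phi0, x[1] - x[0])
```

After the iteration, |φ_x| ≈ 1 in a band around the interface while the zero crossing stays at x ≈ 0.3; higher-order (WENO) one-sided differences would slot into the same structure.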

  14. The perceptions and experiences of people injured in motor vehicle crashes in a compensation scheme setting: a qualitative study.

    Science.gov (United States)

    Murgatroyd, Darnel; Lockwood, Keri; Garth, Belinda; Cameron, Ian D

    2015-04-25

    The evidence that compensation-related factors are associated with poor recovery is substantial, but these measures are generic and do not consider the complexity of scheme design. The objectives of this study were to understand people's perceptions and experiences of the claims process after sustaining a compensable injury in a motor vehicle crash (including why people seek legal representation), and to explore ways to assist people following a compensable injury and improve their experience of the claims process. A qualitative study in a Compulsory Third Party (CTP) personal injury scheme covering the state of New South Wales (NSW), Australia. A series of five focus groups, with a total of 32 participants who had sustained mild to moderate injuries in a motor vehicle crash, were conducted from May to June 2011 with four to eight attendees in each group. These were audio-recorded and transcribed. The methodology was based on a grounded theory approach using thematic analysis and constant comparison to generate coding categories for themes. Data saturation was reached. Analyst triangulation was used to ensure credibility of the results. Five primary themes were identified: complexity of the claims process; requirement of legal representation; injury recovery expectations; importance of timely healthcare decision making; and improvements for injury recovery. Some participants struggled, finding the claims process stressful, and subsequently sought legal advice; whilst others reported a straightforward recovery, helpful insurer interactions and no legal representation. Most participants were influenced by injury recovery expectations and timely healthcare decision making. To assist with injury recovery, access to objective information about the claims process using online technology and social media was considered paramount. Participants had contrasting injury recovery experiences and their perceptions of the claims process differed and were influenced by injury

  15. Natural Assurance Scheme: A level playing field framework for Green-Grey infrastructure development.

    Science.gov (United States)

    Denjean, Benjamin; Altamirano, Mónica A; Graveline, Nina; Giordano, Raffaele; van der Keur, Peter; Moncoulon, David; Weinberg, Josh; Máñez Costa, María; Kozinc, Zdravko; Mulligan, Mark; Pengal, Polona; Matthews, John; van Cauwenbergh, Nora; López Gunn, Elena; Bresch, David N

    2017-11-01

    This paper proposes a conceptual framework to systematize the use of Nature-based solutions (NBS) by integrating their resilience potential into a Natural Assurance Scheme (NAS), focusing on insurance value as a cornerstone for both awareness-raising and valuation. As such, one of its core goals is to align research and pilot projects with infrastructure development constraints and priorities. Under NAS, the integrated contribution of natural infrastructure to Disaster Risk Reduction is valued in the context of an identified growing need for climate-robust infrastructure. The potential NAS benefits and trade-offs are explored through the alternative lens of Disaster Resilience Enhancement (DRE). Such a system requires a joint effort of specific knowledge transfer from research groups and stakeholders to potential future NAS developers and investors. We therefore match the knowledge gaps with the operational stages of NAS development from a project designer's perspective. We start by highlighting the key role of the insurance industry in incentivizing and assessing disaster and slow-onset resilience enhancement strategies. In parallel, we place the public sector as a potential kick-starter of DRE initiatives through the existing initiatives and constraints of infrastructure procurement. Under this perspective, the paper explores the required alignment of integrated water resources planning and public investment systems. Ultimately this will provide the possibility for both planners and investors to design no-regret NBS and mixed grey-green infrastructure systems. As resources and constraints differ widely between infrastructure development contexts, the framework does not prescribe explicit methodological choices but presents the current limits of knowledge and know-how.
In conclusion the paper underlines the potential of NAS to ease the infrastructure gap in water globally by stressing the advantages of investment in the protection, enhancement and restoration of

  16. Discrete level schemes and their gamma radiation branching ratios (CENPL-DLS). Pt. 1

    International Nuclear Information System (INIS)

    Su Zongdi; Zhang Limin; Zhou Chunmei; Sun Zhengjun

    1994-01-01

    The DLS data file, which is a sub-library (version 1) of the Chinese Evaluated Nuclear Parameter Library (CENPL), consists of data and information on discrete levels and gamma radiations. The data and information in this file are taken from the Evaluated Nuclear Structure Data File (ENSDF); a code for transforming ENSDF data into the DLS format was written. The DLS data file holds the data on discrete levels with well-determined energies and their gamma radiations. At present, this file contains the data of 79456 levels and 100411 gammas for 1908 nuclides.

  17. Score level fusion scheme based on adaptive local Gabor features for face-iris-fingerprint multimodal biometric

    Science.gov (United States)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying

    2014-05-01

    A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding and fusion method for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, in addition to achieving powerful local Gabor features for each modality and better recognition performance through the fusion strategy, our architecture outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
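To make the score-level fusion step concrete: the paper projects the per-modality matching scores to a single scalar with a trained support vector regression model. As a simplified, hypothetical stand-in, the sketch below uses min-max normalization followed by a weighted sum; all scores and weights are made up.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matching scores to [0, 1] so modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def fuse_scores(score_lists, weights):
    """Weighted-sum fusion of normalized per-modality score vectors."""
    normed = [min_max_normalize(np.asarray(s, dtype=float)) for s in score_lists]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # weights sum to 1
    return sum(wi * si for wi, si in zip(w, normed))

# Hypothetical matching scores for 5 probes from three modalities
face   = [0.62, 0.91, 0.45, 0.70, 0.30]
iris   = [0.55, 0.88, 0.50, 0.65, 0.20]
finger = [0.58, 0.95, 0.40, 0.75, 0.25]
fused = fuse_scores([face, iris, finger], weights=[0.3, 0.4, 0.3])
best = int(np.argmax(fused))             # probe ranked best after fusion
```

A trained regressor (as in the paper) replaces the fixed weights with a learned, possibly nonlinear mapping, but the interface is the same: several scores in, one scalar out.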

  18. Coherence modulation at the photon-counting level: A new scheme for secure communication

    International Nuclear Information System (INIS)

    Rhodes, William T; Boughanmi, Abdellatif; Moreno, Yezid Torres

    2016-01-01

    When operated at the photon-counting level, coherence modulation can provide quantifiably secure binary signal transmission between two entities, security being based on the nonclonability of photons. (paper)

  19. PNGMDR 2013-2015. Industrial scheme for very-low-level waste management

    International Nuclear Information System (INIS)

    2015-08-01

    The objectives of this document are to recall the quantitative and qualitative situation of the very-low-level waste management sector, to consolidate production perspectives for producers, to make an inventory of the possibilities for extending and optimising the industrial capacities of the sector, to define priorities among the different envisaged optimisation options, and to describe the organisation for the follow-up of action progress. After a brief presentation of the context, it presents the French very-low-level waste sector, which is specific to the national context, outlines the main challenges for the industrial very-low-level waste management sector, indicates current assessment projects for the sector, reports an analysis of the relevance of the different envisaged volume optimisation options, briefly presents different scenarios, gives a brief overview of examples of very-low-level waste management in other countries, and finally states some proposals.

  20. Using an Ecosystem Approach to complement protection schemes based on organism-level endpoints

    International Nuclear Information System (INIS)

    Bradshaw, Clare; Kapustka, Lawrence; Barnthouse, Lawrence; Brown, Justin; Ciffroy, Philippe; Forbes, Valery; Geras'kin, Stanislav; Kautsky, Ulrik; Bréchignac, François

    2014-01-01

    Radiation protection goals for ecological resources are focussed on ecological structures and functions at population-, community-, and ecosystem-levels. The current approach to radiation safety for non-human biota relies on organism-level endpoints, and as such is not aligned with the stated overarching protection goals of international agencies. Exposure to stressors can trigger non-linear changes in ecosystem structure and function that cannot be predicted from effects on individual organisms. From the ecological sciences, we know that important interactive dynamics related to such emergent properties determine the flows of goods and services in ecological systems that human societies rely upon. A previous Task Group of the IUR (International Union of Radioecology) has presented the rationale for adding an Ecosystem Approach to the suite of tools available to manage radiation safety. In this paper, we summarize the arguments for an Ecosystem Approach and identify next steps and challenges ahead pertaining to developing and implementing a practical Ecosystem Approach to complement organism-level endpoints currently used in radiation safety. - Highlights: • An Ecosystem Approach to radiation safety complements the organism-level approach. • Emergent properties in ecosystems are not captured by organism-level endpoints. • The proposed Ecosystem Approach better aligns with management goals. • Practical guidance with respect to system-level endpoints is needed. • Guidance on computational model selection would benefit an Ecosystem Approach

  1. Face-iris multimodal biometric scheme based on feature level fusion

    Science.gov (United States)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei

    2015-11-01

    Unlike score-level fusion, feature-level fusion demands that all features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score-level fusion, whereas few studies investigate feature-level fusion. We propose a face-iris recognition method based on feature-level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature-level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
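The fusion-recognition strategy can be sketched in simplified form: concatenate the per-modality feature vectors, reduce them with PCA, and classify in the reduced space. The sketch below substitutes a nearest-centroid classifier for the paper's support vector machine, and all features are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(X, k):
    """Principal axes of X via SVD; returns the mean and top-k components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, comps):
    return (X - mu) @ comps.T

# Hypothetical feature-level fusion: concatenate face and iris feature vectors.
n_per_class, d_face, d_iris = 20, 40, 30
class0 = np.hstack([rng.normal(0.0, 1.0, (n_per_class, d_face)),
                    rng.normal(0.0, 1.0, (n_per_class, d_iris))])
class1 = np.hstack([rng.normal(3.0, 1.0, (n_per_class, d_face)),
                    rng.normal(3.0, 1.0, (n_per_class, d_iris))])
X = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

mu, comps = pca_fit(X, k=5)            # dimensionality reduction of fused features
Z = pca_transform(X, mu, comps)

# Nearest-centroid classification in the reduced space (SVM stand-in)
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The point of the sketch is the pipeline shape, concatenation then projection then classification, not the specific classifier; an SVM trained on `Z` would occupy the last step in the paper's method.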

  2. Evaluation and interconversion of various indicator PCB schemes for ∑PCB and dioxin-like PCB toxic equivalent levels in fish.

    Science.gov (United States)

    Gandhi, Nilima; Bhavsar, Satyendra P; Reiner, Eric J; Chen, Tony; Morse, Dave; Arhonditsis, George B; Drouillard, Ken G

    2015-01-06

    Polychlorinated biphenyls (PCBs) remain chemicals of concern more than three decades after the ban on their production. Technical mixture-based total PCB measurements are unreliable due to weathering and degradation, while detailed full congener-specific measurements can be time-consuming and costly for large studies. Measurements using a subset of indicator PCBs (iPCBs) have been considered appropriate; however, the inclusion of different PCB congeners in various iPCB schemes makes it challenging to readily compare data. Here, using an extensive data set, we examine the performance of the existing iPCB3 (PCB 138, 153, and 180), iPCB6 (iPCB3 plus 28, 52, and 101) and iPCB7 (iPCB6 plus 118) schemes, and of new iPCB schemes, in estimating total PCB congener (∑PCB) and dioxin-like PCB toxic equivalent (dlPCB-TEQ) concentrations in sport fish fillets and the whole body of juvenile fish. The coefficients of determination (R²) for regressions conducted on logarithmically transformed data suggest that including more PCBs in an iPCB scheme improves the relationship with ∑PCB but not with dlPCB-TEQs. Overall, the novel iPCB3 (PCB 95, 118, and 153), iPCB4 (iPCB3 plus 138) and iPCB5 (iPCB4 plus 110) presented in this study and the existing iPCB6 and iPCB7 are the most optimal indicators, while the current iPCB3 should be avoided. Measurement of ∑PCB based on a more detailed analysis (50+ congeners) is also overall a good approach for assessing PCB contamination and tracking PCB origin in fish. Relationships among the existing and new iPCB schemes are presented to facilitate their interconversion. The iPCB6-equivalent levels for the 6.5 and 10 pg/g benchmarks of dlPCB-TEQ05 are about 50 and 120 ng/g ww, respectively, which are lower than the corresponding iPCB6 limits of 125 and 300 ng/g ww set by the European Union.
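Interconversion relationships of this kind are typically obtained by linear regression on log-transformed concentrations. A hedged sketch on synthetic data follows; the proportionality constant, scatter, and sample size are invented, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fillet data: assume sum-PCB is roughly proportional to the
# iPCB6 subset, with lognormal scatter (all numbers hypothetical).
ipcb6 = rng.lognormal(mean=4.0, sigma=1.0, size=200)         # ng/g ww
sum_pcb = 2.1 * ipcb6 * rng.lognormal(0.0, 0.15, size=200)   # ng/g ww

# Regression in log space: log10(sum_pcb) = b * log10(ipcb6) + a
b, a = np.polyfit(np.log10(ipcb6), np.log10(sum_pcb), 1)

def ipcb6_to_sum_pcb(x):
    """Interconvert an iPCB6 level to an estimated sum-PCB level (ng/g ww)."""
    return 10 ** (a + b * np.log10(x))
```

With data generated this way the fit recovers a slope near 1 and an intercept near log10 of the assumed ratio; on real data the fitted coefficients encode the scheme-to-scheme relationship the abstract describes.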

  3. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time

  4. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  5. Mutual influences of rated currents, short circuit levels, fault durations and integrated protective schemes for industrial distribution MV switchgears

    Energy Technology Data Exchange (ETDEWEB)

    Gaidano, G. (FIAT Engineering, Torino, Italy); Lionetto, P.F.; Pelizza, C.; Tommazzolli, F.

    1979-01-01

    This paper deals with the problem of integrated and coordinated design of distribution systems, as regards the definition of system structure and parameters together with protection criteria and schemes. Advantages in system operation, dynamic response, heavier loads with reduced machinery rating margins, and overall cost reduction can be achieved. It must be noted that the MV switchgears installed in industrial main distribution substations are the vital nodes of the distribution system. Very large amounts of power (up to 100 MW and more) are conveyed through MV busbars, coming from the Utility and from in-plant generators and outgoing to subdistribution substations, to step-down transformers and to main concentrated loads (large drives, furnaces, etc.). Criteria and methods already studied and applied to public distribution are examined to assess service continuity and economics by means of the reduction of thermal stresses, minimization of disturbances and improvement of system stability. The life of network components depends on sizing, on fault energy levels and on the probability of fault occurrence. Constructional measures and protection schemes, which reduce the probability and duration of faults, are the most important tools to improve overall reliability. The introduction of advanced techniques, mainly based on computer application, not only allows drastic reduction of fault duration, but also permits the system to operate, under any possible contingency, in optimal conditions, as the computer provides adaptive control. This mode of system management makes it possible to size network components with reference to the true magnitude of system quantities, avoiding the expensive oversizing connected to the inflexibility of conventional protection and control schemes.

  6. A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM

    OpenAIRE

    Arvind Selwal; Sunil Kumar Gupta; Surender Kumar

    2016-01-01

    Biometrics is the science of human recognition based upon biological, chemical or behavioural traits. These systems are used in many real-life applications, ranging from biometric-based attendance systems to security provision at a very sophisticated level. A biometric system deals with raw data captured using a sensor and a feature template extracted from the raw image. One of the challenges faced by designers of these systems is to secure the template data extracted from the biometric mod...

  7. Level Scheme of {sup 223}Fr; Estudio del esquema de niveles del {sup 223}Fr

    Energy Technology Data Exchange (ETDEWEB)

    Gaeta, R; Gonzalez, L; Roldan, C

    1972-07-01

    A study has been made of the decay of {sup 227}Ac to levels of {sup 223}Fr by means of alpha spectrometers with Si barrier detectors and gamma spectrometers with Ge(Li) detectors. The rotational bands 1/2-(541 {down_arrow}), 1/2-(530 {up_arrow}) and 3/2-(532 {down_arrow}) have been identified, as well as two octupolar bands associated with the ground-state band. The results obtained indicate that the unified model is applicable in this intermediate zone of the nuclide chart. (Author) 150 refs.

  8. Phase diagram of a QED-cavity array coupled via a N-type level scheme

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Jiasen; Rossini, Davide [CNR, NEST, Scuola Normale Superiore and Istituto di Nanoscienze, Pisa (Italy); Fazio, Rosario [CNR, NEST, Scuola Normale Superiore and Istituto di Nanoscienze, Pisa (Italy); National University of Singapore, Center for Quantum Technologies, Singapore (Singapore)

    2015-01-01

    We study the zero-temperature phase diagram of a one-dimensional array of QED cavities where, besides the single-photon hopping, an additional coupling between neighboring cavities is mediated by an N-type four-level system. By varying the relative strength of the various couplings, the array is shown to exhibit a variety of quantum phases including a polaritonic Mott insulator, a density-wave and a superfluid phase. Our results have been obtained by means of numerical density-matrix renormalization group calculations. The phase diagram was obtained by analyzing the energy gaps for the polaritons, as well as through a study of two-point correlation functions. (orig.)

  9. Two-level modulation scheme to reduce latency for optical mobile fronthaul networks.

    Science.gov (United States)

    Sung, Jiun-Yu; Chow, Chi-Wai; Yeh, Chien-Hung; Chang, Gee-Kung

    2016-10-31

    A system using optical two-level orthogonal-frequency-division-multiplexing (OFDM) - amplitude-shift-keying (ASK) modulation is proposed and demonstrated to reduce the processing latency for optical mobile fronthaul networks. At the proposed remote-radio-head (RRH), the high data rate OFDM signal does not need to be processed, but is directly launched into a high speed photodiode (HSPD) and subsequently emitted by an antenna. Only a low bandwidth PD is needed to recover the low data rate ASK control signal. Hence, it is simple and provides low latency. Furthermore, transporting the proposed system over the already deployed optical-distribution-networks (ODNs) of passive-optical-networks (PONs) is also demonstrated with 256 ODN split-ratios.

  10. Comparative Evaluation of Cash Benefit Scheme of Janani Suraksha Yojana for Beneficiary Mothers from Different Health Care Settings of Rewa District, Madhya Pradesh, India.

    Directory of Open Access Journals (Sweden)

    Trivedi R

    2014-05-01

    Introduction: For better outcomes in mother and child health, the Government of India launched the National Rural Health Mission (NRHM) in 2005 with a major objective of providing accessible, affordable and quality health care to the rural population, especially the vulnerable. Reduction of the MMR to 100/100,000 is one of its goals, and the Janani Suraksha Yojana (JSY) is the key strategy of the NRHM to achieve this reduction. The JSY, as a safe motherhood intervention and modified alternative of the National Maternity Benefit Scheme (NMBS), has been implemented in all states and Union territories with special focus on low-performing states. The main objective and vision of the JSY is to reduce maternal and neo-natal mortality and promote institutional delivery among poor pregnant women of rural and urban areas. This scheme is 100% centrally sponsored and integrates delivery and post-delivery care with the help of a key person, the ASHA (Accredited Social Health Activist), followed by cash monetary help to the women. Objectives: (1) To evaluate the cash benefit service provided under the JSY at different health care settings. (2) To know the perceptions and elicit suggestions of beneficiaries on the quality of the cash benefit scheme of the JSY. Methodology: This is a health care institute based observational cross-sectional study including 200 randomly selected JSY beneficiary mothers from the different health care settings, i.e., Primary Health Centres, Community Health Centres, the District Hospital and the Medical College Hospital of Rewa District of Madhya Pradesh state. Data were collected with the help of a set pro forma and then analysed with Epi Info 2000. The chi-square test was applied as appropriate. Results: 60% and 80% of beneficiaries from PHCs and CHCs received cash within 1 week after discharge, whereas 100% of beneficiaries of the District Hospital and Medical College Hospital received cash at the time of discharge; the overall distribution of time of cash disbursement among beneficiaries of

  11. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  12. Assessment of building integrated energy supply and energy saving schemes on a national level in Denmark

    Energy Technology Data Exchange (ETDEWEB)

    Muenster, M.; Morthorst, P.E.; Birkl, C.

    2011-06-15

    In the future, buildings will not only act as consumers of energy but as producers as well. For these "prosumers", energy production by use of solar panels, photovoltaics, heat pumps etc. will be essential. The objective of this project was to find the most optimal combinations of building insulation and use of renewable energy sources in existing buildings in terms of economics and climate impacts. Five houses were analyzed based on different personal loads, consumption profiles, solar orientations and proposed building envelope improvements, and on the use of combinations of renewable energy systems. The results of these analyses were integrated in five scenarios to examine the consequences at the national level of implementing insulation together with solar panels, photovoltaics and heat pumps in single-family houses. The simulations focused on the building period between 1961 and 1972, characterised by high building activity and low energy performance. The five scenarios - including a baseline scenario, a maximum savings scenario, a maximum production scenario, and a combination scenario - showed that regardless of scenario, a consequent use of individual heat pumps leads to the greatest energy savings and CO{sub 2} reductions. (ln)

  13. A comparative study of reinitialization approaches of the level set method for simulating free-surface flows

    Energy Technology Data Exchange (ETDEWEB)

    Sufyan, Muhammad; Ngo, Long Cu; Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)

    2016-04-15

    Unstructured grids were used to compare the performance of a direct reinitialization scheme with those of two reinitialization approaches based on the solution of a hyperbolic partial differential equation (PDE). Moving-interface problems were solved in the context of a finite element method. A least-square weighted residual method was used to discretize the advection equation of the level set method. The benchmark problems of the rotating Zalesak's disk, the time-reversed single vortex, and two-dimensional sloshing were examined. Numerical results showed that the direct reinitialization scheme performed better than the PDE-based reinitialization approaches in terms of mass conservation, dissipation and dispersion error, and computational time. In the case of sloshing, numerical results were found to be in good agreement with existing experimental data. The direct reinitialization approach consumed considerably less CPU time than the PDE-based simulations for 20 time periods of sloshing. This approach was stable, accurate, and efficient for all the problems considered in this study.

  14. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation and to lead to more robust segmentation. The sensitivity and specificity results confirm the effectiveness of the proposed method for caries detection.
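A minimal sketch of the FCM half of such a hybrid: fuzzy C-means alternates membership and center updates on the image intensities, and its crisp output can then seed the level set stage. The intensity model and parameters below are hypothetical.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, iters=100):
    """Minimal fuzzy C-means on 1-D intensities: alternate membership and
    center updates.  Returns the centers and the membership matrix U."""
    x = np.asarray(x, dtype=float)
    # deterministic init: spread the initial centers over the intensity range
    centers = np.quantile(x, np.linspace(0.1, 0.9, n_clusters))
    p = 2.0 / (m - 1.0)
    U = None
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-p)
        U = inv / inv.sum(axis=1, keepdims=True)   # memberships sum to 1
        w = U ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return centers, U

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(60.0, 5.0, 300),    # darker tissue
                              rng.normal(180.0, 8.0, 300)])  # brighter tissue
centers, U = fuzzy_c_means(intensities)
labels = np.argmax(U, axis=1)   # crisp labels, e.g. to initialize a level set
```

In the hybrid described above, the FCM labels (or a thresholded membership map) would provide the initial contour for the level set evolution, which then refines the boundary.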

  15. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....

  16. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
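For two or more tissues with Gaussian intensity models, the MAP decision at a single pixel reduces to comparing Gaussian log-posteriors. A minimal sketch without the local-neighborhood integration or bias field of the full model; the means, variances, and priors are made up.

```python
import numpy as np

def map_classify(intensity, means, variances, priors):
    """Assign each pixel to the tissue class maximizing the Gaussian
    log-posterior: log N(x; mu_k, var_k) + log P(k)."""
    x = np.asarray(intensity, dtype=float)[..., None]
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    log_post = (-0.5 * np.log(2.0 * np.pi * var)
                - (x - mu) ** 2 / (2.0 * var)
                + np.log(np.asarray(priors, dtype=float)))
    return np.argmax(log_post, axis=-1)

# Three hypothetical tissue classes with equal priors
labels = map_classify([55.0, 95.0, 140.0],
                      means=[50.0, 100.0, 150.0],
                      variances=[100.0, 100.0, 100.0],
                      priors=[1 / 3, 1 / 3, 1 / 3])
```

The paper's method integrates such a local criterion over a neighborhood around each pixel and couples it to level set functions and a bias field; this sketch only shows the per-pixel MAP kernel at its core.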

  17. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  18. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as that of the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
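
The correction idea above can be illustrated with a deliberately simplified sketch: instead of the paper's analytic source term, a constant shift of the level set function is chosen by bisection so that the enclosed area returns to its target value. The grid, radius and loss magnitude are illustrative choices.

```python
import numpy as np

def enclosed_area(phi, dx):
    """Area of the region {phi < 0} on a uniform grid of spacing dx."""
    return np.count_nonzero(phi < 0) * dx * dx

def mass_correction(phi, target_area, dx, n_iter=60):
    """Add a constant to phi so the enclosed area matches target_area.

    A uniform shift is a crude stand-in for the paper's analytic
    source/sink term: lowering phi grows the region {phi < 0} and
    raising it shrinks the region, so bisection on the shift works.
    """
    lo, hi = -phi.max(), -phi.min()      # brackets: full domain vs. empty set
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if enclosed_area(phi + mid, dx) > target_area:
            lo = mid                     # region too large: raise phi further
        else:
            hi = mid                     # region too small (or exact): lower phi
    return phi + 0.5 * (lo + hi)

# Signed-distance circle of radius 0.3 on the unit square
n = 256
dx = 1.0 / n
coords = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(coords, coords)
phi = np.hypot(X - 0.5, Y - 0.5) - 0.3
target = enclosed_area(phi, dx)

phi_lossy = phi + 0.01                   # emulate numerical mass loss
phi_fixed = mass_correction(phi_lossy, target, dx)
```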

  19. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    Science.gov (United States)

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searching for relevant studies, based on the reference lists of selected papers. We only included empirical studies that described and evaluated, or that only evaluated, priority setting practice at the meso level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority setting; however, informal criteria appear to be more pervasive in LMICs compared to HICs. The priority setting process at the meso level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, as it was used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso level, and there is a likelihood that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved through incorporating and adapting systematic priority setting processes and frameworks to the context where used, and making considerations of both process

  20. Federal and state regulatory schemes affecting liability for high-level waste transportation incidents: opportunities for clarification and amendment

    International Nuclear Information System (INIS)

    Friel, L.E.; Livingston-Behan, E.A.

    1985-01-01

    The Price-Anderson Act of 1957 provides extensive public liability coverage in the event of a serious accident involving the transportation of nuclear materials to or from certain federally-licensed, or federal contractor-operated facilities. While actual liability for a nuclear incident and the extent of damages are usually determined by state law, the Act establishes a comprehensive system for the payment of such damages. Despite the federally-mandated scheme for liability coverage, several aspects of the Act's application to transportation to a permanent repository have not yet been settled and are open to various interpretations. Some areas of uncertainty apply not only to future waste transport to a repository, but also to current transportation activities, and include: coverage for emergency response and clean-up costs; coverage for precautionary evacuations; and the federal government's financial liability. The need to address liability issues is also increasingly recognized at the state level. The state laws which are used to determine liability and the extent of damages in the event of a transportation accident vary widely among states and significantly affect the compensation that an injured person will receive under the provisions of the Price-Anderson Act. Areas of state law deserving special attention include: standards for determining liability; statutes of limitations; standards for proof of causation; state sovereign immunity statutes; and recovery of unique emergency response costs.

  1. Mapping topographic structure in white matter pathways with level set trees.

    Directory of Open Access Journals (Sweden)

    Brian P Kent

    Full Text Available Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees--which provide a concise representation of the hierarchical mode structure of probability density functions--offer a statistically-principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.
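
The mode-counting idea behind a level set tree can be sketched on a 1-D toy density; this is an illustration of the concept only, not the authors' streamline pipeline.

```python
import numpy as np

def superlevel_components(f, level):
    """Count connected components of {f >= level} on a 1-D grid.

    Tracking how this count changes as the level sweeps from high to low
    is exactly what a level set tree records: branches are born at modes
    of the density and merge at saddles.
    """
    mask = (f >= level).astype(np.int8)
    padded = np.concatenate(([0], mask, [0]))
    return int(np.count_nonzero(np.diff(padded) == 1))   # rising edges = runs

# Toy bimodal "density": two Gaussian bumps of different heights
x = np.linspace(-4.0, 4.0, 801)
f = np.exp(-(x + 1.5) ** 2) + 0.8 * np.exp(-(x - 1.5) ** 2)

# One component near the base, two in the mid range, one near the top:
# the signature of a two-branch tree whose smaller branch dies first.
counts = [superlevel_components(f, lev) for lev in (0.1, 0.5, 0.9)]
```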

  2. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    Science.gov (United States)

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adopt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought.

  3. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Science.gov (United States)

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. 
In this review, we adopt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these

  4. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Directory of Open Access Journals (Sweden)

    Edwine W. Barasa

    2015-11-01

    Full Text Available Background Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adopt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from

  5. 76 FR 9004 - Public Comment on Setting Achievement Levels in Writing

    Science.gov (United States)

    2011-02-16

    ... DEPARTMENT OF EDUCATION Public Comment on Setting Achievement Levels in Writing AGENCY: U.S... Achievement Levels in Writing. SUMMARY: The National Assessment Governing Board (Governing Board) is... for NAEP in writing. This notice provides opportunity for public comment and submitting...

  6. Application of physiologically based pharmacokinetic modeling in setting acute exposure guideline levels for methylene chloride.

    NARCIS (Netherlands)

    Bos, Peter Martinus Jozef; Zeilmaker, Marco Jacob; Eijkeren, Jan Cornelis Henri van

    2006-01-01

    Acute exposure guideline levels (AEGLs) are derived to protect the human population from adverse health effects in the case of a single exposure due to an accidental release of chemicals into the atmosphere. AEGLs are set at three different levels of increasing toxicity for exposure durations ranging from

  7. Attitudes towards help-seeking for sexual and gender-based violence in humanitarian settings: the case of Rwamwanja refugee settlement scheme in Uganda.

    Science.gov (United States)

    Odwe, George; Undie, Chi-Chi; Obare, Francis

    2018-03-12

    Sexual and gender-based violence (SGBV) remains a silent epidemic in many humanitarian settings, with many survivors concealing their experiences. Attitudes towards help-seeking for SGBV are an important determinant of SGBV service use. This paper examined the association between attitudes towards seeking care and knowledge and perceptions about SGBV among men and women in a humanitarian setting in Uganda. A cross-sectional survey was conducted from May to June 2015 among 601 heads of refugee households (261 females and 340 males) in Rwamwanja Refugees Settlement Scheme, South West Uganda. Analysis entailed cross-tabulation with chi-square tests and estimation of a multivariate logistic regression model. Results showed increased odds of having a favorable attitude toward seeking help for SGBV among women with progressive attitudes towards SGBV (OR = 2.78, 95% CI: 1.56-4.95); those who felt that SGBV was not tolerated in the community (OR = 2.03, 95% CI: 1.03-4.00); those who had not experienced violence (OR = 2.08, 95% CI: 1.06-4.07); and those who were aware of the timing for post-exposure prophylaxis (OR = 3.08, 95% CI: 1.57-6.04). In contrast, results for the male sample showed a lack of variation in attitude toward seeking help for SGBV for all independent variables except timing for PEP (OR = 2.57, 95% CI: 1.30-5.10). Among individuals who had experienced SGBV, the odds of seeking help were higher among those with a favorable attitude towards seeking help (OR = 4.22, 95% CI: 1.47-12.06) than among those with unfavorable help-seeking attitudes. The findings of the paper suggest that targeted interventions aimed at promoting awareness and progressive attitudes towards SGBV are likely to encourage positive help-seeking attitudes and behaviors in humanitarian contexts.

  8. Appropriate criteria set for personnel promotion across organizational levels using analytic hierarchy process (AHP)

    Directory of Open Access Journals (Sweden)

    Charles Noven Castillo

    2017-01-01

    Full Text Available Currently, few established sets of specific criteria exist for personnel promotion to each level of the organization. This study is conducted in order to develop a personnel promotion strategy by identifying specific sets of criteria for each level of the organization. The complexity of identifying the criteria set, along with the subjectivity of these criteria, requires the use of a multi-criteria decision-making approach, particularly the analytic hierarchy process (AHP). Results show different sets of criteria for each management level which are consistent with several frameworks in the literature. These criteria sets would help avoid a mismatch between employee skills and competencies and their jobs, and at the same time eliminate issues in personnel promotion such as favouritism, the glass ceiling, and gender and physical attractiveness preference. This work also shows that personality and traits, job satisfaction and experience and skills are more critical than social capital across different organizational levels. The contribution of this work is in identifying relevant criteria in developing a personnel promotion strategy across organizational levels.

  9. Feasibility of megavoltage portal CT using an electronic portal imaging device (EPID) and a multi-level scheme algebraic reconstruction technique (MLS-ART)

    International Nuclear Information System (INIS)

    Guan, Huaiqun; Zhu, Yunping

    1998-01-01

    Although electronic portal imaging devices (EPIDs) are efficient tools for radiation therapy verification, they only provide images of overlapped anatomic structures. We investigated using a fluorescent screen/CCD-based EPID, coupled with a novel multi-level scheme algebraic reconstruction technique (MLS-ART), for a feasibility study of portal computed tomography (CT) reconstructions. The CT images might be useful for radiation treatment planning and verification. We used an EPID, set it to work in the linear dynamic range, and collimated 6 MV photons from a linear accelerator to a slit beam 1 cm wide and 25 cm long. We performed scans under a total of ∼200 monitor units (MUs) for several phantoms in which we varied the number of projections and MUs per projection. The reconstructed images demonstrated that using the new MLS-ART technique, megavoltage portal CT with a total of 200 MUs can achieve a contrast detectability of ∼2.5% (object size 5 mm x 5 mm) and a spatial resolution of 2.5 mm. (author)
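
The ART component of MLS-ART is the classic Kaczmarz iteration. A minimal sketch on a made-up consistent system follows; the multi-level ordering of projections that gives the scheme its name is omitted here for brevity.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=500, relax=1.0):
    """Classic ART (Kaczmarz) iteration for A x = b.

    Each update projects the current estimate onto the hyperplane
    defined by one row of the system. The MLS variant of the paper
    additionally orders the projections coarse-to-fine; plain row
    order is used in this sketch.
    """
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# A small consistent system standing in for slit-beam projection data
rng = np.random.default_rng(0)
A = rng.random((30, 10))        # 30 "ray sums" through a 10-pixel image
x_true = rng.random(10)
b = A @ x_true
x_rec = art_reconstruct(A, b)
```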

  10. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    Science.gov (United States)

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness, so as to identify lessons of good practice. We carried out case studies involving key informant interviews with 184 health practitioners and health planners from the macro-level, meso-level and micro-level in Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness": relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practice; areas of non-adherence were identified as opportunities for improvement. (i) At the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions, and these are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation

  11. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments on both noiseless and noisy data illustrate how the proposed method behaves.

  12. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked on each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are enrolled in a covariance matrix as a region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow band technique. Experiments on challenging video sequences show the effectiveness of the proposed method.
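
The narrow band device mentioned above can be sketched as follows. The speed field, grid and band width are illustrative choices; a production tracker would also re-centre the band as the front moves and reinitialize the level set periodically.

```python
import numpy as np

def narrow_band_step(phi, speed, dx, dt, band=3.0):
    """One explicit time step of phi_t + speed * |grad phi| = 0,
    evaluated only on the narrow band |phi| < band * dx.

    Restricting the update to cells near the zero level set is what
    makes narrow banding cheap: the cost scales with the interface
    length rather than the full grid.
    """
    mask = np.abs(phi) < band * dx
    gy, gx = np.gradient(phi, dx)
    grad_norm = np.hypot(gy, gx)
    phi_new = phi.copy()
    phi_new[mask] -= dt * speed * grad_norm[mask]
    return phi_new, mask

# Signed-distance circle of radius 0.25 on the unit square
n = 128
dx = 1.0 / n
coords = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(coords, coords)
phi = np.hypot(X - 0.5, Y - 0.5) - 0.25

# Outward motion at unit speed; only the band cells are touched
phi_next, band_mask = narrow_band_step(phi, speed=1.0, dx=dx, dt=0.5 * dx)
```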

  13. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
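
Reinitialization to a signed distance function, which the abstract performs by evolving a Hamilton-Jacobi equation on high-order elements, can be approximated on a uniform grid with two Euclidean distance transforms. This sketch assumes SciPy is available and is only half-cell accurate near the interface.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def reinitialize(phi, dx=1.0):
    """Rebuild phi as an (approximate) signed distance function.

    The zero level set {phi = 0} is kept where it is; all other values
    are replaced by +/- the Euclidean distance to the opposite-sign
    region, computed with two distance transforms.
    """
    inside = phi < 0
    dist_to_inside = distance_transform_edt(~inside) * dx    # > 0 outside
    dist_to_outside = distance_transform_edt(inside) * dx    # > 0 inside
    return dist_to_inside - dist_to_outside

# A badly scaled level set function whose zero set is a circle of radius 0.3
n = 256
dx = 1.0 / n
coords = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(coords, coords)
r2 = (X - 0.5) ** 2 + (Y - 0.5) ** 2
phi0 = r2 - 0.09            # zero set at r = 0.3, but |grad phi0| != 1
phi_sdf = reinitialize(phi0, dx)
```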

  14. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  15. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  16. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used for segmentation of left ventricular (LV) boundaries on echocardiography images because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. It improves the accuracy of boundary detection and makes the evolution more efficient. The experiments conducted on the real cardiac ultrasound image sequences show a positive and promising result.

  17. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent breaking wave phenomena, keeping the interface sharp without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  18. Repository schemes for high-level radioactive waste disposal. Report on task no. 1(b)- Review of schemes for argillaceous and saliferous formations

    Energy Technology Data Exchange (ETDEWEB)

    1981-01-01

    This Report examines international proposals for the underground storage of high level radioactive waste. From this study proposals have been developed in relation to conditions that can be expected in the United Kingdom. This Report is restricted to the consideration of repositories in the softer rocks, the clays and the shales, which can be encountered in many parts of the United Kingdom. It has also considered the construction of a repository in rock salt. The only such deposits which could be developed for the purpose are found under the North Sea. For the purpose of this Study it has been assumed that suitable sites can be located near enough to a coastline to allow works to be constructed from the land. The likely cost of a repository will vary widely depending upon the nature of the ground in which it is constructed and its depth. The choice here is not an engineering matter but is dictated by the degree of protection which it is necessary to give to the environment, both within the foreseeable future and for many generations to come. Costs are estimated making various assumptions.

  19. Individual- and Setting-Level Correlates of Secondary Traumatic Stress in Rape Crisis Center Staff.

    Science.gov (United States)

    Dworkin, Emily R; Sorell, Nicole R; Allen, Nicole E

    2016-02-01

    Secondary traumatic stress (STS) is an issue of significant concern among providers who work with survivors of sexual assault. Although STS has been studied in relation to individual-level characteristics of a variety of types of trauma responders, less research has focused specifically on rape crisis centers as environments that might convey risk of or protection from STS, and no research to our knowledge has modeled setting-level variation in correlates of STS. The current study uses a sample of 164 staff members representing 40 rape crisis centers across a single Midwestern state to investigate the staff member- and agency-level correlates of STS. Results suggest that correlates exist at both levels of analysis. Younger age and greater severity of sexual assault history were statistically significant individual-level predictors of increased STS. Greater frequency of supervision was more strongly related to secondary stress for non-advocates than for advocates. At the setting level, lower levels of supervision and higher client loads agency-wide accounted for unique variance in staff members' STS. These findings suggest that characteristics of both providers and their settings are important to consider when understanding their STS. © The Author(s) 2014.

  20. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    Science.gov (United States)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically

  1. Analysis of Forensic Autopsy in 120 Cases of Medical Disputes Among Different Levels of Institutional Settings.

    Science.gov (United States)

    Yu, Lin-Sheng; Ye, Guang-Hua; Fan, Yan-Yan; Li, Xing-Biao; Feng, Xiang-Ping; Han, Jun-Ge; Lin, Ke-Zhi; Deng, Miao-Wu; Li, Feng

    2015-09-01

    Despite advances in medical science, the causes of death can sometimes only be determined by pathologists after a complete autopsy. Few studies have investigated the importance of forensic autopsy in medically disputed cases among different levels of institutional settings. Our study aimed to analyze forensic autopsy in 120 cases of medical disputes among five levels of institutional settings between 2001 and 2012 in Wenzhou, China. The results showed an overall concordance rate of 55%. Of the 39% of clinically missed diagnoses, cardiovascular pathology comprises 55.32%, while respiratory pathology accounts for the remaining 44.68%. Factors that increase the likelihood of missed diagnoses were private clinics, community settings, and county hospitals. These results support that autopsy remains an important tool in establishing causes of death in medically disputed cases, which may directly determine or exclude the fault of medical care and thereby help resolve these cases. © 2015 American Academy of Forensic Sciences.

  2. An investigation of children's levels of inquiry in an informal science setting

    Science.gov (United States)

    Clark-Thomas, Beth Anne

    Elementary school students' understanding of both science content and processes is enhanced by the higher level thinking associated with inquiry-based science investigations. Informal science setting personnel, elementary school teachers, and curriculum specialists charged with designing inquiry-based investigations would be well served by an understanding of the varying influence of certain factors upon students' willingness and ability to delve into such higher level inquiries. This study examined young children's use of inquiry-based materials and the factors which may influence the level of inquiry they engaged in during informal science activities. An informal science setting was selected as the context for the examination of student inquiry behaviors because of the rich inquiry-based environment present at the site and the benefits previously noted in the research regarding the impact of informal science settings upon the construction of knowledge in science. The study revealed several patterns of behavior among children when they are engaged in inquiry-based activities at informal science exhibits. These repeated behaviors varied in the children's apparent purposeful use of the materials at the exhibits. These levels of inquiry behavior were taxonomically defined as high/medium/low within this study utilizing a researcher-developed tool. Furthermore, in this study adult interventions, questions, or prompting were found to impact the level of inquiry engaged in by the children. This study revealed that higher levels of inquiry were preceded by task-directed and physical feature prompts. Moreover, the levels of inquiry behaviors were halted, even lowered, when preceded by a prompt that focused on a science content or concept question. Results of this study have implications for the enhancement of inquiry-based science activities in elementary schools as well as in informal science settings. These findings have significance for all science educators.

  3. Education level inequalities and transportation injury mortality in the middle aged and elderly in European settings

    NARCIS (Netherlands)

    Borrell, C.; Plasència, A.; Huisman, M.; Costa, G.; Kunst, A.; Andersen, O.; Bopp, M.; Borgan, J.-K.; Deboosere, P.; Glickman, M.; Gadeyne, S.; Minder, C.; Regidor, E.; Spadea, T.; Valkonen, T.; Mackenbach, J. P.

    2005-01-01

    OBJECTIVE: To study the differential distribution of transportation injury mortality by educational level in nine European settings, among people older than 30 years, during the 1990s. METHODS: Deaths of men and women older than 30 years from transportation injuries were studied. Rate differences

  4. A thick level set interface model for simulating fatigue-driven delamination in composites

    NARCIS (Netherlands)

    Latifi, M.; Van der Meer, F.P.; Sluys, L.J.

    2015-01-01

    This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy

  5. Level of health care and services in a tertiary health setting in Nigeria

    African Journals Online (AJOL)

    Level of health care and services in a tertiary health setting in Nigeria. ... Background: There is a growing awareness and demand for quality health care across the world; hence the ... Doctors and nurses formed 64.3% of the study population.

  6. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface tracking method. In both models, the surface tension is represented as a body force, concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented into the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two. (author)
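
    The first model above spreads the surface-tension force over a narrow band via a smoothed Delta function. A common choice in the level set literature is a cosine-shaped delta of half-width eps; the following sketch (our own illustration, not the CFX-4 implementation) builds it and checks that it integrates to one across the interface:

```python
import numpy as np

# Sketch of the smoothed Dirac delta used to spread a surface-tension
# body force over a narrow band of half-width eps around the interface
# (phi = 0). Outside the band the force weight is zero.
def smoothed_delta(phi, eps):
    d = np.zeros_like(phi)
    band = np.abs(phi) < eps
    d[band] = (1.0 + np.cos(np.pi * phi[band] / eps)) / (2.0 * eps)
    return d

phi = np.linspace(-1.0, 1.0, 2001)  # 1D signed distance to the interface
eps = 0.1
d = smoothed_delta(phi, eps)
# a proper regularized delta must integrate to ~1 across the interface
integral = d.sum() * (phi[1] - phi[0])
```

    The analytic integral of this bump over [-eps, eps] is exactly one, so the discrete sum should match it closely for any reasonable grid.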

  7. Fast Streaming 3D Level set Segmentation on the GPU for Smooth Multi-phase Segmentation

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2011-01-01

    Level set method based segmentation provides an efficient tool for topological and geometrical shape handling, but it is slow due to high computational burden. In this work, we provide a framework for streaming computations on large volumetric images on the GPU. A streaming computational model...

  8. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying cartesian computational grid. Here we present a novel, very memory efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are...

  9. Scope of physician procedures independently billed by mid-level providers in the office setting.

    Science.gov (United States)

    Coldiron, Brett; Ratnarathorn, Mondhipa

    2014-11-01

    Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Outpatient office setting among health care providers. The scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may involve procedures for which they may not have formal training.

  10. Numerical schemes for explosion hazards

    International Nuclear Information System (INIS)

    Therme, Nicolas

    2015-01-01

    In nuclear facilities, internal or external explosions can cause confinement breaches and radioactive material release into the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High order methods of MUSCL type are used in the discrete convective operators, based solely on the material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow, and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called
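
    The MUSCL-type convection operators mentioned above rely on slope limiting to avoid creating new extrema in the convected quantities. A generic sketch (a standard minmod-limited reconstruction, not the thesis's exact scheme) looks like:

```python
import numpy as np

# Generic sketch of a MUSCL-type reconstruction with a minmod slope
# limiter: the kind of limited convection operator that keeps convected
# quantities (density, internal energy) free of new extrema.
def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_left_states(q):
    # limited slope from the two one-sided differences (periodic ends)
    dl = q - np.roll(q, 1)
    dr = np.roll(q, -1) - q
    slope = minmod(dl, dr)
    # left state at face i+1/2, reconstructed from cell i
    return q + 0.5 * slope

q = np.array([1.0, 1.0, 2.0, 4.0, 4.0])
faces = muscl_left_states(q)
# the limiter zeroes the slope at plateaus, so reconstructed states
# stay inside [min(q), max(q)] -- no new extrema are created
```

    Because the limited slope vanishes wherever the one-sided differences disagree in sign, the reconstruction is bounded by the neighboring cell averages, which is what underpins the positivity results quoted in the abstract.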

  11. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

    In the application of oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way of monitoring the oil film in the pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small diameter pipe and thin oil film, however, the film thickness is hard to observe visually since the interface of oil and air is not obvious in the reconstructed images. Moreover, artifacts in the reconstructions seriously reduce the effectiveness of image segmentation techniques such as the level set method, which is also unsuitable for online monitoring due to its low computation speed. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system, a narrowband image filter is defined to eliminate the influence of artifacts, and, exploiting the continuity of the oil distribution variation, the detected oil-air interface of a former image is used as the initial contour for the detection of the subsequent frame; thus, the propagation from the initial contour to the boundary can be greatly accelerated, making real-time tracking possible. To test the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experiment results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online.

  12. Voluntary agreements, implementation and efficiency. European relevance of case study results. Reflections on transferability to voluntary agreement schemes at the European level

    Energy Technology Data Exchange (ETDEWEB)

    Helby, Peter

    2000-04-01

    As a policy instrument, voluntary agreements often fascinate policy-makers. This is fuelled by a number of assumed advantages, such as the opportunity for co-operation rather than confrontation, speed, flexibility, and cost-effectiveness. Some advantages might even be accentuated at the European level: co-operation has added advantage at the European level, where the culture of consensus decision is strong. Flexibility is extra attractive for policy-makers dealing with an economy less homogeneous than the average national economy. Speed is certainly welcomed by policy-makers otherwise faced with the slow-winding European legislative process. Cost-effectiveness is eagerly sought by European policy-makers facing tight administrative budgets and staff limits. This report examines lessons from the VAIE case studies that may be useful to policy-makers engaged in the development of voluntary approaches at the European level. These case studies are about voluntary agreement schemes for industrial energy efficiency deployed in Denmark, France, Germany, the Netherlands, and Sweden. For a summary of these case studies, please refer to the VAIE final report. More detailed information is available in the VAIE national reports. It needs to be emphasised that the empirical base is very narrow. The 'lessons' presented can only be hypotheses, based on an inductive leap from a very narrow experience. The reader will need to check these hypotheses against her own broader experience and personal judgement. According to the principle of subsidiarity, voluntary agreements should be implemented at the European level only if that would have significant advantage over national action. Action at the European level, rather than the national level, would have these potential advantages: being more consistent with the development of the single market; allowing higher demands on energy efficiency without negative effect on competitiveness and employment; stimulating company

  13. INTEGRATED SFM TECHNIQUES USING DATA SET FROM GOOGLE EARTH 3D MODEL AND FROM STREET LEVEL

    Directory of Open Access Journals (Sweden)

    L. Inzerillo

    2017-08-01

    Full Text Available Structure from motion (SfM) represents a widespread photogrammetric method that uses photogrammetric rules to carry out a 3D model from a photo data set collection. For some complex ancient buildings, such as cathedrals, theatres, or castles, the data set acquired from street level needs to be complemented with a UAV one in order to reconstruct the roof in 3D. Nevertheless, the use of UAVs is strongly limited by government rules. In recent years, Google Earth (GE) has been enriched with 3D models of earth sites. For this reason, it seemed convenient to test the potential offered by GE in order to extract from it a data set that replaces the UAV function and closes the aerial building data set, using screen images of its high resolution 3D models. Users can take unlimited "aerial photos" of a scene while flying around in GE at any viewing angle and altitude. The challenge is to verify the metric reliability of the SfM model carried out with an integrated data set (the one from street level and the one from GE) aimed at replacing UAV use in an urban context. This model is called the integrated GE SfM model (i-GESfM). In this paper we present a case study: the Cathedral of Palermo.

  14. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high contrast speckle, contour expansion stopped too early. Conclusion The
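
    The accuracy metrics reported above can be made concrete. Treating each contour as a set of boundary points (our assumed interpretation of the paper's metrics), MAD averages nearest-point distances between the contours while the Hausdorff distance takes the worst case:

```python
import numpy as np

# Sketch of boundary-distance metrics of the kind reported above: for
# each point on contour A, the distance to the nearest point on contour
# B; MAD averages these, Hausdorff takes the worst case (both are
# symmetrised over the two directions here).
def nearest_dists(a, b):
    # a, b: (n, 2) arrays of contour points
    diff = a[:, None, :] - b[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).min(axis=1)

def mad(a, b):
    return 0.5 * (nearest_dists(a, b).mean() + nearest_dists(b, a).mean())

def hausdorff(a, b):
    return max(nearest_dists(a, b).max(), nearest_dists(b, a).max())

t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]   # "ground truth", radius 1
shrunk = 0.9 * circle                  # undersegmented result, radius 0.9
```

    For this synthetic undersegmentation, both metrics report roughly the 0.1 radial gap between the two contours.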

  15. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, image intensity inhomogeneity can be handled well. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

    A second order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We use a rectilinear grid with collocated, cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicolson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping, using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations for an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.

  17. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and the morphological method is applied to determine the parasite regions and wipe out unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to hasten the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
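
    The global CV model referred to above is, we assume from the abbreviation, the Chan-Vese fitting term: the mean intensities c1 and c2 inside and outside the contour drive each pixel toward the better-matching region. A minimal sketch of that fitting force (our own illustration, not the paper's modified formulation):

```python
import numpy as np

# Sketch of the global Chan-Vese ("CV") fitting term: given a current
# partition (phi > 0 inside), c1 and c2 are the mean intensities inside
# and outside, and each pixel is pushed toward the region whose mean it
# matches better.
def cv_means(img, phi):
    inside = phi > 0
    return img[inside].mean(), img[~inside].mean()

def cv_force(img, phi, lam1=1.0, lam2=1.0):
    c1, c2 = cv_means(img, phi)
    # positive force favors growing the inside region at that pixel
    return -lam1 * (img - c1) ** 2 + lam2 * (img - c2) ** 2

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0   # bright square on dark bg
phi = -np.ones((8, 8)); phi[3:5, 3:5] = 1.0   # small seed inside square
F = cv_force(img, phi)
# F > 0 on bright pixels still outside the seed (contour should expand
# over them); F < 0 on dark background pixels (contour should not)
```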

  18. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  19. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The whole numerical algorithm is easy to implement. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
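
    The re-scaling procedure described above can be illustrated in its simplest form: multiply the concentration field by the ratio of the initial total mass to the current, numerically diffused mass. This toy 1D version ignores the paper's domain/interface weighting:

```python
import numpy as np

# Sketch of a mass-conserving re-scaling step of the kind described
# above: after advection-diffusion introduces numerical mass loss, the
# concentration field is rescaled so the total mass matches the initial
# mass m0. (Illustrative only; the paper's scheme is more elaborate.)
def rescale_mass(c, dx, m0):
    m = c.sum() * dx            # current discrete total mass
    return c * (m0 / m)

dx = 0.01
c0 = np.exp(-np.linspace(-3.0, 3.0, 601) ** 2)  # initial concentration
m0 = c0.sum() * dx
c_lossy = 0.97 * c0             # pretend 3% numerical mass loss
c_fixed = rescale_mass(c_lossy, dx, m0)
# total mass is restored to m0 while the profile shape is preserved
```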

  20. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Methods described by partial differential equations have gained considerable interest because of undoubted advantages such as an easy mathematical description of the underlying physical phenomena, subpixel precision, isotropy, or direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their wide industrial deployment on embedded systems is slowed down by their considerable computational effort. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  1. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  2. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  3. Kir2.1 channels set two levels of resting membrane potential with inward rectification.

    Science.gov (United States)

    Chen, Kuihao; Zuo, Dongchuan; Liu, Zheng; Chen, Haijun

    2018-04-01

    Strong inward rectifier K+ channels (Kir2.1) mediate background K+ currents primarily responsible for maintenance of the resting membrane potential. Multiple types of cells exhibit two levels of resting membrane potential. Kir2.1 and K2P1 currents counterbalance, partially accounting for this phenomenon in human cardiomyocytes in subphysiological extracellular K+ concentrations or pathological hypokalemic conditions. The mechanism of how Kir2.1 channels contribute to the two levels of resting membrane potential in different types of cells is not well understood. Here we test the hypothesis that Kir2.1 channels set two levels of resting membrane potential with inward rectification. Under hypokalemic conditions, Kir2.1 currents counterbalance HCN2 or HCN4 cation currents in CHO cells that heterologously express both channels, generating N-shaped current-voltage relationships that cross the voltage axis three times and reconstituting two levels of resting membrane potential. Blockade of HCN channels eliminated the phenomenon in K2P1-deficient Kir2.1-expressing human cardiomyocytes derived from induced pluripotent stem cells or in CHO cells expressing both Kir2.1 and HCN2 channels. Weakly inward rectifier Kir4.1 or inward rectification-deficient Kir2.1•E224G mutant channels do not set two such levels of resting membrane potential when co-expressed with HCN2 channels in CHO cells or when overexpressed in human cardiomyocytes derived from induced pluripotent stem cells. These findings demonstrate a common mechanism by which Kir2.1 channels set two levels of resting membrane potential with inward rectification, by balancing inward currents through different cation channels such as hyperpolarization-activated HCN channels or hypokalemia-induced K2P1 leak channels.

  4. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
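
    The per-slice mean comparison described above can be sketched in a few lines. This is a hypothetical numpy illustration of the 2D-in-3D idea; the function name and the convention that phi ≤ 0 marks the inside of the contour are assumptions, not the authors' code:

```python
import numpy as np

def slicewise_means(volume, phi):
    """For each z-slice, compute the mean intensity inside and outside
    the 2D contour obtained by restricting the 3D level set function
    phi to that slice (inside taken as phi <= 0)."""
    means = []
    for z in range(volume.shape[0]):
        inside = phi[z] <= 0
        c_in = volume[z][inside].mean() if inside.any() else 0.0
        c_out = volume[z][~inside].mean() if (~inside).any() else 0.0
        means.append((c_in, c_out))
    return means
```

    In a full Chan-Vese-style model these per-slice means would drive the intensity term of the energy, while the mean-curvature term on the 3D surface bridges slices in which the cell has disappeared.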

  5. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari, Aymen; Saramito, Pierre; Misbah, Chaouqi

    2014-01-01

    The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  6. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the
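
    Stratonovich-sense stochastic equations like the one described are commonly integrated with a Heun (predictor-corrector) scheme, whose trapezoidal averaging converges to the Stratonovich solution. A scalar sketch of one such step follows; this is a generic illustration, not the authors' stochastic level-set solver, whose state is a full implicit function:

```python
def heun_stratonovich_step(x, f, g, dt, dw):
    """One Heun (midpoint) step for dX = f(X) dt + g(X) o dW.
    The predictor-corrector average of drift and diffusion makes the
    scheme consistent with the Stratonovich interpretation of the noise."""
    # Predictor: explicit Euler step
    x_pred = x + f(x) * dt + g(x) * dw
    # Corrector: trapezoidal average over the start and predicted states
    return x + 0.5 * (f(x) + f(x_pred)) * dt + 0.5 * (g(x) + g(x_pred)) * dw
```

    With g ≡ 0 this reduces to the deterministic Heun scheme; with multiplicative noise, the averaging supplies the Stratonovich correction that a plain Euler-Maruyama step (which converges to the Itô solution) would miss.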

  7. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate joint-angle drift and prevent the occurrence of high joint velocities, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, these two subschemes are reformulated as two general quadratic programs (QPs), which can be combined into one unified QP. A recurrent neural network (RNN) is then presented to solve the unified QP effectively. Finally, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
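
    The unified QP in such schemes has the generic form min ½xᵀQx + pᵀx subject to bounds, and recurrent neural solvers evolve a dynamical state toward its optimum. A minimal discrete-time projected-gradient analogue is sketched below; it is an illustrative stand-in for that class of solvers, not the specific RNN model of the paper:

```python
import numpy as np

def projected_gradient_qp(Q, p, lo, hi, steps=2000, lr=0.05):
    """Minimize 0.5 x'Qx + p'x subject to lo <= x <= hi by iterating
    x <- clip(x - lr * (Qx + p), lo, hi), a discrete analogue of the
    projected recurrent dynamics used by neural QP solvers."""
    x = np.zeros_like(p, dtype=float)
    for _ in range(steps):
        x = np.clip(x - lr * (Q @ x + p), lo, hi)
    return x
```

    For a convex QP with lr below 2 divided by the largest eigenvalue of Q, the iteration converges to the projected optimum, mirroring the convergence-in-time behaviour claimed for the continuous RNN dynamics.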

  8. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design and in image-guided surgery applications. The complex morphology of thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.
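
    The anisotropic filtering step used to enhance boundaries before level-set evolution is commonly Perona-Malik diffusion, which smooths noise while preserving edges. A minimal 2D sketch follows; the conductance function, parameters, and periodic border handling are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

def perona_malik(img, kappa=10.0, dt=0.2, steps=5):
    """Perona-Malik anisotropic diffusion: the diffusivity
    c = exp(-(|grad|/kappa)^2) is small across strong edges, so edges
    are preserved while flat regions are smoothed."""
    u = img.astype(float)
    for _ in range(steps):
        # Differences to the four neighbours (periodic border for brevity)
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```

    Because the per-edge fluxes are antisymmetric between neighbouring pixels, the total intensity is conserved, which is one reason this filter is preferred over plain Gaussian smoothing ahead of boundary-tracking segmentation.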

  9. Some free boundary problems in potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case of potential flow models with moving boundaries. Moreover, the fluid front will possibly be carrying some material substance which will diffuse in the front and be advected by the front velocity, as for example the use of surfactants to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and compare the level set based algorithm with previous front tracking models.
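
    The Eulerian front motion in a level set framework obeys the advection equation φ_t + u φ_x = 0, with the front located at the zero crossing of φ. A first-order upwind sketch in one dimension follows; the grid, velocity, and periodic boundary treatment are illustrative:

```python
import numpy as np

def advect_upwind(phi, u, dx, dt, steps):
    """First-order upwind advection of a level set function phi by a
    constant speed u; periodic boundaries via np.roll for brevity.
    The moving front is the zero crossing of phi."""
    for _ in range(steps):
        if u >= 0:
            dphi = (phi - np.roll(phi, 1)) / dx   # backward difference
        else:
            dphi = (np.roll(phi, -1) - phi) / dx  # forward difference
        phi = phi - dt * u * dphi
    return phi
```

    The upwind direction must follow the sign of u for stability (CFL condition |u| dt / dx ≤ 1); the same structure, with the velocity taken from the potential flow solver, underlies the fully Eulerian front evolution the authors describe.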

  10. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  11. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

    Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles are studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including their shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  12. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
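
    In the conservative level set formulation, the marker function ψ ∈ [0, 1] is kept sharp by a reinitialization equation that balances a compressive flux ψ(1 − ψ) along the interface normal against a small diffusion ε. A 1D sketch with central differences follows; the parameters and discretization are illustrative, not the SCLS scheme itself:

```python
import numpy as np

def reinitialize_1d(psi, dx, eps, dtau, steps):
    """1D conservative level set reinitialization,
    d(psi)/dtau + d/dx[ psi (1 - psi) n ] = eps d2(psi)/dx2,
    which drives psi toward a sharp but smooth profile in [0, 1]
    while conserving its integral (periodic boundaries via np.roll)."""
    for _ in range(steps):
        grad = (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * dx)
        n = np.sign(grad)                      # 1D interface "normal"
        flux = psi * (1.0 - psi) * n           # compressive flux along n
        dflux = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
        diff = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
        psi = psi + dtau * (eps * diff - dflux)
    return psi
```

    Because both the compressive and diffusive terms are in divergence form, the discrete sum of ψ is conserved exactly, which is the property that distinguishes this family from signed-distance reinitialization.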

  13. Considering Actionability at the Participant's Research Setting Level for Anticipatable Incidental Findings from Clinical Research.

    Science.gov (United States)

    Ortiz-Osorno, Alberto Betto; Ehler, Linda A; Brooks, Judith

    2015-01-01

    Determining what constitutes an anticipatable incidental finding (IF) from clinical research and defining whether, and when, this IF should be returned to the participant have been topics of discussion in the field of human subject protections for the last 10 years. It has been debated that implementing a comprehensive IF-approach that addresses both the responsibility of researchers to return IFs and the expectation of participants to receive them can be logistically challenging. IFs have been debated at different levels, such as the ethical reasoning for considering their disclosure or the need for planning for them during the development of the research study. Some authors have discussed the methods for re-contacting participants for disclosing IFs, as well as the relevance of considering the clinical importance of the IFs. Similarly, other authors have debated about when IFs should be disclosed to participants. However, no author has addressed how the "actionability" of the IFs should be considered, evaluated, or characterized at the participant's research setting level. This paper defines the concept of "Actionability at the Participant's Research Setting Level" (APRSL) for anticipatable IFs from clinical research, discusses some related ethical concepts to justify the APRSL concept, proposes a strategy to incorporate APRSL into the planning and management of IFs, and suggests a strategy for integrating APRSL at each local research setting. © 2015 American Society of Law, Medicine & Ethics, Inc.

  14. Records for radioactive waste management up to repository closure: Managing the primary level information (PLI) set

    International Nuclear Information System (INIS)

    2004-07-01

    The objective of this publication is to highlight the importance of the early establishment of a comprehensive records system to manage primary level information (PLI) as an integrated set of information, not merely as a collection of information, throughout all the phases of radioactive waste management. In addition to the information described in the waste inventory record keeping system (WIRKS), the PLI of a radioactive waste repository consists of the entire universe of information, data and records related to any aspect of the repository's life cycle. It is essential to establish PLI requirements based on an integrated set of needs from Regulators and Waste Managers involved in the waste management chain and to update these requirements as needs change over time. Information flow for radioactive waste management should be back-end driven. Identification of an Authority that will oversee the management of PLI throughout all phases of the radioactive waste management life cycle would guarantee the information flow to future generations. The long term protection of information essential to future generations can only be assured by the timely establishment of a comprehensive and effective RMS capable of capturing, indexing and evaluating all PLI. The loss of intellectual control over the PLI will make it very difficult to subsequently identify the ILI and HLI information sets. At all times prior to the closure of a radioactive waste repository, there should be an identifiable entity with a legally enforceable financial and management responsibility for the continued operation of a PLI Records Management System. The information presented in this publication will assist Member States in ensuring that waste and repository records, relevant for retention after repository closure

  15. Topological Hausdorff dimension and level sets of generic continuous functions on fractals

    International Nuclear Information System (INIS)

    Balka, Richárd; Buczolich, Zoltán; Elekes, Márton

    2012-01-01

    Highlights: ► We examine a new fractal dimension, the so-called topological Hausdorff dimension. ► The generic continuous function has a level set of maximal Hausdorff dimension. ► This maximal dimension is the topological Hausdorff dimension minus one. ► Homogeneity implies that “most” level sets are of this dimension. ► We calculate the various dimensions of the graph of the generic function. - Abstract: In an earlier paper we introduced a new concept of dimension for metric spaces, the so-called topological Hausdorff dimension. For a compact metric space K let dim_H K and dim_tH K denote its Hausdorff and topological Hausdorff dimension, respectively. We proved that this new dimension describes the Hausdorff dimension of the level sets of the generic continuous function on K, namely sup{dim_H f⁻¹(y) : y ∈ R} = dim_tH K − 1 for the generic f ∈ C(K), provided that K is not totally disconnected; otherwise every non-empty level set is a singleton. We also proved that if K is not totally disconnected and sufficiently homogeneous then dim_H f⁻¹(y) = dim_tH K − 1 for the generic f ∈ C(K) and the generic y ∈ f(K). The most important goal of this paper is to make these theorems more precise. As for the first result, we prove that the supremum is actually attained on the left-hand side of the first equation above, and also show that there may only be a unique level set of maximal Hausdorff dimension. As for the second result, we characterize those compact metric spaces K for which for the generic f ∈ C(K) and the generic y ∈ f(K) we have dim_H f⁻¹(y) = dim_tH K − 1. We also generalize a result of B. Kirchheim by showing that if K is self-similar then for the generic f ∈ C(K), for every y ∈ int f(K), we have dim_H f⁻¹(y) = dim_tH K − 1. Finally, we prove that the graph of the generic f ∈ C(K) has the same Hausdorff and topological Hausdorff dimension as K.

  16. Development of the Latvian scheme for energy auditing of buildings and inspection of boilers and air-conditioning systems. Final report institutional set-up

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-12-01

    To implement EU directive 93/76/EEC on reduction of carbon dioxide emission by increasing energy efficiency and EU directive 2002/91/EC on building energy efficiency, Latvia must establish an institutional scheme and define all the organisations involved. From a general perspective the institutional scheme must, as a minimum, include the following four key players: the administrator, the operating unit, the auditors or independent experts, and finally the client. Furthermore, institutions dealing with financing of energy efficiency improvement activities, training and certification of experts, information about auditing and energy efficiency, etc. need to be involved. At present there is no governmental or private Latvian organisation that could fully rearrange and assume the duties of an energy audit scheme secretariat. It is therefore recommended initially to place the secretariat as a separate, new unit within the Ministry of Economy, financed by the Ministry of Economy, with the intention of establishing at a later stage (after e.g. 5 years) a separate, new agency, an Energy Efficiency Agency, partly financed by the income from the energy audit and boiler inspection schemes. The Secretariat should, both in its initial phase and later, assign the tasks of training, information campaigns, quality assurance and evaluation to external organisations. (BA)

  17. A combined single-multiphase flow formulation of the premixing phase using the level set method

    International Nuclear Information System (INIS)

    Leskovar, M.; Marn, J.

    1999-01-01

    The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurring. To get a better insight into the hydrodynamic processes during the premixing phase, besides hot premixing experiments, where the water evaporation is significant, cold isothermal premixing experiments are also performed. The specialty of isothermal premixing experiments is that three phases are involved: the water, the air and the spheres phase; however, only the spheres phase mixes with the other two phases, whereas the water and air phases do not mix and remain separated by a free surface. Our idea therefore was to treat the isothermal premixing process with a combined single-multiphase flow model. In this combined model the water and air phases are treated as a single phase with discontinuous phase properties at the water-air interface, whereas the spheres are treated, as usual, with a multiphase flow model, where the spheres represent the dispersed phase and the common water-air phase represents the continuous phase. The common water-air phase was described with the front capturing method based on the level set formulation. In the level set formulation, the boundary of two-fluid interfaces is modeled as the zero set of a smooth signed normal distance function defined on the entire physical domain. The boundary is then updated by solving a nonlinear equation of the Hamilton-Jacobi type on the whole domain. With this single-multiphase flow model the QUEOS isothermal premixing experiment Q08 has been simulated. A numerical analysis using different treatments of the water-air interface (level set, high-resolution and upwind) has been performed for the incompressible and compressible cases, and the results were compared to experimental measurements. (author)
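
    In such a front-capturing formulation, the discontinuous phase properties are typically evaluated from a smoothed Heaviside of the signed distance function, so a single field carries both fluids. A sketch follows; the property values, smearing width, and sign convention (φ > 0 on the water side) are illustrative assumptions:

```python
import numpy as np

def blended_property(phi, rho_water, rho_air, eps):
    """Blend two phase properties across the zero level set of the
    signed distance function phi using a smoothed Heaviside with
    smearing half-width eps (phi > 0 taken as the water side)."""
    H = np.where(
        phi < -eps, 0.0,
        np.where(phi > eps, 1.0,
                 0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))
    return rho_air + (rho_water - rho_air) * H
```

    Smearing the jump over a few cells (eps is usually 1-2 grid spacings) keeps the single-fluid momentum equations well behaved while the level set tracks the free surface implicitly.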

  18. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less
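
    The gradient magnitude filter in this pipeline feeds the edge-stopping speed function used by geodesic active contours, which stays near 1 in homogeneous parenchyma and drops toward 0 at enhanced boundaries so the contour halts there. A generic sketch follows; the form g = 1/(1 + α|∇I|²) is a common textbook choice, not necessarily the authors' exact function:

```python
import numpy as np

def edge_stopping(image, alpha=1.0):
    """Edge indicator g = 1 / (1 + alpha * |grad I|^2) for geodesic
    active contours: g is about 1 in flat regions and tends to 0 at
    strong boundaries, halting the evolving contour there."""
    gy, gx = np.gradient(image.astype(float))
    return 1.0 / (1.0 + alpha * (gx**2 + gy**2))
```

    In a full implementation this g multiplies the curvature and advection terms of the level-set evolution, and the fast-marching initialization uses it directly as the front speed.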

  19. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time

  20. Improved inhalation technology for setting safe exposure levels for workplace chemicals

    Science.gov (United States)

    Stuart, Bruce O.

    1993-01-01

    Threshold Limit Values recommended as allowable air concentrations of a chemical in the workplace are often based upon a no-observable-effect-level (NOEL) determined by experimental inhalation studies using rodents. A 'safe level' for human exposure must then be estimated by the use of generalized safety factors in attempts to extrapolate from experimental rodents to man. The recent development of chemical-specific physiologically-based toxicokinetics makes use of measured physiological, biochemical, and metabolic parameters to construct a validated model that is able to 'scale-up' rodent response data to predict the behavior of the chemical in man. This procedure is made possible by recent advances in personal computer software and the emergence of appropriate biological data, and provides an analytical tool for much more reliable risk evaluation and airborne chemical exposure level setting for humans.

  1. Setting ozone critical levels for protecting horticultural Mediterranean crops: Case study of tomato

    International Nuclear Information System (INIS)

    González-Fernández, I.; Calvo, E.; Gerosa, G.; Bermejo, V.; Marzuoli, R.; Calatayud, V.; Alonso, R.

    2014-01-01

    Seven experiments carried out in Italy and Spain were used to parameterise a stomatal conductance model and to establish exposure- and dose-response relationships for the yield and quality of tomato, with the main goal of setting O3 critical levels (CLe). CLe, with confidence intervals between brackets, were set at an accumulated hourly O3 exposure over 40 nl l−1 of AOT40 = 8.4 (1.2, 15.6) ppm h and a phytotoxic ozone dose above a threshold of 6 nmol m−2 s−1 of POD6 = 2.7 (0.8, 4.6) mmol m−2 for yield, and AOT40 = 18.7 (8.5, 28.8) ppm h and POD6 = 4.1 (2.0, 6.2) mmol m−2 for quality, both indices performing equally well. CLe confidence intervals provide information on the quality of the dataset and should be included in future calculations of O3 CLe to improve current methodologies. These CLe, derived for sensitive tomato cultivars, should not be applied to quantify O3-induced losses, at the risk of making important overestimations of the economic losses associated with O3 pollution. -- Highlights: • Seven independent experiments from Italy and Spain were analysed. • O3 critical levels are proposed for the protection of summer horticultural crops. • Exposure- and flux-based O3 indices performed equally well. • Confidence intervals of the new O3 critical levels are calculated. • A new method to estimate the degree of risk of O3 damage is proposed. -- Critical levels for tomato yield were set at AOT40 = 8.4 ppm h and POD6 = 2.7 mmol m−2, and confidence intervals should be used to improve O3 risk assessment
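
The exposure index used above is simple to compute from an hourly ozone series; a minimal sketch (assuming the daylight-hour and growing-season selection has already been applied):

```python
def aot40_ppm_h(hourly_ppb):
    """AOT40: accumulated exposure over a threshold of 40 ppb, i.e. the sum
    of hourly exceedances (C - 40) for hours with C > 40 ppb, in ppm h."""
    return sum(max(0.0, c - 40.0) for c in hourly_ppb) / 1000.0

# toy series: only the hours above 40 ppb contribute
exposure = aot40_ppm_h([30.0, 45.0, 60.0, 40.0, 80.0])
# (5 + 20 + 0 + 40) ppb h = 65 ppb h = 0.065 ppm h
```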

  2. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).
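
Representing three immiscible fluids with two level-set functions can be done by nesting the sign tests; a minimal sketch of one common convention (assumed here, not necessarily the authors' exact choice):

```python
import numpy as np

def classify_fluids(phi1, phi2):
    """Label the three immiscible fluids from the signs of two level-set
    functions: fluid 1 where phi1 > 0; otherwise fluid 2 where phi2 > 0;
    fluid 3 elsewhere."""
    return np.where(phi1 > 0, 1, np.where(phi2 > 0, 2, 3))

# 1-D stratified example with interfaces at x = -0.5 and x = 0.5
x = np.linspace(-1.0, 1.0, 9)
phi1 = -(x + 0.5)   # positive in the leftmost layer
phi2 = -(x - 0.5)   # positive left of the second interface
labels = classify_fluids(phi1, phi2)
```

Each interface is the zero set of one function, so the two fronts can move independently while the combined labels always partition the domain into exactly three fluids.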

  3. The behaviour of the Landé factor and effective exchange parameter in a group of Pr intermetallics observed through reduced level scheme models

    International Nuclear Information System (INIS)

    Ranke, P.J. von; Caldas, A.; Palermo, L.

    1993-01-01

    The present work is part of a continuing series of studies dealing with models in which we retain only the two lowest levels of the crystal field splitting scheme of the rare-earth ion in rare-earth intermetallics. In these reduced level scheme models, the crystal field and magnetic Hamiltonians are represented in matrix notation. These two matrices constitute the model Hamiltonian proposed in this paper, from which we derive the magnetic state equations of interest for this work. Inserting into these equations suitable experimental data found in the literature for a particular rare-earth intermetallic, we obtain the Landé factor and effective exchange parameter for that intermetallic. This study is applied to a group of Pr intermetallics in cubic symmetry, in which the ground level may be either a non-magnetic singlet or a non-magnetic doublet. In both cases, the first excited level is a triplet. (orig.)

  4. Nurses' comfort level with spiritual assessment: a study among nurses working in diverse healthcare settings.

    Science.gov (United States)

    Cone, Pamela H; Giske, Tove

    2017-10-01

    To gain knowledge about nurses' comfort level in assessing spiritual matters and to learn what questions nurses use in practice related to spiritual assessment. Spirituality is important in holistic nursing care; however, nurses report feeling uncomfortable and ill-prepared to address this domain with patients. Education is reported to impact nurses' ability to engage in spiritual care. This cross-sectional exploratory survey reports on a mixed-method study examining how comfortable nurses are with spiritual assessment. In 2014, a 21-item survey with 10 demographic variables and three open-ended questions was distributed to Norwegian nurses working in diverse care settings, with 172 nurse responses (72% response rate). SPSS was used to analyse the quantitative data; thematic analysis examined the open-ended questions. Norwegian nurses reported a high level of comfort with most questions even though spirituality is seen as private. Nurses with some preparation or experience in spiritual care were most comfortable assessing spirituality. Statistically significant correlations were found between the nurses' comfort level with spiritual assessment and their preparedness and sense of the importance of spiritual assessment. How well-prepared nurses felt was related to years of experience, degree of spirituality and religiosity, and importance of spiritual assessment. Many nurses are poorly prepared for spiritual assessment and care among patients in diverse care settings; educational preparation increases their comfort level with facilitating such care. Nurses who feel well prepared in spiritual care feel more comfortable with the spiritual domain. By fostering a culture where patients' spirituality is discussed and reflected upon in everyday practice and in continued education, nurses' sense of preparedness, and thus their level of comfort, can increase. Clinical supervision and interprofessional collaboration with hospital chaplains and/or other spiritual leaders can

  5. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implant, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets including 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  6. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implant, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets including 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  7. Performance Evaluation of PBL Schemes of ARW Model in Simulating Thermo-Dynamical Structure of Pre-Monsoon Convective Episodes over Kharagpur Using STORM Data Sets

    Science.gov (United States)

    Madala, Srikanth; Satyanarayana, A. N. V.; Srinivas, C. V.; Tyagi, Bhishma

    2016-05-01

    In the present study, the advanced research WRF (ARW) model is employed to simulate convective thunderstorm episodes over the Kharagpur (22°30'N, 87°20'E) region of Gangetic West Bengal, India. High-resolution simulations are conducted using 1 × 1 degree NCEP final analysis meteorological fields for initial and boundary conditions for the events. The performance of two non-local [Yonsei University (YSU), Asymmetric Convective Model version 2 (ACM2)] and two local turbulence kinetic energy closures [Mellor-Yamada-Janjic (MYJ), Bougeault-Lacarrere (BouLac)] is evaluated in simulating planetary boundary layer (PBL) parameters and the thermodynamic structure of the atmosphere. The model-simulated parameters are validated with available in situ meteorological observations obtained from a micro-meteorological tower as well as high-resolution DigiCORA radiosonde ascents during the STORM-2007 field experiment at the study location, and with Doppler Weather Radar (DWR) imageries. It has been found that the PBL structure simulated with the TKE closures MYJ and BouLac is in better agreement with observations than that from the non-local closures. The model simulations with these schemes also captured the reflectivity, surface pressure patterns such as wake-low, meso-high and pre-squall low, and the convective updrafts and downdrafts reasonably well. Qualitative and quantitative comparisons reveal that the MYJ followed by BouLac schemes better simulated various features of the thunderstorm events over the Kharagpur region. The better performance of MYJ followed by BouLac is evident in the lower mean bias, mean absolute error and root mean square error and the good correlation coefficient for various surface meteorological variables as well as the thermo-dynamical structure of the atmosphere relative to the other PBL schemes. The better performance of the TKE closures may be attributed to their higher mixing efficiency, larger convective energy and better simulation of humidity promoting moist convection relative to non
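
The scoring used above reduces to four standard verification statistics; a minimal sketch with invented numbers (not STORM data):

```python
import numpy as np

def verification_stats(sim, obs):
    """Mean bias, mean absolute error, root mean square error and Pearson
    correlation of simulated vs. observed values."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    err = sim - obs
    bias = err.mean()
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    corr = np.corrcoef(sim, obs)[0, 1]
    return bias, mae, rmse, corr

# illustrative surface-temperature series (K); values are made up
obs = [300.0, 302.0, 305.0, 301.0]
sim = [301.0, 303.0, 304.0, 303.0]
bias, mae, rmse, corr = verification_stats(sim, obs)
```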

  8. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally applied field. A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic cloak.

  9. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan
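
The region-based evolution at the core of such methods can be illustrated with a bare-bones Chan-Vese-style update (curvature regularization omitted for brevity; this is a sketch, not the authors' joint spatio-temporal method):

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5):
    """One explicit step of a minimal Chan-Vese update: compute the mean
    intensity inside (phi > 0) and outside, then push phi so that each
    pixel joins the region whose mean it matches better."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    force = (img - c2) ** 2 - (img - c1) ** 2
    return phi + dt * force

# bright disc (a "cell") on a dark background, initialised with a larger circle
y, x = np.mgrid[0:32, 0:32]
img = (((x - 16) ** 2 + (y - 16) ** 2) < 36).astype(float)
phi = 10.0 - np.sqrt((x - 16.0) ** 2 + (y - 16.0) ** 2)
for _ in range(50):
    phi = chan_vese_step(phi, img)
seg = phi > 0   # shrinks from the initial circle onto the disc
```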

  10. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem
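
The first idea, replacing the eigenvalue problem by inverse iteration, can be sketched as follows (a generic implementation, not the paper's constrained-optimization variant):

```python
import numpy as np

def inverse_iteration(A, shift=0.0, n_iter=50):
    """Inverse iteration: repeatedly solve (A - shift*I) y = x and normalise.
    x converges to the eigenvector whose eigenvalue is nearest the shift;
    the Rayleigh quotient then approximates that eigenvalue."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = A - shift * np.eye(n)
    for _ in range(n_iter):
        y = np.linalg.solve(M, x)
        x = y / np.linalg.norm(y)
    return x @ A @ x, x

A = np.diag([1.0, 3.0, 5.0])          # smallest eigenvalue is 1
lam, v = inverse_iteration(A, shift=0.0)
```

In a shape-design loop this gives a cheap approximation of the extremal eigenvalue at each design update instead of a full eigendecomposition.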

  11. Modeling Restrained Shrinkage Induced Cracking in Concrete Rings Using the Thick Level Set Approach

    Directory of Open Access Journals (Sweden)

    Rebecca Nakhoul

    2018-03-01

    Modeling restrained shrinkage-induced damage and cracking in concrete is addressed herein. The novel Thick Level Set (TLS) damage growth and crack propagation model is used and adapted by introducing a shrinkage contribution into the formulation. The TLS capacity to predict damage evolution, crack initiation and growth triggered by restrained shrinkage in the absence of external loads is evaluated. A study dealing with shrinkage-induced cracking in elliptical concrete rings is presented. Key results such as the effect of ring oblateness on stress distribution and the critical shrinkage strain needed to initiate damage are highlighted. In addition, crack positions are compared to those observed in experiments and are found satisfactory.

  12. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function have a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
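
The entropy weight can be illustrated by computing the Shannon entropy of the grey-level histogram in a sliding window (a sketch of the idea only; the window size and quantisation are illustrative, not the paper's settings):

```python
import numpy as np

def local_entropy(img, win=3, nbins=8):
    """Shannon entropy of the quantised grey-level histogram in a win x win
    neighbourhood of every pixel (edge-padded)."""
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    scale = np.ptp(padded) + 1e-12
    q = np.clip((padded - padded.min()) / scale * nbins, 0, nbins - 1).astype(int)
    H = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = q[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=nbins) / patch.size
            p = p[p > 0]
            H[i, j] = -(p * np.log(p)).sum()
    return H

# entropy is zero in flat regions and positive near the step edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
H = local_entropy(img)
```

High local entropy flags windows containing a mixture of grey levels (edges, textured regions), which is why it is a natural weight for a locally fitted energy.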

  13. Implications of sea-level rise in a modern carbonate ramp setting

    Science.gov (United States)

    Lokier, Stephen W.; Court, Wesley M.; Onuma, Takumi; Paul, Andreas

    2018-03-01

    This study addresses a gap in our understanding of the effects of sea-level rise on the sedimentary systems and morphological development of recent and ancient carbonate ramp settings. Many ancient carbonate sequences are interpreted as having been deposited in carbonate ramp settings, yet these settings are poorly represented in the Recent. The study documents the present-day transgressive flooding of the Abu Dhabi coastline at the southern shoreline of the Arabian/Persian Gulf, a carbonate ramp depositional system that is widely employed as a Recent analogue for numerous ancient carbonate systems. Fourteen years of field-based observations are integrated with historical and recent high-resolution satellite imagery in order to document and assess the onset of flooding. Predicted rates of transgression (i.e. landward movement of the shoreline) of 2.5 m yr−1 (± 0.2 m yr−1) based on global sea-level rise alone were far exceeded by the flooding rate calculated from the back-stepping of coastal features (10-29 m yr−1). This discrepancy results from the dynamic nature of the flooding, with increased water depth exposing the coastline to increased erosion and thereby enhancing back-stepping. A non-accretionary transgressive shoreline trajectory results from relatively rapid sea-level rise coupled with a low-angle ramp geometry and a paucity of sediments. The flooding is represented by the landward migration of facies belts, a range of erosive features and the onset of bioturbation. Employing Intergovernmental Panel on Climate Change (Church et al., 2013) predictions for 21st century sea-level rise, and allowing for the post-flooding lag time that is typical for the start-up of carbonate factories, it is calculated that the coastline will continue to retrograde for the foreseeable future. Total passive flooding (without considering feedback in the modification of the shoreline) by the year 2100 is calculated to likely be between 340 and 571 m with a flooding rate of 3
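
The "passive" transgression figures come down to ramp geometry: horizontal retreat equals vertical rise divided by the ramp gradient. A sketch with an assumed gradient (chosen here only to reproduce a rate of 2.5 m yr−1; the study's actual profile is not given):

```python
def passive_retreat(rise_m, gradient):
    """Horizontal shoreline retreat on a planar ramp for a given sea-level
    rise, ignoring erosion and sediment supply: retreat = rise / gradient."""
    return rise_m / gradient

# assumed 1:2500 ramp gradient and 1 mm/yr of rise -> 2.5 m/yr of retreat
rate = passive_retreat(0.001, 1.0 / 2500.0)
```

The order-of-magnitude gap between such geometric estimates and the observed back-stepping rates is exactly the erosion-driven feedback the study highlights.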

  14. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  15. Robust space-time extraction of ventricular surface evolution using multiphase level sets

    Science.gov (United States)

    Drapaca, Corina S.; Cardenas, Valerie; Studholme, Colin

    2004-05-01

    This paper focuses on the problem of accurately extracting the CSF-tissue boundary, particularly around the ventricular surface, from serial structural MRI of the brain acquired in imaging studies of aging and dementia. This is a challenging problem because of the common occurrence of peri-ventricular lesions which locally alter the appearance of white matter. We examine a level set approach which evolves a four dimensional description of the ventricular surface over time. This has the advantage of allowing constraints on the contour in the temporal dimension, improving the consistency of the extracted object over time. We follow the approach proposed by Chan and Vese which is based on the Mumford and Shah model and implemented using the Osher and Sethian level set method. We have extended this to the 4 dimensional case to propagate a 4D contour toward the tissue boundaries through the evolution of a 5D implicit function. For convergence we use region-based information provided by the image rather than the gradient of the image. This is adapted to allow intensity contrast changes between time frames in the MRI sequence. Results on time sequences of 3D brain MR images are presented and discussed.

  16. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  17. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  18. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and the excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again, demonstrates the advantages of mass conservation and the capability to represent the interface accurately.
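
The idea of a mass remedy can be illustrated in a crude global form: shift the level-set function by a constant until the enclosed area (a 2-D proxy for mass) matches a target. This is only a stand-in for the paper's curvature-based local correction:

```python
import numpy as np

def conserve_area(phi, target_area, n_bisect=60):
    """Shift phi by a constant (found by bisection) so that the area of the
    phi > 0 region recovers target_area; a global, zeroth-order mass remedy."""
    lo, hi = -phi.max(), -phi.min()   # shifts giving an empty / nearly full domain
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if ((phi + mid) > 0).sum() < target_area:
            lo = mid
        else:
            hi = mid
    return phi + hi

# a circle that has "lost mass": restore its area to ~200 cells
y, x = np.mgrid[0:32, 0:32]
phi = 5.0 - np.sqrt((x - 16.0) ** 2 + (y - 16.0) ** 2)
phi_fixed = conserve_area(phi, 200.0)
```

A uniform shift cannot respect local interface shape, which is why the paper distributes the remedy according to local curvature instead.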

  19. On the Relationship between Variational Level Set-Based and SOM-Based Active Contours

    Science.gov (United States)

    Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad

    2015-01-01

    Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can handle also topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property and overcoming some drawbacks of other ACMs, such as trapping into local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs, SOM-based ACMs, and their relationship and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736

  20. HPC in Basin Modeling: Simulating Mechanical Compaction through Vertical Effective Stress using Level Sets

    Science.gov (United States)

    McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.

    2017-12-01

    In the context of sedimentary basins, we present a model for the simulation of the movement of a geological formation (layers) during the evolution of the basin through sedimentation and compaction processes. Assuming a single-phase saturated porous medium for the sedimentary layers, the model focuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentation drives fluid flow and reduction of pore space by compaction. On the assumption of Terzaghi's effective stress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented in McGovern et al. (2017) [1]. The current work extends the spatial domain to 3-D, though we maintain the assumption of vertical effective stress to drive the compaction. The idealized geological evolution is conceptualized as the motion of interfaces between rock layers, whose paths are determined by the magnitude of a speed function in the direction normal to the evolving layer interface. The speeds normal to the interface are dependent on the change in porosity, determined through an effective stress-based compaction law, such as the exponential Athy's law. Provided with the speeds normal to the interface, the level set method uses an advection equation to evolve a potential function, whose zero level set defines the interface. Thus, the moving layer geometry influences the pore pressure distribution, which couples back to the interface speeds. The flexible construction of the speed function allows extension, in the future, to other terms to represent different physical processes, analogous to how the compaction rule represents material deformation. The 3-D model is implemented using the generic finite element method framework deal.II, which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributed solution of the model equations [2]. Experiments are being run on the Juelich Supercomputing Center's Jureca cluster. [1] McGovern et al. (2017
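
The interface-motion law described above, phi_t + F|grad phi| = 0, can be sketched in 1-D with a first-order Godunov upwind scheme (assuming a positive, constant normal speed F; the basin model's speed function is of course state-dependent):

```python
import numpy as np

def advect_normal(phi, F, dx, dt, steps):
    """Evolve phi_t + F*|phi_x| = 0 (F > 0) with first-order Godunov
    upwinding; the zero level set moves at speed F in its normal direction."""
    phi = phi.copy()
    for _ in range(steps):
        d = np.diff(phi) / dx
        dm = np.concatenate(([0.0], d))   # backward difference, zeroed at left edge
        dp = np.concatenate((d, [0.0]))   # forward difference, zeroed at right edge
        grad = np.sqrt(np.maximum(dm, 0.0) ** 2 + np.minimum(dp, 0.0) ** 2)
        phi = phi - dt * F * grad
    return phi

# interface at x = 0.3 moving at unit speed: after t = 0.2 it sits near x = 0.5
x = np.linspace(0.0, 1.0, 101)
phi = advect_normal(x - 0.3, F=1.0, dx=0.01, dt=0.005, steps=40)
pos = x[np.argmin(np.abs(phi))]
```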

  1. Energy level schemes of f{sup N} electronic configurations for the di-, tri-, and tetravalent lanthanides and actinides in a free state

    Energy Technology Data Exchange (ETDEWEB)

    Ma, C.-G. [College of Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Brik, M.G., E-mail: mikhail.brik@ut.ee [College of Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Institute of Physics, University of Tartu, Ravila 14C, Tartu 50411 (Estonia); Institute of Physics, Jan Dlugosz University, Armii Krajowej 13/15, PL-42200 Czestochowa (Poland); Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw (Poland); Liu, D.-X.; Feng, B.; Tian, Ya [College of Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Suchocki, A. [Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw (Poland)

    2016-02-15

    The energy level diagrams are theoretically constructed for the di-, tri-, and tetravalent lanthanide and actinide ions, using Hartree–Fock calculated parameters of the Coulomb and spin–orbit interactions within the f{sup N} (N=1…13) electron configurations. These diagrams are analogous to Dieke's diagram, which was obtained experimentally. They can be used for an analysis of the optical spectra of all considered groups of ions in various environments. The systematic variation of some prominent energy levels (especially those with potential for emission transitions) along the isoelectronic 4f/5f ions is considered. - Highlights: • Energy level schemes for di-, tri-, and tetravalent lanthanides/actinides are calculated. • Systematic variation of the characteristic energy levels across the series is considered. • Potentially interesting emission transitions are identified.

  2. A progressive diagonalization scheme for the Rabi Hamiltonian

    International Nuclear Information System (INIS)

    Pan, Feng; Guan, Xin; Wang, Yin; Draayer, J P

    2010-01-01

    A diagonalization scheme for the Rabi Hamiltonian, which describes a qubit interacting with a single-mode radiation field via a dipole interaction, is proposed. It is shown that the Rabi Hamiltonian can be solved almost exactly using a progressive scheme that involves a finite set of one-variable polynomial equations. The scheme is especially efficient for the lower part of the spectrum. Some low-lying energy levels of the model with several sets of parameters are calculated and compared to those provided by the recently proposed generalized rotating-wave approximation and a full matrix diagonalization.

  3. Architecture-Level Exploration of Alternative Interconnection Schemes Targeting 3D FPGAs: A Software-Supported Methodology

    Directory of Open Access Journals (Sweden)

    Kostas Siozios

    2008-01-01

    Full Text Available In current reconfigurable architectures, the interconnection structures contribute an increasing share of the delay and power consumption. The demand for increased clock frequencies and logic density (smaller area footprint) makes the problem even more important. Three-dimensional (3D) architectures are able to alleviate this problem by accommodating a number of functional layers, each of which might be fabricated in a different technology. However, the benefits of such integration technology have not been sufficiently explored yet. In this paper, we propose a software-supported methodology for exploring and evaluating alternative interconnection schemes for 3D FPGAs. In order to support the proposed methodology, three new CAD tools were developed (part of the 3D MEANDER Design Framework). During our exploration, we study the impact of the vertical interconnections between functional layers on a number of design parameters. More specifically, the average gains in operation frequency, power consumption, and wirelength are 35%, 32%, and 13%, respectively, compared to existing 2D FPGAs with identical logic resources. Also, we achieve an 8% higher utilization ratio for the vertical interconnections compared to existing approaches for designing 3D FPGAs, leading to cheaper and more reliable devices.

  4. Reservoir characterisation by a binary level set method and adaptive multiscale estimation

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Lars Kristian

    2006-01-15

    The main focus of this work is on estimation of the absolute permeability as a solution of an inverse problem. We have considered both a single-phase and a two-phase flow model. Two novel approaches have been introduced and tested numerically for solving the inverse problems. The first approach is a multi scale zonation technique which is treated in Paper A. The purpose of the work in this paper is to find a coarse scale solution based on production data from wells. In the suggested approach, the robustness of an already developed method, the adaptive multi scale estimation (AME), has been improved by utilising information from several candidate solutions generated by a stochastic optimizer. The new approach also suggests a way of combining a stochastic and a gradient search method, which in general is a problematic issue. The second approach is a piecewise constant level set approach and is applied in Paper B, C, D and E. Paper B considers the stationary single-phase problem, while Paper C, D and E use a two-phase flow model. In the two-phase flow problem we have utilised information from both production data in wells and spatially distributed data gathered from seismic surveys. Due to the higher content of information provided by the spatially distributed data, we search for solutions on a slightly finer scale than one typically does with only production data included. The applied level set method is suitable for reconstruction of fields with a supposed known facies-type of solution. That is, the solution should be close to piecewise constant. This information is utilised through a strong restriction of the number of constant levels in the estimate. On the other hand, the flexibility in the geometries of the zones is much larger for this method than in a typical zonation approach, for example the multi scale approach applied in Paper A. In all these papers, the numerical studies are done on synthetic data sets. An advantage of synthetic data studies is that the true

  5. CT Findings of Disease with Elevated Serum D-Dimer Levels in an Emergency Room Setting

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Ji Youn; Kwon, Woo Cheol; Kim, Young Ju [Dept. of Radiology, Wonju Christian Hospital, Yonsei University Wonju College of Medicine, Wonju (Korea, Republic of)

    2012-01-15

    Pulmonary embolism and deep vein thrombosis are the leading causes of elevated serum D-dimer levels in the emergency room. Although D-dimer is a useful screening test because of its high sensitivity and negative predictive value, it has a low specificity. In addition, D-dimer can be elevated in various diseases. Therefore, information on the various diseases with elevated D-dimer levels and their radiologic findings may allow for accurate diagnosis and proper management. Herein, we report the CT findings of various diseases with elevated D-dimer levels in an emergency room setting, including an intravascular contrast filling defect with associated findings in a venous thromboembolism, fracture with soft tissue swelling and hematoma formation in a trauma patient, enlargement with contrast enhancement in the infected organ of a patient, coronary artery stenosis with a perfusion defect of the myocardium in a patient with acute myocardial infarction, high density of acute thrombus in a cerebral vessel with a low density of affected brain parenchyma in an acute cerebral infarction, intimal flap with two separated lumens in a case of aortic dissection, organ involvement of malignancy in a cancer patient, and atrophy of a liver with a dilated portal vein and associated findings.

  6. Glycated albumin is set lower in relation to plasma glucose levels in patients with Cushing's syndrome.

    Science.gov (United States)

    Kitamura, Tetsuhiro; Otsuki, Michio; Tamada, Daisuke; Tabuchi, Yukiko; Mukai, Kosuke; Morita, Shinya; Kasayama, Soji; Shimomura, Iichiro; Koga, Masafumi

    2013-09-23

    Glycated albumin (GA) is an indicator of glycemic control, which has some specific characters in comparison with HbA1c. Since glucocorticoids (GC) promote protein catabolism including serum albumin, GC excess state would influence GA levels. We therefore investigated GA levels in patients with Cushing's syndrome. We studied 16 patients with Cushing's syndrome (8 patients had diabetes mellitus and the remaining 8 patients were non-diabetic). Thirty-two patients with type 2 diabetes mellitus and 32 non-diabetic subjects matched for age, sex and BMI were used as controls. In the patients with Cushing's syndrome, GA was significantly correlated with HbA1c, but the regression line shifted downwards as compared with the controls. The GA/HbA1c ratio in the patients with Cushing's syndrome was also significantly lower than the controls. HbA1c in the non-diabetic patients with Cushing's syndrome was not different from the non-diabetic controls, whereas GA was significantly lower. In 7 patients with Cushing's syndrome who performed self-monitoring of blood glucose, the measured HbA1c was matched with HbA1c estimated from mean blood glucose, whereas the measured GA was significantly lower than the estimated GA. We clarified that GA is set lower in relation to plasma glucose levels in patients with Cushing's syndrome. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. CT Findings of Disease with Elevated Serum D-Dimer Levels in an Emergency Room Setting

    International Nuclear Information System (INIS)

    Choi, Ji Youn; Kwon, Woo Cheol; Kim, Young Ju

    2012-01-01

    Pulmonary embolism and deep vein thrombosis are the leading causes of elevated serum D-dimer levels in the emergency room. Although D-dimer is a useful screening test because of its high sensitivity and negative predictive value, it has a low specificity. In addition, D-dimer can be elevated in various diseases. Therefore, information on the various diseases with elevated D-dimer levels and their radiologic findings may allow for accurate diagnosis and proper management. Herein, we report the CT findings of various diseases with elevated D-dimer levels in an emergency room setting, including an intravascular contrast filling defect with associated findings in a venous thromboembolism, fracture with soft tissue swelling and hematoma formation in a trauma patient, enlargement with contrast enhancement in the infected organ of a patient, coronary artery stenosis with a perfusion defect of the myocardium in a patient with acute myocardial infarction, high density of acute thrombus in a cerebral vessel with a low density of affected brain parenchyma in an acute cerebral infarction, intimal flap with two separated lumens in a case of aortic dissection, organ involvement of malignancy in a cancer patient, and atrophy of a liver with a dilated portal vein and associated findings.

  8. Enzymatic versus Inorganic Oxygen Reduction Catalysts: Comparison of the Energy Levels in a Free-Energy Scheme

    DEFF Research Database (Denmark)

    Kjærgaard, Christian Hauge; Rossmeisl, Jan; Nørskov, Jens Kehlet

    2010-01-01

    In this paper, we present a method to directly compare the energy levels of intermediates in enzymatic and inorganic oxygen reduction catalysts. We initially describe how the energy levels of a Pt(111) catalyst, operating at pH = 0, are obtained. By a simple procedure, we then convert the energy...... levels of cytochrome c oxidase (CcO) models obtained at physiological pH = 7 to the energy levels at pH = 0, which allows for comparison. Furthermore, we illustrate how different bias voltages will affect the free-energy landscapes of the catalysts. This allows us to determine the so-called theoretical...

  9. Tabled Execution in Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Willcock, J J; Lumsdaine, A; Quinlan, D J

    2008-08-19

    Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs, and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
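    As a rough illustration of the tabling idea (in Python rather than Scheme, and far simpler than the continuation-passing implementation the abstract describes), the sketch below combines a memo table with a set of in-progress calls so that a query over a cyclic graph terminates; the graph and the `reachable` predicate are invented for the example:

```python
def reachable(graph, start, goal, _memo=None, _active=None):
    """Tabled-style reachability: memoise completed answers and track
    calls in progress; an in-progress call is assumed False here, which
    works for this monotone query (a full tabling engine would instead
    suspend and later resume such calls)."""
    if _memo is None:
        _memo, _active = {}, set()
    key = (start, goal)
    if key in _memo:
        return _memo[key]
    if key in _active:          # recursive call with the same arguments:
        return False            # break the cycle instead of looping forever
    _active.add(key)
    result = start == goal or any(
        reachable(graph, nxt, goal, _memo, _active)
        for nxt in graph.get(start, ()))
    _active.discard(key)
    _memo[key] = result
    return result

g = {"a": ["b"], "b": ["c", "a"], "c": []}   # contains the cycle a -> b -> a
print(reachable(g, "a", "c"))                # True, despite the cycle
```

    A naive recursive definition without the `_active` set would loop forever on the cycle a → b → a; real tabling engines (SLG resolution) handle such in-progress calls by suspension and resumption rather than by assuming failure.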

  10. Transport equations, Level Set and Eulerian mechanics. Application to fluid-structure coupling

    International Nuclear Information System (INIS)

    Maitre, E.

    2008-11-01

    My work was devoted to the numerical analysis of non-linear elliptic-parabolic equations, to the neutron transport equation and to the simulation of fabric draping. More recently I developed an Eulerian method based on a level set formulation of the immersed boundary method to deal with fluid-structure coupling problems arising in bio-mechanics. Some of the more efficient algorithms for solving the neutron transport equation make use of the splitting of the transport operator, taking into account its characteristics. In the present work we introduced a new algorithm based on this splitting and an adaptation of minimal residual methods to the infinite-dimensional case. We present the cases where the velocity space is of dimension 1 (slab geometry) and 2 (plane geometry) because the splitting is simpler in the former

  11. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    Science.gov (United States)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

    This paper presents the applications of a newly developed free surface flow model to the practical, while challenging overflow problems for weirs. Since the model takes advantage of the strengths of both the level set and volume of fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters such as pressure head distributions, velocity distributions, and water surface profiles are compared against experimental data as well as numerical results available in literature. A very good quantitative agreement has been obtained. The numerical model, thus, offers a good alternative to traditional experimental methods in the study of weir problems.

  12. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ

  13. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    Full Text Available This work is devoted to an investigation of threshold signature schemes. A systematization of threshold signature schemes was carried out, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generation and verification of threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given, which could reduce the level of counterfeit electronic documents signed by a group of users.
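    The Lagrange-interpolation construction this record mentions is the basis of Shamir-style threshold sharing, sketched below; the prime modulus and the (k, n) = (3, 5) parameters are arbitrary choices for the illustration, not taken from the paper:

```python
import random

P = 2**61 - 1   # a Mersenne prime used as the field modulus for this sketch

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it.
    Shamir's scheme: sample a random degree-(k-1) polynomial f over GF(P)
    with f(0) = secret and hand out the points (x, f(x)) for x = 1..n."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover f(0) by Lagrange interpolation over GF(P)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P          # numerator:   prod (0 - xj)
                den = den * (xi - xj) % P      # denominator: prod (xi - xj)
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

shares = share(123456789, 3, 5)
print(reconstruct(shares[:3]) == 123456789)   # True: any 3 shares suffice
```

    Fewer than k shares reveal nothing about f(0) information-theoretically, which is what makes the construction suitable as a building block for the threshold signature schemes surveyed here.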

  14. Level-set segmentation of pulmonary nodules in megavolt electronic portal images using a CT prior

    International Nuclear Information System (INIS)

    Schildkraut, J. S.; Prosser, N.; Savakis, A.; Gomez, J.; Nazareth, D.; Singh, A. K.; Malhotra, H. K.

    2010-01-01

    Purpose: Pulmonary nodules present unique problems during radiation treatment due to nodule position uncertainty that is caused by respiration. The radiation field has to be enlarged to account for nodule motion during treatment. The purpose of this work is to provide a method of locating a pulmonary nodule in a megavolt portal image that can be used to reduce the internal target volume (ITV) during radiation therapy. A reduction in the ITV would result in a decrease in radiation toxicity to healthy tissue. Methods: Eight patients with non-small cell lung cancer were used in this study. CT scans that include the pulmonary nodule were captured with a GE Healthcare LightSpeed RT 16 scanner. Megavolt portal images were acquired with a Varian Trilogy unit equipped with an AS1000 electronic portal imaging device. The nodule localization method uses grayscale morphological filtering and level-set segmentation with a prior. The treatment-time portion of the algorithm is implemented on a graphical processing unit. Results: The method was retrospectively tested on eight cases that include a total of 151 megavolt portal image frames. The method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases. The treatment phase portion of the method has a subsecond execution time that makes it suitable for near-real-time nodule localization. Conclusions: A method was developed to localize a pulmonary nodule in a megavolt portal image. The method uses the characteristics of the nodule in a prior CT scan to enhance the nodule in the portal image and to identify the nodule region by level-set segmentation. In a retrospective study, the method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases studied.

  15. Level-set dynamics and mixing efficiency of passive and active scalars in DNS and LES of turbulent mixing layers

    NARCIS (Netherlands)

    Geurts, Bernard J.; Vreman, Bert; Kuerten, Hans; Luo, Kai H.

    2001-01-01

    The mixing efficiency in a turbulent mixing layer is quantified by monitoring the surface-area of level-sets of scalar fields. The Laplace transform is applied to numerically calculate integrals over arbitrary level-sets. The analysis includes both direct and large-eddy simulation and is used to

  16. Method for accounting for γ-γ-coincidences in computer reconstruction of energy level and γ-transition schemes

    International Nuclear Information System (INIS)

    Burmistrov, V.R.

    1979-01-01

    The principle and program for the introduction of data on γ-γ-coincidences into the computer program are described. By analogy with the principle of accounting for γ-line intensities while constructing a system of levels according to the reference levels and the γ-line spectrum, the ''leaving'' γ-transitions are introduced as an artificial level parameter. This parameter is a list of γ-lines leaving the given level or the lower levels bound with it. As a result of introducing such parameters, the accounting for the data on γ-γ-coincidences amounts to comparing two tables of numbers: a table of γ-line coincidences (an experimental one) and a table of the ''leaving'' γ-transitions of every level. The program arranges the γ-lines in the preset system of levels with regard to the γ-line energies, their intensities and the data on γ-γ-coincidences, and excludes false levels from consideration. The calculation results are printed out in tables

  17. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    Science.gov (United States)

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT used for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for transform block sizes of order 4, 8, 16, and 32, and recursive computations are needed to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For this estimation, two orthogonal matrices of sizes 4×4 and 8×8 are newly designed in a butterfly structure with only addition and shift operations and are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation using a pseudo-entropy code is proposed to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of
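    The multiplier-free butterfly at the heart of such schemes can be sketched as follows. This is the generic unnormalised fast Walsh–Hadamard transform and its separable 2-D application, not the paper's specific 4×4/8×8 transform design:

```python
def fwht(v):
    """Fast Walsh-Hadamard transform of a length-2^k sequence using
    only additions and subtractions (no multiplications)."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y   # butterfly step
        h *= 2
    return v

def wht2d(block):
    """Separable 2-D WHT: transform every row, then every column."""
    rows = [fwht(r) for r in block]
    return [list(c) for c in zip(*[fwht(col) for col in zip(*rows)])]

block = [[1, 1, 1, 1]] * 4     # a flat 4x4 block
t = wht2d(block)
print(t[0][0])                 # 16: all of the energy lands in the DC term
```

    Because the unnormalised WHT satisfies a Parseval-type relation (the squared-coefficient sum equals the block energy scaled by the block size), sums of squared transform coefficients can serve as the kind of cheap transform-domain distortion proxy the abstract describes.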

  18. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and explore new image based features in combination with patient information in order to improve the performance of mass characterization. The authors' previous CAD system, which used the active contour segmentation, and morphological, textural, and spiculation features, has achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features related to the presence of microcalcifications with the mass and abruptness of the mass margin, and patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. The classification accuracy was evaluated using the area under the receiver operating characteristic curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system based on the level set segmentation and the new mammographic feature space achieved a view-based A_z value of 0.83±0.01. The improvement compared to the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, view-based and case-based A_z values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test on the publicly available digital database for screening

  19. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    Directory of Open Access Journals (Sweden)

    Zhihui Yang

    2014-01-01

    Full Text Available Quality function deployment (QFD can provide a means of translating customer requirements (CRs into engineering characteristics (ECs for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to develop the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrated example is cited to demonstrate the application and performance of the proposed approach.

  20. A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant

    Science.gov (United States)

    Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.

    2018-04-01

    A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.

  1. Natural setting of Japanese islands and geologic disposal of high-level waste

    International Nuclear Information System (INIS)

    Koide, Hitoshi

    1991-01-01

    The Japanese islands are a combination of arcuate islands along boundaries between four major plates: Eurasia, North America, Pacific and Philippine Sea plates. The interaction among the four plates formed complex geological structures which are basically patchworks of small blocks of land and sea-floor sediments piled up by the subduction of oceanic plates along the margin of the Eurasia continent. Although frequent earthquakes and volcanic eruptions clearly indicate active crustal deformation, the distribution of active faults and volcanoes is localized regionally in the Japanese islands. Crustal displacement faster than 1 mm/year takes place only in restricted regions near plate boundaries or close to major active faults. Volcanic activity is absent in the region between the volcanic front and the subduction zone. The site selection is especially important in Japan. The scenarios for the long-term performance assessment of high-level waste disposal are discussed with special reference to the geological setting of Japan. The long-term prediction of tectonic disturbance, evaluation of faults and fractures in rocks and estimation of long-term water-rock interaction are key issues in the performance assessment of the high-level waste disposal in the Japanese islands. (author)

  2. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    Science.gov (United States)

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to develop the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrated example is cited to demonstrate the application and performance of the proposed approach.

  3. Abolition of set-aside schemes, associated impacts on habitat structure and modelling of potential effects of cross-farm regulation

    DEFF Research Database (Denmark)

    Levin, G.; Jepsen, Martin Rudbeck

    2010-01-01

    proportion of set-aside land was re-cultivated. With Denmark as a case, we apply an indicator to measure the effect of set-aside land on the spatial structure of semi-natural habitats in terms of habitat size and connectivity. Furthermore, we model effects of a hypothetical spatial regulation, where set-aside land...... reduces impacts. Effects increase with increasing size of farm agglomerations. However, marginal benefits become negligible at agglomeration sizes over 36 km(2). (C) 2010 Elsevier B.V. All rights reserved...

  4. Exploring patient satisfaction levels, self-rated oral health status and associated variables among citizens covered for dental insurance through a National Social Security Scheme in India.

    Science.gov (United States)

    Singh, Abhinav; Purohit, Bharathi M

    2017-06-01

    To assess patient satisfaction, self-rated oral health and associated factors, including periodontal status and dental caries, among patients covered for dental insurance through a National Social Security Scheme in New Delhi, India. A total of 1,498 patients participated in the study. Satisfaction levels and self-rated oral-health scores were measured using a questionnaire comprising 12 closed-ended questions. Clinical data were collected using the Community Periodontal Index (CPI) and the decayed, missing and filled teeth (DMFT) index. Regression analysis was conducted to evaluate factors associated with dental caries, periodontal status and self-rated oral health. Areas of concern included poor cleanliness within the hospital, extensive delays for appointments, waiting time in hospital and inadequate interpersonal and communication skills among health-care professionals. Approximately 51% of the respondents rated their oral health as fair to poor. Younger age, no tobacco usage, good periodontal status and absence of dental caries were significantly associated with higher oral health satisfaction, with odds ratios of 3.94, 2.38, 2.58 and 2.09, respectively (P ≤ 0.001). The study indicates poor satisfaction levels with the current dental care system and a poor self-rated oral health status among the study population. Some specific areas of concern have been identified. These findings may facilitate restructuring of the existing dental services under the National Social Security Scheme towards creating a better patient care system. © 2017 FDI World Dental Federation.

  5. An accurate anisotropic adaptation method for solving the level set advection equation

    International Nuclear Information System (INIS)

    Bui, C.; Dapogny, C.; Frey, P.

    2012-01-01

    In the present paper, a mesh adaptation process for solving the advection equation on a fully unstructured computational mesh is introduced, with a particular interest in the case where it implicitly describes an evolving surface. This process mainly relies on a numerical scheme based on the method of characteristics. Although low-order, this scheme lends itself to a thorough analysis on the theoretical side. It gives rise to an anisotropic error estimate which enjoys a very natural interpretation in terms of the Hausdorff distance between the exact and approximated surfaces. The computational mesh is then adapted according to the metric supplied by this estimate. The whole process achieves good accuracy as far as the interface resolution is concerned. Some numerical features are discussed and several classical examples are presented and commented in two or three dimensions. (authors)
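The method of characteristics underlying such a scheme can be sketched in one dimension: each grid node is traced back along its characteristic and the level set function is interpolated there. The following is a minimal first-order illustration, not the authors' implementation; the grid, velocity and time step are hypothetical.

```python
import numpy as np

def advect_semi_lagrangian(phi, u, dx, dt):
    """One step of the level set advection equation
    d(phi)/dt + u d(phi)/dx = 0 by the method of characteristics:
    trace each node back to x - u*dt and interpolate (first order)."""
    x = np.arange(phi.size) * dx
    return np.interp(x - u * dt, x, phi)

# transport a signed-distance profile whose zero level set sits at x = 5
dx, dt, u = 0.1, 0.05, 1.0
x = np.arange(100) * dx
phi = x - 5.0
for _ in range(20):
    phi = advect_semi_lagrangian(phi, u, dx, dt)
# the zero level set has moved by u * 20 * dt = 1.0, to x = 6
```

Because linear interpolation is exact on linear data, the interface lands exactly on the node at x = 6 here; on curved profiles the first-order interpolation introduces the diffusion-like error that the anisotropic estimate is designed to control.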

  6. Relationships between college settings and student alcohol use before, during and after events: a multi-level study.

    Science.gov (United States)

    Paschall, Mallie J; Saltz, Robert F

    2007-11-01

    We examined how alcohol risk is distributed based on college students' drinking before, during and after they go to certain settings. Students attending 14 California public universities (N=10,152) completed a web-based or mailed survey in the fall 2003 semester, which included questions about how many drinks they consumed before, during and after the last time they went to six settings/events: fraternity or sorority party, residence hall party, campus event (e.g. football game), off-campus party, bar/restaurant and outdoor setting (referent). Multi-level analyses were conducted in hierarchical linear modeling (HLM) to examine relationships between type of setting and level of alcohol use before, during and after going to the setting, and possible age and gender differences in these relationships. Drinking episodes (N=24,207) were level 1 units, students were level 2 units and colleges were level 3 units. The highest drinking levels were observed during all settings/events except campus events, with the highest number of drinks being consumed at off-campus parties, followed by residence hall and fraternity/sorority parties. The number of drinks consumed before a fraternity/sorority party was higher than before other settings/events. Age group and gender differences in relationships between type of setting/event and 'before', 'during' and 'after' drinking levels were also observed. For example, going to a bar/restaurant (relative to an outdoor setting) was positively associated with 'during' drinks among students of legal drinking age while no relationship was observed for underage students. Findings of this study indicate differences in the extent to which college settings are associated with student drinking levels before, during and after related events, and may have implications for intervention strategies targeting different types of settings.

  7. Generalized cost-effectiveness analysis for national-level priority-setting in the health sector

    Directory of Open Access Journals (Sweden)

    Edejer Tessa

    2003-12-01

    Full Text Available Abstract Cost-effectiveness analysis (CEA) is potentially an important aid to public health decision-making but, with some notable exceptions, its use and impact at the level of individual countries is limited. A number of potential reasons may account for this, among them technical shortcomings associated with the generation of current economic evidence, political expediency, social preferences and systemic barriers to implementation. As a form of sectoral CEA, Generalized CEA sets out to overcome a number of these barriers to the appropriate use of cost-effectiveness information at the regional and country level. Its application via WHO-CHOICE provides a new economic evidence base, as well as underlying methodological developments, concerning the cost-effectiveness of a range of health interventions for leading causes of, and risk factors for, disease. The estimated sub-regional costs and effects of different interventions provided by WHO-CHOICE can readily be tailored to the specific context of individual countries, for example by adjustment to the quantity and unit prices of intervention inputs (costs) or the coverage, efficacy and adherence rates of interventions (effectiveness). The potential usefulness of this information for health policy and planning is in assessing if current intervention strategies represent an efficient use of scarce resources, and which of the potential additional interventions that are not yet implemented, or not implemented fully, should be given priority on the grounds of cost-effectiveness. Health policy-makers and programme managers can use results from WHO-CHOICE as a valuable input into the planning and prioritization of services at national level, as well as a starting point for additional analyses of the trade-off between the efficiency of interventions in producing health and their impact on other key outcomes such as reducing inequalities and improving the health of the poor.
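The kind of tailoring described, recomputing cost-effectiveness from country-specific unit prices, quantities and effectiveness, can be sketched in a few lines. All figures and intervention names below are invented for illustration; this is not the WHO-CHOICE methodology itself.

```python
def ace_ratio(unit_costs, quantities, dalys_averted):
    """Average cost-effectiveness ratio: total intervention cost
    per DALY averted (lower is more cost-effective)."""
    total_cost = sum(c * q for c, q in zip(unit_costs, quantities))
    return total_cost / dalys_averted

# two hypothetical interventions with locally adjusted unit prices
a = ace_ratio(unit_costs=[2.0, 15.0], quantities=[10000, 500],
              dalys_averted=1200)
b = ace_ratio(unit_costs=[40.0], quantities=[2000], dalys_averted=900)
ranked = sorted([("A", a), ("B", b)], key=lambda t: t[1])
# intervention A (~22.9 cost units per DALY averted) ranks ahead of B (~88.9)
```

Re-running the same calculation with local prices and coverage is exactly the "tailoring to country context" step the abstract describes, before equity considerations are layered on top.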

  8. Tradable schemes

    NARCIS (Netherlands)

    J.K. Hoogland (Jiri); C.D.D. Neumann

    2000-01-01

    textabstractIn this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing

  9. Control scheme of three-level H-bridge converter for interfacing between renewable energy resources and AC grid

    DEFF Research Database (Denmark)

    Pouresmaeil, Edris; Montesinos-Miracle, Daniel; Gomis-Bellmunt, Oriol

    2011-01-01

    This paper presents a control strategy of multilevel converters for integration of renewable energy resources into power grid. The proposed technique provides compensation for active, reactive, and harmonic current components of grid-connected loads. A three-level H-bridge converter is proposed a...

  10. Level-1 Data Driver Card of the ATLAS New Small Wheel upgrade compatible with the Phase II 1 MHz readout scheme

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00549793; The ATLAS collaboration

    2016-01-01

    The Level-1 Data Driver Card (L1DDC) will be designed for the needs of the future upgrades of the innermost stations of the ATLAS end-cap muon spectrometer. The L1DDC is a high speed aggregator board capable of communicating with a large number of front-end electronics. It collects the Level-1 data along with monitoring data and transmits them to a network interface through a single bidirectional fiber link. In addition, the L1DDC board distributes trigger, time and configuration data coming from the network interface to the front-end boards. The L1DDC is fully compatible with the Phase II upgrade where the trigger rate is expected to reach 1 MHz. This paper describes the overall scheme of the data acquisition process and especially the three different L1DDC boards that will be fabricated. Moreover the L1DDC prototype-1 is also described.

  11. Evaluation of regional-scale water level simulations using various river routing schemes within a hydrometeorological modelling framework for the preparation of the SWOT mission

    Science.gov (United States)

    Häfliger, V.; Martin, E.; Boone, A. A.; Habets, F.; David, C. H.; Garambois, P. A.; Roux, H.; Ricci, S. M.; Thévenin, A.; Berthon, L.; Biancamaria, S.

    2014-12-01

    The ability of a regional hydrometeorological model to simulate water depth is assessed in order to prepare for the SWOT (Surface Water and Ocean Topography) mission that will observe free surface water elevations for rivers having a width larger than 50/100 m. The Garonne river (56 000 km², in south-western France) has been selected owing to the availability of operational gauges, and the fact that different modeling platforms, the hydrometeorological model SAFRAN-ISBA-MODCOU and several fine-scale hydraulic models, have been extensively evaluated over two reaches of the river. Several routing schemes, ranging from the simple Muskingum method to kinematic and diffusive wave schemes with time-varying parameters, are tested using predetermined hydraulic parameters. The results show that the variable flow velocity scheme is advantageous for discharge computations when compared to the original Muskingum routing method. Additionally, comparisons between water level computations and in situ observations led to root mean square errors of 50-60 cm for the improved Muskingum method and 40-50 cm for the kinematic-diffusive wave method, in the downstream Garonne river. The error is larger than the anticipated SWOT resolution, showing the potential of the mission to improve knowledge of the continental water cycle. Discharge computations are also shown to be comparable to those obtained with high-resolution hydraulic models over two reaches. However, due to the high variability of river parameters (e.g. slope and river width), a robust averaging method is needed to compare the hydraulic model outputs and the regional model. Sensitivity tests are finally performed in order to gain a better understanding of the mechanisms which control the key hydrological processes. The results give valuable information about the linearity, Gaussianity and symmetry of the model, in order to prepare the assimilation of river heights in the model.
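The classic Muskingum routing used as the baseline above combines the current and previous inflows with the previous outflow through three coefficients derived from the reach travel time K and weighting factor X. A minimal sketch follows; the hydrograph and parameter values are hypothetical, not calibrated for the Garonne.

```python
def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph through a reach with the classic
    Muskingum method: O2 = c0*I2 + c1*I1 + c2*O1, with coefficients
    derived from travel time K and weighting factor X (0 <= X <= 0.5)."""
    denom = K - K * X + 0.5 * dt
    c0 = (-K * X + 0.5 * dt) / denom
    c1 = (K * X + 0.5 * dt) / denom
    c2 = (K - K * X - 0.5 * dt) / denom   # c0 + c1 + c2 == 1
    outflow = [inflow[0]]                 # assume an initial steady state
    for i in range(1, len(inflow)):
        outflow.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[-1])
    return outflow

# hypothetical triangular flood wave, K = 12 h, X = 0.2, dt = 6 h
inflow = [10, 30, 60, 40, 25, 15, 10, 10]
outflow = muskingum_route(inflow, K=12.0, X=0.2, dt=6.0)
# the routed peak is attenuated (stays below 60) and delayed by one step
```

The fixed K and X are precisely what the time-varying kinematic and diffusive wave schemes relax, which is why they outperform this baseline when flow velocity changes along the hydrograph.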

  12. Home advantage in high-level volleyball varies according to set number.

    Science.gov (United States)

    Marcelino, Rui; Mesquita, Isabel; Palao Andrés, José Manuel; Sampaio, Jaime

    2009-01-01

    The aim of the present study was to identify the probability of winning each Volleyball set according to game location (home, away). Archival data was obtained from 275 sets in the 2005 Men's Senior World League and 65,949 actions were analysed. Set result (win, loss), game location (home, away), set number (first, second, third, fourth and fifth) and performance indicators (serve, reception, set, attack, dig and block) were the variables considered in this study. In a first step, performance indicators were used in a logistic model of set result, by binary logistic regression analysis. After finding the adjusted logistic model, the log-odds of winning the set were analysed according to game location and set number. The results showed that winning a set is significantly related to performance indicators (Chi-square(18) = 660.97). Home teams showed an advantage at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), probably due to familiarity with the facilities and crowd effects. Different game actions explain these advantages and showed that to win the first set it is more important to take risk, through a better performance in the attack and block, while to win the final set it is important to manage the risk through a better performance on the reception. These results may suggest intra-game variation in home advantage and can be most useful to better prepare and direct the competition. Key points: Home teams always have a higher probability of winning the game than away teams. Home teams have higher performance in reception, set and attack over the total of the sets. The advantage of home teams is more pronounced at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), suggesting intra-game variation in home advantage. Analysis by sets showed that home teams have a better performance in the attack and block in the first set and in the reception in the third and fifth sets.
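The log-odds formulation behind such a binary logistic model can be shown directly: the fitted linear predictor is passed through the logistic function to give a win probability. The coefficients and indicator values below are invented for illustration and are not the fitted values from the study.

```python
import math

def set_win_prob(indicators, coefs, intercept):
    """Probability of winning a set from a binary logistic model:
    logit(p) = b0 + sum(b_i * x_i), so p = 1 / (1 + exp(-logit))."""
    log_odds = intercept + sum(b * x for b, x in zip(coefs, indicators))
    return 1.0 / (1.0 + math.exp(-log_odds))

# hypothetical coefficients for (attack %, block %, reception %)
coefs, intercept = [4.0, 2.5, 1.5], -4.0
home = set_win_prob([0.55, 0.30, 0.65], coefs, intercept)
away = set_win_prob([0.50, 0.28, 0.60], coefs, intercept)
# the stronger indicator profile yields the higher win probability
```

Comparing the log-odds for home and away profiles set by set is the same comparison the study performs after fitting the adjusted model.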

  13. On the modeling of bubble evolution and transport using coupled level-set/CFD method

    International Nuclear Information System (INIS)

    Bartlomiej Wierzbicki; Steven P Antal; Michael Z Podowski

    2005-01-01

    Full text of publication follows: The ability to predict the shape of the gas/liquid/solid interfaces is important for various multiphase flow and heat transfer applications. Specific issues of interest to nuclear reactor thermal-hydraulics include the evolution of the shape of bubbles attached to solid surfaces during nucleation, bubble surface interactions in complex geometries, etc. Additional problems, making the overall task even more complicated, are associated with the effect of material properties that may be significantly altered by the addition of minute amounts of impurities, such as surfactants or nano-particles. The present paper is concerned with the development of an innovative approach to model the time-dependent shape of gas/liquid interfaces in the presence of solid walls. The proposed approach combines a modified level-set method with an advanced CFD code, NPHASE. The coupled numerical solver can be used to simulate the evolution of gas/liquid interfaces in two-phase flows for a variety of geometries and flow conditions, from individual bubbles to free surfaces (stratified flows). The issues discussed in the full paper will include: a description of the novel aspects of the proposed level-set-based method, an overview of the NPHASE code modeling framework and a description of the coupling method between these two elements of the overall model. Particular attention will be given to the consistency and completeness of model formulation for the interfacial phenomena near the liquid/gas/solid triple line, and to the impact of the proposed numerical approach on the accuracy and consistency of predictions. The accuracy will be measured in terms of both the calculated shape of the interfaces and the gas and liquid velocity fields around the interfaces and in the entire computational domain. The results of model testing and validation will also be shown in the full paper.
The situations analyzed will include: bubbles of different sizes and varying

  14. Fluoroscopy-guided insertion of nasojejunal tubes in children - setting local diagnostic reference levels

    International Nuclear Information System (INIS)

    Vitta, Lavanya; Raghavan, Ashok; Sprigg, Alan; Morrell, Rachel

    2009-01-01

    Little is known about the radiation burden from fluoroscopy-guided insertions of nasojejunal tubes (NJTs) in children. There are no recommended or published standards of diagnostic reference levels (DRLs) available. To establish reference dose area product (DAP) levels for the fluoroscopy-guided insertion of nasojejunal tubes as a basis for setting DRLs for children. In addition, we wanted to assess our local practice and determine the success and complication rates associated with this procedure. Children who had NJT insertion procedures were identified retrospectively from the fluoroscopy database. The age of the child at the time of the procedure, DAP, screening time, outcome of the procedure, and any complications were recorded for each procedure. As the radiation dose depends on the size of the child, the children were assigned to three different age groups. The sample size, mean, median and third-quartile DAPs were calculated for each group. The third-quartile values were used to establish the DRLs. Of 186 procedures performed, 172 were successful on the first attempt. These were performed in a total of 43 children, with 60% having multiple insertions over time. The third-quartile DAPs were as follows for each age group: 0-12 months, 2.6 cGy·cm²; 1-7 years, 2.45 cGy·cm²; >8 years, 14.6 cGy·cm². High DAP readings were obtained in the 0-12 months (n = 4) and >8 years (n = 2) age groups. No immediate complications were recorded. Fluoroscopy-guided insertion of NJTs is a highly successful procedure in a selected population of children and is associated with a low complication rate. The radiation dose per procedure is relatively low. (orig.)
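Deriving a local DRL as the third-quartile DAP per age group is a one-line computation. The sample values below are hypothetical, chosen only to echo the reported magnitudes; this is not the study's dataset.

```python
import numpy as np

def local_drls(dap_by_group):
    """Local diagnostic reference level per age group: the third
    quartile (75th percentile) of the observed dose-area products."""
    return {g: float(np.percentile(d, 75)) for g, d in dap_by_group.items()}

# hypothetical DAP samples (cGy.cm^2) per age group
daps = {"0-12 months": [0.5, 1.1, 2.0, 2.6, 3.4],
        "1-7 years":   [0.8, 1.5, 2.45, 2.45, 3.0],
        ">8 years":    [4.0, 9.5, 14.6, 20.1]}
drls = local_drls(daps)
```

Using the third quartile rather than the mean keeps the reference level robust to the occasional high readings the study reports in the youngest and oldest groups.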

  15. Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods

    Science.gov (United States)

    Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.

    2017-09-01

    Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes, and, most importantly, the implementations have been fine-tuned to tackle different applications. Thus, it remains unclear what the remaining advantages and drawbacks of each method are relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore), implemented in the same code (OpenFoam). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.

  16. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...

  17. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present, there are no accurate, quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses the whole-heart ultrasound image sequence and segments the left and right atria and left and right ventricles of each frame. After the segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are further extracted, and the synchronization information of the four cavities is obtained. Because of the low SNR of ultrasound images, the boundary lines of cardiac cavities are vague, so the extraction of cardiac contours is still a challenging problem. Therefore, ASM model information is added to the traditional level set method to constrain the curve evolution process. According to the experimental results, the improved method improves the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.
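The pixel-counting step that turns per-frame segmentations into area curves can be sketched as follows. The tiny synthetic masks and the "LV"/"RV" labels are illustrative only, not the paper's data.

```python
import numpy as np

def chamber_area_curves(masks_per_frame, pixel_area=1.0):
    """Area curve per cardiac cavity across an image sequence:
    count the segmented pixels in each frame and scale by pixel area."""
    chambers = masks_per_frame[0].keys()
    return {c: [float(np.count_nonzero(frame[c]) * pixel_area)
                for frame in masks_per_frame]
            for c in chambers}

# two tiny synthetic frames with hypothetical chamber masks
f0 = {"LV": np.array([[1, 1], [1, 0]], bool), "RV": np.zeros((2, 2), bool)}
f1 = {"LV": np.array([[1, 0], [0, 0]], bool), "RV": np.ones((2, 2), bool)}
curves = chamber_area_curves([f0, f1])
# curves["LV"] == [3.0, 1.0]; out-of-phase curves would indicate asynchrony
```

Comparing the phase of the four curves (e.g. the timing of their minima) is then the basis for the synchronization estimate described above.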

  18. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for source localization methods used for the study of epilepsy in neonates applying electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most widely used model consists of three layers (scalp, skull, and intracranial). Most of the major bones of the neonates' skull are ossified at birth but can slightly move relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. Thus it is important to include the exact geometry of the fontanels and flat bones in a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone: gaps in the skull between the flat bones. Therefore, the aim of this paper is to extract the fontanels from CT images applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  19. Two-phase electro-hydrodynamic flow modeling by a conservative level set model.

    Science.gov (United States)

    Lin, Yuan

    2013-03-01

    The principles of electro-hydrodynamic (EHD) flow have been known for more than a century and have been adopted for various industrial applications, for example, fluid mixing and demixing. Analytical solutions of such EHD flow only exist in a limited number of scenarios, for example, predicting a small deformation of a single droplet in a uniform electric field. Numerical modeling of such phenomena can provide significant insights into EHD multiphase flows. During the last decade, many numerical results have been reported, providing novel and useful tools for studying multiphase EHD flow. Based on a conservative level set method, the proposed model is able to simulate large deformations of a droplet by a steady electric field, which is beyond the region of theoretical prediction. The model is validated for both leaky dielectrics and perfect dielectrics, and is found to be in excellent agreement with existing analytical solutions and numerical studies in the literature. Furthermore, simulations of the deformation of a water droplet in decyl alcohol in a steady electric field match better with published experimental data than the theoretical prediction for large deformations. Therefore the proposed model can serve as a practical and accurate tool for simulating two-phase EHD flow. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

    Full Text Available To describe accurately the flow characteristics of fracture-scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. Described by the Navier–Stokes equations, the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of invasion pressure versus water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water retained in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effects of the invasion pressure of crude oil, the roughness of the fracture wall, and the wettability of the wall on two-phase flow in a rough-walled fracture are evaluated.

  1. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.

  2. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming; Dai, Zhenxue; Zachara, John; Chen, Xingyuan

    2017-03-01

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.
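The level set transformation between discrete facies indicators and continuous variables can be sketched as a thresholding map and a signed-distance inverse. This is an illustrative construction under the usual level set convention, not the authors' exact shape parameterization.

```python
import numpy as np

def facies_from_level_set(phi, threshold=0.0):
    """Continuous -> discrete: facies 1 wherever phi exceeds the threshold."""
    return (phi > threshold).astype(int)

def level_set_from_facies(indicator, dx=1.0):
    """Discrete -> continuous: signed distance to the facies boundary,
    positive inside facies 1 (brute force; fine for small grids)."""
    pts = [np.argwhere(indicator == k).astype(float) for k in (0, 1)]
    phi = np.empty(indicator.shape)
    for p in np.ndindex(*indicator.shape):
        other = pts[1 - indicator[p]]        # cells of the opposite facies
        d = np.min(np.linalg.norm(other - np.array(p), axis=1)) * dx
        phi[p] = d if indicator[p] == 1 else -d
    return phi

facies = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
phi = level_set_from_facies(facies)
# thresholding the continuous field recovers the discrete indicators
```

The point of the transformation is that ensemble data assimilation can update the continuous field phi smoothly, while thresholding it back always yields a valid discrete facies map.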

  3. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process and prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.

  4. Poverty identification for a pro-poor health insurance scheme in Tanzania: reliability and multi-level stakeholder perceptions.

    Science.gov (United States)

    Kuwawenaruwa, August; Baraka, Jitihada; Ramsey, Kate; Manzi, Fatuma; Bellows, Ben; Borghi, Josephine

    2015-12-01

    Many low income countries have policies to exempt the poor from user charges in public facilities. Reliably identifying the poor is a challenge when implementing such policies. In Tanzania, a scorecard system was established in 2011, within a programme providing free national health insurance fund (NHIF) cards, to identify poor pregnant women and their families, based on eight components. Using a series of reliability tests on a 2012 dataset of 2,621 households in two districts, this study compares household poverty levels using the scorecard, a wealth index, and monthly consumption expenditures. We compared the distributions of the three wealth measures, and the consistency of household poverty classification using cross-tabulations and the Kappa statistic. We measured errors of inclusion and exclusion of the scorecard relative to the other methods. We also gathered perceptions of the scorecard criteria through qualitative interviews with stakeholders at multiple levels of the health system. The distribution of the scorecard was less skewed than the other wealth measures and not truncated, but demonstrated clumping. There was a higher level of agreement between the scorecard and the wealth index than with consumption expenditure. The scorecard identified a similar number of poor households as the "basic needs" poverty line based on monthly consumption expenditure, with only 45% errors of inclusion. However, it failed to pick up half of those living below the "basic needs" poverty line as being poor. Stakeholders supported the inclusion of water sources, income, food security and disability measures but had reservations about other items on the scorecard. In choosing poverty identification strategies for programmes seeking to enhance health equity, it is necessary to balance community acceptability and local relevance against the need for such a strategy. It is important to ensure the strategy is efficient and less costly than alternatives in order to effectively reduce

  5. Scheme for generation of fully-coherent, TW power level hard X-ray pulses from baseline undulators at the European X-ray FEL

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2010-07-01

    The most promising way to increase the output power of an X-ray FEL (XFEL) is by tapering the magnetic field of the undulator. Also, a significant increase in power is achievable by starting the FEL process from a monochromatic seed rather than from noise. This report proposes to make use of a cascade self-seeding scheme with wake monochromators in a tunable-gap baseline undulator at the European XFEL to create a source capable of delivering coherent radiation of unprecedented characteristics at hard X-ray wavelengths. Compared with SASE X-ray FEL parameters, the radiation from the new source has three truly unique aspects: complete longitudinal and transverse coherence, and a peak brightness three orders of magnitude higher than what is presently available at LCLS. Additionally, the new source will generate hard X-ray beams at extraordinary peak (TW) and average (kW) power levels. The proposed source can thus revolutionize fields like single biomolecule imaging, inelastic scattering and nuclear resonant scattering. The self-seeding scheme with the wake monochromator is extremely compact and can be implemented at little cost and in little time. The upgrade proposed in this paper could take place during the commissioning stage of the European XFEL, opening a vast new range of applications from the very beginning of operations. We present a feasibility study and exemplifications for the SASE2 line of the European XFEL. (orig.)

  6. Interband cascade laser-based ppbv-level mid-infrared methane detection using two digital lock-in amplifier schemes

    Science.gov (United States)

    Song, Fang; Zheng, Chuantao; Yu, Di; Zhou, Yanwen; Yan, Wanhong; Ye, Weilin; Zhang, Yu; Wang, Yiding; Tittel, Frank K.

    2018-03-01

    A parts-per-billion in volume (ppbv) level mid-infrared methane (CH4) sensor system was demonstrated using second-harmonic wavelength modulation spectroscopy (2f-WMS). A 3291 nm interband cascade laser (ICL) and a multi-pass gas cell (MPGC) with a 16 m optical path length were adopted in the reported sensor system. Two digital lock-in amplifier (DLIA) schemes, a digital signal processor (DSP)-based DLIA and a LabVIEW-based DLIA, were used for harmonic signal extraction. A limit of detection (LoD) of 13.07 ppbv with an averaging time of 2 s was achieved using the DSP-based DLIA, and a LoD of 5.84 ppbv was obtained using the LabVIEW-based DLIA with the same averaging time. The sensor's rise time (0→2 parts-per-million in volume, ppmv) and fall time (2→0 ppmv) were also measured. Outdoor atmospheric CH4 concentration measurements were carried out to evaluate the sensor performance using the two DLIA schemes.
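A digital lock-in amplifier of the kind described extracts the 2f harmonic by demodulating the detector signal against quadrature references at twice the modulation frequency and low-pass filtering. The minimal sketch below (signal model, sampling rate, and time constant are illustrative assumptions, not the DSP or LabVIEW implementations used in the paper) recovers the amplitude of a 2f component buried in noise.

```python
import numpy as np

def lockin_2f(signal, fs, f_mod, tau=0.05):
    """Minimal digital lock-in amplifier extracting the second-harmonic
    (2f) amplitude used in wavelength-modulation spectroscopy.

    Multiplies the signal by in-phase/quadrature references at 2*f_mod
    and low-pass filters by moving-average (window length tau seconds).
    """
    n = len(signal)
    t = np.arange(n) / fs
    ref_i = np.cos(2 * np.pi * 2 * f_mod * t)
    ref_q = np.sin(2 * np.pi * 2 * f_mod * t)
    win = int(tau * fs)
    x = np.convolve(signal * ref_i, np.ones(win) / win, mode='valid')
    y = np.convolve(signal * ref_q, np.ones(win) / win, mode='valid')
    return 2.0 * np.sqrt(x ** 2 + y ** 2)  # 2f amplitude envelope

# Synthetic detector signal: a 2f component of amplitude 0.3 buried in noise
fs, f_mod = 50_000.0, 1_000.0
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)
sig = 0.3 * np.cos(2 * np.pi * 2 * f_mod * t) + 0.1 * rng.standard_normal(t.size)
amp = lockin_2f(sig, fs, f_mod)
```

The averaging window plays the role of the lock-in time constant: a longer window suppresses more noise at the cost of a slower response, the same trade-off that sets the 2 s averaging time quoted above.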

  7. Endurance Enhancement and High Speed Set/Reset of 50 nm Generation HfO2 Based Resistive Random Access Memory Cell by Intelligent Set/Reset Pulse Shape Optimization and Verify Scheme

    Science.gov (United States)

    Higuchi, Kazuhide; Miyaji, Kousuke; Johguchi, Koh; Takeuchi, Ken

    2012-02-01

    This paper proposes a verify-programming method for the resistive random access memory (ReRAM) cell which achieves a 50-times higher endurance and a fast set and reset compared with the conventional method. The proposed verify-programming method uses the incremental pulse width with turnback (IPWWT) for the reset and the incremental voltage with turnback (IVWT) for the set. With the combination of IPWWT reset and IVWT set, the endurance increases from 48×10³ to 2444×10³ cycles. Furthermore, the measured data retention time after 20×10³ set/reset cycles is estimated to be 10 years. Additionally, a filament-based physical model is proposed to explain the set/reset failure mechanism under various set/reset pulse shapes. The reset pulse width and set voltage correspond to the width and length of the conductive filament, respectively. Consequently, since the proposed IPWWT and IVWT recover set and reset failures of ReRAM cells, endurance is improved.
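The verify-programming idea can be sketched as a loop: apply a pulse, read the cell, and escalate the pulse (width for the IPWWT reset, voltage for the IVWT set) until the verify target is met; the "turnback" means each new cycle restarts from the weakest pulse so cells are not over-stressed. The cell model, pulse widths, and resistance targets below are toy values for illustration only, not device data.

```python
def verify_reset_ipwwt(apply_reset, read_resistance, target_hrs,
                       widths_ns=(50, 100, 200, 400, 800)):
    """Incremental Pulse Width With Turnback (IPWWT) reset-verify sketch.

    Apply reset pulses of increasing width until the cell reads above the
    high-resistance-state (HRS) target. The 'turnback' is implicit: every
    call restarts from the shortest width, so a cell that passed early in
    one cycle is not over-reset in the next.
    """
    for width in widths_ns:
        apply_reset(width)
        if read_resistance() >= target_hrs:
            return width  # verify passed at this pulse width
    return None  # verify failed even at the widest pulse

# Mock ReRAM cell: resistance grows with accumulated reset pulse width
# (a toy stand-in for rupture of the conductive filament).
class MockCell:
    def __init__(self):
        self.r = 1e3  # start in the low-resistance state (ohms)
    def reset(self, width_ns):
        self.r += 40 * width_ns
    def read(self):
        return self.r

cell = MockCell()
passed_width = verify_reset_ipwwt(cell.reset, cell.read, target_hrs=20e3)
```

An IVWT set loop would have the same shape with an increasing voltage ramp and a low-resistance-state target instead.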

  8. The importance of information on relatives for the prediction of genomic breeding values and the implications for the makeup of reference data sets in livestock breeding schemes.

    Science.gov (United States)

    Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J

    2012-02-09

    The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals on the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set still showed substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
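The relationship information that gBLUP exploits enters through the genomic relationship matrix (GRM). A minimal sketch of VanRaden's first method is shown below, with simulated genotypes standing in for real marker data; the matrix's off-diagonal entries are the realized genomic relationships that link selection candidates to the reference set.

```python
import numpy as np

def grm_vanraden(genotypes):
    """Genomic relationship matrix G = ZZ' / (2 * sum p(1-p)),
    the standard input to gBLUP (VanRaden's method 1).

    'genotypes' is an (animals x markers) array of 0/1/2 allele counts.
    Z centres each marker by twice its allele frequency, so G captures
    realized relationships even between animals with no recorded pedigree.
    """
    p = genotypes.mean(axis=0) / 2.0            # estimated allele frequencies
    z = genotypes - 2.0 * p                     # centre by 2p per marker
    denom = 2.0 * np.sum(p * (1.0 - p))
    return z @ z.T / denom

rng = np.random.default_rng(1)
markers = rng.integers(0, 3, size=(6, 500))     # 6 animals, 500 markers
g = grm_vanraden(markers)
```

In gBLUP this G simply replaces the pedigree-based numerator relationship matrix A in the mixed-model equations, which is why accuracy degrades gracefully, rather than collapsing to zero, for animals unrelated by pedigree.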

  9. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting.

    Science.gov (United States)

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-02-28

    opportunities based on problem-solving activities and offer timely feedback/guidance to students. Yet in our study, this teaching style had its fair share of challenges, which were largely dependent on the use and management of technology. Despite these challenges, the Flipped Classroom proved to be a novel and effective teaching approach at the graduate level setting.

  10. DESIRE FOR LEVELS. Background study for the policy document "Setting Environmental Quality Standards for Water and Soil"

    NARCIS (Netherlands)

    van de Meent D; Aldenberg T; Canton JH; van Gestel CAM; Slooff W

    1990-01-01

    The report provides scientific support for setting environmental quality objectives for water, sediment and soil. Quality criteria are not set in this report; only options for decisions are given. The report is restricted to the derivation of the 'maximally acceptable risk' levels (MAR).

  11. A Novel Scheme to Minimize Hop Count for GAF in Wireless Sensor Networks: Two-Level GAF

    Directory of Open Access Journals (Sweden)

    Vaibhav Soni

    2015-01-01

    Full Text Available In wireless sensor networks, geographic adaptive fidelity (GAF) is one of the most popular energy-aware routing protocols. It conserves energy by identifying equivalence between sensors from a routing perspective and then turning off unnecessary sensors, while maintaining the connectivity of the network. Nevertheless, traditional GAF still cannot reach optimum energy usage since it requires more hops to transmit data packets to the sink. As a result, it also leads to higher packet delay. In this paper, we propose a modified version of GAF to minimize hop count for data routing, called two-level GAF (T-GAF). Furthermore, we use a generalized version of GAF called Diagonal-GAF (DGAF), where two diagonally adjacent grids can also communicate directly. It has the advantage of lower coordinator-election overhead, based on the residual energy of sensors. Analysis and simulation results show significant improvements of the proposed work compared to traditional GAF in terms of total hop count, energy consumption, total distance covered by the data packet before reaching the sink, and packet delay. As a result, compared to traditional GAF, it requires 40% to 47% fewer hops and consumes 27% to 35% less energy to extend the network lifetime.
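The hop-count saving follows directly from the grid geometry: with only edge-adjacent grid communication, hops scale with the Manhattan distance in grid units, while allowing diagonal neighbours (as in DGAF) reduces this to the Chebyshev distance. The coordinates and grid size below are illustrative, not the paper's simulation setup.

```python
def gaf_hops(src, dst, grid):
    """Hop count under traditional GAF: packets move only between
    edge-adjacent virtual grids, so hops ~ Manhattan distance in grids."""
    gx = abs(src[0] // grid - dst[0] // grid)
    gy = abs(src[1] // grid - dst[1] // grid)
    return gx + gy

def dgaf_hops(src, dst, grid):
    """Hop count when diagonally adjacent grids can also communicate
    directly (the Diagonal-GAF generalisation): hops ~ Chebyshev distance."""
    gx = abs(src[0] // grid - dst[0] // grid)
    gy = abs(src[1] // grid - dst[1] // grid)
    return max(gx, gy)

# A sensor at (5, 5) reporting to a sink at (95, 75), with 10 m grids:
src, sink, grid = (5, 5), (95, 75), 10
```

For this geometry traditional GAF needs 16 hops and the diagonal variant 9, a saving of about 44%, consistent with the 40% to 47% range reported above.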

  12. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  13. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm
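The volume overlap metrics used in the validation above are straightforward to compute from binary masks. A sketch with synthetic masks follows (the shifted-cube geometry is illustrative, not tooth data):

```python
import numpy as np

def dice_and_volume_difference(seg, ref, voxel_volume_mm3=1.0):
    """Volume overlap metrics of the kind used to validate the tooth
    segmentation: Dice similarity coefficient (DSC, %) and volume
    difference (VD, mm^3).

    'seg' and 'ref' are boolean arrays (automatic vs. reference masks).
    """
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    dsc = 100.0 * 2.0 * inter / (seg.sum() + ref.sum())
    vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_volume_mm3
    return dsc, vd

# Two overlapping cubes as stand-ins for automatic and reference masks
vol = np.zeros((20, 20, 20), dtype=bool)
ref = vol.copy(); ref[5:15, 5:15, 5:15] = True      # 1000 voxels
seg = vol.copy(); seg[6:16, 5:15, 5:15] = True      # shifted by one slice
dsc, vd = dice_and_volume_difference(seg, ref)
```

Here the one-slice shift leaves 900 of 1000 voxels overlapping, giving a DSC of 90% and a VD of 0 mm³, which shows why volume difference alone (insensitive to misalignment) is reported alongside Dice.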

  14. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    Science.gov (United States)

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved with the use of a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
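The first stage's probabilistic organ map can be illustrated with the simplest label-fusion rule: voxel-wise averaging of the warped atlas masks, then thresholding to seed the level-set local search. STAPLE, used in the paper, additionally weights each atlas by an estimated reliability, so the sketch below is a deliberate simplification.

```python
import numpy as np

def fuse_probability_map(atlas_segmentations):
    """Fuse warped atlas segmentations into a per-voxel organ probability
    map. STAPLE estimates each atlas's reliability iteratively; plain
    voxel-wise averaging (shown here) is the equal-reliability special case.
    """
    stack = np.stack([a.astype(float) for a in atlas_segmentations])
    return stack.mean(axis=0)

def threshold_map(prob_map, threshold=0.5):
    """Initial organ mask handed to the level-set local search stage."""
    return prob_map >= threshold

# Three toy "warped atlas" masks that mostly agree on a 1D organ extent
a1 = np.array([0, 1, 1, 1, 1, 0, 0])
a2 = np.array([0, 0, 1, 1, 1, 1, 0])
a3 = np.array([0, 1, 1, 1, 0, 0, 0])
prob = fuse_probability_map([a1, a2, a3])
mask = threshold_map(prob)
```

Voxels where the atlases disagree get intermediate probabilities, and it is exactly those uncertain border regions that the second, level-set-based stage refines against the patient's own image.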

  15. Sensitivity Analysis of features in tolerancing based on constraint function level sets

    International Nuclear Information System (INIS)

    Ziegler, Philipp; Wartzack, Sandro

    2015-01-01

    Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis methods are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In the adoption of Sensitivity Analysis methods, there are several challenges, which arise from the specific framework in tolerancing. This paper presents an approach to adopt Sensitivity Analysis methods on current tolerance simulations with an interface module based on level sets of constraint functions for parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use

  16. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    Full Text Available An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  17. Joint inversion of geophysical data using petrophysical clustering and facies deformation with the level set technique

    Science.gov (United States)

    Revil, A.

    2015-12-01

    Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies, determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation, preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inference about multiple geophysical tomograms based on their corresponding geophysical data misfits.
The method is applied to a second synthetic case showing that we can recover the heterogeneities inside the facies, the mean values for the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of

  18. Impact and Suggestion of Column-to-Surface Vertical Correction Scheme on the Relationship between Satellite AOD and Ground-Level PM2.5 in China

    Directory of Open Access Journals (Sweden)

    Wei Gong

    2017-10-01

    Full Text Available As China is suffering from severe fine particle pollution from dense industrialization and urbanization, satellite-derived aerosol optical depth (AOD) has been widely used for estimating particulate matter with an aerodynamic diameter less than 2.5 μm (PM2.5). However, the correlation between satellite AOD and ground-level PM2.5 could be influenced by aerosol vertical distribution, as satellite AOD represents the entire column, rather than just the ground-level concentration. Here, a new column-to-surface vertical correction scheme is proposed to improve separation of the near-surface and elevated aerosol layers, based on the ratio of the integrated extinction coefficient within 200–500 m above ground level (AGL), using the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) aerosol profile products. There are distinct differences in climate, meteorology, terrain, and aerosol transmission throughout China, so comparisons between vertical correction via the CALIOP ratio and via planetary boundary layer height (PBLH) were conducted in different regions from 2014 to 2015, combined with the original Pearson coefficient between satellite AOD and ground-level PM2.5 for reference. Furthermore, the best vertical correction scheme was suggested for different regions to achieve optimal correlation with PM2.5, based on the analysis and discussion of regional and seasonal characteristics of aerosol vertical distribution. According to our results and discussions, vertical correction via PBLH is recommended in northwestern China, where the PBLH varies dramatically, stretching or compressing the surface aerosol layer; vertical correction via the CALIOP ratio is recommended in northeastern China, southwestern China, Central China (excluding summer), the North China Plain (excluding Beijing), and the spring in the southeast coast, areas that are susceptible to exogenous aerosols and exhibit an elevated aerosol layer; and original AOD without vertical correction is
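The column-to-surface ratio itself is a simple integral of the extinction profile: the fraction of the column extinction found between 200 and 500 m AGL. The sketch below uses a synthetic exponential profile; the profile shape, scale height, and AOD value are illustrative assumptions, not CALIOP data.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (kept local to avoid NumPy naming churn)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def surface_ratio(heights_m, extinction, low=200.0, high=500.0):
    """Column-to-surface correction factor: fraction of the column
    extinction found between 'low' and 'high' metres above ground level,
    computed from a lidar-style extinction profile. Scaling satellite AOD
    by this ratio emphasises the near-surface aerosol that actually
    drives ground-level PM2.5.
    """
    column = trapezoid(extinction, heights_m)
    sel = (heights_m >= low) & (heights_m <= high)
    layer = trapezoid(extinction[sel], heights_m[sel])
    return layer / column

# Exponential aerosol profile with a 1 km scale height, 0-5 km, 50 m steps
z = np.arange(0.0, 5000.0 + 50.0, 50.0)
ext = np.exp(-z / 1000.0)
ratio = surface_ratio(z, ext)
corrected_aod = 0.6 * ratio   # hypothetical column AOD of 0.6
```

For this profile the 200–500 m layer carries roughly a fifth of the column extinction, so an elevated aerosol layer (extinction concentrated aloft) would shrink the ratio and correctly down-weight the column AOD as a PM2.5 proxy.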

  19. Operational Assessment of ICDS Scheme at Grass Root Level in a Rural Area of Eastern India: Time to Introspect

    Science.gov (United States)

    Sahoo, Jyotiranjan; Mahajan, Preetam B; Bhatia, Vikas; Patra, Abhinash K; Hembram, Dilip Kumar

    2016-01-01

    Introduction Integrated Child Development Service (ICDS), a flagship program of the Government of India (GoI) for early childhood development, has not delivered the desired results since its inception four decades ago. This could be due to infrastructural problems, lack of awareness and proper utilization by the local people, inadequate program monitoring and corruption in food supplies, etc. This study is an audit of 36 Anganwadi centres in Khordha district, Odisha, to evaluate the implementation of the ICDS. Aim To assess operational aspects of the ICDS program in a rural area of Odisha, in Eastern India. Materials and Methods A total of 36 out of 50 Anganwadi Centres (AWCs) were included in the study. We interviewed the Anganwadi Workers (AWW) and carried out observations on the AWCs using a checklist. We gathered information under three domains: manpower resources, material resources and functional aspects of the AWC. Results Most of the AWCs were adequately staffed. Most of the AWWs were well educated. However, more than 85% of the AWCs did not have a designated building for daily functioning, which resulted in issues related to implementation of the program. Water, toilet and electricity facilities were almost non-existent. Indoor air pollution posed a serious threat to the health of the children. Lack of play materials; lack of health assessment tools for promoting and monitoring physical and mental development; and multiple de-motivating factors within the work environment eventually translated into lack of faith among the beneficiaries in the rural community. Conclusion Inadequate infrastructure and logistic supply were the most prominent issues found, which resulted in poor implementation of the ICDS program. Strengthening of grass-root-level facilities based on needs assessment, effective monitoring and supervision will definitely help in revamping the ICDS program in rural areas. PMID:28208890

  20. Improving district level health planning and priority setting in Tanzania through implementing accountability for reasonableness framework

    DEFF Research Database (Denmark)

    Maluka, Stephen; Kamuzora, Peter; Sebastián, Miguel San

    2010-01-01

    In 2006, researchers and decision-makers launched a five-year project - Response to Accountable Priority Setting for Trust in Health Systems (REACT) - to improve planning and priority-setting through implementing the Accountability for Reasonableness framework in Mbarali District, Tanzania...

  1. MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D; Lyu, Q; Ruan, D; O’Connor, D; Low, D; Sheng, K [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem, and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes, and adjacent aperture shapes were penalized to control the MLC motion range. An alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM), lung (LNG), and 2 head and neck cases, one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single arc VMAT approach fully utilizes the digital linacs' capability in dose rate and gantry rotation speed modulation. Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.
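The two ingredients of the objective, a level-set description of aperture shapes and an L2 fidelity term plus anisotropic total variation, can be written compactly. The sketch below is a simplified single-beam illustration with made-up names and a toy dose matrix, not the authors' solver (which alternates between fluence and aperture updates over all 180 beams).

```python
import numpy as np

def aperture_from_level_set(phi):
    """MLC aperture as the zero superlevel set of phi: open where phi > 0."""
    return (phi > 0).astype(float)

def anisotropic_tv(fluence):
    """Anisotropic total variation |Dx f|_1 + |Dy f|_1, the regulariser
    paired with the L2 dose-fidelity term in the gVMAT formulation."""
    dx = np.abs(np.diff(fluence, axis=0)).sum()
    dy = np.abs(np.diff(fluence, axis=1)).sum()
    return dx + dy

def gvmat_objective(fluence, dose_matrix, prescribed, weight_tv):
    """0.5*||A f - d||_2^2 + w * TV(f): a simplified per-beam version of
    the global objective; names and weighting are illustrative only."""
    residual = dose_matrix @ fluence.ravel() - prescribed
    return 0.5 * float(residual @ residual) + weight_tv * float(anisotropic_tv(fluence))

# Toy 2x2 aperture from a level-set field, identity dose matrix
phi = np.array([[-1.0, 0.6], [0.4, -0.2]])
aperture = aperture_from_level_set(phi)
obj = gvmat_objective(aperture, np.eye(4), np.ones(4), weight_tv=0.1)
```

Thresholding the level-set field guarantees the optimizer only ever produces binary, directly deliverable apertures, which is what removes the leaf-sequencing step that greedy VMAT heuristics need.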

  2. A finite element/level set model of polyurethane foam expansion and polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Long, Kevin Nicholas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roberts, Christine Cardinal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Celina, Mathias C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brunini, Victor [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Soehnel, Melissa Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Noble, David R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tinsley, James [Honeywell Federal Manufacturing & Technologies, Kansas City, MO (United States); Mondy, Lisa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Polyurethane foams are used widely for encapsulation and structural purposes because they are inexpensive, straightforward to process, amenable to a wide range of density variations (1 lb/ft³ to 50 lb/ft³), and able to fill complex molds quickly and effectively. Computational models of the filling and curing process are needed to reduce defects such as voids, out-of-specification density, density gradients, foam decomposition from high temperatures due to exotherms, and incomplete filling. This paper details the development of a computational fluid dynamics model of a moderate density PMDI structural foam, PMDI-10. PMDI is an isocyanate-based polyurethane foam, which is chemically blown with water. The polyol reacts with isocyanate to produce the polymer. PMDI-10 is catalyzed, giving it a short pot life: it foams and polymerizes to a solid within 5 minutes during normal processing. To achieve a higher density, the foam is over-packed to twice or more of its free rise density of 10 lb/ft³. The goal for modeling is to represent the expansion, filling of molds, and the polymerization of the foam. This will be used to reduce defects, optimize the mold design, troubleshoot the process, and predict the final foam properties. A homogenized continuum model of foaming and curing was developed based on reaction kinetics, documented in a recent paper; it uses a simplified mathematical formalism that decouples these two reactions. The chemo-rheology of PMDI is measured experimentally and fit to a generalized-Newtonian viscosity model that is dependent on the extent of cure, gas fraction, and temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations are solved via a stabilized finite element method. The equations are combined with a level set method to determine the location of the foam-gas interface as it evolves to fill the mold. 
Understanding the thermal history and loads on the foam due to exothermicity and oven
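
    As a concrete illustration of the chemo-rheology described above, the sketch below composes an Arrhenius temperature factor, a Castro-Macosko-style cure term, and a gas-fraction term into a viscosity function. All parameter names and values are illustrative assumptions, not the fitted PMDI-10 model.

```python
import numpy as np

def foam_viscosity(T_K, cure, gas_frac,
                   eta0=1.0, E_over_R=5000.0, T_ref=300.0,
                   alpha_gel=0.8, a=1.5, b=2.5):
    """Hypothetical generalized-Newtonian viscosity for a curing foam.

    Composes the three dependencies named in the text: temperature
    (Arrhenius), extent of cure (divergence near the gel point, in the
    Castro-Macosko form), and gas volume fraction. Every constant here
    is an illustrative guess, not a fitted value.
    """
    arrhenius = eta0 * np.exp(E_over_R * (1.0 / T_K - 1.0 / T_ref))
    # Viscosity rises sharply as the extent of cure approaches the gel point.
    cure_term = (alpha_gel / np.maximum(alpha_gel - cure, 1e-9)) ** (a + b * cure)
    # Simple mixture factor: more gas, higher effective resistance to flow.
    gas_term = 1.0 / (1.0 - np.clip(gas_frac, 0.0, 0.99))
    return arrhenius * cure_term * gas_term
```

    Holding temperature and gas fraction fixed, the viscosity grows without bound as `cure` approaches `alpha_gel`, which is the qualitative behavior a filling simulation needs in order to arrest flow as the foam gels.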

  3. Strengthening fairness, transparency and accountability in health care priority setting at district level in Tanzania

    Directory of Open Access Journals (Sweden)

    Stephen Maluka

    2011-11-01

    Full Text Available Health care systems are faced with the challenge of resource scarcity and have insufficient resources to respond to all health problems and target groups simultaneously. Hence, priority setting is an inevitable aspect of every health system. However, priority setting is complex and difficult because the process is frequently influenced by political, institutional and managerial factors that are not considered by conventional priority-setting tools. In a five-year EU-supported project, which started in 2006, ways of strengthening fairness and accountability in priority setting in district health management were studied. This review is based on a PhD thesis that aimed to analyse health care organisation and management systems, and explore the potential and challenges of implementing the Accountability for Reasonableness (A4R) approach to priority setting in Tanzania. A qualitative case study in Mbarali district formed the basis for exploring the sociopolitical and institutional contexts within which health care decision making takes place. The study also explored how the A4R intervention was shaped, enabled and constrained by these contexts. Key informant interviews were conducted. Relevant documents were also gathered and group priority-setting processes in the district were observed. The study revealed that, despite the obvious national rhetoric on decentralisation, actual practice in the district involved little community participation. The assumption that devolution to local government promotes transparency, accountability and community participation is far from reality. The study also found that while the A4R approach was perceived to be helpful in strengthening transparency, accountability and stakeholder engagement, integrating the innovation into the district health system was challenging. This study underscores the idea that greater involvement and accountability among local actors may increase the legitimacy and fairness of priority-setting.

  4. Adaptive protection coordination scheme for distribution network with distributed generation using ABC

    Directory of Open Access Journals (Sweden)

    A.M. Ibrahim

    2016-09-01

    Full Text Available This paper presents an adaptive protection coordination scheme for optimal coordination of directional overcurrent relays (DOCRs) in interconnected power networks with distributed generation (DG); the coordination technique used is the Artificial Bee Colony (ABC) algorithm. The scheme adapts to system changes: new relay settings are obtained as the generation level or system topology changes. The developed adaptive scheme is applied to the IEEE 30-bus test system for both single- and multiple-DG cases, and the results are shown and discussed.
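
    The coordination problem being optimized can be made concrete with the standard inverse-time overcurrent characteristic; the sketch below checks a coordination-time-interval (CTI) constraint between a primary/backup relay pair. The IEC standard-inverse constants are standard, but the relay data and CTI value are illustrative assumptions.

```python
def relay_time(tds, fault_current, pickup_current, A=0.14, p=0.02):
    """Operating time of a DOCR with the IEC standard-inverse curve:
    t = TDS * A / (M**p - 1), where M is the fault-to-pickup current ratio."""
    M = fault_current / pickup_current
    return tds * A / (M ** p - 1.0)

def coordinated(tds_primary, tds_backup, fault_current,
                pickup_primary, pickup_backup, cti=0.3):
    """True if the backup relay is slower than the primary relay by at
    least the coordination time interval (CTI) for the given fault."""
    t_p = relay_time(tds_primary, fault_current, pickup_primary)
    t_b = relay_time(tds_backup, fault_current, pickup_backup)
    return t_b - t_p >= cti
```

    An ABC run (or any metaheuristic) would search over the TDS and pickup settings to minimize total operating time subject to constraints of this form; when DG changes the fault currents, the adaptive scheme re-solves with updated `fault_current` values.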

  5. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2012-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  6. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2013-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  7. SET overexpression in HEK293 cells regulates mitochondrial uncoupling proteins levels within a mitochondrial fission/reduced autophagic flux scenario

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Luciana O.; Goto, Renata N. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Neto, Marinaldo P.C. [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Sousa, Lucas O. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Curti, Carlos [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Leopoldino, Andréia M., E-mail: andreiaml@usp.br [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil)

    2015-03-06

    We hypothesized that SET, a protein accumulated in some cancer types and Alzheimer disease, is involved in cell death through mitochondrial mechanisms. We addressed the mRNA and protein levels of the mitochondrial uncoupling proteins UCP1, UCP2 and UCP3 (S and L isoforms) by quantitative real-time PCR and immunofluorescence as well as other mitochondrial involvements, in HEK293 cells overexpressing the SET protein (HEK293/SET), either in the presence or absence of oxidative stress induced by the pro-oxidant t-butyl hydroperoxide (t-BHP). SET overexpression in HEK293 cells decreased UCP1 and increased UCP2 and UCP3 (S/L) mRNA and protein levels, whilst also preventing lipid peroxidation and decreasing the content of cellular ATP. SET overexpression also (i) decreased the area of mitochondria and increased the number of organelles and lysosomes, (ii) increased mitochondrial fission, as demonstrated by increased FIS1 mRNA and FIS-1 protein levels, an apparent accumulation of DRP-1 protein, and an increase in the VDAC protein level, and (iii) reduced autophagic flux, as demonstrated by a decrease in LC3B lipidation (LC3B-II) in the presence of chloroquine. Therefore, SET overexpression in HEK293 cells promotes mitochondrial fission and reduces autophagic flux in apparent association with up-regulation of UCP2 and UCP3; this implies a potential involvement in cellular processes that are deregulated such as in Alzheimer's disease and cancer. - Highlights: • SET, UCPs and autophagy prevention are correlated. • SET action has mitochondrial involvement. • UCP2/3 may reduce ROS and prevent autophagy. • SET protects cell from ROS via UCP2/3.

  8. Alternative health insurance schemes

    DEFF Research Database (Denmark)

    Keiding, Hans; Hansen, Bodil O.

    2002-01-01

    In this paper, we present a simple model of health insurance with asymmetric information, in which we compare two alternative ways of organizing the insurance market: either as a competitive insurance market, where some risks remain uninsured, or as a compulsory scheme, where, however, the level...... competitive insurance; this situation turns out to be at least as good as either of the alternatives...

  9. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate the interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, the two functions in general overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
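
    The core level-set mechanics referenced above can be sketched in a few lines: a signed-distance function is advected so that its zero contour tracks the interface. This is a minimal first-order upwind sketch on a periodic grid, not the paper's multi-phase algorithm; grid size, velocities, and the circle geometry are illustrative.

```python
import numpy as np

def advect(phi, u, v, dx, dt):
    """One first-order upwind step of phi_t + u*phi_x + v*phi_y = 0."""
    dpx_m = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in x
    dpx_p = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in x
    dpy_m = (phi - np.roll(phi, 1, axis=0)) / dx
    dpy_p = (np.roll(phi, -1, axis=0) - phi) / dx
    phix = np.where(u > 0, dpx_m, dpx_p)           # upwind selection
    phiy = np.where(v > 0, dpy_m, dpy_p)
    return phi - dt * (u * phix + v * phiy)

# Signed distance to a circle: negative inside, positive outside.
n, L = 64, 1.0
x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt((X - 0.3) ** 2 + (Y - 0.5) ** 2) - 0.15

# Transport the interface to the right; the zero level set moves by
# roughly 50 * 0.005 * 1.0 = 0.25 in x.
for _ in range(50):
    phi = advect(phi, u=1.0, v=0.0, dx=L / (n - 1), dt=0.005)
```

    In the paper's setting, two such functions (one per interface) are evolved and then corrected against each other with an active-contour-style step, removing the overlaps and gaps that numerical diffusion creates between them.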

  10. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

    Yamamoto et al. developed a system for computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method of their proposed approach, they employed a constant threshold value for the edge indicator function related to the speed function of the level set method. However, it would be more appropriate to adjust the threshold value to each MS candidate region, because the edge magnitudes of MS candidates differ from each other. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). To adjust the threshold value for the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region, we constructed an ANN. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method, so that TP regions can be segmented and FP regions can be removed. The proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of MS candidate regions was 0.717 on average. (author)
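
    The edge indicator whose threshold the ANN tunes is typically of the following form; here `K` plays the role of the per-region threshold/scale parameter. The exact function used in the paper may differ, so treat this as a generic sketch.

```python
import numpy as np

def edge_indicator(image, K):
    """Generic edge indicator g = 1 / (1 + (|grad I| / K)**2).

    g is near 1 in flat regions and drops toward 0 at strong edges,
    which slows a level-set front there. K controls how strong an edge
    must be to stop the front; the paper's ANN chooses such a value per
    candidate region.
    """
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return 1.0 / (1.0 + (grad_mag / K) ** 2)
```

    A larger `K` makes the front insensitive to weak edges (useful for letting it sweep past FP regions), while a smaller `K` lets it stop at faint boundaries (useful for segmenting subtle TP regions).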

  11. Line-of-Credit Payment Scheme and Its Impact on the Retailer’s Ordering Policy with Inventory-Level-Dependent Demand

    Directory of Open Access Journals (Sweden)

    Tao Jia

    2016-01-01

    Full Text Available Practically, the supplier frequently offers the retailer a credit period to stimulate his/her ordering quantity. However, such a credit-period-only policy may lead to the dilemma that the supplier's accounts receivable increase with sales volume during the delay period, especially for an item with inventory-level-dependent demand. Thus, a line-of-credit (LOC) payment scheme is usually adopted by the supplier to better control accounts receivable. In this paper, the two-parameter LOC clause is first applied to develop an economic order quantity (EOQ) model with inventory-level-dependent demand, aiming to explore its influence on the retailer's ordering policy. Under this new policy, the retailer is granted full delay payment if his/her order quantity is below a predetermined quantity; otherwise, the retailer must make immediate payment for the excess part. After analyzing the relationships among the parameters, two distinct cases and several theoretical results are derived. From numerical examples, two incentives, a longer credit period and a lower rate of retailer capital opportunity cost, account for the retailer's excessive ordering policy. A well-designed LOC clause can thus be applied to induce the retailer to place an appropriate order quantity while ensuring the supplier maintains a reasonable level of accounts receivable.
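
    The two-parameter LOC clause described above splits the retailer's payment as follows; the function and its argument names are illustrative, not the paper's notation.

```python
def payment_split(order_qty, unit_price, loc_qty):
    """Split a purchase under a line-of-credit (LOC) clause: quantities
    up to loc_qty enjoy the full credit period (payment delayed), while
    any excess above loc_qty must be paid immediately."""
    delayed = min(order_qty, loc_qty) * unit_price
    immediate = max(order_qty - loc_qty, 0) * unit_price
    return immediate, delayed
```

    The retailer's trade-off is then clear: ordering beyond `loc_qty` raises inventory-level-dependent demand but ties up capital in the immediate payment, which is exactly the tension the EOQ analysis resolves.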

  12. Comparing Panelists' Understanding of Standard Setting across Multiple Levels of an Alternate Science Assessment

    Science.gov (United States)

    Hansen, Mary A.; Lyon, Steven R.; Heh, Peter; Zigmond, Naomi

    2013-01-01

    Large-scale assessment programs, including alternate assessments based on alternate achievement standards (AA-AAS), must provide evidence of technical quality and validity. This study provides information about the technical quality of one AA-AAS by evaluating the standard setting for the science component. The assessment was designed to have…

  13. Organizational factors related to low levels of sickness absence in a representative set of Swedish companies.

    Science.gov (United States)

    Stoetzer, Ulrich; Bergman, Peter; Aborg, Carl; Johansson, Gun; Ahlberg, Gunnel; Parmsund, Marianne; Svartengren, Magnus

    2014-01-01

    The aim of this qualitative study was to identify manageable organizational factors that could explain why some companies have low levels of sickness absence. There may be factors at the company level that can be managed to influence levels of sickness absence and to promote health and a prosperous organization. The study included a total of 204 semi-structured interviews at 38 representative Swedish companies. Qualitative thematic analysis was applied to the interviews, primarily with managers, to identify the organizational factors that characterize companies with low levels of sickness absence. The factors found to characterize such companies concerned strategies and procedures for managing leadership, employee development, communication, employee participation and involvement, corporate values and visions, and employee health. The results may be useful in finding strategies and procedures to reduce levels of sickness absence and promote health. There is research at the individual level on the reasons for sickness absence; this study elevates the issue to the organizational level. The findings suggest that explicit strategies for managing certain organizational factors can reduce sickness absence and help companies to develop more health-promoting strategies.

  14. Structural Equation Modelling with Three Schemes Estimation of Score Factors on Partial Least Square (Case Study: The Quality Of Education Level SMA/MA in Sumenep Regency)

    Science.gov (United States)

    Anekawati, Anik; Widjanarko Otok, Bambang; Purhadi; Sutikno

    2017-06-01

    Research in education often involves latent variables. A statistical analysis technique that can analyze the pattern of relationships among latent variables, as well as between latent variables and their indicators, is Structural Equation Modeling (SEM). SEM with partial least squares (PLS) was developed as an alternative for when these conditions hold: the theory underlying the design of the model is weak, no particular measurement scale is assumed, the sample size need not be large, and the data need not follow a multivariate normal distribution. The purpose of this paper is to compare the results of modeling the quality of education at the senior high school level (SMA/MA) in Sumenep Regency using the partial least squares SEM approach with three schemes for estimating factor scores. This paper results from explanatory research using secondary data from the Sumenep Education Department and Badan Pusat Statistik (BPS) Sumenep, namely Sumenep in Figures and the Districts of Sumenep in Figures for the year 2015. The units of observation were the districts of Sumenep, consisting of 18 districts on the mainland and 9 districts in the islands. There were two endogenous variables and one exogenous variable. The endogenous variables are the quality of education at the SMA/MA level (Y1) and school infrastructure (Y2), whereas the exogenous variable is socio-economic condition (X1). One improved model resulted, given by the path weighting scheme: this model is consistent, all of its indicators are valid, and its R-square value increased. The model is Y1 = 0.651 Y2. In this model, the quality of education is influenced only by school infrastructure (0.651); socio-economic condition affected neither school infrastructure nor the quality of education. If school infrastructure increases by 1 point, the quality of education increases by 0.651 points. The quality of education had an R2 of 0

  15. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that the DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
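
    The shell-driven thresholding can be sketched as follows: gather intensities in a thick band around the current front and compute a histogram threshold there. Otsu's method stands in for the paper's optimal-threshold computation, and all names and the shell width are illustrative.

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Histogram threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = float(hist.sum())
    sum_all = float((hist * centers).sum())
    best_t, best_var = centers[0], -1.0
    w0 = 0.0   # weight of the lower class
    sum0 = 0.0  # intensity sum of the lower class
    for i in range(nbins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

def shell_threshold(image, phi, width):
    """Threshold from intensities in a thick band around the front phi = 0."""
    shell = np.abs(phi) < width
    return otsu_threshold(image[shell])
```

    As the front propagates, recomputing the threshold from the shell alone (rather than the whole image) is what keeps the object-to-background volume ratio in the histogram close to one, which is the stopping criterion the abstract describes.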

  16. On piecewise constant level-set (PCLS) methods for the identification of discontinuous parameters in ill-posed problems

    International Nuclear Information System (INIS)

    De Cezaro, A; Leitão, A; Tai, X-C

    2013-01-01

    We investigate level-set-type methods for solving ill-posed problems with discontinuous (piecewise constant) coefficients. The goal is to identify the level sets as well as the level values of an unknown parameter function on a model described by a nonlinear ill-posed operator equation. The PCLS approach is used here to parametrize the solution of a given operator equation in terms of an L2 level-set function, i.e. the level-set function itself is assumed to be a piecewise constant function. Two distinct methods are proposed for computing stable solutions of the resulting ill-posed problem: the first is based on Tikhonov regularization, while the second is based on the augmented Lagrangian approach with total variation penalization. Classical regularization results (Engl H W et al 1996 Mathematics and its Applications (Dordrecht: Kluwer)) are derived for the Tikhonov method. On the other hand, for the augmented Lagrangian method, we succeed in proving the existence of (generalized) Lagrangian multipliers in the sense of (Rockafellar R T and Wets R J-B 1998 Grundlehren der Mathematischen Wissenschaften (Berlin: Springer)). Numerical experiments are performed for a 2D inverse potential problem (Hettlich F and Rundell W 1996 Inverse Problems 12 251–66), demonstrating the capabilities of both methods for solving this ill-posed problem in a stable way (complicated inclusions are recovered without any a priori geometrical information on the unknown parameter). (paper)
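
    The PCLS parametrization and the Tikhonov-type formulation discussed above can be written compactly as follows. This is a sketch with generic symbols, not the paper's exact notation: F is the forward operator, y^delta the noisy data, alpha the regularization weight, and R a penalty such as total variation.

```latex
% Piecewise constant level-set (PCLS) parametrization: the parameter u
% takes two values c_1, c_2 separated by the zero level of \phi.
\[
  u = c_1 H(\phi) + c_2 \bigl(1 - H(\phi)\bigr), \qquad
  H(\phi) =
  \begin{cases}
    1, & \phi \ge 0,\\
    0, & \phi < 0,
  \end{cases}
\]
% Tikhonov-type stabilization of the resulting ill-posed problem.
\[
  \min_{\phi}\; \bigl\| F\bigl(u(\phi)\bigr) - y^{\delta} \bigr\|^{2}
  + \alpha\, R(\phi).
\]
```

    The augmented Lagrangian variant replaces the single penalized functional with a constrained formulation solved via (generalized) multipliers, as the abstract notes.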

  17. Area-level risk factors for adverse birth outcomes: trends in urban and rural settings

    OpenAIRE

    Kent, Shia T; McClure, Leslie A; Zaitchik, Ben F; Gohlke, Julia M

    2013-01-01

    Background Significant and persistent racial and income disparities in birth outcomes exist in the US. The analyses in this manuscript examine whether adverse birth outcome time trends and associations between area-level variables and adverse birth outcomes differ by urban-rural status. Methods Alabama birth records were merged with ZIP code-level census measures of race, poverty, and rurality. B-splines were used to determine long-term preterm birth (PTB) and low birth weight (LBW) trends b...

  18. Representing the Fuzzy improved risk graph for determination of optimized safety integrity level in industrial setting

    Directory of Open Access Journals (Sweden)

    Z. Qorbali

    2013-12-01

    Conclusion: As a result of establishing the presented method, identical levels in the conventional risk graph table are replaced with distinct sublevels, which not only increases the accuracy in determining the SIL but also elucidates the effective factors in improving the safety level, thereby saving significant time and cost. The proposed technique has been employed to determine the SIL of the Tehran Refinery ISOMAX Center. The IRG and FIRG results have been compared to clarify the efficacy and importance of the proposed method.

  19. On the Level Set of a Function with Degenerate Minimum Point

    Directory of Open Access Journals (Sweden)

    Yasuhiko Kamiyama

    2015-01-01

    Full Text Available For n ≥ 2, let M be an n-dimensional smooth closed manifold and f: M → R a smooth function. We set min f(M) = m and assume that m is attained by a unique point p ∈ M such that p is a nondegenerate critical point. Then the Morse lemma tells us that if a is slightly bigger than m, f^{-1}(a) is diffeomorphic to S^{n-1}. In this paper, we relax the condition on p from being nondegenerate to being an isolated critical point and obtain the same consequence. An application to the topology of polygon spaces is also included.

  20. CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François

    2009-01-01

    Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identificat...... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes....

  1. Development of an efficient and economic small scale management scheme for low-and intermediate level radioactive waste and its impact on the environment

    International Nuclear Information System (INIS)

    Salomon, A.Ph.; Panem, J.A.; Manalastas, H.C.; Cortez, S.L.; Paredes, C.H.; Bartolome, Z.M.

    1976-05-01

    This paper is a preliminary report on the evolution of a pilot-scale management system for low- and intermediate-level radioactive wastes to provide adequate protection to the public as well as maintain the equilibrium of the human environment. Discussions of the waste management and disposal scheme proposals, an assessment of the waste treatment requirements of the Atomic Research Center (ARC), Philippine Atomic Energy Commission, previous experience in the handling and management of radioactive wastes, current practices and alternatives to meet waste management problems, and research studies on waste treatment are presented. In selecting a chemical treatment process for the ARC, comparative studies of the different waste processing methods, or combinations of processes, that will be most suitable for the waste requirements of the Center are in progress. The decontamination efficiency and economy of the lime-soda, ferrocyanide phosphate and ferric hydroxide methods are being compared. Jar experiments were conducted on the lime-soda process to establish the optimal conditions for the parameters required to achieve an efficient and economical treatment system applicable to local conditions for attaining maximum removal of contamination: maximum settling time of 5 hours after treatment, optimum pH of 11, and a 2:3 ppm ratio of Ca2+ to CO32- concentration; the concentration of dosing reagents can be increased further beyond 160 ppm Ca2+ and 240 ppm CO32-. Aside from strontium, cobalt contamination can also be removed by lime-soda treatment

  2. Additive operator-difference schemes splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy

  3. Wave energy level and geographic setting correlate with Florida beach water quality.

    Science.gov (United States)

    Feng, Zhixuan; Reniers, Ad; Haus, Brian K; Solo-Gabriele, Helena M; Kelly, Elizabeth A

    2016-03-15

    Many recreational beaches suffer from elevated levels of microorganisms, resulting in beach advisories and closures due to lack of compliance with Environmental Protection Agency guidelines. We conducted the first statewide beach water quality assessment by analyzing decadal records of fecal indicator bacteria (enterococci and fecal coliform) levels at 262 Florida beaches. The objectives were to depict synoptic patterns of beach water quality exceedance along the entire Florida shoreline and to evaluate their relationships with wave condition and geographic location. Percent exceedances based on enterococci and fecal coliform were negatively correlated with both long-term mean wave energy and beach slope. Also, Gulf of Mexico beaches exceeded the thresholds significantly more than Atlantic Ocean ones, perhaps partially due to the lower wave energy. A possible linkage between wave energy level and water quality is beach sand, a pervasive nonpoint source that tends to harbor more bacteria in the low-wave-energy environment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Area-level risk factors for adverse birth outcomes: trends in urban and rural settings.

    Science.gov (United States)

    Kent, Shia T; McClure, Leslie A; Zaitchik, Ben F; Gohlke, Julia M

    2013-06-10

    Significant and persistent racial and income disparities in birth outcomes exist in the US. The analyses in this manuscript examine whether adverse birth outcome time trends and associations between area-level variables and adverse birth outcomes differ by urban-rural status. Alabama birth records were merged with ZIP code-level census measures of race, poverty, and rurality. B-splines were used to determine long-term preterm birth (PTB) and low birth weight (LBW) trends by rurality. Logistic regression models were used to examine differences in the relationships between ZIP code-level percent poverty or percent African-American with either PTB or LBW. Interactions with rurality were examined. Population dense areas had higher adverse birth outcome rates compared to other regions. For LBW, the disparity between population dense and other regions increased during the 1991-2005 time period, and the magnitude of the disparity was maintained through 2010. Overall PTB and LBW rates have decreased since 2006, except within isolated rural regions. The addition of individual-level socioeconomic or race risk factors greatly attenuated these geographical disparities, but isolated rural regions maintained increased odds of adverse birth outcomes. ZIP code-level percent poverty and percent African American both had significant relationships with adverse birth outcomes. Poverty associations remained significant in the most population-dense regions when models were adjusted for individual-level risk factors. Population dense urban areas have heightened rates of adverse birth outcomes. High-poverty African American areas have higher odds of adverse birth outcomes in urban versus rural regions. These results suggest there are urban-specific social or environmental factors increasing risk for adverse birth outcomes in underserved communities. On the other hand, trends in PTBs and LBWs suggest interventions that have decreased adverse birth outcomes elsewhere may not be reaching

  5. County-Level Poverty Is Equally Associated with Unmet Health Care Needs in Rural and Urban Settings

    Science.gov (United States)

    Peterson, Lars E.; Litaker, David G.

    2010-01-01

    Context: Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. Purpose: Compare the association between regional poverty with self-reported unmet…

  6. The Daily Events and Emotions of Master's-Level Family Therapy Trainees in Off-Campus Practicum Settings

    Science.gov (United States)

    Edwards, Todd M.; Patterson, Jo Ellen

    2012-01-01

    The Day Reconstruction Method (DRM) was used to assess the daily events and emotions of one program's master's-level family therapy trainees in off-campus practicum settings. This study examines the DRM reports of 35 family therapy trainees in the second year of their master's program in marriage and family therapy. Four themes emerged from the…

  7. Activity Sets in Multi-Organizational Ecologies : A Project-Level Perspective on Sustainable Energy Innovations

    NARCIS (Netherlands)

    Gerrit Willem Ziggers; Kristina Manser; Bas Hillebrand; Paul Driessen; Josée Bloemer

    2014-01-01

    Complex innovations involve multi-organizational ecologies consisting of a myriad of different actors. This study investigates how innovation activities can be interpreted in the context of multi-organizational ecologies. Taking a project-level perspective, this study proposes a typology of four

  8. Supporting Diverse Young Adolescents: Cooperative Grouping in Inclusive Middle-Level Settings

    Science.gov (United States)

    Miller, Nicole C.; McKissick, Bethany R.; Ivy, Jessica T.; Moser, Kelly

    2017-01-01

    The middle level classroom presents unique challenges to educators who strive to provide opportunities that acknowledge learner diversity in terms of social, cognitive, physical, and emotional development. This is confounded even further within inclusive middle-school classrooms where the responsibility to differentiate instruction is even more…

  9. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting

    OpenAIRE

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-01-01

    Background Flipped Classroom is a model that's quickly gaining recognition as a novel teaching approach among health science curricula. The purpose of this study was four-fold and aimed to compare Flipped Classroom effectiveness ratings with: 1) student socio-demographic characteristics, 2) student final grades, 3) student overall course satisfaction, and 4) course pre-Flipped Classroom effectiveness ratings. Methods The participants in the study consisted of 67 Masters-level graduate student...

  10. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective of studying the evolution of statistically representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  11. Implementing and measuring the level of laboratory service integration in a program setting in Nigeria.

    Directory of Open Access Journals (Sweden)

    Henry Mbah

    Full Text Available The surge of donor funds to fight the HIV/AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However, these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services and highlights some key intervention steps taken to enhance service integration. A quantitative before-and-after study was conducted in 122 Family Health International (FHI360) supported health facilities across Nigeria. A minimum service package was identified, including management structure; trainings; equipment utilization and maintenance; and information, commodity and quality management for laboratory integration. A checklist was used to assess facilities at baseline and at 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score grading, expressed as a percentage of the total obtainable score of 14, was defined and used to classify facilities (≥ 80% FULL, 25% to 79% PARTIAL and < 25% NO integration). Weaknesses were noted and addressed. We analyzed 9 (7.4%) primary, 104 (85.2%) secondary and 9 (7.4%) tertiary level facilities. There were statistically significant differences in integration levels between baseline and the 3-month follow-up period (p < 0.01). Baseline median total integration score was 4 (IQR 3 to 5) compared to 7 (IQR 4 to 9) at 3 months follow-up (p = 0.000). Partially and fully integrated laboratory systems were 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%) respectively at 3 months follow-up (p = 0.000). This project showcases our novel approach to measure the status of each laboratory on the integration continuum.
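
    The scoring rubric described above (seven service packages each rated 0-2, a maximum obtainable score of 14, then banded by percentage) can be sketched as follows; the package names are illustrative assumptions, not the exact checklist items:

```python
# Hypothetical sketch of the ordinal scoring described above: each of 7
# service packages is rated 0 (none), 1 (partial) or 2 (full), giving a
# maximum obtainable score of 14; facilities are then banded by percentage.

PACKAGES = ["management", "training", "equipment", "maintenance",
            "information", "commodity", "quality"]  # assumed names

def classify(ratings):
    """Return (percent score, integration band) for one facility."""
    total = sum(ratings[p] for p in PACKAGES)
    pct = 100.0 * total / (2 * len(PACKAGES))  # max score = 14
    if pct >= 80:
        band = "FULL"
    elif pct >= 25:
        band = "PARTIAL"
    else:
        band = "NONE"
    return pct, band

print(classify(dict.fromkeys(PACKAGES, 1)))  # (50.0, 'PARTIAL')
```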

  12. Job satisfaction in nurses working in tertiary level health care settings of Islamabad, Pakistan.

    Science.gov (United States)

    Bahalkani, Habib Akhtar; Kumar, Ramesh; Lakho, Abdul Rehman; Mahar, Benazir; Mazhar, Syeda Batool; Majeed, Abdul

    2011-01-01

    Job satisfaction greatly determines the productivity and efficiency of human resources for health. It literally means: 'the extent to which health professionals like or dislike their jobs'. Job satisfaction is said to be linked with an employee's work environment, job responsibilities, powers, and time pressure among various health professionals. As such, it affects an employee's organizational commitment and consequently the quality of health services. The objective of this study was to determine the level of job satisfaction and the factors influencing it among nurses in a public sector hospital of Islamabad. A cross-sectional study with a self-administered structured questionnaire was conducted in the federal capital of Pakistan, Islamabad. The sample included 56 qualified nurses working in a tertiary care hospital. Overall, 86% of respondents were dissatisfied, with about 26% highly dissatisfied with their job. The work environment, poor fringe benefits, dignity, responsibility given at the workplace and time pressure were reasons for dissatisfaction. Poor work environment, low salaries, lack of training opportunities, lack of proper supervision, time pressure and financial rewards were also reported by the respondents. Our findings indicate a low level of overall satisfaction among workers in a public sector tertiary care health organization in Islamabad. Most of this dissatisfaction is caused by poor salaries, lack of due respect, poor work environment, unbalanced responsibilities with little overall control, time pressure, patient care and lack of opportunities for professional development.

  13. County-level poverty is equally associated with unmet health care needs in rural and urban settings.

    Science.gov (United States)

    Peterson, Lars E; Litaker, David G

    2010-01-01

    Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings, or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. We compare the association between regional poverty and self-reported unmet need, a marker of health care access, by rural/urban setting. Multilevel, cross-sectional analysis of a state-representative sample of 39,953 adults stratified by rural/urban status, linked at the county level to data describing contextual characteristics. Weighted random intercept models examined the independent association of regional poverty with unmet needs, controlling for a range of contextual and individual-level characteristics. The unadjusted association between regional poverty levels and unmet needs was similar in both rural (OR = 1.06 [95% CI, 1.04-1.08]) and urban (OR = 1.03 [1.02-1.05]) settings. Adjusting for other contextual characteristics increased the size of the association in both rural (OR = 1.11 [1.04-1.19]) and urban (OR = 1.11 [1.05-1.18]) settings. Further adjustment for individual characteristics had little additional effect in rural (OR = 1.10 [1.00-1.20]) or urban (OR = 1.11 [1.01-1.22]) settings. To better meet the health care needs of all Americans, health care systems in areas with high regional poverty should acknowledge the relationship between poverty and unmet health care needs. Investments, or other interventions, that reduce regional poverty may be useful strategies for improving health through better access to health care. © 2010 National Rural Health Association.

  14. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial.

    Science.gov (United States)

    Jiryaee, Nasrin; Siadat, Zahra Dana; Zamani, Ahmadreza; Taleban, Roya

    2015-10-01

    An intervention to increase physical activity should be based on the health care setting's resources and be acceptable to the subject group. This study was designed to assess and compare the effect of the goal setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Mothers who had at least one child of 1-5 years were randomized into two groups. The effect of 1) a goal-setting strategy and 2) a group education method on increasing physical activity was assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Physical activity level increased significantly after the intervention in the goal-setting group and it was significantly different between the two groups after intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Our study demonstrated the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  15. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial

    Directory of Open Access Journals (Sweden)

    Nasrin Jiryaee

    2015-01-01

    Full Text Available Background: An intervention to increase physical activity should be based on the health care setting's resources and be acceptable to the subject group. This study was designed to assess and compare the effect of the goal setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Materials and Methods: Mothers who had at least one child of 1-5 years were randomized into two groups. The effect of 1) a goal-setting strategy and 2) a group education method on increasing physical activity was assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Results: Physical activity level increased significantly after the intervention in the goal-setting group and it was significantly different between the two groups after intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Conclusion: Our study demonstrated the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  16. Recent developments in automated determinations of trace level concentrations of elements and on-line fractionations schemes exploiting the micro-sequential injection - lab-on-valve approach

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Miró, Manuel; Long, Xiangbao

    2006-01-01

    The determination of trace level concentrations of elements, such as metal species, in complex matrices by atomic absorption or emission spectrometric methods often requires appropriate pretreatments comprising separation of the analyte from interfering constituents and analyte preconcentration...... are presented as based on the exploitation of micro-sequential injection (μSI-LOV) using hydrophobic as well as hydrophilic bead materials. The examples given comprise the presentation of a universal approach for SPE-assays, front-end speciation of Cr(III) and Cr(VI) in a fully automated and enclosed set...

  17. A possible methodological approach to setting up control level of radiation factors

    International Nuclear Information System (INIS)

    Devyatajkin, E.V.; Abramov, Yu.V.

    1986-01-01

    A mathematical formalization of the concept of control levels (CL) is described, which enables one to obtain the CL numerical values of controllable parameters required for rapid control purposes. The initial data for the assessment of environmental radioactivity are the controllable parameter values, i.e. the practical characteristics of a controllable radiation factor expressed as technically measurable or calculated values. The controllable parameters can be divided into two classes depending on the degree of radiation effect on man: those possessing additivity properties (dosimetric class) and those not possessing them (radiation class, which comprises the results of control of medium alteration dynamics, equipment operation safety, and completeness of protection measures performance). The CL calculation formulas, taking into account the requirements of the radiation safety standards (RSS-76), are presented

  18. High Levels of Post-Abortion Complication in a Setting Where Abortion Service Is Not Legalized

    Science.gov (United States)

    Melese, Tadele; Habte, Dereje; Tsima, Billy M.; Mogobe, Keitshokile Dintle; Chabaesele, Kesegofetse; Rankgoane, Goabaone; Keakabetse, Tshiamo R.; Masweu, Mabole; Mokotedi, Mosidi; Motana, Mpho; Moreri-Ntshabele, Badani

    2017-01-01

    Background Maternal mortality due to abortion complications stands among the three leading causes of maternal death in Botswana where there is a restrictive abortion law. This study aimed at assessing the patterns and determinants of post-abortion complications. Methods A retrospective institution based cross-sectional study was conducted at four hospitals from January to August 2014. Data were extracted from patients’ records with regards to their socio-demographic variables, abortion complications and length of hospital stay. Descriptive statistics and bivariate analysis were employed. Result A total of 619 patients’ records were reviewed with a mean (SD) age of 27.12 (5.97) years. The majority of abortions (95.5%) were reported to be spontaneous and 3.9% of the abortions were induced by the patient. Two thirds of the patients were admitted as their first visit to the hospitals and one third were referrals from other health facilities. Two thirds of the patients were admitted as a result of incomplete abortion followed by inevitable abortion (16.8%). Offensive vaginal discharge (17.9%), tender uterus (11.3%), septic shock (3.9%) and pelvic peritonitis (2.4%) were among the physical findings recorded on admission. Clinically detectable anaemia evidenced by pallor was found to be the leading major complication in 193 (31.2%) of the cases followed by hypovolemic and septic shock 65 (10.5%). There were a total of 9 abortion-related deaths with a case fatality rate of 1.5%. Self-induced abortion and delayed uterine evacuation of more than six hours were found to have significant association with post-abortion complications (p-values of 0.018 and 0.035 respectively). Conclusion Abortion-related complications and deaths are high in our setting where abortion is illegal. Mechanisms need to be devised in the health facilities to evacuate the uterus in good time whenever it is indicated and to be equipped to handle the fatal complications. There is an indication for…

  19. High Levels of Post-Abortion Complication in a Setting Where Abortion Service Is Not Legalized.

    Directory of Open Access Journals (Sweden)

    Tadele Melese

    Full Text Available Maternal mortality due to abortion complications stands among the three leading causes of maternal death in Botswana, where there is a restrictive abortion law. This study aimed at assessing the patterns and determinants of post-abortion complications. A retrospective institution-based cross-sectional study was conducted at four hospitals from January to August 2014. Data were extracted from patients' records with regards to their socio-demographic variables, abortion complications and length of hospital stay. Descriptive statistics and bivariate analysis were employed. A total of 619 patients' records were reviewed with a mean (SD) age of 27.12 (5.97) years. The majority of abortions (95.5%) were reported to be spontaneous and 3.9% of the abortions were induced by the patient. Two thirds of the patients were admitted as their first visit to the hospitals and one third were referrals from other health facilities. Two thirds of the patients were admitted as a result of incomplete abortion, followed by inevitable abortion (16.8%). Offensive vaginal discharge (17.9%), tender uterus (11.3%), septic shock (3.9%) and pelvic peritonitis (2.4%) were among the physical findings recorded on admission. Clinically detectable anaemia evidenced by pallor was found to be the leading major complication in 193 (31.2%) of the cases, followed by hypovolemic and septic shock 65 (10.5%). There were a total of 9 abortion-related deaths with a case fatality rate of 1.5%. Self-induced abortion and delayed uterine evacuation of more than six hours were found to have significant association with post-abortion complications (p-values of 0.018 and 0.035 respectively). Abortion-related complications and deaths are high in our setting where abortion is illegal. Mechanisms need to be devised in the health facilities to evacuate the uterus in good time whenever it is indicated and to be equipped to handle the fatal complications. There is an indication for clinical audit on post-abortion care.

  20. Sequential injection/bead injection lab-on-valve schemes for on-line solid phase extraction and preconcentration of ultra-trace levels of heavy metals with determination by ETAAS and ICPMS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald; Miró, Manuel

    2003-01-01

    are focused on the applications of SI-BI-LOV protocols for on-line microcolumn based solid phase extraction of ultra-trace levels of heavy metals, employing the so-called renewable surface separation and preconcentration manipulatory scheme. Two types of sorbents have been employed as packing material...

  1. Setting up experimental incineration system for low-level radioactive samples and combustion experiments

    International Nuclear Information System (INIS)

    Yumoto, Yasuhiro; Hanafusa, Tadashi; Nagamatsu, Tomohiro; Okada, Shigeru

    1997-01-01

    An incineration system was constructed, composed of a combustion furnace (AP-150R), a cyclone dust collector, radioisotope trapping and measurement apparatus, and a radioisotope storage room built in the first basement of the Radioisotope Center. Low level radioactive samples (LLRS) used for the combustion experiments were composed of combustible or semi-combustible material containing 500 kBq of 99mTcO4 or 23.25 kBq of 131INa. The distribution of radioisotopes both inside and outside the combustion furnace was estimated. We measured the radioactivity of the flue gas at the terminal exit of the exhaust port. In the case of combustion of LLRS containing 99mTcO4 or 131INa, the concentration of radioisotopes at the exhaust port was less than the legal concentration limit for these radioisotopes. For combustion of LLRS containing 99mTcO4 or 131INa, the decontamination factors of the incineration system were 120 and 1.1, respectively. (author)
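
    The decontamination factor quoted above is the ratio of the activity fed to the furnace to the activity released at the exhaust; a minimal sketch with illustrative numbers (the released activity below is invented to reproduce a DF of about 120, it is not a measurement from the paper):

```python
def decontamination_factor(activity_in_bq, activity_out_bq):
    """DF = activity fed into the furnace / activity released at the stack."""
    if activity_out_bq <= 0:
        raise ValueError("released activity must be positive")
    return activity_in_bq / activity_out_bq

# A 500 kBq charge with ~4.17 kBq escaping gives DF ~ 120 (numbers illustrative).
print(round(decontamination_factor(500_000.0, 4_166.7)))  # 120
```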

  2. Implementing and measuring the level of laboratory service integration in a program setting in Nigeria.

    Science.gov (United States)

    Mbah, Henry; Negedu-Momoh, Olubunmi Ruth; Adedokun, Oluwasanmi; Ikani, Patrick Anibbe; Balogun, Oluseyi; Sanwo, Olusola; Ochei, Kingsley; Ekanem, Maurice; Torpey, Kwasi

    2014-01-01

    The surge of donor funds to fight the HIV/AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However, these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services and highlights some key intervention steps taken to enhance service integration. A quantitative before-and-after study was conducted in 122 Family Health International (FHI360) supported health facilities across Nigeria. A minimum service package was identified, including management structure; trainings; equipment utilization and maintenance; and information, commodity and quality management for laboratory integration. A checklist was used to assess facilities at baseline and at 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score grading expressed as a percentage of the total obtainable score of 14 was defined and used to classify facilities (≥ 80% FULL, 25% to 79% PARTIAL and < 25% NO integration). There were statistically significant differences in integration levels between baseline and the 3-month follow-up period (p < 0.01). Baseline median total integration score was 4 (IQR 3 to 5) compared to 7 (IQR 4 to 9) at 3 months follow-up (p = 0.000). Partially and fully integrated laboratory systems were 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%) respectively at 3 months follow-up (p = 0.000). This project showcases our novel approach to measure the status of each laboratory on the integration continuum.

  3. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movements of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator will serve as an accurate prediction tool for some MEMS fabrications.
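
    As a one-dimensional toy of the level-set evolution underlying such etch simulators, the sketch below advances a front at constant speed via the update φ_t + V|∇φ| = 0 with an upwind gradient; the paper's narrow-band treatment, 3D geometry and Monte Carlo flux model are not reproduced:

```python
# One-dimensional level-set toy: phi is the signed distance to a front at
# x = 0.5; evolving phi_t + V * |grad phi| = 0 with V = 1 moves the front
# in the +x direction at unit speed.
n = 101
h = 1.0 / (n - 1)
xs = [i * h for i in range(n)]
phi = [x - 0.5 for x in xs]

dt, speed, steps = 0.002, 1.0, 100       # total travel = 0.2
for _ in range(steps):
    # backward (upwind) difference, appropriate for a front moving rightwards
    grad = [(phi[i] - phi[i - 1]) / h if i > 0 else (phi[1] - phi[0]) / h
            for i in range(n)]
    phi = [p - dt * speed * abs(g) for p, g in zip(phi, grad)]

# the zero crossing of phi has moved from x = 0.5 to about x = 0.7
front = next(x for x, p in zip(xs, phi) if p >= 0.0)
print(round(front, 2))
```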

  4. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  5. Quasi-min-max Fuzzy MPC of UTSG Water Level Based on Off-Line Invariant Set

    Science.gov (United States)

    Liu, Xiangjie; Jiang, Di; Lee, Kwang Y.

    2015-10-01

    In a nuclear power plant, the water level of the U-tube steam generator (UTSG) must be maintained within a safe range. Traditional control methods encounter difficulties due to the complexity, strong nonlinearity and “swell and shrink” effects, especially at low power levels. A properly designed robust model predictive control can well solve this problem. In this paper, a quasi-min-max fuzzy model predictive controller is developed for controlling the constrained UTSG system. Since the online computational burden could be quite large for real-time control, a bank of ellipsoidal invariant sets together with the corresponding feedback control laws is obtained off-line by solving linear matrix inequalities (LMIs). Based on the UTSG states, the online optimization is simplified to a constrained optimization problem with a bisection search for the corresponding ellipsoid invariant set. Simulation results are given to show the effectiveness of the proposed controller.
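
    The off-line/online split described above can be sketched as a bisection over a bank of nested ellipsoids {x : xᵀPx ≤ 1}; the matrices below are illustrative stand-ins for the off-line LMI solutions, not the controller of the paper:

```python
# Hypothetical sketch: pick the tightest off-line ellipsoidal invariant set
# containing the current state by bisection. The bank is assumed sorted
# from largest (index 0) to smallest ellipsoid, so membership is monotone.

def in_ellipsoid(P, x):
    """Membership test for {x : x^T P x <= 1}, with P as nested lists."""
    n = len(x)
    quad = sum(x[i] * P[i][j] * x[j] for i in range(n) for j in range(n))
    return quad <= 1.0

def tightest_set(ellipsoids, x):
    """Binary search for the last (smallest) ellipsoid that contains x."""
    lo, hi, best = 0, len(ellipsoids) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if in_ellipsoid(ellipsoids[mid], x):
            best, lo = mid, mid + 1      # x fits; try a smaller set
        else:
            hi = mid - 1
    return best

# nested ellipsoids: scaled identity matrices, shrinking with the index
P_bank = [[[s, 0.0], [0.0, s]] for s in (0.25, 1.0, 4.0, 16.0)]
print(tightest_set(P_bank, [0.4, 0.3]))  # 2
```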

  6. Individual and setting level predictors of the implementation of a skin cancer prevention program: a multilevel analysis

    Directory of Open Access Journals (Sweden)

    Brownson Ross C

    2010-05-01

    Full Text Available Abstract Background To achieve widespread cancer control, a better understanding is needed of the factors that contribute to successful implementation of effective skin cancer prevention interventions. This study assessed the relative contributions of individual- and setting-level characteristics to implementation of a widely disseminated skin cancer prevention program. Methods A multilevel analysis was conducted using data from the Pool Cool Diffusion Trial from 2004 and replicated with data from 2005. Implementation of Pool Cool by lifeguards was measured using a composite score (implementation variable, range 0 to 10) that assessed whether the lifeguard performed different components of the intervention. Predictors included lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors, pool characteristics, and enhanced (i.e., more technical assistance, tailored materials, and incentives are provided) versus basic treatment group. Results The mean value of the implementation variable was 4 in both years (2004 and 2005; SD = 2 in 2004 and SD = 3 in 2005), indicating a moderate implementation for most lifeguards. Several individual-level (lifeguard characteristics) and setting-level (pool characteristics and treatment group) factors were found to be significantly associated with implementation of Pool Cool by lifeguards. All three lifeguard-level domains (lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors) and six pool-level predictors (number of weekly pool visitors, intervention intensity, geographic latitude, pool location, sun safety and/or skin cancer prevention programs, and sun safety programs and policies) were included in the final model. The most important predictors of implementation were the number of weekly pool visitors (inverse association) and enhanced treatment group (positive association). That is, pools with fewer weekly visitors and pools in the enhanced…

  7. The QKD network: model and routing scheme

    Science.gov (United States)

    Yang, Chao; Zhang, Hongqi; Su, Jinhai

    2017-11-01

    Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trust-relaying QKD network is the first choice for building a practical QKD network. However, previous research didn't address a routing method for the trust-relaying QKD network in detail. This paper focuses on the routing issues, builds a model of the trust-relaying QKD network for easily analysing and understanding this network, and proposes a dynamical routing scheme for this network. From the viewpoint of designing a dynamical routing scheme in a classical network, the proposed scheme consists of three components: a Hello protocol that helps share the network topology information, a routing algorithm to select a set of suitable paths and establish the routing table, and a link state update mechanism that helps keep the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
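
    As a toy illustration of the path-selection component of such a scheme, the sketch below runs a shortest-path search over a small invented relay topology; the node names, edge weights and cost model are assumptions, and the Hello and link-state-update protocols of the paper are not modelled:

```python
import heapq

# Toy path selection in a trust-relaying QKD network: nodes are trusted
# relays, edge weights model link cost (e.g. inverse of the secret key
# generation rate). Topology and weights are invented for illustration.

def dijkstra(graph, src, dst):
    """Return (cost, path) of the cheapest relay chain from src to dst."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

links = {                       # undirected key-rate costs (illustrative)
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 1.5, "D": 5.0},
    "C": {"A": 4.0, "B": 1.5, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}
print(dijkstra(links, "A", "D"))  # (3.5, ['A', 'B', 'C', 'D'])
```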

  8. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    Science.gov (United States)

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
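
    As a sketch of the two morphological features mentioned above (the length and curvature of a cell's boundary), the code below computes them for a closed polygonal contour, a stand-in for the stochastic level-set output; the SVM classification step is not included:

```python
import math

# Boundary length and mean absolute curvature of a closed polygon,
# using the exterior angle at each vertex as a discrete curvature.

def boundary_features(pts):
    n = len(pts)
    length = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))
    curv = 0.0
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = pts[i - 1], pts[i], pts[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)   # direction of incoming edge
        a2 = math.atan2(y2 - y1, x2 - x1)   # direction of outgoing edge
        d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrapped turn angle
        curv += abs(d)
    return length, curv / n

# A regular 60-gon approximating a unit circle: low, uniform curvature.
circle = [(math.cos(2 * math.pi * k / 60), math.sin(2 * math.pi * k / 60))
          for k in range(60)]
length, mean_curv = boundary_features(circle)
print(round(length, 2), round(mean_curv, 3))  # 6.28 0.105
```

    An irregular, lobed contour of the same area would show a larger mean curvature, which is the kind of shape change the classifier above exploits.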

  9. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which makes it easy to handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on NVIDIA graphics processing units to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information make it possible to detect objects with and without edges, and the 2D histogram information makes the method effective in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.

  10. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
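
    A minimal sketch of the kind of Bayesian parameter constraint described above, assuming a one-parameter power-law process rate and synthetic observations; none of this is the actual BOSS formulation:

```python
import math, random

# Toy Metropolis-Hastings: constrain parameter 'a' of an assumed process
# rate r(m) = a * m**b (b fixed) against synthetic noisy observations.
# The rate form, data, and flat positivity prior are all illustrative.

random.seed(0)
b, true_a, sigma = 2.0, 3.0, 0.5
moments = [0.5, 1.0, 1.5, 2.0]
obs = [true_a * m**b + random.gauss(0.0, sigma) for m in moments]

def log_post(a):
    if a <= 0:                       # flat prior on a > 0
        return -math.inf
    sse = sum((o - a * m**b) ** 2 for o, m in zip(obs, moments))
    return -sse / (2 * sigma**2)     # Gaussian likelihood, up to a constant

a, chain = 1.0, []
for _ in range(5000):
    prop = a + random.gauss(0.0, 0.2)        # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(a):
        a = prop
    chain.append(a)

post_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
print(round(post_mean, 1))  # close to the true value 3.0
```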

  11. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here for one site: Cherbourg, France, in the English Channel. The methodology follows two steps: first, computation of the joint probability of simultaneous wave height and still sea level; second, interpretation of that joint probability to assess the sea level for a given return period. Two approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously, and the conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, which is more accurate but requires more computation time, and classical ocean engineering design contours of inverse-FORM type, which are simpler and allow more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two approaches and the two methods. To be able to use both the Monte-Carlo simulation and the design contours method, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex for the Monte-Carlo simulation method.
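
A Monte-Carlo estimate of a return level from joint surge/wave draws can be sketched as below. The dependence structure (a shared "storm" factor) and the 20 % set-up rule are illustrative assumptions, not the paper's fitted joint distribution or empirical set-up formula.

```python
import numpy as np

rng = np.random.default_rng(1)

def annual_max_sea_level(n_years, events_per_year=50):
    """Annual maxima of still sea level plus wave set-up.

    A shared 'storm' factor gives surge and wave height the positive
    dependence that motivates a joint analysis; set-up is taken as 20 %
    of wave height. All numbers are illustrative assumptions.
    """
    storm = rng.gamma(2.0, 0.5, (n_years, events_per_year))
    still = 4.0 + 0.3 * storm + rng.normal(0.0, 0.1, storm.shape)   # m
    hs = 1.0 + 1.5 * storm + rng.normal(0.0, 0.3, storm.shape)      # m
    level = still + 0.2 * np.clip(hs, 0.0, None)   # still level + set-up
    return level.max(axis=1)

def return_level(maxima, t_years):
    """Empirical T-year return level: the 1 - 1/T quantile of annual maxima."""
    return np.quantile(maxima, 1.0 - 1.0 / t_years)
```

Simulating many synthetic years and reading off high quantiles is the accurate-but-expensive route the abstract contrasts with inverse-FORM design contours.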

  12. Electromagnetically induced transparency and retrieval of light pulses in a Λ-type and a V-type level scheme in Pr3+:Y2SiO5

    International Nuclear Information System (INIS)

    Beil, Fabian; Klein, Jens; Halfmann, Thomas; Nikoghosyan, Gor

    2008-01-01

    We examine electromagnetically induced transparency (EIT), the optical preparation of persistent nuclear spin coherences and the retrieval of light pulses both in a Λ-type and a V-type coupling scheme in a Pr 3+ :Y 2 SiO 5 crystal, cooled to cryogenic temperatures. The medium is prepared by optical pumping and spectral hole burning, creating a spectrally isolated Λ-type and a V-type system within the inhomogeneous bandwidth of the 3 H 4 ↔ 1 D 2 transition of the Pr 3+ ions. By EIT, in the Λ-type scheme we drive a nuclear spin coherence between the ground-state hyperfine levels, while in the V-type scheme we drive a coherence between the excited-state hyperfine levels. We observe the cancellation of absorption due to EIT and the retrieval of light pulses in both level schemes. This also permits the determination of dephasing times of the nuclear spin coherence, either in the ground state or the optically excited state

  13. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-fold efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-fold efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
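
A single-pass (non-iterative) reinitialization in the same spirit, using only the interface cells and then assigning distances in one sweep, can be sketched by brute force. This illustrates the idea only; it is not the paper's "forward tracing" algorithm.

```python
import numpy as np

def reinitialize(phi, dx=1.0):
    """Single-pass reinitialization sketch (illustrative; not the paper's
    "forward tracing" algorithm): find the zero crossings of phi by linear
    interpolation along cell edges, then give each cell the signed distance
    to the nearest crossing point."""
    ny, nx = phi.shape
    pts = []
    for j in range(ny):
        for i in range(nx):
            for dj, di in ((0, 1), (1, 0)):         # right and down edges
                jj, ii = j + dj, i + di
                if jj < ny and ii < nx and phi[j, i] * phi[jj, ii] < 0:
                    t = phi[j, i] / (phi[j, i] - phi[jj, ii])
                    pts.append((j + t * dj, i + t * di))
    pts = np.asarray(pts)                            # interface points (P, 2)
    J, I = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    d = np.sqrt((J[..., None] - pts[:, 0]) ** 2
                + (I[..., None] - pts[:, 1]) ** 2).min(axis=-1)
    return np.sign(phi) * d * dx
```

Note that only the cells adjacent to the zero level set contribute data, mirroring the "minimum set of cells" idea; the distorted input below is restored to an exact signed-distance field in one pass.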

  14. Leveling up: enabling diverse users to locate and effectively use unfamiliar data sets through NCAR's Research Data Archive

    Science.gov (United States)

    Peng, G. S.

    2016-12-01

    Research necessarily expands upon the volume and variety of data used in prior work. Increasingly, investigators look outside their primary areas of expertise for data to incorporate into their research. Locating and using the data that they need, which may be described in terminology from other fields of science or be encoded in unfamiliar data formats, present often insurmountable barriers for potential users. As a data provider of a diverse collection of over 600 atmospheric and oceanic data sets (DS) (http://rda.ucar.edu), we seek to reduce or remove those barriers. Serving a broadening and increasing user base with fixed and finite resources requires automation. Our software harvests metadata descriptors about the data from the data files themselves. Data curators/subject matter experts augment the machine-generated metadata as needed. Metadata powers our data search tools. Users may search for data in a myriad of ways ranging from free text queries to GCMD keywords to faceted searches capable of narrowing down selections by specific criteria. Users are offered customized lists of DSs fitting their criteria with links to DS main information pages that provide detailed information about each DS. Where appropriate, they link to the NCAR Climate Data Guide for expert guidance about strengths and weaknesses of that particular DS. Once users find the data sets they need, we provide modular lessons for common data tasks. The lessons may be data tool install guides, data recipes, blog posts, or short YouTube videos. Rather than overloading users with reams of information, we provide targeted lessons when the user is most receptive, e.g. when they want to use data in an unfamiliar format. We add new material when we discover common points of confusion. Each educational resource is tagged with DS ID numbers so that they are automatically linked with the relevant DSs. How can data providers leverage the work of other data providers? Can a common tagging scheme for data

  15. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Science.gov (United States)

    Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

    In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ∼ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ∼ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1. The average enthalpy of vaporization of secondary organic aerosol
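
The equilibrium partitioning that a VBS box model solves at each step follows the standard relation ξᵢ = (1 + C*ᵢ/C_OA)⁻¹ with C_OA = Σ ξᵢ·Cᵢ. A minimal fixed-point sketch follows; the bin values in the test are illustrative and this is not the CAMx implementation.

```python
import numpy as np

def vbs_partition(c_tot, c_star, c_oa_seed=1e-3, tol=1e-10):
    """Equilibrium gas/particle partitioning over a volatility basis set.

    c_tot  : total (gas + particle) mass per volatility bin (ug/m3)
    c_star : saturation concentrations C* of the bins (ug/m3)
    Returns the particle-phase mass per bin, from the standard VBS
    relation xi_i = (1 + C*_i / C_OA)^-1 solved by fixed-point iteration.
    """
    c_oa = c_oa_seed
    for _ in range(1000):
        xi = 1.0 / (1.0 + c_star / c_oa)        # particle-phase fraction
        c_oa_new = np.sum(xi * c_tot)           # implied OA concentration
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return xi * c_tot
```

At equilibrium the low-volatility bins sit mostly in the particle phase and the high-volatility bins mostly in the gas phase, which is what makes aging (mass moving to lower C* bins) grow the OA mass.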

  16. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2017-06-01

    Full Text Available In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ∼ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol–chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC ∕ OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ∼ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1

  17. Bud development, flowering and fruit set of Moringa oleifera Lam. (Horseradish Tree) as affected by various irrigation levels

    Directory of Open Access Journals (Sweden)

    Quintin Ernst Muhl

    2013-12-01

    Full Text Available Moringa oleifera is becoming increasingly popular as an industrial crop due to its multitude of useful attributes as water purifier, nutritional supplement and biofuel feedstock. Given its tolerance to sub-optimal growing conditions, most of the current and anticipated cultivation areas are in medium to low rainfall areas. This study aimed to assess the effect of various irrigation levels on floral initiation, flowering and fruit set. Three treatments, namely 900 mm (900IT), 600 mm (600IT) and 300 mm (300IT) per annum, were administered through drip irrigation, simulating three total annual rainfall amounts. Individual inflorescences from each treatment were tagged during floral initiation and monitored through to fruit set. Flower bud initiation was highest at the 300IT and lowest at the 900IT for two consecutive growing seasons. Fruit set, on the other hand, decreased with the decrease in irrigation treatment. Floral abortion, reduced pollen viability and moisture stress in the style were contributing factors to the reduction in fruiting/yield observed at the 300IT. Moderate water stress prior to floral initiation could stimulate flower initiation; however, this should be followed by sufficient irrigation to ensure good pollination, fruit set and yield.

  18. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.

  19. Improving district level health planning and priority setting in Tanzania through implementing accountability for reasonableness framework: Perceptions of stakeholders.

    Science.gov (United States)

    Maluka, Stephen; Kamuzora, Peter; San Sebastián, Miguel; Byskov, Jens; Ndawi, Benedict; Hurtig, Anna-Karin

    2010-12-01

    In 2006, researchers and decision-makers launched a five-year project - Response to Accountable Priority Setting for Trust in Health Systems (REACT) - to improve planning and priority-setting through implementing the Accountability for Reasonableness framework in Mbarali District, Tanzania. The objective of this paper is to explore the acceptability of Accountability for Reasonableness from the perspectives of the Council Health Management Team, local government officials, health workforce and members of user boards and committees. Individual interviews were carried out with different categories of actors and stakeholders in the district. The interview guide consisted of a series of questions, asking respondents to describe their perceptions regarding each condition of the Accountability for Reasonableness framework in terms of priority setting. Interviews were analysed using thematic framework analysis. Documentary data were used to support, verify and highlight the key issues that emerged. Almost all stakeholders viewed Accountability for Reasonableness as an important and feasible approach for improving priority-setting and health service delivery in their context. However, a few aspects of Accountability for Reasonableness were seen as too difficult to implement given the socio-political conditions and traditions in Tanzania. Respondents mentioned: budget ceilings and guidelines, low level of public awareness, unreliable and untimely funding, as well as the limited capacity of the district to generate local resources as the major contextual factors that hampered the full implementation of the framework in their context. This study was one of the first assessments of the applicability of Accountability for Reasonableness in health care priority-setting in Tanzania. The analysis, overall, suggests that the Accountability for Reasonableness framework could be an important tool for improving priority-setting processes in the contexts of resource-poor settings

  20. Introduction to the level-set full field modeling of laths spheroidization phenomenon in α/β titanium alloys

    Directory of Open Access Journals (Sweden)

    Polychronopoulou D.

    2016-01-01

    Full Text Available Fragmentation of α lamellae and subsequent spheroidization of α laths in α/β titanium alloys occurring during and after deformation are well known phenomena. We will illustrate the development of a new finite element methodology to model them. This new methodology is based on a level set framework to model the deformation and the simultaneous and/or subsequent interface kinetics. We will focus, for now, on the modeling of surface diffusion at the α/β phase interfaces and motion by mean curvature at the α/α grain interfaces.
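
The motion by mean curvature named above takes the level-set form φₜ = κ|∇φ| with κ = div(∇φ/|∇φ|). The paper works in a finite element setting, so the finite-difference grid sketch below is only illustrative.

```python
import numpy as np

def curvature(phi, dx=1.0):
    """kappa = div(grad(phi)/|grad(phi)|) by central differences."""
    py, px = np.gradient(phi, dx)               # axis 0 first, then axis 1
    norm = np.sqrt(px**2 + py**2) + 1e-12       # guard against zero gradient
    return (np.gradient(px / norm, dx, axis=1)
            + np.gradient(py / norm, dx, axis=0))

def mean_curvature_step(phi, dt=0.1, dx=1.0):
    """One explicit step of motion by mean curvature: phi_t = kappa*|grad phi|."""
    py, px = np.gradient(phi, dx)
    return phi + dt * curvature(phi, dx) * np.sqrt(px**2 + py**2)
```

Under this flow a circular α particle shrinks at rate 1/r, the classic behaviour that drives spheroidization of high-curvature lath tips.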

  1. Novel room-temperature-setting phosphate ceramics for stabilizing combustion products and low-level mixed wastes

    International Nuclear Information System (INIS)

    Wagh, A.S.; Singh, D.

    1994-01-01

    Argonne National Laboratory, with support from the Office of Technology in the US Department of Energy (DOE), has developed a new process employing novel, chemically bonded ceramic materials to stabilize secondary waste streams. Such waste streams result from the thermal processes used to stabilize low-level, mixed wastes. The process will help the electric power industry treat its combustion and low-level mixed wastes. The ceramic materials are strong, dense, leach-resistant, and inexpensive to fabricate. The room-temperature-setting process allows stabilization of volatile components containing lead, mercury, cadmium, chromium, and nickel. The process also provides effective stabilization of fossil fuel combustion products. It is most suitable for treating fly and bottom ashes

  2. Automated volume analysis of head and neck lesions on CT scans using 3D level set segmentation

    International Nuclear Information System (INIS)

    Street, Ethan; Hadjiiski, Lubomir; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Mukherji, Suresh K.; Chan, Heang-Ping

    2007-01-01

    The authors have developed a semiautomatic system for segmentation of a diverse set of lesions in head and neck CT scans. The system takes as input an approximate bounding box, and uses a multistage level set to perform the final segmentation. A data set consisting of 69 lesions marked on 33 scans from 23 patients was used to evaluate the performance of the system. The contours from automatic segmentation were compared to both 2D and 3D gold standard contours manually drawn by three experienced radiologists. Three performance metric measures were used for the comparison. In addition, a radiologist provided quality ratings on a 1 to 10 scale for all of the automatic segmentations. For this pilot study, the authors observed that the differences between the automatic and gold standard contours were larger than the interobserver differences. However, the system performed comparably to the radiologists, achieving an average area intersection ratio of 85.4% compared to an average of 91.2% between two radiologists. The average absolute area error was 21.1% compared to 10.8%, and the average 2D distance was 1.38 mm compared to 0.84 mm between the radiologists. In addition, the quality rating data showed that, despite the very lax assumptions made on the lesion characteristics in designing the system, the automatic contours approximated many of the lesions very well

  3. Simulation to aid in interpreting biological relevance and setting of population-level protection goals for risk assessment of pesticides.

    Science.gov (United States)

    Topping, Christopher John; Luttik, Robert

    2017-10-01

    Specific protection goals (SPGs) comprise an explicit expression of the environmental components that need protection and the maximum impacts that can be tolerated. SPGs are set by risk managers and are typically based on protecting populations or functions. However, the measurable endpoints available to risk managers, at least for vertebrates, are typically laboratory tests. We demonstrate, using the example of eggshell thinning in skylarks, how simulation can be used to place laboratory endpoints in context of population-level effects as an aid to setting the SPGs. We develop explanatory scenarios investigating the impact of different assumptions of eggshell thinning on skylark population size, density and distribution in 10 Danish landscapes, chosen to represent the range of typical Danish agricultural conditions. Landscape and timing of application of the pesticide were found to be the most critical factors to consider in the impact assessment. Consequently, a regulatory scenario of monoculture spring barley with an early spray treatment eliciting the eggshell thinning effect was applied using concentrations eliciting effects of zero to 100% in steps of 5%. Setting the SPGs requires balancing scientific, social and political realities. However, the provision of clear and detailed options such as those from comprehensive simulation results can inform the decision process by improving transparency and by putting the more abstract testing data into the context of real-world impacts. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  5. Effect of a uniform magnetic field on dielectric two-phase bubbly flows using the level set method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Hadidi, A.; Nimvari, M.E.

    2012-01-01

    In this study, the behavior of a single bubble in a dielectric viscous fluid under a uniform magnetic field has been simulated numerically using the Level Set method in two-phase bubbly flow. The two-phase bubbly flow was considered to be laminar and homogeneous. Deformation of the bubble was considered to be due to buoyancy and magnetic forces induced from the external applied magnetic field. A computer code was developed to solve the problem using the flow field, the interface of two phases, and the magnetic field. The Finite Volume method was applied using the SIMPLE algorithm to discretize the governing equations. Using this algorithm enables us to calculate the pressure parameter, which has been eliminated by previous researchers because of the complexity of the two-phase flow. The finite difference method was used to solve the magnetic field equation. The results outlined in the present study agree well with the existing experimental data and numerical results. These results show that the magnetic field affects and controls the shape, size, velocity, and location of the bubble. - Highlights: ► The behavior of a single bubble was simulated numerically. ► The bubble was considered in a dielectric viscous fluid. ► A uniform magnetic field was used to study the bubble's behavior. ► Deformation of the bubble was captured using the Level Set method. ► The magnetic field affects the shape, size, velocity, and location of the bubble.

  6. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
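
The outer loop described here, a GA exploring the parameters that define initial guess fields with the level-set inversion residual as fitness, can be sketched with a generic real-coded GA. The operators and settings below are illustrative, and the quadratic test function stands in for the expensive inversion residual.

```python
import numpy as np

rng = np.random.default_rng(2)

def genetic_minimize(fitness, bounds, pop_size=40, n_gen=60, sigma=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, per-slot elitist replacement. In the study each
    individual would encode an initial zonation guess and `fitness` the
    minimal residual of the level-set inversion; here it is any callable."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(n_gen):
        i, j = rng.integers(pop_size, size=(2, pop_size))     # tournaments
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        mates = parents[rng.permutation(pop_size)]
        alpha = rng.uniform(0.0, 1.0, (pop_size, 1))
        children = alpha * parents + (1.0 - alpha) * mates    # blend crossover
        children += rng.normal(0.0, sigma, children.shape) * (hi - lo)
        children = np.clip(children, lo, hi)
        cfit = np.array([fitness(c) for c in children])
        better = cfit < fit                                   # keep improvements
        pop[better], fit[better] = children[better], cfit[better]
    return pop[fit.argmin()], fit.min()

# Toy stand-in for the expensive inversion residual: a quadratic bowl.
best, fbest = genetic_minimize(lambda p: float(((p - 0.3) ** 2).sum()),
                               bounds=[(0.0, 1.0), (0.0, 1.0)])
```

Because each fitness evaluation is itself a full inversion, the population members are natural units for parallel evaluation, which motivates the parallel GA in the abstract.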

  7. GSHR, a Web-Based Platform Provides Gene Set-Level Analyses of Hormone Responses in Arabidopsis

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ran

    2018-01-01

    Full Text Available Phytohormones regulate diverse aspects of plant growth and environmental responses. Recent high-throughput technologies have promoted a more comprehensive profiling of genes regulated by different hormones. However, these omics data generally result in large gene lists that make it challenging to interpret the data and extract insights into biological significance. With the rapid accumulation of these large-scale experiments, especially the transcriptomic data available in public databases, a means of using this information to explore the transcriptional networks is needed. Different platforms have different architectures and designs, and even similar studies using the same platform may obtain data with large variances because of the highly dynamic and flexible effects of plant hormones; this makes it difficult to make comparisons across different studies and platforms. Here, we present a web server providing gene set-level analyses of Arabidopsis thaliana hormone responses. GSHR collected 333 RNA-seq and 1,205 microarray datasets from the Gene Expression Omnibus, characterizing transcriptomic changes in Arabidopsis in response to phytohormones including abscisic acid, auxin, brassinosteroids, cytokinins, ethylene, gibberellins, jasmonic acid, salicylic acid, and strigolactones. These data were further processed and organized into 1,368 gene sets regulated by different hormones or hormone-related factors. By comparing input gene lists to these gene sets, GSHR helps to identify gene sets from the input gene list regulated by different phytohormones or related factors. Together, GSHR links prior information regarding transcriptomic changes induced by hormones and related factors to newly generated data and facilitates cross-study and cross-platform comparisons, helping users mine biologically significant information from large-scale datasets. GSHR is freely available at http://bioinfo.sibs.ac.cn/GSHR/.
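
Comparing an input gene list against stored gene sets, as GSHR does, is typically scored with a hypergeometric overlap test. The function below is that generic test; the abstract does not describe GSHR's exact statistics.

```python
from math import comb

def enrichment_p(n_universe, set_size, query_size, overlap):
    """One-sided overlap P-value: the probability of seeing at least
    `overlap` shared genes if `query_size` genes were drawn at random
    from a universe of `n_universe` genes containing a stored set of
    `set_size` genes (hypergeometric upper tail)."""
    total = comb(n_universe, query_size)
    return sum(comb(set_size, k) * comb(n_universe - set_size, query_size - k)
               for k in range(overlap, min(set_size, query_size) + 1)) / total
```

For example, 10 shared genes between a 100-gene query and a 100-gene set in a ~20 000-gene genome (expected overlap 0.5) yields a vanishingly small P-value, flagging the set as enriched.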

  8. ESCAP mobile training scheme.

    Science.gov (United States)

    Yasas, F M

    1977-01-01

    In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in its being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report their problems in fulfilling their roles. From these grass roots experiences, they made an analysis of the job, determining what knowledge, attitude and skills it required. Analysis of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. Participants also learned how to raise the problems they encountered through the government structures responsible for policy making and decisions. Tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries and project histories; to analyze the particular community by village profiles; to produce indigenous teaching materials; and to practice the role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries.

  9. News Competition: Physics Olympiad hits Thailand Report: Institute carries out survey into maths in physics at university Event: A day for everyone teaching physics Conference: Welsh conference celebrates birthday Schools: Researchers in Residence scheme set to close Teachers: A day for new physics teachers Social: Network combines fun and physics Forthcoming events

    Science.gov (United States)

    2011-09-01

    Competition: Physics Olympiad hits Thailand Report: Institute carries out survey into maths in physics at university Event: A day for everyone teaching physics Conference: Welsh conference celebrates birthday Schools: Researchers in Residence scheme set to close Teachers: A day for new physics teachers Social: Network combines fun and physics Forthcoming events

  10. ZZ-CENPL, Chinese Evaluated Nuclear Parameter Library. ZZ CENPL-DLS, Discrete Level Schemes and Gamma Branching Ratios Library; ZZ CENPL-FBP, Fission Barrier Parameter Library; ZZ CENPL-GDRP, Giant Dipole Resonance Parameter Library; ZZ CENPL-NLD, Nuclear Level Density Parameter Library; ZZ CENPL-MCC, Nuclear Ground State Atomic Masses Library; ZZ CENPL-OMP, Optical Model Parameter Library

    International Nuclear Information System (INIS)

    Su Zongdi

    1995-01-01

    Description of program or function: CENPL - GDRP (Giant Dipole Resonance Parameters for Gamma-Ray): - Format: special format described in documentation; - Nuclides: V, Mn, Co, Ni, Cu, Zn, Ga, Ge, As, Se, Rb, Sr, Y, Zr, Nb, Mo, Rh, Pd, Ag, Cd, In, Sn, Sb, Te, I, Cs, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Ho, Er, Lu, Ta, W, Re, Os, Ir, Pt, Au, Hg, Pb, Bi, Th, U, Np, Pu. - Origin: Experimental values offered by S.S. Dietrich and B.L. Berman. CENPL - FBP (Fission Barrier Parameter Sub-Library): - Format: special format described in documentation; - Nuclides: (1) 51 nuclei region from Th-230 to Cf-255, (2) 46 nuclei region from Th-229 to Cf-253, (3) 24 nuclei region from Pa-232 to Cf-253; - Origin: (1) Lynn, (2) Analysis of experimental data by Back et al., (3) Ohsawa. CENPL - DLS (Discrete level scheme and branch ratio of gamma decay: - Format: Special format described in documentation; - Origin: ENSDF - BNL. CENPL - NLD (Nuclear Level Density): - Format: Special format described in documentation; - Origin: Huang Zhongfu et al. CENPL - OMP (Optical model parameter sub-library): - Format: special format described in documentation ; - Origin: CENDL, ENDF/B-VI, JENDL-3. CENPL - MC (I) and (II) (Atomic masses and characteristic constants for nuclear ground states) : - Format: Brief table format; - Nuclides: 4760 nuclides ranging from Z=0 A=1 to Z=122 A=318. - Origin: Experimental data and systematic results evaluated by Wapstra, theoretical results calculated by Moller, ENSDF - BNL and Nuclear Wallet Cards. CENPL contains the following six sub-libraries: 1. Atomic Masses and Characteristic Constants for nuclear ground states (MCC). This data consists of calculated and in most cases also measured mass excesses, atomic masses, total binding energies, spins, parities, and half-lives of nuclear ground states, abundances, etc. for 4800 nuclides. 2. Discrete Level Schemes and branching ratios of gamma decay (DLS). The data on nuclear discrete levels are based on the Evaluated

  11. Patient- and population-level health consequences of discontinuing antiretroviral therapy in settings with inadequate HIV treatment availability

    Directory of Open Access Journals (Sweden)

    Kimmel April D

    2012-09-01

Full Text Available Abstract Background In resource-limited settings, HIV budgets are flattening or decreasing. A policy of discontinuing antiretroviral therapy (ART) after HIV treatment failure was modeled to highlight trade-offs among competing policy goals of optimizing individual and population health outcomes. Methods In settings with two available ART regimens, we assessed two strategies: (1) continue ART after second-line failure (Status Quo) and (2) discontinue ART after second-line failure (Alternative). A computer model simulated outcomes for a single cohort of newly detected, HIV-infected individuals. Projections were fed into a population-level model allowing multiple cohorts to compete for ART with constraints on treatment capacity. In the Alternative strategy, discontinuation of second-line ART occurred upon detection of antiretroviral failure, specified by WHO guidelines. Those discontinuing failed ART experienced an increased risk of AIDS-related mortality compared to those continuing ART. Results At the population level, the Alternative strategy increased the mean number initiating ART annually by 1,100 individuals (+18.7%) to 6,980 compared to the Status Quo. More individuals initiating ART under the Alternative strategy increased total life-years by 15,000 (+2.8%) to 555,000, compared to the Status Quo. Although more individuals received treatment under the Alternative strategy, life expectancy for those treated decreased by 0.7 years (−8.0%) to 8.1 years compared to the Status Quo. In a cohort of treated patients only, 600 more individuals (+27.1%) died by 5 years under the Alternative strategy compared to the Status Quo. Results were sensitive to the timing of detection of ART failure, number of ART regimens, and treatment capacity. Although we believe the results robust in the short-term, this analysis reflects settings where HIV case detection occurs late in the disease course and treatment capacity and the incidence of newly detected patients are

  12. Formal and informal decision making on water management at the village level: A case study from the Office du Niger irrigation scheme (Mali)

    Science.gov (United States)

    Vandersypen, Klaartje; Keita, Abdoulaye C. T.; Coulibaly, Y.; Raes, D.; Jamin, J.-Y.

    2007-06-01

    Water Users Associations (WUAs) are all too often considered a panacea for improving water management in irrigation schemes. Where grassroots movements are absent, they are usually imposed on farmers by national governments, NGOs, and international donors, without fully considering existing forms of organization. This also happened in the Office du Niger irrigation scheme in Mali, where after a partial irrigation management transfer, WUAs were created to fill the resulting power vacuum. This paper demonstrates that, despite active efforts to organize farmers in WUAs, informal patterns of decision making remain dominant. Given the shortcomings of these informal patterns, WUAs could provide a much-needed platform for institutionalizing collective action, on the condition that farmers accept them. Therefore WUAs should adopt some crucial characteristics of informal patterns of decision making while avoiding their weaknesses. First, making use of the existing authority of village leadership and the central management can improve the credibility of WUAs. Second, allowing flexibility in procedures and rules can make them more appropriate for dealing with collective action problems that are typically temporary and specific. Last, formalizing the current pattern of conflict management and sanctioning might enhance its sphere of action and tackle the current absence of firm engagement with respect to some informal management decisions. In addition, WUAs should represent and be accountable to all farmers, including those residing outside the village community.

  13. Scheme for femtosecond-resolution pump-probe experiments at XFELs with two-color ten GW-level X-ray pulses

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2010-01-01

This paper describes a scheme for pump-probe experiments that can be performed at LCLS and at the European XFEL and determines what additional hardware development will be required to bring these experiments to fruition. It is proposed to derive both pump and probe pulses from the same electron bunch, but from different parts of the tunable-gap baseline undulator. This eliminates the need for synchronization and cancels jitter problems. The method has the further advantage of making a wide frequency range accessible at high peak power and high repetition rate. An important feature of the proposed scheme is that the hardware requirement is minimal. Our technique is based in essence on the "fresh" bunch technique. For its implementation it is sufficient to replace a single undulator module with a short magnetic delay line, i.e. a weak magnetic chicane, which delays the electron bunch with respect to the SASE pulse by half of the bunch length in the linear stage of amplification. This installation does not perturb the baseline mode of operation. We present a feasibility study and we give examples using the parameters of the SASE2 line of the European XFEL. (orig.)

  14. An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method

    Science.gov (United States)

    Zhang, Zhengfang; Chen, Weifeng

    2018-05-01

Maximization of the smallest eigenfrequency of the linearized elasticity system with area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent the two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
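The core idea of a piecewise-constant two-region representation can be illustrated with a generic Chan-Vese-style alternating minimisation: update the two region constants, then reassign each point to the closer constant. This NumPy sketch is illustrative only; it is not the paper's quadratic PCLS formulation for eigenfrequency maximisation, and the function name and synthetic image are invented for the example.

```python
import numpy as np

def two_phase_pcls(f, iters=20):
    """Alternately update the two region means (c0, c1) and reassign each
    pixel to the closer mean, minimising the piecewise-constant data term
    sum chi*(f - c1)**2 + (1 - chi)*(f - c0)**2 over the indicator chi."""
    chi = (f > f.mean()).astype(float)          # initial indicator function
    for _ in range(iters):
        c1 = f[chi == 1].mean() if chi.any() else f.max()
        c0 = f[chi == 0].mean() if (chi == 0).any() else f.min()
        chi = ((f - c1) ** 2 < (f - c0) ** 2).astype(float)
    return chi, c0, c1

# synthetic image: bright square (the "material") on a dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img += 0.05 * np.random.default_rng(0).normal(size=img.shape)
chi, c0, c1 = two_phase_pcls(img)
```

Without a regularisation term this reduces to two-class k-means on intensities; the PCLS machinery in the paper additionally encodes the regions through a level set function so that shape derivatives can drive the optimisation.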

  15. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numerical approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  16. Computational Fluid Dynamics Analysis of Cold Plasma Plume Mixing with Blood Using Level Set Method Coupled with Heat Transfer

    Directory of Open Access Journals (Sweden)

    Mehrdad Shahmohammadi Beni

    2017-06-01

Full Text Available Cold plasmas were proposed for treatment of leukemia. In the present work, conceptual designs of mixing chambers that increased the contact between the two fluids (plasma and blood) through addition of obstacles within rectangular-block-shaped chambers were proposed, and the dynamic mixing between the plasma and blood was studied using the level set method coupled with heat transfer. Enhancement of mixing between blood and plasma in the presence of obstacles was demonstrated. Continuous tracking of fluid mixing with determination of temperature distributions was enabled by the present model, which would be a useful tool for future development of cold plasma devices for treatment of blood-related diseases such as leukemia.

  17. Privacy Preserving Mapping Schemes Supporting Comparison

    NARCIS (Netherlands)

    Tang, Qiang

    2010-01-01

To cater to the privacy requirements in cloud computing, we introduce a new primitive, namely Privacy Preserving Mapping (PPM) schemes supporting comparison. A PPM scheme enables a user to map data items into images in such a way that, with a set of images, any entity can determine the <, =, >
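The comparison-supporting property can be demonstrated with a toy order-preserving mapping built from cumulative random gaps: anyone holding two images can evaluate <, =, > without seeing the underlying values. This sketch only illustrates the property; it is not the PPM construction of the paper and offers no real security.

```python
import random

class ToyOrderPreservingMap:
    """Toy mapping: each integer in [0, domain) gets an image built from
    cumulative strictly positive random gaps, so x < y <=> encode(x) < encode(y).
    Illustrative only -- NOT the paper's PPM scheme and not secure."""
    def __init__(self, domain, seed=42):
        rng = random.Random(seed)
        images, acc = [], 0
        for _ in range(domain):
            acc += rng.randint(1, 1000)   # strictly positive gap
            images.append(acc)
        self._images = images

    def encode(self, x):
        return self._images[x]

m = ToyOrderPreservingMap(100)
```

Because the gaps are strictly positive, the mapping is strictly increasing, so every comparison on images agrees with the comparison on plaintexts.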

  18. Effect of culture levels, ultrafiltered retentate addition, total solid levels and heat treatments on quality improvement of buffalo milk plain set yoghurt.

    Science.gov (United States)

    Yadav, Vijesh; Gupta, Vijay Kumar; Meena, Ganga Sahay

    2018-05-01

We studied the effect of culture level (2, 2.5 and 3%), ultrafiltered (UF) retentate addition (0, 11, 18%), total milk solids (13, 13.50, 14%) and heat treatment (80 and 85 °C/30 min) on the change in pH and titratable acidity (TA), sensory scores and rheological parameters of yoghurt. With a 3% culture level, the required TA (0.90% LA) was achieved in a minimum of 6 h incubation. With an increase in UF retentate addition, a highly significant decrease in overall acceptability, body and texture, and colour and appearance scores was observed, but there was a highly significant increase in the rheological parameters of the yoghurt samples. Yoghurt made from even 13.75% total solids containing no UF retentate was judged sufficiently firm by the sensory panel. Most of the sensory attributes of yoghurt made with 13.50% total solids were significantly better than those of yoghurt prepared with either 13 or 14% total solids. Standardised milk heated to 85 °C/30 min resulted in significantly better overall acceptability in yoghurt. Overall acceptability of the optimised yoghurt was significantly better than that of a branded market sample. UF retentate addition adversely affected yoghurt quality, whereas optimization of culture level, total milk solids and other process parameters noticeably improved the quality of plain set yoghurt, giving a shelf life of 15 days at 4 °C.

  19. Set-up and first operation of a plasma oven for treatment of low level radioactive wastes

    Directory of Open Access Journals (Sweden)

    Nachtrodt Frederik

    2014-01-01

Full Text Available An experimental device for plasma treatment of low and intermediate level radioactive waste was built and tested in several design variations. The laboratory device is designed with the intention to study the general effects and difficulties in a plasma incineration set-up for the future development of a larger scale pilot plant. The key part of the device consists of a novel microwave plasma torch driven by 200 W electric power and operating at atmospheric pressure. It is a specific design characteristic of the torch that a high peak temperature can be reached with a low power input compared to other plasma torches. Experiments have been carried out to analyze the effect of the plasma on materials typical for operational low-level wastes. In some preliminary cold tests the behavior of stable volatile species, e.g. caesium, was investigated by TXRF measurements of material collected from the oven walls and the filtered off-gas. The results help in improving and scaling up the existing design and in understanding the effects for a pilot plant, especially for the off-gas collection and treatment.

  20. Finite Boltzmann schemes

    NARCIS (Netherlands)

    Sman, van der R.G.M.

    2006-01-01

    In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the

  1. Joint multiuser switched diversity and adaptive modulation schemes for spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa

    2012-12-01

    In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, we devise two schemes for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality. In order to alleviate the high feedback load associated with the first scheme, we develop a second scheme based on the concept of switched diversity where the base station scans the users in a sequential manner until an acceptable user is found. In addition to these two selection schemes, we consider two power adaptive settings at the secondary users based on the amount of interference available at the secondary transmitter. In the On/Off power setting, users are allowed to transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users are allowed to vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms where we show the trade-off between the average spectral efficiency and average feedback load for both schemes. © 2012 IEEE.
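The switched-diversity selection described above can be sketched as a sequential scan that stops at the first user meeting both the interference constraint and the SNR threshold, which is what keeps the feedback load low. This is a simplified illustration with invented names and inputs, not the paper's exact protocol:

```python
def switched_selection(snrs, interferences, snr_min, i_max):
    """Scan users in order; return (index, feedback_count) for the first
    user meeting both the interference constraint and the SNR threshold.
    Falls back to the best-SNR user among those meeting the interference
    constraint if the scan finds no acceptable user."""
    feedback = 0
    for k, (snr, i) in enumerate(zip(snrs, interferences)):
        feedback += 1                      # one feedback message per probed user
        if i <= i_max and snr >= snr_min:
            return k, feedback
    ok = [k for k, i in enumerate(interferences) if i <= i_max]
    if not ok:
        return None, feedback
    return max(ok, key=lambda k: snrs[k]), feedback

idx, fb = switched_selection([2.0, 9.0, 11.0], [0.1, 0.5, 0.1],
                             snr_min=10.0, i_max=0.2)
```

The returned feedback count shows the trade-off discussed in the abstract: a best-channel scheme always collects feedback from every user, while the switched scan often stops early.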

  2. Joint multiuser switched diversity and adaptive modulation schemes for spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa; Abdallah, Mohamed M.; Serpedin, Erchin; Alouini, Mohamed-Slim; Alnuweiri, Hussein M.

    2012-01-01

    In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, we devise two schemes for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality. In order to alleviate the high feedback load associated with the first scheme, we develop a second scheme based on the concept of switched diversity where the base station scans the users in a sequential manner until an acceptable user is found. In addition to these two selection schemes, we consider two power adaptive settings at the secondary users based on the amount of interference available at the secondary transmitter. In the On/Off power setting, users are allowed to transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users are allowed to vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms where we show the trade-off between the average spectral efficiency and average feedback load for both schemes. © 2012 IEEE.

  3. A classification scheme for risk assessment methods.

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. Each cell in the Table represents a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the table, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods so that a method chosen is optimal for a situation given. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated we use the word 'method' in this report to refer to a 'risk assessment method', though often we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In

  4. An analysis on the level changing of UET and SET in blood and urine in early stage of kidney disease caused by diabetes

    International Nuclear Information System (INIS)

    Liu Juzhen; Yang Wenying; Cai Tietie

    2001-01-01

Objective: To study the relationship between UET and SET variation and early changes of diabetic nephropathy. Methods: UET and SET were measured in 24 patients with diabetes, 19 with early-stage diabetic nephropathy, 21 with advanced diabetic nephropathy and 30 normal subjects as controls. Results: An apparent rise of UET and SET was observed in all patients when compared to normal controls (P 2 -macroglobulin was revealed (P<0.05). Conclusion: UET and SET levels rose as diabetic nephropathy progressed. As a result, UET and SET may act as sensitive indices in diagnosing early-stage diabetic nephropathy

  5. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is then applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology preserving criteria for segmentation and separation of nuclei at the same time. The proposed method is evaluated using Hematoxylin and Eosin, and fluorescent stained images, performing qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.

  6. Leveling the field: The role of training, safety programs, and knowledge management systems in fostering inclusive field settings

    Science.gov (United States)

    Starkweather, S.; Crain, R.; Derry, K. R.

    2017-12-01

    Knowledge is empowering in all settings, but plays an elevated role in empowering under-represented groups in field research. Field research, particularly polar field research, has deep roots in masculinized and colonial traditions, which can lead to high barriers for women and minorities (e.g. Carey et al., 2016). While recruitment of underrepresented groups into polar field research has improved through the efforts of organizations like the Association of Polar Early Career Scientists (APECS), the experiences and successes of these participants is often contingent on the availability of specialized training opportunities or the quality of explicitly documented information about how to survive Arctic conditions or how to establish successful measurement protocols in harsh environments. In Arctic field research, knowledge is often not explicitly documented or conveyed, but learned through "experience" or informally through ad hoc advice. The advancement of field training programs and knowledge management systems suggest two means for unleashing more explicit forms of knowledge about field work. Examples will be presented along with a case for how they level the playing field and improve the experience of field work for all participants.

  7. Developmental screening tools: feasibility of use at primary healthcare level in low- and middle-income settings.

    Science.gov (United States)

    Fischer, Vinicius Jobim; Morris, Jodi; Martines, José

    2014-06-01

An estimated 150 million children have a disability. Early identification of developmental disabilities is a high priority for the World Health Organization to allow action to reduce impairments through its Mental Health Gap Action Programme. The study assessed the feasibility of using developmental screening and monitoring tools for children aged 0-3 years by non-specialist primary healthcare providers in low-resource settings. A systematic review of the literature was conducted to identify the tools, assess their psychometric properties, and evaluate their feasibility of use in low- and middle-income countries (LMICs). Key indicators to examine feasibility in LMICs were derived from a consultation with 23 international experts. We identified 426 studies, from which 14 tools used in LMICs were extracted for further examination. Three tools reported adequate psychometric properties and met most of the feasibility criteria. These three tools appear promising for use in identifying and monitoring young children with disabilities at primary healthcare level in LMICs. Further research and development are needed to optimize these tools.

  8. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.
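The curve representation behind B-spline level-sets can be illustrated with the standard matrix form of a uniform cubic B-spline segment; the basis functions sum to one for every parameter value, which is what makes the parameterisation well behaved for optimisation. A minimal sketch (the names and control points are illustrative; this is not the authors' full vector level-set machinery):

```python
import numpy as np

# Uniform cubic B-spline segment in matrix form:
#   p(t) = 1/6 * [t^3  t^2  t  1] @ M @ [P0 P1 P2 P3]
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]], dtype=float) / 6.0

def spline_point(P, t):
    """Evaluate one cubic B-spline segment at t in [0, 1] given the four
    control points P with shape (4, d)."""
    T = np.array([t ** 3, t ** 2, t, 1.0])
    return T @ M @ P

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
mid = spline_point(P, 0.5)
```

At t = 0 the weights are (1, 4, 1, 0)/6, so the curve starts at (P0 + 4*P1 + P2)/6 rather than interpolating P0; this smoothing is characteristic of B-splines.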

  9. A quantitative evaluation of pleural effusion on computed tomography scans using B-spline and local clustering level set.

    Science.gov (United States)

    Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin

    2017-01-01

Estimation of the pleural effusion's volume is an important clinical issue. The existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has some other disease (e.g. pneumonia). In order to help solve this issue, the objective of this study is to develop and test a novel algorithm using the B-spline and local clustering level set methods jointly, namely BLL. The BLL algorithm was applied to a dataset involving 27 pleural effusions detected on chest CT examinations of 18 adult patients with free pleural effusion. Study results showed that the average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586±339 ml and 604±352 ml, respectively. For the same patient, the volume of the pleural effusion segmented semi-automatically was 101.8±4.6% of that segmented manually. Dice similarity was found to be 0.917±0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
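The Dice similarity reported above is a standard overlap measure between two binary masks, 2|A∩B| / (|A| + |B|). A minimal NumPy version (mask shapes and names are invented for the example):

```python
import numpy as np

def dice_similarity(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0        # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy "automatic" and "manual" segmentations of a 10x10 slice
auto = np.zeros((10, 10), dtype=int); auto[2:8, 2:8] = 1     # 36 pixels
manual = np.zeros((10, 10), dtype=int); manual[3:8, 2:8] = 1  # 30 pixels
d = dice_similarity(auto, manual)   # overlap is 30 pixels -> 60/66
```

A Dice value of 0.917, as in the study, indicates that the automatic and manual masks overlap in roughly 92% of their combined area.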

  10. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    Science.gov (United States)

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.

  11. Loosely coupled level sets for retinal layers and drusen segmentation in subjects with dry age-related macular degeneration

    Science.gov (United States)

    Novosel, Jelena; Wang, Ziyuan; de Jong, Henk; Vermeer, Koenraad A.; van Vliet, Lucas J.

    2016-03-01

    Optical coherence tomography (OCT) is used to produce high-resolution three-dimensional images of the retina, which permit the investigation of retinal irregularities. In dry age-related macular degeneration (AMD), a chronic eye disease that causes central vision loss, disruptions such as drusen and changes in retinal layer thicknesses occur which could be used as biomarkers for disease monitoring and diagnosis. Due to the topology disrupting pathology, existing segmentation methods often fail. Here, we present a solution for the segmentation of retinal layers in dry AMD subjects by extending our previously presented loosely coupled level sets framework which operates on attenuation coefficients. In eyes affected by AMD, Bruch's membrane becomes visible only below the drusen and our segmentation framework is adapted to delineate such a partially discernible interface. Furthermore, the initialization stage, which tentatively segments five interfaces, is modified to accommodate the appearance of drusen. This stage is based on Dijkstra's algorithm and combines prior knowledge on the shape of the interface, gradient and attenuation coefficient in the newly proposed cost function. This prior knowledge is incorporated by varying the weights for horizontal, diagonal and vertical edges. Finally, quantitative evaluation of the accuracy shows a good agreement between manual and automated segmentation.
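The Dijkstra-based initialization described above treats each retinal interface as a cheapest left-to-right path through a cost image, with horizontal and diagonal edges. A minimal sketch of that graph search (the cost grid and names are invented; the paper's cost function additionally combines shape priors, gradient and attenuation coefficients):

```python
import heapq

def min_cost_path(cost):
    """Dijkstra over a grid: cheapest left-to-right path where each step
    moves one column right to the same, upper, or lower row (the edge set
    typically used for layer-interface graphs). Returns the total cost."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    pq = []
    for r in range(rows):                  # the interface may start at any row
        dist[r][0] = cost[r][0]
        heapq.heappush(pq, (cost[r][0], r, 0))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c]:
            continue                       # stale queue entry
        if c == cols - 1:
            return d                       # first settled node in last column
        for dr in (-1, 0, 1):              # horizontal and diagonal moves
            nr = r + dr
            if 0 <= nr < rows and d + cost[nr][c + 1] < dist[nr][c + 1]:
                dist[nr][c + 1] = d + cost[nr][c + 1]
                heapq.heappush(pq, (dist[nr][c + 1], nr, c + 1))
    return min(dist[r][cols - 1] for r in range(rows))

grid = [[9, 9, 1, 9],
        [1, 1, 9, 1],
        [9, 9, 1, 1]]
best = min_cost_path(grid)
```

Because Dijkstra settles nodes in order of increasing distance, the first node popped in the last column already carries the global minimum over all left-to-right paths.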

  12. Study of three-dimensional Rayleigh-Taylor instability in compressible fluids through level set method and parallel computation

    International Nuclear Information System (INIS)

    Li, X.L.

    1993-01-01

    Computation of three-dimensional (3-D) Rayleigh-Taylor instability in compressible fluids is performed on a MIMD computer. A second-order TVD scheme is applied with a fully parallelized algorithm to the 3-D Euler equations. The computational program is implemented for a 3-D study of bubble evolution in the Rayleigh-Taylor instability with varying bubble aspect ratio and for large-scale simulation of a 3-D random fluid interface. The numerical solution is compared with the experimental results by Taylor
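
Second-order TVD (total variation diminishing) schemes like the one above limit reconstructed slopes so that no new extrema are created. As a hedged illustration on a far simpler model problem than the 3-D Euler equations, here is 1-D linear advection with a minmod limiter, one common TVD choice; forward Euler is used in time for brevity, whereas a production scheme would pair the limiter with an SSP integrator or a Lax-Wendroff-type correction.

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude argument if signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_tvd(u, cfl, steps):
    """Periodic 1-D advection u_t + a u_x = 0 with a > 0, cfl = a*dt/dx.
    MUSCL-type limited reconstruction + upwind flux, conservative form."""
    n = len(u)
    for _ in range(steps):
        # limited slopes from neighbouring differences
        s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
        # left interface states u_{i+1/2} (upwind side for a > 0)
        ul = [u[i] + 0.5 * s[i] for i in range(n)]
        # conservative update; ul[i-1] wraps periodically via Python indexing
        u = [u[i] - cfl * (ul[i] - ul[i - 1]) for i in range(n)]
    return u

# Advect a square pulse: the conservative form preserves the total mass.
u0 = [1.0] * 5 + [0.0] * 15
u1 = advect_tvd(u0, 0.5, 20)
```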

  13. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more multifaceted, integrated, and complex, traditional single-objective approaches to optimal design are becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics, developed for application in engineering system design. Herein, we present a stochastic function that manages the low-level meta-heuristics to increase the likelihood of reaching a global optimum solution. A Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis, yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Randomized decision making makes the scheme attractive and easy to implement. Injecting feasible solutions significantly alters the search direction and adds population diversity, helping to accomplish the pre-defined goals set in the proposed scheme.
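
A hyper-heuristic, as described above, is a manager that chooses among low-level meta-heuristics rather than solving the problem directly. The toy sketch below is a hedged stand-in for that idea: a stochastic manager picks one of two hypothetical move operators per iteration, with selection probabilities adapted from past success. The operators, reward rule, and single-objective target are all illustrative assumptions; the paper's scheme is multiobjective and uses GA/SA/swarm operators.

```python
import random

def hyperheuristic_minimize(f, x0, heuristics, iters=500, seed=1):
    """Toy hyper-heuristic: roulette-select a low-level operator each
    iteration, reward operators that produce improvements."""
    rng = random.Random(seed)
    scores = [1.0] * len(heuristics)
    x, fx = x0, f(x0)
    for _ in range(iters):
        i = rng.choices(range(len(heuristics)), weights=scores)[0]
        cand = heuristics[i](x, rng)
        fc = f(cand)
        if fc < fx:                 # accept improvement, reward operator
            x, fx = cand, fc
            scores[i] += 1.0
        else:                       # mild decay so no operator dominates
            scores[i] = max(0.1, scores[i] * 0.99)
    return x, fx

# Two hypothetical low-level operators: a small and a large random step.
small = lambda x, rng: x + rng.uniform(-0.1, 0.1)
large = lambda x, rng: x + rng.uniform(-2.0, 2.0)
best_x, best_f = hyperheuristic_minimize(lambda x: (x - 3.0) ** 2, 10.0,
                                         [small, large])
```

The large-step operator earns rewards early (escaping the poor start), while the small-step operator dominates once near the optimum, which is the adaptive behaviour a hyper-heuristic manager is meant to produce.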

  14. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and an initialization for the subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well with state-of-the-art methods at much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential for clinical usage with high effectiveness, robustness and efficiency.
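
The Dice overlap ratio quoted above is the standard agreement measure between a predicted and a ground-truth mask: twice the intersection divided by the sum of the two mask sizes. A minimal implementation, assuming masks flattened to 0/1 lists:

```python
def dice_overlap(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for two binary
    masks given as equal-length sequences of 0/1 values."""
    inter = sum(x * y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree

# Three of four voxels overlap between prediction and ground truth:
score = dice_overlap([1, 1, 1, 0], [0, 1, 1, 1])
```

A Dice score of 0.96, as reported for the liver, therefore means the prediction and manual delineation share 96% of their combined volume in this sense.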

  15. Scheme Program Documentation Tools

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2004-01-01

    are separate and intended for different documentation purposes they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which---in a systematic way---makes an XML language available...... as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource....

  16. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score-level, feature-level and decision-level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on the face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, the CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of the schemes by reducing the number of features and selecting the optimized weights for feature-level and score-level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of the proposed fusion schemes over unimodal and multimodal fusion methods.
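
Score-level fusion of the kind described above typically normalises each modality's matcher scores and combines them with a weighted sum; the per-modality weight is exactly the kind of parameter an optimiser such as BSA would tune. The sketch below assumes min-max normalisation and a two-modality weighted sum; both choices are illustrative, not necessarily the paper's.

```python
def minmax_normalize(scores):
    """Map scores to [0, 1]; degenerate (constant) score lists map to 0."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse_scores(face_scores, iris_scores, w_face=0.5):
    """Score-level fusion: normalise each modality, then weighted sum."""
    f = minmax_normalize(face_scores)
    i = minmax_normalize(iris_scores)
    return [w_face * a + (1.0 - w_face) * b for a, b in zip(f, i)]

# Raw matcher scores on different scales fuse onto a common [0, 1] scale:
fused = fuse_scores([10.0, 20.0, 30.0], [1.0, 2.0, 3.0], w_face=0.7)
```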

  17. Can adding web-based support to UK primary care exercise referral schemes improve patients’ physical activity levels? Findings from an internal pilot study.

    Directory of Open Access Journals (Sweden)

    Adrian Taylor

    2015-10-01

    Full Text Available Background: Promoting physical activity (PA) via primary care exercise referral schemes (ERS) is common, but there is no rigorous evidence for long-term changes in PA (Pavey et al, 2011) among those with chronic conditions. From July 2015, for 15 months, the e-coachER trial began to recruit 1400 patients (in SW England, Birmingham and Glasgow) with one or more chronic conditions including diabetes, obesity, hypertension, osteoarthritis, or depression, who are eligible for and about to attend an ERS. The two-arm parallel RCT is powered to determine whether the addition of a web-based, interactive, theory-driven and evidence-based support system called e-coachER (hosted on the 'LifeGuide' platform) will result in at least 10% more patients who do 150 mins or more per week of accelerometer-assessed moderate or vigorous physical activity (MVPA) at 12 months. Recruitment into the trial is within primary care, using both mail-merged patient invitations and opportunistic GP invitations (and exercise referrals). Within the trial, after participants are screened, provide consent and complete baseline assessments, they are randomised to receive usual ERS at each site or usual ERS plus a mailed Welcome Pack with registration details to access e-coachER on-line. Inclusion criteria for entering the trial are: (1) aged 16-74 years; (2) one or more of the following: obesity (BMI 30-35), hypertension (SBP 140-179 or DBP 90-109), type 2 diabetes, lower limb osteoarthritis, or a recent history of treatment for depression; (3) in the two lowest (of four) groups using the GP Physical Activity Questionnaire; (4) an e-mail address and access to the internet; (5) eligible for an ERS. The intervention rationale, design and content are reported in another presentation. Aims: This presentation will provide initial findings from a 3 month internal pilot phase with a focus on trial recruitment and initial intervention engagement. We will present data on the

  18. On-line sample-pre-treatment schemes for trace-level determinations of metals by coupling flow injection or sequential injection with ICP-MS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2003-01-01

    a polytetrafluoroethylene (PTFE) knotted reactor (KR), solvent extraction-back extraction and hydride/vapor generation. It also addresses a novel, robust approach, whereby the protocol of SI-LOV-bead injection (BI) on-line separation and pre-concentration of ultra-trace levels of metals by a renewable microcolumn...

  19. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  20. Adaptive protection scheme

    Directory of Open Access Journals (Sweden)

    R. Sitharthan

    2016-09-01

    Full Text Available This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid: namely, grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto-reclosers, through which it recovers faster from faults and thereby increases the reliability of the microgrid. The effectiveness of the proposed adaptive protection is studied through time domain simulations carried out in the PSCAD/EMTDC software environment.

  1. Can adding web-based support to UK primary care exercise referral schemes improve patients’ physical activity levels? Intervention development for the e-coachER study.

    Directory of Open Access Journals (Sweden)

    Adrian Taylor

    2015-10-01

    Aims: This presentation will provide details on the intervention development and the data to be captured to inform a process evaluation. Methods: An initial version of e-coachER was produced, building on experiences from obesity and diabetes self-management interventions using the LifeGuide platform, and beta tested over 7 months. Co-applicants and researchers then provided feedback on a time-truncated version, and ERS patients on a real-time version, for 5 months before it was locked for the RCT. Within the trial, after participants are screened, provide consent and complete baseline assessments, they are randomised to receive usual ERS at each site or usual ERS plus a mailed Welcome Pack (including a user-friendly guide to register for e-coachER access on-line, a free pedometer and a fridge magnet with daily recording strips for step counts or minutes of MVPA). Contact details for an e-coachER facilitator are provided for additional technical support. Results: At the core of the intervention are '7 Steps to Health', each intended to last 5-10 mins, to encourage patients to think about the benefits of PA, seek support from an ERS practitioner (and friends/family, and the web), self-monitor PA with a pedometer and upload steps or minutes of MVPA, set progressive goals, build confidence, autonomy and relatedness (from Self-Determination Theory), find ways to increase sustainable PA more broadly, and deal with setbacks. An avatar (to avoid having to represent a range of individual characteristics such as age, gender, and ethnicity) and brief narratives are used throughout to normalise and support behaviour change and encourage e-coachER use. Automatic or patient-chosen e-mails from the LifeGuide system promote on-going use of functions such as recording weekly PA and goal setting. For each site, participants are able to access links to reputable generic websites for further information about chronic conditions and lifestyle, links to other sites and apps for self

  2. Comparative Study on Feature Selection and Fusion Schemes for Emotion Recognition from Speech

    Directory of Open Access Journals (Sweden)

    Santiago Planet

    2012-09-01

    Full Text Available The automatic analysis of speech to detect affective states may improve the way users interact with electronic devices. However, analysis at the acoustic level alone may not be enough to determine the emotion of a user in a realistic scenario. In this paper we analyzed the spontaneous speech recordings of the FAU Aibo Corpus at the acoustic and linguistic levels to extract two sets of features. The acoustic set was reduced by a greedy procedure selecting the most relevant features to optimize the learning stage. We compared two versions of this greedy selection algorithm, performing the search for relevant features forwards and backwards. We experimented with three classification approaches: Naïve-Bayes, a support vector machine and a logistic model tree, and two fusion schemes: decision-level fusion, merging the hard decisions of the acoustic and linguistic classifiers by means of a decision tree; and feature-level fusion, concatenating both sets of features before the learning stage. Despite the low performance achieved with the linguistic data alone, combining them with the acoustic information yielded a dramatic improvement over the results achieved by the acoustic modality on its own. The results achieved by the classifiers using the parameters merged at feature level outperformed the classification results of the decision-level fusion scheme, despite the simplicity of the latter. Moreover, the extremely reduced set of acoustic features obtained by the greedy forward search selection algorithm improved the results provided by the full set.
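
The two fusion schemes compared above differ only in where the modalities meet: feature-level fusion concatenates the acoustic and linguistic vectors before any classifier sees them, while decision-level fusion merges the classifiers' hard decisions afterwards. A hedged, minimal sketch (the paper merges decisions with a decision tree; a majority vote stands in for it here):

```python
from collections import Counter

def feature_level(acoustic, linguistic):
    """Feature-level fusion: one concatenated vector for a single learner."""
    return acoustic + linguistic

def decision_level(decisions):
    """Decision-level fusion: combine per-classifier hard labels.
    Majority vote here; the paper trains a decision tree instead."""
    return Counter(decisions).most_common(1)[0][0]

merged = feature_level([0.12, 0.87], [0.3])       # fed to one classifier
label = decision_level(["anger", "anger", "neutral"])
```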

  3. The effect of goal setting on fruit and vegetable consumption and physical activity level in a Web-based intervention.

    Science.gov (United States)

    O'Donnell, Stephanie; Greene, Geoffrey W; Blissmer, Bryan

    2014-01-01

    To explore the relationship between goal setting and fruit and vegetable (FV) consumption and physical activity (PA) in an intervention for college students. Secondary data analysis of intervention group participants from a 10-week online intervention with complete weekly data (n = 724). Outcomes (cups of FV per day and minutes of PA per week) and goals for both behaviors were reported online each week. Weekly differences between goals and behaviors were calculated, as well as the proportion meeting individual goals and meeting recommendations for behaviors. There were significant (P goal setting on both behaviors and of goal group (tertile of meeting weekly goals) on behavior, as well as meeting recommendations for both behaviors. There was an increase in FV consumption (P Goal setting as part of a Web-based intervention for college students was effective, but results differed for FV and PA. Goal setting for maintaining behavior may need to differ from goal setting for changing behavior. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  4. Methodology for setting the reference levels in the measurements of the dose rate absorbed in air due to the environmental gamma radiation

    International Nuclear Information System (INIS)

    Dominguez Ley, Orlando; Capote Ferrera, Eduardo; Caveda Ramos, Celia; Alonso Abad, Dolores

    2008-01-01

    Full text: The methodology for setting the reference levels for measurements of the gamma dose rate absorbed in air is described. The registration level was obtained using statistical methods. To set the alarm levels, it was necessary to start from a certain affectation level, which activates the investigation operation mode when it is reached. This affectation level had to be transformed into values of the indicators selected to signal an alarm in the network, allowing their direct comparison and, at the same time, greater operability. The affectation level was taken as an effective dose of 1 mSv/y, which is the international dose limit for the public. The conversion factor obtained empirically as a consequence of the Chernobyl accident was adopted to convert the annual effective dose into values of effective dose rate in air. These factors are the most important in our work, since the main task of the National Network of Environmental Radiological Surveillance of the Republic of Cuba is to detect accidents with regional-scale effects, and the Chernobyl accident is precisely an example of pollution at this scale. The alarm level setting was based on the results obtained in the first year after the Chernobyl accident. For this purpose, some transformations were applied. In the final results, a correction factor was introduced that depends on the season in which the measurement was made, taking into account the influence of different meteorological events on this indicator. (author)
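
The core conversion described above turns an annual effective dose limit into a dose rate in air. As a hedged back-of-the-envelope check only: dividing 1 mSv/y by the hours in a year gives the continuous-exposure rate, about 114 nSv/h. The authors instead use empirical Chernobyl-derived conversion factors plus seasonal corrections, which this simple division deliberately ignores.

```python
HOURS_PER_YEAR = 365.25 * 24  # about 8766 h

def annual_dose_to_rate_nSv_h(dose_mSv_per_year):
    """Convert an annual effective dose (mSv/y) to an air dose rate in
    nSv/h, assuming continuous year-round exposure (a simplification)."""
    return dose_mSv_per_year * 1e6 / HOURS_PER_YEAR

rate = annual_dose_to_rate_nSv_h(1.0)  # roughly 114 nSv/h
```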

  5. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  6. Development of an efficient and economical small scale management scheme for low and intermediate-Level radioactive waste and its impact on the environment

    International Nuclear Information System (INIS)

    Salomon, A.Ph.; Panem, J.A.; Manalastas, H.C.; Cortez, S. L.; Paredes, C.H.; Bartolome, Z.M.

    1976-05-01

    This paper describes the efforts made towards the establishment of a pilot-scale management system for the low and intermediate-level radioactive wastes of the Atomic Research Center. The past and current practices in handling radioactive wastes are discussed, and an assessment of their capacity to meet the projected waste production is presented. The future waste management requirements of the Center were evaluated, and comparative studies on the Lime-Soda and Phosphate processes were conducted on simulated and raw liquid wastes with initial activity ranging from 10^-4 µCi/ml to 10^-2 µCi/ml, to establish the ideal parameters for attaining maximum removal of radioactivity from liquids. The effectiveness of treatment was evaluated in terms of the decontamination factor, DF, obtained

  7. A web-based study of the relationship of duration of insulin pump infusion set use and fasting blood glucose level in adults with type 1 diabetes.

    Science.gov (United States)

    Sampson Perrin, Alysa J; Guzzetta, Russell C; Miller, Kellee M; Foster, Nicole C; Lee, Anna; Lee, Joyce M; Block, Jennifer M; Beck, Roy W

    2015-05-01

    To evaluate the impact of infusion set use duration on glycemic control, we conducted an Internet-based study using the T1D Exchange's online patient community, Glu (myGlu.org). For 14 days, 243 electronically consented adults with type 1 diabetes (T1D) entered online that day's fasting blood glucose (FBG) level, the prior day's total daily insulin (TDI) dose, and whether the infusion set was changed. Mean duration of infusion set use was 3.0 days. Mean FBG level was higher with each successive day of infusion set use, increasing from 126 mg/dL on Day 1 to 133 mg/dL on Day 3 to 147 mg/dL on Day 5 (P<0.001). TDI dose did not vary with increased duration of infusion set use. Internet-based data collection was used to rapidly conduct the study at low cost. The results indicate that FBG levels increase with each additional day of insulin pump infusion set use.
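
The key aggregation in the study above is a mean FBG grouped by how many days the current infusion set has been worn. A hedged sketch with stdlib tools; the record format and the tiny data set are illustrative (the toy values are chosen so the group means echo the reported 126/133/147 mg/dL), not the study data.

```python
from collections import defaultdict

def mean_fbg_by_set_day(records):
    """records: iterable of (day_of_set_use, fbg_mg_dl) pairs.
    Returns {day: mean FBG} sorted by day of use."""
    sums = defaultdict(lambda: [0.0, 0])
    for day, fbg in records:
        sums[day][0] += fbg
        sums[day][1] += 1
    return {day: total / n for day, (total, n) in sorted(sums.items())}

readings = [(1, 120), (1, 132), (3, 130), (3, 136), (5, 150), (5, 144)]
means = mean_fbg_by_set_day(readings)
```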

  8. Continuous soil maps - a fuzzy set approach to bridge the gap between aggregation levels of process and distribution models

    NARCIS (Netherlands)

    Gruijter, de J.J.; Walvoort, D.J.J.; Gaans, van P.F.M.

    1997-01-01

    Soil maps as multi-purpose models of spatial soil distribution have a much higher level of aggregation (map units) than the models of soil processes and land-use effects that need input from soil maps. This mismatch between aggregation levels is particularly detrimental in the context of precision

  9. The Level of Vision Necessary for Competitive Performance in Rifle Shooting: Setting the Standards for Paralympic Shooting with Vision Impairment

    NARCIS (Netherlands)

    Allen, P.M.; Latham, K.; Mann, D.L.; Ravensbergen, H.J.C.; Myint, J.

    2016-01-01

    The aim of this study was to investigate the level of vision impairment (VI) that would reduce performance in shooting; to guide development of entry criteria to visually impaired (VI) shooting. Nineteen international-level shooters without VI took part in the study. Participants shot an air rifle,

  10. Proposal for a scheme to generate 10 TW-Level femtosecond X-ray pulses for imaging single protein molecules at the European XFEL

    Energy Technology Data Exchange (ETDEWEB)

    Serkez, Svitozar; Kocharyan, Vitali; Saldin, Evgeni; Zagorodnov, Igor [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Yefanov, Oleksander [Center for Free-Electron Laser Science, Hamburg (Germany)

    2013-06-15

    Single biomolecular imaging using XFEL radiation is an emerging method for protein structure determination using the 'diffraction before destruction' method at near atomic resolution. Crucial parameters for such bio-imaging experiments are photon energy range, peak power, pulse duration, and transverse coherence. The largest diffraction signals are achieved at the longest wavelength that supports a given resolution, which should be better than 0.3 nm. We propose a configuration which combines self-seeding and undulator tapering techniques with the emittance-spoiler method in order to increase the XFEL output peak power and to shorten the pulse duration to a level sufficient for performing bio-imaging of single protein molecules at the optimal photon energy range, i.e. around 4 keV. Experiments at the LCLS confirmed the feasibility of these three new techniques. Based on start-to-end simulations we demonstrate that self-seeding, combined with undulator tapering, allows one to achieve up to a 100-fold increase in peak power. A slotted foil in the last bunch compressor is added for X-ray pulse duration control. Simulations indicate that one can achieve diffraction to the desired resolution with 50 mJ (corresponding to 10^14 photons) per 10 fs pulse at 3.5 keV photon energy in a 100 nm focus. This result is exemplified using the photosystem I membrane protein as a case study.

  11. Renormalization in self-consistent approximation schemes at finite temperature I: theory

    International Nuclear Information System (INIS)

    Hees, H. van; Knoll, J.

    2001-07-01

    Within finite temperature field theory, we show that truncated non-perturbative self-consistent Dyson resummation schemes can be renormalized with local counter-terms defined at the vacuum level. The requirements are that the underlying theory is renormalizable and that the self-consistent scheme follows Baym's Φ-derivable concept. The scheme generates both the renormalized self-consistent equations of motion and the closed equations for the infinite set of counter-terms. At the same time, the corresponding 2PI generating functional and the thermodynamic potential can be renormalized in consistency with the equations of motion. This guarantees that the standard Φ-derivable properties, such as thermodynamic consistency and exact conservation laws, also hold for the renormalized approximation scheme. The proof uses the techniques of BPHZ renormalization to cope with the explicit and the hidden overlapping vacuum divergences. (orig.)

  12. Theoretical study of charge trapping levels in silicon nitride using the LDA-1/2 self-energy correction scheme for excited states

    International Nuclear Information System (INIS)

    Patrocinio, Weslley S.; Ribeiro, Mauro; Fonseca, Leonardo R.C.

    2012-01-01

    Silicon nitride, with a permittivity mid-way between SiO2 and common high-k materials such as HfO2, is widely used in microelectronics as an insulating layer on top of oxides, where it serves as an impurity barrier with the positive side effect of increasing the dielectric constant of the insulator when it is SiO2. It is also employed as charge storage in nonvolatile memory devices thanks to its high concentration of charge traps. However, in the case of memories, it is still unclear which defects are responsible for charge trapping and what the impact of defect concentration is on the structural and electronic properties of SiNx. Indeed, for the amorphous phase the band gap was measured in the range 5.1-5.5 eV, with long tails in the density of states penetrating the gap region. It is still not clear which defects are responsible for these tails. On the other hand, the K-center defects have been associated with charge trapping, though their origin is assigned to one Si back bond. To investigate the contribution of defect states to the band edge tails and band gap states, we adopted the β phase of stoichiometric silicon nitride (β-Si3N4) as our model material and calculated its electronic properties employing ab initio DFT/LDA simulations with self-energy correction to improve the location of defect states in the SiNx band gap, through the correction of the band gap underestimation typical of DFT/LDA. We considered some important defects in SiNx, such as the Si anti-site and the N vacancy with H saturation, in two defect concentrations. The location of our calculated defect levels in the band gap correlates well with the available experimental data, offering a structural explanation for the measured band edge tails and charge trapping characteristics.

  13. Economic comparison of food, non food crops, set-aside at a regional level with a linear programming model

    International Nuclear Information System (INIS)

    Sourie, J.C.; Hautcolas, J.C.; Blanchet, J.

    1992-01-01

    This paper is concerned with a regional linear programming model. Its purpose is a simulation of the European Economic Community supply of non-food crops at the farm gate according to different sets of European Common Agricultural Policy (CAP) measures. The methodology is first described, with special emphasis on the aggregation problem. The model allows the simultaneous calculation of the impact of non-food crops on the farmer's income and on the agricultural budget. The model is then applied to an intensive agricultural region (400 000 ha of arable land). In this region, sugar beet and rape appear to be the least costly resources, both for the farmers and the CAP taxpayers. An improvement in the economic situation of the two previous agents can be obtained only if a tax exemption on ethanol and rape oil and a subsidy per hectare are allowed. This subsidy can be lower than the set-aside premium. (author)
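
The model above is a linear program allocating arable land among food crops, non-food crops and set-aside, trading farm income against the agricultural budget. As a hedged toy version (all coefficients, activity names and the budget cap are invented), a coarse grid search over allocations is enough to illustrate the trade-off; the real model would solve a proper LP with many more activities and constraints.

```python
from itertools import product

AREA = 100.0  # hectares of arable land in the toy region
activities = {  # name: (income per ha, subsidy cost per ha) - invented
    "wheat":     (400.0,   0.0),
    "rapeseed":  (450.0,  60.0),
    "set_aside": (150.0, 100.0),
}

def best_allocation(budget, step=10.0):
    """Maximise total farm income subject to a cap on subsidy outlay,
    searching a coarse grid of hectare allocations."""
    names = list(activities)
    shares = [k * step for k in range(int(AREA / step) + 1)]
    best = None
    for alloc in product(shares, repeat=len(names) - 1):
        rest = AREA - sum(alloc)
        if rest < 0:
            continue
        ha = dict(zip(names, alloc + (rest,)))
        cost = sum(ha[n] * activities[n][1] for n in names)
        if cost > budget:
            continue
        income = sum(ha[n] * activities[n][0] for n in names)
        if best is None or income > best[0]:
            best = (income, ha)
    return best

income, hectares = best_allocation(budget=3000.0)
```

With a zero subsidy budget the toy model plants only the unsubsidised crop; raising the budget lets the subsidised non-food crop enter, mirroring the income-versus-budget trade-off the paper quantifies.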

  14. Developmental Screening Tools: Feasibility of Use at Primary Healthcare Level in Low- and Middle-income Settings

    OpenAIRE

    Fischer, Vinicius Jobim; Morris, Jodi; Martines, José

    2014-01-01

    ABSTRACT An estimated 150 million children have a disability. Early identification of developmental disabilities is a high priority for the World Health Organization to allow action to reduce impairments through Gap Action Program on mental health. The study identified the feasibility of using the developmental screening and monitoring tools for children aged 0-3 year(s) by non-specialist primary healthcare providers in low-resource settings. A systematic review of the literature was conducte...

  15. The RISC-V Instruction Set Manual. Volume 1: User-Level ISA, Version 2.0

    Science.gov (United States)

    2014-05-06

    (Table-of-contents fragments: RV128I Base Integer Instruction Set; Calling Convention; C Datatypes and Alignment.) Conversions such as FCVT.D.S are encoded in the OP-FP major opcode space, and both the source and destination are floating-point registers. The rs2 field encodes the datatype of the source, and the fmt field encodes the datatype of the destination. FCVT.S.D rounds according to the RM field; FCVT.D.S will never round.

  16. Evaluating statistical cloud schemes

    OpenAIRE

    Grützun, Verena; Quaas, Johannes; Morcrette , Cyril J.; Ament, Felix

    2015-01-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...

  17. Scheme of energy utilities

    International Nuclear Information System (INIS)

    2002-04-01

    This scheme defines the objectives relative to renewable energies and the rational use of energy within the framework of the national energy policy. It evaluates the needs and potential of the regions and recommends joint actions between the government and the territorial organizations. The document is presented in four parts: the situation, stakes and forecasts; possible actions for new measures; the scheme management; and the analysis of regional contributions. (A.L.B.)

  18. Design of Rate-Compatible Parallel Concatenated Punctured Polar Codes for IR-HARQ Transmission Schemes

    Directory of Open Access Journals (Sweden)

    Jian Jiao

    2017-11-01

    Full Text Available In this paper, we propose rate-compatible (RC) parallel concatenated punctured polar (PCPP) codes for incremental redundancy hybrid automatic repeat request (IR-HARQ) transmission schemes, which can transmit multiple data blocks over a time-varying channel. The PCPP coding scheme can provide RC polar coding blocks in order to adapt to channel variations. First, we investigate an improved random puncturing (IRP) pattern for the PCPP coding scheme, motivated by the code-rate and block-length limitations of conventional polar codes. The proposed IRP algorithm selects puncturing bits only from the frozen-bit set and keeps the information bits unchanged during puncturing, which improves decoding performance by 0.2-1 dB over the existing random puncturing (RP) algorithm. Then, we develop a RC IR-HARQ transmission scheme based on PCPP codes. By analyzing the overhead of the previously successfully decoded PCPP coding block in our IR-HARQ scheme, the optimal initial code-rate can be determined for each new PCPP coding block over time-varying channels. Simulation results show that the average number of transmissions is about 1.8 per PCPP coding block in our RC IR-HARQ scheme with a 2-level PCPP encoding construction, roughly half the average number of transmissions required by existing RC polar coding schemes.
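
The central constraint of the IRP pattern above is that puncturing positions are drawn only from the frozen-bit set, never from the information bits. A hedged sketch of just that selection step (the interaction with the polar code construction and the actual encoding/decoding are omitted, and the block length and information set below are arbitrary examples):

```python
import random

def improved_random_puncturing(n, info_set, n_punct, seed=0):
    """Choose n_punct puncturing positions uniformly at random among the
    frozen positions of a length-n polar code, leaving the information
    bit positions (info_set) untouched."""
    frozen = [i for i in range(n) if i not in info_set]
    if n_punct > len(frozen):
        raise ValueError("cannot puncture more bits than are frozen")
    rng = random.Random(seed)
    return sorted(rng.sample(frozen, n_punct))

# Length-16 code with 5 information positions; puncture 4 frozen bits.
punct = improved_random_puncturing(16, {7, 11, 13, 14, 15}, 4)
```

Plain random puncturing would sample from all n positions; restricting the draw to the frozen set is exactly what lets the information bits survive puncturing unchanged.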

  19. Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings

    Directory of Open Access Journals (Sweden)

    Caixue Zhou

    2017-01-01

    Full Text Available Generalized signcryption (GSC) can be applied as an encryption scheme, a signature scheme, or a signcryption scheme with only one algorithm and one key pair. A key-insulated mechanism can resolve the private key exposure problem. To ensure the security of cloud storage, we introduce the key-insulated mechanism into GSC and propose a concrete scheme without bilinear pairings in the certificateless cryptosystem setting. We provide a formal definition and a security model of certificateless key-insulated GSC. Then, we prove that our scheme is confidential under the computational Diffie-Hellman (CDH) assumption and unforgeable under the elliptic curve discrete logarithm (EC-DL) assumption. Our scheme also supports both random-access key update and secure key update. Finally, we evaluate the efficiency of our scheme and demonstrate that it is highly efficient. Thus, our scheme is well suited to users who communicate with the cloud using mobile devices.
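    The key-insulation idea itself, independent of this particular certificateless construction, can be sketched as a helper-issued update token: each time period gets a fresh temporal key, so exposing one period's key reveals nothing about the others. The modulus, hash, and additive structure below are generic illustrative assumptions, not the paper's scheme:

```python
import hashlib

P = 2**255 - 19  # illustrative prime modulus, not a parameter from the paper

def h(*parts):
    """Hash arbitrary inputs to an integer mod P."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def initial_key(master_secret, helper_secret):
    # period-0 temporal key combines the user's secret with the helper's share
    return (master_secret + h(helper_secret, 0)) % P

def key_update(sk_prev, helper_secret, t):
    # the helper issues a token for period t; adding it swaps the previous
    # period's share for the new one, so the old key can be securely erased
    token = (h(helper_secret, t) - h(helper_secret, t - 1)) % P
    return (sk_prev + token) % P
```

    In this toy model the period-t key equals `master_secret + h(helper_secret, t)` regardless of the update path, which is the property that makes random-access key update (jumping directly to any period) possible.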

  20. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers…

  1. A Transactional Asynchronous Replication Scheme for Mobile Database Systems

    Institute of Scientific and Technical Information of China (English)

    丁治明; 孟小峰; 王珊

    2002-01-01

    In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.
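    The transaction-level result-set idea can be sketched as a version check at propagation time: the mobile host ships the result set of a committed local transaction, and the server accepts it only if the versions it read are still current. The data layout and resolution policy below are our illustrative assumptions, not the TLRSP implementation:

```python
def detect_conflicts(mobile_result_set, server_state):
    """A mobile transaction's result set records, per item, the version it
    read and the value it wrote; a conflict exists for any item whose
    server-side version has moved on since the read."""
    conflicts = []
    for item, (read_version, _new_value) in mobile_result_set.items():
        current_version, _ = server_state.get(item, (0, None))
        if current_version != read_version:
            conflicts.append(item)
    return conflicts

def propagate(mobile_result_set, server_state):
    """Apply the result set atomically if conflict-free, else reject and
    leave resolution (e.g. re-execution) to the caller."""
    if detect_conflicts(mobile_result_set, server_state):
        return False
    for item, (read_version, new_value) in mobile_result_set.items():
        server_state[item] = (read_version + 1, new_value)
    return True
```

    Shipping whole result sets rather than operation logs is what reduces reprocessing overhead: a clean transaction is installed in one pass instead of being replayed.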

  2. "Notice the Similarities between the Two Sets …": Imperative Usage in a Corpus of Upper-Level Student Papers

    Science.gov (United States)

    Neiderhiser, Justine A.; Kelley, Patrick; Kennedy, Kohlee M.; Swales, John M.; Vergaro, Carla

    2016-01-01

    The sparse literature on the use of imperatives in research papers suggests that they are relatively common in a small number of disciplines, but rare, if used at all, in others. The present study addresses the use of imperatives in a corpus of upper-level A-graded student papers from 16 disciplines. A total of 822 papers collected within the past…

  3. Tails from previous exposures: a general problem in setting reference levels for the assessment of internal contamination

    International Nuclear Information System (INIS)

    Breuer, F.; Frittelli, L.

    1988-01-01

    Reference levels for retention and excretion are evaluated for routine and special monitoring following the intake of a fraction of ICRP annual limits (ALIs) or of a unit activity. Methodologies are also suggested for taking into account the contribution of previous intakes to excretion or retention.

  4. Sequential injection-bead injection-lab-on-valve schemes for on-line solid phase extraction and preconcentration of ultra-trace levels of heavy metals with determination by electrothermal atomic absorption spectrometry and inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Wang Jianhua; Hansen, Elo Harald; Miro, Manuel

    2003-01-01

    This communication presents an overview of the state-of-the-art of the exploitation of sequential injection (SI)-bead injection (BI)-lab-on-valve (LOV) schemes for automatic on-line sample pre-treatments interfaced with ETAAS and ICP-MS detection as conducted in the authors' group. The discussion is focused on the applications of SI-BI-LOV protocols for on-line microcolumn-based solid phase extraction of ultra-trace levels of heavy metals, employing the so-called renewable surface separation and preconcentration scheme. Two types of sorbents have been employed as packing material: the hydrophilic SP Sephadex C-25 cation exchange and iminodiacetate-based Muromac A-1 chelating resins, and the hydrophobic poly(tetrafluoroethylene) (PTFE) and poly(styrene-divinylbenzene) copolymer alkylated with octadecyl groups (C18-PS/DVB). Using ETAAS as the detection device, the easy-to-handle hydrophilic renewable reactors offer improved R.S.D.s and LODs compared to those operated in the conventional, permanent mode, in addition to the elimination of flow resistance. The hydrophobic columns fall into two categories: the renewable one packed with C18-PS/DVB beads yields R.S.D.s and LODs analogous to the conventional approach, while those with PTFE beads result in slightly inferior R.S.D.s and LODs by similar comparison, yet offer a wider dynamic range than an external permanent column. Moreover, the hydrophilic materials result in much higher enrichment of the analyte than the hydrophobic ones, although PTFE is the packing material that exhibits the best retention efficiency.

  5. The Level of Vision Necessary for Competitive Performance in Rifle Shooting: Setting the Standards for Paralympic Shooting with Vision Impairment.

    Science.gov (United States)

    Allen, Peter M; Latham, Keziah; Mann, David L; Ravensbergen, Rianne H J C; Myint, Joy

    2016-01-01

    The aim of this study was to investigate the level of vision impairment (VI) that would reduce performance in shooting, to guide development of entry criteria for visually impaired (VI) shooting. Nineteen international-level shooters without VI took part in the study. Participants shot an air rifle, while standing, toward a regulation target placed at the end of a 10 m shooting range. Cambridge simulation glasses were used to simulate six different levels of VI. Visual acuity (VA) and contrast sensitivity (CS) were assessed along with shooting performance in each of seven conditions of simulated impairment and compared to that with habitual vision. Shooting performance was evaluated by calculating each individual's average score in every level of simulated VI and normalizing this score by expressing it as a percentage of the baseline performance achieved with habitual vision. Receiver Operating Characteristic curves were constructed to evaluate the ability of different VA and CS cut-off criteria to appropriately classify these athletes as achieving 'expected' or 'below expected' shooting results based on their performance with different levels of VA and CS. Shooting performance remained relatively unaffected by mild decreases in VA and CS, but quickly deteriorated with more moderate losses. The ability of visual function measurements to classify shooting performance was good, with 78% of performances appropriately classified using a cut-off of 0.53 logMAR and 74% appropriately classified using a cut-off of 0.83 logCS. The current inclusion criterion for VI shooting (1.0 logMAR) is conservative, maximizing the chance of including only those with an impairment that does impact performance, but potentially excluding some who do have a genuine impairment in the sport. A lower level of impairment would include more athletes who do have a genuine impairment but would potentially include those who do not actually have an impairment that impacts performance in the sport.
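    The cut-off evaluation amounts to scoring, for each candidate visual-acuity threshold, how many athletes are correctly classified as 'expected' or 'below expected'. A toy sketch with invented data (the study's per-athlete scores are not reproduced here):

```python
def classify_rate(va_values, below_expected, cutoff):
    """Fraction of performances correctly classified by a single visual
    acuity cut-off (higher logMAR = worse vision). A performance is
    predicted 'below expected' when VA is at or worse than the cut-off."""
    correct = 0
    for va, is_below in zip(va_values, below_expected):
        predicted_below = va >= cutoff
        if predicted_below == is_below:
            correct += 1
    return correct / len(va_values)

# Toy data (ours, not the study's): simulated VA in logMAR, and whether the
# shooting score actually fell below the expected range at that VA
va = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
below = [False, False, False, True, True, True]
best = max(classify_rate(va, below, c) for c in va)
```

    Sweeping the cut-off over all observed VA values and recording the true/false positive rates at each is exactly what an ROC curve tabulates; the study's 0.53 logMAR figure is the threshold that balanced the two error types best on their data.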

  6. The Level of Vision Necessary for Competitive Performance in Rifle Shooting: Setting the Standards for Paralympic Shooting With Vision Impairment

    Directory of Open Access Journals (Sweden)

    Peter M Allen

    2016-11-01

    Full Text Available The aim of this study was to investigate the level of vision impairment that would reduce performance in shooting, to guide development of entry criteria for visually impaired (VI) shooting. Nineteen international-level shooters without vision impairment took part in the study. Participants shot an air rifle, while standing, towards a regulation target placed at the end of a 10 m shooting range. Cambridge simulation glasses were used to simulate six different levels of vision impairment. Visual acuity (VA) and contrast sensitivity (CS) were assessed along with shooting performance in each of seven conditions of simulated impairment and compared to that with habitual vision. Shooting performance was evaluated by calculating each individual's average score in every level of simulated vision impairment and normalising this score by expressing it as a percentage of the baseline performance achieved with habitual vision. Receiver Operating Characteristic (ROC) curves were constructed to evaluate the ability of different VA and CS cut-off criteria to appropriately classify these athletes as achieving 'expected' or 'below expected' shooting results based on their performance with different levels of VA and CS. Shooting performance remained relatively unaffected by mild decreases in VA and CS, but quickly deteriorated with more moderate losses. The ability of visual function measurements to classify shooting performance was good, with 78% of performances appropriately classified using a cut-off of 0.53 logMAR and 74% appropriately classified using a cut-off of 0.83 logCS. The current inclusion criterion for VI shooting (1.0 logMAR) is conservative, maximising the chance of including only those with an impairment that does impact performance, but potentially excluding some who do have a genuine impairment in the sport. A lower level of impairment would include more athletes who do have a genuine impairment but would potentially include those who do not actually have an impairment that impacts performance in the sport.

  7. Serum cystatin C levels in preterm newborns in our setting: Correlation with serum creatinine and preterm pathologies.

    Science.gov (United States)

    Bardallo Cruzado, Leonor; Pérez González, Elena; Martínez Martos, Zoraima; Bermudo Guitarte, Carmen; Granero Asencio, Mercedes; Luna Lagares, Salud; Marín Patón, Mariano; Polo Padilla, Juan

    2015-01-01

    Cystatin C (CysC) is a renal function marker that is not as influenced as creatinine (Cr) by endogenous or exogenous agents, and it is therefore proposed as a marker in preterm infants. The aims were to determine serum CysC values in preterm infants during the first week of life, compared to Cr, and to analyze alterations caused by prematurity diseases. The design involved a longitudinal, observational study of prospective cohorts. Groups were based on gestational age (GA): Group A (24-27 weeks), Group B (28-33 weeks), Group C (34-36 weeks). Blood samples were collected at birth, within 48-72 hours, and after 7 days of life. SPSS v.20 software was used; the statistical methods applied included the chi-squared test and ANOVA. A total of 109 preterm infants were included in the study. CysC levels were 1.54 mg/L (±0.28) at birth, 1.38 mg/L (±0.36) within 48-72 hours of life, and 1.50 mg/L (±0.31) after 7 days (P<.05). Cr levels were 0.64 mg/dL (±0.17) at birth, 0.64 mg/dL (±0.28) within 48-72 hours, and 0.56 mg/dL (±0.19) after 7 days (P<.05). CysC values were lower in hypotensive patients and those with a respiratory disease (P<.05), and no alterations associated with other diseases were observed. There were no differences in Cr levels associated with any disease. Creatinine levels were higher in patients ≤1,500 g (P<.05). Serum CysC decreased within 48-72 hours of life, and this decline was significant (P<.05). The levels increased after 7 days in all 3 GA groups, and there was no difference in CysC levels among the groups. More studies in preterm infants with hypotension and respiratory disease are required. CysC is a better glomerular filtration (GF) marker in ≤1,500 g preterm infants. Copyright © 2015 The Authors. Published by Elsevier España, S.L.U. All rights reserved.

  8. Development of a working set of waste package performance criteria for deep-sea disposal of low-level radioactive waste. Final report

    International Nuclear Information System (INIS)

    Columbo, P.; Fuhrmann, M.; Neilson, R.M. Jr; Sailor, V.L.

    1982-11-01

    The United States ocean dumping regulations developed pursuant to PL 92-532, the Marine Protection, Research, and Sanctuaries Act of 1972, as amended, provide for a general policy of isolation and containment of low-level radioactive waste after disposal into the ocean. In order to determine whether any particular waste packaging system is adequate to meet this general requirement, it is necessary to establish a set of performance criteria against which to evaluate a particular packaging system. These performance criteria must present requirements for the behavior of the waste in combination with its immobilization agent and outer container in a deep-sea environment. This report presents a working set of waste package performance criteria, and includes a glossary of terms, characteristics of low-level radioactive waste, radioisotopes of importance in low-level radioactive waste, and a summary of domestic and international regulations which control the ocean disposal of these wastes.

  9. The politics of agenda setting at the global level: key informant interviews regarding the International Labour Organization Decent Work Agenda.

    Science.gov (United States)

    Di Ruggiero, Erica; Cohen, Joanna E; Cole, Donald C

    2014-07-01

    Global labour markets continue to undergo significant transformations resulting from socio-political instability combined with rises in structural inequality, employment insecurity, and poor working conditions. Confronted by these challenges, global institutions are providing policy guidance to protect and promote the health and well-being of workers. This article provides an account of how the International Labour Organization's Decent Work Agenda contributes to the work policy agendas of the World Health Organization and the World Bank. This qualitative study involved semi-structured interviews with representatives from three global institutions: the International Labour Organization (ILO), the World Health Organization, and the World Bank. Of the 25 key informants invited to participate, 16 took part in the study. Analysis for key themes was followed by interpretation using selected agenda setting theories. Interviews indicated that through the Decent Work Agenda, the International Labour Organization is shaping the global policy narrative about work among UN agencies, and that the pursuit of decent work and the Agenda were perceived as important goals with the potential to promote just policies. The Agenda was closely linked to the World Health Organization's conception of health as a human right. However, decent work was consistently identified by World Bank informants as ILO terminology, in contrast to terms such as job creation and job access. The limited evidence base and its conceptual nature were offered as partial explanations for why the Agenda has yet to fully influence other global institutions. Catalytic events such as the economic crisis were identified as creating the enabling conditions to influence global work policy agendas. Our evidence aids our understanding of how an issue like decent work enters and stays on the policy agendas of global institutions, using the Decent Work Agenda as an illustrative example. Catalytic events and policy

  10. The politics of agenda setting at the global level: key informant interviews regarding the International Labour Organization Decent Work Agenda

    Science.gov (United States)

    2014-01-01

    Background Global labour markets continue to undergo significant transformations resulting from socio-political instability combined with rises in structural inequality, employment insecurity, and poor working conditions. Confronted by these challenges, global institutions are providing policy guidance to protect and promote the health and well-being of workers. This article provides an account of how the International Labour Organization’s Decent Work Agenda contributes to the work policy agendas of the World Health Organization and the World Bank. Methods This qualitative study involved semi-structured interviews with representatives from three global institutions – the International Labour Organization (ILO), the World Health Organization and the World Bank. Of the 25 key informants invited to participate, 16 took part in the study. Analysis for key themes was followed by interpretation using selected agenda setting theories. Results Interviews indicated that through the Decent Work Agenda, the International Labour Organization is shaping the global policy narrative about work among UN agencies, and that the pursuit of decent work and the Agenda were perceived as important goals with the potential to promote just policies. The Agenda was closely linked to the World Health Organization’s conception of health as a human right. However, decent work was consistently identified by World Bank informants as ILO terminology in contrast to terms such as job creation and job access. The limited evidence base and its conceptual nature were offered as partial explanations for why the Agenda has yet to fully influence other global institutions. Catalytic events such as the economic crisis were identified as creating the enabling conditions to influence global work policy agendas. Conclusions Our evidence aids our understanding of how an issue like decent work enters and stays on the policy agendas of global institutions, using the Decent Work Agenda as an illustrative

  11. Systems-Level Annotation of a Metabolomics Data Set Reduces 25 000 Features to Fewer than 1000 Unique Metabolites.

    Science.gov (United States)

    Mahieu, Nathaniel G; Patti, Gary J

    2017-10-03

    When using liquid chromatography/mass spectrometry (LC/MS) to perform untargeted metabolomics, it is now routine to detect tens of thousands of features from biological samples. Poor understanding of the data, however, has complicated interpretation and masked the number of unique metabolites actually being measured in an experiment. Here we place an upper bound on the number of unique metabolites detected in Escherichia coli samples analyzed with one untargeted metabolomics method. We first group multiple features arising from the same analyte, which we call "degenerate features", using a context-driven annotation approach. Surprisingly, this analysis revealed thousands of previously unreported degeneracies that reduced the number of unique analytes to ∼2961. We then applied an orthogonal approach to remove nonbiological features from the data using the ¹³C-based credentialing technology. This further reduced the number of unique analytes to less than 1000. Our 90% reduction in data is 5-fold greater than previously published studies. On the basis of the results, we propose an alternative approach to untargeted metabolomics that relies on thoroughly annotated reference data sets. To this end, we introduce the creDBle database (http://creDBle.wustl.edu), which contains accurate mass, retention time, and MS/MS fragmentation data as well as annotations of all credentialed features.
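    A minimal illustration of grouping degenerate features: features that co-elute are candidates for being isotopes, adducts, or in-source fragments of a single analyte. The greedy retention-time grouping and the tolerance value below are our simplifications, not the paper's context-driven annotation approach:

```python
def group_degenerate_features(features, rt_tol=5.0):
    """Greedily group (m/z, retention-time) features by co-elution.
    Features within rt_tol seconds of a group's last member join that
    group; each group is then one candidate analyte."""
    groups = []
    for mz, rt in sorted(features, key=lambda f: f[1]):
        for g in groups:
            if abs(g[-1][1] - rt) <= rt_tol:
                g.append((mz, rt))
                break
        else:
            groups.append([(mz, rt)])
    return groups

# Toy features: (m/z, retention time in seconds); the first two co-elute,
# e.g. a monoisotopic peak and its isotopologue
features = [(100.0, 60.0), (101.0, 60.5), (150.0, 120.0)]
groups = group_degenerate_features(features)
```

    In practice co-elution alone over-groups, which is why the authors combine it with contextual evidence (mass differences, correlation across samples) before collapsing features into one analyte.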

  12. Towards Symbolic Encryption Schemes

    DEFF Research Database (Denmark)

    Ahmed, Naveed; Jensen, Christian D.; Zenner, Erik

    2012-01-01

    Symbolic encryption, in the style of Dolev-Yao models, is ubiquitous in formal security models. In its common use, encryption on a whole message is specified as a single monolithic block. From a cryptographic perspective, however, this may require a resource-intensive cryptographic algorithm, namely an authenticated encryption scheme that is secure under chosen ciphertext attack. Therefore, many reasonable encryption schemes, such as AES in the CBC or CFB mode, are not among the implementation options. In this paper, we report new attacks on CBC and CFB based implementations of the well-known Needham-Schroeder and Denning-Sacco protocols. To avoid such problems, we advocate the use of refined notions of symbolic encryption that have natural correspondence to standard cryptographic encryption schemes.

  13. Compact Spreader Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.

    2014-07-25

    This paper describes beam distribution schemes adopting a novel implementation based on low-amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery to multiple beam lines on a shot-to-shot basis. Fast kickers (FKs) or transverse electric field RF deflectors (RFDs) provide the low-amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and likely better amplitude reproducibility than FKs, which, in turn, require more modest financial investment in both construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems, resulting in space and cost savings while preserving flexibility and beam quality.

  14. The Impact of Video Length on Learning in a Middle-Level Flipped Science Setting: Implications for Diversity Inclusion

    Science.gov (United States)

    Slemmons, Krista; Anyanwu, Kele; Hames, Josh; Grabski, Dave; Mlsna, Jeffery; Simkins, Eric; Cook, Perry

    2018-05-01

    Popularity of videos for classroom instruction has increased over the years due to the affordability and user-friendliness of today's digital video cameras. This prevalence has led to an increase in flipped K-12 classrooms countrywide. However, quantitative data establishing the appropriate video length to foster authentic learning are limited, particularly in middle-level classrooms. We focus on this aspect of video technology in two flipped science classrooms at the middle school level to determine the optimal video length to enable learning, increase retention, and support student motivation. Our results indicate that while assessments directly following short videos were slightly higher, these findings were not significantly different from scores following longer videos. While short-term retention of material did not seem to be influenced by video length, longer-term retention for males and students with learning disabilities was higher following short videos than long ones, as assessed on summative assessments. Students self-report that they were more engaged, had enhanced focus, and had a perceived higher retention of content following shorter videos. This study has important implications for student learning, application of content, and the development of critical thinking skills. This is particularly paramount in an era where content knowledge is just a search engine away.

  15. A browser-based 3D Visualization Tool designed for comparing CERES/CALIOP/CloudSAT level-2 data sets.

    Science.gov (United States)

    Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Doelling, D. R.

    2017-12-01

    At NASA Langley, Clouds and the Earth's Radiant Energy System (CERES) and Moderate Resolution Imaging Spectroradiometer (MODIS) data are merged with Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite and the CloudSat Cloud Profiling Radar (CPR). The CERES merged product (C3M) matches up to three CALIPSO footprints with each MODIS pixel along its ground track. It then assigns the nearest CloudSat footprint to each of those MODIS pixels. The cloud properties from MODIS, retrieved using the CERES algorithms, are included in C3M with the matched CALIPSO and CloudSat products along with radiances from 18 MODIS channels. The dataset is used to validate the CERES-retrieved MODIS cloud properties and the computed TOA and surface flux difference using MODIS or CALIOP/CloudSat retrieved clouds. This information is then used to tune the computed fluxes to match the CERES observed TOA flux. A visualization tool will be invaluable to determine the cause of these large cloud and flux differences in order to improve the methodology. This effort is part of a larger effort to allow users to order the CERES C3M product sub-setted by time and parameter, as well as the previously mentioned visualization capabilities. This presentation will show a new graphical 3D interface, 3D-CERESVis, that allows users to view both passive remote sensing satellites (MODIS and CERES) and active satellites (CALIPSO and CloudSat), such that the detailed vertical structures of cloud properties from CALIPSO and CloudSat are displayed side by side with horizontally retrieved cloud properties from MODIS and CERES. Similarly, the CERES computed profile fluxes, whether using MODIS or CALIPSO and CloudSat clouds, can also be compared. 3D-CERESVis is a browser-based visualization tool that makes use of techniques such as multiple synchronized cursors, COLLADA-format data, and Cesium.

  16. Economic evaluation and the Jordan Rational Drug List: an exploratory study of national-level priority setting.

    Science.gov (United States)

    Lafi, Rania; Robinson, Suzanne; Williams, Iestyn

    2012-01-01

    To explore the extent of and barriers to the use of economic evaluation in compiling the Jordan Rational Drug List in the health care system of Jordan. The research reported in this article involved a case study of the Jordan Rational Drug List. Data collection methods included semi-structured interviews with decision makers and analysis of secondary documentary sources. The case study was supplemented by additional interviews with a small number of Jordanian academics involved in the production of economic evaluation. The research found that there was no formal requirement for cost-effectiveness information submitted as part of the decision-making process for the inclusion of new technologies on the Jordan Rational Drug List. Both decision makers and academics suggested that economic evidence was not influential in formulary decisions. This is unusual for national formulary bodies. The study identified a number of barriers that prevent substantive and routine use of economic evaluation. While some of these echo findings of previous studies, others, notably the extent to which the sectional interests of clinical groups and the commercial (pharmaceutical) industry exert undue influence over decision making, more obviously result from the specific Jordanian context. Economic evaluation was not found to be influential in the Jordan Rational Drug List. Recommendations for improvement include enhancing capacity in relation to generating, accessing, and/or applying health economic analysis to priority setting decisions. There is a further need to incentivize the use of economic evaluation, and this requires that organizational and structural impediments be removed. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. New analytic unitarization schemes

    International Nuclear Information System (INIS)

    Cudell, J.-R.; Predazzi, E.; Selyugin, O. V.

    2009-01-01

    We consider two well-known classes of unitarization of Born amplitudes of hadron elastic scattering. The standard class, which saturates at the black-disk limit, includes the standard eikonal representation, while the other class, which goes beyond the black-disk limit to reach the full unitarity circle, includes the U matrix. It is shown that the basic properties of these schemes are independent of the functional form used for the unitarization, and that the U-matrix and eikonal schemes can be extended to have similar properties. A common form of unitarization is proposed, interpolating between both classes. The correspondence with different nonlinear equations is also briefly examined.
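    The two classes can be written compactly. With a Born-level input amplitude $h(s,b)$ in impact-parameter space (normalization conventions vary between papers; this is one common choice, not necessarily that of the abstract above):

```latex
a_{\mathrm{eik}}(s,b) = \frac{e^{2i\,h(s,b)} - 1}{2i},
\qquad
a_{U}(s,b) = \frac{h(s,b)}{1 - i\,h(s,b)} .
```

    For a purely imaginary input $h = i\,\Omega(s,b)/2$ with $\Omega \to \infty$, the eikonal amplitude saturates at the black-disk value $\operatorname{Im} a_{\mathrm{eik}} \to 1/2$, whereas the U-matrix amplitude reaches $\operatorname{Im} a_{U} \to 1$, the full unitarity circle, which is the distinction between the two classes discussed in the abstract.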

  18. An Optimization Scheme for ProdMod

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1999-01-01

    A general-purpose dynamic optimization scheme has been devised in conjunction with the ProdMod simulator. The optimization scheme is suitable for the Savannah River Site (SRS) High Level Waste (HLW) complex operations and is able to handle different types of optimization problems (linear, nonlinear, etc.). The optimization is performed in a stand-alone FORTRAN-based optimization driver, which is interfaced with the ProdMod simulator for the flow of information between the two.
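    The driver/simulator architecture described here, an external optimizer exchanging candidate inputs and simulated objectives with the model, can be sketched generically. The simple hill-climbing method and the interface below are our illustrative assumptions; the actual FORTRAN driver is not described in the abstract:

```python
def optimize_with_simulator(simulate, x0, step=0.5, iters=100):
    """Driver/simulator loop sketch: the optimizer proposes an input,
    the simulator returns the objective value, and information flows
    back to guide the next proposal (minimization)."""
    x, best = x0, simulate(x0)
    for _ in range(iters):
        for dx in (step, -step):
            y = simulate(x + dx)   # one simulator evaluation per proposal
            if y < best:
                x, best = x + dx, y
                break
        else:
            step /= 2  # no improvement in either direction: refine the step
    return x, best
```

    Keeping the optimizer stand-alone, as described, means any simulator exposing this evaluate-a-candidate interface can be driven without changes to the optimization code.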

  19. Does industry take the susceptible subpopulation of asthmatic individuals into consideration when setting derived no‐effect levels?

    Science.gov (United States)

    Johansson, Mia K. V.; Johanson, Gunnar; Öberg, Mattias

    2016-01-01

    Abstract Asthma, a chronic respiratory disease, can be aggravated by exposure to certain chemical irritants. The objectives were first to investigate the extent to which experimental observations on asthmatic subjects are taken into consideration in connection with the registration process under the EU REACH regulation, and second, to determine whether asthmatics are provided adequate protection by the derived no‐effect levels (DNELs) for acute inhalation exposure. We identified substances for which experimental data on the pulmonary functions of asthmatics exposed to chemicals under controlled conditions are available. The effect concentrations were then compared with DNELs and other guideline and limit values. As of April 2015, only 2.6% of 269 classified irritants had available experimental data on asthmatics. Fourteen of the 22 identified substances with available data were fully registered under REACH and we retrieved 114 reliable studies related to these. Sixty‐three of these studies, involving nine of the 14 substances, were cited by the REACH registrants. However, only 17 of the 114 studies, involving four substances, were regarded as key studies. Furthermore, many of the DNELs for acute inhalation were higher than estimated effect levels for asthmatics, i.e., lowest observed adverse effect concentrations or no‐observed adverse effect concentrations, indicating low or no safety margin. We conclude that REACH registrants tend to disregard findings on asthmatics when deriving these DNELs. In addition, we found examples of DNELs, particularly among those derived for workers, which likely do not provide adequate protection for asthmatics. Copyright © 2016 The Authors Journal of Applied Toxicology Published by John Wiley & Sons Ltd. PMID:27283874

  20. A comparative study on seismic response of two unstable rock slopes within same tectonic setting but different activity level

    Science.gov (United States)

    Kleinbrod, Ulrike; Burjánek, Jan; Hugentobler, Marc; Amann, Florian; Fäh, Donat

    2017-12-01

In this study, the seismic response of two slope instabilities is investigated with seismic ambient vibration analysis. Two similar sites were chosen: an active deep-seated slope instability at Cuolm da Vi and the geologically, structurally and morphologically similar, but presently not moving, Alp Caschlè slope. Both slopes are located in the upper Vorderrheintal (Canton Graubünden, Switzerland). Ambient vibrations were recorded on both slopes and processed by time-frequency polarization and site-to-reference spectral ratio analysis. The data interpretation shows correlations between the degree of disintegration of the rock mass and amplification. However, the ambient vibration analysis conducted does not allow retrieval of a resonance frequency that can be related to the total depth of the instability at Cuolm da Vi. Even though seismic waves can hardly be traced in rock instabilities containing open fractures, it was possible to retrieve a dispersion curve and a velocity profile from the array measurement at Cuolm da Vi due to the high level of disintegration of the rock material down to a depth of about 100 m. From the similar amplification pattern at the two sites, we expect a similar structure, indicating that the slope at Alp Caschlè was also active in the past in a similar manner to Cuolm da Vi. However, a smoother increase of amplification with frequency is observed at Alp Caschlè, which might indicate less disintegration of the rock mass in a particular depth range at this site, compared to Cuolm da Vi, where a high level of disintegration is observed, resulting from the high activity of the slope. From the frequency-dependent amplification, we can distinguish two parts within both instabilities: one part shows decreasing disintegration of the rock mass with increasing depth, while in the other parts less-fractured blocks are observed. Since the block structures are found in the lower part of the instabilities, they might contribute to the

  1. EU Action against Climate Change. EU emissions trading. An open scheme promoting global innovation

    International Nuclear Information System (INIS)

    2005-01-01

The European Union is committed to global efforts to reduce the greenhouse gas emissions from human activities that threaten to cause serious disruption to the world's climate. Building on the innovative mechanisms set up under the Kyoto Protocol to the 1992 United Nations Framework Convention on Climate Change (UNFCCC) - joint implementation, the clean development mechanism and international emissions trading - the EU has developed the largest company-level scheme for trading in emissions of carbon dioxide (CO2), making it the world leader in this emerging market. The emissions trading scheme started in the 25 EU Member States on 1 January 2005.

  2. Level of data quality from Health Management Information Systems in a resources limited setting and its associated factors, eastern Ethiopia

    Directory of Open Access Journals (Sweden)

    Kidist Teklegiorgis

    2016-08-01

Methods: A cross-sectional study was conducted using structured questionnaires in Dire Dawa Administration health facilities. All unit and/or department heads from all government health facilities were selected. The data were analysed using STATA version 11. Frequencies and percentages were computed to present the descriptive findings. Associations between variables were computed using binary logistic regression. Results: Overall data quality was found to be 75.3% in units and/or departments. Staff trained to fill in the reporting formats, decisions based on supervisor directives, and department heads seeking feedback were significantly associated with data quality, with magnitudes of (AOR = 2.253, 95% CI [1.082, 4.692]), (AOR = 2.131, 95% CI [1.073, 4.233]) and (AOR = 2.481, 95% CI [1.262, 4.876]), respectively. Conclusion: Overall data quality was found to be below the national expectation level. Low data quality was found at health posts compared to health centres and hospitals. There was also a shortage of assigned HIS personnel, separate HIS offices, and assigned budgets for HIS across all units and/or departments.

  3. The influence of power and actor relations on priority setting and resource allocation practices at the hospital level in Kenya: a case study.

    Science.gov (United States)

    Barasa, Edwine W; Cleary, Susan; English, Mike; Molyneux, Sassy

    2016-09-30

Priority setting and resource allocation in healthcare organizations often involve the balancing of competing interests and values in the context of hierarchical and politically complex settings with multiple interacting actor relationships. Despite this, few studies have examined the influence of actor and power dynamics on priority setting practices in healthcare organizations. This paper examines the influence of power relations among different actors on the implementation of priority setting and resource allocation processes in public hospitals in Kenya. We used a qualitative case study approach to examine priority setting and resource allocation practices in two public hospitals in coastal Kenya. We collected data by a combination of in-depth interviews of national level policy makers, hospital managers, and frontline practitioners in the case study hospitals (n = 72), review of documents such as hospital plans and budgets, minutes of meetings and accounting records, and non-participant observations in case study hospitals over a period of 7 months. We applied a combination of two frameworks, Norman Long's actor interface analysis and VeneKlasen and Miller's expressions of power framework, to examine and interpret our findings. Results: The interactions of actors in the case study hospitals resulted in socially constructed interfaces between: 1) senior managers and middle level managers, 2) non-clinical managers and clinicians, and 3) hospital managers and the community. Power imbalances resulted in the exclusion of middle level managers (in one of the hospitals) and clinicians and the community (in both hospitals) from decision making processes. This resulted in, among other things, perceptions of unfairness and reduced motivation in hospital staff. It also calls into question the legitimacy of priority setting processes in these hospitals. Designing hospital decision making structures to strengthen participation and inclusion of relevant stakeholders could

  4. 4. Payment Schemes

    Indian Academy of Sciences (India)

Electronic Commerce - Payment Schemes. V Rajaraman. Series Article, Volume 6, Issue 2, February 2001, pp. 6-13. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/02/0006-0013 ...

  5. Contract saving schemes

    NARCIS (Netherlands)

    Ronald, R.; Smith, S.J.; Elsinga, M.; Eng, O.S.; Fox O'Mahony, L.; Wachter, S.

    2012-01-01

Contractual saving schemes for housing are institutionalised savings programmes normally linked to rights to loans for home purchase. They are of diverse types, as they have been developed differently in each national context, but normally fall into the categories of open, closed, compulsory, and ‘free

  6. Alternative reprocessing schemes evaluation

    International Nuclear Information System (INIS)

    1979-02-01

This paper reviews the parameters which determine the inaccessibility of the plutonium in reprocessing plants. Among the various parameters, the physical and chemical characteristics of the materials, the various processing schemes and the confinement are considered. The emphasis is placed on the latter parameter, and the advantages of increased confinement in the so-called PIPEX reprocessing plant type are presented

  7. Introduction to association schemes

    NARCIS (Netherlands)

    Seidel, J.J.

    1991-01-01

    The present paper gives an introduction to the theory of association schemes, following Bose-Mesner (1959), Biggs (1974), Delsarte (1973), Bannai-Ito (1984) and Brouwer-Cohen-Neumaier (1989). Apart from definitions and many examples, also several proofs and some problems are included. The paragraphs

  8. Reaction schemes of immunoanalysis

    International Nuclear Information System (INIS)

    Delaage, M.; Barbet, J.

    1991-01-01

    The authors apply a general theory for multiple equilibria to the reaction schemes of immunoanalysis, competition and sandwich. This approach allows the manufacturer to optimize the system and provide the user with interpolation functions for the standard curve and its first derivative as well, thus giving access to variance [fr

  9. Estimation of community-level influenza-associated illness in a low resource rural setting in India.

    Science.gov (United States)

    Saha, Siddhartha; Gupta, Vivek; Dawood, Fatimah S; Broor, Shobha; Lafond, Kathryn E; Chadha, Mandeep S; Rai, Sanjay K; Krishnan, Anand

    2018-01-01

To estimate rates of community-level influenza-like-illness (ILI) and influenza-associated ILI in rural north India. During 2011, we conducted household-based healthcare utilization surveys (HUS) for any acute medical illness (AMI) in the preceding 14 days among residents of 28 villages of Ballabgarh, in north India. Concurrently, we conducted clinic-based surveillance (CBS) in the area for AMI episodes with illness onset ≤3 days and collected nasal and throat swabs for influenza virus testing using real-time polymerase chain reaction. Retrospectively, we applied an ILI case definition (measured/reported fever and cough) to HUS and CBS data. We attributed 14 days of risk-time per person surveyed in the HUS and estimated the community ILI rate by dividing the number of ILI cases in the HUS by the total risk-time. We used CBS data on influenza positivity and applied it to HUS-based community ILI rates by age, month, and clinic type, to estimate the community influenza-associated ILI rates. The HUS of 69,369 residents during the year generated risk-time of 3,945 person-years (p-y) and identified 150 (5%, 95% CI: 4-6) ILI episodes (38 ILI episodes/1,000 p-y; 95% CI: 32-44). Among 1,372 ILI cases enrolled from clinics, 126 (9%; 95% CI: 8-11) had laboratory-confirmed influenza (A(H3N2) = 72; B = 54). After adjusting for age, month, and clinic type, the overall influenza-associated ILI rate was 4.8/1,000 p-y; rates were highest among children <5 years (13; 95% CI: 4-29) and persons ≥60 years (11; 95% CI: 2-30), showing the utility of exploring the value of influenza vaccination among target groups.
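
    The rate arithmetic described above can be sketched as follows. The figures are those reported in the abstract; the helper name is ours, and the crude product shown here is unadjusted, whereas the study's 4.8/1,000 p-y estimate adjusts by age, month, and clinic type:

    ```python
    # Community ILI rate: ILI episodes found in the household survey (HUS)
    # divided by the accumulated risk-time, scaled to 1,000 person-years.
    def rate_per_1000_py(cases, risk_time_py):
        return 1000.0 * cases / risk_time_py

    ili_rate = rate_per_1000_py(150, 3945)   # ~38 episodes/1,000 p-y

    # Influenza positivity among clinic-based surveillance (CBS) ILI cases:
    positivity = 126 / 1372                  # ~9% laboratory-confirmed

    # Crude (unadjusted) influenza-associated ILI rate:
    crude_rate = ili_rate * positivity       # ~3.5/1,000 p-y before adjustment
    ```

    Stratifying this product by age, month, and clinic type before summing is what yields the adjusted 4.8/1,000 p-y figure reported in the study.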

  10. Diagnostic Cut-Off Levels of Plasma Brain Natriuretic Peptide to Distinguish Left Ventricular Failure in Emergency Setting

    International Nuclear Information System (INIS)

    Hussain, A.; Afridi, F. I.; Lutfi, I. A.

    2014-01-01

Objective: To determine the diagnostic cut-off values of brain natriuretic peptide (BNP) to establish left ventricular failure in patients presenting with dyspnoea in the emergency department. Study Design: Descriptive study. Place and Duration of Study: Ziauddin University Hospital, Karachi, from July to December 2011. Methodology: BNP estimation was done on an AxSYM analyzer with a kit provided by Abbott Diagnostics, while Doppler echocardiography was done on a Toshiba style (UICW-660A) machine using 2.5 MHz and 5.0 MHz probes. Log transformation was done to normalize the original BNP values. A receiver operating characteristic curve was plotted to determine the diagnostic cut-off value of BNP which can be used to distinguish CHF from other causes of dyspnoea. Statistical analysis was performed with SPSS version 17. Results: A total of 92 patients presenting with dyspnoea in the emergency department were studied. There were 38/92 (41.3%) males and 54/92 (58.7%) females, and the average age of the study population was 64 ± 14.1 years. These patients had BNP levels measured and Doppler echocardiography done. The average BNP was found to be 1117.78 ± 1445.74 pg/ml. After log transformation, the average was found to be 2.72 ± 0.58. A BNP value of 531 pg/ml was found to be the cut-off to distinguish between cardiogenic and non-cardiogenic causes of dyspnoea. Conclusion: A BNP value of 531 pg/ml can distinguish CHF from other conditions as a cause of dyspnoea in the emergency setting. (author)
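
    The cut-off selection step can be illustrated with a small sketch. The data below are synthetic and the function name is ours; the idea is simply to pick the threshold maximising the Youden index (sensitivity + specificity - 1) along the ROC curve:

    ```python
    def youden_cutoff(values, labels):
        """Return the threshold maximising sensitivity + specificity - 1."""
        pos = sum(labels)
        neg = len(labels) - pos
        best_cut, best_j = None, -1.0
        for cut in sorted(set(values)):
            tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cut)
            tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cut)
            j = tp / pos + tn / neg - 1
            if j > best_j:
                best_cut, best_j = cut, j
        return best_cut

    # Synthetic BNP values (pg/ml); CHF cases (label 1) tend to run higher.
    bnp = [90, 150, 300, 520, 531, 700, 1200, 2500, 60, 110, 200, 480]
    chf = [0,  0,   0,   1,   1,   1,   1,    1,    0,  0,   0,   0]
    print(youden_cutoff(bnp, chf))   # prints 520 for this toy data
    ```

    In practice the ROC analysis is run on the (log-transformed) measured BNP values, as in the study.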

  11. Can we avoid high levels of dose escalation for high-risk prostate cancer in the setting of androgen deprivation?

    Science.gov (United States)

    Shakespeare, Thomas P; Wilcox, Shea W; Aherne, Noel J

    2016-01-01

    Both dose-escalated external beam radiotherapy (DE-EBRT) and androgen deprivation therapy (ADT) improve outcomes in patients with high-risk prostate cancer. However, there is little evidence specifically evaluating DE-EBRT for patients with high-risk prostate cancer receiving ADT, particularly for EBRT doses >74 Gy. We aimed to determine whether DE-EBRT >74 Gy improves outcomes for patients with high-risk prostate cancer receiving long-term ADT. Patients with high-risk prostate cancer were treated on an institutional protocol prescribing 3-6 months neoadjuvant ADT and DE-EBRT, followed by 2 years of adjuvant ADT. Between 2006 and 2012, EBRT doses were escalated from 74 Gy to 76 Gy and then to 78 Gy. We interrogated our electronic medical record to identify these patients and analyzed our results by comparing dose levels. In all, 479 patients were treated with a 68-month median follow-up. The 5-year biochemical disease-free survivals for the 74 Gy, 76 Gy, and 78 Gy groups were 87.8%, 86.9%, and 91.6%, respectively. The metastasis-free survivals were 95.5%, 94.5%, and 93.9%, respectively, and the prostate cancer-specific survivals were 100%, 94.4%, and 98.1%, respectively. Dose escalation had no impact on any outcome in either univariate or multivariate analysis. There was no benefit of DE-EBRT >74 Gy in our cohort of high-risk prostate patients treated with long-term ADT. As dose escalation has higher risks of radiotherapy-induced toxicity, it may be feasible to omit dose escalation beyond 74 Gy in this group of patients. Randomized studies evaluating dose escalation for high-risk patients receiving ADT should be considered.

  12. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    International Nuclear Information System (INIS)

    Hardisty, M.; Gordon, L.; Agarwal, P.; Skrinskas, T.; Whyne, C.

    2007-01-01

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user
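
    As a toy illustration of the level set machinery underlying such methods (not the authors' demons + level-set composite algorithm), a front can be represented implicitly as the zero level of a function phi and evolved by updating phi; here a circular front shrinks under a constant inward speed:

    ```python
    import numpy as np

    n, dt, speed = 64, 1.0, 0.5
    y, x = np.mgrid[0:n, 0:n]
    # Signed distance to a circle of radius 20: negative inside, positive outside.
    phi = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2) - 20.0

    for _ in range(10):
        gy, gx = np.gradient(phi)               # gradients along y and x
        grad_norm = np.sqrt(gx ** 2 + gy ** 2)
        phi = phi + dt * speed * grad_norm       # phi grows, so the interior shrinks

    inside = int((phi < 0).sum())                # area enclosed by the zero level set
    ```

    After 10 steps at speed 0.5 the effective radius drops from 20 to about 15 pixels; real segmentation replaces the constant speed with image-derived terms (edges, region statistics, curvature).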

  13. Estimation of community-level influenza-associated illness in a low resource rural setting in India.

    Directory of Open Access Journals (Sweden)

    Siddhartha Saha

Full Text Available To estimate rates of community-level influenza-like-illness (ILI) and influenza-associated ILI in rural north India. During 2011, we conducted household-based healthcare utilization surveys (HUS) for any acute medical illness (AMI) in the preceding 14 days among residents of 28 villages of Ballabgarh, in north India. Concurrently, we conducted clinic-based surveillance (CBS) in the area for AMI episodes with illness onset ≤3 days and collected nasal and throat swabs for influenza virus testing using real-time polymerase chain reaction. Retrospectively, we applied an ILI case definition (measured/reported fever and cough) to HUS and CBS data. We attributed 14 days of risk-time per person surveyed in the HUS and estimated the community ILI rate by dividing the number of ILI cases in the HUS by the total risk-time. We used CBS data on influenza positivity and applied it to HUS-based community ILI rates by age, month, and clinic type, to estimate the community influenza-associated ILI rates. The HUS of 69,369 residents during the year generated risk-time of 3,945 person-years (p-y) and identified 150 (5%, 95% CI: 4-6) ILI episodes (38 ILI episodes/1,000 p-y; 95% CI: 32-44). Among 1,372 ILI cases enrolled from clinics, 126 (9%; 95% CI: 8-11) had laboratory-confirmed influenza (A(H3N2) = 72; B = 54). After adjusting for age, month, and clinic type, the overall influenza-associated ILI rate was 4.8/1,000 p-y; rates were highest among children <5 years (13; 95% CI: 4-29) and persons ≥60 years (11; 95% CI: 2-30). We present a novel way to use HUS and CBS data to generate estimates of the community burden of influenza. Although the confidence intervals overlapped considerably, the higher point estimates for burden among young children and older adults show the utility of exploring the value of influenza vaccination among target groups.

  14. Can we avoid high levels of dose escalation for high-risk prostate cancer in the setting of androgen deprivation?

    Directory of Open Access Journals (Sweden)

    Shakespeare TP

    2016-05-01

Full Text Available Thomas P Shakespeare,1,2 Shea W Wilcox,1 Noel J Aherne1,2 1Department of Radiation Oncology, North Coast Cancer Institute, 2Rural Clinical School, Faculty of Medicine, University of New South Wales, Coffs Harbour, NSW, Australia Aim: Both dose-escalated external beam radiotherapy (DE-EBRT) and androgen deprivation therapy (ADT) improve outcomes in patients with high-risk prostate cancer. However, there is little evidence specifically evaluating DE-EBRT for patients with high-risk prostate cancer receiving ADT, particularly for EBRT doses >74 Gy. We aimed to determine whether DE-EBRT >74 Gy improves outcomes for patients with high-risk prostate cancer receiving long-term ADT. Patients and methods: Patients with high-risk prostate cancer were treated on an institutional protocol prescribing 3–6 months of neoadjuvant ADT and DE-EBRT, followed by 2 years of adjuvant ADT. Between 2006 and 2012, EBRT doses were escalated from 74 Gy to 76 Gy and then to 78 Gy. We interrogated our electronic medical record to identify these patients and analyzed our results by comparing dose levels. Results: In all, 479 patients were treated with a 68-month median follow-up. The 5-year biochemical disease-free survivals for the 74 Gy, 76 Gy, and 78 Gy groups were 87.8%, 86.9%, and 91.6%, respectively. The metastasis-free survivals were 95.5%, 94.5%, and 93.9%, respectively, and the prostate cancer-specific survivals were 100%, 94.4%, and 98.1%, respectively. Dose escalation had no impact on any outcome in either univariate or multivariate analysis. Conclusion: There was no benefit of DE-EBRT >74 Gy in our cohort of high-risk prostate patients treated with long-term ADT. As dose escalation has higher risks of radiotherapy-induced toxicity, it may be feasible to omit dose escalation beyond 74 Gy in this group of patients. Randomized studies evaluating dose escalation for high-risk patients receiving ADT should be considered. Keywords: radiotherapy, IMRT, dose

  15. [Dot1 and Set2 Histone Methylases Control the Spontaneous and UV-Induced Mutagenesis Levels in the Saccharomyces cerevisiae Yeasts].

    Science.gov (United States)

    Kozhina, T N; Evstiukhina, T A; Peshekhonov, V T; Chernenkov, A Yu; Korolev, V G

    2016-03-01

In the yeast Saccharomyces cerevisiae, the DOT1 gene product methylates lysine 79 (K79) of histone H3 and the SET2 gene product methylates lysine 36 (K36) of the same histone. We determined that the dot1 and set2 mutants suppress UV-induced mutagenesis to an equally high degree. The dot1 mutant demonstrated statistically higher sensitivity to low doses of MMC than the wild-type strain. Analysis of the interaction between the dot1 and rad52 mutations revealed a considerable level of spontaneous cell death in the double dot1 rad52 mutant. We observed strong suppression of gamma-induced mutagenesis in the set2 mutant. We determined that the dot1 and set2 mutations decrease the spontaneous mutagenesis rate in both single and double mutants. The epistatic interaction between the dot1 and set2 mutations, and the almost identical sensitivity of the corresponding mutants to different types of DNA damage, allow one to conclude that both genes are involved in the control of the same DNA repair pathways: homologous-recombination-based and postreplicative DNA repair.

  16. Canonical, stable, general mapping using context schemes.

    Science.gov (United States)

    Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict

    2015-11-15

    Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrary complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. anovak@soe.ucsc.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
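
    A much-simplified sketch of the core idea, using fixed-length contexts rather than the paper's algorithmically defined context sets (names are ours): a query position maps only when its context occurs exactly once in the reference, so every mapping is unambiguous, and extending the query can add mappings but never changes existing ones:

    ```python
    from collections import defaultdict

    def unique_context_map(ref, query, k):
        """Map query index j -> reference index i when query[j:j+k] occurs
        exactly once in the reference (the 'unique best mapping' criterion)."""
        occurrences = defaultdict(list)
        for i in range(len(ref) - k + 1):
            occurrences[ref[i:i + k]].append(i)
        mapping = {}
        for j in range(len(query) - k + 1):
            hits = occurrences.get(query[j:j + k], [])
            if len(hits) == 1:   # ambiguous or absent contexts stay unmapped
                mapping[j] = hits[0]
        return mapping

    ref = "ACGTACGGT"
    print(unique_context_map(ref, "TACGG", 4))   # prints {0: 3, 1: 4}
    ```

    The paper's schemes generalize this by letting each reference base admit a whole set of recognizing contexts, which is what makes the approach extensible to graph references.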

  17. A numerical scheme for the generalized Burgers–Huxley equation

    Directory of Open Access Journals (Sweden)

    Brajesh K. Singh

    2016-10-01

Full Text Available In this article, a numerical solution of the generalized Burgers–Huxley (gBH) equation is approximated by using a new scheme: the modified cubic B-spline differential quadrature method (MCB-DQM). The scheme is based on the differential quadrature method, in which the weighting coefficients are obtained by using modified cubic B-splines as a set of basis functions. This scheme reduces the equation to a system of first-order ordinary differential equations (ODEs), which is solved by adopting the SSP-RK43 scheme. Further, it is shown that the proposed scheme is stable. The efficiency of the proposed method is illustrated by four numerical experiments, which confirm that the obtained results are in good agreement with earlier studies. This scheme is an easy, economical and efficient technique for finding numerical solutions of various kinds of (nonlinear) physical models compared to earlier schemes.
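
    For illustration, the time-stepping side can be sketched with the classic three-stage Shu–Osher SSP-RK3 integrator (a stand-in for the four-stage SSP-RK43 variant named in the abstract), applied here to a scalar test ODE with a known solution:

    ```python
    import math

    def ssp_rk3_step(f, u, dt):
        """One strong-stability-preserving RK3 step (Shu-Osher form)."""
        u1 = u + dt * f(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

    f = lambda u: -u       # test problem du/dt = -u, exact solution exp(-t)
    u, dt = 1.0, 0.01
    for _ in range(100):   # integrate from t = 0 to t = 1
        u = ssp_rk3_step(f, u, dt)
    # u is now very close to exp(-1) ≈ 0.3679
    ```

    In the paper's method, f(u) would be the spatial right-hand side produced by the differential quadrature discretisation, applied componentwise to the vector of nodal values.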

  18. Decentralising Zimbabwe’s water management: The case of Guyu-Chelesa irrigation scheme

    Science.gov (United States)

    Tambudzai, Rashirayi; Everisto, Mapedza; Gideon, Zhou

Smallholder irrigation schemes are largely supply driven, such that they exclude the beneficiaries from management decisions and from the choice of the irrigation schemes that would best suit their local needs. It is against this background that the decentralisation framework and the Dublin Principles on Integrated Water Resource Management (IWRM) emphasise the need for a participatory approach to water management. The Zimbabwean government has gone a step further in decentralising the management of irrigation schemes, that is, promoting farmer-managed irrigation schemes so as to ensure effective management of scarce community-based land and water resources. The study set out to investigate the way in which the Guyu-Chelesa irrigation scheme is managed, with specific emphasis on the role of the Irrigation Management Committee (IMC), the level of accountability and the powers devolved to the IMC. Merrey's 2008 critique of IWRM also informs this study, which views irrigation as going beyond infrastructure by looking at how institutions and decision making processes play out at various levels, including at the irrigation scheme level. The study was positioned on the hypothesis that 'decentralised or autonomous irrigation management enhances the sustainability and effectiveness of irrigation schemes'. To validate or falsify the stated hypothesis, data was gathered using desk research, in the form of reviewing articles and documents from within the scheme, and field research, in the form of questionnaire surveys, key informant interviews and field observation. The Statistical Package for Social Sciences was used to analyse data quantitatively, whilst content analysis was utilised to analyse qualitative data thematically. Comparative analysis was carried out, as the Guyu-Chelesa irrigation scheme was compared with other smallholder irrigation schemes' experiences within Zimbabwe and the Sub-Saharan African region at large. The findings were that whilst the

  19. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Directory of Open Access Journals (Sweden)

    Wilke Claus O

    2008-01-01

Full Text Available Abstract Background: In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency virus type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods: Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient-derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results: By comparing the behavior of our model to patient-derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication, but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate, according to the predictions of our model. Conclusion: The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir.

  20. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Science.gov (United States)

    Sedaghat, Ahmad R; Siliciano, Robert F; Wilke, Claus O

    2008-01-01

    Background In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir. PMID:18171475
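
    The practical consequence of a 44-month half-life can be sketched with simple exponential-decay arithmetic; the starting reservoir size of one million cells below is a hypothetical round number for illustration, not a figure from the paper:

    ```python
    import math

    half_life_months = 44.0
    k = math.log(2) / half_life_months   # first-order decay rate, per month

    # Time for N(t) = N0 * exp(-k*t) to fall from a hypothetical 1e6 latently
    # infected cells down to a single cell:
    t_months = math.log(1e6) / k
    years = t_months / 12.0              # roughly 73 years at this decay rate
    ```

    Decade-scale clearance times of this order are why the reservoir is described as the major known obstacle to eradication, and why a stable reservoir implies intensified HAART alone cannot accelerate its decay.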

  1. A repeat-until-success quantum computing scheme

    Energy Technology Data Exchange (ETDEWEB)

    Beige, A [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom); Lim, Y L [DSO National Laboratories, 20 Science Park Drive, Singapore 118230, Singapore (Singapore); Kwek, L C [Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542, Singapore (Singapore)

    2007-06-15

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes.

  2. A repeat-until-success quantum computing scheme

    International Nuclear Information System (INIS)

    Beige, A; Lim, Y L; Kwek, L C

    2007-01-01

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes

  3. Traffic calming schemes : opportunities and implementation strategies.

    NARCIS (Netherlands)

    Schagen, I.N.L.G. van (ed.)

    2003-01-01

    Commissioned by the Swedish National Road Authority, this report aims to provide a concise overview of knowledge of and experiences with traffic calming schemes in urban areas, both on a technical level and on a policy level. Traffic calming refers to a combination of network planning and

  4. On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Wang Daoshun

    2010-01-01

    Traditional Secret Sharing (SS) schemes reconstruct the secret exactly as the original but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however, the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale -SS scheme to a more general case of a greyscale -VSS scheme.
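The XOR-based share generation described above can be illustrated with a minimal (n, n) scheme over greyscale bytes: n - 1 shares are random, the last is the secret XOR-ed with all of them, and XOR-ing every share recovers the secret exactly. This is only a sketch of the XOR idea; the paper's OR reconstruction, contrast, and pixel-expansion details are not reproduced.

```python
import secrets
from functools import reduce
from operator import xor

def make_shares(pixels, n):
    """Split greyscale bytes into n shares; all n are needed to decode."""
    shares = [[secrets.randbelow(256) for _ in pixels] for _ in range(n - 1)]
    # Last share = secret XOR all random shares, position-wise.
    final = [reduce(xor, (s[i] for s in shares), p) for i, p in enumerate(pixels)]
    return shares + [final]

def reconstruct(shares):
    """XOR the shares position-wise to recover the secret exactly."""
    return [reduce(xor, vals) for vals in zip(*shares)]

img = [0, 128, 255, 42]          # a tiny "greyscale image"
assert reconstruct(make_shares(img, 3)) == img
```

Any n - 1 shares are uniformly random bytes, so no subset short of all n reveals anything about the secret.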

  5. Effect of liner design, pulsator setting, and vacuum level on bovine teat tissue changes and milking characteristics as measured by ultrasonography

    Directory of Open Access Journals (Sweden)

    Gleeson David E

    2004-05-01

    Friesian-type dairy cows were milked with different machine settings to determine the effect of these settings on teat tissue reaction and on milking characteristics. Three teat-cup liner designs were used with varying upper barrel dimensions (wide-bore WB = 31.6 mm; narrow-bore NB = 21.0 mm; narrow-bore NB1 = 25.0 mm). These liners were tested with alternate and simultaneous pulsation patterns, pulsator ratios (60:40 and 67:33) and three system vacuum levels (40, 44 and 50 kPa). Teat tissue was measured using ultrasonography, before milking and directly after milking. The measurements recorded were teat canal length (TCL), teat diameter (TD), cistern diameter (CD) and teat wall thickness (TWT). Teat tissue changes were similar with a system vacuum level of either 50 kPa (mid-level) or 40 kPa (low-level). Widening the liner upper barrel bore dimension from 21.0 mm (P

  6. Teamwork skills in actual, in situ, and in-center pediatric emergencies: performance levels across settings and perceptions of comparative educational impact.

    Science.gov (United States)

    Couto, Thomaz Bittencourt; Kerrey, Benjamin T; Taylor, Regina G; FitzGerald, Michael; Geis, Gary L

    2015-04-01

    Pediatric emergencies require effective teamwork. These skills are developed and demonstrated in actual emergencies and in simulated environments, including simulation centers (in center) and the real care environment (in situ). Our aims were to compare teamwork performance across these settings and to identify perceived educational strengths and weaknesses between simulated settings. We hypothesized that teamwork performance in actual emergencies and in situ simulations would be higher than for in-center simulations. A retrospective, video-based assessment of teamwork was performed in an academic, pediatric level 1 trauma center, using the Team Emergency Assessment Measure (TEAM) tool (range, 0-44) among emergency department providers (physicians, nurses, respiratory therapists, paramedics, patient care assistants, and pharmacists). A survey-based, cross-sectional assessment was conducted to determine provider perceptions regarding simulation training. One hundred thirty-two videos, 44 from each setting, were reviewed. Mean total TEAM scores were similar and high in all settings (31.2 actual, 31.1 in situ, and 32.3 in-center, P = 0.39). Of 236 providers, 154 (65%) responded to the survey. For teamwork training, in situ simulation was considered more realistic (59% vs. 10%) and more effective (45% vs. 15%) than in-center simulation. In a video-based study in an academic pediatric institution, ratings of teamwork were relatively high among actual resuscitations and 2 simulation settings, substantiating the influence of simulation-based training on instilling a culture of communication and teamwork. On the basis of survey results, providers favored the in situ setting for teamwork training and suggested an expansion of our existing in situ program.

  7. Selectively strippable paint schemes

    Science.gov (United States)

    Stein, R.; Thumm, D.; Blackford, Roger W.

    1993-03-01

    In order to meet the requirements of more environmentally acceptable paint stripping processes, many different removal methods are under evaluation. These new processes can be divided into mechanical and chemical methods. ICI has developed a paint scheme with an intermediate coat and a fluid-resistant polyurethane topcoat which can be stripped chemically in a short period of time with methylene chloride-free and phenol-free paint strippers.

  8. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the amount of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.

  9. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
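The serial building block being parallelized above is the classic Thomas algorithm for tridiagonal systems. A minimal single-processor sketch (not the paper's partitioned multiprocessor solver) looks like:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal.

    a[0] and c[-1] are unused. Runs in O(n). No pivoting is done, which
    is safe for diagonally dominant systems such as the compact-scheme
    systems discussed above.
    """
    n = len(b)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 system with main diagonal 2 and off-diagonals 1; exact solution is all ones.
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```

The sequential data dependence in the forward/backward sweeps is exactly what makes distributed-memory parallelization of compact schemes nontrivial, motivating the substructuring approach above.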

  10. Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies

    DEFF Research Database (Denmark)

    Troelsen, Jens; Klinker, Charlotte Demant; Breum, Lars

    Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies Introduction: Ecological models of health behavior have potential as theoretical framework to comprehend the multiple levels of factors influencing physical...... to be taken into consideration. A theoretical implication of this finding is to develop a site-specific physical activity behavior model adding a layered structure to the ecological model representing the determinants related to the specific site. Support: This study was supported by TrygFonden, Realdania...... activity (PA). The potential is shown by the fact that there has been a dramatic increase in application of ecological models in research and practice. One proposed core principle is that an ecological model is most powerful if the model is behavior-specific. However, based on multi-level interventions...

  11. New Imaging Operation Scheme at VLTI

    Science.gov (United States)

    Haubois, Xavier

    2018-04-01

    After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.

  12. A conditioned level-set method with block-division strategy to flame front extraction based on OH-PLIF measurements

    International Nuclear Information System (INIS)

    Han Yue; Cai Guo-Biao; Xu Xu; Bruno Renou; Abdelkrim Boukhalfa

    2014-01-01

    A novel approach to extract flame fronts, called the conditioned level-set method with block division (CLSB), has been developed. Based on a two-phase level-set formulation, the conditioned initialization and region-lock optimization appear to be beneficial in improving the efficiency and accuracy of the flame contour identification. The original block-division strategy enables the approach to be unsupervised by calculating local self-adaptive threshold values autonomously before binarization. The CLSB approach has been applied to a large set of experimental data involving swirl-stabilized premixed combustion in diluted regimes operating at atmospheric pressure. The OH-PLIF measurements have been carried out in this framework. The resulting images are thus characterized by lower signal-to-noise ratios (SNRs) than the ideal image; relatively complex flame structures lead to significant non-uniformity in the OH signal intensity; and the magnitude of the maximum OH gradient observed along the flame front can also vary depending on flow or local stoichiometry. Compared with other conventional edge-detection operators, the CLSB method demonstrates a good ability to deal with OH-PLIF images at low SNR and in the presence of multiple scales of both OH intensity and OH gradient. The robustness to noise sensitivity and intensity inhomogeneity has been evaluated over a range of experimental images of diluted flames, as well as against a circle test as Ground Truth (GT). (interdisciplinary physics and related areas of science and technology)
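The block-division idea, computing one self-adaptive threshold per block rather than a single global value, can be sketched as follows. A simple per-block mean threshold stands in for the paper's conditioned level-set machinery, and the image is a plain nested list of intensities:

```python
def block_threshold(img, block):
    """Binarize an image using one self-adaptive threshold per block.

    img: 2-D list of intensities; block: block edge length in pixels.
    Each block is thresholded at its own mean, which tolerates the kind
    of non-uniform signal intensity described above.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            t = sum(vals) / len(vals)           # local threshold
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = 1 if img[y][x] > t else 0
    return out
```

Because each threshold is computed from the block it applies to, no global threshold needs to be supplied by the user, which is the sense in which the approach above is unsupervised.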

  13. Level densities

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.

    1998-01-01

    For any application of the statistical theory of nuclear reactions it is very important to obtain the parameters of the level density description from reliable experimental data. The cumulative numbers of low-lying levels and the average spacings between neutron resonances are usually used as such data. The level density parameters fitted to such data are compiled in the RIPL Starter File for the three models most frequently used in practical calculations: i) For the Gilbert-Cameron model the parameters of the Beijing group, based on a rather recent compilation of the neutron resonance and low-lying level densities and included in the beijing-gc.dat file, are chosen as recommended. As alternative versions, the parameters provided by other groups are given in the files jaeri-gc.dat, bombay-gc.dat and obninsk-gc.dat. Additionally, the iljinov-gc.dat and mengoni-gc.dat files include sets of level density parameters that take into account the damping of shell effects at high energies. ii) For the back-shifted Fermi gas model the beijing-bs.dat file is selected as the recommended one. Alternative parameters of the Obninsk group are given in the obninsk-bs.dat file and those of Bombay in bombay-bs.dat. iii) For the generalized superfluid model the Obninsk group parameters included in the obninsk-bcs.dat file are chosen as the recommended ones, and the beijing-bcs.dat file is included as an alternative set of parameters. iv) For the microscopic approach to level densities the files are: obninsk-micro.for - FORTRAN 77 source for the microscopic statistical level density code developed in Obninsk by Ignatyuk and coworkers; moller-levels.gz - Moeller single-particle level and ground state deformation data base; moller-levels.for - retrieval code for the Moeller single-particle level scheme. (author)

  14. Change in Vitamin D Levels Occurs Early after Antiretroviral Therapy Initiation and Depends on Treatment Regimen in Resource-Limited Settings

    Science.gov (United States)

    Havers, Fiona P.; Detrick, Barbara; Cardoso, Sandra W.; Berendes, Sima; Lama, Javier R.; Sugandhavesa, Patcharaphan; Mwelase, Noluthando H.; Campbell, Thomas B.; Gupta, Amita

    2014-01-01

    Study Background Vitamin D has wide-ranging effects on the immune system, and studies suggest that low serum vitamin D levels are associated with worse clinical outcomes in HIV. Recent studies have identified an interaction between antiretrovirals used to treat HIV and reduced serum vitamin D levels, but these studies have been done in North American and European populations. Methods Using a prospective cohort study design nested in a multinational clinical trial, we examined the effect of three combination antiretroviral (cART) regimens on serum vitamin D levels in 270 cART-naïve, HIV-infected adults in nine diverse countries, (Brazil, Haiti, Peru, Thailand, India, Malawi, South Africa, Zimbabwe and the United States). We evaluated the change between baseline serum vitamin D levels and vitamin D levels 24 and 48 weeks after cART initiation. Results Serum vitamin D levels decreased significantly from baseline to 24 weeks among those randomized to efavirenz/lamivudine/zidovudine (mean change: −7.94 [95% Confidence Interval (CI) −10.42, −5.54] ng/ml) and efavirenz/emtricitabine/tenofovir-DF (mean change: −6.66 [95% CI −9.40, −3.92] ng/ml) when compared to those randomized to atazanavir/emtricitabine/didanosine-EC (mean change: −2.29 [95% CI –4.83, 0.25] ng/ml). Vitamin D levels did not change significantly between week 24 and 48. Other factors that significantly affected serum vitamin D change included country (p<0.001), season (p<0.001) and baseline vitamin D level (p<0.001). Conclusion Efavirenz-containing cART regimens adversely affected vitamin D levels in patients from economically, geographically and racially diverse resource-limited settings. This effect was most pronounced early after cART initiation. Research is needed to define the role of Vitamin D supplementation in HIV care. PMID:24752177

  15. The Political Economy of International Emissions Trading Scheme Choice

    DEFF Research Database (Denmark)

    Boom, Jan-Tjeerd; Svendsen, Jan Tinggard

    2000-01-01

    The Kyoto Protocol allows emission trade between the Annex B countries. We consider three schemes of emissions trading: government trading, permit trading and credit trading. The schemes are compared in a public choice setting focusing on group size and rent-seeking from interest groups. We find ...

  16. Insights on different participation schemes to meet climate goals

    International Nuclear Information System (INIS)

    Russ, Peter; Ierland, Tom van

    2009-01-01

    Models and scenarios to assess greenhouse gas mitigation action have become more diversified and detailed, allowing the simulation of more realistic global climate policy set-ups. In this paper, different participation schemes to meet different levels of radiative forcing are analysed. The focus is on scenarios that are in line with the 2 °C target. Typical stylised participation schemes are based either on a perfect global carbon market or on delayed participation with targets only for developed countries, no actions by developing countries and no access to credits from offsetting mechanisms in developing countries. This paper adds an intermediate policy scenario assuming a gradual incorporation of all countries, including a gradually developing carbon market, and taking into account the ability of different parties to contribute. Perfect participation by all parties would be optimal, but it is shown that participation schemes involving a gradual and differentiated participation by all parties can substantially decrease global costs and still meet the 2 °C target. Carbon markets can compensate in part for those costs incurred by developing countries' own, autonomous mitigation actions that do not generate tradable emission credits.

  17. Glycated haemoglobin (HbA1c) and fasting plasma glucose relationships in sea-level and high-altitude settings.

    Science.gov (United States)

    Bazo-Alvarez, J C; Quispe, R; Pillay, T D; Bernabé-Ortiz, A; Smeeth, L; Checkley, W; Gilman, R H; Málaga, G; Miranda, J J

    2017-06-01

    Higher haemoglobin levels and differences in glucose metabolism have been reported among high-altitude residents, which may influence the diagnostic performance of HbA1c. This study explores the relationship between HbA1c and fasting plasma glucose (FPG) in populations living at sea level and at an altitude of > 3000 m. Data from 3613 Peruvian adults without a known diagnosis of diabetes from sea-level and high-altitude settings were evaluated. Linear, quadratic and cubic regression models were performed adjusting for potential confounders. Receiver operating characteristic (ROC) curves were constructed and concordance between HbA1c and FPG was assessed using a Kappa index. At sea level and high altitude, means were 13.5 and 16.7 g/dl (P > 0.05) for haemoglobin level; 41 and 40 mmol/mol (5.9% and 5.8%; P < 0.01) for HbA1c; and 5.8 and 5.1 mmol/l (105 and 91.3 mg/dl; P < 0.001) for FPG, respectively. The adjusted relationship between HbA1c and FPG was quadratic at sea level and linear at high altitude. Adjusted models showed that, to predict an HbA1c value of 48 mmol/mol (6.5%), the corresponding mean FPG values at sea level and high altitude were 6.6 and 14.8 mmol/l (120 and 266 mg/dl), respectively. An HbA1c cut-off of 48 mmol/mol (6.5%) had a sensitivity for high FPG of 87.3% (95% confidence interval [CI] 76.5 to 94.4) at sea level and 40.9% (95% CI 20.7 to 63.6) at high altitude. The relationship between HbA1c and FPG is less clear at high altitude than at sea level. Caution is warranted when using HbA1c to diagnose diabetes mellitus in this setting. © 2017 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.

  18. Hitting emissions targets with (statistical) confidence in multi-instrument Emissions Trading Schemes

    International Nuclear Information System (INIS)

    Shipworth, David

    2003-12-01

    A means of assessing, monitoring and controlling aggregate emissions from multi-instrument Emissions Trading Schemes is proposed. The approach allows contributions from different instruments with different forms of emissions targets to be integrated. Where Emissions Trading Schemes are helping to meet specific national targets, the approach allows the entry requirements of new participants to be calculated and set at a level that will achieve these targets. The approach is multi-levelled, and may be extended downwards to support pooling of participants within instruments, or upwards to embed Emissions Trading Schemes within a wider suite of policies and measures with hard and soft targets. Aggregate emissions from each instrument are treated stochastically. Emissions from the scheme as a whole are then the joint probability distribution formed by integrating the emissions from its instruments. Because a Bayesian approach is adopted, qualitative and semi-qualitative data from expert opinion can be used where quantitative data is not currently available, or is incomplete. This approach helps government retain sufficient control over emissions trading scheme targets to allow them to meet their emissions reduction obligations, while minimising the need for retrospectively adjusting existing participants' conditions of entry. This maintains participant confidence, while providing the necessary policy levers for good governance
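Treating each instrument's aggregate emissions stochastically and integrating them into a joint scheme-level distribution can be sketched with a Monte Carlo draw. The normal marginals, independence assumption, and all numbers below are illustrative; the paper's Bayesian treatment of expert opinion is not reproduced:

```python
import random

def exceedance_probability(instruments, cap, draws=20000, seed=1):
    """Estimate P(total scheme emissions > cap).

    instruments: list of (mean, sd) tuples, one per trading instrument;
    the scheme total is the sum of independent normal draws, a stand-in
    for the joint distribution formed by integrating the instruments.
    """
    rng = random.Random(seed)
    over = 0
    for _ in range(draws):
        total = sum(rng.gauss(mu, sd) for mu, sd in instruments)
        if total > cap:
            over += 1
    return over / draws

# Three hypothetical instruments; the cap sits well above the combined mean,
# so the estimated probability of missing the target is small.
p = exceedance_probability([(100, 10), (50, 5), (30, 8)], cap=210)
```

A regulator could use such an estimate to set entry requirements for new participants at a level that keeps the exceedance probability below an agreed confidence threshold, which is the sense of "statistical confidence" in the title.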

  19. A systematic review on the relationship between the nursing shortage and nurses' job satisfaction, stress and burnout levels in oncology/haematology settings.

    Science.gov (United States)

    Gi, Toh Shir; Devi, Kamala M; Neo Kim, Emily Ang

    2011-01-01

    Nursing shortage is a global issue that affects oncology nursing. Oncology nurses are more prone to experience job dissatisfaction, stress and burnout when they work in units with poor staffing. There is thus a need for greater understanding of the relationship between the nursing shortage and nursing outcomes in oncology/haematology settings. This review aimed to establish the best available evidence concerning the relationship between the nursing shortage and nurses' job satisfaction, stress and burnout levels in oncology/haematology settings, and to make recommendations for practice and future research. Types of participants: This review considered studies that included oncology registered nurses (RNs) who were more than 18 years of age and worked in either inpatient or outpatient oncology/haematology wards or units for adult or paediatric patients. Types of intervention: This review considered studies that evaluated the relationship between the nursing shortage and nurses' job satisfaction, stress and burnout levels in oncology/haematology settings. Types of outcomes: This review included studies that measured job satisfaction, stress and burnout levels using different outcome measures. Job satisfaction was determined by the Measure of Job Satisfaction scale, the Misener Nurse Practitioner Job Satisfaction Scale and the Likert scale; stress by the Pediatric Oncology Nurse Stressor Questionnaire; and burnout by the Maslach Burnout Inventory scale. Types of studies: This review included descriptive/descriptive-correlational studies that were published in English. The search strategy sought to identify published and unpublished studies conducted between 1990 and 2010. Using a three-step search strategy, the following databases were accessed: CINAHL, Medline, Scopus, ScienceDirect, PsycInfo, PsycArticles, Web of Science, The Cochrane Library, Proquest and Mednar.
Two independent reviewers assessed each paper for methodological validity prior to inclusion in

  20. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model.

    Science.gov (United States)

    Homayounfar, Kia; Meis, Johanna; Jung, Klaus; Klosterhalfen, Bernd; Sprenger, Thilo; Conradi, Lena-Christin; Langer, Claus; Becker, Heinz

    2012-02-23

    Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alteration are discordant. We developed an automatic device for standardized sample excision and compared quality and depth of morphological changes caused by UC and ME in a pig model. 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or by using the self-constructed automatic device at standard power levels (60 W cutting in ME, level 5 in UC) for abdominal surgery. Quality of tissue alteration and depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manually vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. At the investigated power level settings UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper (p power levels.

  1. APPLICATION OF ROUGH SET THEORY TO MAINTENANCE LEVEL DECISION-MAKING FOR AERO-ENGINE MODULES BASED ON INCREMENTAL KNOWLEDGE LEARNING

    Institute of Scientific and Technical Information of China (English)

    陆晓华; 左洪福; 蔡景

    2013-01-01

    The maintenance of an aero-engine usually includes three levels, and the maintenance cost and period greatly differ depending on the maintenance level. To plan a reasonable maintenance budget program, airlines would like to predict the maintenance level of an aero-engine before repair in terms of performance parameters, which can provide more economic benefits. The maintenance level decision rules are mined from the historical maintenance data of a civil aero-engine based on rough set theory, and a variety of possible models of updating the rules produced by new maintenance cases added to the historical maintenance case database are investigated by means of incremental machine learning. The continuously updated rules can provide reasonable guidance for engineers and decision support for planning a maintenance budget program before repair. The results of an example show that the decision rules become more typical and robust, and more accurate in predicting the maintenance level of an aero-engine module, as the maintenance data increase, which illustrates the feasibility of the presented method.
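The rule-mining step rests on a rough-set lower approximation: maintenance cases that are indiscernible on their condition attributes and all share the same maintenance level yield certain decision rules. A toy sketch follows; the attribute names and maintenance levels are invented for illustration, not taken from the paper's data:

```python
from collections import defaultdict

def lower_approximation(cases, decision_value):
    """Condition tuples whose whole indiscernibility class has one decision.

    cases: list of (condition_tuple, decision) pairs. Returns the set of
    condition tuples that *certainly* imply decision_value -- i.e. the
    certain maintenance-level rules. Re-running after appending new cases
    mimics the incremental rule updating described above.
    """
    classes = defaultdict(set)
    for cond, dec in cases:
        classes[cond].add(dec)          # group indiscernible cases
    return {cond for cond, decs in classes.items()
            if decs == {decision_value}}

cases = [(("high_EGT", "low_vib"), "level_1"),
         (("high_EGT", "high_vib"), "level_3"),
         (("high_EGT", "high_vib"), "level_2"),   # conflicting class
         (("low_EGT", "low_vib"), "level_1")]
rules = lower_approximation(cases, "level_1")
```

The conflicting ("high_EGT", "high_vib") class produces no certain rule for any level, which is how rough sets separate certain knowledge from boundary cases as the case base grows.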

  2. Awareness and Coverage of the National Health Insurance Scheme ...

    African Journals Online (AJOL)

    Sub-national levels possess a high degree of autonomy in a number of sectors, including health. It is important to assess the level of coverage of the scheme among formal sector workers in Nigeria as a proxy to gauge the extent of coverage of the scheme and derive suitable lessons that could be used in its expansion.

  3. An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited

    International Nuclear Information System (INIS)

    Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P

    2017-01-01

    An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly using the scheme, which involves only a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example, the ground state. Some low-lying level energies of the model for several sets of parameters are calculated, of which one set of results is compared to that obtained from Braak's exact solution proposed recently. It is shown that the derivative of the entanglement measure defined in terms of the reduced von Neumann entropy with respect to the coupling parameter does reach the maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as that in the quantum Rabi model. (paper)

  4. Casemix and rehabilitation: evaluation of an early discharge scheme.

    Science.gov (United States)

    Brandis, S

    2000-01-01

    This paper presents a case study of an early discharge scheme funded by casemix incentives and discusses limitations of a casemix model of funding whereby hospital inpatient care is funded separately from care in other settings. The POSITIVE Rehabilitation program received 151 patients discharged early from hospital in a twelve-month period. Program evaluation demonstrates a 40.9% drop in the average length of stay of rehabilitation patients and a 42.6% drop in average length of stay for patients with stroke. Other benefits of the program include a high level of patient satisfaction, improved carer support and increased continuity of care. The challenge under the Australian interpretation of a casemix model of funding is ensuring the viability of services that extend across acute hospital, non-acute care, and community and home settings.

  5. Multi-criteria decision aid approach for the selection of the best compromise management scheme for ELVs: the case of Cyprus.

    Science.gov (United States)

    Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M

    2007-08-25

    Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternative schemes, the selection of a list of relevant criteria to evaluate these alternative schemes and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method, which belongs to the well-known family of multiple-criteria outranking methods. For this purpose, level, linear and Gaussian functions are used as preference functions.
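The three preference functions named above map a criterion difference d between two alternatives to a preference degree in [0, 1]. The standard textbook forms are sketched below; the indifference and preference thresholds q, p and the Gaussian parameter s are analyst-chosen inputs, not values from the paper:

```python
import math

def level_pref(d, q, p):
    """Level criterion: indifferent up to q, half-preference up to p."""
    if d <= q:
        return 0.0
    return 0.5 if d <= p else 1.0

def linear_pref(d, q, p):
    """Linear criterion: preference ramps from q up to full at p."""
    if d <= q:
        return 0.0
    return min((d - q) / (p - q), 1.0)

def gaussian_pref(d, s):
    """Gaussian criterion: smooth saturation controlled by s."""
    return 0.0 if d <= 0 else 1.0 - math.exp(-d * d / (2 * s * s))
```

In PROMETHEE these per-criterion degrees are weighted and summed into pairwise preference indices, from which positive and negative outranking flows rank the alternative ELV management schemes.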

  6. Bonus schemes and trading activity

    NARCIS (Netherlands)

    Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.

    2014-01-01

    Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of

  7. Successful labelling schemes

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Stacey, Julia

    2001-01-01

    In the spring of 2001 MAPP carried out an extensive consumer study with special emphasis on the Nordic environmentally friendly label 'the swan'. The purpose was to find out how much consumers actually know and use various labelling schemes. 869 households were contacted and asked to fill in a questionnaire... it into consideration when I go shopping. The respondent was asked to pick the most suitable answer, which described her use of each label. 29% - also called 'the labelling blind' - responded that they basically only knew the recycling label and the Government-controlled organic label 'Ø-mærket'. Another segment of 6...

  8. Scheme of stepmotor control

    International Nuclear Information System (INIS)

    Grashilin, V.A.; Karyshev, Yu.Ya.

    1982-01-01

    A six-cycle control scheme for a step motor is described. The block diagram and the basic circuit of the step motor control are presented. The step motor control comprises a pulse shaper, an electronic commutator and power amplifiers. Supplying the step motor from a six-cycle electronic commutator provides higher reliability and accuracy than a three-cycle commutator. The step motor is controlled by a program supplied by an external source of control signals. Time-dependent diagrams for step motor control are presented. The specifications of the step motor are given

  9. Coastal lagoon systems as indicator of Holocene sea-level development in a periglacial soft-sediment setting: Samsø, Denmark

    DEFF Research Database (Denmark)

    Sander, Lasse; Fruergaard, Mikkel; Johannessen, Peter N.

    2014-01-01

    Confined shallow-water environments are encountered in many places along the coast of the inner Danish waters. Despite their common occurrence, these environments have rarely been studied as sedimentary archives. In this study we set out to trace back changes in relative sea-level and associated geomorphological responses in sediment cores retrieved from coastal lagoon systems on the island of Samsø, central Denmark. In the mid-Atlantic period, the post-glacial sea-level rise reached what is today the southern Kattegat Sea. Waves, currents and tides began to erode the unconsolidated moraine material... Stratigraphy, grain-size distribution, fossil and organic matter content of cores retrieved from the lagoons were analyzed and compared. Age control was established using radiocarbon and optically stimulated luminescence dating. Our data produced a surprisingly consistent pattern for the sedimentary...

  10. First UHF Implementation of the Incremental Scheme for Open-Shell Systems.

    Science.gov (United States)

    Anacker, Tony; Tew, David P; Friedrich, Joachim

    2016-01-12

    The incremental scheme makes it possible to compute CCSD(T) correlation energies to high accuracy for large systems. We present the first extension of this fully automated black-box approach to open-shell systems using an Unrestricted Hartree-Fock (UHF) wave function, extending the efficient domain-specific basis set approach to handle open-shell references. We test our approach on a set of organic and metal organic structures and molecular clusters and demonstrate standard deviations from canonical CCSD(T) values of only 1.35 kJ/mol using a triple ζ basis set. We find that the incremental scheme is significantly more cost-effective than the canonical implementation even for relatively small systems and that the ease of parallelization makes it possible to perform high-level calculations on large systems in a few hours on inexpensive computers. We show that the approximations that make our approach widely applicable are significantly smaller than both the basis set incompleteness error and the intrinsic error of the CCSD(T) method, and we further demonstrate that incremental energies can be reliably used in extrapolation schemes to obtain near complete basis set limit CCSD(T) reaction energies for large systems.
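    The extrapolation mentioned at the end of the abstract is commonly done with a two-point inverse-cubic formula for correlation energies (Helgaker-style). The sketch below is an assumption on our part — the record does not spell out which extrapolation the authors used — showing the standard form for cardinal numbers X and Y (e.g. triple- and quadruple-ζ):

    ```python
    def cbs_extrapolate_corr(e_x, e_y, x=3, y=4):
        """Two-point inverse-cubic extrapolation of correlation energies
        from basis sets with cardinal numbers x and y (e.g. TZ/QZ):
        E_CBS = (x^3 E_x - y^3 E_y) / (x^3 - y^3)."""
        return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)
    ```

    The formula is exact for any energy that converges as E(X) = E_CBS + b/X^3, which is the assumed asymptotic behaviour of the correlation energy with basis-set cardinal number.
    
    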

  11. Imaging disturbance zones ahead of a tunnel by elastic full-waveform inversion: Adjoint gradient based inversion vs. parameter space reduction using a level-set method

    Directory of Open Access Journals (Sweden)

    Andre Lamert

    2018-03-01

    We present and compare two flexible and effective methodologies to predict disturbance zones ahead of underground tunnels by using elastic full-waveform inversion. One methodology uses a linearized, iterative approach based on misfit gradients computed with the adjoint method while the other uses iterative, gradient-free unscented Kalman filtering in conjunction with a level-set representation. Whereas the former does not involve a priori assumptions on the distribution of elastic properties ahead of the tunnel, the latter introduces a massive reduction in the number of explicit model parameters to be inverted for by focusing on the geometric form of potential disturbances and their average elastic properties. Both imaging methodologies are validated through successful reconstructions of simple disturbances. As an application, we consider an elastic multiple disturbance scenario. By using identical synthetic time-domain seismograms as test data, we obtain satisfactory, albeit different, reconstruction results from the two inversion methodologies. The computational costs of both approaches are of the same order of magnitude, with the gradient-based approach showing a slight advantage. The model parameter space reduction approach compensates for this by additionally providing a posteriori estimates of model parameter uncertainty. Keywords: Tunnel seismics, Full waveform inversion, Seismic waves, Level-set method, Adjoint method, Kalman filter

  12. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well-defined, simple error-correction scheme that uses erroneous copies at the receiver. Combined with ARQ protocols, it offers higher network throughput than basic ARQ protocols alone. However, the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that corrects errors even when they occur at the same bit location of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, will offer higher throughput. (author)
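    The failure mode, and the intuition behind reversing one copy, can be illustrated with a toy sketch. This is our own illustration under a single-error-per-copy assumption, not the author's exact algorithm: if the second copy is transmitted bit-reversed, channel errors that hit the same channel position land on different logical bit positions, so XOR-ing the re-aligned copies exposes both candidate error locations instead of none.

    ```python
    def channel(bits, error_positions):
        # Simulate a noisy channel by flipping bits at the given positions.
        out = bits[:]
        for p in error_positions:
            out[p] ^= 1
        return out

    def locate_disagreements(copy1, copy2_reversed):
        # Undo the transmit-time reversal of the second copy, then compare:
        # positions where the copies disagree are candidate error locations.
        copy2 = copy2_reversed[::-1]
        return [i for i, (x, y) in enumerate(zip(copy1, copy2)) if x != y]
    ```

    With plain packet combining, two copies hit at the same channel position agree everywhere, so the errors are invisible; with the second copy reversed before transmission, the same channel hit lands on a different logical bit, and both locations show up in the disagreement set.
    
    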

  13. A full quantum network scheme

    International Nuclear Information System (INIS)

    Ma Hai-Qiang; Wei Ke-Jin; Yang Jian-Hui; Li Rui-Xue; Zhu Wu

    2014-01-01

    We present a full quantum network scheme using a modified BB84 protocol. Unlike other quantum network schemes, it allows quantum keys to be distributed between two arbitrary users with the help of an intermediary detecting user. Moreover, it has good expansibility and prevents all potential attacks that exploit loopholes in a detector, so it is more practical to apply. Because the fiber birefringence effects are automatically compensated, the scheme is distinctly stable in principle and in experiment. The simple components required for every user make our scheme easier to adopt in many applications. The experimental results demonstrate the stability and feasibility of this scheme. (general)

  14. Hilbert schemes of points on some classes of surface singularities

    OpenAIRE

    Gyenge, Ádám

    2016-01-01

    We study the geometry and topology of Hilbert schemes of points on the orbifold surface [C^2/G], respectively the singular quotient surface C^2/G, where G is a finite subgroup of SL(2,C) of type A or D. We give a decomposition of the (equivariant) Hilbert scheme of the orbifold into affine space strata indexed by a certain combinatorial set, the set of Young walls. The generating series of Euler characteristics of Hilbert schemes of points of the singular surface of type A or D is computed in...

  15. A hybrid Lagrangian Voronoi-SPH scheme

    Science.gov (United States)

    Fernandez-Gutierrez, D.; Souto-Iglesias, A.; Zohdi, T. I.

    2017-11-01

    A hybrid Lagrangian Voronoi-SPH scheme, with an explicit weakly compressible formulation for both the Voronoi and SPH sub-domains, has been developed. The SPH discretization is substituted by Voronoi elements close to solid boundaries, where SPH consistency and boundary conditions implementation become problematic. A buffer zone to couple the dynamics of both sub-domains is used. This zone is formed by a set of particles where fields are interpolated taking into account SPH particles and Voronoi elements. A particle may move in or out of the buffer zone depending on its proximity to a solid boundary. The accuracy of the coupled scheme is discussed by means of a set of well-known verification benchmarks.

  16. Elevated gamma glutamyl transferase levels are associated with the location of acute pulmonary embolism. Cross-sectional evaluation in hospital setting

    Directory of Open Access Journals (Sweden)

    Ozge Korkmaz

    ABSTRACT CONTEXT AND OBJECTIVE: The location of embolism is associated with clinical findings and disease severity in cases of acute pulmonary embolism. The level of gamma-glutamyl transferase increases under oxidative stress-related conditions. In this study, we investigated whether gamma-glutamyl transferase levels could predict the location of pulmonary embolism. DESIGN AND SETTING: Hospital-based cross-sectional study at Cumhuriyet University, Sivas, Turkey. METHODS: 120 patients who were diagnosed with acute pulmonary embolism through computed tomography-assisted pulmonary angiography were evaluated. They were divided into two main groups (proximally and distally located), and subsequently into subgroups according to thrombus localization as follows: first group (thrombus in main pulmonary artery; n = 9); second group (thrombus in main pulmonary artery branches; n = 71); third group (thrombus in pulmonary artery segmental branches; n = 34); and fourth group (thrombus in pulmonary artery subsegmental branches; n = 8). RESULTS: Gamma-glutamyl transferase levels on admission, heart rate, oxygen saturation, right ventricular dilatation/hypokinesia, pulmonary artery systolic pressure and cardiopulmonary resuscitation requirement showed prognostic significance in univariate analysis. The multivariate logistic regression model showed that gamma-glutamyl transferase level on admission (odds ratio, OR = 1.044; 95% confidence interval, CI: 1.011-1.079; P = 0.009) and pulmonary artery systolic pressure (OR = 1.063; 95% CI: 1.005-1.124; P = 0.033) remained independently associated with proximally localized thrombus in the pulmonary artery. CONCLUSIONS: The findings revealed a significant association between increased embolism load in the pulmonary artery and increased serum gamma-glutamyl transferase levels.
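    As a note on the reported statistics, odds ratios and their confidence intervals of this form follow directly from the logistic-regression coefficients. A generic sketch (the function name and the standard-error value below are illustrative, not taken from the study):

    ```python
    import math

    def odds_ratio_with_ci(beta, se, z=1.96):
        # Odds ratio and 95% CI from a logistic-regression coefficient
        # and its standard error: OR = exp(beta), CI = exp(beta +/- z*SE).
        return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)
    ```

    For example, an OR of 1.044 corresponds to a coefficient of ln(1.044) ≈ 0.043 per unit increase in the predictor.
    
    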

  17. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model

    Science.gov (United States)

    2012-01-01

    Background Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alteration are discordant. We developed an automatic device for standardized sample excision and compared quality and depth of morphological changes caused by UC and ME in a pig model. Methods 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or by using the self-constructed automatic device at standard power levels (60 W cutting in ME, level 5 in UC) for abdominal surgery. Quality of tissue alteration and depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manually vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. Results At the investigated power level settings UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper (p < 0.01) when UC was used compared to ME. The mode of excision (manual versus automatic) did not influence the depth of necrosis (p = 0.85). There was no significant interaction between dissection tool and mode of excision (p = 0.93). Conclusions Thermal injury caused by UC and ME results in qualitatively similar coagulation necrosis. The depth of necrosis is significantly greater in UC compared to ME at investigated standard power levels. PMID:22361346

  18. District health manager and mid-level provider perceptions of practice environments in acute obstetric settings in Tanzania: a mixed-method study.

    Science.gov (United States)

    Ng'ang'a, Njoki; Byrne, Mary Woods; Kruk, Margaret E; Shemdoe, Aloisia; de Pinho, Helen

    2016-08-08

    and mid-level providers points to deficient HRH management practices, which contribute to poor practice environments in acute obstetric settings in Tanzania. Our findings indicate that members of CHMTs require additional support to adequately fulfill their HRH management role. Further research conducted in low-income countries is necessary to determine the appropriate package of interventions required to strengthen the capacity of members of CHMTs.

  19. Evaluation of Parallel Level Sets and Bowsher's Method as Segmentation-Free Anatomical Priors for Time-of-Flight PET Reconstruction.

    Science.gov (United States)

    Schramm, Georg; Holler, Martin; Rezaei, Ahmadreza; Vunckx, Kathleen; Knoll, Florian; Bredies, Kristian; Boada, Fernando; Nuyts, Johan

    2018-02-01

    In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative difference penalty function. For this aim, we first evaluate reconstructions of 30 noise realizations of simulated PET data derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.

  20. Gay-Straight Alliances vary on dimensions of youth socializing and advocacy: factors accounting for individual and setting-level differences.

    Science.gov (United States)

    Poteat, V Paul; Scheer, Jillian R; Marx, Robert A; Calzo, Jerel P; Yoshikawa, Hirokazu

    2015-06-01

    Gay-Straight Alliances (GSAs) are school-based youth settings that could promote health. Yet, GSAs have been treated as homogenous without attention to variability in how they operate or to how youth are involved in different capacities. Using a systems perspective, we considered two primary dimensions along which GSAs function to promote health: providing socializing and advocacy opportunities. Among 448 students in 48 GSAs who attended six regional conferences in Massachusetts (59.8 % LGBQ; 69.9 % White; 70.1 % cisgender female), we found substantial variation among GSAs and youth in levels of socializing and advocacy. GSAs were more distinct from one another on advocacy than socializing. Using multilevel modeling, we identified group and individual factors accounting for this variability. In the socializing model, youth and GSAs that did more socializing activities did more advocacy. In the advocacy model, youth who were more actively engaged in the GSA as well as GSAs whose youth collectively perceived greater school hostility and reported greater social justice efficacy did more advocacy. Findings suggest potential reasons why GSAs vary in how they function in ways ranging from internal provisions of support, to visibility raising, to collective social change. The findings are further relevant for settings supporting youth from other marginalized backgrounds and that include advocacy in their mission.