WorldWideScience

Sample records for sampling scheme matters

  1. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available Presentation outline: 1 Introduction to hyperspectral remote sensing; 2 Objective of Study 1; 3 Study Area; 4 Data used; 5 Methodology; 6 Results; 7 Background and Research Question for Study 2; 8 Study Area and Data; 9 Methodology; 10 Results; 11 Conclusions. Debba (CSIR), Optimal Sampling Schemes applied in Geology, UP 2010.

  2. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper, a sampling scheme is developed for computed tomography (CT) systems that eliminates the need for interpolation. A set of projection angles, along with their corresponding sampling rates, is derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation.

  3. Effects of sparse sampling schemes on image quality in low-dose CT

    International Nuclear Information System (INIS)

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-01-01

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT), which can potentially reduce the health risk related to radiation dose. In particular, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on image quality. Methods: Data properties of several sampling schemes are analyzed with respect to CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse-sampling schemes and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with image quality similar to the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic
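
    As a simple illustration of the sparse-view idea (a minimal sketch, not the authors' code), keeping every k-th projection of a fully sampled circular scan yields the dose-reduction factors quoted above, e.g. k = 4 for 75% and k = 8 for 87.5%:

```python
import numpy as np

def sparse_view_subset(n_views=360, keep_every=4):
    """Indices of retained projection views for a sparse-view scan.

    Keeping every `keep_every`-th view of a fully sampled scan reduces the
    delivered dose by a factor of 1 - 1/keep_every (75% for 4, 87.5% for 8).
    """
    return np.arange(0, n_views, keep_every)

full_angles = np.linspace(0.0, 360.0, 360, endpoint=False)  # fully sampled circular scan
kept = sparse_view_subset(n_views=full_angles.size, keep_every=4)
print(f"{kept.size} of {full_angles.size} views retained "
      f"-> dose reduction {100 * (1 - kept.size / full_angles.size):.1f}%")
```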

  4. Impact of renewables on electricity markets – Do support schemes matter?

    International Nuclear Information System (INIS)

    Winkler, Jenny; Gaio, Alberto; Pfluger, Benjamin; Ragwitz, Mario

    2016-01-01

    Rising renewable shares influence electricity markets in several ways: among others, average market prices are reduced and price volatility increases. Therefore, the “missing money problem” in energy-only electricity markets is more likely to occur in systems with high renewable shares. Nevertheless, renewables are supported in many countries due to their expected benefits. The type of support instrument can, however, affect the degree to which renewables influence the market. While fixed feed-in tariffs lead to higher market impacts, more market-oriented support schemes such as market premiums, quota systems and capacity-based payments decrease the extent to which markets are affected. This paper analyzes the market impacts of different support schemes. For this purpose, a new module is added to an existing bottom-up simulation model of the electricity market. In addition, different degrees of flexibility in the electricity system are considered. A case study for Germany is used to derive policy recommendations regarding the choice of support scheme. - Highlights: •Renewable support schemes matter regarding the impact on electricity markets. •Market-oriented support schemes reduce the impact on electricity markets. •More flexible electricity systems reduce the need for market participation. •Sliding premiums combine market integration with a productive risk allocation.

  5. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  6. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between the two models was then carried out. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.
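
    A minimal sketch of the model comparison described above, with hypothetical data standing in for the optimized sample sets and the six topographic factors; scikit-learn's LinearRegression and MLPRegressor are used here as stand-ins for the study's regression and multilayer perceptron models:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical sample set: rows are sampling points, columns the six topographic
# factors named in the abstract (slope, plane/profile curvature, wetness index,
# stream power index, sediment transport index); y is soil organic matter.
X = rng.normal(size=(120, 6))
y = 1.5 + X @ np.array([0.8, -0.3, 0.2, 0.6, 0.1, -0.4]) + rng.normal(scale=0.5, size=120)

models = [
    ("multiple linear regression", LinearRegression()),
    ("multilayer perceptron", MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)),
]
for name, model in models:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```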

  7. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging.

    Science.gov (United States)

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-04-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, covering the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further distinguish two cases according to whether the missing timing sequence between the probe injection and the sampling starting time is taken into account. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point and has a strong correlation with the sampling sparsity. The growth trend is much more pronounced if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operation.

  8. Teaching the Conceptual Scheme "The Particle Nature of Matter" in the Elementary School.

    Science.gov (United States)

    Pella, Milton O.; And Others

    Conclusions of an extensive project aimed to prepare lessons and associated materials related to teaching concepts included in the scheme "The Particle Nature of Matter" for grades two through six are presented. The hypothesis formulated for the project was that children in elementary schools can learn theoretical concepts related to the particle…

  9. Prospective and retrospective spatial sampling scheme to characterize geochemicals in a mine tailings area

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This study demonstrates that designing sampling schemes using simulated annealing results in much better selection of samples from an existing scheme in terms of prediction accuracy. The presentation to the SASA Eastern Cape Chapter as an invited...

  10. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  11. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling. Previous studies mainly addressed code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. The principles set for the development of this new sampling scheme include completeness, appropriate stratification, optimum allocation, practicability and so on. In this report, discussions are made about the procedures of the new sampling scheme and its application. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of the complementary cumulative distribution function can remain relatively stable across different trials of the probabilistic consequence assessment code. (author)

  12. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high cost of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevents their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the design of additional sampling schemes was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
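
    The extended SSA step itself is not reproduced in the abstract; the following is a minimal, generic simulated-annealing skeleton for relocating sampling points. The objective function (for example, the mean kriging variance over a prediction grid, evaluated by an external geostatistics routine), the move size and the cooling schedule are all illustrative assumptions:

```python
import numpy as np

def spatial_simulated_annealing(points, objective, bounds, n_iter=5000,
                                t0=1.0, cooling=0.999, step=25.0, seed=0):
    """Generic SSA loop: perturb one sampling point at a time and accept the
    move with the Metropolis criterion, so that `objective(points)` (e.g. the
    mean kriging variance over a prediction grid) is gradually minimized.

    bounds = (xmin, xmax, ymin, ymax) keeps candidate points inside the field.
    """
    rng = np.random.default_rng(seed)
    current = points.copy()
    best = current.copy()
    f_cur = f_best = objective(current)
    t = t0
    for _ in range(n_iter):
        cand = current.copy()
        i = rng.integers(len(cand))
        cand[i] = np.clip(cand[i] + rng.normal(scale=step, size=2),
                          [bounds[0], bounds[2]], [bounds[1], bounds[3]])
        f_cand = objective(cand)
        if f_cand < f_cur or rng.random() < np.exp((f_cur - f_cand) / t):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current.copy(), f_cur
        t *= cooling  # geometric cooling law (illustrative)
    return best, f_best
```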

  13. Evaluation of diesel particulate matter sampling techniques

    CSIR Research Space (South Africa)

    Pretorius, CJ

    2011-09-01

    Full Text Available The study evaluated diesel particulate matter (DPM) sampling methods used in the South African mining industry. The three-piece cassette respirable, open face and stopper sampling methods were compared with the SKC DPM cassette method to find a...

  14. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
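
    As a rough sketch of how the three kinds of designs can be generated (plain LHS stands in for the PAE-optimized designs, which are not available in SciPy), standard Gaussian vectors are obtained by pushing unit-cube designs through the inverse normal CDF, as Asymptotic Sampling requires:

```python
import numpy as np
from scipy.stats import norm, qmc

dim, n = 6, 256  # dimension of the reliability problem, design size (power of 2 for Sobol')

rng = np.random.default_rng(0)
designs = {
    "crude Monte Carlo": rng.random((n, dim)),
    "Latin Hypercube":   qmc.LatinHypercube(d=dim, seed=0).random(n),
    "Sobol' sequence":   qmc.Sobol(d=dim, scramble=True, seed=0).random(n),
}

# Asymptotic Sampling needs standard Gaussian vectors, so map the unit-cube
# designs through the inverse normal CDF.
for name, u in designs.items():
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
    print(f"{name:18s}: sample mean {z.mean():+.3f}, sample std {z.std():.3f}")
```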

  15. Outline of experimental schemes for measurements of thermophysical and transport properties in warm dense matter at GSI and FAIR

    International Nuclear Information System (INIS)

    Tauschwitz, Anna; Jacoby, Joachim; Maruhn, Joachim; Basko, Mikhail; Efremov, Vladimir; Iosilevskiy, Igor; Neumayer, Paul; Novikov, Vladimir; Tauschwitz, Andreas; Rosmej, Frank

    2010-01-01

    Different experimental schemes for investigation of warm dense matter produced with intense energetic ion beams are presented. The described target configurations allow direct measurements of thermophysical and transport properties of warm dense matter without hydrodynamic recalculations. The presented experiments will be realized at the current GSI synchrotron SIS-18 and the future FAIR facility in the framework of the WDM-collaboration.

  16. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy for the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
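
    As an illustration of the first two criteria named above (a minimal sketch, not the MSANOS implementation), the MMSD and MWMSD objectives can be written as follows, assuming the prediction grid and the ECa gradient weights are already available as arrays:

```python
import numpy as np
from scipy.spatial import cKDTree

def mmsd(sample_xy, grid_xy):
    """Mean of the Shortest Distances: average distance from every
    prediction-grid node to its nearest sampling point (to be minimized)."""
    dist, _ = cKDTree(sample_xy).query(grid_xy)
    return dist.mean()

def mwmsd(sample_xy, grid_xy, weights):
    """Weighted MMSD: same distances, weighted e.g. by the digital gradient
    of the gridded bulk electrical conductivity (ECa)."""
    dist, _ = cKDTree(sample_xy).query(grid_xy)
    w = np.asarray(weights, dtype=float)
    return np.average(dist, weights=w / w.sum())

# Example: 30 points spread over a 100 m x 100 m field, evaluated on a 50 x 50 grid.
rng = np.random.default_rng(0)
samples = rng.uniform(0, 100, size=(30, 2))
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print("MMSD criterion:", mmsd(samples, grid))
```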

  17. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    Science.gov (United States)

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.

  18. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available Case studies on using remote sensing images to design optimal field sampling schemes: optimized field sampling representing the overall distribution of a particular mineral, and deriving optimal exploration target zones. Continuum removal for vegetation [13, 27, 46]: the convex hull transform is a method of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...

  19. An Experiment and Detection Scheme for Cavity-Based Light Cold Dark Matter Particle Searches

    Directory of Open Access Journals (Sweden)

    Masroor H. S. Bukhari

    2017-01-01

    Full Text Available A resonance detection scheme and some useful ideas for cavity-based searches of light cold dark matter particles (such as axions) are presented, as an effort to aid the on-going endeavors in this direction as well as future experiments, especially in possibly developing a table-top experiment. The scheme is based on our idea of a resonant detector, incorporating an integrated tunnel diode (TD) and GaAs HEMT/HFET (High-Electron Mobility Transistor/Heterogeneous FET) transistor amplifier, weakly coupled to a cavity in a strong transverse magnetic field. The TD-amplifier combination is suggested as a sensitive and simple technique to facilitate resonance detection within the cavity while maintaining excellent noise performance, whereas our proposed Halbach magnet array could serve as a low-noise and permanent solution replacing the conventional electromagnet scheme. We present some preliminary test results which demonstrate resonance detection from simulated test signals in a small optimal axion mass range with superior signal-to-noise ratios (SNR). Our suggested design also contains an overview of a simpler on-resonance dc signal read-out scheme replacing the complicated heterodyne read-out. We believe that all these factors and our propositions could possibly improve or at least simplify the resonance detection and read-out in cavity-based DM particle detection searches (and other spectroscopy applications) and reduce the complications (and associated costs), in addition to reducing the electromagnetic interference and background.

  20. The effect of sampling scheme in the survey of atmospheric deposition of heavy metals in Albania by using moss biomonitoring.

    Science.gov (United States)

    Qarri, Flora; Lazo, Pranvera; Bekteshi, Lirim; Stafilov, Trajce; Frontasyeva, Marina; Harmens, Harry

    2015-02-01

    The atmospheric deposition of heavy metals in Albania was investigated by using a carpet-forming moss species (Hypnum cupressiforme) as a bioindicator. Sampling was done in the dry seasons of autumn 2010 and summer 2011. Two different sampling schemes are discussed in this paper: a random sampling scheme with 62 sampling sites distributed over the whole territory of Albania, and a systematic sampling scheme with 44 sampling sites distributed over the same territory. Unwashed, dried samples were totally digested by using microwave digestion, and the concentrations of metal elements were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and AAS (Cd and As). Twelve elements, comprising conservative and trace elements (Al, Fe, As, Cd, Cr, Cu, Ni, Mn, Pb, V, Zn, and Li), were measured in the moss samples; Li is included as a typical lithogenic element. The results reflect local emission points. The median concentrations and statistical parameters of the elements were discussed by comparing the two sampling schemes, and the results of both schemes are compared with the results of other European countries. Different levels of contamination, evaluated by the respective contamination factor (CF) of each element, are obtained for the two sampling schemes, while the local emitters identified, such as iron-chromium metallurgy, the cement industry, oil refining, mining, and transport, were the same for both sampling schemes. In addition, natural sources, i.e. the accumulation of these metals in mosses from metal-enriched, wind-blown soil, were identified as another possible local emission factor.
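
    For reference, the contamination factor is commonly computed as the ratio of the measured concentration in moss to a background (reference) concentration. A minimal sketch follows; the numbers below are placeholders, not the survey's data:

```python
# Contamination factor per element: CF = C_measured / C_background.
measured_median = {"Pb": 8.4, "Cr": 6.1, "Ni": 4.9, "Cd": 0.21}  # mg/kg dry moss (hypothetical)
background      = {"Pb": 2.5, "Cr": 1.2, "Ni": 1.6, "Cd": 0.10}  # reference levels (hypothetical)

for element, concentration in measured_median.items():
    cf = concentration / background[element]
    print(f"{element}: CF = {cf:.1f}")
```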

  1. Axially perpendicular offset Raman scheme for reproducible measurement of housed samples in a noncircular container under variation of container orientation.

    Science.gov (United States)

    Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil

    2015-03-17

    An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.

  2. Evaluation of alternative macroinvertebrate sampling techniques for use in a new tropical freshwater bioassessment scheme

    OpenAIRE

    Isabel Eleanor Moore; Kevin Joseph Murphy

    2015-01-01

    Aim: The study aimed to determine the effectiveness of benthic macroinvertebrate dredge net sampling procedures as an alternative method to kick net sampling in tropical freshwater systems, specifically as an evaluation of sampling methods used in the Zambian Invertebrate Scoring System (ZISS) river bioassessment scheme. Tropical freshwater ecosystems are sometimes dangerous or inaccessible to sampling teams using traditional kick-sampling methods, so identifying an alternative procedure that...

  3. Activation analysis of air particulate matter

    International Nuclear Information System (INIS)

    Alian, A.; Sansoni, B.

    1988-11-01

    This review on activation analysis of air particulate matter is an extended and updated version of a review given by the same authors in 1985. The main part is devoted to the analytical scheme and covers rules and techniques for sampling, sample and standard preparation, irradiation and counting procedures, as well as data processing, evaluation, and presentation. Additional chapters deal with relative and monostandard methods, the use of activation analysis for atmospheric samples in various localities, and levels of toxic and other elements in the atmosphere. The review contains 190 references. (RB)

  4. Evaluation of alternative macroinvertebrate sampling techniques for use in a new tropical freshwater bioassessment scheme

    Directory of Open Access Journals (Sweden)

    Isabel Eleanor Moore

    2015-06-01

    Full Text Available Aim: The study aimed to determine the effectiveness of benthic macroinvertebrate dredge net sampling procedures as an alternative method to kick net sampling in tropical freshwater systems, specifically as an evaluation of sampling methods used in the Zambian Invertebrate Scoring System (ZISS) river bioassessment scheme. Tropical freshwater ecosystems are sometimes dangerous or inaccessible to sampling teams using traditional kick-sampling methods, so identifying an alternative procedure that produces similar results is necessary in order to collect data from a wide variety of habitats. Methods: Both kick and dredge nets were used to collect macroinvertebrate samples at 16 riverine sites in Zambia, ranging from backwaters and floodplain lagoons to fast flowing streams and rivers. The data were used to calculate ZISS, diversity (S: number of taxa present), and Average Score Per Taxon (ASPT) scores per site, using the two sampling methods to compare their sampling effectiveness. Environmental parameters, namely pH, conductivity, underwater photosynthetically active radiation (PAR), temperature, alkalinity, flow, and altitude, were also recorded and used in statistical analysis. Invertebrate communities present at the sample sites were determined using multivariate procedures. Results: Analysis of the invertebrate community and environmental data suggested that the testing exercise was undertaken in four distinct macroinvertebrate community types, supporting at least two quite different macroinvertebrate assemblages, and showing significant differences in habitat conditions. Significant correlations were found for all three bioassessment score variables between results acquired using the two methods, with dredge sampling normally producing lower scores than the kick net procedures. Linear regression models were produced in order to correct each biological variable score collected by a dredge net to a score similar to that of one collected by kick net
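
    A minimal sketch of the correction step described above, with hypothetical paired site scores (the study fitted one model per biological variable); a simple least-squares line maps a dredge-net score to its kick-net equivalent:

```python
import numpy as np

# Hypothetical paired site scores (dredge vs. kick) for one biological variable, e.g. the ZISS score.
dredge = np.array([42, 55, 61, 38, 70, 49, 58, 66, 44, 52], dtype=float)
kick   = np.array([50, 63, 72, 47, 81, 57, 69, 78, 53, 61], dtype=float)

slope, intercept = np.polyfit(dredge, kick, deg=1)

def kick_equivalent(dredge_score):
    """Correct a dredge-net score to its estimated kick-net equivalent."""
    return slope * dredge_score + intercept

print(f"kick ≈ {slope:.2f} * dredge + {intercept:.2f}")
print(f"dredge score 60 -> kick-equivalent {kick_equivalent(60):.1f}")
```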

  5. Properties of quark matter governed by quantum chromodynamics. Pt. 2

    International Nuclear Information System (INIS)

    Soni, V.

    1983-01-01

    Renormalization schemes are examined (in the Coulomb gauge) for quantum chromodynamics in the presence of quark matter. We demand that the effective coupling constant for all schemes become congruent with the vacuum QCD running coupling constant as the matter chemical potential, μ, goes to zero. Also, to enable us to standardize with the vacuum QCD running coupling constant at some asymptotic momentum transfer, |p_0|, we keep μ << |p_0|, to ensure that the matter contribution is negligible at this point. This means all schemes merge with vacuum QCD at |p_0| and beyond. Two renormalization group invariants are shown to emerge: (I) the effective or invariant charge, g_inv^2, which is, however, scheme dependent, and (II) g^2(M)/S(M), where S(M)^(-1) is the Coulomb propagator, which is scheme independent. The only scheme in which g_inv^2 is scheme independent and identical to g^2(M)/S(M) is the screened charge scheme (previous paper), characterised by the normalization of the entire Green function, S^(-1), to unity. We conclude that this is the scheme to be used if one wants to identify with the experimental effective coupling in perturbation theory. However, if we do not restrict ourselves to perturbation theory, all schemes should be allowed. Although we discuss matter QCD in the Coulomb gauge, the above considerations are quite general to gauge theories in the presence of matter. (orig.)

  6. Scheme dependence of quantum gravity on de Sitter background

    Energy Technology Data Exchange (ETDEWEB)

    Kitamoto, Hiroyuki, E-mail: kitamoto@post.kek.jp [KEK Theory Center, Tsukuba, Ibaraki 305-0801 (Japan); Kitazawa, Yoshihisa, E-mail: kitazawa@post.kek.jp [KEK Theory Center, Tsukuba, Ibaraki 305-0801 (Japan); The Graduate University for Advanced Studies (Sokendai), Department of Particle and Nuclear Physics, Tsukuba, Ibaraki 305-0801 (Japan)

    2013-08-11

    We extend our investigation of the IR effects on the local dynamics of matter fields in quantum gravity. Specifically, we clarify how the IR effects depend on the change of the quantization scheme: different parametrizations of the metric and matter field redefinitions. Conformal invariance implies effective Lorentz invariance of the matter system in de Sitter space. An arbitrary choice of the parametrization of the metric and the matter field redefinition does not preserve the effective Lorentz invariance of the local dynamics. As for the effect of a different parametrization of the metric alone, the effective Lorentz symmetry breaking term can be eliminated by shifting the background metric. In contrast, we cannot compensate for the matter field redefinition dependence in such a way. The effective Lorentz invariance can be retained only when we adopt the specific matter field redefinitions where all dimensionless couplings become scale invariant at the classical level. This scheme is also singled out by unitarity, as the kinetic terms are canonically normalized.

  7. A new configurational bias scheme for sampling supramolecular structures

    Energy Technology Data Exchange (ETDEWEB)

    De Gernier, Robin; Mognetti, Bortolo M., E-mail: bmognett@ulb.ac.be [Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles, Code Postal 231, Campus Plaine, B-1050 Brussels (Belgium); Curk, Tine [Department of Chemistry, University of Cambridge, Cambridge CB2 1EW (United Kingdom); Dubacheva, Galina V. [Biosurfaces Unit, CIC biomaGUNE, Paseo Miramon 182, 20009 Donostia - San Sebastian (Spain); Richter, Ralf P. [Biosurfaces Unit, CIC biomaGUNE, Paseo Miramon 182, 20009 Donostia - San Sebastian (Spain); Université Grenoble Alpes, DCM, 38000 Grenoble (France); CNRS, DCM, 38000 Grenoble (France); Max Planck Institute for Intelligent Systems, 70569 Stuttgart (Germany)

    2014-12-28

    We present a new simulation scheme which allows an efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network by a non-local Monte Carlo algorithm. Such a plan is accomplished by multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for the free energy terms involved in the reactions of the active sites. We test the new algorithm on a system of DNA-coated colloids for which we compute the hybridisation free energy cost associated with the binding of tethered single-stranded DNAs terminated by short sequences of complementary nucleotides. In order to demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand–receptor complexes formed. Such a quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean field theory. We believe that the proposed method will be a useful tool to investigate supramolecular structures resulting from direct interactions between functionalized polymers, for which efficient numerical methodologies of investigation are still lacking.

  8. A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics

    Science.gov (United States)

    Zhang, Zhijun; Liu, Xinzijian; Chen, Zifei; Zheng, Haifeng; Yan, Kangyu; Liu, Jian

    2017-07-01

    We show a unified second-order scheme for constructing simple, robust, and accurate algorithms for typical thermostats for configurational sampling for the canonical ensemble. When Langevin dynamics is used, the scheme leads to the BAOAB algorithm that has been recently investigated. We show that the scheme is also useful for other types of thermostats, such as the Andersen thermostat and Nosé-Hoover chain, regardless of whether the thermostat is deterministic or stochastic. In addition to analytical analysis, two 1-dimensional models and three typical real molecular systems that range from the gas phase, clusters, to the condensed phase are used in numerical examples for demonstration. Accuracy may be increased by an order of magnitude for estimating coordinate-dependent properties in molecular dynamics (when the same time interval is used), irrespective of which type of thermostat is applied. The scheme is especially useful for path integral molecular dynamics because it consistently improves the efficiency for evaluating all thermodynamic properties for any type of thermostat.
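
    A minimal one-dimensional sketch of the Langevin (BAOAB) case follows, assuming a harmonic potential; the parameter values are illustrative, and the path-integral and Nosé-Hoover variants discussed in the abstract are not shown:

```python
import numpy as np

def baoab_step(x, p, force, dt, mass=1.0, gamma=1.0, kT=1.0, rng=None):
    """One BAOAB step of Langevin dynamics: B (half kick), A (half drift),
    O (exact Ornstein-Uhlenbeck update of the momentum), then A and B again."""
    rng = rng or np.random.default_rng()
    p = p + 0.5 * dt * force(x)            # B
    x = x + 0.5 * dt * p / mass            # A
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt((1.0 - c1 * c1) * mass * kT)
    p = c1 * p + c2 * rng.normal()         # O
    x = x + 0.5 * dt * p / mass            # A
    p = p + 0.5 * dt * force(x)            # B
    return x, p

# Example: harmonic oscillator U(x) = 0.5 x^2, so force(x) = -x and <x^2> = kT = 1.
rng = np.random.default_rng(0)
x, p, samples = 1.0, 0.0, []
for _ in range(100000):
    x, p = baoab_step(x, p, force=lambda q: -q, dt=0.1, rng=rng)
    samples.append(x)
print("<x^2> estimate:", np.mean(np.square(samples[10000:])), "(exact value 1.0)")
```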

  9. Continuous quality control of the blood sampling procedure using a structured observation scheme

    DEFF Research Database (Denmark)

    Seemann, T. L.; Nybo, M.

    2015-01-01

    Background: An important preanalytical factor is the blood sampling procedure and its adherence to the guidelines, i.e. CLSI and ISO 15189, in order to ensure a consistent quality of the blood collection. Therefore, it is critically important to introduce quality control on this part of the process... As suggested by the EFLM working group on the preanalytical phase, we introduced continuous quality control of the blood sampling procedure using a structured observation scheme to monitor the quality of blood sampling performed on an everyday basis. Materials and methods: Based on our own routines the EFLM... Conclusion: It is possible to establish continuous quality control on blood sampling. It has been well accepted by the staff and we have already been able to identify critical areas in the sampling process. We find that continuous auditing increases focus on the quality of blood collection, which ensures...

  10. Screening of various diesel particulate matter samples from various commodity mines

    CSIR Research Space (South Africa)

    Mahlangu, Vusi J

    2016-09-01

    Full Text Available This paper presents qualitative analysis results of diesel particulate matter (DPM) from various mining commodities in South Africa. The objective of this work was to determine the concentrations of elements in DPM samples. For this screening...

  11. Samplings of urban particulate matter for mutagenicity assays

    International Nuclear Information System (INIS)

    De Zaiacono, T.

    1996-07-01

    In the framework of a specific programme on the evaluation of the mutagenic activity of urban particulate matter, an experimental arrangement was developed to sample aerosuspended particles from the external environment, carried indoors by means of a fan. The instrumentation was placed directly in the air flow to minimize particle losses, and consisted of a total filter, collecting particles without any size separation; a cascade impactor, fractionating the urban particulate to obtain separate samples for analyses; an optical device for real-time monitoring of the aerosol concentration; and temperature and relative humidity sensors. Some of the samples obtained were analysed to investigate particle morphology, aerosol granulometric distributions, the effect of relative humidity on the collected particulate, and the ponderal mass compared with real-time optical determinations. The results obtained are reported here, together with some considerations on carbonaceous particles, which in urban areas originate mainly from diesel exhaust, their degree of agglomeration and their role in carrying substances into the human respiratory

  12. Emergent Braided Matter of Quantum Geometry

    Directory of Open Access Journals (Sweden)

    Sundance Bilson-Thompson

    2012-03-01

    Full Text Available We review and present a few new results of the program of emergent matter as braid excitations of quantum geometry that is represented by braided ribbon networks. These networks are a generalisation of the spin networks proposed by Penrose and those in models of background independent quantum gravity theories, such as Loop Quantum Gravity and Spin Foam models. This program has been developed in two parallel but complementary schemes, namely the trivalent and tetravalent schemes. The former studies the braids on trivalent braided ribbon networks, while the latter investigates the braids on tetravalent braided ribbon networks. Both schemes have been fruitful. The trivalent scheme has been quite successful at establishing a correspondence between braids and Standard Model particles, whereas the tetravalent scheme has naturally substantiated a rich, dynamical theory of interactions and propagation of braids, which is ruled by topological conservation laws. Some recent advances in the program indicate that the two schemes may converge to yield a fundamental theory of matter in quantum spacetime.

  13. Evaluation of sampling schemes for in-service inspection of steam generator tubing

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1990-03-01

    This report is a follow-on of work initially sponsored by the US Nuclear Regulatory Commission (Bowen et al. 1989). The work presented here is funded by EPRI and is jointly sponsored by the Electric Power Research Institute (EPRI) and the US Nuclear Regulatory Commission (NRC). The goal of this research was to evaluate fourteen sampling schemes or plans. The main criterion used for evaluating plan performance was the effectiveness in sampling, detecting and plugging defective tubes. The performance criterion was evaluated across several choices of distributions of degraded/defective tubes, probability of detection (POD) curves and eddy-current sizing models. Conclusions from this study are dependent upon the tube defect distributions, sample size, and expansion rules considered. As degraded/defective tubes form "clusters" (i.e., maps 6A, 8A and 13A), the smaller sample sizes provide a capability of detecting and sizing defective tubes that approaches 100% inspection. When there is little or no clustering (i.e., maps 1A, 20 and 21), sample efficiency is approximately equal to the initial sample size taken. There is an indication (though not statistically significant) that the systematic sampling plans are better than the random sampling plans for equivalent initial sample size. There was no indication of an effect due to modifying the threshold value for the second stage expansion. The lack of an indication is likely due to the specific tube flaw sizes considered for the six tube maps. 1 ref., 11 figs., 19 tabs

  14. Association of plasma homocysteine and white matter hypodensities in a sample of stroke patients

    International Nuclear Information System (INIS)

    Naveed, G.

    2015-01-01

    Studies of homocysteine in vascular disorders have yielded conflicting data. There are also differences based on various ethnicities and cultures. In this study, we examined homocysteine patterns in local stroke patients, so as to ascertain the homocysteine status in a sample of the local population. The relationship between homocysteine and white matter hypodensities in stroke is emerging as an important aspect of stroke pathophysiology and is thought to have prognostic and therapeutic value. Methods: We included 150 stroke patients who were diagnosed as having clinical stroke on the basis of history, physical examination and CT (computerized tomography) scan of the brain. These patients were recruited from the neurology and emergency wards of two public sector hospitals of Lahore. The presence or absence of white matter hypodensities was determined in consultation with a radiologist. Blood samples were collected from the same stroke patients. Results: We found a strong association between white matter hypodensities and total homocysteine in the plasma of stroke patients (p<0.001). Conclusion: Homocysteine is a risk factor for white matter hypodensities in the stroke patients in our study. (author)

  15. Relevance of sampling schemes in light of Ruelle's linear response theory

    International Nuclear Information System (INIS)

    Lucarini, Valerio; Wouters, Jeroen; Faranda, Davide; Kuna, Tobias

    2012-01-01

    We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that using a general functional decomposition for space–time dependent forcings, we can define elementary susceptibilities that allow us to construct the linear response of the system to general perturbations. Starting from the definition of SRB measure, we then study the consequence of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick

  16. QCD axion dark matter from long-lived domain walls during matter domination

    OpenAIRE

    Harigaya, Keisuke; Kawasaki, Masahiro

    2018-01-01

    The domain wall problem of the Peccei–Quinn mechanism can be solved if the Peccei–Quinn symmetry is explicitly broken by a small amount. Domain walls decay into axions, which may account for dark matter of the universe. This scheme is however strongly constrained by overproduction of axions unless the phase of the explicit breaking term is tuned. We investigate the case where the universe is matter-dominated around the temperature of the MeV scale and domain walls decay during this matter dom...

  17. [Influence of Natural Dissolved Organic Matter on the Passive Sampling Technique and its Application].

    Science.gov (United States)

    Yu, Shang-yun; Zhou, Yan-mei

    2015-08-01

    This paper studied the effects of different concentrations of natural dissolved organic matter (DOM) on the passive sampling technique. The results showed that the presence of DOM affected the organic pollutant adsorption ability of the membrane. For lg K_OW values of 3-5, DOM had less impact on the adsorption of organic matter by the membrane; for lg K_OW > 5.5, DOM significantly increased the adsorption capacity of the membrane. Meanwhile, the LDPE passive sampling technique was applied to monitor PAHs and PAEs in the pore water of three surface sediments in the Taizi River. All of the target pollutants were detected to varying degrees at each sampling point. Finally, the quotient method was used to assess the ecological risks of PAHs and PAEs. The results showed that fluoranthene exceeded the reference value for the aquatic ecosystem, indicating a considerable ecological risk.

  18. Studies of thermophysical properties of high-energy-density states in matter using intense heavy ion beams at the future Fair accelerator facilities: The HEDgeHOB collaboration

    International Nuclear Information System (INIS)

    Tahir, N.A.; Deutsch, C.; Hoffmann, D.H.H.; Shutov, A.; Lomonosov, I.V.; Gryaznov, V.; Fortov, V.E.; Hoffmann, D.H.H.; Ni, P.; Udrea, S.; Varentsov, D.; Piriz, A.R.; Wouchuk, G.

    2006-01-01

    Intense beams of energetic heavy ions are believed to be a very efficient and novel tool to create states of High-Energy-Density (HED) in matter. This paper shows, with the help of numerical simulations, that the heavy ion beams that will be generated at the future Facility for Antiprotons and Ion Research (FAIR) will allow one to use two different experimental schemes to study HED states in matter. The German government has recently approved the construction of FAIR at Darmstadt. The first scheme, named HIHEX (Heavy Ion Heating and EXpansion), will generate high-pressure, high-entropy states in matter by volumetric isochoric heating. The heated material will then be allowed to expand in an isentropic way. Using this scheme, it will be possible to study important regions of the phase diagram that are either difficult to access or even inaccessible using traditional methods of shock compression. The second scheme would allow one to achieve low-entropy compression of a sample material such as hydrogen or water to produce conditions that are believed to exist in the interiors of the giant planets. This scheme is named LAPLAS, after Laboratory Planetary Sciences. (authors)

  19. Dynamics of a magnetic monopole in matter, Maxwell equations in dyonic matter and detection of electric dipole moments

    International Nuclear Information System (INIS)

    Artru, X.; Fayolle, D.

    2001-01-01

    For a monopole, the analogue of the Lorentz equation in matter is shown to be f = g(H - v × D). Dual-symmetric Maxwell equations, for matter containing hidden magnetic charge in addition to electric ones, are given. They apply as well to ordinary matter if the particles possess T-violating electric dipole moments. Two schemes of experiments for the detection of such moments in macroscopic pieces of matter are proposed

  20. A numerical relativity scheme for cosmological simulations

    Science.gov (United States)

    Daverio, David; Dirian, Yves; Mitsou, Ermis

    2017-12-01

    Cosmological simulations involving the fully covariant gravitational dynamics may prove relevant in understanding relativistic/non-linear features and, therefore, in taking better advantage of the upcoming large scale structure survey data. We propose a new 3+1 integration scheme for general relativity in the case where the matter sector contains a minimally-coupled perfect fluid field. The original feature is that we completely eliminate the fluid components through the constraint equations, thus remaining with a set of unconstrained evolution equations for the rest of the fields. This procedure does not constrain the lapse function and shift vector, so it holds in arbitrary gauge and also works for arbitrary equation of state. An important advantage of this scheme is that it allows one to define and pass an adaptation of the robustness test to the cosmological context, at least in the case of pressureless perfect fluid matter, which is the relevant one for late-time cosmology.

  1. A high-precision sampling scheme to assess persistence and transport characteristics of micropollutants in rivers.

    Science.gov (United States)

    Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter

    2016-01-01

    Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems is not well understood. In particular, a better knowledge of processes governing their environmental behavior is needed. Although a lot of data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate, that removal processes are highly variable in time and space and this has to be considered for future studies. The high precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. Copyright © 2015. Published by Elsevier B.V.
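
    As an illustration of the mass-balance idea (a sketch with invented numbers, not the study's data), an apparent first-order removal rate constant for a river reach follows from the upstream and downstream loads of the same water parcel and its travel time:

```python
import numpy as np

def removal_rate(load_upstream, load_downstream, travel_time_h):
    """Apparent first-order removal rate constant (1/h) for a river reach,
    assuming the same water parcel was sampled at both stations (Lagrangian
    scheme) and that lateral inputs are already accounted for."""
    return np.log(load_upstream / load_downstream) / travel_time_h

# Hypothetical reach: loads in g/h (concentration times discharge), 6 h travel time.
k = removal_rate(load_upstream=12.0, load_downstream=9.0, travel_time_h=6.0)
print(f"k ≈ {k:.3f} 1/h, half-life ≈ {np.log(2) / k:.1f} h")
```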

  2. [PICS: pharmaceutical inspection cooperation scheme].

    Science.gov (United States)

    Morénas, J

    2009-01-01

    The pharmaceutical inspection co-operation scheme (PICS) is a structure comprising 34 participating authorities located worldwide (October 2008). It was created in 1995 on the basis of the pharmaceutical inspection convention (PIC) established by the European Free Trade Association (EFTA) in 1970. The scheme has several goals, such as being an internationally recognised body in the field of good manufacturing practices (GMP) and training inspectors (by way of an annual seminar and expert circles related notably to active pharmaceutical ingredients [API], quality risk management and computerized systems, useful for the writing of inspection aide-memoires). PICS also promotes high standards for GMP inspectorates (through regular cross-audits) and provides a forum for exchanges on technical matters between inspectors, as well as between inspectors and the pharmaceutical industry.

  3. Sampling and analytical methodologies for energy dispersive X-ray fluorescence analysis of airborne particulate matter

    International Nuclear Information System (INIS)

    1993-01-01

    The present document represents an attempt to summarize the most important features of the different forms of ED-XRF as applied to the analysis of airborne particulate matter. It is intended to serve as a set of guidelines for use by participants in the IAEA's own programmes, and other scientists, who are not yet fully experienced in the application of ED-XRF to airborne particulate samples, and who wish either to make a start on using this technique or to improve their existing procedures. The methodologies for sampling described in this document are of rather general applicability. Emphasis is also placed on the sources of errors affecting the sampling of airborne particulate matter. The analytical part of the document describes the different forms of ED-XRF and their potential applications. Spectrum evaluation, a key step in X-ray spectrometry, is covered in depth, including discussion of several calibration and peak fitting techniques and computer programs especially designed for this purpose. 148 refs, 25 figs, 13 tabs

  4. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analysis on compressed data remains accurate.
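
    A toy sketch of the compressed-sensing acquisition and recovery chain described above, not the Narada firmware: a synthetic signal sparse in the DCT basis is subsampled at random time instants and recovered with an l1-penalized fit (scikit-learn's Lasso is used here as the solver; the sparsity basis and parameters are assumptions):

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic acceleration-like record: sparse in the DCT (frequency) basis.
n = 512
coeffs = np.zeros(n)
coeffs[[5, 23, 67]] = [1.0, 0.6, 0.3]
signal = idct(coeffs, norm="ortho")

# "Compressed" acquisition: keep a random 20% of the time samples.
m = n // 5
rows = np.sort(rng.choice(n, size=m, replace=False))
measurements = signal[rows]

# Sensing matrix: the kept rows of the inverse-DCT synthesis operator.
A = idct(np.eye(n), axis=0, norm="ortho")[rows, :]

# l1-regularized recovery of the sparse coefficient vector, then synthesis.
lasso = Lasso(alpha=1e-4, max_iter=50000)
lasso.fit(A, measurements)
recovered = idct(lasso.coef_, norm="ortho")

print("relative reconstruction error:",
      np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```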

  5. Two-loop corrections for nuclear matter in the Walecka model

    International Nuclear Information System (INIS)

    Furnstahl, R.J.; Perry, R.J.; Serot, B.D.; Department of Physics, The Ohio State University, Columbus, Ohio 43210; Physics Department and Nuclear Theory Center, Indiana University, Bloomington, Indiana 47405)

    1989-01-01

    Two-loop corrections for nuclear matter, including vacuum polarization, are calculated in the Walecka model to study the loop expansion as an approximation scheme for quantum hadrodynamics. Criteria for useful approximation schemes are discussed, and the concepts of strong and weak convergence are introduced. The two-loop corrections are evaluated first with one-loop parameters and mean fields and then by minimizing the total energy density with respect to the scalar field and refitting parameters to empirical nuclear matter saturation properties. The size and nature of the corrections indicate that the loop expansion is not convergent at two-loop order in either the strong or weak sense. Prospects for alternative approximation schemes are discussed

  6. Improvements in PIXE analysis of hourly particulate matter samples

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Lucarelli, F. [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Nava, S. [National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Giannoni, M. [National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy); Carraresi, L. [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Prati, P. [Department of Physics, University of Genoa and INFN Division of Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Vecchi, R. [Department of Physics, Università degli Studi di Milano and INFN Division of Milan, Via Celoria 16, 20133 Milan (Italy)

    2015-11-15

    Most air quality studies on particulate matter (PM) are based on 24-h averaged data; however, many PM emissions as well as their atmospheric dilution processes change within a few hours. Samplings of PM with 1-h resolution can be performed by the streaker sampler (PIXE International Corporation), which is designed to separate the fine (aerodynamic diameter less than 2.5 μm) and the coarse (aerodynamic diameter between 2.5 and 10 μm) fractions of PM. These samples are efficiently analyzed by Particle Induced X-ray Emission (PIXE) at the LABEC laboratory of INFN in Florence (Italy), equipped with a 3 MV Tandetron accelerator, thanks to an optimized external-beam set-up, a convenient choice of the beam energy and suitable collecting substrates. A detailed description of the adopted set-up and results from a methodological study on the detection limits for the selection of the optimal beam energy are shown; the outcomes of the research on alternative collecting substrates, which produce a lower background during the measurements, and with lower contaminations, are also discussed.

  7. Candidates for non-baryonic dark matter

    International Nuclear Information System (INIS)

    Fornengo, Nicolao

    2002-01-01

    This report is a brief review of the efforts to explain the nature of non-baryonic dark matter and of the studies devoted to the search for relic particles. Among the different dark matter candidates, special attention is devoted to relic neutralinos, by giving an overview of the recent calculations of its relic abundance and detection rates in a wide variety of supersymmetric schemes

  8. Candidates for non-baryonic dark matter

    OpenAIRE

    Fornengo, Nicolao

    2002-01-01

    This report is a brief review of the efforts to explain the nature of non-baryonic dark matter and of the studies devoted to the search for relic particles. Among the different dark matter candidates, special attention is devoted to relic neutralinos, by giving an overview of the recent calculations of its relic abundance and detection rates in a wide variety of supersymmetric schemes.

  9. Iodination of the humic samples from HUPA project

    International Nuclear Information System (INIS)

    Reiller, P.; Mercier-Bion, F.; Barre, N.; Gimenez, N.; Miserque, F.

    2005-01-01

    The interaction of iodine with natural organic matter in general, and with humic substances (HS) in particular, has been the subject of numerous studies. A consensus has emerged that in soils as well as in aquatic systems, the speciation of iodine is closely related to the redox potential of the medium. In oxidizing media, as in sea water or upper horizons, the major part of iodine is found in the iodate form IO3-, whereas in reducing media, iodide I- is the major species. Nevertheless, it has been shown that in some cases organically bound iodine can dominate the speciation, either as methyl iodide or bound to humic substances. It is now also clear that this reactivity is closely related to the occurrence of molecular iodine I2(aq) and its disproportionation to HIO and I-. The reaction scheme can be viewed as an electrophilic substitution of a hydrogen atom by an iodine atom on a phenolic ring. This scheme has been validated in the case of HS on different samples including HUPA, and the covalent character of this interaction has been shown using electrospray ionization mass spectrometry (ESI-MS) and X-ray photoelectron spectroscopy. Nevertheless, in some of the latter studies, the characterization of the final reaction products did not satisfy the authors completely, as total separation from I- could not be achieved. Thus, further studies were carried out using HUPA samples: natural humic and fulvic extracts from Gorleben and synthetic samples obtained from FZ Rossendorf. Dialysis procedures were envisaged to improve the incomplete separation between the colloidal humic matter and the iodide ions either unreacted or produced by the reaction. (orig.)

  10. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

    A time- and ID-based proxy reencryption scheme is proposed in this paper. Type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications the time within which the data was sampled or collected is very critical. In such applications, for example healthcare and criminal investigations, the delegatee may be interested in only those messages of certain types that were sampled within some time bound, rather than the entire set. Hence, in order to cater for such situations, in this paper we propose a time-and-identity-based proxy reencryption scheme that takes into account the time within which the data was collected as a factor to consider when categorizing data, in addition to its type. Our scheme is based on the Boneh-Boyen identity-based encryption scheme (BB-IBE) and Matsuo's proxy reencryption scheme for identity-based encryption (IBE) to IBE. We prove that our scheme is semantically secure in the standard model.

  11. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit ...

  12. Mathematical models of granular matter

    CERN Document Server

    Mariano, Paolo; Giovine, Pasquale

    2008-01-01

    Granular matter displays a variety of peculiarities that distinguish it from other systems studied in condensed matter physics and render its overall mathematical modelling somewhat arduous. Prominent directions in the modelling of granular flows are analyzed from various points of view. Foundational issues, numerical schemes and experimental results are discussed. The volume furnishes a rather complete overview of the current research trends in the mechanics of granular matter. Various chapters introduce the reader to different points of view and related techniques. New models describing granular bodies as complex bodies are presented. Results on the analysis of the inelastic Boltzmann equations are collected in different chapters. Gallavotti-Cohen symmetry is also discussed.

  13. Examining the effect of psychopathic traits on gray matter volume in a community substance abuse sample.

    Science.gov (United States)

    Cope, Lora M; Shane, Matthew S; Segall, Judith M; Nyalakanti, Prashanth K; Stevens, Michael C; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2012-11-30

    Psychopathy is believed to be associated with brain abnormalities in both paralimbic (i.e., orbitofrontal cortex, insula, temporal pole, parahippocampal gyrus, posterior cingulate) and limbic (i.e., amygdala, hippocampus, anterior cingulate) regions. Recent structural imaging studies in both community and prison samples are beginning to support this view. Sixty-six participants, recruited from community corrections centers, were administered the Hare psychopathy checklist-revised (PCL-R), and underwent magnetic resonance imaging (MRI). Voxel-based morphometry was used to test the hypothesis that psychopathic traits would be associated with gray matter reductions in limbic and paralimbic regions. Effects of lifetime drug and alcohol use on gray matter volume were covaried. Psychopathic traits were negatively associated with gray matter volumes in right insula and right hippocampus. Additionally, psychopathic traits were positively associated with gray matter volumes in bilateral orbital frontal cortex and right anterior cingulate. Exploratory regression analyses indicated that gray matter volumes within right hippocampus and left orbital frontal cortex combined to explain 21.8% of the variance in psychopathy scores. These results support the notion that psychopathic traits are associated with abnormal limbic and paralimbic gray matter volume. Furthermore, gray matter increases in areas shown to be functionally impaired suggest that the structure-function relationship may be more nuanced than previously thought. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
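
    The co-prime condition described above can be checked with a few lines of code. The sketch below uses assumed toy parameters (N = 25 Nyquist samples per symbol, P = 8 repetitions) and an idealized noiseless channel; it only illustrates why gcd(P, N) = 1 lets the slow samples cover every Nyquist-rate phase, and is not the authors' receiver implementation.

```python
# Equivalent-time sampling sketch: a symbol of N Nyquist-rate samples is repeated
# P times while the ADC runs P times slower; if gcd(P, N) == 1 the slow samples
# still hit every Nyquist-rate phase exactly once.
import numpy as np
from math import gcd

N = 25                 # symbol duration in Nyquist samples (assumed)
P = 8                  # repetitions = undersampling factor; gcd(8, 25) = 1
assert gcd(P, N) == 1

rng = np.random.default_rng(1)
symbol = rng.standard_normal(N)          # one "Nyquist-rate" symbol waveform
received = np.tile(symbol, P)            # P identical repetitions, noiseless

slow_samples = received[::P]             # ADC sampling P times below Nyquist
phases = (np.arange(len(slow_samples)) * P) % N

# Reorder the slow samples by phase to rebuild the Nyquist-rate symbol
reconstructed = np.empty(N)
reconstructed[phases] = slow_samples

print("all phases covered:", len(set(phases)) == N)                 # True
print("max abs error     :", np.max(np.abs(reconstructed - symbol)))  # 0.0
```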

  15. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.

  16. Non-equilibrium umbrella sampling applied to force spectroscopy of soft matter.

    Science.gov (United States)

    Gao, Y X; Wang, G M; Williams, D R M; Williams, Stephen R; Evans, Denis J; Sevick, E M

    2012-02-07

    Physical systems often respond on a timescale which is longer than that of the measurement. This is particularly true in soft matter where direct experimental measurement, for example in force spectroscopy, drives the soft system out of equilibrium and provides a non-equilibrium measure. Here we demonstrate experimentally for the first time that equilibrium physical quantities (such as the mean square displacement) can be obtained from non-equilibrium measurements via umbrella sampling. Our model experimental system is a bead fluctuating in a time-varying optical trap. We also show this for simulated force spectroscopy on a complex soft molecule--a piston-rotaxane.
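
    As a rough illustration of how a biased measurement can be reweighted back to an equilibrium average, the sketch below applies standard umbrella-sampling reweighting to a harmonic trap tilted by a constant pulling force. All parameters (beta, k, f) are invented, and this is not the non-equilibrium umbrella-sampling protocol of the paper, only the generic reweighting identity it builds on.

```python
# Generic umbrella/bias reweighting sketch: positions sampled while a constant
# pulling force biases a harmonic trap are reweighted by exp(+beta*w) to recover
# the equilibrium mean-square displacement of the unbiased trap.
import numpy as np

rng = np.random.default_rng(2)
beta, k, f = 1.0, 1.0, 1.0        # inverse temperature, trap stiffness, pull force (assumed)

# Biased density ~ exp(-beta*(0.5*k*x**2 - f*x)) is Gaussian(mean=f/k, var=1/(beta*k))
x = rng.normal(f / k, 1.0 / np.sqrt(beta * k), size=200_000)

w = -f * x                                  # bias energy added by the pulling force
weights = np.exp(beta * (w - w.max()))      # remove the bias; shift for numerical stability

msd = np.sum(weights * x**2) / np.sum(weights)
print("reweighted <x^2>:", msd, " exact:", 1.0 / (beta * k))
```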

  17. Physical renormalization schemes and asymptotic safety in quantum gravity

    Science.gov (United States)

    Falls, Kevin

    2017-12-01

    The methods of the renormalization group and the ɛ -expansion are applied to quantum gravity revealing the existence of an asymptotically safe fixed point in spacetime dimensions higher than two. To facilitate this, physical renormalization schemes are exploited where the renormalization group flow equations take a form which is independent of the parameterisation of the physical degrees of freedom (i.e. the gauge fixing condition and the choice of field variables). Instead the flow equation depends on the anomalous dimensions of reference observables. In the presence of spacetime boundaries we find that the required balance between the Einstein-Hilbert action and Gibbons-Hawking-York boundary term is preserved by the beta functions. Exploiting the ɛ -expansion near two dimensions we consider Einstein gravity coupled to matter. Scheme independence is generically obscured by the loop-expansion due to breaking of two-dimensional Weyl invariance. In schemes which preserve two-dimensional Weyl invariance we avoid the loop expansion and find a unique ultraviolet (UV) fixed point. At this fixed point the anomalous dimensions are large and one must resum all loop orders to obtain the critical exponents. Performing the resummation a set of universal scaling dimensions are found. These scaling dimensions show that only a finite number of matter interactions are relevant. This is a strong indication that quantum gravity is renormalizable.

  18. Numerical schemes for explosion hazards

    International Nuclear Information System (INIS)

    Therme, Nicolas

    2015-01-01

    In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level-set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High-order methods of MUSCL type are used in the discrete convective operators, based solely on the material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called ...
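
    For readers unfamiliar with the MUSCL-type convective operators mentioned above, the following is a minimal, self-contained sketch of a slope-limited (minmod) update for 1D linear advection with forward Euler time stepping. The domain, Courant number and initial data are arbitrary assumptions, and the scheme shown is the textbook high-resolution method, not the staggered compressible scheme developed in the thesis.

```python
# Minmod slope-limited upwind scheme for u_t + a u_x = 0 (a > 0), periodic domain.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, nu):
    """One forward-Euler step with Courant number nu = a*dt/dx."""
    du_left = u - np.roll(u, 1)
    du_right = np.roll(u, -1) - u
    slope = minmod(du_left, du_right)              # limited, undivided slope
    # numerical flux (divided by a) at face i+1/2: upwind value + limited correction
    f = u + 0.5 * (1.0 - nu) * slope
    return u - nu * (f - np.roll(f, 1))

N, nu = 200, 0.4
x = np.linspace(0.0, 1.0, N, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u = u0.copy()
for _ in range(int(round(N / nu))):                # exactly one period of advection
    u = limited_step(u, nu)
print("peak after one period:", u.max())           # close to 1, modest limiter clipping
```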

  19. Evaluation of a microwave method for dry matter determination in faecal samples from weaned pigs with or without clinical diarrhoea.

    Science.gov (United States)

    Pedersen, Ken Steen; Stege, Helle; Nielsen, Jens Peter

    2011-07-01

    Microwave drying as a procedure for determination of faecal dry matter in weaned pigs was evaluated and clinically relevant cut-off values between faecal consistency scores were determined. Repeatability and reproducibility were evaluated. The overall coefficient of variation was 0.03. The 95% confidence limits for any future faecal subsample examined by any operator in any replica were ± 0.85% faecal dry matter. Robustness in relation to the weight of wet faeces was evaluated. The weight categories were 0.5, 1.0, 1.5, 2.0 and 3.0 g. Samples of 0.5 g gave significantly different mean faecal dry matter content compared to samples of 1.0-3.0 g. Agreement with freeze-drying was evaluated. Lin's concordance correlation coefficient was 0.94. On average, faecal dry matter values were 1.7% (SD = 1.99%) higher in freeze-dried than in microwaved samples. Non-parametric ROC analyses were used to determine optimal faecal dry matter cut-off values for clinical faecal consistency scores. The 4 consistency scores were score 1 = firm and shaped, score 2 = soft and shaped, score 3 = loose and score 4 = watery. The cut-off values were score 1: faecal dry matter content >19.5%; score 2: faecal dry matter content ≤ 19.5% and >18.0%; score 3: faecal dry matter content ≤ 18.0% and >11.3%; score 4: faecal dry matter content ≤ 11.3%. In conclusion, the microwave procedure has an acceptable repeatability/reproducibility and good agreement with freeze-drying can be expected. A minimum of 1.0 g of wet faeces must be used for analyses. Faecal dry matter cut-off values between 4 different clinical consistency scores were determined. © 2011 Elsevier B.V. All rights reserved.
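
    The reported cut-off values translate directly into a small scoring helper. The function below is only a restatement of those thresholds in code (the example inputs are made up), shown to make the classification rule explicit.

```python
def faecal_consistency_score(dry_matter_pct: float) -> int:
    """Map faecal dry matter (%) to the clinical consistency score (1-4)
    using the cut-off values reported above."""
    if dry_matter_pct > 19.5:
        return 1   # firm and shaped
    if dry_matter_pct > 18.0:
        return 2   # soft and shaped
    if dry_matter_pct > 11.3:
        return 3   # loose
    return 4       # watery

# hypothetical measurements
print([faecal_consistency_score(dm) for dm in (22.0, 18.7, 15.0, 9.5)])  # [1, 2, 3, 4]
```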

  20. Demonstration of Novel Sampling Techniques for Measurement of Turbine Engine Volatile and Non-Volatile Particulate Matter (PM) Emissions

    Science.gov (United States)

    2017-03-06

    (WP-201317) Demonstration of Novel Sampling Techniques for Measurement of Turbine Engine Volatile and Non-Volatile Particulate Matter (PM) Emissions. Authors: E. Corporan, M. DeWitt, C. Klingshirn, M.D. Cheng, R. Miake-Lye, J. Peck. ... the performance and viability of two devices to condition aircraft turbine engine exhaust to allow the accurate measurement of total (volatile and non-volatile) PM ...

  1. Sampling and analytical methodologies for instrumental neutron activation analysis of airborne particulate matter

    International Nuclear Information System (INIS)

    1992-01-01

    The IAEA supports a number of projects having to do with the analysis of airborne particulate matter by nuclear techniques. Most of this work involves the use of activation analysis in its various forms, particularly instrumental neutron activation analysis (INAA). This technique has been widely used in many different countries for the analysis of airborne particulate matter, and there are already many publications in scientific journals, books and reports describing such work. The present document represents an attempt to summarize the most important features of INAA as applied to the analysis of airborne particulate matter. It is intended to serve as a set of guidelines for use by participants in the IAEA's own programmes, and other scientists, who are not yet fully experienced in the application of INAA to airborne particulate samples, and who wish either to make a start on using this technique or to improve their existing procedures. The methodologies for sampling described in this document are of rather general applicability, although they are presented here in a way that takes account of the particular requirements arising from the use of INAA as the analytical technique. The analytical part of the document, however, is presented in a form that is applicable only to INAA. (Subsequent publications in this series are expected to deal specifically with other nuclear related techniques such as energy dispersive X ray fluorescence (ED-XRF) and particle induced X ray emission (PIXE) analysis). Although the methods and procedures described here have been found through experience to yield acceptable results, they should not be considered mandatory. Any other procedure used should, however, be chosen to be capable of yielding results at least of equal quality to those described

  2. Sampling and analytical methodologies for instrumental neutron activation analysis of airborne particulate matter

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-01

    The IAEA supports a number of projects having to do with the analysis of airborne particulate matter by nuclear techniques. Most of this work involves the use of activation analysis in its various forms, particularly instrumental neutron activation analysis (INAA). This technique has been widely used in many different countries for the analysis of airborne particulate matter, and there are already many publications in scientific journals, books and reports describing such work. The present document represents an attempt to summarize the most important features of INAA as applied to the analysis of airborne particulate matter. It is intended to serve as a set of guidelines for use by participants in the IAEA's own programmes, and other scientists, who are not yet fully experienced in the application of INAA to airborne particulate samples, and who wish either to make a start on using this technique or to improve their existing procedures. The methodologies for sampling described in this document are of rather general applicability, although they are presented here in a way that takes account of the particular requirements arising from the use of INAA as the analytical technique. The analytical part of the document, however, is presented in a form that is applicable only to INAA. (Subsequent publications in this series are expected to deal specifically with other nuclear related techniques such as energy dispersive X ray fluorescence (ED-XRF) and particle induced X ray emission (PIXE) analysis). Although the methods and procedures described here have been found through experience to yield acceptable results, they should not be considered mandatory. Any other procedure used should, however, be chosen to be capable of yielding results at least of equal quality to those described.

  3. Chaotic inflation with metric and matter perturbations

    International Nuclear Information System (INIS)

    Feldman, H.A.; Brandenberger, R.H.

    1989-01-01

    A perturbative scheme to analyze the evolution of both metric and scalar field perturbations in an expanding universe is developed. The scheme is applied to study chaotic inflation with initial metric and scalar field perturbations present. It is shown that initial gravitational perturbations with wavelength smaller than the Hubble radius rapidly decay. The metric simultaneously picks up small perturbations determined by the matter inhomogeneities. Both are frozen in once the wavelength exceeds the Hubble radius. (orig.)

  4. Comparison of PIXE and XRF analysis of airborne particulate matter samples collected on Teflon and quartz fibre filters

    Science.gov (United States)

    Chiari, M.; Yubero, E.; Calzolai, G.; Lucarelli, F.; Crespo, J.; Galindo, N.; Nicolás, J. F.; Giannoni, M.; Nava, S.

    2018-02-01

    Within the framework of research projects focusing on the sampling and analysis of airborne particulate matter, Particle Induced X-ray Emission (PIXE) and Energy Dispersive X-ray Fluorescence (ED-XRF) techniques are routinely used in many laboratories throughout the world to determine the elemental concentration of the particulate matter samples. In this work an inter-laboratory comparison of the results obtained from analysing several samples (collected on both Teflon and quartz fibre filters) using both techniques is presented. The samples were analysed by PIXE (in Florence, at the 3 MV Tandetron accelerator of INFN-LABEC laboratory) and by XRF (in Elche, using the ARL Quant'X EDXRF spectrometer with specific conditions optimized for specific groups of elements). The results from the two sets of measurements are in good agreement for all the analysed samples, thus validating the use of the ARL Quant'X EDXRF spectrometer and the selected measurement protocol for the analysis of aerosol samples. Moreover, thanks to the comparison of PIXE and XRF results on Teflon and quartz fibre filters, possible self-absorption effects due to the penetration of the aerosol particles inside the quartz fibre-filters were quantified.

  5. Characterization of airborne particulate matter in Santiago, Chile. Part 1: design, sampling and analysis for an experimental campaign

    International Nuclear Information System (INIS)

    Toro E, P.

    1995-01-01

    This work describes the siting and sampling procedures used to collect airborne particulate matter in Santiago, Chile, and to determine its chemical composition and daily behaviour. The airborne particulate matter was collected onto polycarbonate membranes, one with fine pores and the other with coarse pores, using PM10 samplers. The material was analyzed using neutron activation analysis, proton-induced X-ray emission, X-ray fluorescence, voltammetry, atomic absorption spectrometry, ion chromatography and isotope dilution. (author). 1 tab

  6. [Sample preparation methods for chromatographic analysis of organic components in atmospheric particulate matter].

    Science.gov (United States)

    Hao, Liang; Wu, Dapeng; Guan, Yafeng

    2014-09-01

    The determination of the organic composition of atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, the environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the methods of choice. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and analysis by chromatographic methods. Among these steps, the sample preparation methods largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e.g. pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, while thermal desorption methods have mainly focused on the non-polar organic components of PM. In this paper, the sample preparation methods used prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are also briefly discussed.

  7. Additive operator-difference schemes splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy...
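
    As a concrete illustration of splitting with respect to spatial variables, the sketch below performs Lie splitting of the 2D heat equation into two successive implicit one-dimensional solves. Grid size, time step and diffusivity are arbitrary assumptions, and this toy example is not taken from the book.

```python
# Lie splitting for u_t = nu*(u_xx + u_yy) on the unit square with Dirichlet BCs:
# each time step applies the implicit 1D operator first along x, then along y.
import numpy as np

N, L, dt, nu = 50, 1.0, 1e-3, 1.0
h = L / (N + 1)

# 1D second-difference operator (interior points, homogeneous Dirichlet boundaries)
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
Ax = np.eye(N) - dt * nu * D2          # implicit (backward Euler) 1D operator

def lie_split_step(U):
    U = np.linalg.solve(Ax, U)         # sweep in x (columns of U as right-hand sides)
    U = np.linalg.solve(Ax, U.T).T     # sweep in y (same operator, other direction)
    return U

x = np.linspace(h, L - h, N)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # eigenmode initial data
for _ in range(100):
    U = lie_split_step(U)
# continuous-problem decay of this mode, printed for comparison
print(U.max(), np.exp(-2 * np.pi**2 * nu * 100 * dt))
```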

  8. Dark Matter Profiles in Dwarf Galaxies: A Statistical Sample Using High-Resolution Hα Velocity Fields from PCWI

    Science.gov (United States)

    Relatores, Nicole C.; Newman, Andrew B.; Simon, Joshua D.; Ellis, Richard; Truong, Phuongmai N.; Blitz, Leo

    2018-01-01

    We present high quality Hα velocity fields for a sample of nearby dwarf galaxies (log M/M⊙ = 8.4-9.8) obtained as part of the Dark Matter in Dwarf Galaxies survey. The purpose of the survey is to investigate the cusp-core discrepancy by quantifying the variation of the inner slope of the dark matter distributions of 26 dwarf galaxies, which were selected as likely to have regular kinematics. The data were obtained with the Palomar Cosmic Web Imager, located on the Hale 5m telescope. We extract rotation curves from the velocity fields and use optical and infrared photometry to model the stellar mass distribution. We model the total mass distribution as the sum of a generalized Navarro-Frenk-White dark matter halo along with the stellar and gaseous components. We present the distribution of inner dark matter density profile slopes derived from this analysis. For a subset of galaxies, we compare our results to an independent analysis based on CO observations. In future work, we will compare the scatter in inner density slopes, as well as their correlations with galaxy properties, to theoretical predictions for dark matter core creation via supernovae feedback.
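
    The generalized NFW halo mentioned above has a simple closed-form density whose inner logarithmic slope gamma is the quantity being constrained. The sketch below evaluates that density and the corresponding halo-only circular velocity by direct numerical integration; the parameter values are illustrative guesses, and the stellar and gas components of the full model are omitted.

```python
# Generalized NFW (gNFW) halo: rho(r) = rho_s / ((r/r_s)^gamma * (1 + r/r_s)^(3-gamma)).
import numpy as np

G = 4.30091e-6   # kpc * (km/s)^2 / Msun

def gnfw_density(r, rho_s, r_s, gamma):
    x = r / r_s
    return rho_s / (x**gamma * (1.0 + x) ** (3.0 - gamma))

def v_circ_dm(r_eval, rho_s, r_s, gamma, n=20_000):
    """Circular velocity of the halo alone, via numerical integration of M(<r)."""
    r = np.linspace(1e-4, r_eval.max(), n)
    integrand = 4.0 * np.pi * r**2 * gnfw_density(r, rho_s, r_s, gamma)
    m_enc = np.interp(r_eval, r, np.cumsum(integrand) * (r[1] - r[0]))
    return np.sqrt(G * m_enc / r_eval)

r = np.linspace(0.1, 10.0, 50)                       # kpc
for gamma in (0.0, 0.5, 1.0):                        # cored ... NFW cusp
    v = v_circ_dm(r, rho_s=1.0e7, r_s=3.0, gamma=gamma)   # hypothetical halo parameters
    print(f"gamma={gamma}: v_circ(1 kpc) ~ {v[np.argmin(np.abs(r - 1.0))]:.1f} km/s")
```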

  9. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactor (Neutron Activation Analysis (NAA) Section), size-fractionated particulate matter collection for NAA was started from 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural". At each site, two size fractions, namely PM2-10 and PM2 particles (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (sum of PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m3 in Tokyo and 0.022 mg/m3 in Sakata. (author)

  10. Performance of laboratories analysing welding fume on filter samples: results from the WASP proficiency testing scheme.

    Science.gov (United States)

    Stacey, Peter; Butler, Owen

    2008-06-01

    This paper emphasizes the need for occupational hygiene professionals to require evidence of the quality of welding fume data from analytical laboratories. The measurement of metals in welding fume using atomic spectrometric techniques is a complex analysis often requiring specialist digestion procedures. The results from a trial programme testing the proficiency of laboratories in the Workplace Analysis Scheme for Proficiency (WASP) to measure potentially harmful metals in several different types of welding fume showed that most laboratories underestimated the mass of analyte on the filters. The average recovery was 70-80% of the target value, and for some of the more difficult welding fume matrices more than 20% of the recoveries reported for the trial filter samples fell well short of the target value. Consistent rather than erratic error predominated, suggesting that the main analytical factor contributing to the differences between the target values and results was the effectiveness of the sample preparation procedures used by participating laboratories. It is concluded that, with practice and regular participation in WASP, performance can improve over time.

  11. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
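
    A hedged sketch of the 'basic' two-stage idea described above is given below. The herd-level sample size, the pass/fail threshold and the early-stopping margins are all invented for illustration; they are not the Welfare Quality values or the thresholds used in the study.

```python
# Two-stage sequential sampling for a pass/fail lameness decision: stop after the
# first stage when the estimate is already clearly above or below the threshold.
import numpy as np

rng = np.random.default_rng(3)

def two_stage_assessment(herd_lameness, full_n=60, fail_threshold=0.15):
    """Return (decision, cows_sampled) for one farm; all constants are assumptions."""
    n1 = full_n // 2
    stage1 = rng.binomial(n1, herd_lameness)
    p1 = stage1 / n1
    # stop early only when the first-stage estimate is far from the threshold
    if p1 <= 0.5 * fail_threshold:
        return "pass", n1
    if p1 >= 1.5 * fail_threshold:
        return "fail", n1
    stage2 = rng.binomial(full_n - n1, herd_lameness)
    p_total = (stage1 + stage2) / full_n
    return ("fail" if p_total >= fail_threshold else "pass"), full_n

# average sample size over simulated farms with a spread of true prevalences
prevalences = rng.uniform(0.02, 0.35, size=10_000)
results = [two_stage_assessment(p) for p in prevalences]
print("mean cows sampled:", np.mean([n for _, n in results]))
```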

  12. Matching soil salinization and cropping systems in communally managed irrigation schemes

    Science.gov (United States)

    Malota, Mphatso; Mchenga, Joshua

    2018-03-01

    The occurrence of soil salinization in irrigation schemes can be a good indicator of the need to introduce highly salt-tolerant crops. This study assessed the level of soil salinization in the communally managed, 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. The salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that, even for crops with very low salinity tolerance, the measured ECi showed the water was suitable for irrigation purposes. However, root-zone soil salinity profiles indicated that leaching of salts was not adequate and that the leaching requirement for the scheme needs to be re-examined and consistently adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match the prevailing soil and irrigation water salinity levels.

  13. A privacy preserving secure and efficient authentication scheme for telecare medical information systems.

    Science.gov (United States)

    Mishra, Raghavendra; Barnwal, Amit Kumar

    2015-05-01

    The Telecare medical information system (TMIS) delivers effective healthcare services by employing information and communication technologies. Privacy and security are always matters of great concern in TMIS. Recently, Chen et al. presented a password-based authentication scheme to address privacy and security. Later, it was proved insecure against various active and passive attacks. To remedy the drawbacks of Chen et al.'s anonymous authentication scheme, several password-based authentication schemes have been proposed using public key cryptosystems. However, most of them do not provide pre-smart-card authentication, which leads to inefficient login and password change phases. To present an authentication scheme with pre-smart-card authentication, we present an improved anonymous smart-card-based authentication scheme for TMIS. The proposed scheme protects user anonymity and satisfies all the desirable security attributes. Moreover, the proposed scheme presents efficient login and password change phases, where incorrect input can be quickly detected and a user can freely change his password without server assistance. We also demonstrate the validity of the proposed scheme by utilizing the widely accepted BAN (Burrows, Abadi, and Needham) logic. The proposed scheme is also comparable in terms of computational overhead with related schemes.

  14. Spectral properties of nuclear matter

    International Nuclear Information System (INIS)

    Bozek, P

    2006-01-01

    We review self-consistent spectral methods for nuclear matter calculations. The in-medium T-matrix approach is conserving and thermodynamically consistent. It gives both the global and the single-particle properties of the system. The T-matrix approximation allows one to address the pairing phenomenon in cold nuclear matter. A generalization of nuclear matter calculations to the superfluid phase is discussed and numerical results are presented for this case. The linear response of a correlated system going beyond the Hartree-Fock + Random-Phase-Approximation (RPA) scheme is studied. The polarization is obtained by solving a consistent Bethe-Salpeter (BS) equation for the coupling of dressed nucleons to an external field. We find that multipair contributions are important for the spin (isospin) response when the interaction is spin (isospin) dependent.

  15. Right Fronto-Subcortical White Matter Microstructure Predicts Cognitive Control Ability on the Go/No-go Task in a Community Sample.

    Science.gov (United States)

    Hinton, Kendra E; Lahey, Benjamin B; Villalta-Gil, Victoria; Boyd, Brian D; Yvernault, Benjamin C; Werts, Katherine B; Plassard, Andrew J; Applegate, Brooks; Woodward, Neil D; Landman, Bennett A; Zald, David H

    2018-01-01

    Go/no-go tasks are widely used to index cognitive control. This construct has been linked to white matter microstructure in a circuit connecting the right inferior frontal gyrus (IFG), subthalamic nucleus (STN), and pre-supplementary motor area. However, the specificity of this association has not been tested. A general factor of white matter has been identified that is related to processing speed. Given the strong processing speed component in successful performance on the go/no-go task, this general factor could contribute to task performance, but the general factor has often not been accounted for in past studies of cognitive control. Further, studies on cognitive control have generally employed small unrepresentative case-control designs. The present study examined the relationship between go/no-go performance and white matter microstructure in a large community sample of 378 subjects that included participants with a range of both clinical and subclinical nonpsychotic psychopathology. We found that white matter microstructure properties in the right IFG-STN tract significantly predicted task performance, and remained significant after controlling for dimensional psychopathology. The general factor of white matter only reached statistical significance when controlling for dimensional psychopathology. Although the IFG-STN and general factor tracts were highly correlated, when both were included in the model, only the IFG-STN remained a significant predictor of performance. Overall, these findings suggest that while a general factor of white matter can be identified in a young community sample, white matter microstructure properties in the right IFG-STN tract show a specific relationship to cognitive control. The findings highlight the importance of examining both specific and general correlates of cognition, especially in tasks with a speeded component.

  16. Neutrino masses and mixings: Big Bang and Supernova nucleosynthesis and neutrino dark matter

    International Nuclear Information System (INIS)

    Fuller, George M.

    1999-01-01

    The existence of small mixings between light active and sterile neutrino species could have implications for Big Bang and Supernova Heavy Element Nucleosynthesis. As well, such mixing would force us to abandon cherished constraints on light neutrino Dark Matter. Two proposed 4-neutrino mass and mixing schemes, for example, can both accommodate existing experimental results and lead to elegant solutions to the neutron-deficit problem for r-Process nucleosynthesis from neutrino-heated supernova ejecta. Each of these solutions is based on matter-enhanced (MSW) active-sterile neutrino transformation. In plausible extensions of these schemes to the early universe, Shi and Fuller have shown that relatively light mass (∼200 eV to ∼10 keV) sterile neutrinos produced via active-sterile MSW conversion can have a ''cold'' energy spectrum. Neutrinos produced in this way circumvent the principal problem of light neutrino dark matter and would be, essentially, Cold Dark Matter

  17. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  18. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    International Nuclear Information System (INIS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-01-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers

  19. On some Approximation Schemes for Steady Compressible Viscous Flow

    Science.gov (United States)

    Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.

    This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special ``effective pressure'' in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.

  20. Importance Sampling Based Decision Trees for Security Assessment and the Corresponding Preventive Control Schemes: the Danish Case Study

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Chen, Zhe

    2013-01-01

    Decision Trees (DT) based security assessment helps Power System Operators (PSO) by providing them with the most significant system attributes and guiding them in implementing the corresponding emergency control actions to prevent system insecurity and blackouts. DT is obtained offline from time...... and adopts a methodology of importance sampling to maximize the information contained in the database so as to increase the accuracy of DT. Further, this paper also studies the effectiveness of DT by implementing its corresponding preventive control schemes. These approaches are tested on the detailed model...
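
    As a generic reminder of why importance sampling packs more information into a fixed simulation budget, the sketch below estimates a rare-event probability for a one-dimensional stand-in for an insecure operating state, comparing naive Monte Carlo with a shifted-proposal importance sampler. The distributions and threshold are invented and this is not the paper's database-generation procedure.

```python
# Importance sampling vs naive Monte Carlo for a rare "insecure" event P(X > t).
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)
threshold, n = 4.0, 20_000

# naive Monte Carlo from the nominal N(0,1) operating distribution
x_mc = rng.standard_normal(n)
p_mc = np.mean(x_mc > threshold)

# importance sampling: propose from N(threshold, 1), reweight by the density ratio
x_is = rng.normal(threshold, 1.0, n)
w = np.exp(-0.5 * x_is**2 + 0.5 * (x_is - threshold) ** 2)
p_is = np.mean(w * (x_is > threshold))

p_exact = 0.5 * erfc(threshold / sqrt(2.0))
print(f"naive MC: {p_mc:.2e}  importance sampling: {p_is:.2e}  exact: {p_exact:.2e}")
```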

  1. Heavy metal analysis of suspended particulate matter (SPM) and other samples from some workplaces in Kenya

    International Nuclear Information System (INIS)

    Kinyua, A.M.; Gatebe, C.K.; Mangala, M.J.

    1998-01-01

    Air pollution studies in Nairobi indicate a rising trend in particulate matter loading. The trend is mainly attributed to the increased volume of motor vehicles, physical changes to the environment, and agricultural and industrial activities. In this study, total suspended particulate matter sampling in the Nairobi industrial area and inside one workplace is reported. Also included are the results of analyses of water samples and effluents collected from a sugar factory and a tannery, and of mercury (Hg) analysis of some beauty creams sold in Nairobi. The samples were analysed for heavy metal content using energy dispersive X-ray fluorescence (EDXRF), while the suspended particulate matter (SPM) concentrations were determined by a gravimetric technique. Total reflection X-ray fluorescence (TXRF), atomic absorption spectrophotometry and PIXE analytical techniques, plus the use of Standard and Certified Reference Materials (SRMs and CRMs), were used for quality control, analysis and evaluation of the accrued data. Air sampling in the industrial area was done twice (Wednesday and Saturday) every week for a period of two months (November and December 1996) and twice monthly for a period of six months (January-June 1997). Each sample, covering approximately 24 hours, was collected using the 'Gent' Stacked Filter Unit (SFU), for day and night times. The SPM concentrations were found to vary from 16 to 83 mg m-3 during the sampling period. The analysis of dust collected inside a workplace showed that there was poor filtration of the air pumped into the building and that there was a need for improvement of the air conditioning unit plus reduction of emissions from a neighbouring tyre factory. Analysis of the beauty creams showed that mercury is present in significant amounts (0.14-3.0%). These mercury levels are presented for various brands of cosmetics sold in some market outlets in Nairobi. The health implications of the presence of mercury in some of these beauty ...

  2. CHARACTERISTIC OF AIRBORNE PARTICULATE MATTER SAMPLES COLLECTED FROM TWO SEMI INDUSTRIAL SITES IN BANDUNG, INDONESIA

    Directory of Open Access Journals (Sweden)

    Diah Dwiana Lestiani

    2013-12-01

    Airborne particulate matter concentrations, black carbon and elemental concentrations at two semi-industrial sites were investigated as a preliminary study for the evaluation of air quality in these areas. Sampling of airborne particulate matter was conducted in July 2009 using a Gent stacked filter unit sampler and a total of 18 pairs of samples were collected. Black carbon was determined by reflectance measurement and elemental analysis was performed using particle induced X-ray emission (PIXE). The elements Na, Mg, Al, Si, P, S, Cl, K, Ca, Ti, Cr, Mn, Fe, Cu, Zn and As were detected. Twenty-four-hour PM2.5 concentrations at the semi-industrial sites Kiaracondong and Holis ranged from 4.0 to 22.2 µg m-3, while the PM10 concentrations ranged from 24.5 to 77.1 µg m-3. High concentrations of crustal elements, sulphur and zinc were identified in the fine and coarse fractions at both sites. The fine fraction data from both sites were analyzed using multivariate principal component analysis; for the Kiaracondong site the identified factors were attributed to sea salt with soil dust, vehicular emissions and biomass burning, a non-ferrous smelter, and the iron/steel industry, while for the Holis site the identified factors were attributed to soil dust, industrial emissions, vehicular emissions with biomass burning, and sea salt. Although the particulate samples were collected from semi-industrial sites, vehicular emission markers such as S, Zn and BC were identified at both sites.

  3. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM) ... but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical ... opportunities for elucidating the origins and biogeochemical properties of FDOM ...

  4. Quantitative schemes in energy dispersive X-ray fluorescence implemented in AXIL

    International Nuclear Information System (INIS)

    Tchantchane, A.; Benamar, M.A.; Tobbeche, S.

    1995-01-01

    EDXRF (energy dispersive X-ray fluorescence) has long been used for the quantitative analysis of many types of samples, including environmental samples. The software package AXIL (Analysis of X-ray spectra by Iterative Least squares) is extensively used for spectrum analysis and the quantification of X-ray spectra. It includes several quantitative schemes for evaluating element concentrations. We present the general theory behind each scheme implemented in the software package and examine the performance of each of these quantitative schemes on measured spectra. We have also investigated their performance relative to the uncertainties in the experimental parameters and the sample description
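
    One of the simplest quantification schemes of this kind is a thin-sample sensitivity calibration, in which element concentrations follow from fitted net peak rates divided by sensitivities measured on standards. The sketch below spells out that arithmetic with entirely hypothetical elements, count rates and loadings; real schemes (including those available in such packages) add absorption and matrix corrections.

```python
# Thin-sample sensitivity calibration: rate = S * areal_loading, so S is fitted
# from standards and unknowns are converted back with loading = rate / S.
standards = {
    "Fe": (120.0, 10.0),   # (net count rate in cps, loading in ug/cm^2) -- hypothetical
    "Zn": (150.0, 8.0),
    "Pb": (60.0, 5.0),
}
sensitivity = {el: rate / load for el, (rate, load) in standards.items()}

def quantify(net_rates_cps):
    """Convert fitted net peak rates of an unknown sample to areal loadings."""
    return {el: rate / sensitivity[el] for el, rate in net_rates_cps.items()}

sample_rates = {"Fe": 36.0, "Zn": 90.0, "Pb": 12.0}     # hypothetical unknown sample
print(quantify(sample_rates))    # {'Fe': 3.0, 'Zn': 4.8, 'Pb': 1.0}  (ug/cm^2)
```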

  5. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
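
    The closing comparison between least-squares and LMMSE-style estimation can be illustrated on a generic linear observation model. The sketch below uses invented dimensions and noise statistics and a plain ridge-type LMMSE formula; it is not the bounded-data-uncertainty estimator derived in the paper.

```python
# Least squares vs LMMSE for a channel h observed through y = A h + n, with an
# i.i.d. Gaussian prior on h and Gaussian noise (both assumptions of this sketch).
import numpy as np

rng = np.random.default_rng(5)
L, M, sigma_n, sigma_h = 40, 25, 0.3, 1.0      # taps, observations, noise/prior std

A = rng.standard_normal((M, L))                 # effective low-rate sensing matrix
h = sigma_h * rng.standard_normal(L)
y = A @ h + sigma_n * rng.standard_normal(M)

h_ls, *_ = np.linalg.lstsq(A, y, rcond=None)                               # least squares
h_lmmse = np.linalg.solve(A.T @ A + (sigma_n / sigma_h) ** 2 * np.eye(L), A.T @ y)

for name, est in (("LS", h_ls), ("LMMSE", h_lmmse)):
    print(name, "relative error:", np.linalg.norm(est - h) / np.linalg.norm(h))
```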

  6. A sampling scheme intended for tandem measurements of sodium transport and microvillous surface area in the coprodaeal epithelium of hens on high- and low-salt diets.

    Science.gov (United States)

    Mayhew, T M; Dantzer, V; Elbrønd, V S; Skadhauge, E

    1990-12-01

    A tissue sampling protocol for combined morphometric and physiological studies on the mucosa of the avian coprodaeum is presented. The morphometric goal is to estimate the surface area due to microvilli at the epithelial cell apex and the proposed scheme is illustrated using material from three White Plymouth Rock hens. The scheme is designed to satisfy sampling requirements for the unbiased estimation of surface areas by vertical sectioning coupled with cycloid test lines and it incorporates a number of useful internal checks. It relies on multi-level sampling with four levels of stereological estimation. At Level I, macroscopic estimates of coprodaeal volume are obtained. Light microscopy is employed at Level II to calculate epithelial volume density. Levels III and IV require low and high power electron microscopy to estimate the surface density of the epithelial apical border and the amplification factor due to microvilli. Worked examples of the calculation steps are provided.
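
    To make the multi-level chain of estimates concrete, the sketch below simply multiplies the four levels together; the numerical values are invented for illustration and are not taken from the study.

    ```python
    # Illustrative worked calculation only; all numbers below are invented, not the study's data.
    # Level I  : coprodaeal volume (macroscopic)
    # Level II : epithelial volume density (light microscopy)
    # Level III: surface density of the epithelial apical border (low-power EM)
    # Level IV : amplification factor due to microvilli (high-power EM)
    V_copro   = 1.2     # cm^3
    Vv_epi    = 0.55    # cm^3 epithelium per cm^3 coprodaeum (dimensionless)
    Sv_apical = 250.0   # cm^2 apical border per cm^3 epithelium
    amp_mv    = 20.0    # microvillous surface / apical border surface (dimensionless)

    S_apical     = V_copro * Vv_epi * Sv_apical   # absolute apical border surface, cm^2
    S_microvilli = S_apical * amp_mv              # absolute microvillous surface, cm^2
    print(f"apical border {S_apical:.0f} cm^2 -> microvillous surface {S_microvilli:.0f} cm^2")
    ```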

  7. An Extended Multilocus Sequence Typing (MLST) Scheme for Rapid Direct Typing of Leptospira from Clinical Samples.

    Directory of Open Access Journals (Sweden)

    Sabrina Weiss

    2016-09-01

    Full Text Available Rapid typing of Leptospira is currently impaired by the need for time-consuming culture of leptospires. The objective of this study was to develop an assay that provides multilocus sequence typing (MLST) data directly from patient specimens while minimising costs for subsequent sequencing. An existing PCR-based MLST scheme was modified by designing nested primers including anchors to facilitate subsequent sequencing. The assay was applied to various specimen types from patients diagnosed with leptospirosis between 2014 and 2015 in the United Kingdom (UK) and the Lao People's Democratic Republic (Lao PDR). Of 44 clinical samples (23 serum, 6 whole blood, 3 buffy coat, 12 urine) PCR-positive for pathogenic Leptospira spp., at least one allele was amplified in 22 samples (50%) and used for phylogenetic inference. Full allelic profiles were obtained from ten specimens, representing all sample types (23%). No nonspecific amplicons were observed in any of the samples. Of the twelve PCR-positive urine specimens, three gave full allelic profiles (25%) and two gave partial profiles. Phylogenetic analysis allowed for species assignment. The predominant species detected was L. interrogans (10/14 and 7/8 from the UK and Lao PDR, respectively). All other species were detected in samples from only one country (Lao PDR: L. borgpetersenii [1/8]; UK: L. kirschneri [1/14], L. santarosai [1/14], L. weilii [2/14]). Typing information for pathogenic Leptospira spp. was obtained directly from a variety of clinical samples using a modified MLST assay. This assay negates the need for time-consuming culture of Leptospira prior to typing and will be of use both in surveillance, as single alleles enable species determination, and in outbreaks for the rapid identification of clusters.

  8. Innovative process scheme for removal of organic matter, phosphorus and nitrogen from pig manure

    DEFF Research Database (Denmark)

    Karakashev, Dimitar Borisov; Schmidt, Jens Ejbye; Angelidaki, Irini

    2008-01-01

    ... blanket (UASB) reactor, partial oxidation), nitrogen (oxygen-limited autotrophic nitrification-denitrification, OLAND) and phosphorus (phosphorus removal by precipitation as struvite, PRS) from pig manure were tested. The results showed that microfiltration was unsuitable for pig manure treatment ... PRS-treated effluent negatively affected the further processing of the pig manure in the UASB reactor and was therefore not included in the final process flow scheme. In the final scheme (the PIGMAN concept), a combination of the following successive process steps was used: thermophilic anaerobic digestion ... with sequential separation by decanter centrifuge, post-digestion in a UASB reactor, partial oxidation and finally the OLAND process. This combination resulted in reduction of the total organic, nitrogen and phosphorus contents by 96%, 88%, and 81%, respectively.

  9. Characterization of organic matter in cloud waters sampled at the puy de Dôme mountain using FT-ICR-MS

    Science.gov (United States)

    Bianco, A.; Chaumerliac, N.; Vaitilingom, M.; Deguillaume, L.; Bridoux, M. C.

    2017-12-01

    The chemical composition of organic matter in cloud water is highly complex. The organic species result from dissolution from the gas phase or from the soluble fraction of the particle phase; they are also produced by aqueous-phase reactivity. Several low-molecular-weight organic species have been quantified, such as aldehydes and carboxylic acids. Recently, amino acids were also detected in cloud water, and their presence is related to the presence of microorganisms. Compounds presenting similarities with high-molecular-weight organic substances, or HULIS, found in aerosols were also observed in clouds. Overall, these studies mainly focused on individual compounds or functional groups rather than the complex mixture at the molecular level. This study presents a non-targeted approach to characterizing the organic matter in clouds. Samples were collected at the puy de Dôme mountain (France). Two cloud water samples (June and July 2016) were analyzed using high-resolution mass spectrometry (ESI-FT-ICR-MS, 9.4 T). A reversed-phase solid phase extraction (SPE) procedure was performed to concentrate dissolved organic matter components. Composer (v.1.5.3) software was used to filter the mass spectral data, externally recalibrate the dataset and calculate all possible formulas for the detected anions. The first cloud sample (June) resulted from an air mass coming from the north (North Sea), while the second (July) resulted from an air mass coming from the west (Atlantic Ocean). Thus, both cloud events derived from marine air masses but were characterized by different hydrogen peroxide concentrations and dissolved organic carbon contents and were sampled at different periods during the day. Elemental compositions of 6487 and 3284 unique molecular species were identified in the two samples, respectively. Nitrogen-containing compounds (CHNO compounds), sulfur-containing compounds (CHOS and CHNOS compounds) and other oxygen-containing compounds (CHO compounds) with molecular weights up to 800 Da were detected.

  10. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
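
    As a hedged illustration of what "multiplicative" means here, the toy time-stepping loop below perturbs only the parametrised physics tendency with an AR(1) random multiplier; the variable, the tendencies, and the amplitude/correlation values are all invented and do not correspond to the ECMWF implementation.

    ```python
    # Toy sketch of a multiplicative SPPT-style perturbation (illustrative, not ECMWF code).
    # The random pattern r follows a simple AR(1) process in time; sigma and phi are assumed values.
    import numpy as np

    rng = np.random.default_rng(1)
    n_steps, sigma, phi = 100, 0.5, 0.95      # perturbation std-dev and lag-1 autocorrelation (assumed)
    r = 0.0
    state = 280.0                             # a temperature-like model variable (toy)

    for _ in range(n_steps):
        dyn_tendency   = -0.01 * (state - 280.0)   # resolved dynamics (toy)
        param_tendency = 0.05                      # sum of parametrised physics tendencies (toy)
        r = phi * r + np.sqrt(1 - phi**2) * sigma * rng.standard_normal()
        r = np.clip(r, -1.0, 1.0)                  # keep the multiplier (1 + r) non-negative
        state += dyn_tendency + (1.0 + r) * param_tendency   # SPPT: perturb only the physics tendency
    ```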

  11. Discrete dark matter

    CERN Document Server

    Hirsch, M; Peinado, E; Valle, J W F

    2010-01-01

    We propose a new motivation for the stability of dark matter (DM). We suggest that the same non-abelian discrete flavor symmetry which accounts for the observed pattern of neutrino oscillations, spontaneously breaks to a Z2 subgroup which renders DM stable. The simplest scheme leads to a scalar doublet DM potentially detectable in nuclear recoil experiments, inverse neutrino mass hierarchy, hence a neutrinoless double beta decay rate accessible to upcoming searches, while reactor angle equal to zero gives no CP violation in neutrino oscillations.

  12. Air oxidation of samples from different clay formations of East Paris basin: quantitative and qualitative consequences on the dissolved organic matter

    International Nuclear Information System (INIS)

    Blanchart, Pascale; Faure, Pierre; Michels, Raymond; Parant, Stephane

    2012-01-01

    Document available in extended abstract form only. During the excavation and building of an underground research laboratory in clay geological formations, exposure to air is one of the most important parameters affecting the composition of fossil organic matter. The net effect of air oxidation of the organic matter is an enrichment in oxygen and carbon combined with a loss of hydrogen. The effluents formed are CO2 and water, together with the liberation of hydrocarbons. This process may have an impact on the water chemistry of the clay, especially on the quantity and composition of the dissolved organic matter (DOM). The clays studied were the following, distinguished on the basis of their organic matter content: - the Callovo-Oxfordian argillite, collected in the Bure Underground Research Laboratory (Meuse, France), which contains a mixture of type II and III kerogen; - the Toarcian shales of the East Paris Basin, collected from drilling EST 204 (Meuse, France), which contain type II kerogen; - the Kimmeridgian shales of the East Paris Basin, collected from drilling HTM 102 (Meuse, France), which also contain type II kerogen. The powdered clay samples were oxidized in a ventilated oven at 100 °C under air flow for 2, 256, 512 and 1088 hours for the Callovo-Oxfordian samples and for 512 and 2048 hours for the Toarcian and Kimmeridgian samples. The DOM of each sample was extracted by Soxhlet using pure water. Different analyses were carried out: - the quantitative evolution of DOM with the oxidation process; - the evolution of several chemical parameters of DOM with oxidation, using molecular analyses (Py-GC-MS), molecular weight distributions (GPC-HPLC) and spectroscopic measurements (3D fluorescence). Increasing oxidation induces an increase in DOC values for all samples. Changes in the chemical composition of the DOM are also observed: a decrease in the molecular weight range and an enrichment in acidic functional groups (alkanedioic acids, alkanoic acids, aromatic polyacids). Moreover the ...

  13. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

    Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow-line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L_[OII]/W) ≳ 35 [or radio luminosity log10(L_151/(W/Hz/sr)) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θ_trans ≈ 53°. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θ_trans and/or a gradual increase in the fraction of lightly-reddened (0 ≲ ...) objects with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low-luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
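
    A hedged aside on the geometry (our own reconstruction, not spelled out in the abstract): for randomly oriented sources with an obscuring torus of half-opening angle θ_trans, the expected quasar fraction equals the fractional solid angle of the two unobscured cones, f_q = 1 - cos θ_trans, so the quoted half-opening angle corresponds to a quasar fraction of roughly 0.4 in the luminosity-independent regime.

    ```python
    # Reconstruction of the simple unified-scheme geometry assumed above; not taken
    # verbatim from the paper. Quasar fraction = fractional solid angle of the two
    # open cones of a torus with half-opening angle theta_trans:
    #   f_q = 1 - cos(theta_trans)
    import math

    theta_trans = math.radians(53.0)
    f_q = 1.0 - math.cos(theta_trans)
    print(f"quasar fraction ~ {f_q:.2f}")   # ~0.40 for theta_trans ~ 53 deg
    ```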

  14. Evaluation of sampling methods for toxicological testing of indoor air particulate matter.

    Science.gov (United States)

    Tirkkonen, Jenni; Täubel, Martin; Hirvonen, Maija-Riitta; Leppänen, Hanna; Lindsley, William G; Chen, Bean T; Hyvärinen, Anne; Huttunen, Kati

    2016-09-01

    There is a need for toxicity tests capable of recognizing indoor environments with compromised air quality, especially in the context of moisture damage. One of the key issues is sampling, which should both provide meaningful material for analyses and fulfill requirements imposed by practitioners using toxicity tests for health risk assessment. We aimed to evaluate different existing methods of sampling indoor particulate matter (PM) to develop a suitable sampling strategy for a toxicological assay. During three sampling campaigns in moisture-damaged and non-damaged school buildings, we evaluated one passive and three active sampling methods: the Settled Dust Box (SDB), the Button Aerosol Sampler, the Harvard Impactor and the National Institute for Occupational Safety and Health (NIOSH) Bioaerosol Cyclone Sampler. Mouse RAW264.7 macrophages were exposed to particle suspensions and cell metabolic activity (CMA), production of nitric oxide (NO) and tumor necrosis factor (TNFα) were determined after 24 h of exposure. The repeatability of the toxicological analyses was very good for all tested sampler types. Variability within the schools was found to be high especially between different classrooms in the moisture-damaged school. Passively collected settled dust and PM collected actively with the NIOSH Sampler (Stage 1) caused a clear response in exposed cells. The results suggested the higher relative immunotoxicological activity of dust from the moisture-damaged school. The NIOSH Sampler is a promising candidate for the collection of size-fractionated PM to be used in toxicity testing. The applicability of such sampling strategy in grading moisture damage severity in buildings needs to be developed further in a larger cohort of buildings.

  15. On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Wang Daoshun

    2010-01-01

    Full Text Available Traditional Secret Sharing (SS) schemes reconstruct the secret exactly the same as the original but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however, the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) a -SS scheme to a -VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. A novel matrix-concatenation approach is then used to extend the greyscale -SS scheme to a more general case of greyscale -VSS scheme.
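
    A minimal two-share binary sketch of the XOR-generation / OR-stacking idea follows; the paper's actual construction covers greyscale images and more general access structures, which this toy example does not reproduce.

    ```python
    # Toy (2, 2) binary example of XOR share generation with two recovery modes;
    # illustrative only, not the paper's greyscale construction.
    import numpy as np

    rng = np.random.default_rng(2)
    secret = rng.integers(0, 2, size=(64, 64))        # binary secret image (1 = black)

    share1 = rng.integers(0, 2, size=secret.shape)    # random share
    share2 = np.bitwise_xor(secret, share1)           # XOR generation: share1 ^ share2 == secret

    exact   = np.bitwise_xor(share1, share2)          # lossless recovery (needs computation)
    stacked = np.bitwise_or(share1, share2)           # VSS-style "stacking" (OR), no computation

    assert np.array_equal(exact, secret)
    # OR recovery keeps every black pixel black; white pixels stay white only about half
    # the time, which is the reduced average contrast discussed in the abstract.
    print((stacked[secret == 1] == 1).mean(), (stacked[secret == 0] == 0).mean())
    ```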

  16. NSVZ scheme with the higher derivative regularization for N=1 SQED

    International Nuclear Information System (INIS)

    Kataev, A.L.; Stepanyantz, K.V.

    2013-01-01

    The exact NSVZ relation between the β-function of N=1 SQED and the anomalous dimension of the matter superfields is studied within the Slavnov higher derivative regularization approach. It is shown that if the renormalization group functions are defined in terms of the bare coupling constant, this relation is always valid. In the renormalized theory the NSVZ relation is obtained in the momentum subtraction scheme supplemented by a special finite renormalization. Unlike dimensional reduction, the higher derivative regularization allows one to fix this finite renormalization. This is done by imposing the conditions Z_3(α, μ = Λ) = 1 and Z(α, μ = Λ) = 1 on the renormalization constants of N=1 SQED, where Λ is a parameter in the higher derivative term. The results are verified by an explicit three-loop calculation. In this approximation we relate the DR-bar scheme and the NSVZ scheme defined within the higher derivative approach by the finite renormalization.

  17. The Anatomy of Pension Fraud in Nigeria: Its Motives, the Management and Future of the Nigerian Pension Scheme

    Directory of Open Access Journals (Sweden)

    Amaka E. Agbata

    2017-12-01

    Full Text Available The study determined how the administration of the pension scheme in Nigeria could be improved through effective management that would reduce the fraudulent practices apparent in the scheme. Following the precepts of library research and a survey design, a 5-point Likert scale questionnaire was designed to elicit primary information about pension matters from a sample of 435 knowledgeable respondents. The collected data were presented and analyzed. Three hypotheses were formulated and tested using multiple regression analysis models with the aid of Minitab version 17. The findings show that, despite the provisions of the Act (the Pension Reform Act, PRA), intentions to commit pension fraud have not reduced to a significant extent. Also, the accumulated assets of pension funds have not been adequately diversified into profitable investment alternatives. Therefore, we recommend that, among other things, amendments be made to the PRA to discourage acts of pension fraud by instituting severe punitive measures for culprits, while simultaneously inculcating moral ethics among public servants in Nigeria.

  18. The dark matter of galaxy voids

    Science.gov (United States)

    Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H.; Warren, Michael S.

    2014-03-01

    How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ~20 per cent, they have somewhat shallower density profiles and they have centres offset by ~0.4 R_v (rms). However, in void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though the centres may be offset. When this offset is taken into account, we recover almost identical radial density profiles between galaxies and dark matter. All mock catalogues used in this work are available at http://www.cosmicvoids.net.

  19. Optical generation of matter qubit graph states

    International Nuclear Information System (INIS)

    Benjamin, S C; Eisert, J; Stace, T M

    2005-01-01

    We present a scheme for rapidly entangling matter qubits in order to create graph states for one-way quantum computing. The qubits can be simple three-level systems in separate cavities. Coupling involves only local fields and a static (unswitched) linear optics network. Fusion of graph-state sections occurs with, in principle, zero probability of damaging the nascent graph state. We avoid the finite thresholds of other schemes by operating on two entangled pairs, so that each generates exactly one photon. We do not require the relatively slow single qubit local flips to be applied during the growth phase: growth of the graph state can then become a purely optical process. The scheme naturally generates graph states with vertices of high degree and so is easily able to construct minimal graph states, with consequent resource savings. The most efficient approach will be to create new graph-state edges even as qubits elsewhere are measured, in a 'just in time' approach. An error analysis indicates that the scheme is relatively robust against imperfections in the apparatus

  20. Quality control scheme for thyroid related hormones measured by radioimmunoassay

    International Nuclear Information System (INIS)

    Kamel, R.S.

    1989-09-01

    A regional quality control scheme for thyroid-related hormones measured by radioimmunoassay is being established in the Middle East. The scheme started in January 1985 with eight laboratories, all from Iraq. At present, nineteen laboratories from Iraq, Jordan, Kuwait, Saudi Arabia and the United Arab Emirates (Dubai) are participating in the scheme. The scheme was supported by the International Atomic Energy Agency. All participants receive three freeze-dried quality control samples monthly for assay. The results for T3, T4 and TSH received from participants are analysed statistically batch by batch and returned to the participants. Laboratories reporting markedly biased results were contacted to check the assay performance for that particular batch and to identify the weak points. Clinical interpretations for certain well-defined samples were reported. A regular case-study report was recently introduced to the scheme and will be distributed regularly as one of the guidelines for establishing a trouble-shooting programme throughout the scheme. The overall between-laboratory performance was good for T4, moderate but acceptable for T3, and poor for TSH. The statistical analysis of the results is based on the concept of a 'target' value derived from the believed correct value, the median. The overall mean bias values (ignoring signs) for low, normal and high concentration samples, respectively, were 18.0 ± 12.5, 11.2 ± 6.4 and 11.2 ± 6.4 for T4; 28.8 ± 23.5, 11.2 ± 8.4 and 13.4 ± 9.0 for T3; and 46.3 ± 50.1, 37.2 ± 28.5 and 19.1 ± 12.1 for TSH. The scheme proved to be effective not only in improving the overall performance but also in developing awareness of the need for internal quality control programmes, and it gave the participants confidence in their results. The scheme will continue and will be expanded to involve more laboratories in the region. Refs, fig and tabs
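
    As an illustration of the "target value = median" statistic described above, the short sketch below computes unsigned percentage bias for a handful of invented laboratory results; the numbers are placeholders, not scheme data.

    ```python
    # Illustrative sketch of the median-derived target-value bias statistic;
    # the laboratory results below are invented examples.
    import statistics

    reported_T4 = {"lab01": 98.0, "lab02": 105.0, "lab03": 88.0, "lab04": 112.0, "lab05": 101.0}  # nmol/L

    target = statistics.median(reported_T4.values())          # believed correct value ("median")
    bias_pct = {lab: abs(value - target) / target * 100.0     # unsigned bias, % of target
                for lab, value in reported_T4.items()}

    for lab, b in sorted(bias_pct.items(), key=lambda kv: -kv[1]):
        print(f"{lab}: {b:.1f}% bias from target {target:.1f}")
    ```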

  1. Continuous quality control of the blood sampling procedure using a structured observation scheme

    DEFF Research Database (Denmark)

    Seemann, Tine Lindberg; Nybo, Mads

    2016-01-01

    INTRODUCTION: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. MATERIALS AND METHODS: The questionnaire...

  2. Asymptotic safety of gravity with matter

    Science.gov (United States)

    Christiansen, Nicolai; Litim, Daniel F.; Pawlowski, Jan M.; Reichert, Manuel

    2018-05-01

    We study the asymptotic safety conjecture for quantum gravity in the presence of matter fields. A general line of reasoning is put forward explaining why gravitons dominate the high-energy behavior, largely independently of the matter fields as long as these remain sufficiently weakly coupled. Our considerations are put to work for gravity coupled to Yang-Mills theories with the help of the functional renormalization group. In an expansion about flat backgrounds, explicit results for beta functions, fixed points, universal exponents, and scaling solutions are given in systematic approximations exploiting running propagators, vertices, and background couplings. Invariably, we find that the gauge coupling becomes asymptotically free while the gravitational sector becomes asymptotically safe. The dependence on matter field multiplicities is weak. We also explain how the scheme dependence, which is more pronounced, can be handled without changing the physics. Our findings offer a new interpretation of many earlier results, which is explained in detail. The results generalize to theories with minimally coupled scalar and fermionic matter. Some implications for the ultraviolet closure of the Standard Model or its extensions are given.

  3. An assessment of common atmospheric particulate matter sampling ...

    African Journals Online (AJOL)

    The method detection limit was also low (0.2 to 1 μg/L) for most metals, and standard deviation-to-mean ratios of 50% or less were obtained for Ni and Pb. Key words: Toxic metals, inductively coupled plasma mass spectroscopy, scanning electron microscopy coupled with energy dispersive spectrometry, particulate matter, ...

  4. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside the reactor reflector that remain practical for parametric studies. On the one hand, deterministic codes have reasonable computation times but pose problems for the geometrical description. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but require computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone within the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (for the cold neutron source of the Orphee reactor). The short response time allows parametric studies to be carried out with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded at the entrances of the neutron guides (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used for condensed matter research, makes it possible to obtain the fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides and the samples of matter. This complete calculation scheme is tested against ILL measurements of the flux in cold neutron guides. (authors)

  5. A Novel Power-Saving Transmission Scheme for Multiple-Component-Carrier Cellular Systems

    Directory of Open Access Journals (Sweden)

    Yao-Liang Chung

    2016-04-01

    Full Text Available As mobile data traffic levels have increased exponentially, resulting in rising energy costs in recent years, the demand for and development of green communication technologies has resulted in various energy-saving designs for cellular systems. At the same time, recent technological advances have allowed multiple component carriers (CCs to be simultaneously utilized in a base station (BS, a development that has made the energy consumption of BSs a matter of increasing concern. To help address this concern, herein we propose a novel scheme aimed at efficiently minimizing the power consumption of BS transceivers during transmission, while still ensuring good service quality and fairness for users. Specifically, the scheme utilizes the dynamic activation/deactivation of CCs during data transmission to increase power usage efficiency. To test its effectiveness, the proposed scheme was applied to a model consisting of a BS with orthogonal frequency division multiple access-based CCs in a downlink transmission environment. The results indicated that, given periods of relatively light traffic loads, the total power consumption of the proposed scheme is significantly lower than that of schemes in which all the CCs of a BS are constantly activated, suggesting the scheme’s potential for reducing both energy costs and carbon dioxide emissions.
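
    The core idea - activate only as many component carriers as the instantaneous load requires and deactivate the rest - can be sketched as follows; the capacities, loads and linear power model below are invented placeholders rather than the paper's algorithm.

    ```python
    # Hedged sketch of dynamic CC activation/deactivation; all numbers and the power
    # model are invented placeholders, not the scheme proposed in the paper.
    import math

    def ccs_needed(load_mbps: float, cc_capacity_mbps: float, n_ccs: int) -> int:
        """Smallest number of component carriers whose aggregate capacity covers the load."""
        return min(n_ccs, max(1, math.ceil(load_mbps / cc_capacity_mbps)))

    def transceiver_power_w(n_active: int, p_static: float = 10.0, p_per_cc: float = 25.0) -> float:
        return p_static + n_active * p_per_cc      # toy linear power model

    for load in (30, 120, 350, 800):               # offered traffic in Mbps
        n = ccs_needed(load, cc_capacity_mbps=150.0, n_ccs=5)
        print(f"{load:4d} Mbps -> {n} CC(s), {transceiver_power_w(n):5.1f} W "
              f"(all-on: {transceiver_power_w(5):5.1f} W)")
    ```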

  6. Experimental results on advanced inertial fusion schemes obtained within the HiPER project

    Czech Academy of Sciences Publication Activity Database

    Batani, D.; Gizzi, L.A.; Koester, P.; Labate, L.; Honrubia, J.; Antonelli, L.; Morace, A.; Volpe, L.; Santos, J.J.; Schurtz, G.; Hulin, S.; Ribeyre, X.; Nicolai, P.; Vauzour, B.; Dorchies, F.; Nazarov, W.; Pasley, J.; Richetta, M.; Lancaster, K.; Spindloe, C.; Tolley, M.; Neely, D.; Kozlová, Michaela; Nejdl, Jaroslav; Rus, Bedřich; Wolowski, J.; Badziak, J.

    2012-01-01

    Roč. 57, č. 1 (2012), s. 3-10 ISSN 0029-5922. [International Workshop and Summer School on Towards Fusion Energy /10./. Kudowa Zdroj, 12.06.2011-18.06.2011] Institutional research plan: CEZ:AV0Z10100502 Keywords : advanced ignition schemes * fast ignition * shock ignition Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.507, year: 2012

  7. Trimethylsilyl derivatives of organic compounds in source samples and in atmospheric fine particulate matter.

    Science.gov (United States)

    Nolte, Christopher G; Schauer, James J; Cass, Glen R; Simoneit, Bernd R T

    2002-10-15

    Source sample extracts of vegetative detritus, motor vehicle exhaust, tire dust, paved road dust, and cigarette smoke have been silylated and analyzed by GC-MS to identify polar organic compounds that may serve as tracers for those specific emission sources of atmospheric fine particulate matter. Candidate molecular tracers were also identified in atmospheric fine particle samples collected in the San Joaquin Valley of California. A series of normal primary alkanols, dominated by even carbon-numbered homologues from C26 to C32, the secondary alcohol 10-nonacosanol, and some phytosterols are prominent polar compounds in the vegetative detritus source sample. No new polar organic compounds were found in the motor vehicle exhaust samples. Several hydrogenated resin acids are present in the tire dust sample, which might serve as useful tracers for those sources in areas that are heavily impacted by motor vehicle traffic. Finally, the alcohol and sterol emission profiles developed for all the source samples examined in this project are scaled according to the ambient fine particle mass concentrations attributed to those sources by a chemical mass balance receptor model previously applied to the San Joaquin Valley, to compute the predicted atmospheric concentrations of individual alcohols and sterols. The resulting underprediction of alkanol concentrations at the urban sites suggests that alkanols may be more sensitive tracers for the natural background from vegetative emissions (i.e., waxes) than the high-molecular-weight alkanes, which have been the best previously available tracers for that source.

  8. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes
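
    The efficiency argument can be illustrated with a synthetic example: hourly impingement counts with a diel cycle are sampled either completely at random or systematically, and the spread of the expanded total estimates is compared. The impingement model, sample size and replication below are invented for illustration, not plant data or the report's models.

    ```python
    # Synthetic comparison of simple random vs. systematic sampling of an impingement
    # time series with a diel cycle; all numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    hours = np.arange(24 * 30)                                     # one month of hourly counts
    truth = rng.poisson(20 + 15 * np.sin(2 * np.pi * hours / 24))  # diel pattern in impingement

    def estimate_total(sample_idx):
        return truth[sample_idx].mean() * truth.size               # expand sample mean to the period

    n, reps = 60, 2000
    srs_est = [estimate_total(rng.choice(truth.size, n, replace=False)) for _ in range(reps)]
    starts = rng.integers(0, truth.size // n, size=reps)
    sys_est = [estimate_total(np.arange(s, truth.size, truth.size // n)[:n]) for s in starts]

    print("true total:", truth.sum())
    print("simple random sampling, std of estimate:", round(float(np.std(srs_est)), 1))
    print("systematic sampling,    std of estimate:", round(float(np.std(sys_est)), 1))
    ```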

  9. A general scheme for obtaining graviton spectrums

    International Nuclear Information System (INIS)

    GarcIa-Cuadrado, G

    2006-01-01

    The aim of this contribution is to present a general scheme for obtaining graviton spectra from modified gravity theories, based on a theory developed by Grishchuk in the mid-1970s. We try to be pedagogical, putting some basic ideas in order in a compact procedure and giving a review of the current trends in this arena. With the aim of filling a gap at the interface between quantum field theorists and observational cosmologists in this matter, we highlight two interesting applications to cosmology: clues as to the nature of dark energy, and the possibility of reconstructing the scalar potential in scalar-tensor gravity theories.

  10. Green Ocean Amazon 2014/15 High-Volume Filter Sampling: Atmospheric Particulate Matter of an Amazon Tropical City and its Relationship to Population Health Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Machado, C. M. [Federal Univ. of Amazonas (Brazil); Santos, Erickson O. [Federal Univ. of Amazonas (Brazil); Fernandes, Karenn S. [Federal Univ. of Amazonas (Brazil); Neto, J. L. [Federal Univ. of Amazonas (Brazil); Souza, Rodrigo A. [Univ. of the State of Amazonas (Brazil)

    2016-08-01

    Manaus, the capital of the Brazilian state of Amazonas, is developing very rapidly. Its pollution plume contains aerosols from fossil fuel combustion, mainly due to vehicular emissions, industrial activity, and a thermal power plant. Soil resuspension is probably a secondary source of atmospheric particles. The plume transports urban pollutants from Manaus, as well as pollutants from pottery factories along its route, to the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility site at Manacapuru. Considering the effects of particulate matter on health, atmospheric particulate matter was evaluated at this site as part of the ARM Facility's Green Ocean Amazon 2014/15 (GoAmazon 2014/15) field campaign. Aerosol or particulate matter (PM) is typically defined by size, with the smaller particles having the greater health impact. Total suspended particulates (TSP) are particles smaller than 100 μm; particles smaller than 2.5 μm are called PM2.5. In this work, PM2.5 levels were obtained from March to December 2015, totaling 34 samples, and TSP levels from October to December 2015, totaling 17 samples. Sampling was conducted with PM2.5 and TSP high-volume samplers using quartz filters (Figure 1). Filters were stored for 24 hours in a room with temperature (21.1 °C) and humidity (44.3%) control in order to perform gravimetric analyses by weighing before and after sampling. This procedure followed the recommendations of the Brazilian Association for Technical Standards local norm (NBR 9547:1997). Mass concentrations of particulate matter were obtained from the ratio between the weighed sample mass and the volume of air collected. Defining a relationship between particulate matter (PM2.5 and TSP) and respiratory diseases of the local population is an important goal of this project, since no information exists on that topic.
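
    The gravimetric calculation described above reduces to a one-line formula; the function below sketches it with invented example numbers (the filter masses and sampled air volume are placeholders, not campaign data).

    ```python
    # Minimal gravimetric mass-concentration calculation as described above;
    # the example values are invented placeholders.
    def pm_concentration_ug_m3(filter_mass_before_g: float,
                               filter_mass_after_g: float,
                               air_volume_m3: float) -> float:
        """Mass concentration = collected particle mass / volume of air sampled."""
        collected_mass_ug = (filter_mass_after_g - filter_mass_before_g) * 1e6
        return collected_mass_ug / air_volume_m3

    # e.g. a 24 h high-volume run of ~1600 m^3 collecting 0.0450 g of PM2.5:
    print(round(pm_concentration_ug_m3(3.2100, 3.2550, 1632.0), 1), "ug/m3")
    ```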

  11. Corrections to the General (2,4) and (4,4) FDTD Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Meierbachtol, Collin S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Smith, William S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shao, Xuan-Min [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-29

    The sampling weights associated with two general higher-order FDTD schemes were derived by Smith et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between the governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher-order FDTD dispersion relation was re-derived using Mathematica. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher-order schemes provided in the Smith et al. 2012 article instead use the sampling weights and stability conditions listed in this document.

  12. An adaptive Cartesian control scheme for manipulators

    Science.gov (United States)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  13. Deep versus periventricular white matter lesions and cognitive function in a community sample of middle-aged participants.

    Science.gov (United States)

    Soriano-Raya, Juan José; Miralbell, Júlia; López-Cancio, Elena; Bargalló, Núria; Arenillas, Juan Francisco; Barrios, Maite; Cáceres, Cynthia; Toran, Pere; Alzamora, Maite; Dávalos, Antoni; Mataró, Maria

    2012-09-01

    The association of cerebral white matter lesions (WMLs) with cognitive status is not well understood in middle-aged individuals. Our aim was to determine the specific contribution of periventricular hyperintensities (PVHs) and deep white matter hyperintensities (DWMHs) to cognitive function in a community sample of asymptomatic participants aged 50 to 65 years. One hundred stroke- and dementia-free adults completed a comprehensive neuropsychological battery and brain MRI protocol. Participants were classified according to PVH and DWMH scores (Fazekas scale). We dichotomized our sample into low-grade WMLs (participants without or with mild lesions) and high-grade WMLs (participants with moderate or severe lesions). Analyses were performed separately in PVH and DWMH groups. High-grade DWMHs were associated with significantly lower scores in executive functioning (-0.45 standard deviations [SD]), attention (-0.42 SD), verbal fluency (-0.68 SD), visual memory (-0.52 SD), visuospatial skills (-0.79 SD), and psychomotor speed (-0.46 SD). Further analyses revealed that high-grade DWMHs were also associated with a three- to fourfold increased risk of impaired scores (i.e., <1.5 SD) in executive functioning, verbal fluency, visuospatial skills, and psychomotor speed. Our findings suggest that only DWMHs, not PVHs, are related to diminished cognitive function in middle-aged individuals. (JINS, 2012, 18, 1-12).

  14. Dark Matter Detection Using Helium Evaporation and Field Ionization.

    Science.gov (United States)

    Maris, Humphrey J; Seidel, George M; Stein, Derek

    2017-11-03

    We describe a method for dark matter detection based on the evaporation of helium atoms from a cold surface and their subsequent detection using field ionization. When a dark matter particle scatters off a nucleus of the target material, elementary excitations (phonons or rotons) are produced. Excitations which have an energy greater than the binding energy of helium to the surface can result in the evaporation of helium atoms. We propose to detect these atoms by ionizing them in a strong electric field. Because the binding energy of helium to surfaces can be below 1 meV, this detection scheme opens up new possibilities for the detection of dark matter particles in a mass range down to 1 MeV/c².

  15. Dark Matter Detection Using Helium Evaporation and Field Ionization

    Science.gov (United States)

    Maris, Humphrey J.; Seidel, George M.; Stein, Derek

    2017-11-01

    We describe a method for dark matter detection based on the evaporation of helium atoms from a cold surface and their subsequent detection using field ionization. When a dark matter particle scatters off a nucleus of the target material, elementary excitations (phonons or rotons) are produced. Excitations which have an energy greater than the binding energy of helium to the surface can result in the evaporation of helium atoms. We propose to detect these atoms by ionizing them in a strong electric field. Because the binding energy of helium to surfaces can be below 1 meV, this detection scheme opens up new possibilities for the detection of dark matter particles in a mass range down to 1 MeV/c².

  16. Functions and Design Scheme of Tibet High Altitude Test Base

    Institute of Scientific and Technical Information of China (English)

    Yu Yongqing; Guo Jian; Yin Yu; Mao Yan; Li Guangfan; Fan Jianbin; Lu Jiayu; Su Zhiyi; Li Peng; Li Qingfeng; Liao Weiming; Zhou Jun

    2010-01-01

    The functional orientation of the Tibet High Altitude Test Base, subordinate to the State Grid Corporation of China (SGCC), is to serve power transmission projects in high-altitude areas, and especially to provide technical support for southwestern hydropower delivery projects using UHVDC transmission and for the Qinghai-Tibet grid interconnection project. This paper presents the considerations involved in siting and planning, the functions and design scheme, the main performance figures and parameters of the test facilities, and the tests and research tasks already carried out.

  17. Study of suprathermal electron transport in solid or compressed matter for the fast-ignitor scheme

    International Nuclear Information System (INIS)

    Perez, F.

    2010-01-01

    The inertial confinement fusion (ICF) concept is widely studied nowadays. It consists in quickly compressing and heating a small spherical capsule filled with fuel, using extremely energetic lasers. The fast-ignition (FI) technique was proposed approximately 15 years ago to facilitate fuel heating by adding a particle beam - electrons generated by an ultra-intense laser - at the exact moment when the capsule compression is at its maximum. This thesis is an experimental study of the electron beams generated by picosecond-scale lasers. We present new results on the characteristics of these electrons after they are accelerated by the laser (energy, divergence, etc.) as well as on their interaction with the matter they pass through. The experimental results are explained and reveal different aspects of these laser-accelerated fast electrons. Their analysis allowed significant progress in understanding several mechanisms: how the electrons are injected into solid matter, how to measure their divergence, and how they can be automatically collimated inside compressed matter. (author)

  18. A Novel Type of Oil-generating Organic Matter - Crystal-enclosed Organic Matter

    Institute of Scientific and Technical Information of China (English)

    周中毅; 裴存民; et al.

    1992-01-01

    The comparative study of organic matter in carbonate rocks and argillaceous rocks from the same horizon indicates that the organic thermal maturities of the carbonate rocks are much lower than those of the argillaceous rocks. An extensive analysis of extracted and crystal-enclosed organic matter from the same sample shows that enclosed organic matter is different from extracted organic matter, and the thermal maturity of the former is usually lower than that of the latter in terms of biomarker structural parameters. It seems that carbonate minerals can preserve organic matter and retard organic maturation. The enclosed organic matter, abundant in most carbonate rocks, will be released from the minerals and transformed into oil and gas during the high thermal maturity stage.

  19. SOURCE SAMPLING FINE PARTICULATE MATTER: WOOD-FIRED INDUSTRIAL BOILER

    Science.gov (United States)

    The report provides a profile for a wood-fired industrial boiler equipped with a multistage electrostatic precipitator control device. Along with the profile of emissions of fine particulate matter of aerodynamic diameter of 2.5 micrometers or less (PM-2.5), data are also provided...

  20. Low-complexity joint symbol synchronization and sampling frequency offset estimation scheme for optical IMDD OFDM systems.

    Science.gov (United States)

    Zhang, Zhen; Zhang, Qianwu; Chen, Jian; Li, Yingchun; Song, Yingxiong

    2016-06-13

    A low-complexity joint symbol synchronization and SFO estimation scheme for asynchronous optical IMDD OFDM systems based on only one training symbol is proposed. Numerical simulations and experimental demonstrations are also undertaken to evaluate the performance of the scheme. The experimental results show that robust and precise symbol synchronization and SFO estimation can be achieved simultaneously at received optical powers as low as -20 dBm in asynchronous OOFDM systems. The SFO estimation accuracy, in terms of MSE, can be lower than 1 × 10^-11 for SFOs ranging from -60 ppm to 60 ppm after 25 km SSMF transmission. Optimal system performance is maintained as long as the cumulative number of frames employed for the calculation is less than 50 under the above-mentioned conditions. Meanwhile, the proposed joint scheme has a low level of operational complexity compared with existing methods when symbol synchronization and SFO estimation are considered together. These results provide an important reference for practical system designs.

  1. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set sampling

  2. Characterization and quantification by mass spectrometry of mobile organic matter from clay rock: influence of the origin and of the sampling

    International Nuclear Information System (INIS)

    Huclier-Markai, S.; Landesman, C.; Montavon, G.; Grambow, B.; Monteau, F.; Fernandez, A.M.; Vinsot, A.

    2012-01-01

    Document available in extended abstract form only. In environmental studies, natural organic matter (NOM) plays a key role in the bioavailability and toxicity of metallic compounds. To evaluate the mobility of heavy metals/radionuclides, one must therefore account for their interactions with NOM, on which their mobility in natura in most cases depends. One part of the organic inventory in the Callovo-Oxfordian formation (COx) exists as small dissolved compounds in the pore water, but the low organic content and low porosity of the formation (~8% water by weight) make complexation studies with metal ions difficult. Part of the organic matter attached to the sediment (~1% by weight) can be mobilized in a synthetic pore water [1] and can be considered similar to the in situ pore water dissolved organic matter (DOM) with regard to its size distribution. Clay pore water was collected through a percolation experiment, a unique and original experimental process developed to obtain pore water from a core sample, which has been described previously. From these experiments, it was shown that the mobile organic matter concentration could reach 0.01 mol C/L by applying a pressure gradient of up to 100 bars. Since part of the OM from the COx is known to be sensitive to air oxidation, the characterization and quantification of DOM were performed under anoxic conditions (about -170 mV vs. the standard hydrogen electrode, SHE). In addition, the chemical composition of the NOM contained in the pore water from the argillite clay rock has been determined under in situ-like conditions by ESI-MS and APCI-MS, which are suitable techniques for identifying the chemical composition of NOM in the COx pore water available from boreholes. Mostly low-molecular-weight molecules were identified; the structural features observed were mainly acidic compounds, fatty acids, aldehydes and amino acids. Fulvic and humic acids have very low concentrations in the COx formation, leading to a ...

  3. New insights into the developing rabbit brain using diffusion tensor tractography and generalized q-sampling MRI.

    Directory of Open Access Journals (Sweden)

    Seong Yong Lim

    Full Text Available The use of modern neuroimaging methods to characterize the complex anatomy of brain development at different stages reveals an enormous wealth of information for understanding this highly ordered process and provides clues to detect neurological and neurobehavioral disorders that have their origin in early structural and functional cerebral maturation. Non-invasive diffusion tensor magnetic resonance imaging (DTI) is able to distinguish cerebral microscopic structures, especially in the white matter regions. However, DTI is unable to resolve complicated neural structures, i.e., the fiber crossings that are frequently observed during the maturation process. To overcome this limitation, several methods have been proposed. One such method, generalized q-sampling imaging (GQI), can be applied to a variety of datasets, including single-shell, multi-shell or grid sampling schemes, and is believed to be able to resolve complicated crossing fibers. Rabbits have been widely used for neurodevelopment research because they exhibit human-like timing of perinatal brain white matter maturation. Here, we present a longitudinal study using both DTI and GQI to demonstrate the changes in cerebral maturation of in vivo developing rabbit brains over a period of 40 weeks. The fractional anisotropy (FA) index of DTI and the generalized fractional anisotropy (GFA) index of GQI demonstrated that white matter anisotropy increased with age, with GFA exhibiting an increase in the hippocampus as well. Normalized quantitative anisotropy (NQA) of GQI also revealed an increase in the hippocampus, allowing us to observe changes in gray matter as well. Regional and whole-brain DTI tractography also demonstrated refinement in fiber pathway architecture with maturation. We concluded that DTI and GQI results were able to characterize the white matter anisotropy changes, whereas GQI provided further information about the gray matter hippocampus area. This developing rabbit brain ...

  4. New approach to measure soil particulate organic matter in intact samples using X-ray computed micro-tomography

    Science.gov (United States)

    Kravchenko, Alexandra; Negassa, Wakene; Guber, Andrey; Schmidt, Sonja

    2014-05-01

    Particulate soil organic matter (POM) is a biologically and chemically active fraction of soil organic matter. It is a source of many agricultural and ecological benefits, among which is its contribution to C sequestration. Most conventional research methods for studying organic matter dynamics involve measurements conducted on pre-processed, i.e., ground and sieved, soil samples. Unfortunately, grinding and sieving completely destroy soil structure, the component crucial for soil functioning and C protection. The importance of a better understanding of the role of soil structure and of the physical protection that it provides to soil C cannot be overstated, and the analysis of quantities, characteristics, and decomposition rates of POM in soil samples with intact structure is among the key elements of gaining such understanding. However, a marked difficulty hindering progress in such analyses is the lack of tools for identification and quantitative analysis of POM in intact soil samples. Recent advancements in the application of X-ray computed micro-tomography (μ-CT) to soil science have provided an opportunity to conduct such analyses. The objective of the current study is to develop a procedure for identification and quantitative characterization of POM within intact soil samples using X-ray μ-CT images and to test the performance of the proposed procedure on a set of multiple intact soil macro-aggregates. We used sixteen 4-6 mm soil aggregates collected at 0-15 cm depth from a Typic Hapludalf soil at multiple field sites with diverse agricultural management histories. The aggregates were scanned at the SIMBIOS Centre, Dundee, Scotland, at 10 μm resolution. POM was determined from the aggregate images using the developed procedure, which combines image pre-processing steps with discriminant analysis classification. The first component of the procedure consists of image pre-processing steps based on the range of gray values (GV) along with shape and size ...
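
    A hedged sketch of the two-step idea (grey-value pre-selection followed by discriminant-analysis classification of candidate objects) is given below; all feature values, thresholds and class definitions are synthetic placeholders and do not reproduce the authors' image-processing pipeline.

    ```python
    # Synthetic illustration of grey-value pre-selection plus discriminant classification;
    # the features and thresholds are invented, not the study's calibration.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(4)

    # Synthetic training objects: columns = mean grey value, object size (voxels), elongation.
    pom     = np.column_stack([rng.normal(100, 10, 150), rng.normal(800, 200, 150), rng.normal(4.0, 1.0, 150)])
    not_pom = np.column_stack([rng.normal(125, 10, 150), rng.normal(300, 150, 150), rng.normal(1.5, 0.5, 150)])
    X = np.vstack([pom, not_pom])
    y = np.array([1] * 150 + [0] * 150)

    lda = LinearDiscriminantAnalysis().fit(X, y)

    # Step 1 (pre-processing): keep only objects whose grey value lies in a POM-like range.
    candidates = X[(X[:, 0] > 70) & (X[:, 0] < 140)]
    # Step 2: discriminant classification of the remaining candidates.
    print("objects classified as POM:", int(lda.predict(candidates).sum()))
    ```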

  5. Exact analysis of Packet Reversed Packet Combining Scheme and Modified Packet Combining Scheme; and a combined scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-07-01

    The packet combining (PC) scheme is a well-defined, simple error-correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput when compared to other basic ARQ protocols, the PC scheme fails to correct errors when they occur in the same bit locations of the copies. In previous work, the Packet Reversed Packet Combining (PRPC) scheme, which corrects errors that occur at the same bit location of erroneous copies, was studied; however, PRPC does not handle the situation where a packet has more than one error bit. The Modified Packet Combining (MPC) scheme, which can correct double or higher-order bit errors, was studied elsewhere. Both the PRPC and MPC schemes were believed in previous studies to offer higher throughput; however, neither adequate investigation nor exact analysis was done to substantiate this claim. In this work, an exact analysis of both PRPC and MPC is carried out and the results reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error correction capability at high bit error rate (BER) and larger packet size. (author)
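
    The basic packet-combining correction step can be sketched as follows: XOR two erroneous copies to locate the bit positions where they disagree, then search over those positions for a combination that satisfies an integrity check. The CRC handling and packet model are illustrative simplifications; the PRPC and MPC refinements analysed in the paper are not reproduced here.

    ```python
    # Toy sketch of basic packet combining (PC): locate disagreeing bits via XOR, then
    # brute-force those positions against a CRC. Illustrative only; assumes the receiver
    # knows the correct CRC and that the two copies do not err in the same bit position.
    from itertools import product
    import zlib

    def corrupt(packet: bytes, bit_positions) -> bytes:
        b = bytearray(packet)
        for p in bit_positions:
            b[p // 8] ^= 1 << (p % 8)
        return bytes(b)

    original = b"hello, packet combining"
    crc = zlib.crc32(original)

    copy1 = corrupt(original, [5])          # each copy arrives with errors in different bits
    copy2 = corrupt(original, [42])

    diff = [i for i in range(len(original) * 8)
            if (copy1[i // 8] ^ copy2[i // 8]) >> (i % 8) & 1]   # disagreement positions

    recovered = None
    for choice in product([0, 1], repeat=len(diff)):             # pick each diff bit from copy1 or copy2
        candidate = bytearray(copy1)
        for bit, use_copy2 in zip(diff, choice):
            if use_copy2:
                candidate[bit // 8] ^= 1 << (bit % 8)            # flip to copy2's value at this bit
        if zlib.crc32(bytes(candidate)) == crc:
            recovered = bytes(candidate)
            break

    print(recovered == original)   # True: errors in different bit positions are corrected
    ```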

  6. Determination of natural organic matter and iron binding capacity in fen samples

    Science.gov (United States)

    Kügler, Stefan; Cooper, Rebecca E.; Frieder Mohr, Jan; Wichard, Thomas; Küsel, Kirsten

    2017-04-01

    Natural organic matter (NOM) plays an important role in ecosystem processes such as soil carbon stabilization, nutrient availability and metal complexation. Iron-NOM complexes, for example, are known to increase the solubility, and as a result the bioavailability, of iron in natural environments, with several effects on the microbial community. Because of the many functions of NOM in natural environments, there is great interest in characterizing its molecular composition, although the complexity of NOM makes this a significant challenge. The development of high-resolution mass spectrometry (HR-MS) as a tool for detecting single compounds in complex samples has spearheaded this effort: over the past years the accuracy of ion cyclotron and Orbitrap mass spectrometers has increased dramatically, making it possible to mass-resolve the large number of compounds in NOM and providing powerful insight into its molecular composition. In the current study, we aim to understand abiotic and biotic interactions between NOM and metals, such as iron, found in the Schlöppnerbrunnen fen (Fichtelgebirge, Germany) and how these interactions impact the microbial consortia. We characterized the dissolved organic matter (DOM) as well as basic chemical parameters in the iron-rich (up to 20 mg (g wt peat)⁻¹), slightly acidic (pH 4.8) fen to gain more information about DOM-metal interactions. This minerotrophic peatland, connected to the groundwater, receives Fe(II) released from the surrounding soils of the Lehstenbach catchment. Atomic absorption spectroscopy (AAS), differential pulse polarography (DPP) and high-resolution electrospray ionization mass spectrometry (HR-ESI-Orbitrap-MS) were applied to characterize the molecular composition of DOM in the peat water extract (PWE). We identified typical patterns for DOM

  7. The Mira-Titan Universe. II. Matter Power Spectrum Emulation

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; Upadhye, Amol; Bingham, Derek; Habib, Salman; Higdon, David; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas

    2017-09-20

    We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc−1 and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project exceeds 1.2 PB. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo-model ideas and remapping approaches.

  8. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
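
    A minimal sketch of the sparse-recovery setting the paper analyses: a Hermite PC expansion sampled at Monte Carlo points, with the coefficients estimated by an ℓ1-type solver (here a plain ISTA/LASSO iteration, not the coherence-optimal MCMC sampler proposed in the paper); all problem sizes and the regularization parameter are illustrative.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

# Sparse 1-D Hermite (probabilists') PC expansion u(x) = sum_k c_k He_k(x),
# sampled at fewer points than there are basis functions.
order, n_samples = 40, 30
c_true = np.zeros(order + 1)
c_true[[1, 3, 7]] = [1.0, -0.5, 0.25]

x = rng.standard_normal(n_samples)                  # natural (Gaussian) sampling
Psi = hermevander(x, order)                         # measurement matrix
Psi = Psi / np.sqrt([math.factorial(k) for k in range(order + 1)])  # orthonormalize
u = Psi @ c_true

# ISTA iterations for the LASSO surrogate of the l1-minimization problem.
lam = 1e-2
L = np.linalg.norm(Psi, 2) ** 2                     # Lipschitz constant of the gradient
c = np.zeros(order + 1)
for _ in range(10_000):
    z = c - Psi.T @ (Psi @ c - u) / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding

# Recovery quality reflects the coherence of the sampled basis -- the quantity
# the paper's sampling strategies are designed to control.
print("true support:      ", np.nonzero(c_true)[0])
print("recovered (|c|>0.1):", np.nonzero(np.abs(c) > 0.1)[0])
```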

  9. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  10. Continuous quality control of the blood sampling procedure using a structured observation scheme

    OpenAIRE

    Lindberg Seemann, Tine; Nybo, Mads

    2016-01-01

    INTRODUCTION: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. MATERIALS AND METHODS: The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months duration was conducted. Based on this, corrective actions were implemented and a ...

  11. Convergence of numerical schemes suitable for two dimensional nonlinear convection: application to the coupling of modes in a plasma

    International Nuclear Information System (INIS)

    Boukadida, T.

    1988-01-01

    The compatibility between accuracy and stability of schemes for quasilinear equations is studied. Three situations are analyzed: the discontinuous P-1 approximation of the first-order quasilinear equation, the two-dimensional version of the Lax-Friedrichs scheme, and the coupling of modes in a plasma. For the one-dimensional case, the proposed scheme matches the available data. In the two-dimensional case, tests to show the explosion condition are performed. This investigation can be applied to laser-matter interactions, nonlinear optics and many other fields of physics. (in French)
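
    For reference, the two-dimensional Lax-Friedrichs update mentioned above has the following generic form for a scalar conservation law u_t + f(u)_x + g(u)_y = 0 (a standard textbook sketch, not the specific scheme of this work):

```python
import numpy as np

def lax_friedrichs_2d(u, f, g, dt, dx, dy):
    """One 2-D Lax-Friedrichs step for u_t + f(u)_x + g(u)_y = 0 on a periodic grid."""
    uxp, uxm = np.roll(u, -1, axis=0), np.roll(u, 1, axis=0)
    uyp, uym = np.roll(u, -1, axis=1), np.roll(u, 1, axis=1)
    return (0.25 * (uxp + uxm + uyp + uym)
            - dt / (2.0 * dx) * (f(uxp) - f(uxm))
            - dt / (2.0 * dy) * (g(uyp) - g(uym)))

# Example: Burgers-type fluxes f(u) = g(u) = u^2 / 2 on a 128 x 128 periodic grid.
n = 128
dx = 1.0 / n
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * x)[None, :]
flux = lambda v: 0.5 * v ** 2
for _ in range(200):
    dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)   # CFL-limited time step
    u = lax_friedrichs_2d(u, flux, flux, dt, dx, dx)
```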

  12. Development of SYVAC sampling techniques

    International Nuclear Information System (INIS)

    Prust, J.O.; Dalrymple, G.J.

    1985-04-01

    This report describes the requirements of a sampling scheme for use with the SYVAC radiological assessment model. The constraints on the number of samples that may be taken are considered. The conclusions from earlier studies using the deterministic generator sampling scheme are summarised. The method of Importance Sampling and a High Dose algorithm, which are designed to sample preferentially in the high-dose region of the parameter space, are reviewed in the light of experience gained from earlier studies and the requirements of a site assessment and sensitivity analyses. In addition, the use of an alternative numerical integration method for estimating risk is discussed. It is recommended that the method of Importance Sampling be developed and tested for use with SYVAC. An alternative numerical integration method is not recommended for investigation at this stage but should be the subject of future work. (author)
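
    The importance-sampling idea reviewed in the report can be illustrated with a toy risk estimate: sample parameters from a proposal shifted towards the high-dose tail and reweight by the ratio of nominal to proposal densities. The dose model, distributions and threshold below are purely illustrative, not anything from SYVAC.

```python
import numpy as np

rng = np.random.default_rng(1)

def dose(params):
    """Toy dose model standing in for the assessment chain (illustrative only)."""
    return np.exp(params)                      # dose grows rapidly with the parameter

threshold = 50.0
n = 100_000

# Nominal parameter distribution: standard normal.
# Proposal: shifted normal that favours the high-dose (right) tail.
shift = 2.5
x = rng.normal(loc=shift, scale=1.0, size=n)
weights = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - shift) ** 2)   # p(x) / q(x)
risk_is = np.mean(weights * (dose(x) > threshold))

# Crude Monte Carlo reference using the nominal distribution.
x0 = rng.normal(size=n)
risk_mc = np.mean(dose(x0) > threshold)
print(f"importance sampling: {risk_is:.3e}, crude MC: {risk_mc:.3e}")
```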

  13. Hunting for dark matter with ultra-stable fibre as frequency delay system.

    Science.gov (United States)

    Yang, Wanpeng; Li, Dawei; Zhang, Shuangyou; Zhao, Jianye

    2015-07-10

    Many cosmological observations point towards the existence of dark matter (DM) particles and consider them the main component of the matter content of the universe. The goal of revealing the nature of dark matter has triggered the development of new, extremely sensitive detectors. It has been demonstrated that the frequencies and phases of an optical clock undergo a transient shift during a DM event due to DM-SM (Standard Model) coupling. A simple, reliable and feasible experimental scheme, based on a frequency-delay system that searches for dark matter by self-frequency comparison of an optical clock, is proposed here for the first time. During the passage of a dark-matter object, a frequency discrepancy is expected between two signals of the same optical clock separated by a short time difference (~ms), revealing the interaction between atoms and dark matter. Furthermore, this process can determine the exact position of the dark matter as it crosses the optical clocks; a network of detecting stations located in different places is therefore recommended to reduce the risk of misjudgment to an acceptable level.

  14. Nicotine Contamination in Particulate Matter Sampling

    Directory of Open Access Journals (Sweden)

    Eric Garshick

    2009-02-01

    We have addressed potential contamination of PM2.5 filter samples by nicotine from cigarette smoke. We collected two nicotine samples: one nicotine sampling filter was placed in-line after the collection of PM2.5 and the other stood alone. The overall correlation between the two nicotine filter levels was 0.99. The nicotine collected on the “stand-alone” filter was slightly greater than that on the “in-line” filter (mean difference = 1.10 μg/m3), but the difference was statistically significant only when PM2.5 was low (≤ 50 μg/m3). It is therefore important to account for personal and secondhand smoke exposure while assessing occupational and environmental PM.

  15. Finite Boltzmann schemes

    NARCIS (Netherlands)

    Sman, van der R.G.M.

    2006-01-01

    In the special case of relaxation parameter = 1, lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the

  16. Continuous quality control of the blood sampling procedure using a structured observation scheme.

    Science.gov (United States)

    Seemann, Tine Lindberg; Nybo, Mads

    2016-10-15

    An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months duration was conducted. Based on this, corrective actions were implemented and a follow-up study was conducted. All phlebotomists at the Department of Clinical Biochemistry and Pharmacology were observed. Three blood collections by each phlebotomist were observed at each session conducted at the phlebotomy ward and the hospital wards, respectively. Error frequencies were calculated for the phlebotomy ward and the hospital wards and for the two study phases. A total of 126 blood drawings by 39 phlebotomists were observed in the pilot study, while 84 blood drawings by 34 phlebotomists were observed in the follow-up study. In the pilot study, the three major error items were hand hygiene (42% error), mixing of samples (22%), and order of draw (21%). Minor significant differences were found between the two settings. After focus on the major aspects, the follow-up study showed significant improvement for all three items at both settings (P < 0.01, P < 0.01, and P = 0.01, respectively). Continuous quality control of the phlebotomy procedure revealed a number of items not conducted in compliance with the local phlebotomy guideline. It supported significant improvements in the adherence to the recommended phlebotomy procedures and facilitated documentation of the phlebotomy quality.

  17. Determination of Selected Polycyclic Aromatic Compounds in Particulate Matter Samples with Low Mass Loading: An Approach to Test Method Accuracy

    Directory of Open Access Journals (Sweden)

    Susana García-Alonso

    2017-01-01

    A miniaturized analytical procedure to determine selected polycyclic aromatic compounds (PACs) in low mass loadings (<10 mg) of particulate matter (PM) is evaluated. The proposed method is based on a simple sonication/agitation extraction using small amounts of solvent. The small sample sizes of particulate matter often limit the quantification of analytes, which makes it necessary to adapt analytical procedures and evaluate their performance. The trueness and precision of the proposed method were tested using ambient air samples. Analytical results from the proposed method were compared with those of pressurized liquid and microwave extractions. Selected PACs (polycyclic aromatic hydrocarbons (PAHs) and nitro polycyclic aromatic hydrocarbons (NPAHs)) were determined by liquid chromatography with fluorescence detection (HPLC/FD). Taking results from pressurized liquid extraction as reference values, recovery rates of the sonication/agitation method were over 80% for the most abundant PAHs. Recovery rates of selected NPAHs were lower; enhanced rates were obtained when methanol was used as a modifier. Intermediate precision was estimated by comparing data from two mathematical approaches, normalized difference data and pooled relative deviations, and was in the range of 10-20%. The effectiveness of the proposed method was evaluated in PM aerosol samples collected with very low mass loadings (<0.2 mg) during characterization studies of turbofan engine exhausts.

  18. Investigation of optimal photoionization schemes for Sm by multi-step resonance ionization

    International Nuclear Information System (INIS)

    Cha, H.; Song, K.; Lee, J.

    1997-01-01

    Excited states of Sm atoms are investigated using multi-color resonance-enhanced multiphoton ionization spectroscopy. Among the ionization signals, the level observed at 577.86 nm is regarded as the most efficient excited state if a 1-color, 3-photon scheme is applied, whereas the level observed at 587.42 nm is the most efficient if a 2-color scheme is used. For the 2-color scheme, a level located at 573.50 nm from this first excited state is one of the best second excited states for the optimal photoionization scheme. Based on this ionization scheme, various concentrations of standard samarium solutions are determined. The minimum amount of sample that can be detected with the 2-color scheme is 200 fg; the detection sensitivity is limited mainly by contamination of the graphite atomizer. copyright 1997 American Institute of Physics

  19. The use of Thompson sampling to increase estimation precision

    NARCIS (Netherlands)

    Kaptein, M.C.

    2015-01-01

    In this article, we consider a sequential sampling scheme for efficient estimation of the difference between the means of two independent treatments when the population variances are unequal across groups. The sampling scheme proposed is based on a solution to bandit problems called Thompson sampling.
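
    A schematic of how posterior (Thompson-style) sampling can drive the allocation: draw each group's variance from its posterior and give the next observation to the group whose sampled contribution to the variance of the estimated mean difference would shrink the most. This is an illustration of the idea under stated assumptions (normal model, reference prior, greedy allocation rule), not the article's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(7)

true_mu, true_sigma = (0.0, 1.0), (1.0, 3.0)      # second group is much noisier
data = [list(rng.normal(true_mu[g], true_sigma[g], size=5)) for g in (0, 1)]

def draw_posterior_variance(x):
    """Posterior draw of sigma^2 under a normal model with the reference prior:
    sigma^2 | x  ~  (n - 1) s^2 / chi^2_{n-1}."""
    x = np.asarray(x)
    n = len(x)
    return (n - 1) * x.var(ddof=1) / rng.chisquare(n - 1)

for _ in range(300):                               # sequential allocation
    sig2 = [draw_posterior_variance(data[g]) for g in (0, 1)]
    n = [len(data[g]) for g in (0, 1)]
    # Sampled reduction in Var(mean0 - mean1) if the next unit goes to group g:
    # sigma_g^2 / n_g - sigma_g^2 / (n_g + 1) = sigma_g^2 / (n_g (n_g + 1)).
    gain = [sig2[g] / (n[g] * (n[g] + 1)) for g in (0, 1)]
    g = int(np.argmax(gain))
    data[g].append(rng.normal(true_mu[g], true_sigma[g]))

# The allocation drifts toward the Neyman ratio n0 : n1 ~ sigma0 : sigma1.
print("final group sizes:", len(data[0]), "vs", len(data[1]))
```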

  20. Time-and-ID-Based Proxy Reencryption Scheme

    OpenAIRE

    Mtonga, Kambombo; Paul, Anand; Rho, Seungmin

    2014-01-01

    A time- and ID-based proxy reencryption scheme is proposed in this paper, in which a type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications the time at which the data were sampled or collected is critical. In such applications, for example healthcare and criminal investigations, the delegatee may be interested in only some of the messages with some types sampled wi...

  1. Effect of dissolved organic matter on pre-equilibrium passive sampling: A predictive QSAR modeling study.

    Science.gov (United States)

    Lin, Wei; Jiang, Ruifen; Shen, Yong; Xiong, Yaxin; Hu, Sizi; Xu, Jianqiao; Ouyang, Gangfeng

    2018-04-13

    Pre-equilibrium passive sampling is a simple and promising technique for studying sampling kinetics, which is crucial to determining the distribution, transfer and fate of hydrophobic organic compounds (HOCs) in environmental water and organisms. Environmental water samples contain complex matrices that complicate the traditional calibration process for obtaining accurate rate constants. This study proposes a QSAR model to predict the sampling rate constants of HOCs (polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and pesticides) in aqueous systems containing complex matrices. A homemade flow-through system was established to simulate an actual aqueous environment containing dissolved organic matter (DOM), i.e. humic acid (HA) and (2-hydroxypropyl)-β-cyclodextrin (β-HPCD), and to obtain the experimental rate constants. A quantitative structure-activity relationship (QSAR) model built with Genetic Algorithm-Multiple Linear Regression (GA-MLR) was then used to correlate the experimental rate constants with the system state, described by physicochemical parameters of the HOCs and DOM that were calculated and selected as descriptors using Density Functional Theory (DFT) and Chem 3D. The experimental results showed that the rate constants increased significantly with the concentration of DOM; enhancement factors of 70-fold and 34-fold were observed for the HOCs in HA and β-HPCD, respectively. The established QSAR model was validated as credible (R²adj = 0.862) and predictive (Q² = 0.835) in estimating the rate constants of HOCs for complex aqueous sampling, and a probable mechanism was developed by comparison with the reported theoretical study. The present study established a QSAR model of passive sampling rate constants and calibrated the effect of DOM on the sampling kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
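
    A toy sketch of the GA-MLR idea referred to above: a genetic algorithm evolves binary masks over candidate descriptors, and each mask is scored by an ordinary least-squares fit (here with adjusted R² as a stand-in fitness; the study's descriptor set, fitness function and validation statistics are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(42)

def adjusted_r2(X, y, mask):
    """Ordinary least-squares fit on the selected descriptors; adjusted R^2 as fitness."""
    n, p = len(y), int(mask.sum())
    if p == 0 or p >= n - 1:
        return -np.inf
    A = np.column_stack([np.ones(n), X[:, mask]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def ga_mlr(X, y, pop_size=40, generations=60, p_mut=0.05):
    """Toy GA-MLR: evolve binary descriptor masks, score each by adjusted R^2."""
    n_desc = X.shape[1]
    pop = rng.random((pop_size, n_desc)) < 0.2             # sparse initial masks
    for _ in range(generations):
        scores = np.array([adjusted_r2(X, y, ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_desc)
            child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
            child ^= rng.random(n_desc) < p_mut             # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda ind: adjusted_r2(X, y, ind))
    return best, adjusted_r2(X, y, best)

# Synthetic demonstration: 60 "samples", 25 candidate descriptors, 3 of them relevant.
X = rng.normal(size=(60, 25))
y = 2.0 * X[:, 3] - 1.5 * X[:, 10] + 0.8 * X[:, 17] + rng.normal(scale=0.3, size=60)
mask, score = ga_mlr(X, y)
print("selected descriptors:", np.nonzero(mask)[0], f"adjusted R^2 = {score:.3f}")
```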

  2. An Automated Scheme for the Large-Scale Survey of Herbig-Haro Objects

    Science.gov (United States)

    Deng, Licai; Yang, Ji; Zheng, Zhongyuan; Jiang, Zhaoji

    2001-04-01

    Owing to their spectral properties, Herbig-Haro (HH) objects can be discovered using photometric methods through a combination of filters, sampling the characteristic spectral lines and the nearby continuum. The data are commonly processed through direct visual inspection of the images. To make data reduction more efficient and the results more uniform and complete, an automated searching scheme for HH objects is developed to manipulate the images using IRAF. This approach helps to extract images with only intrinsic HH emissions. By using this scheme, the pointlike stellar sources and extended nebulous sources with continuum emission can be eliminated from the original images. The objects with only characteristic HH emission become prominent and can be easily picked up. In this paper our scheme is illustrated by a sample field and has been applied to our surveys for HH objects.

  3. The destruction of organic matter

    CERN Document Server

    Gorsuch, T T

    1970-01-01

    International Series of Monographs in Analytical Chemistry, Volume 39: The Destruction of Organic Matter focuses on the identification of trace elements in organic compounds. The monograph first offers information on the processes involved in the determination of trace elements in organic matter, as well as on methods not involving complete destruction of these elements. The text surveys the sources of error in the processes responsible for pinpointing elements in organic compounds, including sampling, disruption of the samples, manipulation, and measurements. The book

  4. Matter and symbols of the artificial

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, L.M.

    1998-08-01

    The study of complex systems should be based on a systems-theoretic framework which requires both self-organizing and symbolic dimensions. An inclusive framework based on the notion of semiotics is advanced to build models capable of representing, as well as evolving in their environments, with implications for Artificial Life. Such undertaking is pursued by discussing the ways in which symbol and matter are irreducibly intertwined in evolutionary systems. The problem is thus phrased in terms of the semiotic categories of syntax, semantics, and pragmatics. With this semiotic view of matter and symbols the requirements of semiotic closure are expressed in models with both self-organizing and symbolic characteristics. Situated action and recent developments in the evolution of cellular automata rules to solve non-trivial tasks are discussed in this context. Finally, indirect encoding schemes for genetic algorithms are developed which follow the semiotic framework here proposed.

  5. Soil separator and sampler and method of sampling

    Science.gov (United States)

    O'Brien, Barry H [Idaho Falls, ID; Ritter, Paul D [Idaho Falls, ID

    2010-02-16

    A soil sampler includes a fluidized bed for receiving a soil sample. The fluidized bed may be in communication with a vacuum for drawing air through the fluidized bed and suspending particulate matter of the soil sample in the air. In a method of sampling, the air may be drawn across a filter, separating the particulate matter. Optionally, a baffle or a cyclone may be included within the fluidized bed for disentrainment, or dedusting, so only the finest particulate matter, including asbestos, will be trapped on the filter. The filter may be removable, and may be tested to determine the content of asbestos and other hazardous particulate matter in the soil sample.

  6. High speed network sampling

    OpenAIRE

    Rindalsholt, Ole Arild

    2005-01-01

    Master's thesis in network and system administration. Classical sampling methods play an important role in the current practice of Internet measurement. With today’s high-speed networks, routers cannot manage to generate complete Netflow data for every packet and have to perform restricted sampling. This thesis summarizes some of the most important sampling schemes and their applications before diving into an analysis of the effect of sampling on Netflow records.

  7. An energy-efficient adaptive sampling scheme for wireless sensor networks

    NARCIS (Netherlands)

    Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.

    2013-01-01

    Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy that reduces the number of sampling nodes and/or sampling frequencies while

  8. The effect of a dynamic background albedo scheme on Sahel/Sahara precipitation during the mid-Holocene

    Directory of Open Access Journals (Sweden)

    F. S. E. Vamborg

    2011-02-01

    We have implemented a new albedo scheme, which takes the dynamic behaviour of the surface below the canopy into account, into the land-surface scheme of the MPI-ESM. The standard (static) scheme calculates the seasonal canopy albedo as a function of leaf area index, whereas the background albedo is a gridbox constant derived from satellite measurements. The new (dynamic) scheme additionally models the background albedo as a slowly changing function of organic matter in the ground and of litter and standing dead biomass covering the ground. We use the two schemes to investigate the interactions between vegetation, albedo and precipitation in the Sahel/Sahara for two time slices: pre-industrial and mid-Holocene. The dynamic scheme represents the seasonal cycle of albedo and the correspondence between annual mean albedo and vegetation cover more consistently than the static scheme, and thus gives a better estimate of the albedo change between the two periods. With the introduction of the dynamic scheme, precipitation is increased by 30 mm yr−1 in the pre-industrial simulation and by about 80 mm yr−1 in the mid-Holocene simulation. The present-day dry bias in the Sahel of standard ECHAM5 is thus reduced and the sensitivity of precipitation to mid-Holocene external forcing is increased by around one third. The locations of mid-Holocene lakes, as estimated from reconstructions, lie south of the modelled desert border in both mid-Holocene simulations. The magnitude of simulated rainfall in this area is too low to fully sustain lakes; however, it is captured better with the dynamic scheme. The dynamic scheme leads to increased vegetation variability in the remaining desert region, indicating a higher frequency of green spells, and thus reaches better agreement with the vegetation distribution derived from pollen records.
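
    The structure of such a two-component albedo scheme can be sketched as follows; the functional forms, time constant and albedo values are illustrative assumptions, not the MPI-ESM/JSBACH implementation:

```python
import numpy as np

def canopy_fraction(lai, k=0.5):
    """Beer-law style fraction of the ground shaded by the canopy."""
    return 1.0 - np.exp(-k * lai)

def step_background_albedo(a_bg, organic_matter, litter, dt_years,
                           tau_years=10.0,
                           a_soil_bright=0.30, a_soil_dark=0.10, a_litter=0.18):
    """Relax the background (below-canopy) albedo toward a target set by
    soil organic matter and litter cover; illustrative formulation only."""
    a_soil = a_soil_bright - (a_soil_bright - a_soil_dark) * np.clip(organic_matter, 0, 1)
    target = (1.0 - litter) * a_soil + litter * a_litter
    return a_bg + (target - a_bg) * (dt_years / tau_years)

def surface_albedo(lai, a_bg, a_canopy=0.15):
    """Grid-box albedo: canopy albedo where vegetated, background albedo elsewhere."""
    f = canopy_fraction(lai)
    return f * a_canopy + (1.0 - f) * a_bg
```

    The contrast with a static scheme is that a_bg here drifts over years as organic matter and litter build up or decay, instead of remaining a fixed satellite-derived constant.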

  9. Optimum sampling scheme for characterization of mine tailings

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    The paper describes a novel method for sampling geochemical data to characterize mine tailings. The authors model the spatial relationships between a multi-element signature and, as covariates, abundance estimates of secondary iron-bearing minerals...

  10. Particulate matter mass concentrations produced from pavement surface abrasion

    Directory of Open Access Journals (Sweden)

    Fullova Dasa

    2017-01-01

    According to the latest findings, particulate matter is among the most significant pollutants in Europe, together with ground-level ozone (O3) and nitrogen dioxide (NO2). Road traffic is one of the main sources of particulate matter, and traffic volume has an adverse impact on the longevity of pavements as well as on the environment. Vehicle motion causes mechanical wearing of the asphalt pavement surface (the wearing course) by vehicle tyres. The paper deals with the abrasion of bituminous wearing courses of pavements, comparing asphalt mixtures in terms of mechanically separated particulate matter. The samples of asphalt mixtures were rutted in a wheel-tracking machine and the particulate matter measurements were performed in laboratory conditions. These experimental laboratory measurements make it possible to sample particulates without contamination from exhaust emissions, abraded particles from vehicles, resuspension of road dust and climate effects. The paper offers partial results of measurements on six trial samples of asphalt mixtures with different compositions, presenting particulate matter morphology and comparing the rutted asphalt samples in terms of PM mass concentrations and chemical composition.

  11. Understanding the types of fraud in claims to South African medical schemes.

    Science.gov (United States)

    Legotlo, T G; Mutezo, A

    2018-03-28

    Medical schemes play a significant role in funding private healthcare in South Africa (SA). However, the sector is negatively affected by the high rate of fraudulent claims. To identify the types of fraudulent activities committed in SA medical scheme claims. A cross-sectional qualitative study was conducted, adopting a case study strategy. A sample of 15 employees was purposively selected from a single medical scheme administration company in SA. Semi-structured interviews were conducted to collect data from study participants. A thematic analysis of the data was done using ATLAS.ti software (ATLAS.ti Scientific Software Development, Germany). The study population comprised the 17 companies that administer medical schemes in SA. Data were collected from 15 study participants, who were selected from the medical scheme administrator chosen as a case study. The study found that medical schemes were defrauded in numerous ways. The perpetrators of this type of fraud include healthcare service providers, medical scheme members, employees, brokers and syndicates. Medical schemes are mostly defrauded by the submission of false claims by service providers and syndicates. Fraud committed by medical scheme members encompasses the sharing of medical scheme benefits with non-members (card farming) and non-disclosure of pre-existing conditions at the application stage. The study concluded that perpetrators of fraud have found several ways of defrauding SA medical schemes regarding claims. Understanding and identifying the types of fraud events facing medical schemes is the initial step towards establishing methods to mitigate this risk. Future studies should examine strategies to manage fraudulent medical scheme claims.

  12. The depth distribution functions of the natural abundances of carbon isotopes in Alfisols thoroughly sampled by thin-layer sampling, and their relation to the dynamics of organic matter in these soils

    International Nuclear Information System (INIS)

    Becker-Heidmann, P.

    1989-01-01

    The aim of this study was to gain fundamental insights into the relationship between the depth distributions of the natural abundances of the 13C and 14C isotopes and the dynamics of the organic matter in Alfisols. For this purpose, six Alfisols were investigated: four forest soils from Northern Germany, two of them developed in loess and two in glacial loam, one West German loess soil used for fruit growing, and one agricultural granite-gneiss soil from the semiarid part of India. The soil was sampled as successive horizontal layers of 2 cm depth from an area of 0.5 to 1 m2 in size, starting from the organic horizon down to the C horizon or the lower part of the Bt. This kind of complete thin-layer-wise sampling was applied here for the first time. The carbon content and the natural abundances of the 13C and 14C isotopes of each sample were determined. The δ13C value was measured by mass spectrometry; a vacuum preparation line with an electronically controlled cooling unit was constructed for this purpose. For the determination of the 14C content, the sample carbon was converted into benzene and its activity was measured by liquid scintillation spectrometry. From the combination of the depth distribution functions of the 14C activity and the δ13C value, and with the aid of additional analyses such as C/N ratio and particle size distribution, a conclusive interpretation of the dynamics of the organic matter in the investigated Alfisols is given. (orig./BBR)

  13. Adaptive protection scheme

    Directory of Open Access Journals (Sweden)

    R. Sitharthan

    2016-09-01

    This paper models an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. The proposed adaptive protection scheme provides suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid, namely grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to variations that occur in the system. The proposed scheme also employs auto-reclosers, through which it recovers faster from faults and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through time-domain simulations carried out in the PSCAD/EMTDC software environment.

  14. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
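
    A minimal sketch of the kind of scheme discussed above: a flat-histogram (Wang-Landau style) random walk along a discretized coordinate whose updating magnitude is halved on flat histograms and eventually capped by an inverse-time schedule. The toy potential, flatness criterion and 1/t constant are illustrative choices, not the paper's optimized schedules.

```python
import numpy as np

rng = np.random.default_rng(3)

n_bins = 50
pmf = 0.02 * (np.arange(n_bins) - n_bins / 2) ** 2   # toy potential of mean force
bias = np.zeros(n_bins)                              # adaptive bias (log scale)
hist = np.zeros(n_bins)
x = n_bins // 2                                      # current bin of the walker

f, t = 1.0, 0                                        # updating magnitude and "time"
for _ in range(200_000):
    t += 1
    # Metropolis move on the biased surface pmf + bias (targets a flat distribution).
    xp = (x + rng.choice([-1, 1])) % n_bins
    if np.log(rng.random()) < (pmf[x] + bias[x]) - (pmf[xp] + bias[xp]):
        x = xp
    bias[x] += f                                     # histogram-based update
    hist[x] += 1
    # Stage control: halve f on a flat histogram, but never let it exceed
    # the asymptotic inverse-time magnitude n_bins / t.
    if hist.min() > 0.8 * hist.mean():
        f, hist = f / 2.0, np.zeros(n_bins)
    f = min(f, n_bins / t)

pmf_estimate = -(bias - bias.mean())   # converged bias offsets the PMF up to a constant
```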

  15. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping combined with a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes, the problems of category 2 using the first (TVD3) and second (UNO3-ACM) schemes, and the problem of category 5 using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.

  16. Detecting Topological Defect Dark Matter Using Coherent Laser Ranging System

    Science.gov (United States)

    Yang, Wanpeng; Leng, Jianxiao; Zhang, Shuangyou; Zhao, Jianye

    2016-01-01

    In the last few decades, optical frequency combs with high intensity, broad optical bandwidth, and directly traceable discrete wavelengths have triggered rapid developments in distance metrology. To date, however, optical frequency combs have mainly been used to determine the absolute distance to an object (as in satellite missions). We propose a scheme for detecting topological-defect dark matter through non-gravitational signatures, using a coherent laser ranging system composed of dual combs and an optical clock. The dark matter field comprising a defect may interact with Standard Model particles, including quarks and photons, altering their masses. A topological defect may thus act as a dielectric material with a distinctive frequency-dependent index of refraction, which would delay a periodic extraterrestrial or terrestrial light signal. When a topological defect passes through the Earth, the optical length of a long-distance vacuum path is altered, and this change can be detected by the coherent laser ranging system. Compared to continuous-wave (cw) laser interferometry methods, the dual-comb interferometry in our scheme excludes systematic misjudgement by measuring the absolute optical path length. PMID:27389642

  17. Energy Aware Cluster Based Routing Scheme For Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Roy Sohini

    2015-09-01

    Wireless Sensor Networks (WSNs) have emerged as an important supplement to modern wireless communication systems due to their wide range of applications. Recent research has addressed the various challenges of sensor networks more gracefully; however, energy efficiency remains a matter of concern. Meeting the many security needs, delivering data in a timely manner and acting on them quickly, selecting efficient routes and supporting multi-path routing can only be achieved at the cost of energy, and hierarchical routing is particularly useful in this regard. The proposed algorithm, Energy Aware Cluster Based Routing Scheme (EACBRS), aims at conserving energy through hierarchical routing by calculating the optimum number of cluster heads for the network, selecting energy-efficient routes to the sink and offering congestion control. Simulation results show that EACBRS performs better than existing hierarchical routing algorithms such as the Distributed Energy-Efficient Clustering (DEEC) algorithm for heterogeneous wireless sensor networks and the Energy Efficient Heterogeneous Clustered scheme for Wireless Sensor Networks (EEHC).

  18. Effects of organic matter and ageing on the bioaccessibility of arsenic

    International Nuclear Information System (INIS)

    Meunier, Louise; Koch, Iris; Reimer, Kenneth J.

    2011-01-01

    Arsenic-contaminated soils may pose a risk to human health. Redevelopment of contaminated sites may involve amending soils with organic matter, which potentially increases arsenic bioaccessibility. The effects of ageing on arsenic-contaminated soils mixed with peat moss were evaluated in a simulated ageing period representing two years, during which arsenic bioaccessibility was periodically measured. Significant increases (p = 0.032) in bioaccessibility were observed for 15 of 31 samples tested, particularly in comparison with samples originally containing >30% bioaccessible arsenic in soils naturally rich in organic matter (>25%). Samples where percent arsenic bioaccessibility was unchanged with age were generally poor in organic matter (average 7.7%) and contained both arsenopyrite and pentavalent arsenic forms that remained unaffected by the organic matter amendments. Results suggest that the addition of organic matter may lead to increases in arsenic bioaccessibility, which warrants caution in the evaluation of risks associated with redevelopment of arsenic-contaminated land. - Highlights: → Adding organic matter to contaminated soils may increase arsenic bioaccessibility. → Ageing soils with >25% organic matter can lead to increased arsenic bioaccessibility. → No changes in arsenic bioaccessibility for soils poor in organic matter (mean 7.7%). → No changes in arsenic bioaccessibility for samples containing arsenopyrite. → Organic matter in soil may favour oxidation of trivalent arsenic to pentavalent form. - Adding organic carbon may increase arsenic bioaccessibility, especially in samples originally containing >30% bioaccessible arsenic in organic carbon-rich soils (>25%).

  19. Discretization of convection-diffusion equations with finite-difference scheme derived from simplified analytical solutions

    International Nuclear Information System (INIS)

    Kriventsev, Vladimir

    2000-09-01

    Most thermal-hydraulic processes in nuclear engineering can be described by general convection-diffusion equations, which can often be simulated numerically with the finite-difference method (FDM). An effective scheme for finite-difference discretization of such equations is presented in this report. The derivation of this scheme is based on analytical solutions of a simplified one-dimensional equation written for every control volume of the finite-difference mesh. These analytical solutions are constructed using linearized representations of both the diffusion coefficient and the source term. As a result, the Efficient Finite-Differencing (EFD) scheme makes it possible to significantly improve the accuracy of the numerical method even on meshes with fewer grid nodes, which in turn speeds up the numerical simulation. EFD has been carefully verified on a series of sample problems for which either analytical or very precise numerical solutions can be found, and has been compared with other popular FDM schemes, including novel, accurate (as well as sophisticated) methods. The methods compared include the well-known central difference scheme, the upwind scheme, and the exponential differencing and hybrid schemes of Spalding, as well as more recently developed finite-difference schemes such as the quadratic upstream (QUICK) scheme of Leonard, the locally analytic differencing (LOAD) scheme of Wong and Raithby, the flux-spline scheme proposed by Varejago and Patankar, and the latest LENS discretization of Sakai. Detailed results of this comparison are given in this report. These tests have shown the high efficiency of the EFD scheme: for most of the sample problems considered, EFD produced numerical errors orders of magnitude lower than those of the other discretization methods, or, in other words, predicted the numerical solution to the same accuracy with far fewer grid nodes. In this report, the detailed
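
    The report's EFD scheme itself is not reproduced here, but the kind of comparison it describes can be sketched on a steady one-dimensional convection-diffusion problem using the classic central, upwind and exponential ("exact") differencing coefficients (Patankar-style forms; grid size and Peclet number are arbitrary choices):

```python
import numpy as np

def solve_convection_diffusion(n, peclet, scheme):
    """Steady 1-D convection-diffusion with phi(0) = 0, phi(1) = 1 on a uniform grid,
    discretized with classic finite-volume coefficients."""
    dx = 1.0 / (n - 1)
    F = peclet                 # convective strength rho*u (with L = Gamma = 1)
    D = 1.0 / dx               # diffusive conductance Gamma / dx
    P = F * dx                 # cell Peclet number

    if scheme == "central":
        aE, aW = D - F / 2.0, D + F / 2.0
    elif scheme == "upwind":
        aE, aW = D + max(-F, 0.0), D + max(F, 0.0)
    else:                      # exponential ("exact") differencing
        aE = F / (np.exp(P) - 1.0)
        aW = aE * np.exp(P)
    aP = aE + aW

    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[-1] = 1.0                              # Dirichlet boundary values
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -aW, aP, -aE
    return np.linalg.solve(A, b)

n, peclet = 11, 40.0
x = np.linspace(0.0, 1.0, n)
exact = (np.exp(peclet * x) - 1.0) / (np.exp(peclet) - 1.0)
for s in ("central", "upwind", "exponential"):
    err = np.abs(solve_convection_diffusion(n, peclet, s) - exact).max()
    print(f"{s:12s} max nodal error = {err:.2e}")
```

    At a cell Peclet number of 4 the central scheme typically produces wiggles, the upwind scheme smears the profile, and the exponential scheme reproduces the analytical solution to round-off; generalizing that analytically derived behaviour to variable coefficients and source terms is what a scheme such as EFD aims for.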

  20. PN Sequence Preestimator Scheme for DS-SS Signal Acquisition Using Block Sequence Estimation

    Directory of Open Access Journals (Sweden)

    Sang Kyu Park

    2005-03-01

    An m-sequence (PN sequence) preestimator scheme for direct-sequence spread-spectrum (DS-SS) signal acquisition using block sequence estimation (BSE) is proposed and analyzed. The proposed scheme consists of an estimator and a verifier, which work according to the PN sequence chip clock; it provides not only enhanced chip estimates, obtained with a threshold decision logic and one-chip error correction among the first m received chips, but also a reliability check of the estimates with additional decision logic. The probabilities of the estimator and verifier operations are calculated, and from these results the detection, false-alarm, and missing probabilities of the proposed scheme are derived. In addition, the average acquisition time is calculated using a signal flow graph. The proposed scheme can be used as a preestimator and easily implemented by changing the internal signal path of a commonly used digital matched filter (DMF) correlator, or of any other correlator that has sufficient sampling-data memory for the sampled PN sequence. The numerical results show rapid acquisition performance at relatively good CNR.
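
    The sequence-estimation principle behind such a preestimator can be sketched as follows: hard decisions on the first m received chips seed a local linear-feedback shift register, which regenerates the remainder of the m-sequence for verification by correlation. The generator polynomial, noise level and verification threshold below are illustrative, and the scheme's threshold decision logic and one-chip error correction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)
M = 7                                     # LFSR length; period N = 2**M - 1 = 127

def lfsr_extend(seed_bits, length):
    """Extend M consecutive chips of the m-sequence generated by the primitive
    polynomial x^7 + x + 1 (recurrence a[n] = a[n-6] ^ a[n-7])."""
    a = list(seed_bits)
    while len(a) < length:
        a.append(a[-6] ^ a[-7])
    return np.array(a[:length], dtype=int)

# Transmitter: BPSK chips of the PN sequence (+1/-1) in additive noise.
N = 2 ** M - 1
pn = lfsr_extend([1, 0, 0, 0, 0, 0, 0], N)
tx = 2 * pn - 1
rx = tx + rng.normal(scale=0.7, size=N)   # chip-rate samples at moderate CNR

# Preestimator: hard decisions on the first M chips seed a local generator ...
est_seed = (rx[:M] > 0).astype(int)
local = 2 * lfsr_extend(est_seed, N) - 1

# ... and a verifier correlates the regenerated sequence with the later chips.
corr = np.dot(local[M:], np.sign(rx[M:])) / (N - M)
print("verified" if corr > 0.5 else "re-estimate", f"(correlation {corr:.2f})")
```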

  1. The mineral matter characteristics of some Chinese coal

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X. [China University of Mining and Technology (China). Dept. of Coal Preparation and Utilization

    1994-12-01

    The mineral matter has been separated from 18 coal samples by low-temperature ashing and analyzed by X-ray diffraction. Based on the results of chemical analysis of the coal ash, and with reference to the standard compositions of minerals, the content of the various mineral phases in the coal ash has been determined. Furthermore, this paper summarizes the mineral matter characteristics of the coal samples and discusses the relationship between the composition of mineral matter in coal and its depositional environment.

  2. Effect of sample digestion, air filter contamination, and post-adsorption on the analysis of trace elements in air particulate matter

    International Nuclear Information System (INIS)

    Yang, Xiao Jin; Wan, Pingyu; Foley, Roy

    2012-01-01

    Inductively coupled plasma atomic emission spectrometry and inductively coupled plasma MS are the major analytical tools for trace elements in environmental matrices; however, the underestimation of certain trace elements in the analysis of air particulate matter by these two techniques has long been observed. This has been attributed to incomplete sample digestion. Here, we demonstrate that the combined effects of sample digestion, air filter impurities, and post-adsorption of the analytes contribute to the interference of the analysis. Particular attention should be paid to post-adsorption of analytes onto air filters after acid digestion. (Copyright copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. Effect of sample digestion, air filter contamination, and post-adsorption on the analysis of trace elements in air particulate matter

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiao Jin [Department of Environment and Climate Change, Environmental Forensic and Analytical Science Section, New South Wales (Australia); Department of Applied Chemistry, College of Chemical Engineering, Beijing University of Chemical Technology, Beijing (China); Wan, Pingyu [Department of Applied Chemistry, College of Chemical Engineering, Beijing University of Chemical Technology, Beijing (China); Foley, Roy [Department of Environment and Climate Change, Environmental Forensic and Analytical Science Section, New South Wales (Australia)

    2012-11-15

    Inductively coupled plasma atomic emission spectrometry and inductively coupled plasma MS are the major analytical tools for trace elements in environmental matrices; however, the underestimation of certain trace elements in the analysis of air particulate matter by these two techniques has long been observed. This has been attributed to incomplete sample digestion. Here, we demonstrate that the combined effects of sample digestion, air filter impurities, and post-adsorption of the analytes contribute to the interference of the analysis. Particular attention should be paid to post-adsorption of analytes onto air filters after acid digestion. (Copyright copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  4. Evaluation of sampling inhalable PM10 particulate matter (≤ 10 μm) using co-located high volume samplers

    International Nuclear Information System (INIS)

    Rajoy, R R S; Dias, J W C; Rego, E C P; Netto, A D Pereira

    2015-01-01

    This paper presents the results of determining the concentrations of atmospheric particulate matter ≤ 10 μm (PM10) collected simultaneously by six PM10 high-volume samplers from two different manufacturers installed at the same location. Fifteen 24 h samples were obtained with each sampler at a selected urban area of Rio de Janeiro city. The concentration of PM10 ranged between 10.73 and 54.04 μg m−3. The samplers were considered comparable to each other, as the adopted methodology presented good repeatability.

  5. The phase-space structure of nearby dark matter as constrained by the SDSS

    International Nuclear Information System (INIS)

    Leclercq, Florent; Percival, Will; Jasche, Jens; Lavaux, Guilhem; Wandelt, Benjamin

    2017-01-01

    Previous studies using numerical simulations have demonstrated that the shape of the cosmic web can be described by studying the Lagrangian displacement field. We extend these analyses, showing that it is now possible to perform a Lagrangian description of cosmic structure in the nearby Universe based on large-scale structure observations. Building upon recent Bayesian large-scale inference of initial conditions, we present a cosmographic analysis of the dark matter distribution and its evolution, referred to as the dark matter phase-space sheet, in the nearby universe as probed by the Sloan Digital Sky Survey main galaxy sample. We consider its stretchings and foldings using a tetrahedral tessellation of the Lagrangian lattice. The method provides extremely accurate estimates of nearby density and velocity fields, even in regions of low galaxy density. It also measures the number of matter streams, and the deformation and parity reversals of fluid elements, which were previously thought inaccessible using observations. We illustrate the approach by showing the phase-space structure of known objects of the nearby Universe such as the Sloan Great Wall, the Coma cluster and the Boötes void. We dissect cosmic structures into four distinct components (voids, sheets, filaments, and clusters), using the Lagrangian classifiers DIVA, ORIGAMI, and a new scheme which we introduce and call LICH. Because these classifiers use information other than the sheer local density, identified structures explicitly carry physical information about their formation history. Accessing the phase-space structure of dark matter in galaxy surveys opens the way for new confrontations of observational data and theoretical models. We have made our data products publicly available.

  6. The phase-space structure of nearby dark matter as constrained by the SDSS

    Energy Technology Data Exchange (ETDEWEB)

    Leclercq, Florent; Percival, Will [Institute of Cosmology and Gravitation (ICG), University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX (United Kingdom); Jasche, Jens [Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching (Germany); Lavaux, Guilhem; Wandelt, Benjamin, E-mail: florent.leclercq@polytechnique.org, E-mail: lavaux@iap.fr, E-mail: jasche@iap.fr, E-mail: wandelt@iap.fr, E-mail: will.percival@port.ac.uk [Institut d' Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris (France)

    2017-06-01

    Previous studies using numerical simulations have demonstrated that the shape of the cosmic web can be described by studying the Lagrangian displacement field. We extend these analyses, showing that it is now possible to perform a Lagrangian description of cosmic structure in the nearby Universe based on large-scale structure observations. Building upon recent Bayesian large-scale inference of initial conditions, we present a cosmographic analysis of the dark matter distribution and its evolution, referred to as the dark matter phase-space sheet, in the nearby universe as probed by the Sloan Digital Sky Survey main galaxy sample. We consider its stretchings and foldings using a tetrahedral tessellation of the Lagrangian lattice. The method provides extremely accurate estimates of nearby density and velocity fields, even in regions of low galaxy density. It also measures the number of matter streams, and the deformation and parity reversals of fluid elements, which were previously thought inaccessible using observations. We illustrate the approach by showing the phase-space structure of known objects of the nearby Universe such as the Sloan Great Wall, the Coma cluster and the Boötes void. We dissect cosmic structures into four distinct components (voids, sheets, filaments, and clusters), using the Lagrangian classifiers DIVA, ORIGAMI, and a new scheme which we introduce and call LICH. Because these classifiers use information other than the sheer local density, identified structures explicitly carry physical information about their formation history. Accessing the phase-space structure of dark matter in galaxy surveys opens the way for new confrontations of observational data and theoretical models. We have made our data products publicly available.

  7. Study of the decay scheme of 159Tm

    International Nuclear Information System (INIS)

    Aguer, Pierre; Bastin, Genevieve; Chin Fan Liang; Libert, Jean; Paris, Pierre; Peghaire, Alain

    1975-01-01

    The energy levels of 159Er have been investigated from the decay of 159Tm (T1/2 = 9 min). Samples were obtained by (p,xn) reactions and on-line separation through the Isocele facility. A level scheme with 24 levels between 0 and 1.3 MeV is proposed. (in French)

  8. Coalescent: an open-science framework for importance sampling in coalescent theory.

    Science.gov (United States)

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only
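
    The baseline that such importance-sampling proposals are measured against can be sketched in a few lines: draw genealogies from the coalescent prior (i.e. use the prior as the proposal, so all weights equal one) and average the Poisson probability of the observed number of segregating sites. This illustrates the standard neutral, infinite-sites setting named above, not the framework's more efficient proposals.

```python
import numpy as np

rng = np.random.default_rng(5)

def total_branch_length(n):
    """Total branch length of a standard coalescent genealogy for n lineages
    (time in units of 2N generations): sum over k of k * T_k, T_k ~ Exp(k(k-1)/2)."""
    k = np.arange(n, 1, -1)
    t_k = rng.exponential(scale=2.0 / (k * (k - 1)))
    return float(np.sum(k * t_k))

def likelihood_S(s_obs, n, theta, n_draws=50_000):
    """Monte Carlo estimate of P(S = s_obs | theta) under the infinite-sites model:
    conditional on a genealogy of total length L, S ~ Poisson(theta * L / 2).
    Drawing genealogies from the coalescent prior is importance sampling with the
    prior as proposal (all weights equal one)."""
    lam = 0.5 * theta * np.array([total_branch_length(n) for _ in range(n_draws)])
    log_pmf = s_obs * np.log(lam) - lam - np.sum(np.log(np.arange(1, s_obs + 1)))
    return float(np.exp(log_pmf).mean())

# Likelihood curve for a sample of n = 10 sequences showing 12 segregating sites.
for theta in (2.0, 4.0, 6.0, 8.0):
    print(f"theta = {theta}:  L(theta) ~ {likelihood_S(12, 10, theta):.4g}")
```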

  9. Coalescent: an open-science framework for importance sampling in coalescent theory

    Directory of Open Access Journals (Sweden)

    Susanta Tewari

    2015-08-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency

  10. Global Particulate Matter Source Apportionment

    Science.gov (United States)

    Lamancusa, C.; Wagstrom, K.

    2017-12-01

    As our global society develops and grows, it is necessary to better understand the impacts and nuances of atmospheric chemistry, in particular those associated with atmospheric particulate matter. We have developed a source apportionment scheme for the GEOS-Chem global atmospheric chemical transport model. While these approaches have existed for several years in regional chemical transport models, the Global Particulate Matter Source Apportionment Technology (GPSAT) represents the first incorporation into a global chemical transport model. GPSAT runs in parallel to a standard GEOS-Chem run. As a core principle, GPSAT uses the fact that all molecules of a given species have the same probability of undergoing any given process. This allows GPSAT to track many different species using only the flux information provided by GEOS-Chem's many processes. GPSAT accounts for the change in source-specific concentrations as a result of aqueous and gas-phase chemistry, horizontal and vertical transport, condensation and evaporation on particulate matter, emissions, and wet and dry deposition. By using fluxes, GPSAT minimizes computational cost by circumventing the computationally costly chemistry and transport solvers. GPSAT will allow researchers to address many pertinent research questions about global particulate matter, including the global impact of emissions from different source regions and the climate impacts from different source types and regions. For this first application of GPSAT, we investigate the contribution of the twenty largest urban areas worldwide to global particulate matter concentrations. The species investigated include: ammonium, nitrates, sulfates, and the secondary organic aerosols formed by the oxidation of benzene, isoprene, and terpenes. While GPSAT is not yet publicly available, we will incorporate it into a future standard release of GEOS-Chem so that all GEOS-Chem users will have access to this new tool.
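    The bookkeeping behind such flux-based apportionment can be illustrated with a toy sketch: each net flux computed by the host model for a species is split over the tagged sources in proportion to their current share, since every molecule is assumed equally likely to undergo the process. This is illustrative Python only, not GEOS-Chem or GPSAT code:

      def apportion_flux(source_conc, net_flux):
          """Distribute one process's net flux for a species over its tagged sources.

          source_conc -- dict mapping source name to its share of the species concentration
          net_flux    -- change in total concentration reported by the host model (may be negative)
          """
          total = sum(source_conc.values())
          if total <= 0.0:
              return dict(source_conc)
          return {src: c + net_flux * (c / total) for src, c in source_conc.items()}

      # Example: a deposition loss of -0.3 units shared by two tagged sources
      print(apportion_flux({"urban": 2.0, "background": 1.0}, -0.3))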

  11. Closing in on mass-degenerate dark matter scenarios with antiprotons and direct detection

    International Nuclear Information System (INIS)

    Garny, Mathias; Ibarra, Alejandro; Pato, Miguel; Vogl, Stefan

    2012-01-01

    Over the last years both cosmic-ray antiproton measurements and direct dark matter searches have proved particularly effective in constraining the nature of dark matter candidates. The present work focusses on these two types of constraints in a minimal framework which features a Majorana fermion as the dark matter particle and a scalar that mediates the coupling to quarks. Considering a wide range of coupling schemes, we derive antiproton and direct detection constraints using the latest data and paying close attention to astrophysical and nuclear uncertainties. Both signals are strongly enhanced in the presence of degenerate dark matter and scalar masses, but we show that the effect is especially dramatic in direct detection. Accordingly, the latest direct detection limits take the lead over antiprotons. We find that antiproton and direct detection data set stringent lower limits on the mass splitting, reaching 19% at a 300 GeV dark matter mass for a unity coupling. Interestingly, these limits are orthogonal to ongoing collider searches at the Large Hadron Collider, making it feasible to close in on degenerate dark matter scenarios within the next years.

  12. Closing in on mass-degenerate dark matter scenarios with antiprotons and direct detection

    Energy Technology Data Exchange (ETDEWEB)

    Garny, Mathias [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Ibarra, Alejandro; Pato, Miguel; Vogl, Stefan [Technische Univ. Muenchen, Garching (Germany). Physik-Department

    2012-07-15

    Over the last years both cosmic-ray antiproton measurements and direct dark matter searches have proved particularly effective in constraining the nature of dark matter candidates. The present work focusses on these two types of constraints in a minimal framework which features a Majorana fermion as the dark matter particle and a scalar that mediates the coupling to quarks. Considering a wide range of coupling schemes, we derive antiproton and direct detection constraints using the latest data and paying close attention to astrophysical and nuclear uncertainties. Both signals are strongly enhanced in the presence of degenerate dark matter and scalar masses, but we show that the effect is especially dramatic in direct detection. Accordingly, the latest direct detection limits take the lead over antiprotons. We find that antiproton and direct detection data set stringent lower limits on the mass splitting, reaching 19% at a 300 GeV dark matter mass for a unity coupling. Interestingly, these limits are orthogonal to ongoing collider searches at the Large Hadron Collider, making it feasible to close in on degenerate dark matter scenarios within the next years.

  13. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  14. EU Emissions Trading Scheme and Investments in the power sector

    Energy Technology Data Exchange (ETDEWEB)

    Sapienza, M.D.; Stefanoni, S.

    2007-07-01

    How does environmental regulation affect electricity players' investment decisions? Should policy makers look beyond it to alternative mechanisms - such as energy efficiency, capture and storage of carbon dioxide, and incentives for renewables - to fulfill the environmental objectives set by the Kyoto Protocol? This paper suggests, through a Real Option approach, how the efficacy of the EU Emission Trading Scheme for technological innovation, emissions reduction and energy price dynamics is strongly affected by the 'hysteresis' emerging from the capital budgeting process of the main utilities. As a matter of fact, the long-term substitution of coal-fired units by Combined Cycle Gas Turbine plants only takes place under quite restrictive conditions. (auth)

  15. The Redistribution of Trade Gains When Income Inequality Matters

    Directory of Open Access Journals (Sweden)

    Marco de Pinto

    2015-10-01

    Full Text Available How does a redistribution of trade gains affect welfare when income inequality matters? To answer this question, we extend the [1] model to unionized labor markets and heterogeneous workers. As redistribution schemes, we consider unemployment benefits that are financed either by a wage tax, a payroll tax or a profit tax. Assuming that welfare declines in income inequality, we find that welfare increases up to a maximum in the case of wage tax funding, while welfare declines weakly (sharply if a profit tax (payroll tax is implemented. These effects are caused by the wage tax neutrality (due to union wage setting and by a profit tax-induced decline in long-term unemployment. As a result, the government’s optimal redistribution scheme is to finance unemployment benefits by a wage tax.

  16. Methodology for optimization of process integration schemes in a biorefinery under uncertainty

    International Nuclear Information System (INIS)

    González-Cortés, Meilyn; Martínez-Martínez, Yenisleidys; Albernas-Carvajal, Yailet; Pedraza-Garciga, Julio; Morales-Zamora, Marlen (Departamento de Ingeniería Química, Facultad de Química y Farmacia, Universidad Central Marta Abreu de las Villas (Cuba))

    2017-01-01

    Uncertainty has a great impact on investment decisions, on the operability of plants, and on the feasibility of integration opportunities in chemical processes. This paper presents the steps for optimizing process investment in process integration under conditions of uncertainty. It shows the potential of sugarcane biomass for integration with several plants in a biorefinery scheme to obtain chemical products and thermal and electric energy. Among the factories with potential for this integration are pulp and paper mills, sugar factories, and other derivative processes. These factories share common resources and also produce a variety of products that can be exchanged between them, so that products generated in one of them can serve as raw material in another plant. The methodology developed guides the selection of feasible investment projects under uncertainty. The objective function considered was the maximization of net present value in the different scenarios generated from the integration scheme. (author)

  17. High resolution gamma-ray spectroscopy applied to bulk sample analysis

    International Nuclear Information System (INIS)

    Kosanke, K.L.; Koch, C.D.; Wilson, R.D.

    1980-01-01

    A high resolution Ge(Li) gamma-ray spectrometer has been installed and made operational for use in routine bulk sample analysis by the Bendix Field Engineering Corporation (BFEC) geochemical analysis department. The Ge(Li) spectrometer provides bulk sample analyses for potassium, uranium, and thorium that are superior to those obtained by the BFEC sodium iodide spectrometer. The near term analysis scheme permits a direct assay for uranium that corrects for bulk sample self-absorption effects and is independent of the uranium/radium disequilibrium condition of the sample. A more complete analysis scheme has been developed that fully utilizes the gamma-ray data provided by the Ge(Li) spectrometer and that more properly accounts for the sample self-absorption effect. This new analysis scheme should be implemented on the BFEC Ge(Li) spectrometer at the earliest date

  18. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  19. Nonlinear binding of phenanthrene to the extracted fulvic acid fraction in soil in comparison with other organic matter fractions and to the whole soil sample

    International Nuclear Information System (INIS)

    Liu Wenxin; Xu, Shanshan; Xing, Baoshan; Pan, Bo; Tao, Shu

    2010-01-01

    Fractions of soil organic matter in a natural soil were extracted and sorption (or binding) characteristics of phenanthrene on each fraction and to the whole sample were investigated. The organic carbon normalized single point sorption (or binding) coefficient followed lipid > humin (HM) > humic acid (HA) > fulvic acid (FA) > whole soil sample, while the nonlinear exponent exhibited lipid > FA > HA > whole soil sample > HM. FA showed nonlinear binding of phenanthrene as it often does with other fractions. HM and HA contributed the majority of organic carbon in the soil. The calculated sorption coefficients of the whole soil were about two times greater than the measured values at different equilibrium phenanthrene concentrations. As for phenanthrene, the sorption capacity and nonlinearity of the physically mixed HA-HM mixtures were stronger as compared to the chemically reconstituted HA-HM composite. This was attributed to (besides the conditioning effect of the organic solvents) interactions between HA and HM and acid-base additions during fractionation. - Nonlinear binding of phenanthrene to fulvic acid extracted from soil organic matter was found.

  20. Assessment of hybrid rotation-translation scan schemes for in vivo animal SPECT imaging

    International Nuclear Information System (INIS)

    Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Yao Rutao; Deng Xiao

    2013-01-01

    To perform in vivo animal single photon emission computed tomography imaging on a stationary detector gantry, we introduced a hybrid rotation-translation (HRT) tomographic scan, a combination of translational and limited angle rotational movements of the image object, to minimize gravity-induced animal motion. To quantitatively assess the performance of ten HRT scan schemes and the conventional rotation-only scan scheme, two simulated phantoms were first scanned with each scheme to derive the corresponding image resolution (IR) in the image field of view. The IR results of all the scan schemes were visually assessed and compared with corresponding outputs of four scan scheme evaluation indices, i.e. sampling completeness (SC), sensitivity (S), conventional system resolution (SR), and a newly devised directional spatial resolution (DR) that measures the resolution in any specified orientation. A representative HRT scheme was tested with an experimental phantom study. Eight of the ten HRT scan schemes evaluated achieved a superior performance compared to two other HRT schemes and the rotation-only scheme in terms of phantom image resolution. The same eight HRT scan schemes also achieved equivalent or better performance in terms of the four quantitative indices than the conventional rotation-only scheme. As compared to the conventional index SR, the new index DR appears to be a more relevant indicator of system resolution performance. The experimental phantom image obtained from the selected HRT scheme was satisfactory. We conclude that it is feasible to perform in vivo animal imaging with a HRT scan scheme and SC and DR are useful predictors for quantitatively assessing the performance of a scan scheme. (paper)

  1. Smartphone-Based Patients' Activity Recognition by Using a Self-Learning Scheme for Medical Monitoring.

    Science.gov (United States)

    Guo, Junqi; Zhou, Xi; Sun, Yunchuan; Ping, Gong; Zhao, Guoxing; Li, Zhuorong

    2016-06-01

    Smartphone-based activity recognition has recently received remarkable attention in various applications of mobile health such as safety monitoring, fitness tracking, and disease prediction. To achieve more accurate and simplified medical monitoring, this paper proposes a self-learning scheme for patients' activity recognition, in which a patient only needs to carry an ordinary smartphone that contains common motion sensors. After the real-time data collection through this smartphone, we preprocess the data using coordinate system transformation to eliminate phone orientation influence. A set of robust and effective features are then extracted from the preprocessed data. Because a patient may inevitably perform various unpredictable activities that have no a priori knowledge in the training dataset, we propose a self-learning activity recognition scheme. The scheme determines whether there are a priori training samples and labeled categories in the training pools that match the unpredictable activity data well. If not, it automatically assembles these unpredictable samples into different clusters and gives them new category labels. These clustered samples combined with the acquired new category labels are then merged into the training dataset to reinforce the recognition ability of the self-learning model. In experiments, we evaluate our scheme using the data collected from two postoperative patient volunteers, including six labeled daily activities as the initial a priori categories in the training pool. Experimental results demonstrate that the proposed self-learning scheme for activity recognition works very well for most cases. When there exist several types of unseen activities without any a priori information, the accuracy reaches above 80% after the self-learning process converges.
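    The self-learning step can be mimicked in a few lines: new samples that lie far from every labelled training sample are clustered and given fresh category labels before being merged back into the training pool. A minimal sketch with scikit-learn, assuming fixed-length feature vectors (illustrative only, not the authors' implementation):

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neighbors import NearestNeighbors

      def self_learning_update(X_train, y_train, X_new, dist_threshold=1.0, n_clusters=2):
          """Cluster unmatched samples, label them as new activities, and grow the pool."""
          nn = NearestNeighbors(n_neighbors=1).fit(X_train)
          dist, _ = nn.kneighbors(X_new)
          unmatched = X_new[dist[:, 0] > dist_threshold]   # no close a priori sample exists
          if len(unmatched) >= n_clusters:
              clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(unmatched)
              new_labels = np.array(["new_activity_%d" % c for c in clusters], dtype=object)
              X_train = np.vstack([X_train, unmatched])
              y_train = np.concatenate([np.asarray(y_train, dtype=object), new_labels])
          return X_train, y_train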

  2. Impacts of Rotation Schemes on Ground-Dwelling Beneficial Arthropods.

    Science.gov (United States)

    Dunbar, Mike W; Gassmann, Aaron J; O'Neal, Matthew E

    2016-10-01

    Crop rotation alters agroecosystem diversity temporally, and increasing the number of crops in rotation schemes can increase crop yields and reduce reliance on pesticides. We hypothesized that increasing the number of crops in annual rotation schemes would positively affect ground-dwelling beneficial arthropod communities. During 2012 and 2013, pitfall traps were used to measure activity-density and diversity of ground-dwelling communities within three previously established, long-term crop rotation studies located in Wisconsin and Illinois. Rotation schemes sampled included continuous corn, a 2-yr annual rotation of corn and soybean, and a 3-yr annual rotation of corn, soybean, and wheat. Insects captured were identified to family, and non-insect arthropods were identified to class, order, or family, depending upon the taxa. Beneficial arthropods captured included natural enemies, granivores, and detritivores. The beneficial community from continuous corn plots was significantly more diverse compared with the community in the 2-yr rotation, whereas the community in the 3-yr rotation did not differ from either rotation scheme. The activity-density of the total community and any individual taxa did not differ among rotation schemes in either corn or soybean. Crop species within all three rotation schemes were annual crops, and are associated with agricultural practices that make infield habitat subject to anthropogenic disturbances and temporally unstable. Habitat instability and disturbance can limit the effectiveness and retention of beneficial arthropods, including natural enemies, granivores, and detritivores. Increasing non-crop and perennial species within landscapes in conjunction with more diverse rotation schemes may increase the effect of biological control of pests by natural enemies.

  3. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

    The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. An adaptive sampling scheme is then derived to obtain a better solution from the information gathered. The main advantage of the adaptive scheme is that it chooses the most suitable number of samples at the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function based on the particle state and to ensure that the number of samples of the emitted particle is proportional to that importance function. The numerical results show that the adaptive scheme with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and the adaptive importance sampling method gives satisfactory results as well. (authors)
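    The allocation idea can be sketched as follows: treat each emission point as a sampling station and give it a share of the sampling budget proportional to an importance function of its state, so that histories likely to reach the detector behind thick shielding are sampled more often. This is a toy Python illustration under those assumptions, not the authors' algorithm:

      def allocate_samples(emission_points, importance, total_samples):
          """Split a fixed sampling budget across emission-point stations by importance.

          emission_points -- list of particle states recorded at the emission station
          importance      -- function mapping a state to a positive importance value
          """
          weights = [importance(p) for p in emission_points]
          norm = sum(weights)
          return [max(1, round(total_samples * w / norm)) for w in weights]

      # Example: states at larger penetration depth are judged more important
      states = [{"depth": 1.0}, {"depth": 5.0}, {"depth": 9.0}]
      print(allocate_samples(states, lambda s: s["depth"] ** 2, total_samples=1000))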

  4. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, it can be seen that this problem turns out to be that of computing the Cumulative Distribution Function (CDF) for the sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods as naive Monte Carlo (MC) simulations would require high computational complexity. In this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies for arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.

  5. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.

    2015-11-13

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, it can be seen that this problem turns out to be that of computing the Cumulative Distribution Function (CDF) for the sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods as naive Monte Carlo (MC) simulations would require high computational complexity. In this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies for arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
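    The flavour of such estimators is easy to demonstrate for the special case of exponentially distributed branch gains, where twisting the density simply changes the rate parameter. The sketch below estimates the small left-tail probability P(X_1 + ... + X_L < gamma) by sampling from a twisted exponential and reweighting with the exact likelihood ratio; it is an illustrative exponential-twisting example, not the hazard-rate twisting estimators of the paper:

      import math
      import random

      def outage_prob_is(lam, L, gamma, theta, n_runs=100_000, seed=1):
          """Importance-sampling estimate of P(sum of L i.i.d. Exp(lam) gains < gamma).

          Samples are drawn from Exp(lam + theta), which concentrates mass on the small
          gain values responsible for an outage; each hit is weighted by the likelihood
          ratio prod_i f(x_i)/g(x_i) = (lam / (lam + theta))**L * exp(theta * sum_i x_i).
          """
          rng = random.Random(seed)
          acc = 0.0
          for _ in range(n_runs):
              x = [rng.expovariate(lam + theta) for _ in range(L)]
              s = sum(x)
              if s < gamma:
                  acc += (lam / (lam + theta)) ** L * math.exp(theta * s)
          return acc / n_runs

      # Example: rare outage of a 4-branch combiner; naive MC would need far more runs
      print(outage_prob_is(lam=1.0, L=4, gamma=0.2, theta=20.0))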

  6. LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica

    Science.gov (United States)

    Caprio, M. A.

    2005-09-01

    LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots. Program summary: Title of program: LevelScheme. Catalogue identifier: ADVZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVZ. Operating systems: any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux. Programming language used: Mathematica 4. Number of bytes in distributed program, including test and documentation: 3 051 807. Distribution format: tar.gz. Nature of problem: creation of level scheme diagrams; creation of publication-quality multipart figures incorporating diagrams and plots. Method of solution: a set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.

  7. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: concepts of population, sample, and sampling. Initial Ramifications: sampling design and sampling scheme; random numbers and their uses in simple random sampling (SRS); drawing simple random samples with and without replacement; estimation of mean, total, and ratio of totals/means, with variance and variance estimation; determination of sample sizes; appendix: more on equal probability sampling, the Horvitz-Thompson estimator, sufficiency, likelihood, a non-existence theorem. More Intricacies: unequal probability sampling strategies; PPS sampling. Exploring Improved Ways: stratified sampling; cluster sampling; multi-stage sampling; multi-phase sampling with ratio and regression estimation; controlled sampling. Modeling: super-population modeling; prediction approach; model-assisted approach; Bayesian methods; spatial smoothing; sampling on successive occasions: panel rotation; non-response and not-at-homes; weighting adj...

  8. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available The recent sophistication of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded-system design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain and adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the local characteristics of the signal, which are usually never considered, in order to filter only the relevant signal parts by employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency and hence in processing power when compared to classical techniques.
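    The level-crossing idea itself fits in a few lines: a sample is retained only when the signal crosses one of a set of predefined amplitude levels, so quiet stretches produce almost no samples and active stretches produce many. A small sketch, with a uniformly pre-digitized sequence standing in for the analog input (illustrative only):

      import math

      def level_crossing_sample(t, x, levels):
          """Keep only the (time, value) pairs where the signal crosses a reference level."""
          kept = []
          for i in range(1, len(x)):
              for lv in levels:
                  if (x[i - 1] - lv) * (x[i] - lv) < 0:   # consecutive values straddle the level
                      kept.append((t[i], x[i]))
                      break                               # one sample per crossing instant
          return kept

      # Example: an active burst followed by a quiet segment
      t = [i * 0.01 for i in range(300)]
      x = [math.sin(2 * math.pi * 5 * ti) if ti < 1.0 else 0.05 for ti in t]
      print(len(level_crossing_sample(t, x, levels=[-0.5, 0.0, 0.5])))  # all samples come from the burst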

  9. Adaptive sampling of AEM transients

    Science.gov (United States)

    Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea

    2016-02-01

    This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the sampling of the electromagnetic transient is done using a fixed number of gates whose width grows logarithmically (log-gating). Log-gating has two main benefits: improving the signal to noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring a good resolution at the early times. However, as a result of fixed time gates, conventional log-gating does not consider any geological variations in the surveyed area, nor the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating on the basis of slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to obtain more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
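    One way to realize slope-driven gating is to place gate boundaries so that each gate spans roughly the same cumulative change of the log-log slope of the transient, which automatically concentrates gates where the decay changes character. The sketch below is an assumption about how such adaptation could be implemented, not the authors' exact algorithm:

      import numpy as np

      def adaptive_gates(times, signal, n_gates):
          """Return gate edges that equalize the cumulative change of the log-log slope."""
          logt, logs = np.log10(times), np.log10(np.abs(signal))
          slope = np.gradient(logs, logt)
          change = np.abs(np.gradient(slope, logt))
          cum = np.cumsum(change)
          cum = cum / cum[-1]
          targets = np.linspace(0.0, 1.0, n_gates + 1)[1:-1]
          idx = np.searchsorted(cum, targets)
          return np.unique(np.concatenate(([times[0]], times[idx], [times[-1]])))

      # Example: a synthetic transient whose decay character changes near the crossover of two terms
      t = np.logspace(-5, -2, 200)
      v = np.exp(-t / 1e-4) + 1e-3 * np.exp(-t / 2e-3)
      print(adaptive_gates(t, v, n_gates=12))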

  10. Quantum superchemistry in an output coupler of coherent matter waves

    International Nuclear Information System (INIS)

    Jing, H.; Cheng, J.

    2006-01-01

    We investigate the quantum superchemistry or Bose-enhanced atom-molecule conversions in a coherent output coupler of matter waves, as a simple generalization of the two-color photoassociation. The stimulated effects of molecular output step and atomic revivals are exhibited by steering the rf output couplings. The quantum noise-induced molecular damping occurs near a total conversion in a levitation trap. This suggests a feasible two-trap scheme to make a stable coherent molecular beam

  11. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well-defined, simple error-correction scheme that works on erroneous copies at the receiver. Combined with ARQ protocols, it offers higher throughput in networks than basic ARQ protocols alone. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of the two erroneous copies. In the present work, we propose a scheme that corrects errors even if they occur at the same bit location of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, will offer higher throughput. (author)
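    The combining logic is simple to illustrate: bit positions where the two received copies disagree are the candidate error locations, and trying both values there against a checksum recovers the packet, provided the copies are not corrupted at the same positions; transmitting the second copy bit-reversed, as proposed here, makes identical channel error positions land on different logical bits. A toy sketch (illustrative only, using a CRC32 check as the integrity test):

      import itertools
      import zlib

      def combine_copies(copy1, copy2_reversed, crc_expected):
          """Reconstruct a packet from two erroneous copies, the second one sent bit-reversed."""
          copy2 = copy2_reversed[::-1]                  # undo the bit reversal
          diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
          for trial in itertools.product("01", repeat=len(diff)):
              candidate = list(copy1)
              for pos, bit in zip(diff, trial):
                  candidate[pos] = bit
              candidate = "".join(candidate)
              if zlib.crc32(candidate.encode()) == crc_expected:
                  return candidate
          return None                                   # errors overlapped in both copies

      # Example: one bit error in each copy, at different logical positions
      packet = "1010110011100001"
      crc = zlib.crc32(packet.encode())
      copy1 = "0010110011100001"                        # bit 0 flipped in transit
      copy2_reversed = "1010110011100101"[::-1]         # bit 13 flipped, then sent bit-reversed
      print(combine_copies(copy1, copy2_reversed, crc) == packet)   # True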

  12. THE SL2S GALAXY-SCALE LENS SAMPLE. V. DARK MATTER HALOS AND STELLAR IMF OF MASSIVE EARLY-TYPE GALAXIES OUT TO REDSHIFT 0.8

    Energy Technology Data Exchange (ETDEWEB)

    Sonnenfeld, Alessandro; Treu, Tommaso [Physics Department, University of California, Santa Barbara, CA 93106 (United States); Marshall, Philip J. [Kavli Institute for Particle Astrophysics and Cosmology, P.O. Box 20450, MS29, Stanford, CA 94309 (United States); Suyu, Sherry H. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Gavazzi, Raphaël [Institut d' Astrophysique de Paris, UMR7095 CNRS-Université Pierre et Marie Curie, 98bis bd Arago, F-75014 Paris (France); Auger, Matthew W. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Nipoti, Carlo, E-mail: sonnen@physics.ucsb.edu [Department of Physics and Astronomy, Bologna University, viale Berti-Pichat 6/2, I-40127 Bologna (Italy)

    2015-02-20

    We investigate the cosmic evolution of the internal structure of massive early-type galaxies over half of the age of the universe. We perform a joint lensing and stellar dynamics analysis of a sample of 81 strong lenses from the Strong Lensing Legacy Survey and Sloan ACS Lens Survey and combine the results with a hierarchical Bayesian inference method to measure the distribution of dark matter mass and stellar initial mass function (IMF) across the population of massive early-type galaxies. Lensing selection effects are taken into account. We find that the dark matter mass projected within the inner 5 kpc increases for increasing redshift, decreases for increasing stellar mass density, but is roughly constant along the evolutionary tracks of early-type galaxies. The average dark matter slope is consistent with that of a Navarro-Frenk-White profile, but is not well constrained. The stellar IMF normalization is close to a Salpeter IMF at log M {sub *} = 11.5 and scales strongly with increasing stellar mass. No dependence of the IMF on redshift or stellar mass density is detected. The anti-correlation between dark matter mass and stellar mass density supports the idea of mergers being more frequent in more massive dark matter halos.

  13. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    Science.gov (United States)

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

    In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate pulse peak sampling, which in turn causes errors in parameter estimation. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data in the proposed scheme. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
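    The feedback idea can be sketched as a simple search: scan candidate sampling delays, monitor the statistical power (mean square) of the data acquired at each delay, and lock onto the delay that maximizes it, which approximates true peak sampling despite the limited ADC bandwidth. In the hedged sketch below, sample_at_delay and the toy pulse model are hypothetical stand-ins, not the authors' algorithm:

      import math
      import random

      def calibrate_sampling_delay(sample_at_delay, delays, n_pulses=500):
          """Pick the sampling delay whose acquired data has the largest statistical power.

          sample_at_delay(d, n) -- hypothetical routine returning n samples taken at delay d
                                   relative to the pulse trigger
          """
          best_delay, best_power = None, float("-inf")
          for d in delays:
              data = sample_at_delay(d, n_pulses)
              power = sum(v * v for v in data) / len(data)
              if power > best_power:
                  best_delay, best_power = d, power
          return best_delay

      # Toy demonstration with a synthetic Gaussian pulse peaking at delay 3.0
      def fake_acquire(d, n, centre=3.0, width=1.0):
          return [math.exp(-((d - centre) / width) ** 2) + random.gauss(0.0, 0.01) for _ in range(n)]

      print(calibrate_sampling_delay(fake_acquire, delays=[i * 0.5 for i in range(13)]))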

  14. A Method for Choosing the Best Samples for Mars Sample Return.

    Science.gov (United States)

    Gordon, Peter R; Sephton, Mark A

    2018-05-01

    Success of a future Mars Sample Return mission will depend on the correct choice of samples. Pyrolysis-FTIR can be employed as a triage instrument for Mars Sample Return. The technique can thermally dissociate minerals and organic matter for detection. Identification of certain mineral types can determine the habitability of the depositional environment, past or present, while detection of organic matter may suggest past or present habitation. In Mars' history, the Theiikian era represents an attractive target for life search missions and the acquisition of samples. The acidic and increasingly dry Theiikian may have been habitable and followed a lengthy neutral and wet period in Mars' history during which life could have originated and proliferated to achieve relatively abundant levels of biomass with a wide distribution. Moreover, the sulfate minerals produced in the Theiikian are also known to be good preservers of organic matter. We have used pyrolysis-FTIR and samples from a Mars analog ferrous acid stream with a thriving ecosystem to test the triage concept. Pyrolysis-FTIR identified those samples with the greatest probability of habitability and habitation. A three-tier scoring system was developed based on the detection of (i) organic signals, (ii) carbon dioxide and water, and (iii) sulfur dioxide. The presence of each component was given a score of A, B, or C depending on whether the substance had been detected, tentatively detected, or not detected, respectively. Single-step (for greatest possible sensitivity) or multistep (for more diagnostic data) pyrolysis-FTIR methods informed the assignments. The system allowed the highest-priority samples to be categorized as AAA (or A*AA if the organic signal was complex), while the lowest-priority samples could be categorized as CCC. Our methods provide a mechanism with which to rank samples and identify those that should take the highest priority for return to Earth during a Mars Sample Return mission. Key Words
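    The three-tier scoring system translates directly into a small ranking routine: each of the three indicators is graded A, B or C for detected, tentatively detected or not detected, and samples are sorted by the resulting string, with A* marking a complex organic signal. An illustrative sketch with hypothetical sample names:

      def triage_score(organics, co2_h2o, so2, complex_organics=False):
          """Return a score such as 'AAA', 'A*AA' or 'CCC' from pyrolysis-FTIR detections."""
          grade = {"detected": "A", "tentative": "B", "absent": "C"}
          first = grade[organics]
          if first == "A" and complex_organics:
              first = "A*"
          return first + grade[co2_h2o] + grade[so2]

      samples = {
          "stream sediment": triage_score("detected", "detected", "detected", complex_organics=True),
          "bleached sand":   triage_score("absent", "tentative", "absent"),
      }
      # Highest-priority samples ('A*AA', then 'AAA') sort first, 'CCC' last
      print(sorted(samples.items(), key=lambda kv: kv[1]))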

  15. Uniformity and Deviation of Intra-axonal Cross-sectional Area Coverage of the Gray-to-White Matter Interface

    Directory of Open Access Journals (Sweden)

    Stefan Sommer

    2017-12-01

    Full Text Available Diffusion magnetic resonance imaging (dMRI) is a compelling tool for investigating the structure and geometry of brain tissue based on indirect measurement of the diffusion anisotropy of water. Recent developments in global top-down tractogram optimizations enable the estimation of streamline weights, which characterize the connection between gray matter areas. In this work, the intra-axonal cross-sectional area coverage of the gray-to-white matter interface was examined by intersecting tractography streamlines with cortical regions of interest. The area coverage is the ratio of streamline weights divided by the surface area at the gray-to-white matter interface and assesses the estimated percentage which is covered by intra-axonal space. A high correlation (r = 0.935) between streamline weights and the cortical surface area was found across all regions of interest in all subjects. The variance across different cortical regions exhibits similarities to myelin maps. Additionally, we examined the effect of different diffusion gradient subsets at a lower, clinically feasible spatial resolution. Subsampling of the initial high-resolution diffusion dataset did not alter the tendency of the area coverage at the gray-to-white matter interface across cortical areas and subjects. However, single-shell acquisition schemes with lower b-values lead to a steady increase in area coverage in comparison to the full acquisition scheme at high resolution.

  16. A quantum-mechanics molecular-mechanics scheme for extended systems

    International Nuclear Information System (INIS)

    Hunt, Diego; Scherlis, Damián A; Sanchez, Veronica M

    2016-01-01

    We introduce and discuss a hybrid quantum-mechanics molecular-mechanics (QM-MM) approach for Car–Parrinello DFT simulations with pseudopotentials and planewaves basis, designed for the treatment of periodic systems. In this implementation the MM atoms are considered as additional QM ions having fractional charges of either sign, which provides conceptual and computational simplicity by exploiting the machinery already existing in planewave codes to deal with electrostatics in periodic boundary conditions. With this strategy, both the QM and MM regions are contained in the same supercell, which determines the periodicity for the whole system. Thus, while this method is not meant to compete with non-periodic QM-MM schemes able to handle extremely large but finite MM regions, it is shown that for periodic systems of a few hundred atoms, our approach provides substantial savings in computational times by treating classically a fraction of the particles. The performance and accuracy of the method is assessed through the study of energetic, structural, and dynamical aspects of the water dimer and of the aqueous bulk phase. Finally, the QM-MM scheme is applied to the computation of the vibrational spectra of water layers adsorbed at the TiO 2 anatase (1 0 1) solid–liquid interface. This investigation suggests that the inclusion of a second monolayer of H 2 O molecules is sufficient to induce on the first adsorbed layer, a vibrational dynamics similar to that taking place in the presence of an aqueous environment. The present QM-MM scheme appears as a very interesting tool to efficiently perform molecular dynamics simulations of complex condensed matter systems, from solutions to nanoconfined fluids to different kind of interfaces. (paper)

  17. A quantum-mechanics molecular-mechanics scheme for extended systems.

    Science.gov (United States)

    Hunt, Diego; Sanchez, Veronica M; Scherlis, Damián A

    2016-08-24

    We introduce and discuss a hybrid quantum-mechanics molecular-mechanics (QM-MM) approach for Car-Parrinello DFT simulations with pseudopotentials and planewaves basis, designed for the treatment of periodic systems. In this implementation the MM atoms are considered as additional QM ions having fractional charges of either sign, which provides conceptual and computational simplicity by exploiting the machinery already existing in planewave codes to deal with electrostatics in periodic boundary conditions. With this strategy, both the QM and MM regions are contained in the same supercell, which determines the periodicity for the whole system. Thus, while this method is not meant to compete with non-periodic QM-MM schemes able to handle extremely large but finite MM regions, it is shown that for periodic systems of a few hundred atoms, our approach provides substantial savings in computational times by treating classically a fraction of the particles. The performance and accuracy of the method is assessed through the study of energetic, structural, and dynamical aspects of the water dimer and of the aqueous bulk phase. Finally, the QM-MM scheme is applied to the computation of the vibrational spectra of water layers adsorbed at the TiO2 anatase (1 0 1) solid-liquid interface. This investigation suggests that the inclusion of a second monolayer of H2O molecules is sufficient to induce on the first adsorbed layer, a vibrational dynamics similar to that taking place in the presence of an aqueous environment. The present QM-MM scheme appears as a very interesting tool to efficiently perform molecular dynamics simulations of complex condensed matter systems, from solutions to nanoconfined fluids to different kind of interfaces.

  18. Two phase sampling

    CERN Document Server

    Ahmad, Zahoor; Hanif, Muhammad

    2013-01-01

    The development of estimators of population parameters based on two-phase sampling schemes has seen a dramatic increase in the past decade. Various authors have developed estimators of population using either one or two auxiliary variables. The present volume is a comprehensive collection of estimators available in single and two phase sampling. The book covers estimators which utilize information on single, two and multiple auxiliary variables of both quantitative and qualitative nature. Th...

  19. Transport in Chern-Simons-matter theories

    Energy Technology Data Exchange (ETDEWEB)

    Gur-Ari, Guy; Hartnoll, Sean; Mahajan, Raghu [Stanford Institute for Theoretical Physics, Stanford University,Stanford, CA 94305 (United States)

    2016-07-18

    The frequency-dependent longitudinal and Hall conductivities — σ{sub xx} and σ{sub xy} — are dimensionless functions of ω/T in 2+1 dimensional CFTs at nonzero temperature. These functions characterize the spectrum of charged excitations of the theory and are basic experimental observables. We compute these conductivities for large N Chern-Simons theory with fermion matter. The computation is exact in the ’t Hooft coupling λ at N=∞. We describe various physical features of the conductivity, including an explicit relation between the weight of the delta function at ω=0 in σ{sub xx} and the existence of infinitely many higher spin conserved currents in the theory. We also compute the conductivities perturbatively in Chern-Simons theory with scalar matter and show that the resulting functions of ω/T agree with the strong coupling fermionic result. This provides a new test of the conjectured 3d bosonization duality. In matching the Hall conductivities we resolve an outstanding puzzle by carefully treating an extra anomaly that arises in the regularization scheme used.

  20. A full quantum network scheme

    International Nuclear Information System (INIS)

    Ma Hai-Qiang; Wei Ke-Jin; Yang Jian-Hui; Li Rui-Xue; Zhu Wu

    2014-01-01

    We present a full quantum network scheme using a modified BB84 protocol. Unlike other quantum network schemes, it allows quantum keys to be distributed between two arbitrary users with the help of an intermediary detecting user. Moreover, it has good expansibility and prevents all potential attacks using loopholes in a detector, so it is more practical to apply. Because the fiber birefringence effects are automatically compensated, the scheme is distinctly stable in principle and in experiment. The simple components for every user make our scheme easier for many applications. The experimental results demonstrate the stability and feasibility of this scheme. (general)

  1. Transmission usage cost allocation schemes

    International Nuclear Information System (INIS)

    Abou El Ela, A.A.; El-Sehiemy, R.A.

    2009-01-01

    This paper presents different suggested transmission usage cost allocation (TCA) schemes for charging individual system users. Different independent system operator (ISO) visions are presented using pro rata and flow-based TCA methods. Two flow-based TCA (FTCA) schemes are proposed. The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concepts to lossy networks through a two-stage procedure. The second FTCA scheme is based on modified sensitivity factors (MSF). These factors are developed from actual measurements of the power flows in transmission lines and the power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)
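    The simplest of these visions, pro rata allocation, charges each network user in proportion to its power regardless of how the flows actually use individual lines; the flow-based schemes then refine this by tracing usage line by line. A minimal sketch of the pro rata baseline (illustrative only, with hypothetical user names):

      def pro_rata_allocation(total_cost, power_by_user):
          """Split the total transmission cost in proportion to each user's power."""
          total_power = sum(power_by_user.values())
          return {user: total_cost * p / total_power for user, p in power_by_user.items()}

      # Example: two generators and one load sharing a 1000-unit transmission cost
      print(pro_rata_allocation(1000.0, {"G1": 300.0, "G2": 200.0, "L1": 500.0}))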

  2. Algebraic renormalization of parity-preserving QED3 coupled to scalar matter II: broken case

    International Nuclear Information System (INIS)

    Cima, O.M. del; Franco, D.H.T.; Helayel-Neto, J.A.; Piguet, O.

    1996-11-01

    In this letter the algebraic renormalization method, which is independent of any kind of regularization scheme, is presented for parity-preserving QED 3 coupled to scalar matter in the broken regime, where the scalar assumes a finite vacuum expectation value, ⟨φ⟩ = v. The model is shown to be stable under radiative corrections and to be anomaly free. (author)

  3. An adaptive sampling scheme for deep-penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Ji, Zhicheng; Pei, Lucheng

    2013-01-01

    As is well known, the deep-penetration problem has been one of the important and difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive Monte Carlo method that uses the emission point as a sampling station for shielding calculations is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and may, to some degree, overcome the underestimation problem that easily occurs in deep-penetration calculations.

  4. The British American Tobacco out growers scheme: Determinants of ...

    African Journals Online (AJOL)

    The study analyzed the operation and performance of the Tobacco Outgrower Scheme in Oyo State, Nigeria. The data for the analysis came from a random sample survey of the study area. The treatment effect model was adopted in analyzing the data. Evidence from the probit analysis indicates that membership of the ...

  5. Thermal dark matter through the Dirac neutrino portal

    Science.gov (United States)

    Batell, Brian; Han, Tao; McKeen, David; Haghi, Barmak Shams Es

    2018-04-01

    We study a simple model of thermal dark matter annihilating to standard model neutrinos via the neutrino portal. A (pseudo-)Dirac sterile neutrino serves as a mediator between the visible and the dark sectors, while an approximate lepton number symmetry allows for a large neutrino Yukawa coupling and, in turn, efficient dark matter annihilation. The dark sector consists of two particles, a Dirac fermion and complex scalar, charged under a symmetry that ensures the stability of the dark matter. A generic prediction of the model is a sterile neutrino with a large active-sterile mixing angle that decays primarily invisibly. We derive existing constraints and future projections from direct detection experiments, colliders, rare meson and tau decays, electroweak precision tests, and small scale structure observations. Along with these phenomenological tests, we investigate the consequences of perturbativity and scalar mass fine tuning on the model parameter space. A simple, conservative scheme to confront the various tests with the thermal relic target is outlined, and we demonstrate that much of the cosmologically-motivated parameter space is already constrained. We also identify new probes of this scenario such as multibody kaon decays and Drell-Yan production of W bosons at the LHC.

  6. Matroids and quantum-secret-sharing schemes

    International Nuclear Information System (INIS)

    Sarvepalli, Pradeep; Raussendorf, Robert

    2010-01-01

    A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.

  7. How old is this bird? The age distribution under some phase sampling schemes.

    Science.gov (United States)

    Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter

    2017-12-01

    In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to provide an answer to the simple question "What is the conditional age distribution of the individual, given its current phase"? We show that the answer depends on how we interpret the question, and in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin Petroica traversi during the monitoring period 2007-2014.
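    Under one of the observation schemes that can be considered, sampling a living individual uniformly from a stationary population with constant recruitment, the conditional age density given the observed phase j is proportional to the j-th entry of alpha exp(T a), where alpha is the initial phase distribution and T the sub-generator over the transient states; other observation schemes change the answer, as the paper emphasizes. A numerical sketch of that particular case (illustrative toy parameters, not the black robin model):

      import numpy as np
      from scipy.linalg import expm

      def conditional_age_density(alpha, T, phase, ages):
          """Conditional age density given the current phase, for a stationary population."""
          dens = np.array([(alpha @ expm(T * a))[phase] for a in ages])
          da = ages[1] - ages[0]
          return dens / (dens.sum() * da)               # normalize on the uniform age grid

      # Toy two-phase lifetime model: phase 0 (juvenile) -> phase 1 (adult) -> death
      alpha = np.array([1.0, 0.0])
      T = np.array([[-1.2, 1.0],
                    [0.0, -0.3]])
      ages = np.linspace(0.0, 20.0, 400)
      print(conditional_age_density(alpha, T, phase=1, ages=ages)[:5])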

  8. Dependence of 210Po activity on organic matter in the reverine environs of coastal Kerala

    International Nuclear Information System (INIS)

    Narayana, Y.; Venunathan, N.

    2011-01-01

    This paper deals with the distribution of 210 Po in river bank soil samples of three major rivers of Kerala, namely the Bharathapuzha, Periyar and Kallada. The dependence of 210 Po activity on the organic matter content of the samples was also studied. The soil samples were collected and analyzed for the 210 Po radionuclide using a standard radiochemical analytical method. The activity of 210 Po increases with increasing organic matter content in the samples. Along the Bharathapuzha river bank the 210 Po activity ranges from 2.96 to 12.48 Bq kg -1 with a mean of 5.62 Bq kg -1 . The organic matter percentage in the samples ranges from 0.4 to 2.8, and a good correlation with correlation coefficient 0.9 was found between activity and organic matter percentage. In the Periyar river environs the 210 Po activity ranges from 3.47 to 13.39 Bq kg -1 with a mean value of 9.27 Bq kg -1 . The organic matter percentage in these samples ranges from 1.20 to 4.10, and the correlation coefficient between 210 Po activity and organic matter percentage was found to be 0.8. In the Kallada river bank soil samples the 210 Po activity ranges from 4.46 to 6.45 Bq kg -1 . The organic matter percentage ranges from 1.4 to 3. The correlation coefficient between 210 Po activity and organic matter percentage in the samples was found to be 0.9. (author)

  9. An effective model for fermion dark matter. Indirect detection of supersymmetric dark matter in astronomy with the CELESTE Telescope

    International Nuclear Information System (INIS)

    Lavalle, Julien

    2004-01-01

    The purpose of this thesis is to discuss both phenomenological and experimental aspects of Dark Matter related to its indirect detection with gamma-ray astronomy. In the MSSM framework, neutralinos arise as natural candidates for non-baryonic Cold Dark Matter, whose gravitational effects manifest in the Universe at different scales. As they are Majorana particles, they may in principle annihilate in high-density regions, such as the centres of galaxies, and produce gamma rays. Nevertheless, the expected fluxes are basically low compared to experimental sensitivities. After estimating gamma fluxes from the M31 and Draco galaxies in the MSSM scheme, we first generalize the MSSM couplings by studying an effective Lagrangian. We show that the single constraint of imposing a relic abundance compatible with recent measurements obviously depletes the gamma-ray production significantly, but also that predictions in this effective approach are more optimistic for indirect detection than in the MSSM. In a second part, we present the indirect searches for Dark Matter performed with the CELESTE Cherenkov telescope towards the galaxy M31. We propose a statistical method to reconstruct spectra, mandatory to discriminate between classical and exotic spectra. The M31 data analysis enables the extraction of an upper limit on the gamma-ray flux, which is the first worldwide for a galaxy in the energy range 50-500 GeV and whose astrophysical interest goes beyond indirect searches for Dark Matter. (author)

  10. Basic model of fermion dark matter. Indirect detection of supersymmetric dark matter in γ astronomy with the CELESTE telescope

    International Nuclear Information System (INIS)

    Lavalle, J.

    2004-10-01

    The purpose of this thesis is to discuss both phenomenological and experimental aspects of Dark Matter related to its indirect detection with gamma-ray astronomy. In the MSSM (Minimal Supersymmetric Standard Model) framework, neutralinos arise as natural candidates for non-baryonic Cold Dark Matter, whose gravitational effects manifest in the Universe at different scales. As they are Majorana particles, they may in principle annihilate in high-density regions, such as the centres of galaxies, and produce gamma rays. Nevertheless, the expected fluxes are basically low compared to experimental sensitivities. After estimating gamma fluxes from the M31 and Draco galaxies in the MSSM scheme, we first generalize the MSSM couplings by studying an effective Lagrangian. We show that the single constraint of imposing a relic abundance compatible with recent measurements obviously depletes the gamma-ray production significantly, but also that predictions in this effective approach are more optimistic for indirect detection than in the MSSM. In a second part, we present the indirect searches for Dark Matter performed with the CELESTE Cherenkov telescope towards the galaxy M31. We propose a statistical method to reconstruct spectra, mandatory to discriminate between classical and exotic spectra. The M31 data analysis enables the extraction of an upper limit on the gamma-ray flux, which is the first worldwide for a galaxy in the energy range 50-500 GeV and whose astrophysical interest goes beyond indirect searches for Dark Matter. (author)

  11. Samplings of urban particulate matter for mutagenicity assays; Campionamenti di particolato atmosferico in area urbana per valutazioni di potenziale mutageno

    Energy Technology Data Exchange (ETDEWEB)

    De Zaiacomo, T. [ENEA, Centro Ricerche Bologna (Italy). Dip. Ambiente

    1996-07-01

In the frame of a specific program for the evaluation of the mutagenic activity of urban particulate matter, an experimental arrangement has been developed to sample aerosuspended particles from the external environment, carried indoors by means of a fan. The instrumentation was placed directly in the air flow to minimize particle losses and consisted of: a total filter, collecting particles without any size separation; a cascade impactor, fractionating the urban particulate to obtain separate samples for analysis; an optical device for real-time monitoring of the aerosol concentration; and temperature and relative humidity sensors. Some of the samples obtained were analysed to investigate particle morphology, aerosol size distributions, the effect of relative humidity on the collected particulate, and the gravimetric mass compared with the real-time optical determinations. The results obtained are reported here, together with some considerations about carbonaceous particles, which in urban areas originate mainly from diesel exhausts, their degree of agglomeration, and their role in carrying other substances into the human respiratory tract.

  12. Advances in high pressure research in condensed matter: proceedings of the international conference on condensed matter under high pressures

    International Nuclear Information System (INIS)

    Sikka, S.K.; Gupta, Satish C.; Godwal, B.K.

    1997-01-01

The use of pressure as a thermodynamic variable for studying condensed matter has become very important in recent years. Its main effect is to reduce the volume of a substance. Thus, in some sense, it mimics phenomena taking place during the cohesion of solids, such as pressure ionization, modifications of electronic properties and phase changes. Some of the phase changes under pressure lead to the synthesis of new materials. The recent discovery of high-Tc superconductivity in YBa2Cu3O7 may be indirectly attributed to the pressure effect. In applied fields such as the simulation of reactor accidents, the design of inertial confinement fusion schemes, and the understanding of the rock-mechanical effects of shock propagation in the earth due to underground nuclear explosions, the pressure-versus-volume relations of condensed matter are a vital input. This volume, containing the proceedings of the International Conference on Condensed Matter Under High Pressure, covers various aspects of high pressure pertaining to equations of state, phase transitions, electronic, optical and transport properties of solids, atomic and molecular studies, shock induced reactions, energetic materials, materials synthesis, mineral physics, geophysical and planetary sciences, biological applications and food processing, and advances in experimental techniques and numerical simulations. Papers relevant to INIS are indexed separately

  13. Scheme Program Documentation Tools

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2004-01-01

    are separate and intended for different documentation purposes they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which---in a systematic way---makes an XML language available...... as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource....

  14. A Memory Efficient Network Encryption Scheme

    Science.gov (United States)

    El-Fotouh, Mohamed Abo; Diepold, Klaus

In this paper, we study the two encryption schemes most widely used in network applications. Shortcomings were found in both schemes, as they consume either more memory to gain high throughput or less memory at the cost of low throughput. As the number of Internet users increases each day, the need has arisen for a scheme that has low memory requirements and at the same time possesses high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.

  15. Quantum field theory in curved spacetime and the dark matter problem

    International Nuclear Information System (INIS)

    Grib, A. A.; Pavlov, Yu. V.

    2007-01-01

Quantum field theory in nonstationary curved Friedmann spacetime leads to the phenomenon of creation of massive particles. The hypothesis that at the end of inflation gravitation creates from the vacuum superheavy particles which decay into quarks and leptons, leading to the observed baryon charge, is investigated. Taking a complex scalar field for these particles, in analogy with K0-meson theory, one obtains two components - a long-lived and a short-lived one - so that the long-lived component, after breaking of the Grand Unification symmetry, has a long lifetime and is observed today as dark matter. The hypothesis that ultra-high-energy cosmic rays occur as a manifestation of superheavy dark matter is considered and some experimental possibilities of the proposed scheme are analyzed

  16. A comparison of temporal and location-based sampling strategies for global positioning system-triggered electronic diaries.

    Science.gov (United States)

    Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander

    2016-11-21

    Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
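    The trigger decision described above - query the environmental database at the participant's current coordinates and prompt only under certain conditions - can be sketched as follows. This is a minimal illustration in Python; the land-use classes, distance and cooldown thresholds, and function names are assumptions for illustration, not the parameters used in the study.

```python
# Minimal sketch of a location-based trigger rule for ambulatory assessment.
# Land-use categories, thresholds, and the cooldown period are illustrative
# assumptions, not the parameters used in the study.
import math

RARE_LAND_USES = {"park", "forest", "water"}   # hypothetical "rarely visited" classes
MIN_INTERVAL_S = 30 * 60                       # do not prompt more often than every 30 min
MIN_DISTANCE_M = 200                           # require some movement since the last prompt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_trigger(fix, last_trigger, land_use_at):
    """Decide whether to prompt a self-report at this GPS fix.

    fix / last_trigger: dicts with 'lat', 'lon', 'timestamp' (seconds);
    land_use_at: callable (lat, lon) -> land-use class from the environmental database.
    """
    if last_trigger is not None:
        if fix["timestamp"] - last_trigger["timestamp"] < MIN_INTERVAL_S:
            return False                       # respect the minimum inter-prompt interval
        if haversine_m(fix["lat"], fix["lon"],
                       last_trigger["lat"], last_trigger["lon"]) < MIN_DISTANCE_M:
            return False                       # still at (roughly) the same place
    # Prompt preferentially at rarely visited land-use types.
    return land_use_at(fix["lat"], fix["lon"]) in RARE_LAND_USES

# Example call with a hypothetical land-use lookup:
print(should_trigger({"lat": 49.41, "lon": 8.69, "timestamp": 7200},
                     {"lat": 49.40, "lon": 8.68, "timestamp": 0},
                     lambda lat, lon: "park"))
```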

  17. Modified Aggressive Packet Combining Scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2010-06-01

In this letter, a few schemes are presented to improve the performance of the aggressive packet combining scheme (APC). To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ have been suggested by recent research and can be found in the literature. The important modifications are the majority packet combining scheme (MjPC, proposed by Wicker), the packet combining scheme (PC, proposed by Chakraborty), the modified packet combining scheme (MPC, proposed by Bhunia), and the packet reversed packet combining (PRPC, proposed by Bhunia) scheme. These modifications are appropriate for improving the throughput of conventional ARQ protocols. Leung proposed the idea of APC for error control in wireless networks, with the basic objective of error control in uplink wireless data networks. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)
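    The packet-combining idea that APC and its variants build on can be illustrated with a bit-wise majority vote over retransmitted erroneous copies of the same packet. The sketch below is a generic illustration in Python, not the specific algorithm of any of the cited schemes; packet framing and CRC checking are omitted.

```python
# Minimal sketch of bit-wise majority combining of erroneous copies of the same
# packet, the idea underlying majority packet combining (MjPC). Framing and CRC
# verification of the combined packet are left out of this illustration.
def majority_combine(copies):
    """Combine equal-length packets (lists of 0/1 bits) by per-bit majority vote."""
    n = len(copies)
    length = len(copies[0])
    combined = []
    for i in range(length):
        ones = sum(copy[i] for copy in copies)
        combined.append(1 if ones * 2 > n else 0)
    return combined

# Example: three received copies of an 8-bit packet, each with a single bit error.
received = [
    [1, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 0],
]
print(majority_combine(received))   # -> [1, 0, 1, 0, 0, 0, 1, 0]
```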

  18. Some aspects of the interaction of radiation with matter

    International Nuclear Information System (INIS)

    Cohen-Tannoudji, C.

    1984-01-01

Three lectures concerning some aspects of the interaction of radiation with matter are given. The contents of these lectures are: a) a survey of some new possibilities opened by lasers; b) atoms in intense resonant or quasi-resonant laser beams, a discussion of several theoretical approaches to resonance fluorescence, and new experimental schemes currently being devised; c) simple physical pictures to describe radiative corrections, with application to the Lamb shift and to the anomalous gyromagnetic factor of leptons, and the atom in a photon vacuum. (L.C.) [pt

  19. Laboratory Measurements of Particulate Matter Concentrations from Asphalt Pavement Abrasion

    Directory of Open Access Journals (Sweden)

    Fullová Daša

    2016-12-01

Full Text Available The issue of emissions from road traffic is compounded by the fact that the number of vehicles and the kilometres driven increase each year. Road traffic is one of the main sources of particulate matter, and the still-increasing traffic volume has an adverse impact on the longevity of pavements and on the environment. Vehicle motion causes mechanical wearing of the asphalt pavement surface - the wearing course - by vehicle tyres. The contribution deals with the abrasion of bituminous wearing courses of pavements. The asphalt mixtures of the wearing courses are compared in terms of mechanically separated particulate matter. The samples of asphalt mixtures were rutted in a wheel tracking machine. The particulate matter measurements were performed under laboratory conditions. The experimental laboratory measurements make it possible to sample particulates without contamination from exhaust emissions, abraded particles from vehicles, resuspension of road dust and climate effects. The contribution offers partial results of measurements on six trial samples of asphalt mixtures with different compositions. It presents particulate matter morphology and a comparison of the rutted asphalt samples in terms of PM mass concentrations and chemical composition.

  20. Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion

    Directory of Open Access Journals (Sweden)

    Marek Miskowicz

    2007-01-01

Full Text Available Rapid progress in intelligent sensing technology creates new interest in the analysis and design of non-conventional sampling schemes. An investigation of event-based sampling according to the integral criterion is presented in this paper. The investigated sampling scheme is an extension of the pure linear send-on-delta/level-crossing algorithm utilized for reporting the state of objects monitored by intelligent sensors. The motivation for using event-based integral sampling is outlined. Related work on adaptive sampling is summarized. Analytical closed-form formulas for the evaluation of the mean rate of event-based traffic and the asymptotic integral sampling effectiveness are derived. Simulation results verifying the analytical formulas are reported. The effectiveness of the integral sampling is compared with the related linear send-on-delta/level-crossing scheme. The calculation of the asymptotic effectiveness for common signals, which model the state evolution of dynamic systems in time, is exemplified.
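    A minimal sketch contrasting the linear send-on-delta rule with an integral (send-on-area) criterion of the kind analysed here is given below; the threshold values, the test signal, and the discrete-time formulation are illustrative assumptions.

```python
# Minimal sketch of two event-based sampling rules. The thresholds and the test
# signal are arbitrary illustrative values, not taken from the cited analysis.
def send_on_delta(samples, delta):
    """Report a sample whenever the signal has changed by at least `delta`
    since the last report (linear send-on-delta / level-crossing rule)."""
    events, last = [], samples[0]
    for t, x in enumerate(samples):
        if abs(x - last) >= delta:
            events.append((t, x))
            last = x
    return events

def send_on_area(samples, dt, area_threshold):
    """Report a sample whenever the time integral of |x(t) - x(last report)|
    accumulated since the last report exceeds `area_threshold` (integral criterion)."""
    events, last, area = [], samples[0], 0.0
    for t, x in enumerate(samples):
        area += abs(x - last) * dt
        if area >= area_threshold:
            events.append((t, x))
            last, area = x, 0.0
    return events

signal = [0.0, 0.1, 0.4, 0.5, 0.45, 1.2, 1.25, 1.3]
print(send_on_delta(signal, delta=0.3))
print(send_on_area(signal, dt=1.0, area_threshold=0.5))
```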

  1. Bonus schemes and trading activity

    NARCIS (Netherlands)

    Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.

    2014-01-01

    Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of

  2. Development of the KOSHA Proficiency Testing Scheme on Asbestos Analysis in Korea

    Directory of Open Access Journals (Sweden)

    Jiwoon Kwon

    2017-09-01

Full Text Available This commentary presents the regulatory background and the development of the national proficiency testing (PT) scheme on asbestos analysis in the Republic of Korea. Since 2009, under the amended Occupational Safety and Health Act, the survey of asbestos in buildings and the clearance test of asbestos removal works have been mandated to be carried out by laboratories designated by the Ministry of Employment and Labor (MOEL) in the Republic of Korea. To assess the performance of asbestos laboratories, a PT scheme on asbestos analysis was launched by the Korea Occupational Safety and Health Agency (KOSHA) on behalf of the MOEL in 2007. Participating laboratories are evaluated once a year for fiber counting and bulk asbestos analysis by phase contrast microscopy and polarized light microscopy, respectively. Currently, the number of laboratory enrollments is > 200, and the percentage of laboratories that pass is > 90%. The current status and several significant changes in the operation, sample preparation, and statistics of assigning the reference values of the KOSHA PT scheme on asbestos analysis are presented. A critical retrospective based on the experience of operating the KOSHA PT scheme suggests considerations for developing a new national PT scheme for asbestos analysis.

  3. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970's to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  4. A novel digitization scheme with FPGA-base TDC for beam loss monitors operating at cryogenic temperature

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jinyuan; Warner, Arden; /Fermilab

    2011-11-01

Recycling integrators are common current-to-frequency converting circuits for measurements of low currents such as those produced by Fermilab's cryogenic ionization chambers. In typical digitization/readout schemes, a counter is utilized to accumulate the number of pulses generated by the recycling integrator to adequately digitize the total charge. In order to calculate the current with reasonable resolution (e.g., 7-8 bits), hundreds of pulses must be accumulated, which corresponds to a long sampling period, i.e., a very low sampling rate. In our new scheme, an FPGA-based Time-to-Digital Converter (TDC) is utilized to measure the time intervals between the pulses output from the recycling integrator. Using this method, a sample of the current can be obtained with good resolution (>10 bits) for each pulse. This effectively increases the sampling rate by hundreds of times for the same recycling integrator front-end electronics. This scheme provides a fast response to beam loss and is potentially suitable for accelerator protection applications. Moreover, the method is also self-zero-suppressed, i.e., it produces more data when the beam loss is high while it produces significantly less data when the beam loss is low.
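    The gain in sampling rate comes from estimating the current from a single inter-pulse interval rather than from a long count of pulses. A minimal sketch of the two readout strategies is given below; the charge per recycling-integrator pulse and the numerical values are assumptions for illustration, not Fermilab's actual front-end parameters.

```python
# Minimal sketch of the two digitization strategies for a recycling integrator.
# Each output pulse corresponds to a fixed recycled charge Q_PULSE (assumed value).
Q_PULSE = 1.0e-9      # charge per recycling-integrator pulse [C] (illustrative)

def current_from_counting(pulse_count, sample_period_s):
    """Counter readout: average current over a (long) sampling period."""
    return pulse_count * Q_PULSE / sample_period_s

def current_from_interval(interval_s):
    """TDC readout: one current sample per pulse, from the inter-pulse interval."""
    return Q_PULSE / interval_s

# With a 10 us inter-pulse interval the TDC gives a current sample per pulse,
# whereas the counter needs many pulses before it can resolve the same current.
print(current_from_interval(10e-6))        # 1e-4 A from a single interval
print(current_from_counting(100, 1e-3))    # 1e-4 A averaged over 1 ms (100 pulses)
```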

  5. Conditional selectivity performance of Indian mutual fund schemes: An empirical study

    Directory of Open Access Journals (Sweden)

    Subrata Roy

    2015-06-01

Full Text Available The present study examines the stock-selection performance of a sample of open-ended equity mutual fund schemes of the Birla Sun Life Mutual Fund Company, based on traditional and conditional performance measures. It is generally expected that the inclusion of relevant predetermined public information variables in the conditional CAPM provides better performance estimates than the traditional measures. The study reports that, after inclusion of the conditioning public information variables, the selectivity performance of the schemes improves dramatically relative to the traditional measure; it also finds that the conditional measure is superior to the traditional measure in statistical tests.
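    A minimal sketch of a conditional performance regression of this kind - an intercept (selectivity) plus a market beta that varies with lagged public-information variables - is given below. The synthetic data, variable names, and the plain least-squares estimator are illustrative assumptions, not the study's actual specification.

```python
# Minimal sketch of a conditional selectivity regression: the fund's excess return
# is regressed on the market excess return and on the market return scaled by a
# demeaned, lagged public-information variable; the intercept is the conditional
# alpha. All data here are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
T = 120
r_mkt = rng.normal(0.01, 0.04, T)          # market excess return
z = rng.normal(0.0, 1.0, T)                # demeaned lagged public-information variable
# Synthetic fund return with a small positive conditional alpha and a time-varying beta:
r_fund = 0.002 + 0.9 * r_mkt + 0.2 * z * r_mkt + rng.normal(0.0, 0.01, T)

# Design matrix: constant, market return, and information-scaled market return.
X = np.column_stack([np.ones(T), r_mkt, z * r_mkt])
coef, *_ = np.linalg.lstsq(X, r_fund, rcond=None)
alpha, beta0, beta_z = coef
print(f"conditional alpha = {alpha:.4f}, beta0 = {beta0:.2f}, beta_z = {beta_z:.2f}")
```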

  6. Applicability of FTIR-spectroscopy for characterizing waste organic matter

    International Nuclear Information System (INIS)

    Smidt, E.

    2001-12-01

The state and development of waste organic matter were characterized by means of FTIR spectroscopy. Owing to the interaction of infrared light with matter, energy is absorbed by chemical functional groups. Chemical preparation steps are not necessary, and the method therefore offers more holistic information about the material. The first part of the experiments focussed on spectra of different waste materials representing various stages of decomposition. Owing to characteristics in the fingerprint region, the identity of wastes is provable. Heights of significant bands in the spectrum were measured and relative absorbances were calculated. Changes of relative absorbances indicate the development of organic matter during decomposition. The organic matter of waste samples was compared to organic matter originating from analogous natural processes (peat, soil). The second part of the experiments concentrated on a composting process over a period of 260 days. Spectral characteristics of the samples were compared to their chemical, physical and biological data. The change of relative absorbances was reflected by the conventional parameters. In line with the development of the entire sample, the humic acids underwent changes as well. For practical use the method offers several possibilities: monitoring of a process, comparison of different processes, quality control of products originating from waste materials, and proof of their identity. (author)

  7. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

In this paper the authors discuss the problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired...... by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct the wanted signal even if some of the uniform samples were randomly lost, e.g. due to ADC saturation. An experiment which tests the proposed method in practice...

  8. Distributed database kriging for adaptive sampling (D2KAS)

    International Nuclear Information System (INIS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-01-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters
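    A minimal sketch of the lookup-or-simulate decision is given below: simple kriging with a Gaussian covariance provides an estimate and an error variance from neighbouring database entries, and a full micro-scale simulation is launched only when that variance exceeds a tolerance. The covariance model, parameters, and function names are illustrative assumptions, not the D2KAS implementation.

```python
# Minimal sketch of kriging-based prediction with an uncertainty-driven fallback
# to a costly simulation. Covariance model and tolerances are assumptions.
import numpy as np

def gaussian_cov(d, sigma2=1.0, ell=1.0):
    """Gaussian (squared-exponential) covariance as a function of distance."""
    return sigma2 * np.exp(-(d / ell) ** 2)

def simple_krige(x_query, x_known, y_known, sigma2=1.0, ell=1.0):
    """Zero-mean simple kriging: return (prediction, kriging variance) at x_query."""
    d_kk = np.linalg.norm(x_known[:, None, :] - x_known[None, :, :], axis=-1)
    d_qk = np.linalg.norm(x_known - x_query, axis=-1)
    K = gaussian_cov(d_kk, sigma2, ell) + 1e-10 * np.eye(len(x_known))
    k = gaussian_cov(d_qk, sigma2, ell)
    w = np.linalg.solve(K, k)              # kriging weights
    pred = w @ y_known                     # weighted average of the neighbours
    var = sigma2 - w @ k                   # kriging (error) variance
    return pred, max(var, 0.0)

def evaluate(x_query, db_x, db_y, run_md, tol=1e-2):
    """Use the database prediction if its uncertainty is small, else simulate."""
    pred, var = simple_krige(x_query, db_x, db_y)
    return pred if var < tol else run_md(x_query)

db_x = np.array([[0.0], [0.5], [1.0]])     # parameter-space locations already in the DB
db_y = np.array([0.0, 0.25, 1.0])          # stored micro-scale results (illustrative)
print(simple_krige(np.array([0.4]), db_x, db_y))
```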

  9. A correction scheme for thermal conductivity measurement using the comparative cut-bar technique based on 3D numerical simulation

    International Nuclear Information System (INIS)

    Xing, Changhu; Folsom, Charles; Jensen, Colby; Ban, Heng; Marshall, Douglas W

    2014-01-01

As an important factor affecting the accuracy of thermal conductivity measurement, the systematic (bias) error in the guarded comparative axial heat flow (cut-bar) method has mostly been neglected by previous research. This bias is primarily due to the thermal conductivity mismatch between the sample and the meter bars (reference), which is common for a sample of unknown thermal conductivity. A correction scheme, based on finite element simulation of the measurement system, is proposed to reduce the magnitude of the overall measurement uncertainty. The scheme was experimentally validated by applying corrections to four types of sample measurements in which the specimen thermal conductivity is much smaller than, slightly smaller than, equal to, or much larger than that of the meter bar. As an alternative to the optimum guarding technique proposed earlier, the correction scheme can be used to minimize the uncertainty contribution from the measurement system under non-optimal guarding conditions. It is especially necessary for large thermal conductivity mismatches between the sample and the meter bars. (paper)
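    A minimal sketch of the one-dimensional cut-bar estimate and of applying a simulation-derived correction factor is given below; the geometry, the numerical values, and the single multiplicative correction are illustrative assumptions rather than the paper's finite-element-based scheme.

```python
# Minimal sketch of the comparative cut-bar estimate with a bias correction.
# All numbers, and the use of a single multiplicative correction factor, are
# placeholders for illustration, not values from the cited work.
def cut_bar_conductivity(k_meter, dT_meter, dT_sample, L_meter, L_sample,
                         A_meter=1.0, A_sample=1.0):
    """One-dimensional estimate: equate the axial heat flow through the meter
    bar and the sample, then solve for the apparent sample conductivity."""
    q = k_meter * A_meter * dT_meter / L_meter          # heat flow from the meter bar
    return q * L_sample / (A_sample * dT_sample)        # apparent sample conductivity

# Hypothetical bias correction derived from a 3D finite-element model of the rig:
correction_factor = 1.03     # apparent k assumed to underestimate true k by ~3% here
k_apparent = cut_bar_conductivity(k_meter=20.0, dT_meter=5.0, dT_sample=8.0,
                                  L_meter=0.05, L_sample=0.05)
k_corrected = correction_factor * k_apparent
print(k_apparent, k_corrected)
```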

  10. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generation and verification of threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given; pursuing them could reduce the level of counterfeit electronic documents signed by a group of users.

  11. A Spatial Domain Quantum Watermarking Scheme

    International Nuclear Information System (INIS)

    Wei Zhan-Hong; Chen Xiu-Bo; Niu Xin-Xin; Yang Yi-Xian; Xu Shu-Jiang

    2016-01-01

This paper presents a spatial domain quantum watermarking scheme. For a quantum watermarking scheme, a feasible quantum circuit is key to achieving it, and this paper gives such a circuit for the presented scheme. In order to construct the circuit, a new quantum multi-control rotation gate, which can be realised with basic quantum gates, is designed. With this quantum circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides reversing the given quantum circuit, the paper gives another watermark extraction algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Unlike other quantum watermarking schemes, all of the given quantum circuits can be implemented with basic quantum gates. Moreover, the scheme is a spatial domain watermarking scheme and is not based on any transform algorithm on quantum images. Meanwhile, it ensures that the watermark remains secure even if it has been found. With the given quantum circuit, this paper implements simulation experiments for the presented scheme. The experimental results show that the scheme performs well in terms of visual quality and embedding capacity. (paper)

  12. Sample container for neutron activation analysis

    International Nuclear Information System (INIS)

    Lersmacher, B.; Verheijke, M.L.; Jaspers, H.J.

    1983-01-01

    The sample container avoids contaminating the sample substance by diffusion of foreign matter from the wall of the sample container into the sample. It cannot be activated, so that the results of measurements are not falsified by a radioactive container wall. It consists of solid carbon. (orig./HP) [de

  13. Quantification of energy savings from Ireland's Home Energy Saving scheme. An ex post billing analysis

    Energy Technology Data Exchange (ETDEWEB)

    Scheer, J.; Clancy, M.; Ni Hogain, S. [Sustainable Energy Authority of Ireland, Wilton Park House, Wilton Terrace, Dublin 2 (Ireland)

    2013-02-15

This paper quantifies the energy savings realised by a sample of participants in the Sustainable Energy Authority of Ireland's Home Energy Saving (HES) residential retrofit scheme (currently branded as the Better Energy Homes scheme), through an ex post billing analysis. The billing data are used to evaluate: (1) the reduction in gas consumption of the sample between pre- (2008) and post- (2010) scheme participation when compared to the gas consumption of a control group, (2) an estimate of the shortfall when this result is compared to engineering-type ex ante savings estimates and (3) the degree to which these results may apply to the wider population. All dwellings in the study underwent energy efficiency improvements, including insulation upgrades (wall and/or roof), installation of high-efficiency boilers and/or improved heating controls, as part of the HES scheme. Metered gas use data for the 210 households were obtained from meter operators for a number of years preceding dwelling upgrades and for a post-intervention period of 1 year. Dwelling characteristics and some household behavioural data were obtained through a survey of the sample. The gas network operator provided anonymised data on gas usage for 640,000 customers collected over the same period as the HES sample. Dwelling type data provided with the population dataset enabled matching with the HES sample to increase the internal validity of the comparison between the control (matched population data) and the treatment (HES sample). Using a difference-in-difference methodology, the change in demand of the sample was compared with that of the matched population subset of gas-using customers in Ireland over the same time period. The mean reduction in gas demand as a result of energy efficiency upgrades for the HES sample is estimated as 21% or 3,664 ± 603 kWh between 2008 and 2010. An ex ante estimate of average energy savings, based on engineering calculations (u value reductions and improved boiler
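    The difference-in-difference logic is simple enough to state in a few lines: the pre/post change in mean gas demand of the treated sample is compared with the change in the matched control group over the same period. The sketch below uses invented numbers purely for illustration, not the study's data.

```python
# Minimal sketch of a difference-in-differences estimate of retrofit savings.
# All figures below are hypothetical mean annual gas demands (kWh/household).
def did_savings(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences estimate of the programme effect (kWh)."""
    treated_change = treat_post - treat_pre
    control_change = control_post - control_pre
    return treated_change - control_change

effect = did_savings(treat_pre=17500, treat_post=13000,
                     control_pre=17000, control_post=16200)
print(effect)   # -3700 kWh: the saving attributed to the retrofit in this toy example
```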

  14. Labeling schemes for bounded degree graphs

    DEFF Research Database (Denmark)

    Adjiashvili, David; Rotbart, Noy Galil

    2014-01-01

    We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar...... graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...

  15. From meson- and photon-nucleon scattering to vector mesons in nuclear matter

    International Nuclear Information System (INIS)

    Wolf, Gy.; Lutz, M.F.M.; Friman, B.

    2003-01-01

A relativistic and unitary approach to pion- and photon-nucleon scattering taking into account the πN, ρN, ωN, ηN, πΔ, KΛ and KΣ channels is presented. The scheme dynamically generates the s- and d-wave baryon resonances N(1535), N(1650), N(1520) and N(1700), as well as Δ(1620) and Δ(1700), in terms of quasi-local two-body interaction terms. A fair description of the experimental data relevant to the properties of slow vector mesons in nuclear matter is obtained. The resulting s-wave ρ- and ω-meson-nucleon scattering amplitudes, which define the leading density modification of the ρ- and ω-meson spectral functions in nuclear matter, are presented. (author)

  16. Applications of Atomic Systems in Quantum Simulation, Quantum Computation and Topological Phases of Matter

    Science.gov (United States)

    Wang, Shengtao

The ability to precisely and coherently control atomic systems has improved dramatically in the last two decades, driving remarkable advancements in quantum computation and simulation. In recent years, atomic and atom-like systems have also served as a platform to study topological phases of matter and non-equilibrium many-body physics. Integrated with rapid theoretical progress, the employment of these systems is expanding the realm of our understanding of a range of physical phenomena. In this dissertation, I draw on state-of-the-art experimental technology to develop several new ideas for controlling and applying atomic systems. In the first part of this dissertation, we propose several novel schemes to realize, detect, and probe topological phases in atomic and atom-like systems. We first theoretically study the intriguing properties of Hopf insulators, a peculiar type of topological insulators beyond the standard classification paradigm of topological phases. Using a solid-state quantum simulator, we report the first experimental observation of Hopf insulators. We demonstrate the Hopf fibration with fascinating topological links in the experiment, showing clear signals of topological phase transitions for the underlying Hamiltonian. Next, we propose a feasible experimental scheme to realize the chiral topological insulator in three dimensions. These are a type of topological insulators protected by the chiral symmetry and have thus far remained unobserved in experiment. We then introduce a method to directly measure topological invariants in cold-atom experiments. This detection scheme is general and applicable to probing different topological insulators in any spatial dimension. In another study, we theoretically discover a new type of topological gapless ring, dubbed the Weyl exceptional ring, in three-dimensional dissipative cold atomic systems. In the second part of this dissertation, we focus on the application of atomic systems in quantum computation

  17. First Investigations on the Energy Deposited in a D0 early separation scheme Dipole for the LHC upgrade

    CERN Document Server

    Hoa, C

    2007-01-01

This note gives the first results of energy-deposition calculations for a simplified model of an early-separation-scheme dipole D0, located 3.5 m from the IP. The Monte Carlo code FLUKA version 2006.3 has been used to model the multi-particle interactions and energy transport. After a short introduction to particle interactions with matter and power deposition processes, the FLUKA modelling is described, with benchmarked power deposition calculations on the TAS, the absorber located in front of the triplet quadrupoles. The power deposition results for the D0 early scheme are then discussed in detail, with the averaged and peak power density and the variations of the total heat load in the dipole with the longitudinal position and with the aperture diameter.

  18. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  19. A new processing scheme for ultra-high resolution direct infusion mass spectrometry data

    Science.gov (United States)

    Zielinski, Arthur T.; Kourtchev, Ivan; Bortolini, Claudio; Fuller, Stephen J.; Giorio, Chiara; Popoola, Olalekan A. M.; Bogialli, Sara; Tapparo, Andrea; Jones, Roderic L.; Kalberer, Markus

    2018-04-01

    High resolution, high accuracy mass spectrometry is widely used to characterise environmental or biological samples with highly complex composition enabling the identification of chemical composition of often unknown compounds. Despite instrumental advancements, the accurate molecular assignment of compounds acquired in high resolution mass spectra remains time consuming and requires automated algorithms, especially for samples covering a wide mass range and large numbers of compounds. A new processing scheme is introduced implementing filtering methods based on element assignment, instrumental error, and blank subtraction. Optional post-processing incorporates common ion selection across replicate measurements and shoulder ion removal. The scheme allows both positive and negative direct infusion electrospray ionisation (ESI) and atmospheric pressure photoionisation (APPI) acquisition with the same programs. An example application to atmospheric organic aerosol samples using an Orbitrap mass spectrometer is reported for both ionisation techniques resulting in final spectra with 0.8% and 8.4% of the peaks retained from the raw spectra for APPI positive and ESI negative acquisition, respectively.
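    A minimal sketch of the filtering logic - keep a peak only if an elemental formula can be assigned within the instrumental mass tolerance and the peak is well above the blank - is given below. The data structures, thresholds, and function names are assumptions; formula generation itself is stubbed out and is not the cited program's implementation.

```python
# Minimal sketch of peak filtering by formula assignment, instrumental error,
# and blank subtraction. Thresholds and data structures are illustrative only.
def within_tolerance(measured_mz, candidate_mz, tol_ppm=3.0):
    """Instrumental-error filter: mass error in parts per million."""
    return abs(measured_mz - candidate_mz) / candidate_mz * 1e6 <= tol_ppm

def passes_blank(sample_intensity, blank_intensity, ratio=10.0):
    """Blank-subtraction filter: keep peaks far above the blank signal."""
    return sample_intensity >= ratio * blank_intensity

def filter_peaks(peaks, candidate_formulas, blank):
    """peaks: list of (m/z, intensity); candidate_formulas: m/z -> list of exact
    masses of assignable formulas; blank: m/z -> blank intensity (0 if absent)."""
    kept = []
    for mz, intensity in peaks:
        assigned = any(within_tolerance(mz, m) for m in candidate_formulas.get(mz, []))
        if assigned and passes_blank(intensity, blank.get(mz, 0.0)):
            kept.append((mz, intensity))
    return kept

peaks = [(201.1122, 5.0e5), (305.2010, 2.0e4)]
formulas = {201.1122: [201.1127], 305.2010: []}          # hypothetical assignments
blank = {201.1122: 1.0e4}
print(filter_peaks(peaks, formulas, blank))               # only the assignable peak survives
```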

  20. Tabled Execution in Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Willcock, J J; Lumsdaine, A; Quinlan, D J

    2008-08-19

Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
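    The memoisation core of tabling - store results of calls and detect calls that are already in progress - can be sketched in a few lines. The sketch below is a Python analogue for illustration only; it does not reproduce the paper's continuation-passing-style Scheme implementation or full tabling semantics.

```python
# Python analogue of the memoisation aspect of tabled execution: results of calls
# are stored in a table, and calls already in progress are detected so that a
# re-entrant call with the same arguments does not loop forever. This only
# sketches the idea, not the paper's Scheme implementation.
IN_PROGRESS = object()

def tabled(fn):
    table = {}
    def wrapper(*args):
        if args in table:
            if table[args] is IN_PROGRESS:
                return None          # a call with these arguments is already active
            return table[args]
        table[args] = IN_PROGRESS
        result = fn(*args)
        table[args] = result
        return result
    return wrapper

@tabled
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))   # computed in linear time thanks to the table
```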

  1. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA, a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.
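    A minimal sketch of weighted-sum score-level fusion with min-max normalisation - one of the simpler fusion options used in such schemes - is given below; the weights, which the paper optimises (e.g. with BSA), are arbitrary here, and the score values are invented.

```python
# Minimal sketch of score-level fusion of face and iris matcher scores.
# Weights and scores are illustrative; the cited work tunes the weights with BSA.
def min_max(scores):
    """Min-max normalise a list of matcher scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(face_scores, iris_scores, w_face=0.6, w_iris=0.4):
    """Weighted-sum fusion of normalised face and iris scores."""
    f, i = min_max(face_scores), min_max(iris_scores)
    return [w_face * a + w_iris * b for a, b in zip(f, i)]

print(fuse([0.2, 0.7, 0.9], [0.4, 0.3, 0.8]))
```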

  2. An Extended Multilocus Sequence Typing (MLST) Scheme for Rapid Direct Typing of Leptospira from Clinical Samples

    OpenAIRE

    Weiss, Sabrina; Menezes, Angela; Woods, Kate; Chanthongthip, Anisone; Dittrich, Sabine; Opoku-Boateng, Agatha; Kimuli, Maimuna; Chalker, Victoria

    2016-01-01

    Background Rapid typing of Leptospira is currently impaired by requiring time consuming culture of leptospires. The objective of this study was to develop an assay that provides multilocus sequence typing (MLST) data direct from patient specimens while minimising costs for subsequent sequencing. Methodology and Findings An existing PCR based MLST scheme was modified by designing nested primers including anchors for facilitated subsequent sequencing. The assay was applied to various specimen t...

  3. Tritium in organic matter around Krsko Nuclear Power Plant

    International Nuclear Information System (INIS)

    Kristof, Romana; Zorko, Benjamin; Kozar Logar, Jasmina; Kosenina, Suzana

    2017-01-01

The aim of the research was to obtain the first results on tritium in the organic matter of environmental samples in the vicinity of the Krsko NPP. The emphasis was on the layout of a suitable sampling network for crops and fruits in the nearby agricultural area. A method for the determination of tritium in organic matter in the form of Tissue Free Water Tritium (TFWT) and Organically Bound Tritium (OBT) has been implemented. The capabilities of the methods were tested on real environmental samples, and the findings were compared to modelled tritium activities from atmospheric releases and to literature-based results for TFWT and OBT. (author)

  4. Automatic remote sampling and delivery system incorporating decontamination and disposal of sample bottles

    International Nuclear Information System (INIS)

    Savarkar, V.K.; Mishra, A.K.; Bajpai, D.D.; Nair, M.K.T.

    1990-01-01

The present generation of reprocessing plants has sampling and delivery systems that have to be operated manually, with the associated problems. The complete automation and remotisation of the sampling system has hence been considered to reduce manual intervention and personnel exposure. As a part of this scheme, an attempt has been made to automate and remotise the various steps in the sampling system. This paper discusses in detail the development work carried out in this area as well as the tests conducted to incorporate the same in the existing plants. (author). 3 figs

  5. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    International Nuclear Information System (INIS)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena

    2015-01-01

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the

  6. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    Energy Technology Data Exchange (ETDEWEB)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra [Department of Medical Physics, School of Medicine,University of Patras, Patras 26504 (Greece); Kalogeropoulou, Christina [Department of Radiology, School of Medicine, University of Patras, Patras 26504 (Greece); Pratikakis, Ioannis [Department of Electrical and Computer Engineering, Democritus University of Thrace, Xanthi 67100 (Greece); Costaridou, Lena, E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece)

    2015-08-15

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the

  7. Matter-antimatter and matter-matter interactions at intermediate energies

    International Nuclear Information System (INIS)

    Santos, Antonio Carlos Fontes dos

    2002-01-01

This article presents some of the recent experimental advances in the study of antimatter-matter and matter-matter interactions; some of the subtle differences observed experimentally have stimulated great theoretical effort aimed at explaining the results

  8. Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF

    KAUST Repository

    Hoteit, Ibrahim

    2015-03-17

    The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed with noise sampled from the distribution of the observational errors. This was shown to introduce noise into the system and may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, which is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observations’ perturbations sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix with one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Results from numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs even when implemented with relatively small ensembles.

  9. Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF

    KAUST Repository

    Hoteit, Ibrahim; Pham, D.-T.; El Gharamti, Mohamad; Luo, X.

    2015-01-01

    The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed with noise sampled from the distribution of the observational errors. This was shown to introduce noise into the system and may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, which is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observations’ perturbations sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix with one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Results from numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs even when implemented with relatively small ensembles.
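    For reference, a minimal sketch of the stochastic (perturbed-observation) EnKF analysis step that this work seeks to improve is given below; the dimensions, observation operator, and noise levels are illustrative, and the paper's serial mitigation scheme itself is not implemented here.

```python
# Minimal sketch of the stochastic EnKF analysis step: each member is updated
# with an observation perturbed by noise drawn from the observation-error
# distribution. Sizes and noise levels are arbitrary illustrative choices.
import numpy as np

def enkf_analysis(X_f, y, H, R, rng):
    """X_f: (n, N) forecast ensemble; y: (m,) observation; H: (m, n) operator;
    R: (m, m) observation-error covariance. Returns the analysis ensemble."""
    n, N = X_f.shape
    x_mean = X_f.mean(axis=1, keepdims=True)
    A = X_f - x_mean                                   # ensemble anomalies
    P_f = A @ A.T / (N - 1)                            # sample forecast covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X_f + K @ (Y_pert - H @ X_f)                # perturbed-observation update

rng = np.random.default_rng(1)
X_f = rng.normal(size=(3, 20))                         # 3 state variables, 20 members
H = np.array([[1.0, 0.0, 0.0]])                        # observe the first variable
R = np.array([[0.1]])
X_a = enkf_analysis(X_f, np.array([0.5]), H, R, rng)
print(X_a.shape)                                       # (3, 20)
```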

  10. Toward a nomenclature and dosimetric scheme applicable to all radiations

    Energy Technology Data Exchange (ETDEWEB)

    Rupert, C S; Latarjet, R [Texas Univ., Dallas (USA)

    1978-07-01

    An informal Joint Working Group on Radiation Quantities, consisting of representatives of the ICRU and other international organizations was initiated at the International Congress of Radiation Research in 1974. The conclusions of a meeting of the Group held in 1975 are summarised. Quantities are proposed to describe any type of radiation field in terms of the total amount of energy carried by the radiation and its distribution with respect to time, area, volume and solid angle, expressed in terms of either radiant energy (joules) or number of particles. If this general approach is agreed to by the parent organizations and others the Group will go on to recommend quantities to represent the interactions of fields with matter and to provide a dosimetric scheme usable with all types of radiation.

  11. Origins and challenges of viral dark matter.

    Science.gov (United States)

    Krishnamurthy, Siddharth R; Wang, David

    2017-07-15

The accurate classification of viral dark matter - metagenomic sequences that originate from viruses but do not align to any reference virus sequences - is one of the major obstacles in comprehensively defining the virome. Depending on the sample, viral dark matter can make up anywhere from 40 to 90% of sequences. This review focuses on the specific nature of dark matter as it relates to viral sequences. We identify three factors that contribute to the existence of viral dark matter: the divergence and length of virus sequences, the limitations of alignment based classification, and limited representation of viruses in reference sequence databases. We then discuss current methods that have been developed to at least partially circumvent these limitations and thereby reduce the extent of viral dark matter. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these training samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with the subjective quality.
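    A minimal sketch of the underlying idea - train a regressor on (neighbourhood, centre-pixel) pairs and use it to estimate unknown pixels - is given below. scikit-learn's SVR, the 3x3 neighbourhood, and the fixed hyperparameters stand in for the paper's PSO-tuned SVM and are assumptions for illustration.

```python
# Minimal sketch of SVM-based pixel interpolation: learn a mapping from the 8
# neighbours of a pixel to the pixel itself, then apply it to unknown pixels.
# Hyperparameters are fixed here; the cited scheme tunes them with improved PSO.
import numpy as np
from sklearn.svm import SVR

def neighbourhoods(img):
    """Yield (8-neighbour vector, centre value) pairs from a 2D grayscale array."""
    X, y = [], []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2].ravel()
            X.append(np.delete(patch, 4))     # the 8 surrounding pixels
            y.append(patch[4])                # the centre pixel
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
img = rng.random((32, 32))                    # stand-in for a real training image
X_train, y_train = neighbourhoods(img)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

# Estimate an unknown pixel from its 8 known neighbours:
unknown_neighbours = X_train[0]
print(model.predict(unknown_neighbours.reshape(1, -1)))
```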

  13. Sampling calorimeters in high energy physics

    International Nuclear Information System (INIS)

    Gordon, H.A.; Smith, S.D.

    1980-01-01

    Attention is given to sampling calorimeters - those instruments in which part of the shower is sampled in an active medium sandwiched between absorbing layers. A very cursory overview is presented of some fundamental aspects of sampling calorimeters. First the properties of shower development are described for both the electromagnetic and hadronic cases. Then examples of various readout schemes are discussed. Finally, some currently promising new ideas in calorimetry are described

  14. Functional renormalization group and Kohn-Sham scheme in density functional theory

    Science.gov (United States)

    Liang, Haozhao; Niu, Yifei; Hatsuda, Tetsuo

    2018-04-01

    Deriving accurate energy density functional is one of the central problems in condensed matter physics, nuclear physics, and quantum chemistry. We propose a novel method to deduce the energy density functional by combining the idea of the functional renormalization group and the Kohn-Sham scheme in density functional theory. The key idea is to solve the renormalization group flow for the effective action decomposed into the mean-field part and the correlation part. Also, we propose a simple practical method to quantify the uncertainty associated with the truncation of the correlation part. By taking the φ4 theory in zero dimension as a benchmark, we demonstrate that our method shows extremely fast convergence to the exact result even for the highly strong coupling regime.

  15. The variance quadtree algorithm: use for spatial sampling design

    NARCIS (Netherlands)

    Minasny, B.; McBratney, A.B.; Walvoort, D.J.J.

    2007-01-01

    Spatial sampling schemes are mainly developed to determine sampling locations that can cover the variation of environmental properties in the area of interest. Here we proposed the variance quadtree algorithm for sampling in an area with prior information represented as ancillary or secondary
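    A minimal sketch of variance-driven quadtree splitting for sampling design is given below: a cell of the ancillary-data grid is split into quadrants while its variance exceeds a threshold, and one sampling location is placed per leaf cell. The threshold, depth limit, and centre-of-cell placement rule are illustrative assumptions, not necessarily those of the proposed algorithm.

```python
# Minimal sketch of a variance quadtree for sampling design: more leaves (and
# hence more sampling locations) end up where the ancillary variable is variable.
import numpy as np

def variance_quadtree(grid, r0, c0, rows, cols, threshold, max_depth=5, depth=0):
    """Return leaf cells (r0, c0, rows, cols): a cell is split into quadrants
    while the variance of the ancillary data inside it exceeds the threshold."""
    cell = grid[r0:r0 + rows, c0:c0 + cols]
    if depth >= max_depth or rows < 2 or cols < 2 or cell.var() <= threshold:
        return [(r0, c0, rows, cols)]
    hr, hc = rows // 2, cols // 2
    leaves = []
    for dr, nr in ((0, hr), (hr, rows - hr)):
        for dc, nc in ((0, hc), (hc, cols - hc)):
            leaves += variance_quadtree(grid, r0 + dr, c0 + dc, nr, nc,
                                        threshold, max_depth, depth + 1)
    return leaves

def sample_locations(grid, threshold):
    """One sampling location per leaf cell (here: the cell centre)."""
    leaves = variance_quadtree(grid, 0, 0, grid.shape[0], grid.shape[1], threshold)
    return [(r + nr // 2, c + nc // 2) for r, c, nr, nc in leaves]

rng = np.random.default_rng(0)
ancillary = np.full((64, 64), 0.5)             # homogeneous background
ancillary[:, 32:] = rng.random((64, 32))       # heterogeneous right half
locations = sample_locations(ancillary, threshold=0.01)
print(len(locations))                          # most samples fall in the variable half
```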

  16. A comparison of surface water natural organic matter in raw filtered water samples, XAD, and reverse osmosis isolates

    Science.gov (United States)

    Maurice, P.A.; Pullin, M.J.; Cabaniss, S.E.; Zhou, Q.; Namjesnik-Dejanovic, K.; Aiken, G.R.

    2002-01-01

This research compared raw filtered waters (RFWs), XAD resin isolates (XAD-8 and XAD-4), and reverse osmosis (RO) isolates of several surface water samples from McDonalds Branch, a small freshwater fen in the New Jersey Pine Barrens (USA). RO and XAD-8 are two of the most common techniques used to isolate natural organic matter (NOM) for studies of composition and reactivity; therefore, it is important to understand how the isolates differ from bulk (unisolated) samples and from one another. However, any comparison between the isolation methods needs to consider that XAD-8 is specifically designed to isolate the humic fraction, whereas RO concentrates a broad range of organic matter and is not specific to humics. The comparison included, for all samples: weight average molecular weight (Mw), number average molecular weight (Mn), polydispersity (ρ), and absorbance at 280 nm normalized to moles C (ε280) (RFW and isolates); and, for isolates only: elemental analysis, % carbon distribution by 13C NMR, and aqueous FTIR spectra. As expected, RO isolation gave a higher yield of NOM than XAD-8, but also a higher ash content, especially Si and S. Mw decreased in the order: RO > XAD-8 > RFW > XAD-4. The Mw differences of the isolates compared with RFW may be due to selective isolation (fractionation), or possibly, in the case of RO, to condensation or coagulation during isolation. 13C NMR results were roughly similar for the two methods, but the XAD-8 isolate was slightly higher in 'aromatic' C and the RO isolate was slightly higher in heteroaliphatic and carbonyl C. Infrared spectra indicated a higher carboxyl content for the XAD-8 isolates and a higher ester:carboxyl ratio for the RO isolates. The spectroscopic data thus are consistent with selective isolation of more hydrophobic compounds by XAD-8, and also with potential ester hydrolysis during that process, although further study is needed to determine whether ester hydrolysis does indeed occur. Researchers choosing between XAD and RO

  17. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim

    2012-01-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.
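    A minimal sketch of the threshold-based (switched) scheduling idea is given below: users are polled in order and the first one whose channel quality exceeds its feedback threshold is scheduled, so only a few feedback messages are needed per slot. The thresholds, the fading model, and the fallback rule are illustrative assumptions, not the optimized values discussed in the paper.

```python
# Minimal sketch of threshold-based (switched) multiuser scheduling with reduced
# channel-state feedback. Thresholds and the exponential-SNR (Rayleigh) model
# are illustrative choices.
import random

def switched_scheduler(snr_per_user, thresholds, order):
    """Return (scheduled user, number of feedback messages used)."""
    feedback = 0
    for user in order:
        feedback += 1
        if snr_per_user[user] >= thresholds[user]:
            return user, feedback           # switching stops at the first acceptable user
    # No user exceeded its threshold: fall back to the last examined user.
    return order[-1], feedback

random.seed(0)
n_users = 8
snr = [random.expovariate(1.0) for _ in range(n_users)]   # Rayleigh fading -> exp. SNR
thresholds = [1.0] * n_users
order = list(range(n_users))                               # polling order (e.g. rotated per slot)
print(switched_scheduler(snr, thresholds, order))
```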

  18. Short-Term Saved Leave Scheme

    CERN Multimedia

    2007-01-01

As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme a...

  19. Short-Term Saved Leave Scheme

    CERN Multimedia

    HR Department

    2007-01-01

    As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in http://Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme ...

  20. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad

    2012-09-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  1. Compact Spreader Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.

    2014-07-25

    This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, offer more modest financial involvements both in construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.

  2. X- rays and matter- the basic interactions

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens

    2008-01-01

    In this introductory article we attempt to provide the theoretical basis for developing the interaction between X-rays and matter, so that one can unravel properties of matter by interpretation of X-ray experiments on samples. We emphasize that we are dealing with the basics, which means that we...... shall limit ourselves to a discussion of the interaction of an X-ray photon with an isolated atom, or rather with a single electron in a Hartree-Fock atom. Subsequent articles in this issue deal with more complicated - and interesting - forms of matter encompassing many atoms or molecules. To cite...

  3. Extração de matéria orgânica aquática por abaixamento de temperatura: uma metodologia alternativa para manter a identidade da amostra Extraction of aquatic organic matter by temperature decreasing: an alternative methodology to keep the original sample characteristics

    Directory of Open Access Journals (Sweden)

    Rosana N. H. Martins de Almeida

    2003-03-01

    Full Text Available In this work, an alternative methodology was developed for the separation of aquatic organic matter (AOM) present in natural river waters. The process is based on lowering the temperature of the aqueous sample under controlled conditions, which provokes freezing of the sample and separation of a dark extract that remains unfrozen and is rich in organic matter. The results showed that the rate of temperature decrease strongly influences the relative recovery of organic carbon, the enrichment, and the separation time of the organic matter present in the water samples. Elemental composition, infrared spectra and thermal analysis results showed that the alternative methodology is as mild as possible, helping to maintain the integrity of the sample.

  4. Quantum signature scheme for known quantum messages

    International Nuclear Information System (INIS)

    Kim, Taewan; Lee, Hyang-Sook

    2015-01-01

    When we want to sign a quantum message that we create, we can use arbitrated quantum signature schemes, which can sign not only known quantum messages but also unknown quantum messages. However, since arbitrated quantum signature schemes need the help of a trusted arbitrator in each verification of the signature, they are known to be inconvenient in practical use. If we consider only known quantum messages, as in the above situation, a quantum signature scheme with a more efficient structure can exist. In this paper, we present a new quantum signature scheme for known quantum messages without the help of an arbitrator. Differing from arbitrated quantum signature schemes based on the quantum one-time pad with a symmetric key, our scheme is based on quantum public-key cryptosystems, so the validity of the signature can be verified by a receiver without the help of an arbitrator. Moreover, we show that our scheme provides quantum message integrity, user authentication and non-repudiation of the origin, as in digital signature schemes. (paper)

  5. Two-level schemes for the advection equation

    Science.gov (United States)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to preserve the main properties of conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of advection operators in conservative (divergent) and non-conservative (characteristic) forms. The advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed; on the basis of the general theory of stability (well-posedness) of operator-difference schemes, the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are the implicit schemes of second (Crank-Nicolson scheme) and fourth order. The conditionally stable implicit Lax-Wendroff scheme is constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by the numerical results of a model two-dimensional problem.
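
    As a minimal illustration of one of the schemes discussed, the following Python sketch advances the one-dimensional constant-coefficient advection equation u_t + a u_x = 0 with the explicit Lax-Wendroff scheme on a periodic grid, using finite differences rather than the paper's finite elements; the grid size, CFL number and initial profile are illustrative.

      import numpy as np

      a, nx, cfl, t_end = 1.0, 200, 0.8, 1.0
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      dx = x[1] - x[0]
      dt = cfl * dx / abs(a)
      u0 = np.exp(-200.0 * (x - 0.5) ** 2)        # initial Gaussian profile
      u = u0.copy()

      t = 0.0
      while t < t_end:
          dt_step = min(dt, t_end - t)
          c = a * dt_step / dx                     # local CFL number
          up = np.roll(u, -1)                      # u_{j+1} (periodic)
          um = np.roll(u, 1)                       # u_{j-1}
          # Lax-Wendroff update: second order in space and time, stable for |c| <= 1
          u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
          t += dt_step

      # Conservative form: the discrete "mass" is preserved on the periodic grid
      print("mass conservation error:", abs(u.sum() - u0.sum()) * dx)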

  6. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...

  7. Estimating pesticide sampling rates by the polar organic chemical integrative sampler (POCIS) in the presence of natural organic matter and varying hydrodynamic conditions

    Science.gov (United States)

    Charlestra, Lucner; Amirbahman, Aria; Courtemanch, David L.; Alvarez, David A.; Patterson, Howard

    2012-01-01

    The polar organic chemical integrative sampler (POCIS) was calibrated to monitor pesticides in water under controlled laboratory conditions. The effect of natural organic matter (NOM) on the sampling rates (Rs) was evaluated in microcosms containing different levels of total organic carbon (TOC). The effect of hydrodynamics was studied by comparing Rs values measured in stirred (SBE) and quiescent (QBE) batch experiments and a flow-through system (FTS). The level of NOM in the water used in these experiments had no effect on the magnitude of the pesticide sampling rates (p > 0.05). However, flow velocity and turbulence significantly increased the sampling rates of the pesticides in the FTS and SBE compared to the QBE (p < 0.001). The calibration data generated can be used to derive pesticide concentrations in water from POCIS deployed in stagnant and turbulent environmental systems without correction for NOM.

  8. A magnet lattice for a tau-charm factory suitable for both standard scheme and monochromatization scheme

    International Nuclear Information System (INIS)

    Beloshitsky, P.

    1992-06-01

    A versatile magnet lattice for a tau-charm factory is considered in this report. The main feature of this lattice is the possibility to use it for both standard flat beam scheme and beam monochromatization scheme. The detailed description of the lattice is given. The restrictions following the compatibility of both schemes are discussed

  9. THROUGHPUT ANALYSIS OF EXTENDED ARQ SCHEMES

    African Journals Online (AJOL)

    PUBLICATIONS1

    ABSTRACT. Various Automatic Repeat Request (ARQ) schemes have been used to combat errors that befall information transmitted in digital communication systems. Such schemes include simple ARQ, mixed mode ARQ and Hybrid ARQ (HARQ). In this study we introduce extended ARQ schemes and derive.

  10. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin

    2015-05-05

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert-Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations cannot be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.
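
    The extrapolation ingredient described above can be sketched in simplified form: linear-prediction coefficients are fitted by least squares on samples of decaying and oscillating exponentials (exponents on the complex frequency plane) and then used to predict a "future" sample from known "past" ones. The number of taps, the training exponents and the test signal below are placeholders, not the solver's actual parameters.

      import numpy as np

      n_taps, dt = 8, 0.05

      # Training signals: samples of exp(s*t) with complex-frequency exponents s = -alpha + i*omega
      exponents = [complex(-a, w) for a in (0.1, 0.5, 1.0) for w in (1.0, 3.0, 6.0)]
      t = np.arange(0, 64) * dt

      rows, targets = [], []
      for s in exponents:
          sig = np.exp(s * t).real                 # decaying/oscillating training sample
          for k in range(n_taps, len(sig)):
              rows.append(sig[k - n_taps:k])       # window of "past" samples ...
              targets.append(sig[k])               # ... predicts the next sample
      A, b = np.array(rows), np.array(targets)
      coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # extrapolation coefficients

      # Use: predict a future sample of a new decaying oscillation from its past
      test = np.exp((-0.3 + 4.0j) * t).real
      pred = test[40 - n_taps:40] @ coeffs
      print("predicted:", pred, " true:", test[40])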

  11. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin; Bagci, Hakan

    2015-01-01

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert- Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations can not be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.

  12. Ponzi scheme diffusion in complex networks

    Science.gov (United States)

    Zhu, Anding; Fu, Peihua; Zhang, Qinghe; Chen, Zhenyue

    2017-08-01

    Ponzi schemes taking the form of Internet-based financial schemes have been negatively affecting China's economy for the last two years. Because there is currently a lack of modeling research on Ponzi scheme diffusion within social networks, we develop a potential-investor-divestor (PID) model to investigate the diffusion dynamics of Ponzi schemes in both homogeneous and inhomogeneous networks. Our simulation study of artificial and real Facebook social networks shows that the structure of investor networks does indeed affect the characteristics of the dynamics. Both the average degree of distribution and the power-law degree of distribution will reduce the spreading critical threshold and will speed up the rate of diffusion. A high speed of diffusion is the key to alleviating the interest burden and improving the financial outcomes for the Ponzi scheme operator. The zero-crossing point of the fund flux function we introduce proves to be a feasible index for reflecting the fast-worsening situation of fiscal instability and predicting the forthcoming collapse. The faster the scheme diffuses, the higher a peak it will reach and the sooner it will collapse. We should keep a vigilant eye on the harm of Ponzi scheme diffusion through modern social networks.
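
    The record does not reproduce the PID equations, so the following Python sketch is only a hypothetical compartment-style analogue (potential investors recruited into investors, investors eventually divesting), integrated with scipy, to illustrate how such diffusion dynamics and a fund-flux curve with a zero crossing can be simulated; all rates are invented.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, gamma, promised_return = 0.6, 0.15, 0.2    # made-up contact, divestment, payout rates

      def pid(t, y):
          p, i, d = y                                  # potential investors, investors, divestors
          new_inv = beta * p * i                       # recruitment via social contact
          divest = gamma * i                           # investors cashing out
          return [-new_inv, new_inv - divest, divest]

      sol = solve_ivp(pid, (0.0, 60.0), [0.99, 0.01, 0.0], dense_output=True, max_step=0.1)
      t = np.linspace(0.0, 60.0, 600)
      p, i, d = sol.sol(t)

      # Toy fund flux: deposits from new investors minus (1 + return) paid out to divestors
      flux = beta * p * i - (1.0 + promised_return) * gamma * i
      zero_crossing = t[np.argmax(flux < 0)]
      print("fund flux turns negative near t =", zero_crossing)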

  13. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  14. The Performance-based Funding Scheme of Universities

    Directory of Open Access Journals (Sweden)

    Juha KETTUNEN

    2016-05-01

    Full Text Available The purpose of this study is to analyse the effectiveness of the performance-based funding scheme of the Finnish universities that was adopted at the beginning of 2013. The political decision-makers expect that the funding scheme will create incentives for the universities to improve performance, but these funding schemes have largely failed in many other countries, primarily because public funding is only a small share of the total funding of universities. This study is interesting because Finnish universities have no tuition fees, unlike in many other countries, and the state allocates funding based on the objectives achieved. The empirical evidence of the graduation rates indicates that graduation rates increased when a new scheme was adopted, especially among male students, who have more room for improvement than female students. The new performance-based funding scheme allocates the funding according to the output-based indicators and limits the scope of strategic planning and the autonomy of the university. The performance-based funding scheme is transformed to the strategy map of the balanced scorecard. The new funding scheme steers universities in many respects but leaves the research and teaching skills to the discretion of the universities. The new scheme has also diminished the importance of the performance agreements between the university and the Ministry. The scheme increases the incentives for universities to improve the processes and structures in order to attain as much public funding as possible. It is optimal for the central administration of the university to allocate resources to faculties and other organisational units following the criteria of the performance-based funding scheme. The new funding scheme has made the universities compete with each other, because the total funding to the universities is allocated to each university according to the funding scheme. There is a tendency that the funding schemes are occasionally

  15. A Classification Scheme for Literary Characters

    Directory of Open Access Journals (Sweden)

    Matthew Berry

    2017-10-01

    Full Text Available There is no established classification scheme for literary characters in narrative theory short of generic categories like protagonist vs. antagonist or round vs. flat. This is so despite the ubiquity of stock characters that recur across media, cultures, and historical time periods. We present here a proposal of a systematic psychological scheme for classifying characters from the literary and dramatic fields based on a modification of the Thomas-Kilmann (TK) Conflict Mode Instrument used in applied studies of personality. The TK scheme classifies personality along the two orthogonal dimensions of assertiveness and cooperativeness. To examine the validity of a modified version of this scheme, we had 142 participants provide personality ratings for 40 characters using two of the Big Five personality traits as well as assertiveness and cooperativeness from the TK scheme. The results showed that assertiveness and cooperativeness were orthogonal dimensions, thereby supporting the validity of using a modified version of TK’s two-dimensional scheme for classifying characters.

  16. How can conceptual schemes change teaching?

    Science.gov (United States)

    Wickman, Per-Olof

    2012-03-01

    Lundqvist, Almqvist and Östman describe a teacher's manner of teaching and the possible consequences it may have for students' meaning making. In doing this the article examines a teacher's classroom practice by systematizing the teacher's transactions with the students in terms of certain conceptual schemes, namely the epistemological moves, educational philosophies and the selective traditions of this practice. In connection to their study one may ask how conceptual schemes could change teaching. This article examines how the relationship of the conceptual schemes produced by educational researchers to educational praxis has developed from the middle of the last century to today. The relationship is described as having been transformed in three steps: (1) teacher deficit and social engineering, where conceptual schemes are little acknowledged, (2) reflecting practitioners, where conceptual schemes are mangled through teacher practice to aid the choices of already knowledgeable teachers, and (3) the mangling of the conceptual schemes by researchers through practice with the purpose of revising theory.

  17. A 64-channel readout ASIC for nanowire biosensor array with electrical calibration scheme.

    Science.gov (United States)

    Chai, Kevin T C; Choe, Kunil; Bernal, Olivier D; Gopalakrishnan, Pradeep K; Zhang, Guo-Jun; Kang, Tae Goo; Je, Minkyu

    2010-01-01

    A 1.8-mW, 18.5-mm^2 64-channel current readout ASIC was implemented in 0.18-µm CMOS together with a new calibration scheme for silicon nanowire biosensor arrays. The ASIC consists of 64 channels of dedicated readout and conditioning circuits which incorporate a correlated double sampling scheme to reduce the effect of 1/f noise and offset from the analog front-end. The ASIC provides a 10-bit digital output with a sampling rate of 300 S/s whilst achieving a minimum resolution of 7 pA rms. A new electrical calibration method was introduced to mitigate the issue of large variations in the nano-scale sensor device parameters and optimize the sensor sensitivity. The experimental results show that the proposed calibration technique improved the sensitivity by 2 to 10 times and reduced the variation between datasets by 9 times.
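
    A toy numerical illustration (not the ASIC circuit) of why correlated double sampling suppresses offset and slowly varying, 1/f-like noise: the reference (reset) sample and the signal sample share the offset and the slow disturbance, so their difference retains essentially the signal plus white noise. All values below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000
      signal = 1.0e-9                                   # 1 nA sensor current (the quantity of interest)
      offset = 50e-12                                   # fixed front-end offset
      drift = np.cumsum(rng.normal(0.0, 2e-13, n))      # slow, correlated (1/f-like) disturbance

      reset_noise = rng.normal(0.0, 5e-12, n)           # white noise on the reference sample
      signal_noise = rng.normal(0.0, 5e-12, n)          # white noise on the signal sample

      reset_sample = offset + drift + reset_noise       # sampled before the signal is applied
      signal_sample = offset + drift + signal + signal_noise

      raw = signal_sample                               # single-sample readout
      cds = signal_sample - reset_sample                # correlated double sampling

      print("raw readout: mean error = %.2e A, std = %.2e A" % (raw.mean() - signal, raw.std()))
      print("CDS readout: mean error = %.2e A, std = %.2e A" % (cds.mean() - signal, cds.std()))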

  18. An Arbitrated Quantum Signature Scheme without Entanglement*

    International Nuclear Information System (INIS)

    Li Hui-Ran; Luo Ming-Xing; Peng Dai-Yuan; Wang Xiao-Jun

    2017-01-01

    Several quantum signature schemes have recently been proposed to realize secure signatures of quantum or classical messages. Arbitrated quantum signature, as one nontrivial scheme, has attracted great interest because of its usefulness and efficiency. Unfortunately, previous schemes cannot withstand Trojan horse and DoS attacks and lack unforgeability and non-repudiation. In this paper, we propose an improved arbitrated quantum signature to address these security issues with an honest arbitrator. Our scheme makes use of qubit states rather than entanglement. More importantly, the qubit scheme can achieve unforgeability and non-repudiation. Our scheme is also secure against other known quantum attacks. (paper)

  19. An evaluation of a collaborative bibliotherapy scheme delivered via a library service.

    Science.gov (United States)

    Macdonald, J; Vallance, D; McGrath, M

    2013-12-01

    This paper reports on the evaluation of a bibliotherapy scheme delivered via a local library service, in conjunction with General Practice (GP) practices, local social welfare agencies and through self-referral. The Read Yourself Well (RYW) scheme was based on principles established from other similar schemes and as a way of delivering support for adults experiencing mild to moderate mental health problems for whom clinical treatments are not appropriate. The intervention consisted of initial referral and evaluation by the scheme bibliotherapist, and a one-hour session at the beginning and end of the intervention where a purpose-designed questionnaire and two mental health assessments were carried out (the General Health Questionnaire and the Clinical Outcomes in Routine Evaluation questionnaire). Contact and support from the bibliotherapist was provided during the intervention period. One hundred and fifty-seven participants were recruited to the evaluation, of whom 114 provided full data. Statistical analyses of the mental health scores showed significant improvements post treatment for both male and female participants, for all three referral routes, and for participants who were previously library users and those who joined the library service to participate in the RYW scheme. The results of this large sample evaluation support the proposal that library-based bibliotherapy can be effective in the treatment of mental health problems. © 2012 John Wiley & Sons Ltd.

  20. Breeding schemes in reindeer husbandry

    Directory of Open Access Journals (Sweden)

    Lars Rönnegård

    2003-04-01

    Full Text Available The objective of the paper was to investigate annual genetic gain from selection (G), and the influence of selection on the inbreeding effective population size (Ne), for different possible breeding schemes within a reindeer herding district. The breeding schemes were analysed for different proportions of the population within a herding district included in the selection programme. Two different breeding schemes were analysed: an open nucleus scheme where males mix and mate between owner flocks, and a closed nucleus scheme where the males in non-selected owner flocks are culled to maximise G in the whole population. The theory of expected long-term genetic contributions was used and maternal effects were included in the analyses. Realistic parameter values were used for the population, modelled with 5000 reindeer in the population and a sex ratio of 14 adult females per male. The standard deviation of calf weights was 4.1 kg. Four different situations were explored and the results showed: 1. When the population was randomly culled, Ne equalled 2400. 2. When the whole population was selected on calf weights, Ne equalled 1700 and the total annual genetic gain (direct + maternal) in calf weight was 0.42 kg. 3. For the open nucleus scheme, G increased monotonically from 0 to 0.42 kg as the proportion of the population included in the selection programme increased from 0 to 1.0, and Ne decreased correspondingly from 2400 to 1700. 4. In the closed nucleus scheme the lowest value of Ne was 1300. For a given proportion of the population included in the selection programme, the difference in G between a closed nucleus scheme and an open one was up to 0.13 kg. We conclude that for mass selection based on calf weights in herding districts with 2000 animals or more, there are no risks of inbreeding effects caused by selection.

  1. Quantum Secure Communication Scheme with W State

    International Nuclear Information System (INIS)

    Wang Jian; Zhang Quan; Tang Chaojng

    2007-01-01

    We present a quantum secure communication scheme using three-qubit W state. It is unnecessary for the present scheme to use alternative measurement or Bell basis measurement. Compared with the quantum secure direct communication scheme proposed by Cao et al. [H.J. Cao and H.S. Song, Chin. Phys. Lett. 23 (2006) 290], in our scheme, the detection probability for an eavesdropper's attack increases from 8.3% to 25%. We also show that our scheme is secure for a noise quantum channel.

  2. Mapping Soil Organic Matter with Hyperspectral Imaging

    Science.gov (United States)

    Moni, Christophe; Burud, Ingunn; Flø, Andreas; Rasse, Daniel

    2014-05-01

    Soil organic matter (SOM) plays a central role for both food security and the global environment. Soil organic matter is the 'glue' that binds soil particles together, leading to positive effects on soil water and nutrient availability for plant growth and helping to counteract the effects of erosion, runoff, compaction and crusting. Hyperspectral measurements of samples of soil profiles have been conducted with the aim of mapping soil organic matter on a macroscopic scale (millimeters and centimeters). Two soil profiles have been selected from the same experimental site, one from a plot amended with biochar and another one from a control plot, with the specific objective to quantify and map the distribution of biochar in the amended profile. The soil profiles were of size (30 x 10 x 10) cm^3 and were scanned with two pushbroom-type hyperspectral cameras, one which is sensitive in the visible wavelength region (400 - 1000 nm) and one in the near infrared region (1000 - 2500 nm). The images from the two detectors were merged together into one full dataset covering the whole wavelength region. Layers of 15 mm were removed from the 10 cm high sample such that a total of 7 hyperspectral images were obtained from the samples. Each layer was analyzed with multivariate statistical techniques in order to map the different components in the soil profile. Moreover, a 3-dimensional visualization of the components through the depth of the sample was also obtained by combining the hyperspectral images from all the layers. Mid-infrared spectroscopy of selected samples of the measured soil profiles was conducted in order to correlate the chemical constituents with the hyperspectral results. The results show that hyperspectral imaging is a fast, non-destructive technique, well suited to characterize soil profiles on a macroscopic scale and hence to map elements and different organic matter quality present in a complete pedon. As such, we were able to map and quantify biochar in our

  3. Optimum RA reactor fuelling scheme

    International Nuclear Information System (INIS)

    Strugar, P.; Nikolic, V.

    1965-10-01

    An ideal reactor refuelling scheme can be achieved only by continuous movement of fuel elements in the core, which is not possible, and thus approximations are applied. One of the possible approximations is discontinuous movement of groups of fuel elements in the radial direction. This enables higher burnup, especially if axial exchange is possible. Analysis of refuelling schemes in the RA reactor core, and of schemes with mixing of fresh and used fuel elements, shows that 30% higher burnup can be achieved by applying mixing, and even 40% if the reactivity gain due to a decrease in experimental space is taken into account. Up to now, a mean burnup of 4400 MWd/t has been achieved, and the proposed fuelling scheme with reduction of experimental space could achieve a mean burnup of 6300 MWd/t, which means about 25 MWd/t per fuel channel.

  4. The national scheme for monitoring radioactive fallout in milk

    International Nuclear Information System (INIS)

    Green, B.M.R.

    1979-01-01

    The National Radiological Protection Board, Harwell, assumed responsibility for the national milk monitoring scheme on Jan. 1, 1979. Milk contamination provides a good guide to radioactivity in the British diet. Brief reference is made to U.K. surveys of radioactive fallout in human food prior to January 1979, and current arrangements for the sampling of milk in the U.K. are explained. The milk is analysed for 90Sr, 137Cs and stable calcium. Additional samples are collected to check for 131I or other short-lived isotopes in the event of atmospheric nuclear tests or accidents involving possible releases of radioactivity. (U.K.)

  5. Enhancement of Light-Matter Interaction in Semiconductor Nanostructures

    DEFF Research Database (Denmark)

    Stobbe, Søren

    This thesis reports research on enhancement of light-matter interaction in semiconductor quantum nanostructures by means of nanostructure fabrication, optical measurements, and theoretical modeling. Photonic crystal membranes of very high quality and samples for studies of quantum dots in proxi......-matter interaction is investigated. For the first time the vacuum Rabi splitting is observed in an electrically tunable device....

  6. Dielectric spectroscopy for evaluating dry matter content of potato tubers

    DEFF Research Database (Denmark)

    Nielsen, Glenn G. B.; Kjaer, Anders; Klösgen, Beate

    2016-01-01

    The present study investigated the application of dielectric spectroscopy as a method for evaluating the dry matter content of potato tubers. Sample specific factors determining the precision of this application were investigated by studying the prediction of the dry material content in agar gel...... of the predicted dry matter content was observed in chemically and spatially uniform systems, with a root mean square error (RMSE) of the predicted dry-matter content of 0.64 percentage points observed in agar gels containing refined potato starch. A marked decrease in precision is observed in model systems which...... include chemical variations between potato tuber samples. The added dry material content was predicted with a RMSE of 0.94 percentage points in agar gels with added dried material extracted from separate potato tubers. The local dry matter content from a region within 2 cm of the center location...

  7. Student’s scheme in solving mathematics problems

    Science.gov (United States)

    Setyaningsih, Nining; Juniati, Dwi; Suwarsono

    2018-03-01

    The purpose of this study was to investigate students’ schemes in solving mathematics problems. Schemes are data structures for representing the concepts stored in memory. In this study, we used them in solving mathematics problems, especially on ratio and proportion topics. A scheme is related to problem solving in that it assumes a system is developed in the human mind by acquiring a structure in which problem solving procedures are integrated with some concepts. The data were collected through interviews and students’ written work. The results of this study revealed the following students’ schemes in solving ratio and proportion problems: (1) the content scheme, where students can describe the selected components of the problem according to their prior knowledge; (2) the formal scheme, where students can construct a mental model based on components selected from the problem and can use existing schemes to build planning steps and create something that will be used to solve the problem; and (3) the language scheme, where students can identify terms or symbols of the components of the problem. Therefore, by using different strategies to solve the problems, students’ schemes in solving ratio and proportion problems will also differ.

  8. A conservative numerical scheme for modeling nonlinear acoustic propagation in thermoviscous homogeneous media

    Science.gov (United States)

    Diaz, Manuel A.; Solovchuk, Maxim A.; Sheu, Tony W. H.

    2018-06-01

    A nonlinear system of partial differential equations capable of describing the nonlinear propagation and attenuation of finite amplitude perturbations in thermoviscous media is presented. This system constitutes a full nonlinear wave model that has been formulated in the conservation form. Initially, this model is investigated analytically in the inviscid limit where it has been found that the resulting flux function fulfills the Lax-Wendroff theorem, and the scheme can match the solutions of the Westervelt and Burgers equations numerically. Here, high-order numerical descriptions of strongly nonlinear wave propagations become of great interest. For that matter we consider finite difference formulations of the weighted essentially non-oscillatory (WENO) schemes associated with explicit strong stability preserving Runge-Kutta (SSP-RK) time integration methods. Although this strategy is known to be computationally demanding, it is found to be effective when implemented on graphics processing units (GPUs). As we consider wave propagations in unbounded domains, perfectly matched layers (PML) have also been considered in this work. The proposed system model is validated and illustrated by using one- and two-dimensional benchmark test cases proposed in the literature for nonlinear acoustic propagation in homogeneous thermoviscous media.
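
    As a small, generic illustration of one building block named above, the following Python sketch implements the explicit third-order strong stability preserving Runge-Kutta (SSP-RK3, Shu-Osher) time stepper commonly paired with WENO spatial discretizations, applied here to a trivial test problem; it is not the paper's GPU solver.

      import numpy as np

      def ssp_rk3_step(u, dt, L):
          """One step of the Shu-Osher SSP-RK3 scheme for du/dt = L(u)."""
          u1 = u + dt * L(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

      # Demonstration on a linear test problem du/dt = -u (exact solution exp(-t))
      L = lambda u: -u
      u, dt = np.array([1.0]), 0.1
      for _ in range(10):
          u = ssp_rk3_step(u, dt, L)
      print("numerical:", u[0], " exact:", np.exp(-1.0))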

  9. Hybrid modulation scheme for cascaded H-bridge inverter cells

    African Journals Online (AJOL)

    eobe

    The record describes a hybrid modulation scheme for cascaded H-bridge inverter cells; the control technique is evaluated through simulations in the MATLAB/SIMULINK environment.

  10. Towards Symbolic Encryption Schemes

    DEFF Research Database (Denmark)

    Ahmed, Naveed; Jensen, Christian D.; Zenner, Erik

    2012-01-01

    Symbolic encryption, in the style of Dolev-Yao models, is ubiquitous in formal security models. In its common use, encryption on a whole message is specified as a single monolithic block. From a cryptographic perspective, however, this may require a resource-intensive cryptographic algorithm, namely an authenticated encryption scheme that is secure under chosen ciphertext attack. Therefore, many reasonable encryption schemes, such as AES in the CBC or CFB mode, are not among the implementation options. In this paper, we report new attacks on CBC and CFB based implementations of the well-known Needham-Schroeder and Denning-Sacco protocols. To avoid such problems, we advocate the use of refined notions of symbolic encryption that have natural correspondence to standard cryptographic encryption schemes.

  11. Modified dark matter: Relating dark energy, dark matter and baryonic matter

    Science.gov (United States)

    Edmonds, Douglas; Farrah, Duncan; Minic, Djordje; Ng, Y. Jack; Takeuchi, Tatsu

    Modified dark matter (MDM) is a phenomenological model of dark matter, inspired by gravitational thermodynamics. For an accelerating universe with positive cosmological constant (Λ), such phenomenological considerations lead to the emergence of a critical acceleration parameter related to Λ. Such a critical acceleration is an effective phenomenological manifestation of MDM, and it is found in correlations between dark matter and baryonic matter in galaxy rotation curves. The resulting MDM mass profiles, which are sensitive to Λ, are consistent with observational data at both the galactic and cluster scales. In particular, the same critical acceleration appears both in the galactic and cluster data fits based on MDM. Furthermore, using some robust qualitative arguments, MDM appears to work well on cosmological scales, even though quantitative studies are still lacking. Finally, we comment on certain nonlocal aspects of the quanta of modified dark matter, which may lead to novel nonparticle phenomenology and which may explain why, so far, dark matter detection experiments have failed to detect dark matter particles.

  12. AN ENSEMBLE TEMPLATE MATCHING AND CONTENT-BASED IMAGE RETRIEVAL SCHEME TOWARDS EARLY STAGE DETECTION OF MELANOMA

    Directory of Open Access Journals (Sweden)

    Spiros Kostopoulos

    2016-12-01

    Full Text Available Malignant melanoma represents the most dangerous type of skin cancer. In this study we present an ensemble classification scheme, employing mutual information, cross-correlation, and clustering based on proximity of image features, for early-stage assessment of melanomas on plain photography images. The proposed scheme performs two main operations. First, it retrieves the image samples most similar to the unknown case from an available image database with verified benign moles and malignant melanoma cases. Second, it provides an automated estimation regarding the nature of the unknown image sample based on the majority of the most similar images retrieved from the available database. Clinical material comprised 75 melanoma and 75 benign plain photography images collected from publicly available dermatological atlases. Results showed that the ensemble scheme outperformed all other methods tested in terms of accuracy, with 94.9±1.5%, following an external cross-validation evaluation methodology. The proposed scheme may benefit patients by providing a second-opinion consultation during self-skin examination, and physicians by providing a second-opinion estimation regarding the nature of suspicious moles that may assist decision making, especially for ambiguous cases, safeguarding in this way against potential diagnostic misinterpretations.
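
    A schematic Python sketch of the retrieve-then-vote operation described above: database images with known labels are ranked by similarity to the query (a simple correlation stands in for the ensemble of mutual-information, cross-correlation and feature-proximity measures) and the query is labelled by majority vote among the most similar cases. Features and labels are synthetic placeholders.

      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(3)
      db_features = rng.normal(size=(150, 32))                  # placeholder image feature vectors
      db_labels = np.array(["melanoma"] * 75 + ["benign"] * 75)
      query = rng.normal(size=32)                               # feature vector of the unknown case

      def similarity(a, b):
          # Pearson correlation as a stand-in for the ensemble similarity measures
          return np.corrcoef(a, b)[0, 1]

      k = 9
      scores = np.array([similarity(query, f) for f in db_features])
      top_k = np.argsort(scores)[::-1][:k]                      # most similar database cases
      vote = Counter(db_labels[top_k]).most_common(1)[0][0]     # majority decision
      print("retrieved labels:", list(db_labels[top_k]))
      print("automated estimate for the unknown sample:", vote)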

  13. de Broglie Swapping Metadynamics for Quantum and Classical Sampling.

    Science.gov (United States)

    Nava, Marco; Quhe, Ruge; Palazzesi, Ferruccio; Tiwary, Pratyush; Parrinello, Michele

    2015-11-10

    This paper builds on our previous work on Path Integral Metadynamics [Ruge et al., J. Chem. Theory Comput. 2015, 11, 1383], in which we accelerated sampling in quantum systems described by Feynman's Path Integrals using Metadynamics. We extend the scope of Path Integral Metadynamics by combining it with a replica exchange scheme in which artificially enhanced quantum effects play the same role as temperature does in parallel tempering. Our scheme can be adapted so as to be used in an ancillary way to sample systems described by classical statistical mechanics. Contrary to Metadynamics and many other sampling methods, no collective variables need to be defined. The method, in its two variants, quantum and classical, is tested in a number of examples.

  14. Setting aside transactions from pyramid schemes as impeachable ...

    African Journals Online (AJOL)

    These schemes, which are often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find ...

  15. Renormalization scheme-invariant perturbation theory

    International Nuclear Information System (INIS)

    Dhar, A.

    1983-01-01

    A complete solution to the problem of the renormalization scheme dependence of perturbative approximants to physical quantities is presented. An equation is derived which determines any physical quantity implicitly as a function of only scheme independent variables. (orig.)

  16. Nonlinear secret image sharing scheme.

    Science.gov (United States)

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear-combination polynomial arithmetic. Although secret image sharing schemes based on Shamir's technique are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear-combination polynomial arithmetic in order to counter this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.
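
    The linear baseline that such schemes build on is Shamir-style (t, n)-threshold polynomial sharing. The Python sketch below shows that baseline only (share generation and Lagrange reconstruction over a small prime field); it is not the paper's nonlinear, steganographic variant, and the prime and secret are illustrative.

      import random

      P = 257                                   # small prime field for illustration

      def make_shares(secret, t, n):
          """Shamir (t, n) threshold sharing: random degree t-1 polynomial with f(0) = secret."""
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
          return [(x, f(x)) for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
          secret = 0
          for j, (xj, yj) in enumerate(shares):
              num = den = 1
              for m, (xm, _) in enumerate(shares):
                  if m != j:
                      num = (num * (-xm)) % P
                      den = (den * (xj - xm)) % P
              secret = (secret + yj * num * pow(den, -1, P)) % P
          return secret

      shares = make_shares(secret=123, t=3, n=5)
      print(reconstruct(shares[:3]))            # any 3 of the 5 shares recover 123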

  17. Good governance for pension schemes

    CERN Document Server

    Thornton, Paul

    2011-01-01

    Regulatory and market developments have transformed the way in which UK private sector pension schemes operate. This has increased demands on trustees and advisors and the trusteeship governance model must evolve in order to remain fit for purpose. This volume brings together leading practitioners to provide an overview of what today constitutes good governance for pension schemes, from both a legal and a practical perspective. It provides the reader with an appreciation of the distinctive characteristics of UK occupational pension schemes, how they sit within the capital markets and their social and fiduciary responsibilities. Providing a holistic analysis of pension risk, both from the trustee and the corporate perspective, the essays cover the crucial role of the employer covenant, financing and investment risk, developments in longevity risk hedging and insurance de-risking, and best practice scheme administration.

  18. Analysis of parallel optical sampling rate and ADC requirements in digital coherent receivers

    DEFF Research Database (Denmark)

    Lorences Riesgo, Abel; Galili, Michael; Peucheret, Christophe

    2012-01-01

    We comprehensively assess analog-to-digital converter requirements in coherent digital receiver schemes with parallel optical sampling. We determine the electronic requirements in accordance with the properties of the free-running local oscillator.

  19. Surface based detection schemes for molecular interferometry experiments - implications and possible applications

    Science.gov (United States)

    Juffmann, Thomas; Milic, Adriana; Muellneritsch, Michael; Arndt, Markus

    2011-03-01

    Surface-based detection schemes for molecular interferometry experiments might be crucial in the search for the quantum properties of larger and larger objects since they provide single-particle sensitivity. Here we report on molecular interferograms of different biomolecules imaged using fluorescence microscopy. Being able to watch the build-up of an interferogram live and in situ reveals the matter-wave behavior of these complex molecules in an unprecedented way. We examine several problems encountered due to van der Waals forces between the molecules and the diffraction grating and discuss possible ways to circumvent them. In particular, the advent of ultra-thin (1-100 atomic layers) diffraction masks might pave the way towards molecular holography. We also discuss other possible applications such as coherent molecular microscopy.

  20. Symmetric weak ternary quantum homomorphic encryption schemes

    Science.gov (United States)

    Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao

    2016-03-01

    Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, the two-qutrit QHE scheme about generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that the attacker can correctly guess the encryption key with a maximum probability pk = 1/3^(3n), thus it can better protect the privacy of users’ data. Moreover, these schemes can be well integrated into the future quantum remote server architecture, and thus the computational security of the users’ private quantum information can be well protected in a distributed computing environment.

  1. The bias of weighted dark matter halos from peak theory

    CERN Document Server

    Verde, Licia; Simpson, Fergus; Alvarez-Gaume, Luis; Heavens, Alan; Matarrese, Sabino

    2014-01-01

    We give an analytical form for the weighted correlation function of peaks in a Gaussian random field. In a cosmological context, this approach strictly describes the formation bias and is the main result here. Nevertheless, we show its validity and applicability to the evolved cosmological density field and halo field, using Gaussian random field realisations and dark matter N-body numerical simulations. Using this result from peak theory we compute the bias of peaks (and dark matter halos) and show that it reproduces results from the simulations at the O(10%) level. Our analytical formula for the bias predicts a scale-dependent bias with two characteristics: a broad band shape which, however, is most affected by the choice of weighting scheme and evolution bias, and a more robust, narrow feature localised at the BAO scale, an effect that is confirmed in simulations. This scale-dependent bias smooths the BAO feature but, conveniently, does not move it. We provide a simple analytic formula to des...

  2. Labelling schemes: From a consumer perspective

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Stacey, Julia

    2000-01-01

    Labelling of food products attracts a lot of political attention these days. As a result of a number of food scandals, most European countries have acknowledged the need for more information and better protection of consumers. Labelling schemes are one way of informing and guiding consumers....... However, initiatives in relation to labelling schemes seldom take their point of departure in consumers' needs and expectations; and in many cases, the schemes are defined by the institutions guaranteeing the label. It is therefore interesting to study how consumers actually value labelling schemes....... A recent MAPP study has investigated the value consumers attach to the Government-controlled labels 'Ø-mærket' and 'Den Blå Lup' and the private supermarket label 'Mesterhakket' when they purchase minced meat. The results reveal four consumer segments that use labelling schemes for food products very...

  3. Critical evaluation of sample pretreatment techniques.

    Science.gov (United States)

    Hyötyläinen, Tuulia

    2009-06-01

    Sample preparation before chromatographic separation is the most time-consuming and error-prone part of the analytical procedure. Therefore, selecting and optimizing an appropriate sample preparation scheme is a key factor in the final success of the analysis, and the judicious choice of an appropriate procedure greatly influences the reliability and accuracy of a given analysis. The main objective of this review is to critically evaluate the applicability, disadvantages, and advantages of various sample preparation techniques. Particular emphasis is placed on extraction techniques suitable for both liquid and solid samples.

  4. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

    Full Text Available This paper presents a statistical method for deriving the optimal prospective field sampling scheme on a remote sensing image to represent different categories in the field. The iterated conditional modes algorithm (ICM) is used for segmentation...

  5. An improved sampling system installed for reprocessing

    International Nuclear Information System (INIS)

    Finsterwalder, L.; Zeh, H.

    1979-03-01

    Sampling devices are needed for taking representative samples from individual process containers during the reprocessing of irradiated fuel. The aqueous process stream in a reprocessing plant frequently contains, in addition to the dissolved radioactive materials, small quantities of solid matter: fractions of fuel material still remaining undissolved; insoluble fission, corrosion, or degradation products; and, in exceptional cases, ion-exchange resin or silica gel. The solid matter is partly deposited on the upper surfaces of the sampling system, and the resulting radiation makes maintenance and repair of the sampler more difficult. The purpose of the development work was to reduce the chance of accidents and the maintenance costs, and to lower the radiation exposure of the personnel. A new sampling system was developed and is described. (author)

  6. Analysis of central and upwind compact schemes

    International Nuclear Information System (INIS)

    Sengupta, T.K.; Ganeriwal, G.; De, S.

    2003-01-01

    Central and upwind compact schemes for spatial discretization have been analyzed with respect to accuracy in spectral space, numerical stability and dispersion relation preservation. A von Neumann matrix spectral analysis is developed here to analyze spatial discretization schemes for any explicit and implicit schemes to investigate the full domain simultaneously. This allows one to evaluate various boundary closures and their effects on the domain interior. The same method can be used for stability analysis performed for the semi-discrete initial boundary value problems (IBVP). This analysis tells one about the stability for every resolved length scale. Some well-known compact schemes that were found to be G-K-S and time stable are shown here to be unstable for selective length scales by this analysis. This is attributed to boundary closure and we suggest special boundary treatment to remove this shortcoming. To demonstrate the asymptotic stability of the resultant schemes, numerical solution of the wave equation is compared with analytical solution. Furthermore, some of these schemes are used to solve two-dimensional Navier-Stokes equation and a computational acoustic problem to check their ability to solve problems for long time. It is found that those schemes, that were found unstable for the wave equation, are unsuitable for solving incompressible Navier-Stokes equation. In contrast, the proposed compact schemes with improved boundary closure and an explicit higher-order upwind scheme produced correct results. The numerical solution for the acoustic problem is compared with the exact solution and the quality of the match shows that the used compact scheme has the requisite DRP property
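
    The matrix form of such a spectral analysis can be sketched as follows: assemble the full first-derivative matrix of a simple explicit central scheme with one-sided boundary closures and inspect the eigenvalues of the semi-discrete advection operator; eigenvalues with positive real part flag growth introduced by the closure. The scheme and closure below are illustrative choices, not the compact schemes analyzed in the paper.

      import numpy as np

      n, dx, a = 60, 1.0 / 60, 1.0
      D = np.zeros((n, n))
      for j in range(1, n - 1):                      # interior: second-order central difference
          D[j, j - 1], D[j, j + 1] = -0.5 / dx, 0.5 / dx
      D[0, 0:3] = np.array([-1.5, 2.0, -0.5]) / dx   # one-sided second-order closure at the left boundary
      D[-1, -3:] = np.array([0.5, -2.0, 1.5]) / dx   # one-sided second-order closure at the right boundary

      eigvals = np.linalg.eigvals(-a * D)            # semi-discrete operator for du/dt = -a D u
      print("max real part of eigenvalues:", eigvals.real.max())
      # A positive maximum real part indicates growth (instability) introduced by the
      # boundary closure, even if the interior scheme by itself is neutrally stable.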

  7. On the non-orthogonal sampling scheme for Gabor's signal expansion

    NARCIS (Netherlands)

    Bastiaans, M.J.; Leest, van A.J.; Veen, J.P.

    2000-01-01

    Gabor's signal expansion and the Gabor transform are formulated on a non-orthogonal time-frequency lattice instead of on the traditional rectangular lattice [1,2]. The reason for doing so is that a non-orthogonal sampling geometry might be better adapted to the form of the window functions (in the

  8. Implementation of an approximate zero-variance scheme in the TRIPOLI Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Dumonteil, E.; Petit, O.; Diop, C. [Commissariat a l' Energie Atomique CEA, Gif-sur-Yvette (France)

    2006-07-01

    In an accompanying paper it is shown that, theoretically, a zero-variance Monte Carlo scheme can be devised for criticality calculations if the space-, energy- and direction-dependent adjoint function is exactly known. This requires biasing of the transition and collision kernels with the appropriate adjoint function. In this paper it is discussed how an existing general purpose Monte Carlo code like TRIPOLI can be modified to approach the zero-variance scheme. This requires modifications for reading in the adjoint function obtained from a separate deterministic calculation for a number of space intervals, energy groups and discrete directions. Furthermore, a function has to be added to supply the direction-dependent and the averaged adjoint function at a specific position in the system by interpolation. The initial particle weights of a certain batch must be set inversely proportional to the averaged adjoint function, and proper normalization of the initial weights must be secured. The sampling of the biased transition kernel requires cumulative integrals of the biased kernel along the flight path until a certain value, depending on a selected random number, is reached to determine a new collision site. The weight of the particle must be adapted accordingly. The sampling of the biased collision kernel (in a multigroup treatment) is much more like the normal sampling procedure. A numerical example is given for a 3-group calculation with a simplified transport model (two-direction model), demonstrating that the zero-variance scheme can be approximated quite well for this simplified case. (authors)
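
    One ingredient described above, setting the initial particle weights inversely proportional to the averaged adjoint function at the birth sites and normalizing the batch, can be sketched as follows (the adjoint values and batch size are placeholders).

      import numpy as np

      rng = np.random.default_rng(4)
      batch_size = 1000

      # Placeholder: averaged adjoint (importance) value at each particle's birth cell
      adjoint_at_birth = rng.uniform(0.5, 2.0, batch_size)

      weights = 1.0 / adjoint_at_birth                  # inversely proportional to the adjoint
      weights *= batch_size / weights.sum()             # normalize so the total batch weight is preserved

      print("total weight:", weights.sum(), " min/max weight:", weights.min(), weights.max())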

  9. Development and evaluation of a multi-locus sequence typing scheme for Mycoplasma synoviae.

    Science.gov (United States)

    Dijkman, R; Feberwee, A; Landman, W J M

    2016-08-01

    Reproducible molecular Mycoplasma synoviae typing techniques with sufficient discriminatory power may help to expand knowledge on its epidemiology and contribute to the improvement of control and eradication programmes of this mycoplasma species. The present study describes the development and validation of a novel multi-locus sequence typing (MLST) scheme for M. synoviae. Thirteen M. synoviae isolates originating from different poultry categories, farms and lesions, were subjected to whole genome sequencing. Their sequences were compared to that of M. synoviae reference strain MS53. A high number of single nucleotide polymorphisms (SNPs) indicating considerable genetic diversity were identified. SNPs were present in over 40 putative target genes for MLST, of which five target genes were selected (nanA, uvrA, lepA, ruvB and ugpA) for the MLST scheme. This scheme was evaluated analysing 209 M. synoviae samples from different countries, categories of poultry, farms and lesions. Eleven clonal clusters and 76 different sequence types (STs) were obtained. Clustering occurred following geographical origin, supporting the hypothesis of regional population evolution. M. synoviae samples obtained from epidemiologically linked outbreaks often harboured the same ST. In contrast, multiple M. synoviae lineages were found in samples originating from swollen joints or oviducts from hens that produce eggs with eggshell apex abnormalities, indicating that further research is needed to identify the genetic factors of M. synoviae that may explain its variations in tissue tropism and disease inducing potential. Furthermore, MLST proved to have a higher discriminatory power compared to variable lipoprotein and haemagglutinin A typing, which generated 50 different genotypes on the same database.
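
    The bookkeeping behind an MLST scheme can be sketched as follows: each isolate receives an allele number per locus (here the five loci named above), and the resulting allele profile maps to a sequence type (ST). The allele databases and toy sequences below are invented for illustration.

      LOCI = ["nanA", "uvrA", "lepA", "ruvB", "ugpA"]

      # Hypothetical allele databases: sequence -> allele number, built incrementally
      allele_db = {locus: {} for locus in LOCI}
      profile_db = {}                                   # allele profile (tuple) -> ST number

      def assign_st(isolate_seqs):
          """Return (ST, allele profile) for a dict {locus: sequence}, adding new alleles/STs as needed."""
          profile = []
          for locus in LOCI:
              seqs = allele_db[locus]
              allele = seqs.setdefault(isolate_seqs[locus], len(seqs) + 1)
              profile.append(allele)
          profile = tuple(profile)
          return profile_db.setdefault(profile, len(profile_db) + 1), profile

      # Toy isolates (sequences shortened to short strings for illustration)
      iso1 = {"nanA": "ATGC", "uvrA": "GGTA", "lepA": "TTAC", "ruvB": "CCGA", "ugpA": "AATG"}
      iso2 = {"nanA": "ATGC", "uvrA": "GGTT", "lepA": "TTAC", "ruvB": "CCGA", "ugpA": "AATG"}
      print(assign_st(iso1))   # (1, (1, 1, 1, 1, 1))
      print(assign_st(iso2))   # differs at uvrA -> new allele and a new ST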

  10. A THz Tomography System for Arbitrarily Shaped Samples

    Science.gov (United States)

    Stübling, E.; Bauckhage, Y.; Jelli, E.; Fischer, B.; Globisch, B.; Schell, M.; Heinrich, A.; Balzer, J. C.; Koch, M.

    2017-10-01

    We combine a THz time-domain spectroscopy system with a robotic arm. With this scheme, the THz emitter and receiver can be positioned perpendicular to, and at a defined distance from, the sample surface. Our system allows the acquisition of reflection-mode THz tomographic images of samples with an arbitrarily shaped surface.

  11. Estimating pesticide sampling rates by the polar organic chemical integrative sampler (POCIS) in the presence of natural organic matter and varying hydrodynamic conditions

    International Nuclear Information System (INIS)

    Charlestra, Lucner; Amirbahman, Aria; Courtemanch, David L.; Alvarez, David A.; Patterson, Howard

    2012-01-01

    The polar organic chemical integrative sampler (POCIS) was calibrated to monitor pesticides in water under controlled laboratory conditions. The effect of natural organic matter (NOM) on the sampling rates (Rs) was evaluated in microcosms containing different levels of total organic carbon (TOC). The effect of hydrodynamics was studied by comparing Rs values measured in stirred (SBE) and quiescent (QBE) batch experiments and a flow-through system (FTS). The level of NOM in the water used in these experiments had no effect on the magnitude of the pesticide sampling rates (p > 0.05). However, flow velocity and turbulence significantly increased the sampling rates of the pesticides in the FTS and SBE compared to the QBE (p < 0.001). The calibration data generated can be used to derive pesticide concentrations in water from POCIS deployed in stagnant and turbulent environmental systems without correction for NOM. - Highlights: ► We assessed the effect of TOC and stirring on pesticide sampling rates by POCIS. ► Total organic carbon (TOC) had no effect on the sampling rates. ► Water flow and stirring significantly increased the magnitude of the sampling rates. ► The sampling rates generated are directly applicable to field conditions. - This study provides POCIS sampling rate data that can be used to estimate freely dissolved concentrations of toxic pesticides in natural waters.
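
    In practice, the calibrated sampling rates are used through the standard integrative-sampler relation Cw = Ns/(Rs·t). A minimal sketch follows; the function and variable names are illustrative, and the calculation assumes the sampler remained in the linear (integrative) uptake regime.

```python
def tw_average_concentration(n_sorbed_ng, rs_l_per_day, days):
    """Time-weighted-average water concentration (ng/L) from a POCIS extract.

    n_sorbed_ng  : analyte mass accumulated on the sorbent (ng)
    rs_l_per_day : laboratory-derived sampling rate Rs (L/day)
    days         : deployment time (days)
    Assumes integrative (linear) uptake over the whole deployment.
    """
    return n_sorbed_ng / (rs_l_per_day * days)

# Hypothetical example: 120 ng of a pesticide on the sorbent, Rs = 0.24 L/day, 28-day deployment
print(tw_average_concentration(120.0, 0.24, 28))   # about 17.9 ng/L
```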

  12. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil, sclera, eyelashes, and eyebrows, in a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from the eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides a more effective and efficient iris segmentation than other conventional methods.
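
    The Delogne-Kåsa step mentioned above belongs to the family of algebraic least-squares circle fits, which reduce circle fitting to a small linear system instead of a Hough-transform voting scheme. The sketch below shows a generic Kåsa-type fit, not the authors' implementation; the test points are synthetic stand-ins for an iris boundary.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic least-squares circle fit (Kasa/Delogne-Kasa family).

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense;
    the centre is (a, b) and the radius is sqrt(c + a^2 + b^2).
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Noisy points on a circle of radius 60 centred at (100, 120) -- a stand-in iris boundary
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
x = 100 + 60 * np.cos(t) + rng.normal(0.0, 1.0, t.size)
y = 120 + 60 * np.sin(t) + rng.normal(0.0, 1.0, t.size)
print(kasa_circle_fit(x, y))     # approximately (100, 120, 60)
```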

  13. Analysis of Program Obfuscation Schemes with Variable Encoding Technique

    Science.gov (United States)

    Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi

    Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
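
    As a concrete illustration of variable encoding, a toy XOR-encoding transformation keeps a variable in memory only in encoded form (x XOR key) and decodes it just before each use, preserving functionality. This is a deliberately simplified sketch of the general idea; the key, the example function and the placement of the encode/decode operations are assumptions, not the transformation rules of the paper.

```python
KEY = 0xA5   # hypothetical encoding key fixed at obfuscation time

def original_counter(n):
    total = 0
    for i in range(n):
        total = total + i
    return total

def obfuscated_counter(n):
    # 'total' never appears in the clear: it is stored as total ^ KEY.
    enc_total = 0 ^ KEY
    for i in range(n):
        clear = enc_total ^ KEY           # decode just before the variable is used
        enc_total = (clear + i) ^ KEY     # re-encode immediately after the update
    return enc_total ^ KEY                # decode once at the program boundary

assert original_counter(10) == obfuscated_counter(10) == 45
```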

  14. Efficient multiparty quantum-secret-sharing schemes

    International Nuclear Information System (INIS)

    Xiao Li; Deng Fuguo; Long Guilu; Pan Jianwei

    2004-01-01

    In this work, we generalize the quantum-secret-sharing scheme of Hillery, Buzek, and Berthiaume [Phys. Rev. A 59, 1829 (1999)] into arbitrary multiparties. Explicit expressions for the shared secret bit is given. It is shown that in the Hillery-Buzek-Berthiaume quantum-secret-sharing scheme the secret information is shared in the parity of binary strings formed by the measured outcomes of the participants. In addition, we have increased the efficiency of the quantum-secret-sharing scheme by generalizing two techniques from quantum key distribution. The favored-measuring-basis quantum-secret-sharing scheme is developed from the Lo-Chau-Ardehali technique [H. K. Lo, H. F. Chau, and M. Ardehali, e-print quant-ph/0011056] where all the participants choose their measuring-basis asymmetrically, and the measuring-basis-encrypted quantum-secret-sharing scheme is developed from the Hwang-Koh-Han technique [W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A 244, 489 (1998)] where all participants choose their measuring basis according to a control key. Both schemes are asymptotically 100% in efficiency, hence nearly all the Greenberger-Horne-Zeilinger states in a quantum-secret-sharing process are used to generate shared secret information

  15. Interacting dark matter disguised as warm dark matter

    International Nuclear Information System (INIS)

    Boehm, Celine; Riazuelo, Alain; Hansen, Steen H.; Schaeffer, Richard

    2002-01-01

    We explore some of the consequences of dark-matter-photon interactions on structure formation, focusing on the evolution of cosmological perturbations and performing both an analytical and a numerical study. We compute the cosmic microwave background anisotropies and matter power spectrum in this class of models. We find, as the main result, that when dark matter and photons are coupled, dark matter perturbations can experience a new damping regime in addition to the usual collisional Silk damping effect. Such dark matter particles (having quite large photon interactions) behave like cold dark matter or warm dark matter as far as the cosmic microwave background anisotropies or matter power spectrum are concerned, respectively. These dark-matter-photon interactions leave specific imprints at sufficiently small scales on both of these two spectra, which may allow us to put new constraints on the acceptable photon-dark-matter interactions. Under the conservative assumption that the abundance of 10¹² M⊙ galaxies is correctly given by the cold dark matter, and without any knowledge of the abundance of smaller objects, we obtain the limit on the ratio of the dark-matter-photon cross section to the dark matter mass σ_γ-DM/m_DM ≲ 10⁻⁶ σ_Th/(100 GeV) ≅ 6×10⁻³³ cm² GeV⁻¹.

  16. How to conduct External Quality Assessment Schemes for the pre-analytical phase?

    Science.gov (United States)

    Kristensen, Gunn B B; Aakre, Kristin Moberg; Kristoffersen, Ann Helen; Sandberg, Sverre

    2014-01-01

    In laboratory medicine, several studies have described the most frequent errors in the different phases of the total testing process, and a large proportion of these errors occur in the pre-analytical phase. Schemes for registration of errors and subsequent feedback to the participants have been conducted for decades concerning the analytical phase by External Quality Assessment (EQA) organizations operating in most countries. The aim of the paper is to present an overview of different types of EQA schemes for the pre-analytical phase, and give examples of some existing schemes. So far, very few EQA organizations have focused on the pre-analytical phase, and most EQA organizations do not offer pre-analytical EQA schemes (EQAS). It is more difficult to perform and standardize pre-analytical EQAS and also, accreditation bodies do not ask the laboratories for results from such schemes. However, some ongoing EQA programs for the pre-analytical phase do exist, and some examples are given in this paper. The methods used can be divided into three different types; collecting information about pre-analytical laboratory procedures, circulating real samples to collect information about interferences that might affect the measurement procedure, or register actual laboratory errors and relate these to quality indicators. These three types have different focus and different challenges regarding implementation, and a combination of the three is probably necessary to be able to detect and monitor the wide range of errors occurring in the pre-analytical phase.

  17. Gamma spectrometry; level schemes

    International Nuclear Information System (INIS)

    Blachot, J.; Bocquet, J.P.; Monnand, E.; Schussler, F.

    1977-01-01

    The research presented dealt with: a new beta emitter, an isomer of 131Sn; the 136I levels fed through the radioactive decay of 136Te (20.9 s); the A=145 chain (β decay of Ba, La and Ce, and level schemes for 145La, 145Ce, 145Pr); and the A=147 chain (La and Ce β decay, and the level schemes of 147Ce and 147Pr) [fr]

  18. Developing and supporting coordinators of structured mentoring schemes in South Africa

    Directory of Open Access Journals (Sweden)

    Penny Abbott

    2010-10-01

    Research purpose: The aim of this research is to discover what the characteristics of coordinators of structured mentoring schemes in South Africa are, what is required of such coordinators and how they feel about their role, with a view to improving development and support for them. Motivation for the study: The limited amount of information about role requirements for coordinators which is available in the literature is not based on empirical research. This study aims to supply the empirical basis for improved development and support for coordinators. Research design and method: A purposive sample of 25 schemes was identified and both quantitative and qualitative data, obtained through questionnaires and interviews, were analysed using descriptive statistics and thematic analysis. Main findings: Functions of coordinators tend to be similar across different types of mentoring schemes. A passion for mentoring is important, as the role involves many frustrations. There is little formalised development and support for coordinators. Practical/managerial implications: The study clarifies the functions of the coordinator, offers a job description and profile and makes suggestions on how to improve the development of the coordinator’s skills. Contribution/value-add: An understanding of what is required from a coordinator, how the necessary knowledge and skills can be developed and how the coordinator can be supported, adds value to an organisation setting up or reviewing its structured mentoring schemes.

  19. Modeling and Analysis of Resonance in LCL-Type Grid-Connected Inverters under Different Control Schemes

    Directory of Open Access Journals (Sweden)

    Yanxue Yu

    2017-01-01

    As a basic building block in power systems, the three-phase voltage-source inverter (VSI) connects the distributed energy to the grid. For the inductor-capacitor-inductor (LCL) filter three-phase VSI, there are mainly four control schemes, according to different current sampling positions and different reference frames. Different control schemes present different impedance characteristics in their corresponding frequency ranges. To analyze the existing resonance phenomena due to the variation of grid impedances, the sequence impedance models of LCL-type grid-connected three-phase inverters under different control schemes are presented using the harmonic linearization method. The impedance-based stability analysis approach is then applied to compare the relative stability issues due to the impedance differences at some frequencies and to choose the best control scheme and the better method for regulating the controller parameters for the LCL-type three-phase VSI. The simulation and experiments both validate the resonance analysis results.
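
    The resonance being analyzed originates in the LCL filter itself; for orientation, the undamped filter resonance frequency follows from the filter inductances and capacitance, as in the sketch below. The formula is the standard textbook expression and the parameter values are purely illustrative.

```python
import math

def lcl_resonance_hz(l_inverter, l_grid, c_filter):
    """Undamped resonance frequency of an LCL filter (external grid impedance neglected)."""
    return math.sqrt((l_inverter + l_grid) / (l_inverter * l_grid * c_filter)) / (2.0 * math.pi)

# Illustrative values only: L1 = 2 mH, L2 = 1 mH, Cf = 10 uF
print(round(lcl_resonance_hz(2e-3, 1e-3, 10e-6)))   # roughly 1950 Hz
```

    Any grid inductance adds in series with the grid-side inductor and shifts this resonance downward, which is why variations in grid impedance interact with the choice of control scheme.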

  20. Coordinated renewable energy support schemes

    DEFF Research Database (Denmark)

    Morthorst, P.E.; Jensen, S.G.

    2006-01-01

    . The first example covers countries with regional power markets that also regionalise their support schemes, the second countries with separate national power markets that regionalise their support schemes. The main findings indicate that the almost ideal situation exists if the region prior to regionalising...

  1. Scheme for analysis of oily waters

    Energy Technology Data Exchange (ETDEWEB)

    Lysyj, I.; Rushworth, R.; Melvold, R.; Russell, E.C.

    1980-01-01

    A scheme is described for gross and detailed chemical characterization of oily waters. Total, suspended, and dissolved organic content and hydrocarbon levels of the sample are determined. Volatile and water-soluble fractions are characterized in greater detail. Lower aliphatic and aromatic hydrocarbons are separated from the water by nitrogen sparging and are collected in an activated carbon absorption column. They are then extracted into carbon disulfide and analyzed gas chromatographically. The water-soluble fraction is extracted into chloroform or adsorbed on Amberlite XAD type of resin. Class characterization of this fraction is performed using the HPLC procedure. GC-MS-C is used for detailed analysis. The methodology was used for studying the effectiveness of bilge and ballast water treatments.

  2. Asynchronous Channel-Hopping Scheme under Jamming Attacks

    Directory of Open Access Journals (Sweden)

    Yongchul Kim

    2018-01-01

    Cognitive radio networks (CRNs) are considered an attractive technology to mitigate inefficiency in the usage of licensed spectrum. CRNs allow the secondary users (SUs) to access the unused licensed spectrum and use a blind rendezvous process to establish communication links between SUs. In particular, quorum-based channel-hopping (CH) schemes have been studied recently to provide guaranteed blind rendezvous in decentralized CRNs without using global time synchronization. However, these schemes remain vulnerable to jamming attacks. In this paper, we first analyze the limitations of quorum-based rendezvous schemes called asynchronous channel hopping (ACH). Then, we introduce a novel sequence sensing jamming attack (SSJA) model in which a sophisticated jammer can dramatically reduce the rendezvous success rates of ACH schemes. In addition, we propose a fast and robust asynchronous rendezvous scheme (FRARS) that can significantly enhance robustness under jamming attacks. Our numerical results demonstrate that the performance of the proposed scheme vastly outperforms the ACH scheme when there are security concerns about a sequence sensing jammer.

  3. Product forms in Gabor analysis for a quincunx-type sampling geometry

    NARCIS (Netherlands)

    Bastiaans, M.J.; Leest, van A.J.; Veen, J.P.

    1998-01-01

    Recently a new sampling lattice - the quincunx lattice - has been introduced [1] as a sampling geometry in the Gabor scheme, which geometry is different from the traditional rectangular sampling geometry. In this paper we will show how results that hold for rectangular sampling (see, for instance,

  4. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Science.gov (United States)

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes
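
    A simple way to picture the variable-density strategies compared above is a random phase-encode mask whose sampling probability decays with distance from the k-space centre. The sketch below is a generic two-dimensional variable-density mask generator; the decay law, its exponent and the matrix sizes are assumptions and do not reproduce the authors' sequences.

```python
import numpy as np

def variable_density_mask(ny, nz, undersampling, power=2.0, seed=0):
    """Random 2D phase-encode mask whose sampling density falls off from the k-space centre.

    undersampling : target fraction of phase-encode points to keep (e.g. 0.25)
    power         : how quickly the density decays with normalized k-space radius
    """
    ky = np.linspace(-1.0, 1.0, ny)[:, None]
    kz = np.linspace(-1.0, 1.0, nz)[None, :]
    radius = np.sqrt(ky ** 2 + kz ** 2) / np.sqrt(2.0)
    density = (1.0 - radius) ** power
    density *= undersampling * ny * nz / density.sum()    # scale to the sampling budget
    rng = np.random.default_rng(seed)
    return rng.random((ny, nz)) < np.clip(density, 0.0, 1.0)

mask = variable_density_mask(128, 96, undersampling=0.25)
print(mask.mean())    # close to 0.25 overall, denser near the centre of k-space
```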

  5. Knowledge and uptake of community-based health insurance scheme among residents of Olowora, Lagos

    Directory of Open Access Journals (Sweden)

    O A Ibukun

    2013-01-01

    Background and Objective: The informal sector population in developing nations has low health coverage from Community Based Health Insurance (CBHI), and problems such as limited awareness about the potential impact of prepayment health financing and the limited resources to finance health care can impede success. This study assessed the uptake of the community-based health insurance scheme and its determinants in Olowora, Lagos State. Methods: This was a descriptive cross-sectional study carried out in July 2010 in all households of 12 out of 41 streets in Olowora, by multistage sampling. Four hundred and sixteen interviewer-administered questionnaires were completed and returned. Analysis was by Epi-Info version 3.5.1 software. Results: Although 75.5% of respondents were aware of the Community Health Insurance scheme at Olowora, just about half (49.5%) of them had good knowledge of the scheme. A substantial proportion (44.2%) of respondents did not believe in contributing money for illness yet to come, and the majority (72.3%) of such respondents prefer payment for health care when ill. While about half (53%) of respondents had enrolled into the community health insurance scheme, 45.6% of those who had not enrolled were not aware of the scheme. Lack of money was the main reason (51.5%) why some enrollees had defaulted. Conclusion: The study identified information gaps and poor understanding of the scheme as well as poverty as factors that have negatively affected uptake. The scheme management has to re-evaluate its sensitization programmes, and also strengthen marketing strategies with special emphasis on the poor.

  6. The Function of Credit Scheme to Improve Family Income among Beef Cattle Farmers in Central Java Province

    Science.gov (United States)

    Prasetyo, E.; Ekowati, T.; Roessali, W.; Gayatri, S.

    2018-02-01

    The aims of the study were: (i) to identify beef cattle fattening credit schemes, (ii) to calculate and analyse beef cattle farmers’ income, and (iii) to analyse the factors through which the credit scheme influences farmers’ income. The research was conducted in five regencies in Central Java Province. The beef cattle fattening farm was taken as the elementary unit. A survey method was used, with two-stage cluster purposive sampling to determine the sample. Data were analysed using quantitative descriptive and inferential statistics, in terms of income analysis and multiple linear regression models. The results showed that farmers used their own capital to run the farm. The average amount was IDR 10,769,871. Kredit Ketahanan Pangan dan Energi was the credit scheme most commonly accessed by farmers. The average credit was IDR 23,312,200 per farmer, with a credit rate of 6.46%, a repayment period of 24.60 months and an estimated average collateral of IDR 35,800,00. The average farmers’ income was IDR 4,361,611.60 per 2.96 head of beef cattle per fattening period. If labour cost is not counted as a production cost, the farmers’ income was IDR 7,608,630.41; in other words, the farmers’ income increases by 74.44%. The credit-scheme factors with a partially significant influence on farmers’ income were the amount of own capital used and the value of the credit collateral. Meanwhile, the name of the credit scheme, the financing institution acting as creditor, the amount of credit, the credit rate and the repayment period did not significantly influence farmers’ income.

  7. Multi-fractal analysis and lacunarity spectrum of the dark matter haloes in the SDSS-DR7

    International Nuclear Information System (INIS)

    Chacón-Cardona, C.A.; Casas-Miranda, R.A.; Muñoz-Cuartas, J.C.

    2016-01-01

    Highlights: • We analysed the dark matter in the Seventh Data Release of the Sloan Digital Sky Survey. • From the initial sample with 412,468 galaxies, 339,505 dark matter haloes were used. • We found the multifractal and lacunarity spectra as functions of radial distance. • The dark matter set did not reach the physical dimension of the space. - Abstract: The dark matter halo distribution of the nearby universe is used to study the fractal behaviour of the proximate universe. The data, based on four volume-limited galaxy samples, were obtained by Muñoz-Cuartas and Mueller (2012) from the Seventh Data Release of the Sloan Digital Sky Survey (SDSS-DR7). In order to characterize the fractal behaviour of the observed universe, 339,505 dark matter haloes drawn from the initial sample, which contains 412,468 galaxies, were used as input for the fractal calculations. Using these data, we apply the sliding-window technique to the dark matter distribution, compute the multi-fractal dimension and the lacunarity spectrum, and study their dependence on radial distance in every sample. The transition to homogeneity is not observed in the dark matter halo distribution obtained from the SDSS-DR7 volume-limited galaxy samples; instead, the dark matter halo distribution exhibits a persistent multi-fractal behaviour in which the measured dimension does not reach the value of the physical dimension of the space, for all structure parameter values of the analysed set, at least up to radial distances of the order of 165 Mpc/h from the available centres of each sample. Our results and their implications are discussed in the context of the formation of large-scale structures in the universe.

  8. A fast resonance interference treatment scheme with subgroup method

    International Nuclear Information System (INIS)

    Cao, L.; He, Q.; Wu, H.; Zu, T.; Shen, W.

    2015-01-01

    A fast Resonance Interference Factor (RIF) scheme is proposed to treat the resonance interference effects between different resonance nuclides. This scheme utilizes the conventional subgroup method to evaluate the self-shielded cross sections of the dominant resonance nuclide in the heterogeneous system and the hyper-fine energy group method to represent the resonance interference effects in a simplified homogeneous model. In this paper, the newly implemented scheme is compared to the background iteration scheme, the Resonance Nuclide Group (RNG) scheme and the conventional RIF scheme. The numerical results show that the errors of the effective self-shielded cross sections are significantly reduced by the fast RIF scheme compared with the background iteration scheme and the RNG scheme. Besides, the fast RIF scheme consumes less computation time than the conventional RIF schemes. The speed-up ratio is ~4.5 for MOX pin cell problems. (author)

  9. Arbitrated quantum signature scheme with message recovery

    International Nuclear Information System (INIS)

    Lee, Hwayean; Hong, Changho; Kim, Hyunsang; Lim, Jongin; Yang, Hyung Jin

    2004-01-01

    Two quantum signature schemes with message recovery relying on the availability of an arbitrator are proposed. One scheme uses a public board and the other does not. However, both schemes provide confidentiality of the message and a higher efficiency in transmission.

  10. CANONICAL BACKWARD DIFFERENTIATION SCHEMES FOR ...

    African Journals Online (AJOL)

    This paper describes new nonlinear backward differentiation schemes for the numerical solution of nonlinear initial value problems of first-order ordinary differential equations. The schemes are based on rational interpolation obtained from canonical polynomials. They are A-stable. The test problems show that they give ...

  11. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey

    Directory of Open Access Journals (Sweden)

    Mirel Lisa B.

    2017-06-01

    Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce the data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder to reach population.
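
    A common starting point for the kind of substratum allocation described above is Neyman-type optimal allocation, which distributes the sample proportionally to each stratum's size times its expected variability. The sketch below is a generic illustration with hypothetical strata, not the article's actual allocation framework or MEPS data.

```python
def neyman_allocation(total_n, stratum_sizes, stratum_sds):
    """Allocate a fixed sample size across strata proportionally to N_h * S_h."""
    weights = [n_h * s_h for n_h, s_h in zip(stratum_sizes, stratum_sds)]
    total_weight = sum(weights)
    return [round(total_n * w / total_weight) for w in weights]

# Hypothetical substrata built from linked-survey paradata: sizes and expected variability
sizes = [5000, 3000, 2000]
sds = [0.40, 0.25, 0.15]
print(neyman_allocation(1000, sizes, sds))   # [656, 246, 98]; rounding may not always sum to total_n
```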

  12. Non-invasive localization of organic matter in soil aggregates using SR-μCT

    Science.gov (United States)

    Peth, Stephan; Mordhorst, Anneka; Chenu, Claire; Uteau Puschmann, Daniel; Garnier, Patricia; Nunan, Naoise; Pot, Valerie; Beckmann, Felix; Ogurreck, Malte

    2014-05-01

    Knowledge of the location of soil organic matter (SOM) and its spatial association to soil structure is an important step in improving modeling approaches for simulating organic matter turnover processes. Advanced models for carbon mineralization are able to account for the 3D distribution of SOM which is assumed to influence mineralisation. However, their application is still limited by the fact that no method exists to non-invasively determine the 3D spatial distribution of SOM in structured soils. SR-based X-ray microtomography (SR-µCT) is an advanced and promising tool in gaining knowledge on the 3-dimensional organization of soil phases (minerals, organic matter, water, air) which on a voxel level could be implemented into spatially explicit models. However, since the contrast of linear attenuation coefficients of soil organic matter on the one hand and mineral components and water on the other hand are relatively low, especially when materials are finely dispersed, organic matter within the soil pore space is often not resolved in ordinary X-ray absorption contrast imaging. To circumvent this problem we have developed a staining procedure for organic matter using Osmium-tetroxide since Osmium is an element with an absorption edge at a higher X-ray energy level. Osmium is known from transmission electron microscopy analysis (TEM) to stain organic matter specifically and irreversibly while having an absorption edge at approximately 74 keV. We report on the application of a novel Osmium vapor staining method to analyze differences in organic matter content and identify small scale spatial distribution of SOM in soil aggregates. To achieve this we have taken soil aggregate samples (6-8 mm across) obtained from arable soils differing in soil management. Aggregate samples were investigated by synchrotron-based X-ray microtomography (SR-µCT) after staining the sample with Osmium-tetroxide (OsO4) vapor. We utilized the monochromatic X-ray beam to locate osmium

  13. Occurrence of benzothiazole, benzotriazole and benzenesulfonamide derivates in outdoor air particulate matter samples and human exposure assessment.

    Science.gov (United States)

    Maceira, Alba; Marcé, Rosa Maria; Borrull, Francesc

    2018-02-01

    Benzothiazole (BTHs), benzotriazole (BTRs) and benzenesulfonamide (BSAs) derivates are high-production-volume chemicals used in several industrial and household applications; their occurrence in various environments, especially water and air, is therefore expected. In this study we developed a method based on gas chromatography-mass spectrometry (GC-MS) combined with pressurised liquid extraction (PLE) to simultaneously determine four BTR, five BTH and six BSA derivates in the particulate matter (PM10) of outdoor air samples collected on quartz fibre filters (QFFs). To the best of our knowledge, this is the first time these compounds have been determined in open ambient environments. Under optimised conditions, method recoveries at the lower and upper concentration levels (0.8 and 4.2 ng m⁻³) ranged from 70 to 120%, except for 1-H-benzothiazole and 2-chlorobenzothiazole, which were about 50%. The repeatability of the method was usually below 20% (n = 3, %RSD) for both concentration levels. This method enables the contaminants to be detected at pg m⁻³ concentration levels. Several samples from two different sites influenced by local industries showed that BTRs, followed by BTHs, were the most frequently detected compounds, whereas BSAs were hardly found. The most frequently determined compounds were 1-H-benzothiazole, 2-chlorobenzothiazole, 1-H-benzotriazole, 2-hydroxibenzothiazole, 5,6-dimethyl-1-H-benzotriazole and the isomers 4- and 5-methyl-1-H-benzotriazole. With the concentrations found, the human exposure assessment and health risk characterization via ambient inhalation were also carried out, taking into account different subpopulation groups classified by age for the two sampling points. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Conserving relativistic many-body approach: Equation of state, spectral function, and occupation probabilities of nuclear matter

    International Nuclear Information System (INIS)

    de Jong, F.; Malfliet, R.

    1991-01-01

    Starting from a relativistic Lagrangian, we derive a ''conserving'' approximation for the description of nuclear matter. We show this to be a nontrivial extension of the relativistic Dirac-Brueckner scheme. The calculated saturation point of the equation of state agrees very well with the empirical saturation point. The conserving character of the approach is tested by means of the Hugenholtz--van Hove theorem. We find the theorem fulfilled very well around saturation. A new value for the compression modulus is derived, K=310 MeV. We also calculate the occupation probabilities at normal nuclear matter densities by means of the spectral function. The average depletion κ of the Fermi sea is found to be κ∼0.11.

  15. A simple angular transmit diversity scheme using a single RF frontend for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama Nafeth Saleem; Papadias, Constantinos B.; Kalis, Antonis

    2009-01-01

    array (SPA) with a single transceiver, and an array area of 0.0625 square wavelengths. The scheme which requires no channel state information (CSI) at the transmitter, provides mainly a diversity gain to combat against multipath fading. The performance/capacity of the proposed diversity scheme...

  16. Evaluating statistical cloud schemes

    OpenAIRE

    Grützun, Verena; Quaas, Johannes; Morcrette , Cyril J.; Ament, Felix

    2015-01-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...

  17. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
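
    For orientation, the core nested-sampling loop that such enhancements build on can be written compactly: live points are repeatedly replaced by draws from the prior constrained to higher likelihood, while the evidence is accumulated from the estimated prior-volume shrinkage. The sketch below uses a naive rejection step for the constrained draw, which is precisely the part that superposition enhancement and related methods replace; the toy likelihood and all parameters are assumptions.

```python
import numpy as np

def nested_sampling(log_like, prior_sample, n_live=50, n_iter=300, seed=0):
    """Minimal nested-sampling estimate of log Z, where Z is the integral of L over the prior."""
    rng = np.random.default_rng(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    live_ll = np.array([log_like(p) for p in live])
    log_z, log_x_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_ll))
        log_x = -i / n_live                                   # expected log prior-volume shrinkage
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))    # weight of the discarded shell
        log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
        threshold = live_ll[worst]
        while True:                                           # naive constrained-prior draw (rejection)
            cand = prior_sample(rng)
            cand_ll = log_like(cand)
            if cand_ll > threshold:
                break
        live[worst], live_ll[worst] = cand, cand_ll
        log_x_prev = log_x
    return log_z

# Toy 2D Gaussian likelihood with a uniform prior on [-5, 5]^2
log_like = lambda p: -0.5 * np.sum(p ** 2)
prior_sample = lambda rng: rng.uniform(-5.0, 5.0, size=2)
print(nested_sampling(log_like, prior_sample))   # near log(2*pi/100) ~ -2.77, up to truncation and noise
```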

  18. A Classification Scheme for Analyzing Mobile Apps Used to Prevent and Manage Disease in Late Life

    Science.gov (United States)

    Wang, Aiguo; Lu, Xin; Chen, Hongtu; Li, Changqun; Levkoff, Sue

    2014-01-01

    Background There are several mobile apps that offer tools for disease prevention and management among older adults, and promote health behaviors that could potentially reduce or delay the onset of disease. A classification scheme that categorizes apps could be useful to both older adult app users and app developers. Objective The objective of our study was to build and evaluate the effectiveness of a classification scheme that classifies mobile apps available for older adults in the “Health & Fitness” category of the iTunes App Store. Methods We constructed a classification scheme for mobile apps according to three dimensions: (1) the Precede-Proceed Model (PPM), which classifies mobile apps in terms of predisposing, enabling, and reinforcing factors for behavior change; (2) health care process, specifically prevention versus management of disease; and (3) health conditions, including physical health and mental health. Content analysis was conducted by the research team on health and fitness apps designed specifically for older adults, as well as those applicable to older adults, released during the months of June and August 2011 and August 2012. Face validity was assessed by a different group of individuals, who were not related to the study. A reliability analysis was conducted to confirm the accuracy of the coding scheme of the sample apps in this study. Results After applying sample inclusion and exclusion criteria, a total of 119 apps were included in the study sample, of which 26/119 (21.8%) were released in June 2011, 45/119 (37.8%) in August 2011, and 48/119 (40.3%) in August 2012. Face validity was determined by interviewing 11 people, who agreed that this scheme accurately reflected the nature of this application. The entire study sample was successfully coded, demonstrating satisfactory inter-rater reliability by two independent coders (95.8% initial concordance and 100% concordance after consensus was reached). The apps included in the study sample

  19. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) for various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi’s linear systematic samplin...
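
    A minimal sketch of linear systematic sampling with multiple random starts in the spirit of Gautschi (1957) is given below: the sampling interval is widened by the number of starts, and each start contributes every k'-th unit. The start-selection details are a simplified illustration, not the exact estimators compared in the paper.

```python
import random

def lss_multiple_starts(population, n, m, seed=0):
    """Linear systematic sample of size n built from m independent random starts.

    The sampling interval is widened to k' = m * N / n; each of the m starts
    contributes every k'-th unit, so every start supplies n/m units.
    """
    N = len(population)
    if n % m:
        raise ValueError("n must be divisible by m")
    interval = N * m // n
    rng = random.Random(seed)
    starts = rng.sample(range(interval), m)       # m distinct random starts in [0, k')
    sample = []
    for s in starts:
        sample.extend(population[s::interval][: n // m])
    return sample

population = list(range(1, 101))                  # toy population of 100 units
print(sorted(lss_multiple_starts(population, n=10, m=2)))
```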

  20. Short-term incentive schemes for hospital managers

    Directory of Open Access Journals (Sweden)

    Lucas Malambe

    2013-10-01

    Orientation: Short-term incentives, considered to be an extrinsic motivation, are commonly used to motivate performance. This study explored hospital managers’ perceptions of short-term incentives in maximising performance and retention. Research purpose: The study explored the experiences, views and perceptions of private hospital managers in South Africa regarding the use of short-term incentives to maximise performance and retention, as well as the applicability of the findings to public hospitals. Motivation for the study: Whilst there is an established link between performance reward schemes and organisational performance, there is little understanding of the effects of short-term incentives on the performance and retention of hospital managers within the South African context. Research design, approach, and method: The study used a qualitative research design: interviews were conducted with a purposive sample of 19 hospital managers, and a thematic content analysis was performed. Main findings: Short-term incentives may not be the primary motivator for hospital managers, but they do play a critical role in sustaining motivation. Participants indicated that these schemes could also be applicable to public hospitals. Practical/managerial implications: Hospital managers are inclined to be more motivated by intrinsic than extrinsic factors. However, hospital managers (as middle managers) also seem to be motivated by short-term incentives. A combination of intrinsic and extrinsic motivators should thus be used to maximise performance and retention. Contribution/value-add: Whilst the study sought to explore hospital managers’ perceptions of short-term incentives, it also found that an adequate balance between internal and external motivators is key to implementing an effective short-term incentive scheme.

  1. A comprehensive review on two-stage integrative schemes for the valorization of dark fermentative effluents.

    Science.gov (United States)

    Sivagurunathan, Periyasamy; Kuppam, Chandrasekhar; Mudhoo, Ackmez; Saratale, Ganesh D; Kadier, Abudukeremu; Zhen, Guangyin; Chatellard, Lucile; Trably, Eric; Kumar, Gopalakrishnan

    2017-12-21

    This review surveys alternative routes for the valorization of dark H2 fermentation effluents, which are mainly rich in volatile fatty acids such as acetate and butyrate. Various enhancement and alternative routes, such as photo fermentation, anaerobic digestion, microbial electrochemical systems and algal systems, are highlighted for the generation of bioenergy and electricity and for efficient organic matter utilization. In addition, various integration schemes and two-stage fermentation options for possible scale-up are reviewed. Moreover, recent progress in enhancing performance towards waste stabilization and in converting the high COD content of the organic source into value-added products is extensively discussed.

  2. LDPC-PPM Coding Scheme for Optical Communication

    Science.gov (United States)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
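
    The inner modulation step, mapping blocks of log2(M) coded bits to M-ary PPM symbols (one pulsed slot per symbol frame), can be sketched independently of the outer LDPC code. The following is a generic illustration; the symbol size and bit ordering are assumptions.

```python
import numpy as np

def bits_to_ppm(bits, M=16):
    """Map coded bits to M-ary PPM symbols (one pulsed slot out of M per symbol frame)."""
    k = int(np.log2(M))
    assert len(bits) % k == 0, "pad the coded bit stream to a multiple of log2(M)"
    frames = []
    for i in range(0, len(bits), k):
        slot = int("".join(str(b) for b in bits[i:i + k]), 2)   # k bits select the pulsed slot
        frame = np.zeros(M, dtype=int)
        frame[slot] = 1                                         # exactly one pulse per PPM frame
        frames.append(frame)
    return np.array(frames)

coded_bits = [1, 0, 1, 1, 0, 0, 1, 0]                 # two 16-PPM symbols' worth of coded bits
print(bits_to_ppm(coded_bits, M=16).nonzero()[1])     # pulsed slots: [11 2]
```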

  3. Multidimensional flux-limited advection schemes

    International Nuclear Information System (INIS)

    Thuburn, J.

    1996-01-01

    A general method for building multidimensional shape preserving advection schemes using flux limiters is presented. The method works for advected passive scalars in either compressible or incompressible flow and on arbitrary grids. With a minor modification it can be applied to the equation for fluid density. Schemes using the simplest form of the flux limiter can cause distortion of the advected profile, particularly sideways spreading, depending on the orientation of the flow relative to the grid. This is partly because the simple limiter is too restrictive. However, some straightforward refinements lead to a shape-preserving scheme that gives satisfactory results, with negligible grid-flow angle-dependent distortion
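
    The shape-preserving idea can be illustrated with the textbook one-dimensional version of a flux-limited upwind scheme (van Leer limiter, constant positive velocity, periodic domain). This is not the paper's multidimensional construction, only the 1D building block it generalizes; the grid, Courant number and initial profile are arbitrary choices.

```python
import numpy as np

def van_leer(r):
    """van Leer flux limiter (monotonicity preserving)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_step(q, c):
    """One step of 1D flux-limited upwind advection; c = u*dt/dx with 0 < c <= 1."""
    dq = np.roll(q, -1) - q                              # q_{i+1} - q_i
    dq_up = q - np.roll(q, 1)                            # q_i - q_{i-1}
    with np.errstate(divide="ignore", invalid="ignore"):
        r = np.where(dq != 0.0, dq_up / dq, 0.0)         # smoothness ratio
    # face flux (divided by u): upwind value plus limited anti-diffusive correction
    flux = q + 0.5 * (1.0 - c) * van_leer(r) * dq
    return q - c * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
q = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)            # square pulse, bounded by [0, 1]
for _ in range(300):
    q = advect_step(q, c=0.4)
print(q.min(), q.max())                                   # no new extrema are generated
```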

  4. Determination of mercury in airborne particulate matter collected on glass fiber filters using high-resolution continuum source graphite furnace atomic absorption spectrometry and direct solid sampling

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Rennan G.O., E-mail: rgoa01@terra.com.br [Laboratorio de Quimica Analitica Ambiental, Departamento de Quimica, Universidade Federal de Sergipe, Campus Sao Cristovao, 49.100-000, Sao Cristovao, SE (Brazil); Departamento de Quimica, Universidade Federal de Santa Catarina, 88040-900, Florianopolis, SC (Brazil); Vignola, Fabiola; Castilho, Ivan N.B. [Departamento de Quimica, Universidade Federal de Santa Catarina, 88040-900, Florianopolis, SC (Brazil); Borges, Daniel L.G.; Welz, Bernhard [Departamento de Quimica, Universidade Federal de Santa Catarina, 88040-900, Florianopolis, SC (Brazil); Instituto Nacional de Ciencia e Tecnologia do CNPq, INCT de Energia e Ambiente, Universidade Federal da Bahia, 40170-115 Salvador, BA (Brazil); Vale, Maria Goreti R. [Instituto Nacional de Ciencia e Tecnologia do CNPq, INCT de Energia e Ambiente, Universidade Federal da Bahia, 40170-115 Salvador, BA (Brazil); Instituto de Quimica, Universidade Federal do Rio Grande do Sul, 91501-970 Porto Alegre, RS (Brazil); Smichowski, Patricia [Comision Nacional de Energia Atomica (CNEA) and Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET), Buenos Aires (Argentina); Ferreira, Sergio L.C. [Instituto Nacional de Ciencia e Tecnologia do CNPq, INCT de Energia e Ambiente, Universidade Federal da Bahia, 40170-115 Salvador, BA (Brazil); Instituto de Quimica, Universidade Federal da Bahia, 40170-290, Salvador, BA (Brazil); Becker-Ross, Helmut [Leibniz-Institut fuer Analytische Wissenschaften-ISAS-e.V., Department Berlin, 12489 Berlin (Germany)

    2011-05-15

    A study has been undertaken to assess the capability of high-resolution continuum source graphite furnace atomic absorption spectrometry for the determination of mercury in airborne particulate matter (APM) collected on glass fiber filters using direct solid sampling. The main Hg absorption line at 253.652 nm was used for all determinations. The certified reference material NIST SRM 1648 (Urban Particulate Matter) was used to check the accuracy of the method, and good agreement was obtained between published and determined values. The characteristic mass was 22 pg Hg. The limit of detection (3σ), based on ten atomizations of an unexposed filter, was 40 ng g⁻¹, corresponding to 0.12 ng m⁻³ in the air for a typical air volume of 1440 m³ collected within 24 h. The limit of quantification was 150 ng g⁻¹, equivalent to 0.41 ng m⁻³ in the air. The repeatability of measurements was better than 17% RSD (n = 5). Mercury concentrations found in filter samples loaded with APM collected in Buenos Aires, Argentina, were between < 40 ng g⁻¹ and 381 ± 24 ng g⁻¹. These values correspond to a mercury concentration in the air between < 0.12 ng m⁻³ and 1.47 ± 0.09 ng m⁻³. The proposed procedure was found to be simple, fast and reliable, and suitable as a screening procedure for the determination of mercury in APM samples.

  5. Tightly Secure Signatures From Lossy Identification Schemes

    OpenAIRE

    Abdalla , Michel; Fouque , Pierre-Alain; Lyubashevsky , Vadim; Tibouchi , Mehdi

    2015-01-01

    In this paper, we present three digital signature schemes with tight security reductions in the random oracle model. Our first signature scheme is a particularly efficient version of the short exponent discrete log-based scheme of Girault et al. (J Cryptol 19(4):463–487, 2006). Our scheme has a tight reduction to the decisional short discrete logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the or...

  6. 40 CFR 761.130 - Sampling requirements.

    Science.gov (United States)

    2010-07-01

    ... 761.130 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL... sampling scheme is that it is designed to characterize the degree of contamination within the entire.... For this purpose, the numerical level of cleanup required for spills cleaned in accordance with § 761...

  7. Matter, dark matter, and anti-matter in search of the hidden universe

    CERN Document Server

    Mazure, Alain

    2012-01-01

    For over ten years, the dark side of the universe has been headline news. Detailed studies of the rotation of spiral galaxies, and 'mirages' created by clusters of galaxies bending the light from very remote objects, have convinced astronomers of the presence of large quantities of dark (unseen) matter in the cosmos. Moreover, in the 1990s, it was discovered that some four to five billion years ago the expansion of the universe entered a phase of acceleration. This implies the existence of dark energy. The nature of these 'dark' ingredients remains a mystery, but they seem to comprise about 95 percent of the matter/energy content of the universe. As for ordinary matter, although we are immersed in a sea of dark particles, including primordial neutrinos and photons from 'fossil' cosmological radiation, both we and our environment are made of ordinary, baryonic matter. Strangely, even if 15-20 percent of matter is baryonic matter, this represents only 4-5 percent of the total matter/energy content of the cosmos...

  8. Scheme of energy utilities

    International Nuclear Information System (INIS)

    2002-04-01

    This scheme defines the objectives for renewable energies and the rational use of energy within the framework of the national energy policy. It evaluates the needs and the potential of the regions and recommends actions to be coordinated between the government and the territorial organizations. The document is presented in four parts: the situation, stakes and forecasts; possible actions and new measures; management of the scheme; and analysis of the regional contributions. (A.L.B.)

  9. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
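
    The basic packet-combining idea referred to above can be sketched as follows: XOR-ing two erroneous copies exposes the bit positions where they disagree, and only those positions need to be searched against an error-detecting code. The CRC choice, framing and byte-level representation below are assumptions for illustration; when both copies are corrupted in the same bit the difference set misses that position and the search fails, which corresponds to failure mode (i) above.

```python
from itertools import product
import zlib

def packet_combine(copy1: bytes, copy2: bytes, expected_crc: int):
    """Recover a packet from two erroneous copies (basic packet-combining idea).

    Bits at which the two copies differ are the candidate error locations; every
    pattern of flips over those positions is tested against the CRC.
    """
    diff = [(i, 1 << b) for i in range(len(copy1))
            for b in range(8) if (copy1[i] ^ copy2[i]) & (1 << b)]
    for pattern in product((0, 1), repeat=len(diff)):
        candidate = bytearray(copy1)
        for flip, (i, mask) in zip(pattern, diff):
            if flip:
                candidate[i] ^= mask
        if zlib.crc32(bytes(candidate)) == expected_crc:
            return bytes(candidate)
    return None                                   # fails if both copies are hit in the same bit

original = b"sample packet payload"
good_crc = zlib.crc32(original)
c1 = bytearray(original); c1[3] ^= 0x04           # single-bit error in copy 1
c2 = bytearray(original); c2[10] ^= 0x20          # a different single-bit error in copy 2
print(packet_combine(bytes(c1), bytes(c2), good_crc))
```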

  10. Estimating plume dispersion: a comparison of several sigma schemes

    International Nuclear Information System (INIS)

    Irwin, J.S.

    1983-01-01

    The lateral and vertical Gaussian plume dispersion parameters are estimated and compared with field tracer data collected at 11 sites. The dispersion parameter schemes used in this analysis include Cramer's scheme, suggested for tall stack dispersion estimates, Draxler's scheme, suggested for elevated and surface releases, Pasquill's scheme, suggested for interim use in dispersion estimates, and the Pasquill--Gifford scheme using Turner's technique for assigning stability categories. The schemes suggested by Cramer, Draxler and Pasquill estimate the dispersion parameters using onsite measurements of the vertical and lateral wind-velocity variances at the effective release height. The performances of these schemes in estimating the dispersion parameters are compared with that of the Pasquill--Gifford scheme, using the Prairie Grass and Karlsruhe data. For these two experiments, the estimates of the dispersion parameters using Draxler's scheme correlate better with the measurements than did estimates using the Pasquill--Gifford scheme. Comparison of the dispersion parameter estimates with the measurement suggests that Draxler's scheme for characterizing the dispersion results in the smallest mean fractional error in the estimated dispersion parameters and the smallest variance of the fractional errors
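
    All of the sigma schemes compared feed the same Gaussian plume expression, so the comparison ultimately concerns how sigma_y and sigma_z are supplied. For reference, a ground-reflecting Gaussian plume concentration sketch is given below with placeholder sigma values; the numbers are purely illustrative and do not come from the tracer datasets.

```python
import math

def gaussian_plume(q, u, y, z, h_eff, sigma_y, sigma_z):
    """Gaussian plume concentration (g/m^3) with a reflecting ground term.

    q: emission rate (g/s), u: wind speed (m/s), y: crosswind offset (m),
    z: receptor height (m), h_eff: effective release height (m).
    """
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - h_eff) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h_eff) / sigma_z) ** 2))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical case: 10 g/s release at 50 m, ground-level receptor on the plume centreline;
# sigma_y and sigma_z are placeholders that any of the compared schemes would supply.
print(gaussian_plume(q=10.0, u=5.0, y=0.0, z=0.0, h_eff=50.0, sigma_y=80.0, sigma_z=40.0))
```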

  11. Progress of organic matter degradation and maturity of compost produced in a large-scale composting facility.

    Science.gov (United States)

    Nakasaki, Kiyohiko; Marui, Taketoshi

    2011-06-01

    To monitor the progress of organic matter degradation in a large-scale composting facility, the percentage of organic matter degradation was determined by measuring CO2 evolution during recomposting of compost samples withdrawn from the facility. The percentage of organic matter degradation was calculated as the ratio of the amount of CO2 evolved from compost raw material to that evolved from each sample during recomposting in the laboratory composting apparatus. It was assumed that the difference in the cumulative emission of CO2 between the compost raw material and a sample corresponds to the amount of CO2 evolved from the sample in the composting facility. Using this method, the changes in organic matter degradation during composting in practical large-scale composting facilities were estimated and it was found that the percentage of organic matter degradation increased more vigorously in the earlier stages than in the later stages of composting. The percentage of organic matter degradation finally reached 78 and 55% for the compost produced from garbage-animal manure mixture and distillery waste (shochu residue), respectively. It was thus ascertained that organic matter degradation progressed well in both composting facilities. Furthermore, by performing a plant growth assay, it was observed that the compost products of both the facilities did not inhibit seed germination and thus were useful in promoting plant growth.

  12. New optical scheme for differential measurements of diffraction reflection intensity on X-radiation sliding incidence

    International Nuclear Information System (INIS)

    Golovin, A.L.; Mas', E.T.

    1989-01-01

    An X-ray optical scheme for differential measurements of X-ray diffraction under sliding incidence conditions is proposed, and an attachment design realizing this scheme, using standard equipment, is described. The main feature of the scheme is the following: collimation according to the Bragg angle is carried out for the reflected beam rather than the incident one. Goniometers can be used from DRON, TRS, GS-5 and other spectrometers. The goniometer head carrying the sample is standard, it is a part of the DRON, TRS and DTS. The crystal analyzer is fixed on the attachment. The angular position of the crystal monochromator is controlled by an inductive sensor. The experimental differential curves of X-ray diffraction under conditions of sliding incidence, taken for a silicon crystal having the 111 orientation, are given as well.

  13. An authentication scheme for secure access to healthcare services.

    Science.gov (United States)

    Khan, Muhammad Khurram; Kumari, Saru

    2013-08-01

    The last few decades have witnessed a boom in the development of information and communication technologies. The health sector has also benefited from this advancement. To ensure secure access to healthcare services, some user authentication mechanisms have been proposed. In 2012, Wei et al. proposed a user authentication scheme for the telecare medical information system (TMIS). Recently, Zhu pointed out an offline password guessing attack on Wei et al.'s scheme and proposed an improved scheme. In this article, we analyze both of these schemes for their effectiveness in TMIS. We show that Wei et al.'s scheme and its improvement proposed by Zhu fail to achieve some important characteristics necessary for secure user authentication. We find that the security problems of Wei et al.'s scheme persist in Zhu's scheme, such as an undetectable online password guessing attack, an ineffective password change phase, traceability of a user's stolen/lost smart card and a denial-of-service threat. We also identify that Wei et al.'s scheme lacks forward secrecy and that Zhu's scheme lacks a session key between the user and the healthcare server. We therefore propose an authentication scheme for TMIS with forward secrecy, which preserves the confidentiality of over-the-air messages even if the master secret key of the healthcare server is compromised. Our scheme retains the advantages of Wei et al.'s scheme and Zhu's scheme, and offers additional security. The security analysis and comparison results show the enhanced suitability of our scheme for TMIS.

  14. Quantifying Matter

    CERN Document Server

    Angelo, Joseph A

    2011-01-01

    Quantifying Matter explains how scientists learned to measure matter and quantify some of its most fascinating and useful properties. It presents many of the most important intellectual achievements and technical developments that led to the scientific interpretation of substance. Complete with full-color photographs, this exciting new volume describes the basic characteristics and properties of matter. Chapters include: Exploring the Nature of Matter; The Origin of Matter; The Search for Substance; Quantifying Matter During the Scientific Revolution; Understanding Matter's Electromagnet...

  15. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

    Full Text Available We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample, from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.
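
    For context, a commonly used form of the tunneling relation consistent with the one-dimensional WKB and trapezoidal-barrier assumptions named above is (stated here as background for positive bias V; the authors' exact expressions are not reproduced in this record):

    \[
      I(V,z) \propto \int_{0}^{eV} \rho_{s}(E)\,\rho_{t}(E-eV)\,T(E,V,z)\,\mathrm{d}E,
      \qquad
      T(E,V,z) \approx \exp\!\left(-2z\sqrt{\tfrac{2m}{\hbar^{2}}\left(\bar{\phi}+\tfrac{eV}{2}-E\right)}\right),
    \]

    where \(\rho_{s}\) and \(\rho_{t}\) are the sample and tip DOS and \(\bar{\phi}\) is the mean barrier height; in z–V spectroscopy the current is held fixed and the relation is inverted for \(\rho_{s}\), \(\rho_{t}\) and z(V).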

  16. Cost-based droop scheme for DC microgrid

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Wang, Peng; Loh, Poh Chiang

    2014-01-01

    DC microgrids are gaining interest due to the higher efficiency of DC distribution compared with AC. The benefits of DC systems have been widely researched for data centers, IT facilities and residential applications. The research focus, however, has been more on system architecture and optimal voltage level, and less on optimized operation and control of generation sources. The latter theme is pursued in this paper, where a cost-based droop scheme is proposed for distributed generators (DGs) in DC microgrids. Unlike the traditional proportional power sharing based droop scheme, the proposed scheme ... -connected operation. Most importantly, the proposed scheme can reduce the overall total generation cost in DC microgrids without a centralized controller and communication links. The performance of the proposed scheme has been verified under different load conditions.
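
    For context, the conventional proportional power-sharing droop law that the abstract contrasts against can be written for a DC source i roughly as (a standard textbook form, not quoted from the paper; the cost weighting of the proposed scheme is not specified in this excerpt):

    \[
      V_{i} = V^{*} - k_{i} P_{i},
    \]

    where \(V^{*}\) is the nominal DC bus voltage, \(P_{i}\) the output power of source i and \(k_{i}\) a droop gain conventionally chosen inversely proportional to the source rating; a cost-based variant would instead shape \(k_{i}\) (or the voltage reference) according to each DG's generation cost.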

  17. Resonance ionization scheme development for europium

    Energy Technology Data Exchange (ETDEWEB)

    Chrysalidis, K., E-mail: katerina.chrysalidis@cern.ch; Goodacre, T. Day; Fedosseev, V. N.; Marsh, B. A. [CERN (Switzerland); Naubereit, P. [Johannes Gutenberg-Universität, Institiut für Physik (Germany); Rothe, S.; Seiffert, C. [CERN (Switzerland); Kron, T.; Wendt, K. [Johannes Gutenberg-Universität, Institiut für Physik (Germany)

    2017-11-15

    Odd-parity autoionizing states of europium have been investigated by resonance ionization spectroscopy via two-step, two-resonance excitations. The aim of this work was to establish ionization schemes specifically suited for europium ion beam production using the ISOLDE Resonance Ionization Laser Ion Source (RILIS). 13 new RILIS-compatible ionization schemes are proposed. The scheme development was the first application of the Photo Ionization Spectroscopy Apparatus (PISA) which has recently been integrated into the RILIS setup.

  18. Secure RAID Schemes for Distributed Storage

    OpenAIRE

    Huang, Wentao; Bruck, Jehoshua

    2016-01-01

    We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...

  19. Wireless Broadband Access and Accounting Schemes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, we propose two wireless broadband access and accounting schemes. In both schemes, the accounting system adopts the RADIUS protocol, while the access systems adopt the SSH and SSL protocols, respectively.

  20. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    Science.gov (United States)

    2016-01-01

    ...human-sized scene in 0.048 sec to 0.101 sec. Index Terms—Microwave imaging, multistatic radar, Fast Fourier Transform (FFT). I. INTRODUCTION: Near-field ... configuration, but its computational demands are extreme. Fast Fourier Transform (FFT) imaging has long been used to efficiently construct images sampled ... with the block diagram depicted in Fig. 4. It is noted that the multistatic-to-monostatic correction is valid over a finite imaging domain. However, as...

  1. Security analysis and improvements of arbitrated quantum signature schemes

    International Nuclear Information System (INIS)

    Zou Xiangfu; Qiu Daowen

    2010-01-01

    A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. For signing quantum messages, several arbitrated quantum signature (AQS) schemes have been proposed, and it was claimed that they could guarantee unconditional security. However, we show that they can be repudiated by the receiver, Bob. To overcome this shortcoming, we construct an AQS scheme using a public board; it not only avoids being disavowed by the receiver but also preserves all the merits of the existing schemes. Furthermore, although all the existing AQS schemes depend on entanglement, we find that entanglement is not necessary, and we therefore present another AQS scheme that uses no entangled states in the signing and verifying phases. This scheme has three advantages: it does not utilize entangled states while preserving all the merits of the existing schemes; the signature cannot be disavowed by the receiver; and it provides higher transmission efficiency and reduced implementation complexity.

  2. Design of Infusion Schemes for Neuroreceptor Imaging

    DEFF Research Database (Denmark)

    Feng, Ling; Svarer, Claus; Madsen, Karine

    2016-01-01

    ...for bolus infusion (BI) or programmed infusion (PI) experiments. Steady-state quantitative measurements can be made with one short scan and venous blood samples. The GABAA receptor ligand [(11)C]Flumazenil (FMZ) was chosen for this purpose, as it lacks a suitable reference region. Methods. Five bolus [(11)C... ...state was attained within 40 min, which was 8 min earlier than the optimal BI (B/I ratio = 55 min). Conclusions. The system can design both BI and PI schemes to attain steady state rapidly. For example, subjects can be [(11)C]FMZ-PET scanned after 40 min of tracer infusion for 40 min with venous...

  3. Sequential extraction procedures to ascertain the role of organic matter in the fate of iodine in soils

    International Nuclear Information System (INIS)

    Gavalda, D.; Colle, C.

    2004-01-01

    In the assessment of the radiological impact on man of radioactive substances the fate of the long-lived 129I in soils is of special interest. In order to predict the behaviour of iodine in the environment the knowledge of soil parameters which are responsible for its sorption is necessary. Sequential extraction techniques were performed to investigate the degree of binding of iodine with soil components and more specifically with the different constituents of soil organic matter (humic acid, fulvic acid, humin) which are liable to change with time. A speciation scheme was especially developed to study the role of organic matter in iodine retention and complexation. In the first steps, several mineral fractions of iodine were extracted: water soluble (H2O), exchangeable (1 M MgCl2), carbonate bound (0.01 N HCl), bound to Fe-Mn oxides (0.5 M NH4OH,HCl adjusted to pH=2 with HNO3). After these preliminary steps, the extraction of organic matter was carried out with neutral pyrophosphate (Na2H2P2O7/K4P2O7 1/1, 0.1 M, pH=7) to determine iodine bound to organo-mineral complexes and sodium hydroxide (0.5 M NaOH) to quantify iodine bound to humic substances. For these extracts, the distribution of iodine between humic and fulvic acids was studied. Iodine bound to residual and insoluble organic matter (humin) was extracted with 30% H2O2 adjusted to pH=2 with HNO3. In the last step, iodine bound to the residual soil was extracted by wet digestion (H2SO4). In this scheme, all the traditional organic reagents (acetate, acetic acid, ...) were removed and replaced by mineral reagents to allow the monitoring of organic carbon in the soil extracts. (author)

  4. Dark Matter

    Directory of Open Access Journals (Sweden)

    Einasto J.

    2011-06-01

    Full Text Available I give a review of the development of the concept of dark matter. The dark matter story passed through several stages, from a minor observational puzzle to a major challenge for the theory of elementary particles. Modern data suggest that dark matter is the dominant matter component in the Universe and that it consists of some unknown non-baryonic particles; because dark matter dominates the matter content of the Universe, the properties of its particles determine the structure of the cosmic web.

  5. Sliding-MOMP Based Channel Estimation Scheme for ISDB-T Systems

    Directory of Open Access Journals (Sweden)

    Ziji Ma

    2016-01-01

    Full Text Available Compressive sensing (CS) based channel estimation has shown its advantage of accurately reconstructing sparse signals with fewer pilots in OFDM systems. However, the high computational cost of CS methods, due to linear programming, significantly restricts their implementation in practical applications. In this paper, we propose a reduced-complexity channel estimation scheme based on modified orthogonal matching pursuit with sliding windows for the ISDB-T (Integrated Services Digital Broadcasting - Terrestrial) system. The proposed scheme reduces the computational cost by limiting the searching region as well as making effective use of the last estimation result. In addition, an adaptive tracking strategy with a sliding sampling window improves the robustness of CS-based methods and guarantees the accuracy of channel matrix reconstruction, even for fast time-variant channels. Computer simulations demonstrate the improvements in bit error rate and computational complexity for the ISDB-T system.
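
    A minimal sketch of orthogonal matching pursuit with a restricted ("sliding window") search region, in the spirit of the scheme summarised above; the windowing logic and parameter names are illustrative assumptions rather than the authors' exact algorithm.

    ```python
    import numpy as np

    def omp_windowed(y, A, sparsity, window=None):
        """Recover a sparse channel vector h from y = A @ h + noise.

        window : optional iterable of column (delay-tap) indices to which the atom
                 search is restricted, e.g. taps near the previous estimate.
        """
        _, n = A.shape
        candidates = np.array(sorted(window)) if window is not None else np.arange(n)
        support = []
        residual = y.astype(complex).copy()
        coef = np.zeros(0, dtype=complex)
        for _ in range(min(sparsity, candidates.size)):
            corr = np.abs(A[:, candidates].conj().T @ residual)   # match atoms to residual
            pick = int(candidates[np.argmax(corr)])
            support.append(pick)
            candidates = candidates[candidates != pick]
            A_s = A[:, support]
            coef, *_ = np.linalg.lstsq(A_s, y, rcond=None)        # least-squares refit
            residual = y - A_s @ coef
        h = np.zeros(n, dtype=complex)
        h[support] = coef
        return h
    ```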

  6. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    International Nuclear Information System (INIS)

    Xu, Y; Tian, Z; Jiang, S; Jia, X; Zhou, L

    2015-01-01

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion, and the lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone-beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sample an entire photon path each time. The Metropolis-Hastings algorithm is employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a GPU package, gMMC, with this new scheme implemented. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enabled a significant efficiency enhancement for MC particle
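
    The accept/reject step named above follows the generic Metropolis-Hastings rule; a minimal sketch is given below. propose_path, path_weight and proposal_density are placeholders standing in for the photon-transport physics of gMMC, which is not reproduced in this record.

    ```python
    import random

    def mh_step(current_path, propose_path, path_weight, proposal_density):
        """One Metropolis-Hastings accept/reject step over whole photon paths.

        path_weight(p)         : unnormalised physical probability of path p
        proposal_density(a, b) : probability density of proposing path a from path b
        """
        candidate = propose_path(current_path)
        # The acceptance ratio keeps the relative probabilities of complete paths
        # consistent with their physical weights (detailed balance).
        ratio = (path_weight(candidate) * proposal_density(current_path, candidate)) / \
                (path_weight(current_path) * proposal_density(candidate, current_path))
        if random.random() < min(1.0, ratio):
            return candidate      # accept: tally the new path's detector contribution
        return current_path       # reject: tally the current path again
    ```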

  7. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2015-06-15

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion, and the lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone-beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sample an entire photon path each time. The Metropolis-Hastings algorithm is employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a GPU package, gMMC, with this new scheme implemented. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enabled a significant efficiency enhancement for MC particle

  8. Capacity-achieving CPM schemes

    OpenAIRE

    Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido

    2008-01-01

    The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...

  9. Improvement of a Quantum Proxy Blind Signature Scheme

    Science.gov (United States)

    Zhang, Jia-Lei; Zhang, Jian-Zhong; Xie, Shu-Cui

    2018-06-01

    An improvement of a quantum proxy blind signature scheme is proposed in this paper. A six-qubit entangled state functions as the quantum channel. In our scheme, a trusted party, Trent, is introduced so as to prevent the receiver David's dishonest behavior, and David verifies the signature with the help of Trent. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, delegation, signature and verification. Security analysis proves that our scheme has the properties of undeniability, unforgeability and anonymity, and that it can resist some common attacks.

  10. A group signature scheme based on quantum teleportation

    International Nuclear Information System (INIS)

    Wen Xiaojun; Tian Yuan; Ji Liping; Niu Xiamu

    2010-01-01

    In this paper, we present a group signature scheme using quantum teleportation. Different from classical group signature and current quantum signature schemes, which could only deliver either group signature or unconditional security, our scheme guarantees both by adopting quantum key preparation, quantum encryption algorithm and quantum teleportation. Security analysis proved that our scheme has the characteristics of group signature, non-counterfeit, non-disavowal, blindness and traceability. Our quantum group signature scheme has a foreseeable application in the e-payment system, e-government, e-business, etc.

  11. A group signature scheme based on quantum teleportation

    Energy Technology Data Exchange (ETDEWEB)

    Wen Xiaojun; Tian Yuan; Ji Liping; Niu Xiamu, E-mail: wxjun36@gmail.co [Information Countermeasure Technique Research Institute, Harbin Institute of Technology, Harbin 150001 (China)

    2010-05-01

    In this paper, we present a group signature scheme using quantum teleportation. Different from classical group signature and current quantum signature schemes, which could only deliver either group signature or unconditional security, our scheme guarantees both by adopting quantum key preparation, quantum encryption algorithm and quantum teleportation. Security analysis proved that our scheme has the characteristics of group signature, non-counterfeit, non-disavowal, blindness and traceability. Our quantum group signature scheme has a foreseeable application in the e-payment system, e-government, e-business, etc.

  12. The local dark matter phase-space density and impact on WIMP direct detection

    International Nuclear Information System (INIS)

    Catena, Riccardo; Ullio, Piero

    2012-01-01

    We present a new determination of the local dark matter phase-space density. This result is obtained implementing, in the limit of isotropic velocity distribution and spherical symmetry, Eddington's inversion formula, which links univocally the dark matter distribution function to the density profile, and applying, within a Bayesian framework, a Markov Chain Monte Carlo algorithm to sample mass models for the Milky Way against a broad and variegated sample of dynamical constraints. We consider three possible choices for the dark matter density profile, namely the Einasto, NFW and Burkert profiles, finding that the velocity dispersion, which characterizes the width in the distribution, tends to be larger for the Burkert case, while the escape velocity depends very weakly on the profile, with the mean value we obtain being in very good agreement with estimates from stellar kinematics. The derived dark matter phase-space densities differ significantly — most dramatically in the high velocity tails — from the model usually taken as a reference in dark matter detection studies, a Maxwell-Boltzmann distribution with velocity dispersion fixed in terms of the local circular velocity and with a sharp truncation at a given value of the escape velocity. We discuss the impact of astrophysical uncertainties on dark matter scattering rates and direct detection exclusion limits, considering a few sample cases and showing that the most sensitive ones are those for light dark matter particles and for particles scattering inelastically. As a general trend, regardless of the assumed profile, when adopting a self-consistent phase-space density, we find that rates are larger, and hence exclusion limits stronger, than with the standard Maxwell-Boltzmann approximation. Tools for applying our result on the local dark matter phase-space density to other dark matter candidates or experimental setups are provided
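
    For reference, Eddington's inversion formula in the isotropic, spherically symmetric limit invoked above takes the standard form (quoted from the general literature, not from this paper):

    \[
      f(\mathcal{E}) = \frac{1}{\sqrt{8}\,\pi^{2}}
      \left[\int_{0}^{\mathcal{E}} \frac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi^{2}}\,
      \frac{\mathrm{d}\Psi}{\sqrt{\mathcal{E}-\Psi}}
      + \frac{1}{\sqrt{\mathcal{E}}}\left(\frac{\mathrm{d}\rho}{\mathrm{d}\Psi}\right)_{\Psi=0}\right],
    \]

    where \(\rho\) is the dark matter density, \(\Psi\) the relative gravitational potential and \(\mathcal{E}\) the relative energy, so that the phase-space distribution function f is fixed univocally by the density profile.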

  13. A new access scheme in OFDMA systems

    Institute of Scientific and Technical Information of China (English)

    GU Xue-lin; YAN Wei; TIAN Hui; ZHANG Ping

    2006-01-01

    This article presents a dynamic random access scheme for orthogonal frequency division multiple access (OFDMA) systems. The key features of the proposed scheme are: it combines the distributed and the centralized approaches, it can accommodate several delay-sensitivity classes, and it can adjust the number of random access channels in a media access control (MAC) frame and the access probability according to the outcome of mobile terminals' access attempts in previous MAC frames. For packet-based networks with fluctuating user populations, the proposed scheme can lead to high average user satisfaction.

  14. Adaptive transmission schemes for MISO spectrum sharing systems

    KAUST Repository

    Bouida, Zied

    2013-06-01

    We propose three adaptive transmission techniques aiming to maximize the capacity of a multiple-input-single-output (MISO) secondary system under the scenario of an underlay cognitive radio network. In the first scheme, namely the best antenna selection (BAS) scheme, the antenna maximizing the capacity of the secondary link is used for transmission. We then propose an orthogonal space time block code (OSTBC) transmission scheme using the Alamouti scheme with transmit antenna selection (TAS), namely the TAS/STBC scheme. The performance improvement offered by this scheme comes at the expense of increased complexity and delay when compared to the BAS scheme. As a compromise between these schemes, we propose a hybrid scheme using BAS when only one antenna verifies the interference condition and TAS/STBC when two or more antennas are eligible for communication. We first derive closed-form expressions of the statistics of the received signal-to-interference-and-noise ratio (SINR) at the secondary receiver (SR). These results are then used to analyze the performance of the proposed techniques in terms of the average spectral efficiency, the average number of transmit antennas, and the average bit error rate (BER). This performance is then illustrated via selected numerical examples. © 2013 IEEE.
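
    A minimal sketch of the hybrid selection rule summarised above (BAS when a single antenna satisfies the interference condition, TAS/STBC when two or more do); the input metrics and threshold handling are illustrative assumptions, not the paper's exact formulation.

    ```python
    def choose_mode(interference_gains, capacities, interference_limit):
        """Return the transmission mode and the antenna indices to use.

        interference_gains[k] : interference that secondary antenna k would cause
                                at the primary receiver
        capacities[k]         : secondary-link capacity metric for antenna k
        """
        eligible = [k for k, g in enumerate(interference_gains) if g <= interference_limit]
        if len(eligible) >= 2:
            # TAS/STBC: pick the two best eligible antennas for Alamouti coding.
            best_two = sorted(eligible, key=lambda k: capacities[k], reverse=True)[:2]
            return "TAS/STBC", best_two
        if len(eligible) == 1:
            # BAS: only one antenna satisfies the interference condition.
            return "BAS", eligible
        return "none", []        # no antenna meets the interference constraint
    ```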

  15. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    Science.gov (United States)

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
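
    A purely hypothetical illustration of a controller in the spirit of the scheme described above: a disordering statistic is computed from observed packet displacements and the sampling period is adjusted accordingly. The entropy definition, target, gain and bounds are all assumptions for illustration; the paper's actual algorithm and its stability conditions (LMIs) are not reproduced here.

    ```python
    import math
    from collections import Counter

    def disordering_entropy(displacements):
        """Shannon entropy (bits) of observed packet-displacement values."""
        counts = Counter(displacements)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def adjust_period(current_period, displacements, target_entropy=1.0, gain=0.1,
                      min_period=0.001):
        """Lengthen the sampling period when disordering is heavy, shorten it otherwise."""
        if not displacements:
            return current_period
        error = disordering_entropy(displacements) - target_entropy
        return max(min_period, current_period * (1.0 + gain * error))
    ```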

  16. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
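
    A minimal sketch of the hierarchical bootstrap described above: farms, then animals within each farm, then isolates within each animal are resampled with replacement, and the resistance prevalence is recomputed for each replicate. The data layout and names are illustrative assumptions.

    ```python
    import random

    def bootstrap_prevalence(farms, n_farms, animals_per_farm, n_reps=10000, seed=0):
        """farms: {farm_id: {animal_id: [bool resistance flag per isolate]}}.

        n_farms * animals_per_farm is the fixed total number of animals per
        replicate (12 in the study: e.g. 12 farms x 1 animal, 6 x 2, ..., 2 x 6).
        """
        rng = random.Random(seed)
        farm_ids = list(farms)
        estimates = []
        for _ in range(n_reps):
            resistant = total = 0
            for _ in range(n_farms):
                farm = farms[rng.choice(farm_ids)]           # resample a farm
                animal_ids = list(farm)
                for _ in range(animals_per_farm):
                    isolates = farm[rng.choice(animal_ids)]  # resample an animal
                    resampled = [rng.choice(isolates) for _ in isolates]  # resample isolates
                    resistant += sum(resampled)
                    total += len(resampled)
            estimates.append(resistant / total)
        # Summarise the spread (SD, 2.5-97.5 percentile interval) per strategy.
        return estimates
    ```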

  17. Scheme-Independent Predictions in QCD: Commensurate Scale Relations and Physical Renormalization Schemes

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    1998-01-01

    Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scale, such as the "generalized Crewther relation", which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e+e- annihilation cross section. All non-conformal effects are absorbed by fixing the ratio of the respective momentum transfer and energy scales. In the case of fixed-point theories, commensurate scale relations relate both the ratio of couplings and the ratio of scales as the fixed point is approached. The relations between the observables are independent of the choice of intermediate renormalization scheme or other theoretical conventions. Commensurate scale relations also provide an extension of the standard minimal subtraction scheme, which is analytic in the quark masses, has non-ambiguous scale-setting properties, and inherits the physical properties of the effective charge α_V(Q^2) defined from the heavy quark potential. The application of the analytic scheme to the calculation of quark-mass-dependent QCD corrections to the Z width is also reviewed.

  18. Quantum attack-resistent certificateless multi-receiver signcryption scheme.

    Directory of Open Access Journals (Sweden)

    Huixian Li

    Full Text Available The existing certificateless signcryption schemes were designed mainly based on the traditional public key cryptography, in which the security relies on the hard problems, such as factor decomposition and discrete logarithm. However, these problems will be easily solved by the quantum computing. So the existing certificateless signcryption schemes are vulnerable to the quantum attack. Multivariate public key cryptography (MPKC), which can resist the quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we proposed a new construction of the certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC, which can withstand the quantum attack. Multivariate quadratic polynomial operations, which have lower computation complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We proved its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computation complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity like smart cards.

  19. Mirror matter as self-interacting dark matter

    International Nuclear Information System (INIS)

    Mohapatra, R.N.; Nussinov, S.; Teplitz, V.L.

    2002-01-01

    It has been argued that the observed core density profile of galaxies is inconsistent with having a dark matter particle that is collisionless and that alternative dark matter candidates which are self-interacting may explain observations better. One new class of self-interacting dark matter that has been proposed in the context of mirror universe models of particle physics is the mirror hydrogen atom, whose stability is guaranteed by the conservation of mirror baryon number. We show that the effective transport cross section for mirror hydrogen atoms has the right order of magnitude for solving the 'cuspy' halo problem. Furthermore, the suppression of dissipation effects for mirror atoms due to a higher mirror mass scale prevents the mirror halo matter from collapsing into a disk, strengthening the argument for mirror matter as galactic dark matter

  20. Effects of sampling methods on the quantity and quality of dissolved organic matter in sediment pore waters as revealed by absorption and fluorescence spectroscopy.

    Science.gov (United States)

    Chen, Meilian; Lee, Jong-Hyeon; Hur, Jin

    2015-10-01

    Despite literature evidence suggesting the importance of sampling methods on the properties of sediment pore waters, their effects on the dissolved organic matter (PW-DOM) have been unexplored to date. Here, we compared the effects of two commonly used sampling methods (i.e., centrifuge and Rhizon sampler) on the characteristics of PW-DOM for the first time. The bulk dissolved organic carbon (DOC), ultraviolet-visible (UV-Vis) absorption, and excitation-emission matrixes coupled with parallel factor analysis (EEM-PARAFAC) of the PW-DOM samples were compared for the two sampling methods with the sediments from minimal to severely contaminated sites. The centrifuged samples were found to have higher average values of DOC, UV absorption, and protein-like EEM-PARAFAC components. The samples collected with the Rhizon sampler, however, exhibited generally more humified characteristics than the centrifuged ones, implying a preferential collection of PW-DOM with respect to the sampling methods. Furthermore, the differences between the two sampling methods seem more pronounced in relatively more polluted sites. Our observations were possibly explained by either the filtration effect resulting from the smaller pore size of the Rhizon sampler or the desorption of DOM molecules loosely bound to minerals during centrifugation, or both. Our study suggests that consistent use of one sampling method is crucial for PW-DOM studies and also that caution should be taken in the comparison of data collected with different sampling methods.

  1. Hyperon interactions in nuclear matter

    Energy Technology Data Exchange (ETDEWEB)

    Dhar, Madhumita; Lenske, Horst [Institut fuer Theoretische Physik, Universitaet Giessen (Germany)

    2014-07-01

    Baryon-baryon interactions within the SU(3) octet are investigated in free space and in nuclear matter. A meson exchange model is used to determine the interaction. The Bethe-Salpeter equations are solved in a 3-D reduction scheme. In-medium effects are incorporated by including a two-particle Pauli projection operator in the scattering equation. The coupling of the various channels of total strangeness S=-1,-2 and conserved total charge is studied in detail. Calculations and the corresponding results are compared in the isospin and the particle basis. Matrix elements are compared in detail, in particular discussing mixing effects of different hyperon channels. Special attention is paid to the physical thresholds. The density dependence of the interaction is clearly seen in the variation of the in-medium low-energy parameters. The approach is compared to descriptions derived from chiral EFT and other meson-exchange models, e.g. the Nijmegen and the Juelich models.

  2. The Matter-Antimatter Concept Revisited

    Directory of Open Access Journals (Sweden)

    Marquet P.

    2010-04-01

    Full Text Available In this paper, we briefly review the theory elaborated by Louis de Broglie who showed that in some circumstances, a particle tunneling through a dispersive refracting material may reverse its velocity with respect to that of its associated wave (phase velocity): this is a consequence of Rayleigh's formula defining the group velocity. Within his Double Solution Theory, de Broglie re-interprets Dirac's aether concept which was an early attempt to describe the matter-antimatter symmetry. In this new approach, de Broglie suggests that the (hidden) sub-quantum medium required by his theory be likened to the dispersive and refracting material with identical properties. A Riemannian generalization of this scheme restricted to a space-time section, and formulated within an holonomic frame is here considered. This procedure is shown to be founded and consistent if one refers to the extended formulation of General Relativity (EGR) theory, wherein pre-exists a persistent field.

  3. The Matter-Antimatter Concept Revisited

    Directory of Open Access Journals (Sweden)

    Marquet P.

    2010-04-01

    Full Text Available In this paper, we briefly review the theory elaborated by Louis de Broglie who showed that in some circumstances, a particle tunneling through a dispersive refracting material may reverse its velocity with respect to that of its associated wave (phase velocity): this is a consequence of Rayleigh’s formula defining the group velocity. Within his “Double Solution Theory”, de Broglie re-interprets Dirac’s aether concept which was an early attempt to describe the matter-antimatter symmetry. In this new approach, de Broglie suggests that the (hidden) sub-quantum medium required by his theory be likened to the dispersive and refracting material with identical properties. A Riemannian generalization of this scheme restricted to a space-time section, and formulated within an holonomic frame is here considered. This procedure is shown to be founded and consistent if one refers to the extended formulation of General Relativity (EGR) theory, wherein pre-exists a persistent field.

  4. Birkhoffian Symplectic Scheme for a Quantum System

    International Nuclear Information System (INIS)

    Su Hongling

    2010-01-01

    In this paper, a classical system of ordinary differential equations is built to describe a kind of n-dimensional quantum system. The absorption spectrum and the density of states for the system are defined from both the quantum and the classical points of view. From the Birkhoffian form of the equations, a Birkhoffian symplectic scheme is derived for solving the n-dimensional equations by using the generating function method. Besides preserving the Birkhoffian structure, the new scheme is proven to preserve the discrete local energy conservation law of the system with zero vector f. Numerical experiments for a 3-dimensional example show that the new scheme can simulate the general Birkhoffian system better than the implicit midpoint scheme, which is well known to be a symplectic scheme for Hamiltonian systems. (general)
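
    For context, the baseline integrator mentioned above, the implicit midpoint rule, takes the standard form for a Hamiltonian system \( \dot{z} = J\nabla H(z) \) (stated from the general literature, not from the paper):

    \[
      z_{n+1} = z_{n} + h\,J\,\nabla H\!\left(\frac{z_{n}+z_{n+1}}{2}\right),
      \qquad
      J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},
    \]

    which is symplectic for any step size h; the Birkhoffian scheme of the paper plays the analogous structure-preserving role for the more general Birkhoffian equations.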

  5. Autonomous droop scheme with reduced generation cost

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Loh, Poh Chiang; Blaabjerg, Frede

    2013-01-01

    The droop scheme has been widely applied to the control of Distributed Generators (DGs) in microgrids for proportional power sharing based on their ratings. For a standalone microgrid, where a centralized management system is not viable, proportional power sharing based droop might not suit well, since DGs are usually of different types, unlike synchronous generators. This paper presents an autonomous droop scheme that takes into consideration the operating cost, efficiency and emission penalty of each DG, since all these factors directly or indirectly contribute to the Total Generation Cost (TGC) of the overall microgrid. Compared with the traditional scheme, the proposed scheme retains its simplicity, which certainly is a feature preferred by industry. The overall performance of the proposed scheme has been verified through simulation and experiment.

  6. Enhanced arbitrated quantum signature scheme using Bell states

    International Nuclear Information System (INIS)

    Wang Chao; Liu Jian-Wei; Shang Tao

    2014-01-01

    We investigate the existing arbitrated quantum signature schemes as well as their cryptanalysis, including intercept-resend attack and denial-of-service attack. By exploring the loopholes of these schemes, a malicious signatory may successfully disavow signed messages, or the receiver may actively negate the signature from the signatory without being detected. By modifying the existing schemes, we develop counter-measures to these attacks using Bell states. The newly proposed scheme puts forward the security of arbitrated quantum signature. Furthermore, several valuable topics are also presented for further research of the quantum signature scheme

  7. Decoupling schemes for the SSC Collider

    International Nuclear Information System (INIS)

    Cai, Y.; Bourianoff, G.; Cole, B.; Meinke, R.; Peterson, J.; Pilat, F.; Stampke, S.; Syphers, M.; Talman, R.

    1993-05-01

    A decoupling system is designed for the SSC Collider. This system can accommodate three decoupling schemes by using 44 skew quadrupoles in the different configurations. Several decoupling schemes are studied and compared in this paper

  8. Warm dense matter and Thomson scattering at FLASH

    International Nuclear Information System (INIS)

    Faeustlin, Roland Rainer

    2010-05-01

    X-ray free electron lasers are powerful tools to investigate moderately to strongly correlated solid density low temperature plasmas, named warm dense matter. These plasmas are of most interest for astrophysics and laser plasma interaction, particularly inertial confinement fusion. This work utilizes the ultrashort soft x-ray pulse duration and high brilliance of the free electron laser in Hamburg, FLASH, to generate warm dense matter and to study its ultrafast processes. The techniques applied are absorption measurement, emission spectroscopy and Thomson scattering. Radiative hydrodynamics and Thomson scattering simulations are used to investigate the impact of temperature and density gradients in the sample and to fit the experimental data. The measurements result in a comprehensive picture of soft x-ray matter interaction related to warm dense matter and yield insight into ultrafast equilibration and relaxation mechanisms, in particular impact ionization and radiative recombination. (orig.)

  9. Warm dense matter and Thomson scattering at FLASH

    Energy Technology Data Exchange (ETDEWEB)

    Faeustlin, Roland Rainer

    2010-05-15

    X-ray free electron lasers are powerful tools to investigate moderately to strongly correlated solid density low temperature plasmas, named warm dense matter. These plasmas are of most interest for astrophysics and laser plasma interaction, particularly inertial confinement fusion. This work utilizes the ultrashort soft x-ray pulse duration and high brilliance of the free electron laser in Hamburg, FLASH, to generate warm dense matter and to study its ultrafast processes. The techniques applied are absorption measurement, emission spectroscopy and Thomson scattering. Radiative hydrodynamics and Thomson scattering simulations are used to investigate the impact of temperature and density gradients in the sample and to fit the experimental data. The measurements result in a comprehensive picture of soft x-ray matter interaction related to warm dense matter and yield insight into ultrafast equilibration and relaxation mechanisms, in particular impact ionization and radiative recombination. (orig.)

  10. Sampling for diesel particulate matter in mines : Diesel Emissions Evaluation Program (DEEP), technology transfer initiative, October 2001

    International Nuclear Information System (INIS)

    Grenier, M.; Gangal, M.; Goyer, N.; McGinn, S.; Penney, J.; Vergunst, J.

    2001-10-01

    The physical and chemical characteristics of diesel particulate matter (DPM) from exhaust gases from diesel powered mining equipment were presented along with guidelines and regulation for exposure monitoring in the workplace. The report addresses issues related to personal and direct exhaust sampling in mines and presents evidence about potential carcinogenicity of the solid fraction of diesel exhaust. The incomplete combustion of diesel fuel results in the formation of solid and liquid particles in the exhaust. DPM is defined as being the portion of diesel exhaust which is made up of solid carbon particles and the attached chemicals such as polycyclic aromatic hydrocarbons and inorganics such as sulphate compounds. DPM is a submicron aerosol and as such, it is a respirable dust which penetrates deep into the lungs. In addition, DPMs are not easily removed from the air stream because of their small size. Control of DPM is crucial because once they are airborne, they are likely to remain that way and will affect the workplace where they are produced as well as workplaces downwind. In January 2001, the Mine Safety and Health Administration issued a ruling for U.S. metal and non-metal mines requiring that mines meet a limit of exposure of 0.40 mg/m3. Mines are expected to reduce exposure to meet a 0.16 mg/m3 limit of exposure by January 2006. European mines and tunnel construction projects must also meet DPM exposure limits. DPM sampling in Canada has been regulated for nearly one decade. Sampling protocols in Canada and the United States were described with reference to equipment and procedures testing DPM filtration efficiency of after-treatment modules and to evaluate the impact of diesel equipment maintenance on gaseous particulate emissions. 23 refs., 1 tab., 7 figs

  11. Cancelable remote quantum fingerprint templates protection scheme

    International Nuclear Information System (INIS)

    Liao Qin; Guo Ying; Huang Duan

    2017-01-01

    With the increasing popularity of fingerprint identification technology, its security and privacy have received much attention. Only when the security and privacy of biological information are ensured can the technology be better accepted and used by the public. In this paper, we propose a novel quantum bit (qubit) based scheme to solve the security and privacy problems existing in traditional fingerprint identification systems. By exploiting the properties of quantum mechanics, our proposed scheme, a cancelable remote quantum fingerprint templates protection scheme, can achieve unconditional security in an information-theoretic sense. Moreover, this novel quantum scheme can invalidate most of the attacks aimed at fingerprint identification systems. In addition, the proposed scheme is applicable to remote communication, with no need to worry about security and privacy during transmission, which is a clear advantage over traditional methods. Security analysis shows that the proposed scheme can effectively ensure communication security and the privacy of users' information for fingerprint identification. (paper)

  12. Nuclear matter from chiral effective field theory

    International Nuclear Information System (INIS)

    Drischler, Christian

    2017-01-01

    Nuclear matter is an ideal theoretical system that provides key insights into the physics of different length scales. While recent ab initio calculations of medium-mass to heavy nuclei have demonstrated that realistic saturation properties in infinite matter are crucial for reproducing experimental binding energies and charge radii, the nuclear-matter equation of state allows tight constraints on key quantities of neutron stars. In the present thesis we take advantage of both aspects. Chiral effective field theory (EFT) with pion and nucleon degrees of freedom has become the modern low-energy approach to nuclear forces based on the symmetries of quantum chromodynamics, the fundamental theory of strong interactions. The systematic chiral expansion enables improvable calculations associated with theoretical uncertainty estimates. In recent years, chiral many-body forces were derived up to high orders, allowing consistent calculations including all many-body contributions at next-to-next-to-next-to-leading order (N3LO). Many further advances have driven the construction of novel chiral potentials with different regularization schemes. Here, we develop advanced methods for microscopic calculations of the equation of state of homogeneous nuclear matter with arbitrary proton-to-neutron ratio at zero temperature. Specifically, we push the limits of many-body perturbation theory (MBPT) considerations to high orders in the chiral and in the many-body expansion. To address the challenging inclusion of three-body forces, we introduce a new partial-wave method for normal ordering that generalizes the treatment of these contributions. We show improved predictions for the neutron-matter equation of state with consistent N3LO nucleon-nucleon (NN) plus three-nucleon (3N) potentials using MBPT up to third order and self-consistent Green's function theory. The latter also provides nonperturbative benchmarks for the many-body convergence. In addition, we extend the normal

  13. Nuclear matter from chiral effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Drischler, Christian

    2017-11-15

    Nuclear matter is an ideal theoretical system that provides key insights into the physics of different length scales. While recent ab initio calculations of medium-mass to heavy nuclei have demonstrated that realistic saturation properties in infinite matter are crucial for reproducing experimental binding energies and charge radii, the nuclear-matter equation of state allows tight constraints on key quantities of neutron stars. In the present thesis we take advantage of both aspects. Chiral effective field theory (EFT) with pion and nucleon degrees of freedom has become the modern low-energy approach to nuclear forces based on the symmetries of quantum chromodynamics, the fundamental theory of strong interactions. The systematic chiral expansion enables improvable calculations associated with theoretical uncertainty estimates. In recent years, chiral many-body forces were derived up to high orders, allowing consistent calculations including all many-body contributions at next-to-next-to-next-to-leading order (N3LO). Many further advances have driven the construction of novel chiral potentials with different regularization schemes. Here, we develop advanced methods for microscopic calculations of the equation of state of homogeneous nuclear matter with arbitrary proton-to-neutron ratio at zero temperature. Specifically, we push the limits of many-body perturbation theory (MBPT) considerations to high orders in the chiral and in the many-body expansion. To address the challenging inclusion of three-body forces, we introduce a new partial-wave method for normal ordering that generalizes the treatment of these contributions. We show improved predictions for the neutron-matter equation of state with consistent N3LO nucleon-nucleon (NN) plus three-nucleon (3N) potentials using MBPT up to third order and self-consistent Green's function theory. The latter also provides nonperturbative benchmarks for the many-body convergence. In addition, we extend the

  14. Chemical and biological characterization of urban particulate matter

    International Nuclear Information System (INIS)

    Agurell, E.; Alsberg, T.; Assefaz-Redda, Y.

    1990-11-01

    Airborne particulate matter was collected on glass fiber filters by high-volume sampling in the Goeteborg urban area. After extraction of the organic components, the samples were tested for biological effects in the Salmonella mutagenicity assay, for affinity to the cytosolic TCDD receptor and for toxicity towards a mammalian cell system, and were analysed chemically for selected polycyclic aromatic compounds. A series of samples collected simultaneously at a street-level location and a rooftop site showed that most parameters associated with the organic compounds adsorbed to airborne particulate matter have similar concentrations at the two levels. The differences observed for the mutagenic effect in different strains and conditions showed that the rooftop samples had a different composition compared to the street samples, indicating that atmospheric transformations had occurred. Chemical fractionation of representative samples showed that the distribution of mutagenic activity among different fractions is dissimilar to the distributions obtained in the fractionation of both gasoline and diesel engine exhaust particles. Partial least squares regression analysis showed qualitatively that diesel exhaust is a major source of airborne particulate mutagenic activity, and source apportionment with chemical mass balance and multilinear regression corroborated this quantitatively. The multilinear regression analysis indicated that the airborne activity in Salmonella TA90-S9 originated to 54±4% from diesel exhaust and to 26±3% from gasoline exhaust; the contributions are more equal for the activity measured with TA98+S9. The usefulness of short-term bioassays as an addition to chemical analysis of airborne particulate matter depends on whether only polycyclic aromatic hydrocarbons (PAH) are major carcinogens, as has been suggested in the literature, or whether other polycyclic aromatic compounds (PAC) are also of importance. (au)

  15. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.
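
    A minimal sketch of the co-primality condition described above: with P interleaved pulses and an inter-pulse interval of M desired-rate sampling periods, the P low-rate observation grids cover all desired-rate phases only when gcd(M, P) = 1. Variable names are illustrative.

    ```python
    from math import gcd

    def covers_all_phases(M, P):
        """True if P pulses spaced M (desired-rate) periods apart hit all P sub-phases."""
        return gcd(M, P) == 1

    def sample_phases(M, P):
        """Phase (0..P-1) of each pulse's observation grid relative to the low-rate ADC clock."""
        return sorted({(p * M) % P for p in range(P)})

    print(covers_all_phases(7, 4), sample_phases(7, 4))   # True  [0, 1, 2, 3]
    print(covers_all_phases(6, 4), sample_phases(6, 4))   # False [0, 2]
    ```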

  16. Effect of concentration of dispersed organic matter on optical maturity parameters. Interlaboratory results of the organic matter concentration working group of the ICCP

    Energy Technology Data Exchange (ETDEWEB)

    Mendonca Filho, J.G.; Kern, M.L.; Mendonca, J.O. [Palynofacies and Organic Facies Laboratory (LAFO), DEGL, IGEO, UFRJ, Cidade Universitaria, Rio de Janeiro (Brazil); Araujo, C.V.; Menezes, T.R.; Souza, I.V.A.F. [Petrobras R and D Center, Rio de Janeiro (Brazil); Borrego, A.G.; Suarez-Ruiz, I. [Instituto Nacional del Carbon, CSIC, Oviedo (Spain); Cook, A.; Ranasinghe, P. [Keiraville Konsultants Pty. Ltd, NSW (Australia); Flores, D. [University of Porto, Departamento de Geologia (Portugal); Hackley, P. [U.S. Geological Survey, MS 956 National Center Reston, VA (United States); Hower, J.C. [University of Kentucky, Center for Applied Energy Research, Lexington (United States); Kommeren, K. [Shell International Exploration and Production, Rijswijk (Netherlands); Kus, J. [Germany Federal Institute for Geosciences and Natural Resources in Geozentrum, Hannover (Germany); Mastalerz, M. [Indiana Geological Survey, Indiana University, Bloomington (United States); Newman, J. [Newman Energy Research Ltd, Christchurch (New Zealand); Ujiie, Y. [Graduate School of Science and Technology, Hirosaki University (Japan)

    2010-12-01

    The main objective of this work was to study the effect of kerogen isolation procedures on maturity parameters of organic matter determined using optical microscopy. The work presents the results of the Organic Matter Concentration Working Group (OMCWG) of the International Committee for Coal and Organic Petrology (ICCP) during the years 2008 and 2009. Four samples were analysed, covering a range of maturity (low and moderate) and terrestrial and marine geological settings. The analyses comprise random vitrinite reflectance measured on both kerogen concentrate and whole-rock mounts, and fluorescence spectra taken on alginite. Eighteen participants from twelve laboratories around the world performed the analyses. Samples from continental settings contained enough vitrinite for participants to record around 50 measurements, whereas fewer readings were taken on samples from the marine settings, and the scatter of results was also larger in the samples of marine origin. Similar vitrinite reflectance values were in general recorded in the whole rock and in the kerogen concentrate. The small deviations from the trend cannot be attributed to the acid treatment involved in kerogen isolation, but rather to component identification or to the difficulty of achieving a good polish on samples with high mineral matter content. In samples that were difficult to polish, vitrinite reflectance measured on whole rock tended to be lower. The presence or absence of rock fabric affected the selection of the vitrinite population for measurement, and this also had an influence on the average value reported and on the scatter of the results. Slightly lower standard deviations were reported for the analyses run on kerogen concentrates. Considering the spectral fluorescence results, the λmax shows a shift to higher wavelengths in the kerogen concentrate sample in comparison to the whole-rock sample, thus revealing an influence of the preparation methods (acid treatment

  17. A New Adaptive Hungarian Mating Scheme in Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Chanju Jung

    2016-01-01

    Full Text Available In genetic algorithms, the selection or mating scheme is one of the important operations. In this paper, we propose an adaptive mating scheme built on previously proposed Hungarian mating schemes. The Hungarian mating schemes consist of maximizing the sum of mating distances, minimizing the sum, and random matching. We propose an algorithm to select one of these Hungarian mating schemes adaptively: every mated pair of solutions votes for the mating scheme to be used in the next generation, and the distance between parents and the distance between parent and offspring are considered when they vote. Two well-known combinatorial optimization problems, the traveling salesperson problem and the graph bisection problem, are used as the test bed for our method. Our adaptive strategy showed better results than pure and previously proposed hybrid schemes, as well as existing distance-based mating schemes. A sketch of the assignment step is given below.
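    The assignment step behind the three Hungarian mating schemes can be sketched with an off-the-shelf solver; the adaptive voting rule itself (how parent-parent and parent-offspring distances are weighted) is described in the paper and is not reproduced here. The helper below, including its name and the real-coded toy population, is our own illustration under those assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def hungarian_mating(pop, mode="max"):
    """Pair the two halves of a population by solving an assignment problem.

    mode = "max" maximizes the sum of mating distances, "min" minimizes it,
    and "random" matches pairs uniformly at random, i.e. the three Hungarian
    schemes the adaptive method chooses between.
    """
    half = len(pop) // 2
    A, B = pop[:half], pop[half:2 * half]
    if mode == "random":
        cols = np.random.permutation(half)
    else:
        dist = cdist(A, B)                       # pairwise mating distances
        cost = -dist if mode == "max" else dist  # the Hungarian solver minimizes cost
        _, cols = linear_sum_assignment(cost)
    return [(A[i], B[j]) for i, j in enumerate(cols)]

pop = np.random.rand(20, 8)                      # toy real-coded population
pairs = hungarian_mating(pop, mode="max")
```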

  18. OUTPACE long duration stations: physical variability, context of biogeochemical sampling, and evaluation of sampling strategy

    Directory of Open Access Journals (Sweden)

    A. de Verneil

    2018-04-01

    Full Text Available Research cruises to quantify biogeochemical fluxes in the ocean require taking measurements at stations lasting at least several days. A popular experimental design is the quasi-Lagrangian drifter, often mounted with in situ incubations or sediment traps that follow the flow of water over time. After initial drifter deployment, the ship tracks the drifter for continuing measurements that are supposed to represent the same water environment. An outstanding question is how best to determine whether this is true. During the Oligotrophy to UlTra-oligotrophy PACific Experiment (OUTPACE) cruise, from 18 February to 3 April 2015 in the western tropical South Pacific, three separate stations of long duration (five days) over the upper 500 m were conducted in this quasi-Lagrangian sampling scheme. Here we present physical data to provide context for these three stations and to assess whether the sampling strategy worked, i.e., that a single body of water was sampled. After analyzing tracer variability and local water circulation at each station, we identify water layers and times where the drifter risks encountering another body of water. While almost no realization of this sampling scheme will be truly Lagrangian, due to the presence of vertical shear, the depth-resolved observations during the three stations show that most layers sampled sufficiently homogeneous physical environments during OUTPACE. By directly addressing the concerns raised by these quasi-Lagrangian sampling platforms, a protocol of best practices can begin to be formulated so that future research campaigns include the complementary datasets and analyses presented here to verify the appropriate use of the drifter platform.

  19. Quantum Communication Scheme Using Non-symmetric Quantum Channel

    International Nuclear Information System (INIS)

    Cao Haijing; Chen Zhonghua; Song Heshan

    2008-01-01

    A theoretical quantum communication scheme based on entanglement swapping and superdense coding is proposed, with a 3-dimensional Bell state and a 2-dimensional Bell state function as the quantum channel. Quantum key distribution and quantum secure direct communication can be accomplished simultaneously in the scheme. The scheme is secure and has high source capacity. Finally, we generalize the quantum communication scheme to a d-dimensional quantum channel

  20. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students.

    Science.gov (United States)

    Wang, Song; Zhou, Ming; Chen, Taolin; Yang, Xun; Chen, Guangxiang; Wang, Meiyun; Gong, Qiyong

    2017-04-18

    Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using a voxel-based morphometry (VBM) approach. The whole-brain regression analyses showed that higher academic performance was related to greater regional gray matter density (rGMD) of the left dorsolateral prefrontal cortex (DLPFC), which is considered a neural center at the intersection of cognitive and non-cognitive functions. Furthermore, mediation analyses suggested that general intelligence partially mediated the impact of the left DLPFC density on academic performance. These results persisted even after adjusting for the effect of family socioeconomic status (SES). In short, our findings reveal a potential neuroanatomical marker for academic performance and highlight the role of general intelligence in explaining the relationship between brain structure and academic performance.
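    For readers unfamiliar with the statistical machinery, a mediation analysis of this kind reduces to a few regressions: the indirect effect is the product of the brain-to-intelligence path and the intelligence-to-performance path, estimated while adjusting for SES. The sketch below uses synthetic data and generic variable names; it is not the study's pipeline, which also involves VBM preprocessing and significance testing of the indirect effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
ses = rng.normal(size=n)                 # covariate (socioeconomic status)
x = rng.normal(size=n) + 0.3 * ses       # predictor, e.g. regional gray matter density
m = 0.5 * x + rng.normal(size=n)         # mediator, e.g. general intelligence
y = 0.3 * m + 0.2 * x + 0.2 * ses + rng.normal(size=n)   # outcome, e.g. academic performance

def ols(X, y):
    """Return least-squares coefficients [intercept, coef_1, coef_2, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([x, ses], m)[1]                  # X -> M path, adjusting for SES
b, c_prime = ols([m, x, ses], y)[1:3]    # M -> Y path and direct X -> Y path
c = ols([x, ses], y)[1]                  # total X -> Y effect
print(f"indirect (a*b) = {a*b:.3f}, direct = {c_prime:.3f}, total = {c:.3f}")
```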

  1. A universal encoding scheme for MIMO transmission using a single active element for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama; Papadias, C.B.; Kalis, A.

    2009-01-01

    A universal scheme for encoding multiple symbol streams using a single driven element (and consequently a single radio frequency (RF) frontend) surrounded by parasitic elements (PE) loaded with variable reactive loads, is proposed in this paper. The proposed scheme is based on creating a MIMO sys...

  2. TVD schemes in one and two space dimensions

    International Nuclear Information System (INIS)

    Leveque, R.J.; Goodman, J.B.; New York Univ., NY)

    1985-01-01

    The recent development of schemes which are second order accurate in smooth regions has made it possible to overcome certain difficulties which used to arise in numerical computations of discontinuous solutions of conservation laws. The present investigation is concerned with scalar conservation laws, taking into account the employment of total variation diminishing (TVD) schemes. The concept of a TVD scheme was introduced by Harten et al. (1976). Harten et al. first constructed schemes which are simultaneously TVD and second order accurate on smooth solutions. In the present paper, a summary is provided of recently conducted work in this area. Attention is given to TVD schemes in two space dimensions, a second order accurate TVD scheme in one dimension, and the entropy condition and spreading of rarefaction waves. 19 references
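    A minimal concrete example of the TVD idea discussed here is the flux-limited upwind scheme for scalar advection: a minmod limiter blends first-order upwind and Lax-Wendroff fluxes so that the total variation of the solution cannot grow. The code below is a generic textbook-style sketch (not taken from the paper) for u_t + a u_x = 0 with a > 0 and periodic boundaries.

```python
import numpy as np

def minmod_advect(u, a, dt, dx, steps):
    """Flux-limited (minmod) TVD scheme for u_t + a u_x = 0, a > 0, periodic BCs."""
    nu = a * dt / dx                       # CFL number; keep nu <= 1
    for _ in range(steps):
        du = np.roll(u, -1) - u            # u_{i+1} - u_i
        du_m = u - np.roll(u, 1)           # u_i - u_{i-1}
        # minmod limiter written divide-free: minmod(du, du_m)
        phi_du = np.where(du * du_m > 0.0,
                          np.sign(du) * np.minimum(np.abs(du), np.abs(du_m)), 0.0)
        F = a * u + 0.5 * a * (1.0 - nu) * phi_du          # numerical flux at i+1/2
        u = u - dt / dx * (F - np.roll(F, 1))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)             # discontinuous initial data
dx = x[1] - x[0]
u = minmod_advect(u0.copy(), a=1.0, dt=0.4 * dx, dx=dx, steps=250)
```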

  3. Distribution of transformed organic matter in structural units of loamy sandy soddy-podzolic soil

    Science.gov (United States)

    Kogut, B. M.; Yashin, M. A.; Semenov, V. M.; Avdeeva, T. N.; Markina, L. G.; Lukin, S. M.; Tarasov, S. I.

    2016-01-01

    The effect of land use types and fertilizing systems on the structural and aggregate composition of loamy sandy soddy-podzolic soil and the quantitative parameters of soil organic matter has been studied. The contribution of soil aggregates 2-1 mm in size to the total Corg reserve in the humus horizon is higher than the contributions of other aggregates by 1.3-4.2 times. Reliable correlations have been revealed between the contents of total (Corg), labile (Clab), and active (C0) organic matter in the soil. The proportion of C0 is 44-70% of Clab extractable by neutral sodium pyrophosphate solution. The contributions of each of the 2-1, 0.5-0.25, and fractions to the total C0 reserve are 14-21%; the contributions of each of the other fractions are 4-12%. The chemically labile and biologically active components of humic substances reflect the quality changes of soil organic matter under agrogenic impacts. A conceptual scheme has been proposed for the subdivision of soil organic matter into the active, slow (intermediate), and passive pools. In the humus horizon of loamy sandy soddy-podzolic soil, the active, slow, and passive pools contain 6-11, 34-65, and 26-94% of the total Corg, respectively.

  4. Schemes for fibre-based entanglement generation in the telecom band

    International Nuclear Information System (INIS)

    Chen, Jun; Lee, Kim Fook; Li Xiaoying; Voss, Paul L; Kumar, Prem

    2007-01-01

    We investigate schemes for generating polarization-entangled photon pairs in standard optical fibres. The advantages of a double-loop scheme are explored through comparison with two other schemes, namely, the Sagnac-loop scheme and the counter-propagating scheme. Experimental measurements with the double-loop scheme verify the predicted advantages

  5. Tradable schemes

    NARCIS (Netherlands)

    J.K. Hoogland (Jiri); C.D.D. Neumann

    2000-01-01

    textabstractIn this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing

  6. Finite-volume scheme for anisotropic diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands); Koren, Barry [Eindhoven University of Technology (Netherlands); Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)

    2016-02-01

    In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.

  7. Antikaons in infinite nuclear matter and nuclei

    International Nuclear Information System (INIS)

    Moeller, M.

    2007-01-01

    In this work we studied the properties of antikaons and hyperons in infinite cold nuclear matter. The in-medium antikaon-nucleon scattering amplitude and self-energy have been calculated within a covariant many-body framework in the first part. Nuclear saturation effects have been taken into account in terms of scalar and vector nucleon mean-fields. In the second part of the work we introduced a non-local method for the description of kaonic atoms. The many-body approach to anti KN scattering can be tested by application to kaonic atoms. A self-consistent and covariant many-body approach has been used for the determination of the antikaon spectral function and anti KN scattering amplitudes. It considers s-, p- and d-waves, and the application of an in-medium projector algebra accounts for proper mixing of partial waves in the medium. The on-shell reduction scheme is also implemented by means of the projector algebra. The Bethe-Salpeter equation has been rewritten so that the free-space anti KN scattering can be used as the interaction kernel for the in-medium scattering equation. The latter free-space scattering is based on realistic coupled-channel dynamics and a chiral SU(3) Lagrangian. Our many-body approach is generalized for the presence of large scalar and vector nucleon mean-fields. It is supplemented by an improved renormalization scheme that systematically avoids the occurrence of medium-induced power-divergent structures and kinematical singularities. A modified projector basis has been introduced that allows for a convenient inclusion of nucleon mean-fields. The description of the results in terms of the 'physical' basis is done with the help of a recoupling scheme based on the projector algebra properties. (orig.)

  8. Summary Report for Evaluation of Compost Sample Drying Methods

    National Research Council Canada - National Science Library

    Frye, Russell

    1994-01-01

    .... Previous work in support of these efforts developed a compost sample preparation scheme, consisting of air drying followed by milling, to reduce analytical variability in the heterogeneous compost matrix...

  9. The Cannabis Infringement Notice scheme in Western Australia: a review of policy, police and judicial perspectives.

    Science.gov (United States)

    Sutton, Adam; Hawks, David

    2005-07-01

    Western Australia (WA) became the fourth Australian jurisdiction to adopt a 'prohibition with civil penalties scheme' for minor cannabis offences when its Cannabis Infringement Notice (CIN) scheme became law on 22 March 2004. This study examined the attitudes and practices of policy makers, members of the law enforcement, magistracy and other judicial sectors involved in enforcing the new scheme, and their views as to its likely impact on the drug market. As part of the pre-post evaluation of the legislative reforms, a sample of 30 police, other criminal justice personnel and policy makers was qualitatively interviewed. Data were collected both at the pre-implementation stage (March and June 2003) and shortly after the Act became operational (mid-June 2004). The Western Australia Police Service's implementation of the CIN scheme has been extremely professional. However, these early results suggest that while the CIN scheme has been designed to take into account problems with similar schemes elsewhere in Australia, possible problems include: some operational police being unsure about the operation of the scheme; expected savings in police resources probably being reduced by procedures which require offenders to be taken back to the station rather than being issued notices on the spot as intended by the scheme's architects; probable net widening; problems with the exercise of police discretion to issue a CIN; and public misunderstanding of the scheme. In the early months of the scheme, understanding of the new laws among both police and members of the public was far from perfect. For the system to achieve the outcomes intended by legislators, it is essential that levels of understanding improve. Media and other campaigns to inform the public that cannabis cultivation and use remain illegal, and to warn about risks associated with cannabis use, should be extended. As it will be at least 18 months before the scheme is operationally settled in, the media and others

  10. Iodination of the humic samples from Hupa project

    International Nuclear Information System (INIS)

    Reiller, P.; Mercier-Bion, F.; Barre, N.; Gimenez, N.; Miserque, F.

    2005-01-01

    Full text of publication follows: Iodine radioactive isotopes, such as 129I, are important radionuclides due to their significant impact on geological disposal: in the conditions of natural reducing groundwaters, iodine would essentially be present in the form of the highly mobile iodide anion. In shallow waters, however, the presence of molecular iodine has to be taken into account. The interaction of iodine with natural organic matter in general, and with humic substances (HS) in particular, has been the subject of numerous studies. It has been shown that in some cases organically bound iodine can dominate the speciation, either as methyl iodide or bound to humic substances [1, 2]. It is now also clear that this reactivity is closely related to the occurrence of molecular iodine I2(aq). The reaction scheme can be viewed as an electrophilic substitution of a hydrogen atom by an iodine atom on a phenolic ring. Nevertheless, in some of the latter studies, the characterization of the final reaction products did not satisfy the authors completely, as total separation from the I- produced during the iodination could not be achieved. Thus, further studies were carried out using samples from the CCE HUPA project: natural humic and fulvic extracts from Gorleben [3] and synthetic samples obtained from FZ Rossendorf [4]. Dialysis procedures were envisaged to improve the incomplete separation between the colloidal humic matter and the iodide ions, either unreacted or produced by the reaction [2]. The iodination of these samples was monitored using UV-Visible spectrophotometry. As in previous studies [2], the kinetics could not be linearized with a simple order, but the trends conformed to simple phenolic patterns. The apparent rates could nevertheless be correlated to the aromaticity (H/C ratio) of the samples. After dialysis, the iodine-humic/fulvic interaction was shown by X-ray photoelectron spectroscopy (XPS) to occur through carbon-iodine covalent bonding [5]. Electro-spray ionisation

  11. Confirming theoretical pay constructs of a variable pay scheme

    Directory of Open Access Journals (Sweden)

    Sibangilizwe Ncube

    2013-05-01

    Full Text Available Orientation: Return on the investment in variable pay programmes remains controversial because their cost versus contribution cannot be empirically justified. Research purpose: This study validates the findings of the model developed by De Swardt on the factors related to successful variable pay programmes. Motivation for the study: Many organisations blindly implement variable pay programmes without any means to assess the impact these programmes have on the company’s performance. This study was necessary to validate the findings of an existing instrument that assesses the contribution of variable pay schemes. Research design, approach and method: The study was conducted using quantitative research. A total of 300 completed questionnaires from a non-purposive sample of 3000 participants in schemes across all South African industries were returned and analysed. Main findings: Using exploratory and confirmatory factor analysis, it was found that the validation instrument developed by De Swardt is still largely valid in evaluating variable pay schemes. The differences between the study and the model were reported. Practical/managerial implications: The study confirmed the robustness of an existing model that enables practitioners to empirically validate the use of variable pay plans. This model assists in the design and implementation of variable pay programmes that meet critical success factors. Contribution/value-add: The study contributed to the development of a measurement instrument that will assess whether a variable pay plan contributes to an organisation’s success.

  12. Solid Matter

    CERN Document Server

    Angelo, Joseph A

    2011-01-01

    Supported by a generous quantity of full-color illustrations and interesting sidebars, Solid Matter introduces the basic characteristics and properties of solid matter. It briefly describes the cosmic connection of the elements, leading readers through several key events in human pre-history that resulted in more advanced uses of matter in the solid state. Chapters include: Solid Matter: An Initial Perspective; Physical Behavior of Matter; The Gravity of Matter; Fundamentals of Materials Science; Rocks and Minerals; Metals; Building Materials; Carbon: Earth's Most Versatile Element; S

  13. Computing with high-resolution upwind schemes for hyperbolic equations

    International Nuclear Information System (INIS)

    Chakravarthy, S.R.; Osher, S.; California Univ., Los Angeles)

    1985-01-01

    Computational aspects of modern high-resolution upwind finite-difference schemes for hyperbolic systems of conservation laws are examined. An operational unification is demonstrated for constructing a wide class of flux-difference-split and flux-split schemes based on the design principles underlying total variation diminishing (TVD) schemes. Consideration is also given to TVD scheme design by preprocessing, the extension of preprocessing and postprocessing approaches to general control volumes, the removal of expansion shocks and glitches, relaxation methods for implicit TVD schemes, and a new family of high-accuracy TVD schemes. 21 references

  14. Generalized dynamics of soft-matter quasicrystals mathematical models and solutions

    CERN Document Server

    Fan, Tian-You

    2017-01-01

    The book systematically introduces the mathematical models and solutions of generalized hydrodynamics of soft-matter quasicrystals (SMQ). It provides methods for solving the initial-boundary value problems in these systems. The solutions obtained demonstrate the distribution, deformation and motion of the soft-matter quasicrystals, and determine the stress, velocity and displacement fields. The interactions between phonons, phasons and fluid phonons are discussed in some fundamental materials samples. Mathematical solutions for solid and soft-matter quasicrystals are compared, to help readers to better understand the featured properties of SMQ.

  15. Detection of triglycerides using immobilized enzymes in food and biological samples

    Science.gov (United States)

    Raichur, Ashish; Lesi, Abiodun; Pedersen, Henrik

    1996-04-01

    A scheme for the determination of total triglyceride (fat) content in biomedical and food samples is being developed. The primary emphasis is to minimize the reagents used, simplify sample preparation and develop a robust system that would facilitate on-line monitoring. The new detection scheme developed thus far involves extracting triglycerides into an organic solvent (cyclohexane) and performing partial least squares (PLS) analysis on the NIR (1100 - 2500 nm) absorbance spectra of the solution. A training set using 132 spectra of known triglyceride mixtures was compiled. Eight PLS calibrations were generated and were used to predict the total fat extracted from commercial samples such as mayonnaise, butter, corn oil and coconut oil. The results typically gave a correlation coefficient (r) of 0.99 or better. Predictions were typically within 90% accuracy, improving at higher concentrations. Experiments were also performed using an immobilized lipase reactor to hydrolyze the fat extracted into the organic solvent. Performing PLS analysis on the difference spectra of the substrate and product could enhance specificity. This is being verified experimentally. Further work with biomedical samples is to be performed. This scheme may be developed into a feasible detection method for triglycerides in the biomedical and food industries.
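    The calibration step described above is straightforward to prototype with standard tools; the sketch below builds a synthetic NIR-like data set and cross-validates a PLS model on it. Everything here (band position, noise level, number of latent variables) is an assumption for illustration, not the authors' calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 350)                 # nm, NIR range
n_samples = 132
conc = rng.uniform(0.0, 5.0, n_samples)                    # triglyceride level (arbitrary units)
peak = np.exp(-((wavelengths - 1720) / 40.0) ** 2)         # synthetic C-H overtone band
spectra = conc[:, None] * peak + 0.01 * rng.normal(size=(n_samples, wavelengths.size))

pls = PLSRegression(n_components=8)                        # illustrative choice of latent variables
pred = cross_val_predict(pls, spectra, conc, cv=10).ravel()
r = np.corrcoef(pred, conc)[0, 1]
print(f"cross-validated correlation r = {r:.3f}")
```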

  16. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and are placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, and therefore the proposed scheme has the advantages of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
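    The encoder side of this idea is compact enough to sketch: a small random binary kernel replaces the anti-aliasing low-pass filter, and the filtered image is then polyphase down-sampled so the measurements keep their spatial arrangement. The function below is our own rough stand-in (kernel size, normalization, and boundary handling are assumptions), not the authors' implementation, and it omits the sparsity-based soft decoder entirely.

```python
import numpy as np
from scipy.signal import convolve2d

def cs_downsample(image, factor=2, ksize=3, seed=0):
    """Polyphase down-sampling preceded by a local random binary convolution
    instead of the usual low-pass filter (a sketch of the idea, not the paper's code)."""
    rng = np.random.default_rng(seed)
    kernel = rng.integers(0, 2, size=(ksize, ksize)).astype(float)
    kernel /= kernel.sum() + 1e-12            # keep the dynamic range roughly unchanged
    filtered = convolve2d(image, kernel, mode="same", boundary="symm")
    return filtered[::factor, ::factor]       # local random measurements on a coarse grid

image = np.random.rand(256, 256)              # stand-in for an input frame
measurements = cs_downsample(image, factor=2)
```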

  17. Mixed ultrasoft/norm-conserved pseudopotential scheme

    DEFF Research Database (Denmark)

    Stokbro, Kurt

    1996-01-01

    A variant of the Vanderbilt ultrasoft pseudopotential scheme, where the norm conservation is released for only one or a few angular channels, is presented. Within this scheme some difficulties of the truly ultrasoft pseudopotentials are overcome without sacrificing the pseudopotential softness. (...

  18. New practicable Siberian Snake schemes

    International Nuclear Information System (INIS)

    Steffen, K.

    1983-07-01

    Siberian Snake schemes can be inserted in ring accelerators for making the spin tune almost independent of energy. Two such schemes are suggested here that lend themselves particularly well to practical application over a wide energy range. Being composed of horizontal and vertical bending magnets, the proposed snakes are designed to have a small maximum beam excursion in one plane. By applying in this plane a bending correction that varies with energy, they can be operated at fixed geometry in the other plane where most of the bending occurs, thus avoiding complicated magnet motion or excessively large magnet apertures that would otherwise be needed for large energy variations. The first of the proposed schemes employs a pair of standard-type Siberian Snakes, i.e. of the usual 1st and 2nd kind, which rotate the spin about the longitudinal and the transverse horizontal axis, respectively. The second scheme employs a pair of novel-type snakes which rotate the spin about either one of the horizontal axes that are at 45° to the beam direction. In obvious reference to these axes, they are called left-pointed and right-pointed snakes. (orig.)

  19. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    Full Text Available decontamination, for example, acid-generating minerals. Acid rock drainage can adversely affect the quality of drinking water and the health of riparian ecosystems. To assess or monitor the environmental impact of mining, sampling of mine waste is required...

  20. Optimized spectroscopic scheme for enhanced precision CO measurements with applications to urban source attribution

    Science.gov (United States)

    Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.

    2014-12-01

    Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.

  1. Estimates and sampling schemes for the instrumentation of accountability systems

    International Nuclear Information System (INIS)

    Jewell, W.S.; Kwiatkowski, J.W.

    1976-10-01

    The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-groups error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered
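    A generic flavour of the estimators discussed here can be sketched with inverse-variance weighting: measurements (or group means) with smaller error variance receive more weight, and a known prior mean and variance can be folded in as one extra pseudo-observation. The sketch below is only a simplified stand-in for the report's hierarchical minimum mean squared-error estimators; names and numbers are illustrative.

```python
import numpy as np

def pooled_estimate(group_means, group_vars, prior_mean=None, prior_var=None):
    """Inverse-variance weighting of group measurements; optionally shrink toward a
    known prior mean/variance (a generic sketch, not the estimator derived in the report)."""
    w = 1.0 / np.asarray(group_vars, dtype=float)
    est, var = np.sum(w * group_means) / np.sum(w), 1.0 / np.sum(w)
    if prior_mean is not None:                      # treat the prior as one more observation
        w0 = 1.0 / prior_var
        est = (est / var + prior_mean * w0) / (1.0 / var + w0)
        var = 1.0 / (1.0 / var + w0)
    return est, var

est, var = pooled_estimate([10.2, 9.8, 10.5], [0.04, 0.09, 0.25], prior_mean=10.0, prior_var=1.0)
print(f"pooled estimate {est:.3f} with variance {var:.4f}")
```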

  2. Robust and Efficient Authentication Scheme for Session Initiation Protocol

    Directory of Open Access Journals (Sweden)

    Yanrong Lu

    2015-01-01

    Full Text Available The session initiation protocol (SIP) is a powerful application-layer protocol which is used as a signaling one for establishing, modifying, and terminating sessions among participants. Authentication is becoming an increasingly crucial issue when a user asks to access SIP services. Hitherto, many authentication schemes have been proposed to enhance the security of SIP. In 2014, Arshad and Nikooghadam proposed an enhanced authentication and key agreement scheme for SIP and claimed that their scheme could withstand various attacks. However, in this paper, we show that Arshad and Nikooghadam’s authentication scheme is still susceptible to key-compromise impersonation and trace attacks and does not provide proper mutual authentication. To overcome these flaws, we propose a secure and efficient ECC-based authentication scheme for SIP. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Arshad et al.’s scheme. In addition, the performance analysis shows that our scheme has similar or better efficiency in comparison with other existing ECC-based authentication schemes for SIP.

  3. Perceptions of healthcare quality in Ghana: Does health insurance status matter?

    Science.gov (United States)

    Duku, Stephen Kwasi Opoku; Nketiah-Amponsah, Edward; Janssens, Wendy; Pradhan, Menno

    2018-01-01

    This study's objective is to provide an alternative explanation for the low enrolment in health insurance in Ghana by analysing differences between the insured and uninsured in their perceptions of the non-technical quality of healthcare. It further explores the association between insurance status and perception of healthcare quality to ascertain whether insurance status matters in the perception of healthcare quality. Data from a survey of 1,903 households living in the catchment area of 64 health centres were used for the analysis. Two-sample independent t-tests were employed to compare the average perceptions of the insured and uninsured on seven indicators of non-technical quality of healthcare. A generalised ordered logit regression, controlling for socio-economic characteristics and clustering at the health facility level, tested the association between insurance status and perceived quality of healthcare. The perceptions of the insured were found to be significantly more negative than those of the uninsured, and those of the previously insured were significantly more negative than those of the never insured. Being insured was associated with a significantly lower perception of healthcare quality. Thus, once people are insured, they tend to perceive the quality of healthcare they receive as poor compared to those without insurance. This study demonstrated that health insurance status matters in the perceptions of healthcare quality. The findings also imply that perceptions of healthcare quality may be shaped by individual experiences at the health facilities, where the insured and uninsured may be treated differently. Health insurance then becomes less attractive due to the poor perception of the healthcare quality provided to individuals with insurance, resulting in low demand for health insurance in Ghana. Policy makers in Ghana should consider redesigning, reorganizing, and reengineering the National Healthcare Insurance Scheme to ensure the provision of better quality healthcare

  4. Accelerators for condensed matter research

    International Nuclear Information System (INIS)

    Williams, P.R.

    1990-01-01

    The requirement for high energy, high luminosity beams has stimulated the science and engineering of accelerators to a point where they open up opportunities for new areas of scientific application to benefit from the advances driven by particle physics. One area of great importance is the use of electron or positron storage rings as a source of intense VUV or X-ray synchrotron radiation. An accelerator application that has grown in prominence over the last 10 years has been spallation neutron sources. Neutrons offer an advantage over X-rays as a condensed matter probe because the neutron energy is usually of the same order as the room-temperature thermal energy fluctuations in the sample being studied. Another area in which accelerators are playing an increasingly important role in condensed matter research concerns the use of mu mesons (muons) as a probe. This paper also presents a description of the ISIS Spallation Neutron Source. The design and status of the facility are described, and examples are given of its application to the study of condensed matter. (N.K.)

  5. Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings

    Directory of Open Access Journals (Sweden)

    Caixue Zhou

    2017-01-01

    Full Text Available Generalized signcryption (GSC) can be applied as an encryption scheme, a signature scheme, or a signcryption scheme with only one algorithm and one key pair. A key-insulated mechanism can resolve the private key exposure problem. To ensure the security of cloud storage, we introduce the key-insulated mechanism into GSC and propose a concrete scheme without bilinear pairings in the certificateless cryptosystem setting. We provide a formal definition and a security model of certificateless key-insulated GSC. Then, we prove that our scheme is confidential under the computational Diffie-Hellman (CDH) assumption and unforgeable under the elliptic curve discrete logarithm (EC-DL) assumption. Our scheme also supports both random-access key update and secure key update. Finally, we evaluate the efficiency of our scheme and demonstrate that it is highly efficient. Thus, our scheme is more suitable for users who communicate with the cloud using mobile devices.

  6. Galaxy clusters in simulations of the local Universe: a matter of constraints

    Science.gov (United States)

    Sorce, Jenny G.; Tempel, Elmo

    2018-06-01

    To study the full formation and evolution history of galaxy clusters and their population, high-resolution simulations of clusters are flourishing. However, comparing observed clusters to the simulated ones on a one-to-one basis to refine the models and theories down to the details is non-trivial. The large variety of clusters limits the comparisons between observed and numerical clusters. Simulations resembling the local Universe down to the cluster scales permit pushing this limit. Simulated and observed clusters can be matched on a one-to-one basis for direct comparisons, provided that the clusters are well reproduced and sit in the proper large-scale environment. Comparing random and local Universe-like simulations obtained with differently grouped observational catalogues of peculiar velocities, this paper shows that the grouping scheme used to remove non-linear motions in the catalogues that constrain the simulations affects the quality of the numerical clusters. With a less aggressive grouping scheme (galaxies still falling on to clusters are preserved) combined with a bias minimization scheme, the mass of the dark matter haloes that serve as simulacra for five local clusters (Virgo, Centaurus, Coma, Hydra, and Perseus) is increased by 39 per cent, closing the gap with observational mass estimates. Simulacra are found on average in 89 per cent of the simulations, an increase of 5 per cent with respect to the previous grouping scheme. The only exception is Perseus. Since the Perseus-Pisces region is not well covered by the peculiar velocity catalogue used, the latest release lets us foresee a better simulacrum for Perseus in the near future.

  7. Anonymous Credential Schemes with Encrypted Attributes

    NARCIS (Netherlands)

    Guajardo Merchan, J.; Mennink, B.; Schoenmakers, B.

    2011-01-01

    In anonymous credential schemes, users obtain credentials on certain attributes from an issuer, and later show these credentials to a relying party anonymously and without fully disclosing the attributes. In this paper, we introduce the notion of (anonymous) credential schemes with encrypted

  8. Simple Numerical Schemes for the Korteweg-deVries Equation

    International Nuclear Information System (INIS)

    McKinstrie, C. J.; Kozlov, M.V.

    2000-01-01

    Two numerical schemes, which simulate the propagation of dispersive non-linear waves, are described. The first is a split-step Fourier scheme for the Korteweg-de Vries (KdV) equation. The second is a finite-difference scheme for the modified KdV equation. The stability and accuracy of both schemes are discussed. These simple schemes can be used to study a wide variety of physical processes that involve dispersive nonlinear waves

  9. Simple Numerical Schemes for the Korteweg-deVries Equation

    Energy Technology Data Exchange (ETDEWEB)

    C. J. McKinstrie; M. V. Kozlov

    2000-12-01

    Two numerical schemes, which simulate the propagation of dispersive non-linear waves, are described. The first is a split-step Fourier scheme for the Korteweg-de Vries (KdV) equation. The second is a finite-difference scheme for the modified KdV equation. The stability and accuracy of both schemes are discussed. These simple schemes can be used to study a wide variety of physical processes that involve dispersive nonlinear waves.
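    For readers who want to experiment, a bare-bones split-step Fourier integrator for the KdV equation u_t + 6 u u_x + u_xxx = 0 looks as follows: the dispersive part is advanced exactly in Fourier space and the nonlinear part with a small explicit step. This is a generic sketch under our own choices of step size and initial soliton, not the scheme or code from the papers above.

```python
import numpy as np

def kdv_split_step(u0, L, dt, steps):
    """Split-step Fourier integration of u_t + 6 u u_x + u_xxx = 0 on a periodic
    domain of length L (an illustrative sketch of the approach)."""
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    disp = np.exp(1j * k**3 * dt)             # exact dispersive (linear) sub-step
    u = u0.astype(float)
    for _ in range(steps):
        # nonlinear sub-step u_t = -3 (u^2)_x, advanced with a small Euler step
        u = u - dt * 3.0 * np.real(np.fft.ifft(1j * k * np.fft.fft(u**2)))
        # dispersive sub-step, solved exactly in Fourier space
        u = np.real(np.fft.ifft(disp * np.fft.fft(u)))
    return u

L, n, c = 50.0, 512, 2.0
x = np.linspace(0.0, L, n, endpoint=False)
u0 = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 0.25 * L))**2   # single-soliton initial data
u = kdv_split_step(u0, L, dt=1e-4, steps=5000)
```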

  10. Performance comparison of renewable incentive schemes using optimal control

    International Nuclear Information System (INIS)

    Oak, Neeraj; Lawson, Daniel; Champneys, Alan

    2014-01-01

    Many governments worldwide have instituted incentive schemes for renewable electricity producers in order to meet carbon emissions targets. These schemes aim to boost investment and hence growth in renewable energy industries. This paper examines four such schemes: premium feed-in tariffs, fixed feed-in tariffs, feed-in tariffs with contract for difference and the renewable obligations scheme. A generalised mathematical model of industry growth is presented and fitted with data from the UK onshore wind industry. The model responds to subsidy from each of the four incentive schemes. A utility or ‘fitness’ function that maximises installed capacity at some fixed time in the future while minimising total cost of subsidy is postulated. Using this function, the optimal strategy for provision and timing of subsidy for each scheme is calculated. Finally, a comparison of the performance of each scheme, given that they use their optimal control strategy, is presented. This model indicates that the premium feed-in tariff and renewable obligation scheme produce the joint best results. - Highlights: • Stochastic differential equation model of renewable energy industry growth and prices, using UK onshore wind data 1992–2010. • Cost of production reduces as cumulative installed capacity of wind energy increases, consistent with the theory of learning. • Studies the effect of subsidy using feed-in tariff schemes, and the ‘renewable obligations’ scheme. • We determine the optimal timing and quantity of subsidy required to maximise industry growth and minimise costs. • The premium feed-in tariff scheme and the renewable obligations scheme produce the best results under optimal control

  11. Compact stars with a small electric charge: the limiting radius to mass relation and the maximum mass for incompressible matter

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Jose P.S.; Lopes, Francisco J.; Quinta, Goncalo [Universidade de Lisboa, UL, Departamento de Fisica, Centro Multidisciplinar de Astrofisica, CENTRA, Instituto Superior Tecnico, IST, Lisbon (Portugal); Zanchin, Vilson T. [Universidade Federal do ABC, Centro de Ciencias Naturais e Humanas, Santo Andre, SP (Brazil)

    2015-02-01

    One of the stiffest equations of state for matter in a compact star is that of constant energy density, and this generates the interior Schwarzschild radius-to-mass relation and the Misner maximum mass for relativistic compact stars. If dark matter populates the interior of stars, and this matter is supersymmetric or of some other type, some of it possessing a tiny electric charge, there is the possibility that highly compact stars can trap a small but non-negligible electric charge. In this case the radius-to-mass relation for such compact stars should be modified. We use an analytical scheme to investigate the limiting radius-to-mass relation and the maximum mass of relativistic stars made of an incompressible fluid with a small electric charge. The investigation is carried out by using the hydrostatic equilibrium equation, i.e., the Tolman-Oppenheimer-Volkoff (TOV) equation, together with the other equations of structure, with the further hypothesis that the charge distribution is proportional to the energy density. The approach relies on Volkoff and Misner's method to solve the TOV equation. For zero charge one gets the interior Schwarzschild limit, and supposing incompressible boson or fermion matter with constituents with masses of the order of the neutron mass, one finds that the maximum mass is the Misner mass. For a small electric charge, our analytical approximation scheme, valid to first order in the star's electric charge, shows that the maximum mass increases relative to the uncharged case, whereas the minimum possible radius decreases, an expected effect since the new field is repulsive, aiding the pressure to sustain the star against gravitational collapse. (orig.)
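    The starting point of such an analysis, the uncharged TOV equation for an incompressible fluid, can be integrated in a few lines (geometric units G = c = 1); pushing the compactness 2M/R toward the interior Schwarzschild bound 8/9 reproduces the maximum-mass behaviour. The sketch below is only this zero-charge baseline with arbitrary numbers; the charged case adds terms for the charge distribution that are not attempted here.

```python
import numpy as np

def tov_constant_density(rho, p_c, dr=1e-4):
    """Integrate the (uncharged) TOV equation for an incompressible fluid, G = c = 1.
    Returns the radius R and mass M at the surface, where the pressure drops to zero."""
    r, m, p = dr, (4.0 / 3.0) * np.pi * dr**3 * rho, p_c
    while p > 0.0:
        dpdr = -(rho + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        p += dpdr * dr                          # hydrostatic equilibrium
        m += 4.0 * np.pi * r**2 * rho * dr      # mass continuity
        r += dr
    return r, m

R, M = tov_constant_density(rho=1.0e-3, p_c=5.0e-4)   # illustrative central values
print(f"compactness 2M/R = {2*M/R:.3f} (interior Schwarzschild bound: 8/9 = 0.889)")
```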

  12. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. Multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in the applications with multi-band frequency sparse signals...

  13. A rational function based scheme for solving advection equation

    International Nuclear Information System (INIS)

    Xiao, Feng; Yabe, Takashi.

    1995-07-01

    A numerical scheme for solving advection equations is presented. The scheme is derived from a rational interpolation function. Some properties of the scheme with respect to convex-concave preservation and monotonicity preservation are discussed. We find that the scheme is attractive in suppressing overshoots and undershoots even in the vicinity of discontinuities. The scheme can also easily be switched to the CIP (Cubic Interpolated Pseudo-particle) method to obtain third-order accuracy in smooth regions. A number of numerical tests are carried out to show the non-oscillatory and less diffusive nature of the scheme. (author)

  14. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surfaces of rigid bodies with boundary particles that interact with the fluid. It consists of two procedures, namely surface sampling and sampling relaxation, which ensure a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme that integrates individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to the previous method. The experimental results demonstrate the effectiveness of our approach.
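    As a toy version of the surface sampling and relaxation steps, one can scatter boundary particles on an analytic surface and relax them by mutual repulsion with reprojection onto the surface. The sphere example below is our own simplified stand-in; the paper samples arbitrary rigid-body surfaces and couples the particles to an SPH fluid, neither of which is shown here.

```python
import numpy as np

def sample_sphere_surface(n, radius=1.0, iters=100, seed=0):
    """Scatter n boundary particles on a sphere, then relax them toward a uniform
    distribution by mutual repulsion followed by reprojection onto the surface."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p *= radius / np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(iters):
        diff = p[:, None, :] - p[None, :, :]               # pairwise displacements
        d = np.linalg.norm(diff, axis=-1) + np.eye(n)      # avoid self-division by zero
        force = (diff / d[..., None] ** 3).sum(axis=1)     # inverse-square repulsion
        step = 0.02 * radius / (np.linalg.norm(force, axis=1).max() + 1e-12)
        p += step * force                                  # cap the largest move per iteration
        p *= radius / np.linalg.norm(p, axis=1, keepdims=True)  # project back onto the sphere
    return p

boundary_particles = sample_sphere_surface(400)
```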

  15. Major Superficial White Matter Abnormalities in Huntington's Disease

    Science.gov (United States)

    Phillips, Owen R.; Joshi, Shantanu H.; Squitieri, Ferdinando; Sanchez-Castaneda, Cristina; Narr, Katherine; Shattuck, David W.; Caltagirone, Carlo; Sabatini, Umberto; Di Paola, Margherita

    2016-01-01

    Background: The late-myelinating superficial white matter at the juncture of the cortical gray and white matter, comprising the intracortical myelin and short-range association fibers, has not received attention in Huntington's disease. It is an area of the brain that myelinates late and is sensitive to both normal aging and neurodegenerative disease effects. Therefore, it may be sensitive to Huntington's disease processes. Methods: Structural MRI data from 25 pre-symptomatic subjects, 24 Huntington's disease patients and 49 healthy controls were run through a cortical pattern-matching program. The surface corresponding to the white matter directly below the cortical gray matter was then extracted. Each individual subject's diffusion tensor imaging (DTI) data were aligned to their structural MRI data. Diffusivity values along the white matter surface were then sampled at each vertex point. DTI measures with high spatial resolution across the superficial white matter surface were then analyzed with the general linear model to test for the effects of disease. Results: There was an overall increase in axial and radial diffusivity across much of the superficial white matter (p < 0.001) in pre-symptomatic subjects compared to controls. In Huntington's disease patients, increased diffusivity covered essentially the whole brain (p < 0.001). Changes were correlated with genotype (CAG repeat number) and disease burden (p < 0.001). Conclusions: This study showed broad abnormalities in the superficial white matter even before symptoms are present in Huntington's disease. Since the superficial white matter has a unique microstructure and function, these abnormalities suggest it plays an important role in the disease. PMID:27242403

  16. Quark matter 93

    Energy Technology Data Exchange (ETDEWEB)

    Otterlund, Ingvar; Ruuskanen, Vesa

    1993-12-15

    In his welcome address to the 10th International Conference on Ultra- Relativistic Nucleus-Nucleus Collisions (Quark Matter '93), held in Borlange, Sweden, from 20-24 June, Hans-Ake Gustafsson was puzzled why this year's conference was billed as the tenth in the series. He had tried to count but could only find eight forerunners - Bielefeld (1982), Brookhaven (1983), Helsinki (1984), Asilomar (1986), Nordkirchen (1987), Lenox (1988), Menton (1990), Gatlinburg (1991), making this year's meeting at Borlange the ninth. The answer was given by Helmut Satz in his introductory talk, pointing out that at the time of the Bielefeld meeting, a few conferences dealing with similar topics had already been held. The Bielefeld organizers thus did not consider their conference the first. Whatever its pedigree, the Borlange meeting covered particle production in highly excited and compressed nuclear matter, fluctuations and correlations, quark phenomena (quantum chromodynamics - QCD) in nuclear collisions, probes and signatures of Quark-Gluon Plasma (QGP), future collider experiments and instrumentation. The theoretical talks were split between the fundamental properties of the hot and dense matter at or near equilibrium, and the interface between theory and experiment. The phenomenological modelling of heavy ion collisions seems to reproduce at least all the main features of the data with hadrons, resonances and strings as the degrees of freedom. However secondary interactions among the produced hadrons or strings need to be added. Hydrodynamic calculations lead to results which reproduce the main features of the collisions. With increasing collision energy, the parton degrees of freedom become more important. Klaus Geiger described an ambitious scheme treating the whole nucleus-nucleus collision in terms of a kinetic parton (quark/gluon) cascade. The initial parton distribution at the beginning of the collision is determined from the quark-gluon nuclear structure and the

  17. Quark matter 93

    International Nuclear Information System (INIS)

    Otterlund, Ingvar; Ruuskanen, Vesa

    1993-01-01

    In his welcome address to the 10th International Conference on Ultra- Relativistic Nucleus-Nucleus Collisions (Quark Matter '93), held in Borlange, Sweden, from 20-24 June, Hans-Ake Gustafsson was puzzled why this year's conference was billed as the tenth in the series. He had tried to count but could only find eight forerunners - Bielefeld (1982), Brookhaven (1983), Helsinki (1984), Asilomar (1986), Nordkirchen (1987), Lenox (1988), Menton (1990), Gatlinburg (1991), making this year's meeting at Borlange the ninth. The answer was given by Helmut Satz in his introductory talk, pointing out that at the time of the Bielefeld meeting, a few conferences dealing with similar topics had already been held. The Bielefeld organizers thus did not consider their conference the first. Whatever its pedigree, the Borlange meeting covered particle production in highly excited and compressed nuclear matter, fluctuations and correlations, quark phenomena (quantum chromodynamics - QCD) in nuclear collisions, probes and signatures of Quark-Gluon Plasma (QGP), future collider experiments and instrumentation. The theoretical talks were split between the fundamental properties of the hot and dense matter at or near equilibrium, and the interface between theory and experiment. The phenomenological modelling of heavy ion collisions seems to reproduce at least all the main features of the data with hadrons, resonances and strings as the degrees of freedom. However secondary interactions among the produced hadrons or strings need to be added. Hydrodynamic calculations lead to results which reproduce the main features of the collisions. With increasing collision energy, the parton degrees of freedom become more important. Klaus Geiger described an ambitious scheme treating the whole nucleus-nucleus collision in terms of a kinetic parton (quark/gluon) cascade. The initial parton distribution at the beginning of the collision is determined from the quark-gluon nuclear structure

  18. Algebraic K-theory of generalized schemes

    DEFF Research Database (Denmark)

    Anevski, Stella Victoria Desiree

    Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry, and geometry over the field with one element. It also permits the construction of important Arakelov theoretical objects, such as the completion \Spec Z of Spec Z. In this thesis, we prove a projective bundle theorem for the field with one element and compute the Chow rings of the generalized schemes Sp\ec ZN, appearing in the construction of \Spec Z.

  19. Real-time photonic sampling with improved signal-to-noise and distortion ratio using polarization-dependent modulators

    Science.gov (United States)

    Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui

    2018-04-01

    A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key points of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristics of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs helps to achieve preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of effective nonlinearity suppression and provides better SNR performance even over a large frequency range. The proposed scheme is shown to be effective and easily implemented for real-time photonic applications.

  20. A modified symplectic PRK scheme for seismic wave modeling

    Science.gov (United States)

    Liu, Shaolin; Yang, Dinghui; Ma, Jian

    2017-02-01

    A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
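    The simplest member of the symplectic PRK family mentioned above is the Stormer-Verlet (leapfrog) method, which already illustrates why such integrators are attractive for long wave-propagation runs: the discrete energy stays bounded over many steps. The sketch below applies it to a toy semi-discrete wave equation q'' = -K q; it is not the paper's modified scheme, which adds a velocity-dependent correction term and specially tuned coefficients.

```python
import numpy as np

def verlet(q, p, K, dt, steps):
    """Stormer-Verlet leapfrog, the simplest symplectic partitioned Runge-Kutta method,
    applied to the semi-discrete wave equation q'' = -K q (illustrative stand-in only)."""
    for _ in range(steps):
        p = p - 0.5 * dt * (K @ q)     # half kick
        q = q + dt * p                 # drift
        p = p - 0.5 * dt * (K @ q)     # half kick
    return q, p

n = 100
K = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))                 # 1D Laplacian stiffness matrix
q0 = np.exp(-0.05 * (np.arange(n) - n / 2) ** 2)    # Gaussian initial displacement
q, p = verlet(q0, np.zeros(n), K, dt=0.1, steps=1000)
```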

  1. Finite Difference Schemes as Algebraic Correspondences between Layers

    Science.gov (United States)

    Malykh, Mikhail; Sevastianov, Leonid

    2018-02-01

    For some differential equations, especially the Riccati equation, new finite difference schemes are suggested. These schemes define projective correspondences between the layers. Calculations using these schemes can be extended to the region beyond the movable singularities of the exact solution without any error accumulation.

  2. Financial incentive schemes in primary care

    Directory of Open Access Journals (Sweden)

    Gillam S

    2015-09-01

    Full Text Available Stephen Gillam Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, UK Abstract: Pay-for-performance (P4P) schemes have become increasingly common in primary care, and this article reviews their impact. It is based primarily on existing systematic reviews. The evidence suggests that P4P schemes can change health professionals' behavior and improve recorded disease management of those clinical processes that are incentivized. P4P may narrow inequalities in performance between deprived and nondeprived areas. However, such schemes have unintended consequences. Whether P4P improves the patient experience, the outcomes of care or population health is less clear. These practical uncertainties mirror the ethical concerns of many clinicians that a reductionist approach to managing markers of chronic disease runs counter to the humanitarian values of family practice. The variation in P4P schemes between countries reflects different historical and organizational contexts. With so much uncertainty regarding the effects of P4P, policy makers are well advised to proceed carefully with the implementation of such schemes until and unless clearer evidence for their cost–benefit emerges. Keywords: financial incentives, pay for performance, quality improvement, primary care

  3. Computerized detection method for asymptomatic white matter lesions in brain screening MR images using a clustering technique

    International Nuclear Information System (INIS)

    Kunieda, Takuya; Uchiyama, Yoshikazu; Hara, Takeshi

    2008-01-01

    Asymptomatic white matter lesions are frequently identified by the screening system known as Brain Dock, which is intended for the detection of asymptomatic brain diseases. The detection of asymptomatic white matter lesions is important because their presence is associated with an increased risk of stroke. Therefore, we have developed a computerized method for the detection of asymptomatic white matter lesions in order to assist radiologists in image interpretation as a "second opinion". Our database consisted of T1- and T2-weighted images obtained from 73 patients. The locations of the white matter lesions were determined by an experienced neuroradiologist. In order to restrict the area to be searched for white matter lesions, we first segmented the cerebral region in T1-weighted images by applying thresholding and region-growing techniques. To identify the initial candidate lesions, k-means clustering with pixel values in T1- and T2-weighted images was applied to the segmented cerebral region. To eliminate false positives (FPs), we determined the features, such as location, size, and circularity, of each of the initial candidate lesions. Finally, a rule-based scheme and a quadratic discriminant analysis with these features were employed to distinguish between white matter lesions and FPs. The results showed that the sensitivity for the detection of white matter lesions was 93.2%, with 4.3 FPs per image, suggesting that our computerized method may be useful for the detection of asymptomatic white matter lesions in T1- and T2-weighted images. (author)
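    The candidate-detection step described above, clustering voxels on their joint T1/T2 intensities and keeping the T2-hyperintense cluster, can be prototyped in a few lines. The sketch below is a heavily simplified stand-in with random placeholder images; the cluster count, the selection rule, and the downstream rule-based and discriminant-analysis FP reduction are our own assumptions or omissions.

```python
import numpy as np
from sklearn.cluster import KMeans

def initial_lesion_candidates(t1, t2, brain_mask, n_clusters=4, seed=0):
    """Cluster voxels of the segmented cerebral region on their (T1, T2) intensities and
    keep the T2-hyperintense cluster as initial lesion candidates (simplified sketch)."""
    feats = np.column_stack([t1[brain_mask], t2[brain_mask]])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feats)
    # white matter lesions appear bright on T2: pick the cluster with the highest mean T2
    lesion_label = max(range(n_clusters), key=lambda c: feats[labels == c, 1].mean())
    candidates = np.zeros_like(t1, dtype=bool)
    candidates[brain_mask] = labels == lesion_label
    return candidates

t1 = np.random.rand(64, 64)            # stand-ins for co-registered T1/T2 slices
t2 = np.random.rand(64, 64)
mask = np.ones_like(t1, dtype=bool)    # stand-in for the segmented cerebral region
cand = initial_lesion_candidates(t1, t2, mask)
```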

  4. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  5. Generalization of binary tensor product schemes depends upon four parameters

    International Nuclear Information System (INIS)

    Bashir, R.; Bari, M.; Mustafa, G.

    2018-01-01

    This article deals with general formulae for parametric and non-parametric bivariate subdivision schemes with four parameters. By assigning specific values to those parameters we obtain special cases of existing tensor product schemes as well as a newly proposed scheme. The schemes produced by the general formulae may be interpolating, approximating or relaxed. Approximating bivariate subdivision schemes produce different surfaces compared with interpolating bivariate subdivision schemes. Polynomial reproduction and polynomial generation are desirable properties of subdivision schemes; their capability is strongly connected with smoothness, sum rules, convergence and approximation order. We also calculate the polynomial generation and polynomial reproduction of a 9-point bivariate approximating subdivision scheme. A comparison of the polynomial reproduction, polynomial generation and continuity of existing and proposed schemes has also been established. Some numerical examples are presented to show the behavior of the bivariate schemes. (author)

  6. The effect of sampling frequency on the accuracy of estimates of milk ...

    African Journals Online (AJOL)

    The results of this study support the five-weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional bulking of individual morning and evening samples with a single evening milk sample would not compromise accuracy provided ...

  7. Development of Mycoplasma synoviae (MS) core genome multilocus sequence typing (cgMLST) scheme.

    Science.gov (United States)

    Ghanem, Mostafa; El-Gazzar, Mohamed

    2018-05-01

    Mycoplasma synoviae (MS) is a poultry pathogen with reported increased prevalence and virulence in recent years. MS strain identification is essential for prevention, control efforts and epidemiological outbreak investigations. Multiple multilocus sequence typing schemes have been developed for MS, yet the resolution of these schemes can be limited for outbreak investigation. The cost of whole genome sequencing has become close to that of sequencing the seven MLST targets; however, there is no standardized method for typing MS strains based on whole genome sequences. In this paper, we propose a core genome multilocus sequence typing (cgMLST) scheme as a standardized and reproducible method for typing MS based on whole genome sequences. A diverse set of 25 MS whole genome sequences was used to identify 302 core genome genes as cgMLST targets (35.5% of the MS genome), and 44 whole genome sequences of MS isolates from six countries in four continents were used for typing with this scheme. cgMLST-based phylogenetic trees displayed a high degree of agreement with core genome SNP-based analysis and available epidemiological information. cgMLST allowed evaluation of two conventional MLST schemes of MS. The high discriminatory power of cgMLST allowed differentiation between samples of the same conventional MLST type. cgMLST represents a standardized, accurate, highly discriminatory, and reproducible method for differentiation between MS isolates. Like conventional MLST, it provides stable and expandable nomenclature, allowing for comparing and sharing the typing results between different laboratories worldwide. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
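
    For illustration only (a toy sketch, not the authors' pipeline), a cgMLST comparison reduces to counting allele differences over the shared core-genome loci; the locus names and allele numbers below are hypothetical:

      def allele_distance(profile_a, profile_b):
          """Number of core-genome loci at which two isolates carry different
          alleles; loci missing in either profile are ignored."""
          shared = [locus for locus in profile_a
                    if locus in profile_b
                    and profile_a[locus] is not None
                    and profile_b[locus] is not None]
          return sum(profile_a[locus] != profile_b[locus] for locus in shared)

      # toy profiles over three of the 302 cgMLST targets
      isolate1 = {"locus_001": 4, "locus_002": 1, "locus_003": 7}
      isolate2 = {"locus_001": 4, "locus_002": 2, "locus_003": None}
      print(allele_distance(isolate1, isolate2))   # -> 1

    A matrix of such pairwise distances is what a cgMLST-based tree or cluster analysis is built on.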

  8. Molecular characterization of macrophyte-derived dissolved organic matters and their implications for lakes

    Science.gov (United States)

    Chemical properties of whole organic matter (OM) and its dissolved organic matter (DOM) fraction from six dominant macrophytes in Lake Dianchi were comparatively characterized, and their environmental implications were discussed. Significant differences in chemical composition of the OM samples were...

  9. Signature Schemes Secure against Hard-to-Invert Leakage

    DEFF Research Database (Denmark)

    Faust, Sebastian; Hazay, Carmit; Nielsen, Jesper Buus

    2012-01-01

    … of the secret key. As a second contribution, we construct a signature scheme that achieves security for random messages assuming that the adversary is given a polynomial-time hard-to-invert function. Here, polynomial-hardness is required even when given the entire public key – so-called weak auxiliary input … -theoretically reveal the entire secret key. In this work, we propose the first constructions of digital signature schemes that are secure in the auxiliary input model. Our main contribution is a digital signature scheme that is secure against chosen message attacks when given an exponentially hard-to-invert function … security. We show that such signature schemes readily give us auxiliary input secure identification schemes …

  10. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    Full Text Available: This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than time-triggered sampling in some situations, especially with regard to network bandwidth. However, it cannot detect packet dropouts, because data transmission and reception do not use the periodic time-stamp mechanism found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme, called modified SOD, in which sensor data are sent when either the change in the sensor output exceeds a given threshold or more than a given interval has elapsed since the last transmission. Simulation results show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
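
    A minimal sketch of the modified send-on-delta rule described above (threshold and timeout values are illustrative assumptions, not those used in the paper):

      import time

      class ModifiedSOD:
          """Send when the sensor value changes by more than `delta`, or when
          more than `max_interval` seconds have passed since the last
          transmission, so the estimator can infer packet dropouts."""
          def __init__(self, delta=0.5, max_interval=1.0):
              self.delta = delta
              self.max_interval = max_interval
              self.last_value = None
              self.last_time = None

          def should_send(self, value, now=None):
              now = time.monotonic() if now is None else now
              send = (self.last_value is None
                      or abs(value - self.last_value) > self.delta
                      or now - self.last_time > self.max_interval)
              if send:
                  self.last_value, self.last_time = value, now
              return send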

  11. A combinatorial characterization scheme for high-throughput investigations of hydrogen storage materials

    International Nuclear Information System (INIS)

    Hattrick-Simpers, Jason R; Chiu, Chun; Bendersky, Leonid A; Tan Zhuopeng; Oguchi, Hiroyuki; Heilweil, Edwin J; Maslar, James E

    2011-01-01

    In order to increase measurement throughput, a characterization scheme has been developed that accurately measures the hydrogen storage properties of materials in quantities ranging from 10 ng to 1 g. Initial identification of promising materials is realized by rapidly screening thin-film composition spread and thickness wedge samples using normalized IR emissivity imaging. The hydrogen storage properties of promising samples are confirmed through measurements on single-composition films with high-sensitivity (resolution <0.3 μg) Sievert's-type apparatus. For selected samples, larger quantities of up to ∼100 mg may be prepared and their (de)hydrogenation and micro-structural properties probed via parallel in situ Raman spectroscopy. Final confirmation of the hydrogen storage properties is obtained on ∼1 g powder samples using a combined Raman spectroscopy/Sievert's apparatus.

  12. ONU Power Saving Scheme for EPON System

    Science.gov (United States)

    Mukai, Hiroaki; Tano, Fumihiko; Tanaka, Masaki; Kozaki, Seiji; Yamanaka, Hideaki

    PON (Passive Optical Network) achieves FTTH (Fiber To The Home) economically by sharing an optical fiber among multiple subscribers. Recently, global climate change has been recognized as a serious near-term problem, and power saving techniques for electronic devices are important. In PON systems, an ONU (Optical Network Unit) power saving scheme has been studied and defined for XG-PON. In this paper, we propose an ONU power saving scheme for EPON. We then present an analysis of the power reduction effect and the data transmission delay caused by the ONU power saving scheme. Based on this analysis, we propose an efficient provisioning method for the ONU power saving scheme that is applicable to both XG-PON and EPON.

  13. Overestimation of Crop Root Biomass in Field Experiments Due to Extraneous Organic Matter.

    Science.gov (United States)

    Hirte, Juliane; Leifeld, Jens; Abiven, Samuel; Oberholzer, Hans-Rudolf; Hammelehle, Andreas; Mayer, Jochen

    2017-01-01

    Root biomass is one of the most relevant root parameters for studies of plant response to environmental change, soil carbon modeling or estimations of soil carbon sequestration. A major source of error in root biomass quantification of agricultural crops in the field is the presence of extraneous organic matter in soil: dead roots from previous crops, weed roots, incorporated above-ground plant residues and organic soil amendments, or remnants of soil fauna. Using the isotopic difference between recent maize root biomass and predominantly C3-derived extraneous organic matter, we determined the proportion of maize root biomass carbon in the total carbon of root samples from the Swiss long-term field trial "DOK." We additionally evaluated the effects of agricultural management (bio-organic and conventional), sampling depth (0-0.25, 0.25-0.5, 0.5-0.75 m) and position (within and between maize rows), and root size class (coarse and fine roots) as defined by sieve mesh size (2 and 0.5 mm) on those proportions, and quantified the success rate of manual exclusion of extraneous organic matter from root samples. Only 60% of the root mass that we retrieved from field soil cores was actual maize root biomass from the current season. While the proportions of maize root biomass carbon were not affected by agricultural management, they increased consistently with soil depth, were higher within than between maize rows, and were higher in coarse (>2 mm) than in fine (≤2 and >0.5 mm) root samples. The success rate of manual exclusion of extraneous organic matter from root samples was related to agricultural management and was, at best, about 60%. We assume that the composition of extraneous organic matter is strongly influenced by agricultural management and soil depth and governs the effect size of the investigated factors. Extraneous organic matter may result in severe overestimation of recovered root biomass and has, therefore, large implications for soil carbon modeling and estimations of soil carbon sequestration.
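
    The isotopic partitioning described above is a two-member mixing model; a sketch with hypothetical δ13C end-member values (the end members actually used in the study are not given in this record):

      def maize_root_carbon_fraction(delta_sample, delta_c3=-27.0, delta_maize=-12.5):
          """Fraction of root-sample carbon that is maize (C4) derived, from the
          sample's d13C and the C3 / maize end-member values (illustrative numbers)."""
          return (delta_sample - delta_c3) / (delta_maize - delta_c3)

      # a sample at -18.3 per mil would be ~60% maize root carbon under these end members
      print(round(maize_root_carbon_fraction(-18.3), 2))   # -> 0.6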

  14. A survey of Strong Convergent Schemes for the Simulation of ...

    African Journals Online (AJOL)

    We considered strongly convergent stochastic schemes for the simulation of stochastic differential equations. The stochastic Taylor expansion is the main tool used for the derivation of strongly convergent schemes; the Euler-Maruyama scheme, the Milstein scheme, stochastic multistep schemes, and implicit and explicit schemes were ...

  15. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    Science.gov (United States)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, because each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
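
    For orientation only (not the authors' optimized recursive filter), sub-voxel shifting with a cubic B-spline interpolant can be sketched with SciPy, where the prefilter step plays the role of the B-spline transform:

      import numpy as np
      from scipy.ndimage import shift

      def subvoxel_shift(volume, offset):
          """Shift a 3-D volume by a fractional (dz, dy, dx) offset using cubic
          B-spline interpolation; prefilter=True applies the B-spline transform
          before the interpolant is evaluated."""
          return shift(volume, offset, order=3, mode="nearest", prefilter=True)

      vol = np.random.rand(32, 32, 32)
      shifted = subvoxel_shift(vol, (0.25, -0.4, 0.1))

    The interpolation bias studied in the record shows up as the systematic error of such a shift as a function of the fractional offset.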

  16. A unitary ESPRIT scheme of joint angle estimation for MOTS MIMO radar.

    Science.gov (United States)

    Wen, Chao; Shi, Guangming

    2014-08-07

    The transmit array of a multi-overlapped-transmit-subarray configured bistatic multiple-input multiple-output (MOTS MIMO) radar is partitioned into a number of overlapped subarrays, which differs from the traditional bistatic MIMO radar. In this paper, a new unitary ESPRIT scheme for joint estimation of the direction of departure (DOD) and the direction of arrival (DOA) for MOTS MIMO radar is proposed. In our method, each overlapped transmit subarray (OTS), with identical effective aperture, is regarded as a transmit element, and the characteristic phase delays between OTSs are utilized. First, the measurements corresponding to all the OTSs are partitioned into two groups which have a rotational invariance relationship with each other. Then, the properties of centro-Hermitian matrices and real-valued rotational invariance factors are exploited to double the measurement samples and reduce computational complexity. Finally, a closed-form solution for the automatically paired DOAs and DODs of targets is derived in a new manner. The proposed scheme provides increased estimation accuracy by combining the inherent advantages of MOTS MIMO radar with unitary ESPRIT. Simulation results are presented to demonstrate the effectiveness and advantage of the proposed scheme.

  17. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

    Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives all elements making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques such as the use of cutters that are too narrow traveling too fast through the stream to be sampled.

  18. Laplace-Fourier-domain dispersion analysis of an average derivative optimal scheme for scalar-wave equation

    Science.gov (United States)

    Chen, Jing-Bo

    2014-06-01

    By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.

  19. A Fuzzy Commitment Scheme with McEliece's Cipher

    Directory of Open Access Journals (Sweden)

    Deo Brat Ojha

    2010-04-01

    Full Text Available: In this paper an attempt has been made to explain a fuzzy commitment scheme combined with the McEliece scheme. The efficiency and security of this cryptosystem compare favorably with other cryptosystems. The McEliece scheme is one of the interesting candidates for post-quantum cryptography, hence our interest in combining it with a fuzzy commitment scheme. The concept itself is illustrated with the help of a simple situation, and a mathematical experimental verification is provided.

  20. Feasible Teleportation Schemes with Five-Atom Entangled State

    Institute of Scientific and Technical Information of China (English)

    XUE Zheng-Yuan; YI You-Min; CAO Zhuo-Liang

    2006-01-01

    Teleportation schemes with a five-atom entangled state are investigated. Because Bell-state measurements (BSMs) are difficult to realize physically, we investigate an alternative strategy, based on cavity quantum electrodynamics techniques, that uses separate measurements instead of BSMs. The scheme for two-atom entangled state teleportation is controlled and probabilistic. For the teleportation of a three-atom entangled state, the scheme is probabilistic. The fidelity and the probability of successful teleportation are also obtained.

  1. Rapid habitability assessment of Mars samples by pyrolysis-FTIR

    Science.gov (United States)

    Gordon, Peter R.; Sephton, Mark A.

    2016-02-01

    Pyrolysis Fourier transform infrared spectroscopy (pyrolysis FTIR) is a potential sample selection method for Mars Sample Return missions. FTIR spectroscopy can be performed on solid and liquid samples but also on gases following preliminary thermal extraction, pyrolysis or gasification steps. The detection of hydrocarbon and non-hydrocarbon gases can reveal information on sample mineralogy and past habitability of the environment in which the sample was created. The absorption of IR radiation at specific wavenumbers by organic functional groups can indicate the presence and type of any organic matter present. Here we assess the utility of pyrolysis-FTIR to release water, carbon dioxide, sulfur dioxide and organic matter from Mars relevant materials to enable a rapid habitability assessment of target rocks for sample return. For our assessment a range of minerals were analyzed by attenuated total reflectance FTIR. Subsequently, the mineral samples were subjected to single step pyrolysis and multi step pyrolysis and the products characterised by gas phase FTIR. Data from both single step and multi step pyrolysis-FTIR provide the ability to identify minerals that reflect habitable environments through their water and carbon dioxide responses. Multi step pyrolysis-FTIR can be used to gain more detailed information on the sources of the liberated water and carbon dioxide owing to the characteristic decomposition temperatures of different mineral phases. Habitation can be suggested when pyrolysis-FTIR indicates the presence of organic matter within the sample. Pyrolysis-FTIR, therefore, represents an effective method to assess whether Mars Sample Return target rocks represent habitable conditions and potential records of habitation and can play an important role in sample triage operations.

  2. National health insurance scheme: How receptive are the private healthcare practitioners in a local government area of Lagos state.

    Science.gov (United States)

    Christina, Campbell Princess; Latifat, Taiwo Toyin; Collins, Nnaji Feziechukwu; Olatunbosun, Abolarin Thaddeus

    2014-11-01

    The National Health Insurance Scheme (NHIS) is one of the health financing options adopted by Nigeria to improve healthcare access, especially for low income earners. Health care providers are among the key operators of the scheme, so their uptake of the scheme is fundamental to its survival. The study reviewed the uptake of the NHIS by private health care providers in a Local Government Area in Lagos State. To assess this uptake, a descriptive cross-sectional study recruited 180 private healthcare providers selected by a multistage sampling technique, with a response rate of 88.9%. Awareness, knowledge and uptake of NHIS were 156 (97.5%), 110 (66.8%) and 97 (60.6%), respectively. Half of the respondents, 82 (51.3%), were dissatisfied with the operations of the scheme. Major reasons were failure of entitlement payment by Health Maintenance Organisations, 13 (81.3%), and incurring losses from participating in the scheme, 8 (50%). There was a significant association between awareness, level of education and knowledge of NHIS and registration into the scheme by the respondents. Awareness and knowledge of NHIS were commendable among the private health care providers. Six out of 10 had registered with the NHIS, but half of the respondents, 82 (51.3%), were dissatisfied with the scheme and 83 (57.2%) regretted participating in it. There is a need to improve payment modalities and ensure strict adherence to laid down policies.

  3. Homogenization scheme for acoustic metamaterials

    KAUST Repository

    Yang, Min

    2014-02-26

    We present a homogenization scheme for acoustic metamaterials that is based on reproducing the lowest orders of scattering amplitudes from a finite volume of metamaterials. This approach is noted to differ significantly from that of coherent potential approximation, which is based on adjusting the effective-medium parameters to minimize scatterings in the long-wavelength limit. With the aid of metamaterials' eigenstates, the effective parameters, such as mass density and elastic modulus, can be obtained by matching the surface responses of a metamaterial's structural unit cell with a piece of homogenized material. From Green's theorem applied to the exterior domain problem, matching the surface responses is noted to be the same as reproducing the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost exactly with numerical simulations and experiments and the scheme's validity is constrained by the number of dominant surface multipoles instead of the usual long-wavelength assumption. In particular, the validity extends to the full band in one dimension and to regimes near the boundaries of the Brillouin zone in two dimensions.

  4. A hybrid PI control scheme for airship hovering

    International Nuclear Information System (INIS)

    Ashraf, Z.; Choudhry, M.A.; Hanif, A.

    2012-01-01

    Airships offer many attractive applications in the aerospace industry, including transportation of heavy payloads, tourism, emergency management, communication, and hover- and vision-based applications. Hovering control of airships has many uses in different engineering fields. However, it is a difficult problem to sustain the hover condition while maintaining controllability. So far, different solutions have been proposed in the literature, but most of them are difficult to analyse and implement. In this paper, we present a simple and efficient way to design a multi-input multi-output hybrid PI control scheme for an airship. It can maintain stability of the plant by rejecting disturbance inputs to ensure robustness. A control scheme based on feedback theory is proposed that uses principles of optimality with integral action for hovering applications. Simulations are carried out in MATLAB to examine the proposed control scheme for hovering in different wind conditions. A comparison of the technique with an existing scheme is performed, demonstrating the effectiveness of the proposed control scheme. (author)

  5. Privacy Preserving Mapping Schemes Supporting Comparison

    NARCIS (Netherlands)

    Tang, Qiang

    2010-01-01

    To cater to the privacy requirements in cloud computing, we introduce a new primitive, namely Privacy Preserving Mapping (PPM) schemes supporting comparison. A PPM scheme enables a user to map data items into images in such a way that, with a set of images, any entity can determine the <, =, >

  6. Consolidation of the health insurance scheme

    CERN Document Server

    Association du personnel

    2009-01-01

    In the last issue of Echo, we highlighted CERN’s obligation to guarantee a social security scheme for all employees, pensioners and their families. In that issue we talked about the first component: pensions. This time we shall discuss the other component: the CERN Health Insurance Scheme (CHIS).

  7. A numerical scheme for the generalized Burgers–Huxley equation

    Directory of Open Access Journals (Sweden)

    Brajesh K. Singh

    2016-10-01

    Full Text Available: In this article, a numerical solution of the generalized Burgers–Huxley (gBH) equation is approximated by using a new scheme: the modified cubic B-spline differential quadrature method (MCB-DQM). The scheme is based on the differential quadrature method, in which the weighting coefficients are obtained by using modified cubic B-splines as a set of basis functions. This scheme reduces the equation to a system of first-order ordinary differential equations (ODEs), which is solved by adopting the SSP-RK43 scheme. Further, it is shown that the proposed scheme is stable. The efficiency of the proposed method is illustrated by four numerical experiments, which confirm that the obtained results are in good agreement with earlier studies. This scheme is an easy, economical and efficient technique for finding numerical solutions for various kinds of (non)linear physical models as compared to earlier schemes.

  8. General analysis for experimental studies of time-reversal-violating effects in slow neutron propagation through polarized matter

    International Nuclear Information System (INIS)

    Lamoreaux, S.K.; Golub, R.

    1994-01-01

    A general technique is developed for the analysis of proposed experimental studies of possible P,T-violating effects in the neutron-nucleus interaction based on low-energy neutron transmission through polarized matter. The analysis is applied to proposed experimental schemes and we determine the levels at which the absolute neutron polarization, magnetic fields, and target polarization must be controlled in order for these experiments to obtain a given sensitivity to P,T-violating effects

  9. Hyperon interaction in free space and nuclear matter

    Energy Technology Data Exchange (ETDEWEB)

    Dhar, Madhumita; Lenske, Horst [Institute for Theoretical Physics, Justus- Liebig-University Giessen (Germany)

    2015-07-01

    Baryon-baryon interactions within the SU(3) octet are investigated in free space and nuclear matter. A meson exchange model based on SU(3) symmetry is used to determine the interaction. The Bethe-Salpeter equations are solved in a 3-D reduction scheme. In-medium effects have been incorporated by including a two-particle Pauli projection operator in the scattering equation. The coupling of the various channels of total strangeness S and conserved total charge is studied in detail. Special attention is paid to the physical thresholds. The density dependence of the interaction is clearly seen in the variation of the in-medium low-energy parameters. The approach is compared to descriptions derived from chiral EFT and other meson-exchange models, e.g. the Nijmegen and Juelich models.

  10. Sampling methods for titica vine (Heteropsis spp.) inventory in a tropical forest

    Science.gov (United States)

    Carine Klauberg; Edson Vidal; Carlos Alberto Silva; Michelliny de M. Bentes; Andrew Thomas. Hudak

    2016-01-01

    Titica vine provides useful raw fiber material. Using sampling schemes that reduce sampling error can provide direction for sustainable forest management of this vine. Systematic sampling with rectangular plots (10 × 25 m) produced lower error and greater accuracy in the inventory of titica vines in tropical rainforest.

  11. Behaviour of organic matters in uranium ore processing

    International Nuclear Information System (INIS)

    Wu Sanmin

    1991-01-01

    The oxidation-reduction behaviour of organic matter during oxidation roasting, acid leaching and alkali leaching, the regeneration of humic acid, and the consumption of reagents are described. The mineralogical characteristics of the organic matter samples were studied. The results show that organic matter rich in volatile carbon, with a shorter evolutionary history and a lower degree of association, is easily oxidized, with higher oxidant consumption during acid leaching; it readily forms humic acid during alkali leaching; and pretreating it by oxidation roasting is beneficial to the oxidation of uranium. In contrast, organic matter rich in fixed carbon, with a longer evolutionary history and a higher degree of association, is difficult to oxidize, with lower oxidant consumption during acid leaching; humic acid is difficult to regenerate from it during alkali leaching; and the uranium is easily reduced, which lowers its leaching performance.

  12. Motor learning in healthy humans is associated to gray matter changes: a tensor-based morphometry study.

    Science.gov (United States)

    Filippi, Massimo; Ceccarelli, Antonia; Pagani, Elisabetta; Gatti, Roberto; Rossi, Alice; Stefanelli, Laura; Falini, Andrea; Comi, Giancarlo; Rocca, Maria Assunta

    2010-04-15

    We used tensor-based morphometry (TBM) to: 1) map gray matter (GM) volume changes associated with motor learning in young healthy individuals; 2) evaluate if GM changes persist three months after cessation of motor training; and 3) assess whether the use of different schemes of motor training during the learning phase could lead to volume modifications of specific GM structures. From 31 healthy subjects, motor functional assessment and brain 3D T1-weighted sequence were obtained: before motor training (time 0), at the end of training (two weeks) (time 1), and three months later (time 2). Fifteen subjects (group A) were trained with goal-directed motor sequences, and 16 (group B) with non-purposeful motor actions of the right hand. At time 1 vs. time 0, the whole sample of subjects had GM volume increase in regions of the temporo-occipital lobes, inferior parietal lobule (IPL) and middle frontal gyrus, while at time 2 vs. time 1, an increased GM volume in the middle temporal gyrus was seen. At time 1 vs. time 0, compared to group B, group A had a GM volume increase of the hippocampi, while the opposite comparison showed greater GM volume increase in the IPL and insula in group B vs. group A. Motor learning results in structural GM changes of different brain areas which are part of specific neuronal networks and tend to persist after training is stopped. The scheme applied during the learning phase influences the pattern of such structural changes.

  13. Motor learning in healthy humans is associated to gray matter changes: a tensor-based morphometry study.

    Directory of Open Access Journals (Sweden)

    Massimo Filippi

    Full Text Available: We used tensor-based morphometry (TBM) to: 1) map gray matter (GM) volume changes associated with motor learning in young healthy individuals; 2) evaluate if GM changes persist three months after cessation of motor training; and 3) assess whether the use of different schemes of motor training during the learning phase could lead to volume modifications of specific GM structures. From 31 healthy subjects, motor functional assessment and brain 3D T1-weighted sequence were obtained: before motor training (time 0), at the end of training (two weeks) (time 1), and three months later (time 2). Fifteen subjects (group A) were trained with goal-directed motor sequences, and 16 (group B) with non-purposeful motor actions of the right hand. At time 1 vs. time 0, the whole sample of subjects had GM volume increase in regions of the temporo-occipital lobes, inferior parietal lobule (IPL) and middle frontal gyrus, while at time 2 vs. time 1, an increased GM volume in the middle temporal gyrus was seen. At time 1 vs. time 0, compared to group B, group A had a GM volume increase of the hippocampi, while the opposite comparison showed greater GM volume increase in the IPL and insula in group B vs. group A. Motor learning results in structural GM changes of different brain areas which are part of specific neuronal networks and tend to persist after training is stopped. The scheme applied during the learning phase influences the pattern of such structural changes.

  14. Robust second-order scheme for multi-phase flow computations

    Science.gov (United States)

    Shahbazi, Khosro

    2017-06-01

    A robust high-order scheme for multi-phase flow computations featuring jumps and discontinuities due to shock waves and phase interfaces is presented. The scheme is based on high-order weighted essentially non-oscillatory (WENO) finite volume schemes and high-order limiters that ensure the maximum principle or positivity of the various field variables, including the density, pressure, and order parameters identifying each phase. The two-phase flow model considered consists, besides the Euler equations of gas dynamics, of advection equations for two parameters of the stiffened-gas equation of state that characterize each phase. The design of the high-order limiter is guided by the findings of Zhang and Shu (2011) [36], and is based on limiting the quadrature values of the density, pressure and order parameters reconstructed using a high-order WENO scheme. A proof of positivity preservation and accuracy is given, and the convergence and the robustness of the scheme are illustrated using the smooth isentropic vortex problem with very small density and pressure. The effectiveness and robustness of the scheme in computing the challenging problem of shock wave interaction with a cluster of tightly packed air or helium bubbles placed in a body of liquid water is also demonstrated, as is the superior performance of the high-order schemes over the first-order Lax-Friedrichs scheme for computations of shock-bubble interaction. The scheme is implemented in two-dimensional space on parallel computers using the message passing interface (MPI). The proposed scheme with the limiter requires approximately 50% more inter-processor messages than the corresponding scheme without the limiter, but only 10% more total CPU time. The scheme is provably second-order accurate in regions requiring positivity enforcement and higher order in the rest of the domain.
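
    A sketch of the Zhang–Shu-type limiting step referenced above, applied to the reconstructed quadrature values of one positive quantity in one cell (a generic illustration under the assumption of a positive cell average, not the paper's full multi-phase limiter):

      import numpy as np

      def limit_positivity(point_values, cell_average, eps=1e-13):
          """Scale reconstructed point values toward the (positive) cell average
          so that their minimum stays above eps; the cell average is preserved."""
          m = point_values.min()
          if m >= eps:
              return point_values
          theta = min(1.0, (cell_average - eps) / (cell_average - m))
          return cell_average + theta * (point_values - cell_average)

      rho_points = np.array([-0.05, 0.30, 0.70])    # WENO-reconstructed quadrature values
      rho_avg = rho_points.mean()                   # cell average (assumed consistent)
      print(limit_positivity(rho_points, rho_avg))  # minimum is lifted to ~eps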

  15. Digital Signature Schemes with Complementary Functionality and Applications

    OpenAIRE

    S. N. Kyazhin

    2012-01-01

    Digital signature schemes with additional functionality (an undeniable signature, a designated-confirmer signature, a blind signature, a group signature, a signature with additional protection) and examples of their application are considered. These schemes are more practical, effective and useful than ordinary digital signature schemes.

  16. A combined spectrum sensing and OFDM demodulation scheme

    NARCIS (Netherlands)

    Heskamp, M.; Slump, Cornelis H.

    2009-01-01

    In this paper we propose a combined signaling and spectrum sensing scheme for cognitive radio that can detect in-band primary users while the network's own signal is active. The signaling scheme uses OFDM with phase-shift-keying-modulated sub-carriers, and the detection scheme measures the deviation

  17. A country-wide probability sample of public attitudes toward stuttering in Portugal.

    Science.gov (United States)

    Valente, Ana Rita S; St Louis, Kenneth O; Leahy, Margaret; Hall, Andreia; Jesus, Luis M T

    2017-06-01

    Negative public attitudes toward stuttering have been widely reported, although differences among countries and regions exist. Clear reasons for these differences remain obscure. Published research is unavailable on public attitudes toward stuttering in Portugal as well as a representative sample that explores stuttering attitudes in an entire country. This study sought to (a) determine the feasibility of a country-wide probability sampling scheme to measure public stuttering attitudes in Portugal using a standard instrument (the Public Opinion Survey of Human Attributes-Stuttering [POSHA-S]) and (b) identify demographic variables that predict Portuguese attitudes. The POSHA-S was translated to European Portuguese through a five-step process. Thereafter, a local administrative office-based, three-stage, cluster, probability sampling scheme was carried out to obtain 311 adult respondents who filled out the questionnaire. The Portuguese population held stuttering attitudes that were generally within the average range of those observed from numerous previous POSHA-S samples. Demographic variables that predicted more versus less positive stuttering attitudes were respondents' age, region of the country, years of school completed, working situation, and number of languages spoken. Non-predicting variables were respondents' sex, marital status, and parental status. A local administrative office-based, probability sampling scheme generated a respondent profile similar to census data and indicated that Portuguese attitudes are generally typical. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Method for determination of stable carbon isotope ratio of methylnitrophenols in atmospheric particulate matter

    Directory of Open Access Journals (Sweden)

    S. Moukhtar

    2011-11-01

    Full Text Available: A technique for the measurement of the stable isotope ratio of methylnitrophenols in atmospheric particulate matter is presented. Atmospheric samples from rural and suburban areas were collected for evaluation of the procedure. Particulate matter was collected on quartz fibre filters using dichotomous high-volume air samplers. Methylnitrophenols were extracted from the filters using acetonitrile. The sample was then purified using a combination of high-performance liquid chromatography and solid phase extraction. The final solution was then divided into two aliquots. To one aliquot, a derivatising agent, Bis(trimethylsilyl)trifluoroacetamide, was added for Gas Chromatography-Mass Spectrometry analysis. The second half of the sample was stored in a refrigerator. For samples with concentrations exceeding 1 ng μl⁻¹, the second half of the sample was used for measurement of stable carbon isotope ratios by Gas Chromatography-Isotope Ratio Mass Spectrometry.

    The procedure described in this paper provides a method for the analysis of methylnitrophenols in atmospheric particulate matter at concentrations as low as 0.3 pg m⁻³ and for stable isotope ratios with an accuracy of better than ±0.5‰ for concentrations exceeding 100 pg m⁻³.

    In all atmospheric particulate matter samples analysed, 2-methyl-4-nitrophenol was found to be the most abundant methylnitrophenol, with concentrations ranging from the low pg m⁻³ range in rural areas to more than 200 pg m⁻³ in some samples from a suburban location.

  19. The Politico-Economic Challenges of Ghana's National Health Insurance Scheme Implementation.

    Science.gov (United States)

    Fusheini, Adam

    2016-04-27

    National/social health insurance schemes have increasingly been seen in many low- and middle-income countries (LMICs) as a vehicle to universal health coverage (UHC) and a viable alternative funding mechanism for the health sector. Several countries, including Ghana, have thus introduced and implemented mandatory national health insurance schemes (NHIS) as part of reform efforts towards increasing access to health services. Ghana passed mandatory national health insurance (NHI) legislation (ACT 650) in 2003 and commenced nationwide implementation in 2004. Several peer-reviewed studies and other research reports have since assessed the performance of the scheme, giving positive ratings while also noting challenges. This paper contributes to the literature on economic and political implementation challenges based on empirical evidence from the perspectives of the different categories of actors and institutions involved in the process. Qualitative in-depth interviews were held with 33 participants from different categories in four selected district mutual health insurance schemes in Southern (two) and Northern (two) Ghana, to ascertain their views regarding the main challenges in the implementation process. The participants were selected through purposeful sampling, stakeholder mapping, and snowballing. Data were analysed using a thematic grouping procedure. Participants identified political issues of over-politicisation and political interference as the main challenges. The main economic issues participants identified included low premiums or contributions; broad exemptions; a poor gatekeeper enforcement system; and a culture of curative and hospital-centric care. The study establishes that political and economic factors have influenced the implementation process and the degree to which the policy has been implemented as intended. Thus, we conclude that there is a synergy between implementation and politics, and achieving UHC under the NHIS requires political stewardship. Political …

  20. The new WAGR data acquisition scheme

    International Nuclear Information System (INIS)

    Ellis, W.E.; Leng, J.H.; Smith, I.C.; Smith, M.R.

    1976-06-01

    The existing WAGR data acquisition equipment was inadequate to meet the requirements introduced by the installation of two additional experimental loops and was in any case due for replacement. A completely new scheme was planned and implemented based on mini-computers, which while preserving all the useful features of the old scheme provided additional flexibility and improved data display. Both the initial objectives of the design and the final implementation are discussed without introducing detailed descriptions of hardware or the programming techniques employed. Although the scheme solves a specific problem the general principles are more widely applicable and could readily be adapted to other data checking and display problems. (author)

  1. An Efficient Homomorphic Aggregate Signature Scheme Based on Lattice

    Directory of Open Access Journals (Sweden)

    Zhengjun Jing

    2014-01-01

    Full Text Available: Homomorphic aggregate signature (HAS) is a linearly homomorphic signature (LHS) for multiple users, which can be applied for a variety of purposes, such as multi-source network coding and sensor data aggregation. In order to design an efficient post-quantum secure HAS scheme, we borrow the idea of the lattice-based LHS scheme over a binary field in the single-user case, and develop it into a new lattice-based HAS scheme in this paper. The security of the proposed scheme is proved by showing a reduction to the single-user case, and the signature length remains invariant. Compared with the existing lattice-based homomorphic aggregate signature scheme, our new scheme enjoys a shorter signature length and higher efficiency.

  2. Quantum election scheme based on anonymous quantum key distribution

    International Nuclear Information System (INIS)

    Zhou Rui-Rui; Yang Li

    2012-01-01

    An unconditionally secure authority-certified anonymous quantum key distribution scheme using conjugate coding is presented, based on which we construct a quantum election scheme without the help of an entangled state. We show that this election scheme ensures the completeness, soundness, privacy, eligibility, unreusability, fairness, and verifiability of a large-scale election in which the administrator and counter are semi-honest. This election scheme can work even if there are losses and errors in the quantum channels. In addition, any irregularity in this scheme is detectable. (general)

  3. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
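
    A sketch of the standard-error-based confidence interval used above, for a total estimated from equal-probability sample units (simple-random-sampling approximation; the finite-population correction and the covariance terms mentioned in the record are omitted):

      import math

      def estimate_total_with_ci(sample_values, n_population_units, z=1.96):
          """Expand per-unit deforestation areas to a population total and attach
          a normal-approximation 95% confidence interval."""
          n = len(sample_values)
          mean = sum(sample_values) / n
          var = sum((v - mean) ** 2 for v in sample_values) / (n - 1)
          total = n_population_units * mean
          se_total = n_population_units * math.sqrt(var / n)
          return total, (total - z * se_total, total + z * se_total)

      # toy example: 50 sampled units out of 5000 covering the region (values in kha)
      sample_areas = [0.0, 0.0, 1.2, 0.0, 3.4, 0.0, 0.7, 0.0, 0.0, 2.1] * 5
      total, ci = estimate_total_with_ci(sample_areas, 5000)
      print(total, ci)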

  4. WENO schemes for balance laws with spatially varying flux

    International Nuclear Information System (INIS)

    Vukovic, Senka; Crnjaric-Zic, Nelida; Sopta, Luka

    2004-01-01

    In this paper we construct numerical schemes of high order of accuracy for hyperbolic balance law systems with a spatially variable flux function and a source term of the geometrical type. We start with the original finite difference characteristicwise weighted essentially nonoscillatory (WENO) schemes and then we create new schemes by modifying the flux formulations (locally Lax-Friedrichs and Roe with entropy fix) in order to account for the spatially variable flux, and by decomposing the source term in order to obtain balance between numerical approximations of the flux gradient and of the source term. We apply the WENO schemes extended in this way to the one-dimensional open channel flow equations and to the one-dimensional elastic wave equations. In particular, we prove that in these applications the new schemes are exactly consistent with steady-state solutions from an appropriately chosen subset. Experimentally obtained orders of accuracy of the extended and original WENO schemes are almost identical on a convergence test. Other presented test problems illustrate the improvement of the proposed schemes relative to the original WENO schemes combined with pointwise source term evaluation. As expected, the increase in the formal order of accuracy of the applied WENO reconstructions in all the tests causes a visible increase in the high-resolution properties of the schemes.

  5. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    Science.gov (United States)

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial bosonic correlations due to the Duschinsky rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to the various versions of Boson Sampling sharing similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  6. Deltamethrin in sediment samples of the Okavango Delta, Botswana ...

    African Journals Online (AJOL)

    Analysis of samples for organic matter content showed percentage total organic carbon (% TOC) ranging between 0.19% and 8.21%, with samples collected from the pool having the highest total organic carbon. The concentrations of deltamethrin residues and the % TOC in sediment samples showed a similar trend with ...

  7. A repeat-until-success quantum computing scheme

    Energy Technology Data Exchange (ETDEWEB)

    Beige, A [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom); Lim, Y L [DSO National Laboratories, 20 Science Park Drive, Singapore 118230, Singapore (Singapore); Kwek, L C [Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542, Singapore (Singapore)

    2007-06-15

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes.

  8. A repeat-until-success quantum computing scheme

    International Nuclear Information System (INIS)

    Beige, A; Lim, Y L; Kwek, L C

    2007-01-01

    Recently we proposed a hybrid architecture for quantum computing based on stationary and flying qubits: the repeat-until-success (RUS) quantum computing scheme. The scheme is largely implementation independent. Despite the incompleteness theorem for optical Bell-state measurements in any linear optics set-up, it allows for the implementation of a deterministic entangling gate between distant qubits. Here we review this distributed quantum computation scheme, which is ideally suited for integrated quantum computation and communication purposes

  9. Is there a dichotomy in the Dark Matter as well as in the Baryonic Matter properties of ellipticals?

    NARCIS (Netherlands)

    Napolitano, NR; Capaccioli, M; Arnaboldi, M; Merrifield, MR; Douglas, NG; Kuijken, K; Romanowsky, AJ; Freeman, KC; Ryder, SD; Pisano, DJ; Walker, MA; Freeman, KC

    2004-01-01

    We have found a correlation between the M/L global gradients and the structural parameters of the luminous components of a sample of 19 early-type galaxies. Such a correlation supports the hypothesis that there is a connection between the dark matter content and the evolution of the baryonic

  10. High-order UWB pulses scheme to generate multilevel modulation formats based on incoherent optical sources.

    Science.gov (United States)

    Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José

    2013-11-18

    We present a high-order UWB pulse generator based on a microwave photonic filter which provides a set of positive and negative samples by using the slicing of an incoherent optical source and phase inversion in a Mach-Zehnder modulator. The simple scalability and high reconfigurability of the system permit better compliance with the FCC requirements. Moreover, the proposed scheme is easily adapted to pulse amplitude modulation, bi-phase modulation, pulse shape modulation and pulse position modulation. The flexibility of the scheme in adapting to multilevel modulation formats makes it possible to increase the transmission bit rate by using hybrid modulation formats.
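
    Conceptually, the sliced incoherent source plus modulator behaves like an FIR filter whose positive and negative taps weight delayed replicas of a basic optical pulse; a toy numerical sketch (tap values, delay and pulse width are illustrative assumptions, not a model of the reported hardware):

      import numpy as np

      def uwb_pulse(taps, tap_delay=100e-12, fwhm=60e-12, fs=200e9, span=2e-9):
          """Sum of delayed Gaussian samples weighted by positive/negative taps,
          mimicking the impulse response of a microwave photonic FIR filter."""
          t = np.arange(0.0, span, 1.0 / fs)
          sigma = fwhm / 2.355
          pulse = np.zeros_like(t)
          for k, w in enumerate(taps):
              pulse += w * np.exp(-((t - 0.5e-9 - k * tap_delay) ** 2) / (2 * sigma ** 2))
          return t, pulse

      # five taps with alternating signs give a high-order (multi-lobed) UWB waveform
      t, p = uwb_pulse([1.0, -2.0, 3.0, -2.0, 1.0])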

  11. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
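
    For reference, the serial building block that each processor's reduced system ultimately falls back on is the classical Thomas algorithm for a tridiagonal system; a standard sketch (not the authors' partitioned, iterative implementation):

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with sub-diagonal a, diagonal b,
          super-diagonal c and right-hand side d (all length n; a[0], c[-1] unused)."""
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x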

  12. MIMO transmit scheme based on morphological perceptron with competitive learning.

    Science.gov (United States)

    Valente, Raul Ambrozio; Abrão, Taufik

    2016-08-01

    This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as the decision rule in the MIMO detection stage. The proposed MIMO transmission scheme achieves double spectral efficiency: in each time slot the receiver decodes two symbols at a time instead of one, as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its complexity, which is polynomial in the modulation order and becomes linear when the data stream length exceeds the modulation order. The performance of the proposed scheme is compared to traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. The proposed scheme is also evaluated in a scenario with variable channel information along the frame. Numerical results show that the diversity gain of the space-time-coded Alamouti scheme is partially lost, which slightly degrades the bit-error-rate (BER) performance of the proposed MP/CL-NN MIMO scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.
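
    For reference, the maximum-likelihood MIMO baseline mentioned above amounts to an exhaustive search over candidate symbol vectors. A minimal brute-force sketch for a 2x2 spatial-multiplexing system with QPSK is given below (an illustration of the ML-MIMO comparison scheme under assumed parameters, not the MP/CL network itself).

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

      def ml_detect(y, H, constellation):
          """Brute-force maximum-likelihood detection for a 2x2 MIMO system:
          pick the symbol pair minimizing ||y - H s||^2."""
          best, best_metric = None, np.inf
          for pair in itertools.product(constellation, repeat=2):
              s = np.array(pair)
              metric = np.linalg.norm(y - H @ s) ** 2
              if metric < best_metric:
                  best, best_metric = s, metric
          return best

      # One simulated channel use: two independent symbols per time slot
      # (twice the symbols per slot of the Alamouti scheme).
      H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
      s_tx = rng.choice(qpsk, size=2)
      noise = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
      y_rx = H @ s_tx + noise
      s_hat = ml_detect(y_rx, H, qpsk)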

  13. Monitoring forest areas from continental to territorial levels using a sample of medium spatial resolution satellite imagery

    Science.gov (United States)

    Eva, Hugh; Carboni, Silvia; Achard, Frédéric; Stach, Nicolas; Durieux, Laurent; Faure, Jean-François; Mollicone, Danilo

    A global systematic sampling scheme has been developed by the UN FAO and the EC TREES project to estimate rates of deforestation at global or continental levels at intervals of 5 to 10 years. This global scheme can be intensified to produce results at the national level. In this paper, using surrogate observations, we compare the deforestation estimates derived from these two levels of sampling intensity (the global level for the Brazilian Amazon; the national level for French Guiana) with estimates derived from the official inventories. We also report the precision achieved given the sampling errors and, in the case of French Guiana, compare this precision with that of the official inventory. We extract nine sample data sets from the official wall-to-wall deforestation map derived from satellite interpretations produced for the Brazilian Amazon for the years 2002 to 2003. The global sampling scheme gives an estimate of 2.81 million ha of deforestation (mean of the nine simulated replicates) with a standard error of 0.10 million ha. This compares with the full-population estimate of 2.73 million ha deforested from the wall-to-wall interpretations, which is within one standard error of our sampling test estimate. The relative difference between the mean estimate from the sampling approach and the full-population estimate is 3.1%, and the standard error represents 4.0% of the full-population estimate. The global sampling scheme is then intensified to a territorial level in a case study over French Guiana to estimate deforestation between the years 1990 and 2006. For the historical reference year, 1990, Landsat-5 Thematic Mapper data were used. A coverage of SPOT-HRV imagery at 20 m × 20 m resolution acquired at the Cayenne receiving station in French Guiana was used for the year 2006. Our estimates from the intensified global sampling scheme over French Guiana are compared with those produced by the national authority to report on deforestation rates under the Kyoto Protocol.
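
    The precision figures quoted (mean, standard error, and relative difference against the wall-to-wall estimate) follow from standard replicate statistics. The sketch below shows one standard way to compute them; the replicate values are illustrative placeholders, not the paper's nine actual samples, and the paper's exact error model may differ.

      import numpy as np

      def sampling_summary(replicates, full_population):
          """Mean, standard error of the mean, and relative differences
          for a set of replicate deforestation estimates (million ha)."""
          reps = np.asarray(replicates, dtype=float)
          mean = reps.mean()
          se = reps.std(ddof=1) / np.sqrt(reps.size)   # standard error of the mean
          rel_diff = (mean - full_population) / full_population
          rel_se = se / full_population
          return mean, se, rel_diff, rel_se

      # Illustrative replicate estimates only (million ha); the paper reports
      # a mean of 2.81 with SE 0.10 against a wall-to-wall figure of 2.73.
      reps = [2.7, 2.9, 2.8, 2.95, 2.75, 2.85, 2.7, 2.9, 2.75]
      mean, se, rel_diff, rel_se = sampling_summary(reps, full_population=2.73)
      print(f"mean={mean:.2f} Mha, SE={se:.2f} Mha, "
            f"rel. diff={rel_diff:.1%}, SE as % of full estimate={rel_se:.1%}")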

  14. Analysis of poly-beta-hydroxybutyrate in environmental samples by GC-MS/MS.

    Science.gov (United States)

    Elhottová, D; Tríska, J; Petersen, S O; Santrůcková, H

    2000-05-01

    Application of gas chromatography-mass spectrometry (GC-MS) can significantly improve trace analyses of compounds in complex matrices from natural environments compared to gas chromatography alone. A GC-MS/MS technique for the determination of poly-beta-hydroxybutyrate (PHB), a bacterial storage compound, has been developed and used for the analysis of two soils stored for up to 319 d, fresh samples of sewage sludge, and a pure culture of Bacillus megaterium. Specific derivatization of the beta-hydroxybutyrate (3-OH C4:0) PHB monomer units with N-tert-butyl-dimethylsilyl-N-methyltrifluoracetamide (MTBSTFA) improved the chromatographic and mass spectrometric properties of the analyte. The diagnostic fragmentation scheme of the derivatives, the tert-butyldimethylsilyl ester and ether of beta-hydroxybutyric acid (MTBSTFA-HB), essential for PHB identification is shown. An ion-trap MS was used: full-scan acquisition gave the best sensitivity, and MS/MS reduced the noise so that the signal-to-noise ratio improved, while the second fragmentation stage yielded more diagnostic ions than SIM. The detection limit for MTBSTFA-HB by GC-MS/MS was about 10⁻¹³ g µL⁻¹ of injected volume, whereas by GC (FID) and GC-MS (scan) it was around 10⁻¹⁰ g µL⁻¹ of injected volume. The sensitivity of GC-MS/MS measurements of PHB in arable soil and activated sludge samples was down to 10 pg of PHB g⁻¹ dry matter. Comparison of MTBSTFA-HB detection in a natural soil sample by GC (FID), GC-MS (scan) and GC-MS/MS demonstrated the potential and limitations of the individual measurement techniques.

  15. On doublet composite schemes of leptons and quarks

    International Nuclear Information System (INIS)

    Pirogov, Yu.F.

    1981-01-01

    All of the simplest doublet composite schemes are classified. Four different doublet schemes are shown to be available. A new scheme with the charge doublet Q=(2/3, -1/3), rather advantageous compared with the previous ones, is considered. Some difficulties in interpreting the colour as an effective symmetry are pointed out.

  16. New analytic unitarization schemes

    International Nuclear Information System (INIS)

    Cudell, J.-R.; Predazzi, E.; Selyugin, O. V.

    2009-01-01

    We consider two well-known classes of unitarization of Born amplitudes for hadron elastic scattering. The standard class, which saturates at the black-disk limit, includes the standard eikonal representation, while the other class, which goes beyond the black-disk limit to reach the full unitarity circle, includes the U-matrix. It is shown that the basic properties of these schemes are independent of the functional form used for the unitarization, and that the U-matrix and eikonal schemes can be extended to have similar properties. A common form of unitarization, interpolating between the two classes, is proposed. The correspondence with different nonlinear equations is also briefly examined.
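
    In the usual impact-parameter representation the two classes can be written schematically as (a schematic form only; normalization conventions and functional forms differ between papers)

      \[
        a_{\mathrm{eik}}(s,b) = \frac{i}{2}\Bigl(1 - e^{-\Omega(s,b)}\Bigr),
        \qquad
        a_{U}(s,b) = \frac{U(s,b)}{1 - i\,U(s,b)},
      \]

    where Ω and U are built from the Born amplitude. For purely imaginary, growing input the eikonal form saturates at the black-disk value Im a = 1/2, while the U-matrix form can approach the full unitarity limit Im a = 1, matching the distinction drawn in the abstract.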

  17. Canonical, stable, general mapping using context schemes.

    Science.gov (United States)

    Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict

    2015-11-15

    Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrarily complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context-scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
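
    As a toy illustration of the "map only when unique" criterion, the sketch below maps a query base only when a fixed-length context (k-mer) starting at that base occurs exactly once in the reference. The real context schemes use algorithmically defined, variable-length context sets and support graph references; this simplified k-mer version is only meant to make the uniqueness criterion concrete.

      from collections import defaultdict

      def unique_kmer_map(reference, query, k=8):
          """Map query positions to reference positions when the k-mer starting
          at that query position occurs exactly once in the reference.
          Returns {query_index: reference_index} for unambiguous bases only."""
          occurrences = defaultdict(list)
          for i in range(len(reference) - k + 1):
              occurrences[reference[i:i + k]].append(i)

          mapping = {}
          for j in range(len(query) - k + 1):
              hits = occurrences.get(query[j:j + k], [])
              if len(hits) == 1:                 # unique best mapping only
                  mapping[j] = hits[0]
          return mapping

      ref = "ACGTACGTTTGCAACGGATCCGTTAGCAA"
      qry = "TTGCAACGGATCC"
      print(unique_kmer_map(ref, qry, k=8))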

  18. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad; Valstar, Johan R.; Hoteit, Ibrahim

    2014-01-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
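
    The hybrid idea can be summarized as blending the flow-dependent ensemble covariance with a static background covariance before computing the Kalman gain. The sketch below shows a single stochastic-EnKF analysis step with such a hybrid covariance; the blending weight, matrices, and dimensions are illustrative, and the authors' recursive dual state-parameter estimation is not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)

      def hybrid_enkf_oi_update(ensemble, B_static, H, y, R, alpha=0.5):
          """One stochastic-EnKF analysis step with a hybrid covariance
          P = alpha * P_ens + (1 - alpha) * B_static.
          ensemble: (n_state, n_members); H: (n_obs, n_state);
          y: (n_obs,); R: (n_obs, n_obs)."""
          n, m = ensemble.shape
          anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
          P_ens = anomalies @ anomalies.T / (m - 1)
          P = alpha * P_ens + (1.0 - alpha) * B_static
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
          # Perturbed-observation update of each ensemble member.
          updated = np.empty_like(ensemble)
          for j in range(m):
              y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R)
              updated[:, j] = ensemble[:, j] + K @ (y_pert - H @ ensemble[:, j])
          return updated

      # Tiny illustrative setup: 4 state variables, 5 members, 2 observations.
      ensemble = rng.standard_normal((4, 5))
      B_static = np.eye(4)
      H = np.array([[1.0, 0, 0, 0], [0, 0, 1.0, 0]])
      y = np.array([0.5, -0.2])
      R = 0.1 * np.eye(2)
      analysis = hybrid_enkf_oi_update(ensemble, B_static, H, y, R, alpha=0.6)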

  19. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-09-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.

  20. Improvements of the Vis-NIRS Model in the Prediction of Soil Organic Matter Content Using Spectral Pretreatments, Sample Selection, and Wavelength Optimization

    Science.gov (United States)

    Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.

    2017-07-01

    A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied to minimize irrelevant information in the spectra and to increase the correlation of the spectra with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. The successive projections algorithm (SPA) and a genetic algorithm (GA) were then applied for wavelength optimization. Finally, a principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using leave-one-out cross validation. The results show that the combination of Savitzky-Golay (SG) smoothing and multiplicative scatter correction (MSC) can eliminate the effects of noise and baseline drift; the SPXY method is preferable to KS for sample selection; and both SPA and GA can significantly reduce the number of wavelength variables and increase the accuracy, especially GA, which greatly improved the prediction accuracy of soil OMC, with Rcc, RMSEP, and RPD reaching 0.9316, 0.2142, and 2.3195, respectively.
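
    A minimal sketch of the core pipeline (SG smoothing, MSC, and PCR with leave-one-out cross validation to choose the number of components) is given below using scipy/scikit-learn; the data are random placeholders and the SPA/GA wavelength-selection step is omitted.

      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.pipeline import make_pipeline

      def msc(spectra):
          """Multiplicative scatter correction against the mean spectrum."""
          reference = spectra.mean(axis=0)
          corrected = np.empty_like(spectra)
          for i, s in enumerate(spectra):
              slope, intercept = np.polyfit(reference, s, 1)
              corrected[i] = (s - intercept) / slope
          return corrected

      # Placeholder data: 130 spectra x 400 wavelengths and measured OMC values.
      rng = np.random.default_rng(0)
      X = rng.random((130, 400))
      y = rng.random(130)

      X = savgol_filter(X, window_length=11, polyorder=2, axis=1)  # SG smoothing
      X = msc(X)                                                   # scatter correction

      # Pick the number of principal components by leave-one-out cross validation.
      best_rmse, best_k = np.inf, None
      for k in range(1, 16):
          pcr = make_pipeline(PCA(n_components=k), LinearRegression())
          scores = cross_val_score(pcr, X, y, cv=LeaveOneOut(),
                                   scoring="neg_root_mean_squared_error")
          rmse = -scores.mean()
          if rmse < best_rmse:
              best_rmse, best_k = rmse, k
      print(f"optimal components: {best_k}, LOO RMSE: {best_rmse:.4f}")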