WorldWideScience

Sample records for sample set consisted

  1. Range-efficient consistent sampling and locality-sensitive hashing for polygons

    DEFF Research Database (Denmark)

    Gudmundsson, Joachim; Pagh, Rasmus

    2017-01-01

    Locality-sensitive hashing (LSH) is a fundamental technique for similarity search and similarity estimation in high-dimensional spaces. The basic idea is that similar objects should produce hash collisions with probability significantly larger than objects with low similarity. We consider LSH for...... or union of a set of preprocessed polygons. Curiously, our consistent sampling method uses transformation to a geometric problem....
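
    The abstract above states only the core LSH principle, so the following is a generic MinHash illustration of that principle for set similarity, not the paper's consistent-sampling scheme for polygons; all names and parameters here are illustrative assumptions.

```python
# Generic MinHash sketch of the LSH idea: similar sets collide on a given hash
# position with probability equal to their Jaccard similarity. This is NOT the
# paper's polygon method, only the basic principle it builds on.
import random

def minhash_signature(items, seeds):
    # one value per seed: the minimum hash over the set's elements
    return [min(hash((seed, x)) for x in items) for seed in seeds]

def estimated_similarity(sig_a, sig_b):
    # fraction of matching positions estimates the Jaccard similarity
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

random.seed(0)
seeds = [random.randrange(2**32) for _ in range(200)]
a = set(range(0, 100))
b = set(range(20, 120))   # true Jaccard similarity = 80/120, about 0.67
print(estimated_similarity(minhash_signature(a, seeds), minhash_signature(b, seeds)))
```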

  2. EVALUATION OF CONSISTENCY AND SETTING TIME OF IRANIAN DENTAL STONES

    Directory of Open Access Journals (Sweden)

    F GOL BIDI

    2000-09-01

    Introduction. Dental stones are widely used in dentistry, and the success or failure of many dental treatments depends on the accuracy of these gypsum products. The purpose of this study was the evaluation of Iranian dental stones and a comparison between Iranian and foreign ones. In this investigation, consistency and setting time were compared between Pars Dandan, Almas and Hinrizit stones; the latter is accepted by the ADA (American Dental Association). Consistency and setting time are 2 of the 5 properties required by both ADA specification No. 25 and Iranian Standard Organization specification No. 2569 for the evaluation of dental stones. Methods. In this study, the number and preparation of specimens and the test conditions followed ADA specification No. 25, and all measurements were made with a Vicat apparatus. Results. The standard consistency of Almas stone was obtained with 42 ml of water per 100 g of powder, and the setting time of this stone was 11±0.03 min, which was within the limits of the ADA specification (12±4 min). The standard consistency of Pars Dandan stone was obtained with 31 ml of water per 100 g of powder, but the setting time of this stone was 5±0.16 min, which was not within the limits of the ADA specification. Discussion. Comparison of the properties of the Iranian and Hinrizit stones suggests two probable problems with the Iranian stones: (1) inhomogeneity of the Iranian stone powder, caused by uncontrolled temperature, pressure and humidity in the production process; and (2) impurities such as sodium chloride, which were responsible for the shortened setting time of Pars Dandan stone.

  3. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
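
    The validation steps listed above (within-year variance estimates, a weighted regression on birth year, an empirical confidence interval for the trend) can be pictured with a minimal sketch like the one below; the inverse-error-variance weighting and the bootstrap interval are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the trend-test step: regress within-year genetic variance
# estimates on birth year, weighted by inverse error variance, and bootstrap a
# 95% empirical interval for the slope (assumed details, not the paper's code).
import numpy as np

def weighted_trend(years, estimates, error_variances, n_boot=1000, seed=1):
    years = np.asarray(years, float)
    y = np.asarray(estimates, float)
    w = 1.0 / np.asarray(error_variances, float)
    X = np.column_stack([np.ones_like(years), years])

    def slope(Xm, ym, wm):
        W = np.diag(wm)
        return np.linalg.solve(Xm.T @ W @ Xm, Xm.T @ W @ ym)[1]

    trend = slope(X, y, w)
    rng = np.random.default_rng(seed)
    boot = [slope(X[idx], y[idx], w[idx])
            for idx in (rng.integers(0, len(y), len(y)) for _ in range(n_boot))]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return trend, (lo, hi)   # slope and empirical 95% confidence interval

years = np.arange(2000, 2015)
est = 1.0 + 0.01 * (years - 2000) + np.random.default_rng(0).normal(0, 0.02, len(years))
print(weighted_trend(years, est, np.full(len(years), 0.02**2)))
```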

  4. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)
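
    To make the search strategy concrete, here is a hedged sketch of a forward greedy band-selection loop; the scoring function is a crude stand-in (a nearest-neighbour class-consistency proxy), not the neighborhood rough set consistency-measure defined in the paper.

```python
# Sketch of a forward greedy band-selection wrapper. consistency_score is a
# placeholder proxy; the paper's CMNRS measure (decision positive region plus
# boundary-region sample distribution) is not reproduced here.
import numpy as np

def consistency_score(X, y, bands):
    if not bands:
        return 0.0
    sub = X[:, bands]
    d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                    # nearest neighbour of each sample
    return float((y[nn] == y).mean())        # class agreement as a proxy score

def forward_greedy_selection(X, y, max_bands=10, tol=1e-4):
    selected, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_bands:
        score, band = max((consistency_score(X, y, selected + [b]), b)
                          for b in remaining)
        if score - best < tol:               # no further gain: stop
            break
        selected.append(band)
        remaining.remove(band)
        best = score
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = (X[:, 3] + X[:, 7] > 0).astype(int)      # toy "decision" labels
print(forward_greedy_selection(X, y))
```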

  5. Inverse consistent non-rigid image registration based on robust point set matching

    Science.gov (United States)

    2014-01-01

    Background. Robust point matching (RPM) has been extensively used in non-rigid image registration to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because it is a unidirectional matching approach. Improving image registration based on RPM is therefore an important issue. Methods. In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results. Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistency errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated; again, our algorithm achieves lower registration errors for the same number of iterations

  6. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  7. Characterization of full set material constants of piezoelectric materials based on ultrasonic method and inverse impedance spectroscopy using only one sample.

    Science.gov (United States)

    Li, Shiyang; Zheng, Limei; Jiang, Wenhua; Sahul, Raffi; Gopalan, Venkatraman; Cao, Wenwu

    2013-09-14

    The most difficult task in the characterization of a complete set of material properties for piezoelectric materials is self-consistency. Because there are many independent elastic, dielectric, and piezoelectric constants, several samples are usually needed to obtain the full set of constants, and property variation from sample to sample often makes the obtained data set lack self-consistency. Here, we present a method, based on pulse-echo ultrasound and inverse impedance spectroscopy, to precisely determine the full set of physical properties of piezoelectric materials using only one small sample, which eliminates the sample-to-sample variation problem and guarantees self-consistency. The method has been applied to characterize the [001]c-poled Mn-modified 0.27Pb(In1/2Nb1/2)O3-0.46Pb(Mg1/3Nb2/3)O3-0.27PbTiO3 single crystal, and the validity of the measured data is confirmed by a previously established method. For the inverse calculations using the impedance spectrum, the stability of the reconstructed results is analyzed by fluctuation analysis of the input data. In contrast to conventional regression methods, our method takes full advantage of both the ultrasonic and inverse impedance spectroscopy approaches to extract all constants from only one small sample. The method provides a powerful tool for characterizing novel piezoelectric materials available only in small sizes and for generating the input data sets needed for device design using finite element simulations.

  8. Density Functional Theory and the Basis Set Truncation Problem with Correlation Consistent Basis Sets: Elephant in the Room or Mouse in the Closet?

    Science.gov (United States)

    Feller, David; Dixon, David A

    2018-03-08

    Two recent papers in this journal called into question the suitability of the correlation consistent basis sets for density functional theory (DFT) calculations, because the sets were designed for correlated methods such as configuration interaction, perturbation theory, and coupled cluster theory. These papers focused on the ability of the correlation consistent and other basis sets to reproduce total energies, atomization energies, and dipole moments obtained from "quasi-exact" multiwavelet results. Undesirably large errors were observed for the correlation consistent basis sets. One of the papers argued that basis sets specifically optimized for DFT methods were "essential" for obtaining high accuracy. In this work we re-examined the performance of the correlation consistent basis sets by resolving problems with the previous calculations and by making more appropriate basis set choices for the alkali and alkaline-earth metals and second-row elements. When this is done, the statistical errors with respect to the benchmark values and with respect to DFT optimized basis sets are greatly reduced, especially in light of the relatively large intrinsic error of the underlying DFT method. When judged with respect to high-quality Feller-Peterson-Dixon coupled cluster theory atomization energies, the PBE0 DFT method used in the previous studies exhibits a mean absolute deviation more than a factor of 50 larger than the quintuple zeta basis set truncation error.

  9. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data

    Directory of Open Access Journals (Sweden)

    Tintle Nathan L

    2012-08-01

    Background. Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. Results. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Conclusions. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.

  10. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    Directory of Open Access Journals (Sweden)

    Emanuel Heinz

    2013-12-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, the Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system’s technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season.

  11. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), which can affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  12. Factors Influencing the Degree of Intrajudge Consistency during the Standard Setting Process.

    Science.gov (United States)

    Plake, Barbara S.; And Others

    The accuracy of standards obtained from judgmental methods is dependent on the quality of the judgments made by experts throughout the standard setting process. One important dimension of the quality of these judgments is the consistency of the judges' perceptions with item performance of minimally competent candidates. Several interrelated…

  13. A general framework for thermodynamically consistent parameterization and efficient sampling of enzymatic reactions.

    Directory of Open Access Journals (Sweden)

    Pedro Saa

    2015-04-01

    Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only

  14. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    Science.gov (United States)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration typically requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. By reducing the demand for large current training data, this new method may facilitate the application
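
    As an illustration of the general idea (not the authors' exact PDA algorithm), a PCA-based alignment between a large historical data set and an ultra-small current sample set might look like the hedged sketch below; variable names and dimensions are assumptions.

```python
# Hedged sketch of PCA-based domain adaptation: fit principal components on
# the large historical data, project both data sets into that subspace, and
# re-centre the small current set before decoder calibration. Illustrative
# only; not the paper's exact PDA method.
import numpy as np

def pca_align(historical, current, n_components=10):
    mu_h = historical.mean(axis=0)
    _, _, Vt = np.linalg.svd(historical - mu_h, full_matrices=False)
    W = Vt[:n_components].T                  # features x components
    proj_hist = (historical - mu_h) @ W
    proj_curr = (current - current.mean(axis=0)) @ W   # remove session offset
    return proj_hist, proj_curr

rng = np.random.default_rng(0)
hist = rng.normal(size=(2000, 96))           # e.g. 96-channel firing rates
curr = rng.normal(loc=0.5, size=(15, 96))    # five trials of three categories
h, c = pca_align(hist, curr, n_components=8)
print(h.shape, c.shape)
```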

  15. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
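
    A minimal sketch of the kind of correlated lognormal sampling described above is given below: target correlation coefficients and relative uncertainties are transformed to the underlying normal scale (the standard lognormal relation), correlated normals are drawn via a Cholesky factor, and the result is exponentiated. The weighted-sampling acceleration mentioned in the abstract is not reproduced.

```python
# Sketch of consistent sampling from a multivariate lognormal: transform the
# target (lognormal-scale) correlations to the underlying normal scale, draw
# correlated normals with a Cholesky factor, then exponentiate.
import numpy as np

def sample_lognormal(means, rel_std, corr, n, seed=0):
    means = np.asarray(means, float)
    s = np.asarray(rel_std, float)                   # relative uncertainties
    log_var = np.log(1.0 + s**2)                     # variance of log-parameters
    # correlation of the underlying normal variables (standard relation)
    corr_n = np.log(1.0 + np.asarray(corr) * np.outer(s, s)) / np.sqrt(
        np.outer(log_var, log_var))
    sigma = np.sqrt(log_var)
    mu = np.log(means) - 0.5 * log_var               # mean of log-parameters
    L = np.linalg.cholesky(corr_n * np.outer(sigma, sigma))
    z = np.random.default_rng(seed).normal(size=(n, len(means)))
    return np.exp(mu + z @ L.T)

samples = sample_lognormal([1.0, 2.0], [0.3, 0.5], [[1.0, 0.6], [0.6, 1.0]], 100000)
print(np.corrcoef(samples.T)[0, 1])   # close to the requested 0.6
```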

  16. Rapid Fractionation and Isolation of Whole Blood Components in Samples Obtained from a Community-based Setting.

    Science.gov (United States)

    Weckle, Amy; Aiello, Allison E; Uddin, Monica; Galea, Sandro; Coulborn, Rebecca M; Soliven, Richelo; Meier, Helen; Wildman, Derek E

    2015-11-30

    Collection and processing of whole blood samples in a non-clinical setting offers a unique opportunity to evaluate community-dwelling individuals both with and without preexisting conditions. Rapid processing of these samples is essential to avoid degradation of key cellular components. Included here are methods for simultaneous peripheral blood mononuclear cell (PBMC), DNA, RNA and serum isolation from a single blood draw performed in the homes of consenting participants across a metropolitan area, with processing initiated within 2 hr of collection. We have used these techniques to process over 1,600 blood specimens yielding consistent, high quality material, which has subsequently been used in successful DNA methylation, genotyping, gene expression and flow cytometry analyses. Some of the methods employed are standard; however, when combined in the described manner, they enable efficient processing of samples from participants of population- and/or community-based studies who would not normally be evaluated in a clinical setting. Therefore, this protocol has the potential to obtain samples (and subsequently data) that are more representative of the general population.

  17. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance degree among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed to add several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected to conduct listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased appreciably after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.
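
    The calibration idea can be sketched as follows: the pink-noise references of known loudness anchor each set to a common standard curve between loudness level and (log) mean annoyance. The standard-curve coefficients and the inversion step below are illustrative assumptions, not the coefficients or procedure from the paper.

```python
# Hedged sketch of calibrating one experimental set's mean annoyance ratings
# onto a common scale using pink-noise reference samples and an assumed
# standard curve (placeholder coefficients).
import numpy as np

def calibrate(set_mean_annoyance, ref_loudness, ref_mean_annoyance, standard_curve):
    # fit log(MA observed in this set) = a * log(MA standard) + b on references
    x = np.log(standard_curve(np.asarray(ref_loudness, float)))
    y = np.log(np.asarray(ref_mean_annoyance, float))
    a, b = np.polyfit(x, y, 1)
    # invert the set-specific mapping for every noise sample in the set
    return np.exp((np.log(np.asarray(set_mean_annoyance, float)) - b) / a)

standard_curve = lambda ll: np.exp(0.05 * ll - 1.0)   # placeholder standard curve
ref_ll = [60, 70, 80]                                 # pink-noise loudness levels
ref_ma = [2.1, 3.4, 5.6]                              # their ratings in this set
print(calibrate([2.5, 4.0], ref_ll, ref_ma, standard_curve))
```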

  18. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.

  19. Methods to characterize environmental settings of stream and groundwater sampling sites for National Water-Quality Assessment

    Science.gov (United States)

    Nakagaki, Naomi; Hitt, Kerie J.; Price, Curtis V.; Falcone, James A.

    2012-01-01

    Characterization of natural and anthropogenic features that define the environmental settings of sampling sites for streams and groundwater, including drainage basins and groundwater study areas, is an essential component of water-quality and ecological investigations being conducted as part of the U.S. Geological Survey's National Water-Quality Assessment program. Quantitative characterization of environmental settings, combined with physical, chemical, and biological data collected at sampling sites, contributes to understanding the status of, and influences on, water-quality and ecological conditions. To support studies for the National Water-Quality Assessment program, a geographic information system (GIS) was used to develop a standard set of methods to consistently characterize the sites, drainage basins, and groundwater study areas across the nation. This report describes the three methods used for characterization (simple overlay, area-weighted areal interpolation, and land-cover-weighted areal interpolation) and their appropriate applications to geographic analyses that have different objectives and data constraints. In addition, this document records the GIS thematic datasets that are used for the Program's national design and data analyses.
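
    As a small illustration of one of the three methods named above, area-weighted areal interpolation apportions a source polygon's attribute to a basin by the fraction of the polygon's area that falls inside the basin. The sketch below uses shapely and toy rectangles; it is not the report's GIS workflow.

```python
# Hedged sketch of area-weighted areal interpolation: a basin's value is the
# sum of each source polygon's attribute weighted by the share of that
# polygon's area lying inside the basin.
from shapely.geometry import box

def area_weighted_interpolation(basin, source_polygons, values):
    total = 0.0
    for poly, value in zip(source_polygons, values):
        overlap = basin.intersection(poly).area
        if overlap > 0:
            total += value * overlap / poly.area   # apportion by area fraction
    return total

basin = box(0, 0, 10, 10)
sources = [box(-5, 0, 5, 10), box(5, 0, 15, 10)]    # two census-like units
population = [2000, 4000]
print(area_weighted_interpolation(basin, sources, population))  # 1000 + 2000 = 3000
```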

  20. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    Science.gov (United States)

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating curve for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
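
    A hedged sketch of a gene-sampling test that uses the absolute gene statistic is given below: the observed mean |t| of a gene set is compared with the distribution of mean |t| over randomly sampled gene sets of the same size. This is a generic illustration, not the specific methods benchmarked in the paper.

```python
# Generic gene-sampling gene-set test with the absolute gene statistic: the
# null distribution is built by resampling random gene sets of equal size.
import numpy as np

def gene_sampling_pvalue(gene_stats, set_index, n_perm=5000, seed=0):
    abs_stats = np.abs(np.asarray(gene_stats))       # absolute gene statistic
    observed = abs_stats[set_index].mean()
    rng = np.random.default_rng(seed)
    k, n = len(set_index), len(abs_stats)
    null = np.array([abs_stats[rng.choice(n, k, replace=False)].mean()
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
t_values = rng.standard_t(df=10, size=5000)          # per-gene statistics
t_values[:50] += 1.5                                  # a perturbed gene set
print(gene_sampling_pvalue(t_values, np.arange(50)))
```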

  1. Correlation consistent basis sets for lanthanides: The atoms La–Lu

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)

    2016-08-07

    Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits are observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd2, GdF, and GdF3. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd2, 151.7 (−36.6) for GdF, and 447.1 (−295.2) for GdF3.
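
    The systematic convergence noted above is what makes simple complete-basis-set (CBS) extrapolations applicable. The worked sketch below uses a common two-point formula for correlation energies, E(n) = E_CBS + A/n^3; the abstract does not specify which extrapolation the authors used, so the formula and energies here are illustrative.

```python
# Worked example of a standard two-point CBS extrapolation for correlation
# energies, E(n) = E_CBS + A / n**3, from two consecutive zeta levels.
def cbs_two_point(e_small, e_large, n_small, n_large):
    a = (e_small - e_large) / (n_small**-3 - n_large**-3)
    return e_large - a * n_large**-3

# hypothetical triple- and quadruple-zeta correlation energies (hartree)
print(cbs_two_point(-1.2345, -1.2501, 3, 4))
```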

  2. A consistent data set of Antarctic ice sheet topography, cavity geometry, and global bathymetry

    Directory of Open Access Journals (Sweden)

    R. Timmermann

    2010-12-01

    Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic topography data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional surveys and maps into a global data set. We use the S-2004 global 1-min bathymetry as the backbone and add an improved version of the BEDMAP topography (ALBMAP) bedrock topography for an area that roughly coincides with the Antarctic continental shelf. The position of the merging line is individually chosen in different sectors in order to capture the best of both data sets. High-resolution gridded data for ice shelf topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier are carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of the Alfred Wegener Institute (AWI), British Antarctic Survey (BAS) and Lamont-Doherty Earth Observatory (LDEO), gridded, and blended into the existing bathymetry map. The resulting global 1-min Refined Topography data set (RTopo-1) contains self-consistent maps for upper and lower ice surface heights, bedrock topography, and surface type (open ocean, grounded ice, floating ice, bare land surface). The data set is available in NetCDF format from the PANGAEA database at doi:10.1594/pangaea.741917.

  3. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    Science.gov (United States)

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...
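
    For readers unfamiliar with the design, a minimal sketch of balanced ranked set sampling is given below; the error-free ranking and the toy population are simplifying assumptions.

```python
# Hedged sketch of balanced ranked set sampling (RSS): in each cycle, m sets of
# m units are ranked by a cheap judgement and only the i-th ranked unit of the
# i-th set is actually measured; the mean of the measured units estimates the
# population mean, typically more precisely than a simple random sample of the
# same number of measurements.
import numpy as np

def ranked_set_sample_mean(population, m=3, cycles=10, seed=0):
    rng = np.random.default_rng(seed)
    measured = []
    for _ in range(cycles):
        for rank in range(m):
            units = rng.choice(population, size=m, replace=False)
            measured.append(np.sort(units)[rank])   # ranking assumed error-free
    return np.mean(measured)

pop = np.random.default_rng(42).lognormal(mean=1.0, sigma=0.5, size=10000)
print(ranked_set_sample_mean(pop), pop.mean())
```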

  4. Electronic structure of thin films by the self-consistent numerical-basis-set linear combination of atomic orbitals method: Ni(001)

    International Nuclear Information System (INIS)

    Wang, C.S.; Freeman, A.J.

    1979-01-01

    We present the self-consistent numerical-basis-set linear combination of atomic orbitals (LCAO) discrete variational method for treating the electronic structure of thin films. As in the case of bulk solids, this method provides for thin films accurate solutions of the one-particle local density equations with a non-muffin-tin potential. Hamiltonian and overlap matrix elements are evaluated accurately by means of a three-dimensional numerical Diophantine integration scheme. Application of this method is made to the self-consistent solution of one-, three-, and five-layer Ni(001) unsupported films. The LCAO Bloch basis set consists of valence orbitals (3d, 4s, and 4p states for transition metals) orthogonalized to the frozen-core wave functions. The self-consistent potential is obtained iteratively within the superposition of overlapping spherical atomic charge density model with the atomic configurations treated as adjustable parameters. Thus the crystal Coulomb potential is constructed as a superposition of overlapping spherically symmetric atomic potentials and, correspondingly, the local density Kohn-Sham (α = 2/3) potential is determined from a superposition of atomic charge densities. At each iteration in the self-consistency procedure, the crystal charge density is evaluated using a sampling of 15 independent k points in (1/8)th of the irreducible two-dimensional Brillouin zone. The total density of states (DOS) and projected local DOS (by layer plane) are calculated using an analytic linear energy triangle method (presented as an Appendix) generalized from the tetrahedron scheme for bulk systems. Distinct differences are obtained between the surface and central plane local DOS. The central plane DOS is found to converge rapidly to the DOS of bulk paramagnetic Ni obtained by Wang and Callaway. Only a very small surplus charge (0.03 electron/atom) is found on the surface planes, in agreement with jellium model calculations

  5. A consistent set of thermodynamic constants for americium (III) species with hydroxyl and carbonate

    International Nuclear Information System (INIS)

    Kerrisk, J.F.; Silva, R.J.

    1986-01-01

    A consistent set of thermodynamic constants for aqueous species and compounds of Am(III) with hydroxyl and carbonate ligands has been developed. The procedure used to develop these constants involved establishing a value for one formation constant at a time in a sequential order, starting with the hydrolysis products and hydroxide solids, and then proceeding to carbonate species. The EQ3NR chemical-equilibrium model was used to test the constants developed. These constants are consistent with most of the experimental data that form their basis; however, considerable uncertainty still exists in some aspects of the Am(III) data

  6. Correlation consistent basis sets for actinides. I. The Th and U atoms

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)

    2015-02-21

    New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilized the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFn (n = 2-4), ThO2, and UFn (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF4, ThF3, ThF2, and ThO2 are all within their experimental uncertainties. Bond dissociation energies of ThF4 and ThF3, as well as UF6 and UF5, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF4 and ThO2. The DKH3 atomization energy of ThO2 was calculated to be smaller than the DKH2

  7. Automatic setting of the distance between sample and detector in gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An apparatus has been developed that automatically sets the distance from the sample to the detector according to the radioactivity of the sample. The distance-setting unit works in conjunction with an automatic sample changer, and is interconnected with other components so that the counting head automatically moves to the optimum distance for the analysis of a particular sample. The distance, which is indicated digitally in increments of 0,01 mm, can be set between 18 and 995 mm at count rates that can be preset between 1000 and 10 000 counts per second. On being tested, the instrument performed well within the desired range and accuracy. Under routine conditions, the spectra were much more accurate than before, especially when samples of different radioactivity were counted

  8. A Consistent System for Coding Laboratory Samples

    Science.gov (United States)

    Sih, John C.

    1996-07-01

    A formal laboratory coding system is presented to keep track of laboratory samples. Preliminary useful information regarding the sample (origin and history) is gained without consulting a research notebook. Since this system uses and retains the same research notebook page number for each new experiment (reaction), finding and distinguishing products (samples) of the same or different reactions becomes an easy task. Using this system multiple products generated from a single reaction can be identified and classified in a uniform fashion. Samples can be stored and filed according to stage and degree of purification, e.g. crude reaction mixtures, recrystallized samples, chromatographed or distilled products.

  9. Correlation consistent basis sets for actinides. II. The atoms Ac and Np-Lr.

    Science.gov (United States)

    Feng, Rulin; Peterson, Kirk A

    2017-08-28

    New correlation consistent basis sets optimized using the all-electron third-order Douglas-Kroll-Hess (DKH3) scalar relativistic Hamiltonian are reported for the actinide elements Ac and Np through Lr. These complete the series of sets reported previously for Th-U [K. A. Peterson, J. Chem. Phys. 142, 074105 (2015); M. Vasiliu et al., J. Phys. Chem. A 119, 11422 (2015)]. The new sets range in size from double- to quadruple-zeta and encompass both those optimized for valence (6s6p5f7s6d) and outer-core electron correlations (valence + 5s5p5d). The final sets have been contracted for both the DKH3 and eXact 2-component (X2C) Hamiltonians, yielding cc-pVnZ-DK3/cc-pVnZ-X2C sets for valence correlation and cc-pwCVnZ-DK3/cc-pwCVnZ-X2C sets for outer-core correlation (n = D, T, Q in each case). In order to test the effectiveness of the new basis sets, both atomic and molecular benchmark calculations have been carried out. In the first case, the first three atomic ionization potentials (IPs) of all the actinide elements Ac-Lr have been calculated using the Feller-Peterson-Dixon (FPD) composite approach, primarily with the multireference configuration interaction (MRCI) method. Excellent convergence towards the respective complete basis set (CBS) limits is achieved with the new sets, leading to good agreement with experiment, where these exist, after accurately accounting for spin-orbit effects using the 4-component Dirac-Hartree-Fock method. For a molecular test, the IP and atomization energy (AE) of PuO2 have been calculated also using the FPD method but using a coupled cluster approach with spin-orbit coupling accounted for using the 4-component MRCI. The present calculations yield an IP0 for PuO2 of 159.8 kcal/mol, which is in excellent agreement with the experimental electron transfer bracketing value of 162 ± 3 kcal/mol. Likewise, the calculated 0 K AE of 305.6 kcal/mol is in very good agreement with the currently accepted experimental value of 303.1 ± 5 kcal

  10. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

    The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...
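
    A minimal sketch of a nearest subspace classifier of the kind described above is given below: each class is summarized by a low-dimensional affine subspace (here via PCA of the class samples), and a new point is assigned to the class whose subspace reconstructs it with the smallest residual. The subspace dimension and data are illustrative assumptions.

```python
# Minimal nearest subspace classifier (NSS) sketch: per-class PCA subspace,
# assignment by smallest reconstruction residual.
import numpy as np

class NearestSubspace:
    def __init__(self, dim=2):
        self.dim = dim

    def fit(self, X, y):
        self.models = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
            self.models[c] = (mu, Vt[:self.dim])     # class mean and basis
        return self

    def predict(self, X):
        labels, residuals = [], []
        for c, (mu, V) in self.models.items():
            d = X - mu
            proj = d @ V.T @ V                       # projection onto subspace
            residuals.append(np.linalg.norm(d - proj, axis=1))
            labels.append(c)
        return np.array(labels)[np.vstack(residuals).argmin(axis=0)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 5)), rng.normal(loc=3.0, size=(100, 5))])
y = np.array([0] * 100 + [1] * 100)
clf = NearestSubspace(dim=2).fit(X, y)
print((clf.predict(X) == y).mean())                  # training accuracy
```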

  11. Self-sampling for human papillomavirus in a community setting: feasibility in Hispanic women.

    Science.gov (United States)

    De Alba, Israel; Anton-Culver, Hoda; Hubbell, F Allan; Ziogas, Argyrios; Hess, James R; Bracho, America; Arias, Caleb; Manetta, Alberto

    2008-08-01

    The aim of the study was (a) to assess the sensitivity and specificity of self-sampling in a community setting for identifying high-risk human papillomavirus (HPV) infection and abnormal Papanicolaou (Pap) smears and (b) to assess satisfaction with this collection method among Hispanic women. Lay health workers distributed self-collection kits to Hispanic women in the community. Participants collected an unsupervised vaginal sample at home or in the place and time of their preference. A total of 1,213 Hispanics were included and provided a self-sample for HPV testing and were invited for a Pap smear; 662 (55%) of them had a Pap smear and the first 386 of these also had a physician-collected sample for HPV retesting. Using physician collection as the gold standard, unsupervised self-collection had a sensitivity of 90% and specificity of 88% for identifying high-risk HPV. Compared with physician sampling, self-sampling in a community setting had comparable sensitivity for identifying low-grade lesions or greater in the Pap smear (50% versus 55%; P = 0.45) but lower specificity (94% versus 79%). Overall experience with self-sampling was reported as excellent or very good by 64% and only 2.6% reported a poor or fair experience. Unsupervised self-collection of vaginal samples for HPV testing in a community setting has a high sensitivity for identifying high-risk HPV and a high satisfaction among Hispanics. This approach may benefit populations with limited access to health care or with cultural barriers to cervical cancer screening.
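
    The reported 90% sensitivity and 88% specificity follow from a standard 2x2 comparison against the physician-collected gold standard. The sketch below shows the definitions only; the counts are hypothetical, not the study's data.

```python
# Worked example of sensitivity and specificity from a 2x2 table, with
# physician collection as the gold standard. Counts are hypothetical.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # self-sample positive among true positives
    specificity = tn / (tn + fp)   # self-sample negative among true negatives
    return sensitivity, specificity

print(sensitivity_specificity(tp=90, fn=10, tn=260, fp=26))   # ~0.90, ~0.91
```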

  12. An approach based on HPLC-fingerprint and chemometrics to quality consistency evaluation of Matricaria chamomilla L. commercial samples

    Directory of Open Access Journals (Sweden)

    Agnieszka Viapiana

    2016-10-01

    Chamomile has been used as an herbal medication since ancient times and is still popular because it contains various bioactive phytochemicals that could provide therapeutic effects. In this study, a simple and reliable HPLC method was developed to evaluate the quality consistency of nineteen chamomile samples through establishing a chromatographic fingerprint, quantification of phenolic compounds and determination of antioxidant activity. For fingerprint analysis, 12 peaks were selected as the common peaks to evaluate the similarities of commercial samples of chamomile obtained from different manufacturers. A similarity analysis was performed to assess the similarity/dissimilarity of chamomile samples, where values varied from 0.868 to 0.990, indicating that samples from different manufacturers were consistent. Additionally, simultaneous quantification of five phenolic acids (gallic, caffeic, syringic, p-coumaric, ferulic) and four flavonoids (rutin, myricetin, quercetin and kaempferol) was performed to interpret the quality consistency. In quantitative analysis, the nine individual phenolic compounds showed good regression (r > 0.9975). Inter- and intra-day precisions for all analysed compounds expressed as relative standard deviation (CV) ranged from 0.05% to 3.12%. Since flavonoids and other polyphenols are commonly recognised as natural antioxidants, the antioxidant activity of chamomile samples was evaluated using 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity and the ferric reducing/antioxidant power (FRAP) assay. Correlation analysis was used to assess the relationship between antioxidant activity and phenolic composition, and multivariate analyses (PCA and HCA) were applied to distinguish chamomile samples. The results indicate high similarity among the chamomile samples, which are widely available on the market and commonly consumed as infusions or teas, as well as that there were no statistically significant
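
    To illustrate the kind of similarity analysis behind the 0.868-0.990 values reported above, the sketch below scores each sample's common-peak fingerprint against the mean fingerprint with a cosine similarity; the peak areas are synthetic and the exact metric or software used in the paper is not specified here.

```python
# Hedged sketch of a chromatographic fingerprint similarity analysis: each
# sample is a vector of 12 common-peak areas, compared with the mean
# (reference) fingerprint by cosine similarity.
import numpy as np

def fingerprint_similarities(peak_areas):
    X = np.asarray(peak_areas, float)
    ref = X.mean(axis=0)                       # reference fingerprint
    return (X @ ref) / (np.linalg.norm(X, axis=1) * np.linalg.norm(ref))

rng = np.random.default_rng(0)
base = rng.uniform(1, 10, size=12)                      # 12 common peaks
samples = base * rng.normal(1.0, 0.05, size=(19, 12))   # 19 chamomile samples
print(fingerprint_similarities(samples).round(3))
```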

  13. An Approach Based on HPLC-Fingerprint and Chemometrics to Quality Consistency Evaluation of Matricaria chamomilla L. Commercial Samples

    Science.gov (United States)

    Viapiana, Agnieszka; Struck-Lewicka, Wiktoria; Konieczynski, Pawel; Wesolowski, Marek; Kaliszan, Roman

    2016-01-01

    Chamomile has been used as an herbal medication since ancient times and is still popular because it contains various bioactive phytochemicals that could provide therapeutic effects. In this study, a simple and reliable HPLC method was developed to evaluate the quality consistency of nineteen chamomile samples through establishing a chromatographic fingerprint, quantification of phenolic compounds and determination of antioxidant activity. For fingerprint analysis, 12 peaks were selected as the common peaks to evaluate the similarities of commercial samples of chamomile obtained from different manufacturers. A similarity analysis was performed to assess the similarity/dissimilarity of chamomile samples, where values varied from 0.868 to 0.990, indicating that samples from different manufacturers were consistent. Additionally, simultaneous quantification of five phenolic acids (gallic, caffeic, syringic, p-coumaric, ferulic) and four flavonoids (rutin, myricetin, quercetin and kaempferol) was performed to interpret the quality consistency. In quantitative analysis, the nine individual phenolic compounds showed good regression (r > 0.9975). Inter- and intra-day precisions for all analyzed compounds expressed as relative standard deviation (CV) ranged from 0.05% to 3.12%. Since flavonoids and other polyphenols are commonly recognized as natural antioxidants, the antioxidant activity of chamomile samples was evaluated using 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity and the ferric reducing/antioxidant power (FRAP) assay. Correlation analysis was used to assess the relationship between antioxidant activity and phenolic composition, and multivariate analyses (PCA and HCA) were applied to distinguish chamomile samples. The results indicate high similarity among the chamomile samples, which are widely available on the market and commonly consumed as infusions or teas, as well as that there were no statistically significant differences among

  14. Consistent gaussian basis sets of double- and triple-zeta valence with polarization quality of the fifth period for solid-state calculations.

    Science.gov (United States)

    Laun, Joachim; Vilela Oliveira, Daniel; Bredow, Thomas

    2018-02-22

    Consistent basis sets of double- and triple-zeta valence with polarization quality for the fifth period have been derived for periodic quantum-chemical solid-state calculations with the crystalline-orbital program CRYSTAL. They are an extension of the pob-TZVP basis sets, and are based on the full-relativistic effective core potentials (ECPs) of the Stuttgart/Cologne group and on the def2-SVP and def2-TZVP valence basis of the Ahlrichs group. We optimized orbital exponents and contraction coefficients to supply robust and stable self-consistent field (SCF) convergence for a wide range of different compounds. The computed crystal structures are compared to those obtained with standard basis sets available from the CRYSTAL basis set database. For the applied hybrid density functional PW1PW, the average deviations of calculated lattice constants from experimental references are smaller with pob-DZVP and pob-TZVP than with standard basis sets. © 2018 Wiley Periodicals, Inc.

  15. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    Science.gov (United States)

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition it will provide some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than classical analytical techniques now in use.

  16. Hair MDMA samples are consistent with reported ecstasy use: findings from a study investigating effects of ecstasy on mood and memory.

    Science.gov (United States)

    Scholey, A B; Owen, L; Gates, J; Rodgers, J; Buchanan, T; Ling, J; Heffernan, T; Swan, P; Stough, C; Parrott, A C

    2011-01-01

    Our group has conducted several Internet investigations into the biobehavioural effects of self-reported recreational use of MDMA (3,4-methylenedioxymethamphetamine or Ecstasy) and other psychoactive drugs. Here we report a new study examining the relationship between self-reported Ecstasy use and traces of MDMA found in hair samples. In a laboratory setting, 49 undergraduate volunteers performed an Internet-based assessment which included mood scales and the University of East London Drug Use Questionnaire, which asks about past and current drug use. They also provided a hair sample for determination of exposure to MDMA over the previous month. Self-report of Ecstasy use and the presence of MDMA in hair samples were consistent. Ecstasy use was associated with lower self-reported happiness and higher self-reported stress. Self-reported Ecstasy use, but not presence in hair, was also associated with decreased tension. Different psychoactive drugs can influence long-term mood and cognition in complex and dynamically interactive ways. Here we have shown a good correspondence between self-report and objective assessment of exposure to MDMA. These data suggest that the Internet is a potentially useful medium to complement traditional laboratory studies into the sequelae of recreational drug use. Copyright © 2010 S. Karger AG, Basel.

  17. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from where water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis using Ward’s method that does not require any stringent distributional assumptions.
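
    The bootstrap-and-PCA procedure described above can be sketched in a few lines. The following is a minimal illustration, not the author's code: it draws bootstrap samples of a given size from a (samples × variables) matrix and records the leading eigenvalues of each sample's correlation matrix; the use of NumPy, the random stand-in data, and all names are assumptions.

```python
import numpy as np

def bootstrap_pca_eigenvalues(data, sample_size, n_boot=100, n_eigen=10, seed=0):
    """Draw `n_boot` bootstrap samples of `sample_size` rows from `data`
    (samples x variables) and return the leading eigenvalues of each
    bootstrap sample's correlation matrix (PCA on standardized data)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    eigenvalues = np.empty((n_boot, n_eigen))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=sample_size)       # resample stations with replacement
        corr = np.corrcoef(data[idx], rowvar=False)      # p x p correlation matrix
        vals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # eigenvalues, largest first
        eigenvalues[b] = vals[:n_eigen]
    return eigenvalues

# Stand-in for the 55 stations x 22 water-quality variables matrix
data = np.random.default_rng(1).normal(size=(55, 22))
for n_samp in (20, 30, 40, 50):
    ev = bootstrap_pca_eigenvalues(data, n_samp)
    print(n_samp, np.round(ev.mean(axis=0)[:3], 2))      # mean of the first three eigenvalues
```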

  18. An historically consistent and broadly applicable MRV system based on LiDAR sampling and Landsat time-series

    Science.gov (United States)

    W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang

    2014-01-01

    The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...

  19. Multi-valued logic circuits using hybrid circuit consisting of three gates single-electron transistors (TG-SETs) and MOSFETs.

    Science.gov (United States)

    Shin, SeungJun; Yu, YunSeop; Choi, JungBum

    2008-10-01

    New multi-valued logic (MVL) families using hybrid circuits consisting of three-gate single-electron transistors (TG-SETs) and a metal-oxide-semiconductor field-effect transistor (MOSFET) are proposed. The use of SETs offers periodic literal characteristics due to the Coulomb oscillation of the SET, which allows a realization of binary logic (BL) circuits as well as multi-valued logic (MVL) circuits. The basic operations of the proposed MVL families are successfully confirmed through SPICE circuit simulation based on the physical device model of a TG-SET. The proposed MVL circuits are found to be much faster than a previously reported MVL, but with much larger power consumption, so they present a trade-off between speed and power consumption. As an example of applying the newly developed MVL families, a half-adder is introduced.

  20. Epiphytic bryozoans on Neptune grass - a sample-based data set.

    Science.gov (United States)

    Lepoint, Gilles; Heughebaert, André; Michel, Loïc N

    2016-01-01

    The seagrass Posidonia oceanica L. Delile, commonly known as Neptune grass, is an endemic species of the Mediterranean Sea. It hosts a distinctive and diverse epiphytic community, dominated by various macroalgal and animal organisms. Mediterranean bryozoans have been extensively studied but quantitative data assessing temporal and spatial variability have rarely been documented. In Lepoint et al. (2014a, b) occurrence and abundance data of epiphytic bryozoan communities on leaves of Posidonia oceanica inhabiting Revellata Bay (Corsica, Mediterranean Sea) were reported and the trophic ecology of Electra posidoniae Gautier assessed. Here, metadata information is provided on the data set discussed in Lepoint et al. (2014a) and published on the GBIF portal as a sampling-event data set (http://ipt.biodiversity.be/resource?r=ulg_bryozoa&v=1.0). The data set is enriched by data concerning species settled on Posidonia scales (dead petioles of Posidonia leaves, remaining after limb abscission).

  1. Amplification Biases and Consistent Recovery of Loci in a Double-Digest RAD-seq Protocol

    Science.gov (United States)

    DaCosta, Jeffrey M.; Sorenson, Michael D.

    2014-01-01

    A growing variety of “genotype-by-sequencing” (GBS) methods use restriction enzymes and high throughput DNA sequencing to generate data for a subset of genomic loci, allowing the simultaneous discovery and genotyping of thousands of polymorphisms in a set of multiplexed samples. We evaluated a “double-digest” restriction-site associated DNA sequencing (ddRAD-seq) protocol by 1) comparing results for a zebra finch (Taeniopygia guttata) sample with in silico predictions from the zebra finch reference genome; 2) assessing data quality for a population sample of indigobirds (Vidua spp.); and 3) testing for consistent recovery of loci across multiple samples and sequencing runs. Comparison with in silico predictions revealed that 1) over 90% of predicted, single-copy loci in our targeted size range (178–328 bp) were recovered; 2) short restriction fragments (38–178 bp) were carried through the size selection step and sequenced at appreciable depth, generating unexpected but nonetheless useful data; 3) amplification bias favored shorter, GC-rich fragments, contributing to among locus variation in sequencing depth that was strongly correlated across samples; 4) our use of restriction enzymes with a GC-rich recognition sequence resulted in an up to four-fold overrepresentation of GC-rich portions of the genome; and 5) star activity (i.e., non-specific cutting) resulted in thousands of “extra” loci sequenced at low depth. Results for three species of indigobirds show that a common set of thousands of loci can be consistently recovered across both individual samples and sequencing runs. In a run with 46 samples, we genotyped 5,996 loci in all individuals and 9,833 loci in 42 or more individuals, resulting in <1% missing data for the larger data set. We compare our approach to similar methods and discuss the range of factors (fragment library preparation, natural genetic variation, bioinformatics) influencing the recovery of a consistent set of loci among
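
    As a rough illustration of the in silico prediction step, the sketch below scans a sequence for two recognition sites and keeps fragments flanked by cuts of different enzymes whose length falls in the 178–328 bp window mentioned above. The recognition sequences, the toy genome, and the simplified fragment definition are assumptions and do not reproduce the authors' pipeline.

```python
import random
import re

def double_digest(seq, site_a, site_b, min_len=178, max_len=328):
    """Toy in silico double digest: cut at every occurrence of either recognition
    site and keep fragments whose two flanking cuts come from different enzymes
    and whose length falls in the target size window."""
    cuts = sorted(
        [(m.start(), "A") for m in re.finditer(site_a, seq)] +
        [(m.start(), "B") for m in re.finditer(site_b, seq)]
    )
    fragments = []
    for (pos1, enz1), (pos2, enz2) in zip(cuts, cuts[1:]):
        length = pos2 - pos1
        if enz1 != enz2 and min_len <= length <= max_len:
            fragments.append((pos1, pos2, length))
    return fragments

# Hypothetical recognition sites applied to a random toy "genome"
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(200_000))
loci = double_digest(genome, "CCGG", "AATT")
print(len(loci), loci[:3])
```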

  2. Different goodness of fit tests for Rayleigh distribution in ranked set sampling

    Directory of Open Access Journals (Sweden)

    Amer Al-Omari

    2016-03-01

    Full Text Available In this paper, different goodness of fit tests for the Rayleigh distribution are considered based on simple random sampling (SRS) and ranked set sampling (RSS) techniques. The performance of the suggested estimators is evaluated in terms of the power of the tests by using Monte Carlo simulation. It is found that the suggested RSS tests perform better than their SRS counterparts.
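
    For readers unfamiliar with RSS, the following minimal sketch shows how a balanced ranked set sample is drawn under perfect ranking and compares, by Monte Carlo, the variance of the sample mean under SRS and RSS; the Rayleigh scale, set size, and cycle count are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def ranked_set_sample(draw, set_size, cycles, rng):
    """Balanced ranked set sample with perfect ranking: in each cycle, draw
    `set_size` independent sets of `set_size` units, rank each set, and keep
    the i-th order statistic from the i-th set."""
    kept = []
    for _ in range(cycles):
        for i in range(set_size):
            units = np.sort(draw(set_size, rng))
            kept.append(units[i])            # keep the (i+1)-th smallest unit
    return np.array(kept)

def rayleigh_draw(n, rng, scale=1.0):
    return rng.rayleigh(scale, size=n)

rng = np.random.default_rng(42)
means_srs, means_rss = [], []
for _ in range(2000):
    means_srs.append(rayleigh_draw(30, rng).mean())                        # SRS, n = 30
    means_rss.append(ranked_set_sample(rayleigh_draw, 5, 6, rng).mean())   # RSS, n = 5 x 6 = 30
print(np.var(means_srs), np.var(means_rss))    # the RSS mean typically has smaller variance
```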

  3. Cognitive Sex Differences in Reasoning Tasks: Evidence from Brazilian Samples of Educational Settings

    Science.gov (United States)

    Flores-Mendoza, Carmen; Widaman, Keith F.; Rindermann, Heiner; Primi, Ricardo; Mansur-Alves, Marcela; Pena, Carla Couto

    2013-01-01

    Sex differences on the Attention Test (AC), the Raven's Standard Progressive Matrices (SPM), and the Brazilian Cognitive Battery (BPR5), were investigated using four large samples (total N=6780), residing in the states of Minas Gerais and Sao Paulo. The majority of samples used, which were obtained from educational settings, could be considered a…

  4. Consistent structures and interactions by density functional theory with small atomic orbital basis sets.

    Science.gov (United States)

    Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas

    2015-08-07

    A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized-gradient-approximation in a modified global hybrid functional with a relatively large amount of non-local Fock-exchange. The orbitals are expanded in Ahlrichs-type valence-double zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods

  5. Consistent structures and interactions by density functional theory with small atomic orbital basis sets

    International Nuclear Information System (INIS)

    Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas

    2015-01-01

    A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized-gradient-approximation in a modified global hybrid functional with a relatively large amount of non-local Fock-exchange. The orbitals are expanded in Ahlrichs-type valence-double zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of “low-cost” electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT

  6. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    DEFF Research Database (Denmark)

    Abma, Femke I.; Bültmann, Ute; Amick, Benjamin C.

    2017-01-01

    Objective: The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person’s health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands...

  7. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    NARCIS (Netherlands)

    Abma, F.I.; Bultmann, U.; Amick III, B.C.; Arends, I.; Dorland, P.A.; Flach, P.A.; Klink, J.J.L van der; Ven H.A., van de; Bjørner, J.B.

    2017-01-01

    Objective: The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person’s health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with

  8. Replica consistency in a Data Grid

    International Nuclear Information System (INIS)

    Domenici, Andrea; Donno, Flavia; Pucciani, Gianni; Stockinger, Heinz; Stockinger, Kurt

    2004-01-01

    A Data Grid is a wide area computing infrastructure that employs Grid technologies to provide storage capacity and processing power to applications that handle very large quantities of data. Data Grids rely on data replication to achieve better performance and reliability by storing copies of data sets on different Grid nodes. When a data set can be modified by applications, the problem of maintaining consistency among existing copies arises. The consistency problem also concerns metadata, i.e., additional information about application data sets such as indices, directories, or catalogues. This kind of metadata is used both by the applications and by the Grid middleware to manage the data. For instance, the Replica Management Service (the Grid middleware component that controls data replication) uses catalogues to find the replicas of each data set. Such catalogues can also be replicated and their consistency is crucial to the correct operation of the Grid. Therefore, metadata consistency generally poses stricter requirements than data consistency. In this paper we report on the development of a Replica Consistency Service based on the middleware mainly developed by the European Data Grid Project. The paper summarises the main issues in the replica consistency problem, and lays out a high-level architectural design for a Replica Consistency Service. Finally, results from simulations of different consistency models are presented

  9. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    ... algorithm were used and compared. Both Puchwein and CADEX methods provide a calibration set equally distributed in space, and both methods require a minimum of prior knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed......) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with lowest RMSEP followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set...... effectively enhance the cost-effectiveness of NIR spectral analysis by reducing the number of analyzed samples in the calibration set by more than 80%, which substantially reduces the effort of laboratory analyses with no significant loss in prediction accuracy....
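
    The CADEX (Kennard-Stone) selection mentioned above can be sketched as follows: starting from the two most mutually distant samples, spectra are added one at a time by picking the candidate whose nearest already-selected neighbour is farthest away. This is a generic sketch under assumed Euclidean distances and invented data, not the authors' implementation.

```python
import numpy as np

def kennard_stone(X, n_select):
    """Select `n_select` rows of X (samples x features) with the CADEX/Kennard-Stone
    algorithm so that the chosen samples cover the feature space evenly."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    first, second = np.unravel_index(np.argmax(dist), dist.shape)   # two most distant samples
    selected = [int(first), int(second)]
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        # distance of each remaining sample to its nearest selected sample
        d_min = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(d_min))])
    return selected

X = np.random.default_rng(0).normal(size=(118, 50))   # stand-in for 118 NIR spectra
calibration_idx = kennard_stone(X, 20)                # keep ~20% as the calibration set
print(calibration_idx)
```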

  10. The Effect of Rest Interval Length on Repetition Consistency and Perceived Exertion During Near Maximal Loaded Bench Press Sets.

    Science.gov (United States)

    Scudese, Estevão; Willardson, Jeffrey M; Simão, Roberto; Senna, Gilmar; de Salles, Belmiro F; Miranda, Humberto

    2015-11-01

    The purpose of this study was to compare different rest intervals between sets on repetition consistency and ratings of perceived exertion (RPE) during consecutive bench press sets with an absolute 3RM (3 repetition maximum) load. Sixteen trained men (23.75 ± 4.21 years; 74.63 ± 5.36 kg; 175 ± 4.64 cm; bench press relative strength: 1.44 ± 0.19 kg/kg of body mass) attended 4 randomly ordered sessions during which 5 consecutive sets of the bench press were performed with an absolute 3RM load and 1, 2, 3, or 5 minutes of rest interval between sets. The results indicated that significantly greater bench press repetitions were completed with 2, 3, and 5 minutes vs. 1-minute rest between sets (p ≤ 0.05); no significant differences were noted between the 2, 3, and 5 minutes rest conditions. For the 1-minute rest condition, performance reductions (relative to the first set) were observed commencing with the second set; whereas for the other conditions (2, 3, and 5 minutes rest), performance reductions were not evident until the third and fourth sets. The RPE values before each of the successive sets were significantly greater, commencing with the second set for the 1-minute vs. the 3 and 5 minutes rest conditions. Significant increases were also evident in RPE immediately after each set between the 1 and 5 minutes rest conditions from the second through fifth sets. These findings indicate that when utilizing an absolute 3RM load for the bench press, practitioners may prescribe a time-efficient minimum of 2 minutes rest between sets without significant impairments in repetition performance. However, lower perceived exertion levels may necessitate prescription of a minimum of 3 minutes rest between sets.

  11. Consistency argued students of fluid

    Science.gov (United States)

    Viyanti; Cari; Suparmi; Winarti; Slamet Budiarti, Indah; Handika, Jeffry; Widyastuti, Fatma

    2017-01-01

    Problem solving for physics concepts through consistent argumentation can improve students' thinking skills and is important in science. This study aims to assess the consistency of students' argumentation on fluid material. The population of the study consists of college students from PGRI Madiun, UIN Sunan Kalijaga Yogyakarta and Lampung University. Cluster random sampling was used, yielding a sample of 145 students. The study used a descriptive survey method, with data obtained through a multiple-choice test with reasoning and interviews. The fluid problems were modified from [9] and [1]. The results showed average argumentation rates of 4.85% consistently correct, 29.93% consistently wrong, and 65.23% inconsistent. These results point to a lack of understanding of the fluid material, since fully consistent argumentation would reflect an expanded understanding of the concept. The results serve as a reference for improvements in future studies aimed at obtaining a positive change in the consistency of argumentation.

  12. Improving small RNA-seq by using a synthetic spike-in set for size-range quality control together with a set for data normalization.

    Science.gov (United States)

    Locati, Mauro D; Terpstra, Inez; de Leeuw, Wim C; Kuzak, Mateusz; Rauwerda, Han; Ensink, Wim A; van Leeuwen, Selina; Nehrdich, Ulrike; Spaink, Herman P; Jonker, Martijs J; Breit, Timo M; Dekker, Rob J

    2015-08-18

    There is an increasing interest in complementing RNA-seq experiments with small-RNA (sRNA) expression data to obtain a comprehensive view of a transcriptome. Currently, two main experimental challenges concerning sRNA-seq exist: how to check the size distribution of isolated sRNAs, given the sensitive size-selection steps in the protocol; and how to normalize data between samples, given the low complexity of sRNA types. We here present two separate sets of synthetic RNA spike-ins for monitoring size-selection and for performing data normalization in sRNA-seq. The size-range quality control (SRQC) spike-in set, consisting of 11 oligoribonucleotides (10-70 nucleotides), was tested by intentionally altering the size-selection protocol and verified via several comparative experiments. We demonstrate that the SRQC set is useful to reproducibly track down biases in the size-selection in sRNA-seq. The external reference for data-normalization (ERDN) spike-in set, consisting of 19 oligoribonucleotides, was developed for sample-to-sample normalization in differential-expression analysis of sRNA-seq data. Testing and applying the ERDN set showed that it can reproducibly detect differential expression over a dynamic range of 2¹⁸. Hence, biological variation in sRNA composition and content between samples is preserved while technical variation is effectively minimized. Together, both spike-in sets can significantly improve the technical reproducibility of sRNA-seq. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
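
    A hedged sketch of spike-in based sample-to-sample normalization is given below: a per-sample scaling factor is derived from the spike-in counts (here with a median-ratio rule, an assumption) and applied to the endogenous sRNA counts. The toy count matrices are invented, and the sketch does not reproduce the ERDN analysis itself.

```python
import numpy as np

def spike_in_size_factors(spike_counts):
    """Compute one scaling factor per sample from spike-in counts
    (spike-ins in rows, samples in columns), using the median ratio
    to a geometric-mean reference."""
    spike = np.asarray(spike_counts, dtype=float) + 0.5        # avoid log(0)
    log_ref = np.log(spike).mean(axis=1)                        # per-spike-in reference
    return np.exp(np.median(np.log(spike) - log_ref[:, None], axis=0))

def normalize(counts, factors):
    """Divide each sample's endogenous counts by its spike-in derived factor."""
    return np.asarray(counts, dtype=float) / factors

spike = np.array([[100, 210, 95], [50, 98, 52], [400, 820, 380]])   # 3 spike-ins x 3 samples
srna = np.array([[30, 70, 28], [12, 22, 15]])                        # 2 sRNAs x 3 samples
factors = spike_in_size_factors(spike)
print(factors)
print(normalize(srna, factors))
```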

  13. Optimizing sampling strategy for radiocarbon dating of Holocene fluvial systems in a vertically aggrading setting

    International Nuclear Information System (INIS)

    Toernqvist, T.E.; Dijk, G.J. Van

    1993-01-01

    The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available database consists of almost 100 ¹⁴C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, ¹⁴C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, ¹⁴C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 ¹⁴C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs

  14. The Consistency of Isotopologues of Ambient Atmospheric Nitric Acid in Passively Collected Samples

    Science.gov (United States)

    Bell, M. D.; Sickman, J. O.; Bytnerowicz, A.; Padgett, P.; Allen, E. B.

    2012-12-01

    Anthropogenic sources of nitrogen oxides have previously been shown to have distinctive isotopic signatures of oxygen and nitrogen. Nylon filters are currently used in passive sampling arrays to measure ambient atmospheric nitric acid concentrations and estimate deposition rates. This experiment measured the ability of nylon filters to consistently collect isotopologues of atmospheric nitric acid in the same ratios as they are present in the atmosphere. Samplers were deployed in continuous stirred tank reactors (CSTR) and at field sites across a nitrogen deposition gradient in Southern California. Filters were exposed over a four week period with individual filters being subjected to 1-4 week exposure times. Extracted nitric acid was measured for δ18O and δ15N ratios and compared for consistency based on length of exposure and amount of HNO3 collected. Filters within the CSTRs collected HNO3 at a consistent rate in both high and low concentration chambers. After two weeks of exposure, the mean δ18O values were within 0.5‰ of the δ18O of the source HNO3 solution. The mean of all weekly exposures was within 0.5‰ of the δ15N of the source solution, but after three weeks, the mean δ15N of adsorbed HNO3 was within 0.2‰. As the length of the exposure increased, the variability of measured delta values decreased for both elements. The field samplers collected HNO3 consistent with previously measured values along a deposition gradient. The mean δ18O at high deposition sites was 52.2‰ compared to 35.7‰ at the low deposition sites. Mean δ15N values were similar at all sites across the deposition gradient. Due to precipitation events occurring during the exposure period, the δ15N and δ18O of nitric acid were highly variable at all field sites. At single sites, changes in δ15N and δ18O were negatively correlated, consistent with two-source mixing dynamics, but the slope of the regressions differed between high and low deposition sites. Anthropogenic

  15. SIMPLE AND STRONGLY CONSISTENT ESTIMATOR OF STABLE DISTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Cira E. Guevara Otiniano

    2016-06-01

    Full Text Available Stable distributions are extensively used to analyze earnings of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational time is minimal, and thus it can be used to initialize computationally intensive procedures such as maximum likelihood. With random samples of size n, we tested the efficacy of the estimator by the Monte Carlo method. We also include applications to three data sets.

  16. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    Science.gov (United States)

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  17. Reliability and consistency of a validated sun exposure questionnaire in a population-based Danish sample.

    Science.gov (United States)

    Køster, B; Søndergaard, J; Nielsen, J B; Olsen, A; Bentzen, J

    2018-06-01

    An important feature of questionnaire validation is reliability. To be able to measure a given concept by questionnaire validly, the reliability needs to be high. The objectives of this study were to examine reliability of attitude and knowledge and behavioral consistency of sunburn in a developed questionnaire for monitoring and evaluating population sun-related behavior. Sun-related behavior, attitude and knowledge were measured weekly by a questionnaire in the summer of 2013 among 664 Danes. Reliability was tested in a test-retest design. Consistency of behavioral information was tested similarly in a questionnaire adapted to measure behavior throughout the summer. The response rates for questionnaires 1, 2 and 3 were high and the drop-out was not dependent on demographic characteristics. There was at least 73% agreement between sunburns in the measurement week and the entire summer, and a possible sunburn underestimation in questionnaires summarizing the entire summer. The participants underestimated their outdoor exposure in the evaluation covering the entire summer as compared to the measurement week. The reliability of scales measuring attitude and knowledge was high for the majority of scales, while consistency in protection behavior was low. To our knowledge, this is the first study to report reliability for a completely validated questionnaire on sun-related behavior in a national random population-based sample. Further, we show that attitude and knowledge questions confirmed their validity with good reliability, while consistency of protection behavior in general and in a week's measurement was low.

  18. VLE measurements using a static cell vapor phase manual sampling method accompanied with an empirical data consistency test

    International Nuclear Information System (INIS)

    Freitag, Joerg; Kosuge, Hitoshi; Schmelzer, Juergen P.; Kato, Satoru

    2015-01-01

    Highlights: • We use a new, simple static cell vapor phase manual sampling method (SCVMS) for VLE (x, y, T) measurement. • The method is applied to non-azeotropic, asymmetric and two-liquid phase forming azeotropic binaries. • The method is validated by a data consistency test, i.e., a plot of the polarity exclusion factor vs. pressure. • The consistency test reveals that with the new SCVMS method accurate VLE near ambient temperature can be measured. • Moreover, the consistency test confirms that the effect of air in the SCVMS system is negligible. - Abstract: A new static cell vapor phase manual sampling (SCVMS) method is used for the simple measurement of constant temperature x, y (vapor + liquid) equilibria (VLE). The method was applied to the VLE measurements of the (methanol + water) binary at T/K = (283.2, 298.2, 308.2 and 322.9), the asymmetric (acetone + 1-butanol) binary at T/K = (283.2, 295.2, 308.2 and 324.2) and the two-liquid phase forming azeotropic (water + 1-butanol) binary at T/K = (283.2 and 298.2). The accuracy of the experimental data was confirmed by a data consistency test, that is, an empirical plot of the polarity exclusion factor, β, vs. the system pressure, P. The SCVMS data are accurate, because the VLE data converge to the same lnβ vs. lnP straight line determined from a conventional distillation-still method and a headspace gas chromatography method

  19. Factor Structure, Internal Consistency, and Screening Sensitivity of the GARS-2 in a Developmental Disabilities Sample

    Directory of Open Access Journals (Sweden)

    Martin A. Volker

    2016-01-01

    Full Text Available The Gilliam Autism Rating Scale-Second Edition (GARS-2) is a widely used screening instrument that assists in the identification and diagnosis of autism. The purpose of this study was to examine the factor structure, internal consistency, and screening sensitivity of the GARS-2 using ratings from special education teaching staff for a sample of 240 individuals with autism or other significant developmental disabilities. Exploratory factor analysis yielded a correlated three-factor solution similar to that found in 2005 by Lecavalier for the original GARS. Though the three factors appeared to be reasonably consistent with the intended constructs of the three GARS-2 subscales, the analysis indicated that more than a third of the GARS-2 items were assigned to the wrong subscale. Internal consistency estimates met or exceeded standards for screening and were generally higher than those in previous studies. Screening sensitivity was .65 and specificity was .81 for the Autism Index using a cut score of 85. Based on these findings, recommendations are made for instrument revision.
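
    The screening indices reported above follow from a simple 2×2 classification at the chosen cut score; a minimal sketch with invented scores and diagnoses is shown below.

```python
import numpy as np

def screening_indices(scores, has_autism, cut_score=85):
    """Sensitivity and specificity of a screener that flags scores >= cut_score."""
    scores = np.asarray(scores)
    has_autism = np.asarray(has_autism, dtype=bool)
    flagged = scores >= cut_score
    sensitivity = (flagged & has_autism).sum() / has_autism.sum()
    specificity = (~flagged & ~has_autism).sum() / (~has_autism).sum()
    return sensitivity, specificity

# Invented Autism Index scores and diagnoses for 10 rated individuals
scores = [92, 80, 110, 70, 88, 95, 60, 85, 78, 101]
autism = [1, 0, 1, 0, 1, 1, 0, 0, 0, 1]
print(screening_indices(scores, autism))
```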

  20. Factor Structure, Internal Consistency, and Screening Sensitivity of the GARS-2 in a Developmental Disabilities Sample

    OpenAIRE

    Martin A. Volker; Elissa H. Dua; Christopher Lopata; Marcus L. Thomeer; Jennifer A. Toomey; Audrey M. Smerbeck; Jonathan D. Rodgers; Joshua R. Popkin; Andrew T. Nelson; Gloria K. Lee

    2016-01-01

    The Gilliam Autism Rating Scale-Second Edition (GARS-2) is a widely used screening instrument that assists in the identification and diagnosis of autism. The purpose of this study was to examine the factor structure, internal consistency, and screening sensitivity of the GARS-2 using ratings from special education teaching staff for a sample of 240 individuals with autism or other significant developmental disabilities. Exploratory factor analysis yielded a correlated three-factor solution si...

  1. Adaptive Angular Sampling for SPECT Imaging

    OpenAIRE

    Li, Nan; Meng, Ling-Jian

    2011-01-01

    This paper presents an analytical approach for performing adaptive angular sampling in single photon emission computed tomography (SPECT) imaging. It allows for a rapid determination of the optimum sampling strategy that minimizes image variance in regions-of-interest (ROIs). The proposed method consists of three key components: (a) a set of close-form equations for evaluating image variance and resolution attainable with a given sampling strategy, (b) a gradient-based algor...

  2. Reliability and consistency of a validated sun exposure questionnaire in a population-based Danish sample

    Directory of Open Access Journals (Sweden)

    B. Køster

    2018-06-01

    Full Text Available An important feature of questionnaire validation is reliability. To be able to measure a given concept by questionnaire validly, the reliability needs to be high. The objectives of this study were to examine reliability of attitude and knowledge and behavioral consistency of sunburn in a developed questionnaire for monitoring and evaluating population sun-related behavior. Sun-related behavior, attitude and knowledge were measured weekly by a questionnaire in the summer of 2013 among 664 Danes. Reliability was tested in a test-retest design. Consistency of behavioral information was tested similarly in a questionnaire adapted to measure behavior throughout the summer. The response rates for questionnaires 1, 2 and 3 were high and the drop-out was not dependent on demographic characteristics. There was at least 73% agreement between sunburns in the measurement week and the entire summer, and a possible sunburn underestimation in questionnaires summarizing the entire summer. The participants underestimated their outdoor exposure in the evaluation covering the entire summer as compared to the measurement week. The reliability of scales measuring attitude and knowledge was high for the majority of scales, while consistency in protection behavior was low. To our knowledge, this is the first study to report reliability for a completely validated questionnaire on sun-related behavior in a national random population-based sample. Further, we show that attitude and knowledge questions confirmed their validity with good reliability, while consistency of protection behavior in general and in a week's measurement was low. Keywords: Questionnaire, Validation, Reliability, Skin cancer, Prevention, Ultraviolet radiation

  3. Secondary electron emission and self-consistent charge transport in semi-insulating samples

    Energy Technology Data Exchange (ETDEWEB)

    Fitting, H.-J. [Institute of Physics, University of Rostock, Universitaetsplatz 3, D-18051 Rostock (Germany); Touzin, M. [Unite Materiaux et Transformations, UMR CNRS 8207, Universite de Lille 1, F-59655 Villeneuve d' Ascq (France)

    2011-08-15

    Electron beam induced self-consistent charge transport and secondary electron emission (SEE) in insulators are described by means of an electron-hole flight-drift model (FDM), now extended by a certain intrinsic conductivity (c), and are implemented by an iterative computer simulation. Ballistic secondary electrons (SE) and holes, their attenuation to drifting charge carriers, and their recombination, trapping, and field- and temperature-dependent detrapping are included. As a main result, the time-dependent "true" secondary electron emission rate δ(t) released from the target material and based on ballistic electrons, and the spatial distributions of currents j(x,t), charges ρ(x,t), field F(x,t), and potential V(x,t) are obtained, where V0 = V(0,t) represents the surface potential. The intrinsic electronic conductivity limits the charging process and leads to a conduction sample current to the support. In that case the steady-state total SE yield will be fixed below unity, i.e., σ = η + δ < 1.

  4. Consistency of Trend Break Point Estimator with Underspecified Break Number

    Directory of Open Access Journals (Sweden)

    Jingjing Yang

    2017-01-01

    Full Text Available This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.
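
    A compact way to reproduce the flavour of the simulation study is to fit a single-break trend model by least squares to a series that actually contains two positive trend shifts; the sketch below does this with illustrative parameters that are not taken from the paper.

```python
import numpy as np

def single_break_estimate(y):
    """Least-squares break date for a single trend-slope break in
    y_t = a + b*t + c*max(t - k, 0) + e_t, searched over candidate dates k."""
    T = len(y)
    t = np.arange(T, dtype=float)
    best_k, best_ssr = None, np.inf
    for k in range(2, T - 2):
        X = np.column_stack([np.ones(T), t, np.clip(t - k, 0, None)])
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        ssr = resid @ resid
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

rng = np.random.default_rng(0)
T, k1, k2 = 200, 60, 140                         # true breaks at t = 60 and t = 140
t = np.arange(T, dtype=float)
y = 0.1 * t + 0.05 * np.clip(t - k1, 0, None) + 0.05 * np.clip(t - k2, 0, None) \
    + rng.normal(scale=1.0, size=T)
print(single_break_estimate(y))                  # may land away from either true break
```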

  5. Use of family relationships improved consistency of identification of Aboriginal people in linked administrative data.

    Science.gov (United States)

    Gibberd, Alison J; Simpson, Judy M; Eades, Sandra J

    2017-10-01

    Algorithms are often used to improve identification of Aboriginal Australians in linked data sets with inconsistent and incomplete recording of Aboriginal status. We compared how consistently some common algorithms identified family members, developed a new algorithm incorporating relatives' information, and assessed the effects of these algorithms on health estimates. The sample was people born 1980-2011 recorded as Aboriginal at least once (or a relative) in four Western Australian data sets and their relatives (N = 156,407). A very inclusive approach, ever-Aboriginal (EA/EA+, where + denotes children's records incorporated), and two more specific approaches, multistage median (MSM/MSM+) and last record (LR/LR+), were chosen, along with the new algorithm (MSM+Family). Ever-Aboriginal (EA) categorized relatives the least consistently; 25% of parent-child triads had incongruent Aboriginal statuses with EA+, compared with only 9% with MSM+. With EA+, 14% of full siblings had different statuses compared with 8% for MSM+. EA produced the lowest estimates of the proportion of Aboriginal people with poor health outcomes. Using relatives' records reduced the number of uncategorized people and categorized as Aboriginal more people who had few records (e.g., no hospital admissions). When many data sets are linked, more specific algorithms select more representative Aboriginal samples and identify Aboriginality of relatives more consistently. Copyright © 2017 Elsevier Inc. All rights reserved.
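
    To make the rule comparison concrete, a toy sketch of two of the simpler rules, ever-identified (EA) and last record (LR), applied to one person's linked records is given below; the record structure is invented, and the multistage-median and family-based rules are not reproduced.

```python
def ever_identified(records):
    """EA rule: classify as Aboriginal if any linked record says so."""
    return any(r["aboriginal"] for r in records)

def last_record(records):
    """LR rule: use the Aboriginal status recorded on the most recent record."""
    latest = max(records, key=lambda r: r["date"])
    return latest["aboriginal"]

# Invented linked records for one person (date, dataset, recorded status)
person = [
    {"date": "2001-03-02", "dataset": "births",   "aboriginal": True},
    {"date": "2009-07-15", "dataset": "hospital", "aboriginal": False},
    {"date": "2015-11-30", "dataset": "hospital", "aboriginal": True},
]
print(ever_identified(person), last_record(person))
```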

  6. Using XCO2 retrievals for assessing the long-term consistency of NDACC/FTIR data sets

    Science.gov (United States)

    Barthlott, S.; Schneider, M.; Hase, F.; Wiegele, A.; Christner, E.; González, Y.; Blumenstock, T.; Dohe, S.; García, O. E.; Sepúlveda, E.; Strong, K.; Mendonca, J.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Mahieu, E.; Jones, N.; Griffith, D. W. T.; Velazco, V. A.; Smale, D.; Robinson, J.; Kivi, R.; Heikkinen, P.; Raffalski, U.

    2015-03-01

    Within the NDACC (Network for the Detection of Atmospheric Composition Change), more than 20 FTIR (Fourier-transform infrared) spectrometers, spread worldwide, provide long-term data records of many atmospheric trace gases. We present a method that uses measured and modelled XCO2 for assessing the consistency of these NDACC data records. Our XCO2 retrieval setup is kept simple so that it can easily be adopted for any NDACC/FTIR-like measurement made since the late 1950s. By a comparison to coincident TCCON (Total Carbon Column Observing Network) measurements, we empirically demonstrate the useful quality of this suggested NDACC XCO2 product (the empirically obtained scatter between TCCON and NDACC is about 4‰ for daily mean as well as monthly mean comparisons, and the bias is 25‰). Our XCO2 model is a simple regression model fitted to CarbonTracker results and the Mauna Loa CO2 in situ records. A comparison to TCCON data suggests an uncertainty of the model for monthly mean data of below 3‰. We apply the method to the NDACC/FTIR spectra that are used within the project MUSICA (multi-platform remote sensing of isotopologues for investigating the cycle of atmospheric water) and demonstrate that there is good consistency for this globally representative set of spectra measured since 1996: the scatter between the modelled and measured XCO2 on a yearly time scale is only 3‰.

  7. Electron-neutral scattering cross sections for CO2: a complete and consistent set and an assessment of dissociation

    International Nuclear Information System (INIS)

    Grofulović, Marija; Alves, Luís L; Guerra, Vasco

    2016-01-01

    This work proposes a complete and consistent set of cross sections for electron collisions with carbon dioxide (CO2) molecules to be published in the IST-Lisbon database with LXCat. The set is validated from the comparison between swarm parameters calculated using a two-term Boltzmann solver and the available experimental data. The importance of superelastic collisions with CO2(010) molecules at low values of the reduced electric field is discussed. Due to significant uncertainties, there are ongoing debates regarding the deconvolution of cross sections that describe generic energy losses at specific energy thresholds into cross sections that describe individual processes. An important example of these uncertainties is with the dissociation of CO2, for which the total electron impact dissociation cross section has not yet been unambiguously identified. The available dissociation cross sections are evaluated and discussed, and a strategy to obtain electron-impact dissociation rate coefficients is suggested. (paper)

  8. Irradiation chamber and sample changer for biological samples

    International Nuclear Information System (INIS)

    Kraft, G.; Daues, H.W.; Fischer, B.; Kopf, U.; Liebold, H.P.; Quis, D.; Stelzer, H.; Kiefer, J.; Schoepfer, F.; Schneider, E.

    1980-01-01

    This paper describes an irradiation system with which living cells of different origin are irradiated with heavy ion beams (18 ≤ Z ≤ 92) at energies up to 10 MeV/amu. The system consists of a beam monitor connected to the vacuum system of the accelerator and the irradiation chamber, containing the biological samples under atmospheric pressure. The requirements and aims of the set up are discussed. The first results with Saccharomyces cerevisiae and Chinese hamster tissue cells are presented. (orig.)

  9. A sample design for globally consistent biomass estimation using lidar data from the Geoscience Laser Altimeter System (GLAS)

    Science.gov (United States)

    Sean P. Healey; Paul L. Patterson; Sassan S. Saatchi; Michael A. Lefsky; Andrew J. Lister; Elizabeth A. Freeman

    2012-01-01

    Lidar height data collected by the Geoscience Laser Altimeter System (GLAS) from 2002 to 2008 have the potential to form the basis of a globally consistent sample-based inventory of forest biomass. GLAS lidar return data were collected globally in spatially discrete full waveform "shots," which have been shown to be strongly correlated with aboveground forest...

  10. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated on simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra of high dynamic range of peak intensities preserving benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D ¹⁵N- and ¹³C-edited NOESY-HSQC spectra of human ubiquitin.
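
    The iterative artifact-suppression idea can be illustrated in one dimension with a generic CLEAN-style loop: transform the zero-filled sparse data, pick the strongest peak, subtract that component from the measured points, and repeat. The sketch below follows this generic scheme on invented data; it is not the published algorithm and omits its statistical peak recognition.

```python
import numpy as np

def clean_sparse_ft(values, sampled_idx, N, n_iter=100, gain=0.5):
    """CLEAN-style reconstruction of a 1-D spectrum from on-grid sparse samples:
    repeatedly locate the strongest peak of the zero-filled FT, subtract its
    contribution from the measured points, and accumulate the peak amplitudes."""
    residual = np.zeros(N, dtype=complex)
    residual[sampled_idx] = values
    clean_spectrum = np.zeros(N, dtype=complex)
    t = np.arange(N)
    for _ in range(n_iter):
        spectrum = np.fft.fft(residual)
        k = int(np.argmax(np.abs(spectrum)))           # strongest remaining peak
        amp = gain * spectrum[k] / len(sampled_idx)    # estimated complex amplitude fraction
        clean_spectrum[k] += amp * N                   # accumulate on a full-grid scale
        component = amp * np.exp(2j * np.pi * k * t / N)
        residual[sampled_idx] -= component[sampled_idx]
    return clean_spectrum

# Toy example: two-frequency signal, 25% of the grid sampled at random
N = 256
rng = np.random.default_rng(3)
idx = np.sort(rng.choice(N, size=64, replace=False))
t = np.arange(N)
signal = np.exp(2j*np.pi*20*t/N) + 0.4*np.exp(2j*np.pi*75*t/N)
spec = clean_sparse_ft(signal[idx], idx, N)
print(sorted(np.argsort(np.abs(spec))[-2:]))           # expect bins near [20, 75]
```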

  11. Role of the urate transporter SLC2A9 gene in susceptibility to gout in New Zealand Māori, Pacific Island, and Caucasian case-control sample sets.

    Science.gov (United States)

    Hollis-Moffatt, Jade E; Xu, Xin; Dalbeth, Nicola; Merriman, Marilyn E; Topless, Ruth; Waddell, Chloe; Gow, Peter J; Harrison, Andrew A; Highton, John; Jones, Peter B B; Stamp, Lisa K; Merriman, Tony R

    2009-11-01

    To examine the role of genetic variation in the renal urate transporter SLC2A9 in gout in New Zealand sample sets of Māori, Pacific Island, and Caucasian ancestry and to determine if the Māori and Pacific Island samples could be useful for fine-mapping. Patients (n = 56 Māori, 69 Pacific Island, and 131 Caucasian) were recruited from rheumatology outpatient clinics and satisfied the American College of Rheumatology criteria for gout. The control samples comprised 125 Māori subjects, 41 Pacific Island subjects, and 568 Caucasian subjects without arthritis. SLC2A9 single-nucleotide polymorphisms rs16890979 (V253I), rs5028843, rs11942223, and rs12510549 were genotyped (possible etiologic variants in Caucasians). Association of the major allele of rs16890979, rs11942223, and rs5028843 with gout was observed in all sample sets (P = 3.7 × 10⁻⁷, 1.6 × 10⁻⁶, and 7.6 × 10⁻⁵ for rs11942223 in the Māori, Pacific Island, and Caucasian samples, respectively). One 4-marker haplotype (1/1/2/1; more prevalent in the Māori and Pacific Island control samples) was not observed in a single gout case. Our data confirm a role of SLC2A9 in gout susceptibility in a New Zealand Caucasian sample set, with the effect on risk (odds ratio >2.0) greater than previous estimates. We also demonstrate association of SLC2A9 with gout in samples of Māori and Pacific Island ancestry and a consistent pattern of haplotype association. The presence of both alleles of rs16890979 on susceptibility and protective haplotypes in the Māori and Pacific Island sample is evidence against a role for this nonsynonymous variant as the sole etiologic agent. More extensive linkage disequilibrium in Māori and Pacific Island samples suggests that Caucasian samples may be more useful for fine-mapping.

  12. Handling missing data in ranked set sampling

    CERN Document Server

    Bouza-Herrera, Carlos N

    2013-01-01

    The existence of missing observations is a very important aspect to be considered in the application of survey sampling, for example. In human populations they may be caused by a refusal of some interviewees to give the true value for the variable of interest. Traditionally, simple random sampling is used to select samples. Most statistical models are supported by the use of samples selected by means of this design. In recent decades, an alternative design has started being used, which, in many cases, shows an improvement in terms of accuracy compared with traditional sampling. It is called R

  13. Globfit: Consistently fitting primitives by discovering global relations

    KAUST Repository

    Li, Yangyan; Wu, Xiaokun; Chrysathou, Yiorgos; Sharf, Andrei; Cohen-Or, Daniel; Mitra, Niloy J.

    2011-01-01

    Given a noisy and incomplete point set, we introduce a method that simultaneously recovers a set of locally fitted primitives along with their global mutual relations. We operate under the assumption that the data corresponds to a man-made engineering object consisting of basic primitives, possibly repeated and globally aligned under common relations. We introduce an algorithm to directly couple the local and global aspects of the problem. The local fit of the model is determined by how well the inferred model agrees to the observed data, while the global relations are iteratively learned and enforced through a constrained optimization. Starting with a set of initial RANSAC based locally fitted primitives, relations across the primitives such as orientation, placement, and equality are progressively learned and conformed to. In each stage, a set of feasible relations are extracted among the candidate relations, and then aligned to, while best fitting to the input data. The global coupling corrects the primitives obtained in the local RANSAC stage, and brings them to precise global alignment. We test the robustness of our algorithm on a range of synthesized and scanned data, with varying amounts of noise, outliers, and non-uniform sampling, and validate the results against ground truth, where available. © 2011 ACM.

  14. Globfit: Consistently fitting primitives by discovering global relations

    KAUST Repository

    Li, Yangyan

    2011-07-01

    Given a noisy and incomplete point set, we introduce a method that simultaneously recovers a set of locally fitted primitives along with their global mutual relations. We operate under the assumption that the data corresponds to a man-made engineering object consisting of basic primitives, possibly repeated and globally aligned under common relations. We introduce an algorithm to directly couple the local and global aspects of the problem. The local fit of the model is determined by how well the inferred model agrees to the observed data, while the global relations are iteratively learned and enforced through a constrained optimization. Starting with a set of initial RANSAC based locally fitted primitives, relations across the primitives such as orientation, placement, and equality are progressively learned and conformed to. In each stage, a set of feasible relations are extracted among the candidate relations, and then aligned to, while best fitting to the input data. The global coupling corrects the primitives obtained in the local RANSAC stage, and brings them to precise global alignment. We test the robustness of our algorithm on a range of synthesized and scanned data, with varying amounts of noise, outliers, and non-uniform sampling, and validate the results against ground truth, where available. © 2011 ACM.

  15. Preanalytical Blood Sampling Errors in Clinical Settings

    International Nuclear Information System (INIS)

    Zehra, N.; Malik, A. H.; Arshad, Q.; Sarwar, S.; Aslam, S.

    2016-01-01

    Background: Blood sampling is one of the common procedures done in every ward for disease diagnosis and prognosis. Hundreds of samples are collected daily from different wards, but lack of appropriate knowledge of blood sampling by paramedical staff and accidental errors make the samples inappropriate for testing. Thus the need to avoid these errors for better results still remains. We carried out this research with the aim to determine the common errors during blood sampling, find the factors responsible, and propose ways to reduce these errors. Methods: A cross-sectional descriptive study was carried out at the Military and Combined Military Hospital Rawalpindi during February and March 2014. A Venous Blood Sampling questionnaire (VBSQ) was filled in by the staff on a voluntary basis in front of the researchers. The staff was briefed on the purpose of the survey before filling in the questionnaire. The sample size was 228. Results were analysed using SPSS-21. Results: When asked in the questionnaire, around 61.6 percent of the paramedical staff stated that they cleaned the vein by moving the alcohol swab from inward to outward, while 20.8 percent of the staff reported that they felt the vein after disinfection. Contrary to WHO guidelines, 89.6 percent indicated that they had a habit of placing blood in the test tube by holding it in the other hand, which should actually be done after inserting it into the stand. Although 86 percent thought that they had ample knowledge regarding the blood sampling process, they did not practice it properly. Conclusion: Preanalytical blood sampling errors are common in our setup. Eighty-six percent of participants thought that they had adequate knowledge regarding blood sampling, but most of them were not adhering to standard protocols. There is a need for continued education and refresher courses. (author)

  16. Image recognition and consistency of response

    Science.gov (United States)

    Haygood, Tamara M.; Ryan, John; Liu, Qing Mary A.; Bassett, Roland; Brennan, Patrick C.

    2012-02-01

    Purpose: To investigate the connection between conscious recognition of an image previously encountered in an experimental setting and consistency of response to the experimental question. Materials and Methods: Twenty-four radiologists viewed 40 frontal chest radiographs and gave their opinion as to the position of a central venous catheter. One to three days later they again viewed 40 frontal chest radiographs and again gave their opinion as to the position of the central venous catheter. Half of the radiographs in the second set were repeated images from the first set and half were new. The radiologists were asked, for each image, whether it had been included in the first set. For this study, we are evaluating only the 20 repeated images. We used the Kruskal-Wallis test and Fisher's exact test to determine the relationship between conscious recognition of a previously interpreted image and consistency in interpretation of the image. Results: There was no significant correlation between recognition of the image and consistency in response regarding the position of the central venous catheter. In fact, there was a trend in the opposite direction, with radiologists being slightly more likely to give a consistent response with respect to images they did not recognize than with respect to those they did recognize. Conclusion: Radiologists' recognition of previously-encountered images in an observer-performance study does not noticeably color their interpretation on the second encounter.

  17. Determining a Consistent Set of Accounting and Financial Reporting Standards

    OpenAIRE

    Anne Le Manh-Béna; Olivier Ramond

    2011-01-01

    Following the debate on the Conceptual Framework revision undertaken by the IASB and the FASB, this paper discusses three major concerns about the way financial reporting standards should be determined: (1) What is the role of a Conceptual Framework?; (2) For whom and for which needs are accounting and financial reporting standards made?; and (3) What information set should financial reporting provide? We show that the perceived need for a Framework has resulted in practice in weak usefulness. We ...

  18. Acceptability of self-collection sampling for HPV-DNA testing in low-resource settings: a mixed methods approach.

    Science.gov (United States)

    Bansil, Pooja; Wittet, Scott; Lim, Jeanette L; Winkler, Jennifer L; Paul, Proma; Jeronimo, Jose

    2014-06-12

    Vaginal self-sampling with HPV-DNA tests is a promising primary screening method for cervical cancer. However, women's experiences, concerns and the acceptability of such tests in low-resource settings remain unknown. In India, Nicaragua, and Uganda, a mixed-method design was used to collect data from surveys (N = 3,863), qualitative interviews (N = 72; 20 providers and 52 women) and focus groups (N = 30 women) on women's and providers' experiences with self-sampling, women's opinions of sampling at home, and their future needs. Among surveyed women, 90% provided a self-collected sample. Of these, 75% reported it was easy, although 52% were initially concerned about hurting themselves and 24% were worried about not getting a good sample. Most surveyed women preferred self-sampling (78%). However it was not clear if they responded to the privacy of self-sampling or the convenience of avoiding a pelvic examination, or both. In follow-up interviews, most women reported that they didn't mind self-sampling, but many preferred to have a provider collect the vaginal sample. Most women also preferred clinic-based screening (as opposed to home-based self-sampling), because the sample could be collected by a provider, women could receive treatment if needed, and the clinic was sanitary and provided privacy. Self-sampling acceptability was higher when providers prepared women through education, allowed women to examine the collection brush, and were present during the self-collection process. Among survey respondents, aids that would facilitate self-sampling in the future were: staff help (53%), additional images in the illustrated instructions (31%), and a chance to practice beforehand with a doll/model (26%). Self- and vaginal sampling are widely acceptable among women in low-resource settings. Providers have a unique opportunity to educate and prepare women for self-sampling and be flexible in accommodating women's preference for self-sampling.

  19. An Approach Based on HPLC-Fingerprint and Chemometrics to Quality Consistency Evaluation of Matricaria chamomilla L. Commercial Samples

    OpenAIRE

    Viapiana, Agnieszka; Struck-Lewicka, Wiktoria; Konieczynski, Pawel; Wesolowski, Marek; Kaliszan, Roman

    2016-01-01

    Chamomile has been used as an herbal medication since ancient times and is still popular because it contains various bioactive phytochemicals that could provide therapeutic effects. In this study, a simple and reliable HPLC method was developed to evaluate the quality consistency of nineteen chamomile samples through establishing a chromatographic fingerprint, quantification of phenolic compounds and determination of antioxidant activity. For fingerprint analysis, 12 peaks were selected as th...

  20. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    Directory of Open Access Journals (Sweden)

    Der-Chiang Li

    Full Text Available It is difficult for learning models to achieve high classification performance with imbalanced data sets, because when one of the classes is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distribution shape of the minority data, and then produce samples according to this distribution. Four real data sets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged into the D3C method (PPDP+D3C) with those of the one-sided selection (OSS) method, the well-known SMOTEBoost (SB) approach, the normal distribution-based oversampling (NDO) approach, and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
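    The abstract's strategy (trim the majority class, then generate synthetic minority samples from an estimated distribution) can be illustrated with a simplified sketch. The snippet below uses interquartile-range trimming for the majority class and Gaussian resampling for the minority class; the Gaussian step is a stand-in for the paper's Mega-Trend-Diffusion method, not a reimplementation of it, and all data are toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

def trim_outliers(X):
    """Drop rows falling outside the box-and-whisker (1.5 * IQR) fences in any feature."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    mask = np.all((X >= lo) & (X <= hi), axis=1)
    return X[mask]

def synthesize_minority(X, n_new):
    """Draw synthetic minority samples from a Gaussian fitted to the minority data
    (a simplified stand-in for estimating the minority distribution)."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_new)

# Toy imbalanced data: 500 majority vs. 30 minority samples, 2 features each.
X_major = rng.normal(0.0, 1.0, size=(500, 2))
X_minor = rng.normal(3.0, 0.5, size=(30, 2))

X_major_trimmed = trim_outliers(X_major)
X_minor_augmented = np.vstack([X_minor, synthesize_minority(X_minor, n_new=200)])
print(X_major_trimmed.shape, X_minor_augmented.shape)
```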

  1. Is cosmology consistent?

    International Nuclear Information System (INIS)

    Wang Xiaomin; Tegmark, Max; Zaldarriaga, Matias

    2002-01-01

    We perform a detailed analysis of the latest cosmic microwave background (CMB) measurements (including BOOMERaNG, DASI, Maxima and CBI), both alone and jointly with other cosmological data sets involving, e.g., galaxy clustering and the Lyman Alpha Forest. We first address the question of whether the CMB data are internally consistent once calibration and beam uncertainties are taken into account, performing a series of statistical tests. With a few minor caveats, our answer is yes, and we compress all data into a single set of 24 bandpowers with associated covariance matrix and window functions. We then compute joint constraints on the 11 parameters of the 'standard' adiabatic inflationary cosmological model. Our best-fit model passes a series of physical consistency checks and agrees with essentially all currently available cosmological data. In addition to sharp constraints on the cosmic matter budget in good agreement with those of the BOOMERaNG, DASI and Maxima teams, we obtain a heaviest-neutrino mass range of 0.04-4.2 eV and the sharpest constraints to date on gravity waves, which (together with a preference for a slight red tilt) favor 'small-field' inflation models

  2. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    Science.gov (United States)

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. Spatial autocorrelation, the number of desired classes, and the form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between the improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
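    As a rough illustration of the sampling strategy evaluated above, the sketch below computes class breaks on a random sample of the attribute values and then assigns the full data set to classes. The break routine here is a simple quantile stand-in, not the Fisher-Jenks optimizer studied in the paper (in practice one would substitute an optimal classifier, e.g., from PySAL/mapclassify).

```python
import numpy as np

rng = np.random.default_rng(42)

def class_breaks(values, k):
    """Stand-in break finder: equal-count (quantile) breaks.
    A real application would substitute a Fisher-Jenks optimizer here."""
    return np.quantile(values, np.linspace(0, 1, k + 1)[1:-1])

# Simulated attribute for a large choropleth map (one value per areal unit).
attribute = rng.lognormal(mean=3.0, sigma=1.0, size=1_000_000)

# Compute breaks on a small random sample instead of the full array ...
sample = rng.choice(attribute, size=10_000, replace=False)
breaks = class_breaks(sample, k=5)

# ... then classify every observation against the sampled breaks.
classes = np.digitize(attribute, breaks)
print(breaks, np.bincount(classes))
```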

  3. The Consistency of Performance Management System Based on Attributes of the Performance Indicator: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Jan Zavadsky

    2014-07-01

    Full Text Available Purpose: The performance management system (PMS) is a metasystem over all business processes at the strategic and operational levels. The effectiveness of the various management systems depends on many factors, one of which is the consistent definition of each system's elements. The main purpose of this study is to explore whether the performance management systems of the sample companies are consistent and how companies can create such a system. Consistency in this case is based on the homogeneous definition of attributes relating to the performance indicator as a basic element of the PMS. Methodology: At the beginning, we used an affinity diagram that helped us to clarify and group the various attributes of performance indicators. The main research results were achieved through an empirical study carried out on a sample of Slovak companies. The criterion for selection was the existence of management systems certified according to ISO 9001. Representativeness of the sample companies with respect to these standards was confirmed by applying Pearson's chi-squared (χ2) test. Findings: Drawing on a review of the literature, we defined four groups of attributes relating to the performance indicator: formal attributes, attributes of the target value, informational attributes and attributes of evaluation. The whole set contains 21 attributes. The consistency of a PMS is based not on a maximum or minimum number of attributes, but on the same type of attributes being defined for each performance indicator used in the PMS at both the operational and strategic levels. The main findings are: companies use various financial and non-financial indicators at the strategic or operational level; companies determine various attributes of performance indicators, but most of the performance indicators are determined differently; we identified the common attributes for the whole sample of companies. Practical implications: The research results have an implication for

  4. Information overload or search-amplified risk? Set size and order effects on decisions from experience.

    Science.gov (United States)

    Hills, Thomas T; Noguchi, Takao; Gibbert, Michael

    2013-10-01

    How do changes in choice-set size influence information search and subsequent decisions? Moreover, does information overload influence information processing with larger choice sets? We investigated these questions by letting people freely explore sets of gambles before choosing one of them, with the choice sets either increasing or decreasing in number for each participant (from two to 32 gambles). Set size influenced information search, with participants taking more samples overall, but sampling a smaller proportion of gambles and taking fewer samples per gamble, when set sizes were larger. The order of choice sets also influenced search, with participants sampling from more gambles and taking more samples overall if they started with smaller as opposed to larger choice sets. Inconsistent with information overload, information processing appeared consistent across set sizes and choice order conditions, reliably favoring gambles with higher sample means. Despite the lack of evidence for information overload, changes in information search did lead to systematic changes in choice: People who started with smaller choice sets were more likely to choose gambles with the highest expected values, but only for small set sizes. For large set sizes, the increase in total samples increased the likelihood of encountering rare events at the same time that the reduction in samples per gamble amplified the effect of these rare events when they occurred, a phenomenon we call search-amplified risk. This led to riskier choices for individuals whose choices most closely followed the sample mean.

  5. An Optimized Set of Fluorescence In Situ Hybridization Probes for Detection of Pancreatobiliary Tract Cancer in Cytology Brush Samples.

    Science.gov (United States)

    Barr Fritcher, Emily G; Voss, Jesse S; Brankley, Shannon M; Campion, Michael B; Jenkins, Sarah M; Keeney, Matthew E; Henry, Michael R; Kerr, Sarah M; Chaiteerakij, Roongruedee; Pestova, Ekaterina V; Clayton, Amy C; Zhang, Jun; Roberts, Lewis R; Gores, Gregory J; Halling, Kevin C; Kipp, Benjamin R

    2015-12-01

    Pancreatobiliary cancer is detected by fluorescence in situ hybridization (FISH) of pancreatobiliary brush samples with UroVysion probes, originally designed to detect bladder cancer. We designed a set of new probes to detect pancreatobiliary cancer and compared its performance with that of UroVysion and routine cytology analysis. We tested a set of FISH probes on tumor tissues (cholangiocarcinoma or pancreatic carcinoma) and non-tumor tissues from 29 patients. We identified 4 probes that had high specificity for tumor vs non-tumor tissues; we called this set of probes pancreatobiliary FISH. We performed a retrospective analysis of brush samples from 272 patients who underwent endoscopic retrograde cholangiopancreatography for evaluation of malignancy at the Mayo Clinic; results were available from routine cytology and FISH with UroVysion probes. Archived residual specimens were retrieved and used to evaluate the pancreatobiliary FISH probes. Cutoff values for FISH with the pancreatobiliary probes were determined using 89 samples and validated in the remaining 183 samples. Clinical and pathologic evidence of malignancy in the pancreatobiliary tract within 2 years of brush sample collection was used as the standard; samples from patients without malignancies were used as negative controls. The validation cohort included 85 patients with malignancies (46.4%) and 114 patients with primary sclerosing cholangitis (62.3%). Samples containing cells above the cutoff for polysomy (copy number gain of ≥2 probes) were classified as positive in FISH with the UroVysion and pancreatobiliary probes. Multivariable logistic regression was used to estimate associations between clinical and pathology findings and results from FISH. The combination of FISH probes 1q21, 7p12, 8q24, and 9p21 identified cancer cells with 93% sensitivity and 100% specificity in pancreatobiliary tissue samples and were therefore included in the pancreatobiliary probe set. In the validation cohort of

  6. Verification Process of Behavioral Consistency between Design and Implementation programs of pSET using HW-CBMC

    International Nuclear Information System (INIS)

    Lee, Dong Ah; Lee, Jong Hoon; Yoo, Jun Beom

    2011-01-01

    Controllers in safety critical systems such as nuclear power plants often use Function Block Diagrams (FBDs) to design embedded software. The design is implemented using programming languages such as C and compiled into particular target hardware. The implementation must have the same behavior as the design, and this behavior should be verified explicitly. For example, the pSET (POSAFE-Q Software Engineering Tool) is loader software for programming the POSAFE-Q PLC (Programmable Logic Controller) and was developed as a part of the KNICS (Korea Nuclear Instrumentation and Control System R and D Center) project. It uses FBDs to design PLC software and generates ANSI-C code to be compiled into specific machine code. To verify the equivalence between the FBDs and the ANSI-C code, a mathematical proof of the code generator or a verification tool such as RETRANS can help guarantee the equivalence. Mathematical proof, however, has the weakness of requiring high expenditure and repeated effort whenever the translator is modified. On the other hand, RETRANS reconstructs the generated source code without consideration of the generator; its weakness is that the reconstruction of the generated code needs additional analysis. This paper introduces a process for verifying behavioral consistency between the design and its implementation in the pSET using HW-CBMC. HW-CBMC is a formal verification tool that verifies equivalence between a hardware description and software. It requires two inputs for checking equivalence: Verilog for hardware and ANSI-C for software. In this approach, FBDs are translated into a semantically equivalent Verilog program, and HW-CBMC verifies the equivalence between the Verilog program and the ANSI-C program generated from the FBDs

  7. Verification Process of Behavioral Consistency between Design and Implementation programs of pSET using HW-CBMC

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Ah; Lee, Jong Hoon; Yoo, Jun Beom [Konkuk University, Seoul (Korea, Republic of)

    2011-05-15

    Controllers in safety critical systems such as nuclear power plants often use Function Block Diagrams (FBDs) to design embedded software. The design is implemented using programming languages such as C and compiled into particular target hardware. The implementation must have the same behavior as the design, and this behavior should be verified explicitly. For example, the pSET (POSAFE-Q Software Engineering Tool) is loader software for programming the POSAFE-Q PLC (Programmable Logic Controller) and was developed as a part of the KNICS (Korea Nuclear Instrumentation and Control System R and D Center) project. It uses FBDs to design PLC software and generates ANSI-C code to be compiled into specific machine code. To verify the equivalence between the FBDs and the ANSI-C code, a mathematical proof of the code generator or a verification tool such as RETRANS can help guarantee the equivalence. Mathematical proof, however, has the weakness of requiring high expenditure and repeated effort whenever the translator is modified. On the other hand, RETRANS reconstructs the generated source code without consideration of the generator; its weakness is that the reconstruction of the generated code needs additional analysis. This paper introduces a process for verifying behavioral consistency between the design and its implementation in the pSET using HW-CBMC. HW-CBMC is a formal verification tool that verifies equivalence between a hardware description and software. It requires two inputs for checking equivalence: Verilog for hardware and ANSI-C for software. In this approach, FBDs are translated into a semantically equivalent Verilog program, and HW-CBMC verifies the equivalence between the Verilog program and the ANSI-C program generated from the FBDs

  8. Unconfined versus confined speleogenetic settings: variations of solution porosity.

    Directory of Open Access Journals (Sweden)

    Klimchouk Alexander

    2006-01-01

    Full Text Available Speleogenesis in confined settings generates cave morphologies that differ much from those formed in unconfined settings. Caves developed in unconfined settings are characterised by broadly dendritic patterns of channels due to highly competing development. In contrast, caves originated under confined conditions tend to form two- or three-dimensional mazes with densely packed conduits. This paper illustrates variations of solution (channel) porosity resulting from speleogenesis in unconfined and confined settings by the analysis of morphometric parameters of typical cave patterns. Two samples of typical cave systems formed in the respective settings are compared. The sample that represents unconfined speleogenesis consists solely of limestone caves, whereas gypsum caves of this type tend to be less dendritic and more linear. The sample that represents confined speleogenesis consists of both limestone and gypsum maze caves. The comparison shows considerable differences in average values of some parameters between the settings. Passage network density (the ratio of the cave length to the area of the cave field, km/km2) is one order of magnitude greater in confined settings than in unconfined (average 167.3 km/km2 versus 16.6 km/km2). Similarly, an order of magnitude difference is observed in cave porosity (the fraction of the volume of a cave block occupied by mapped cavities; 5.0% versus 0.4%). This illustrates that storage in maturely karstified confined aquifers is generally much greater than in unconfined ones. The average areal coverage (the fraction of the area of the cave field occupied by passages in plan view) is about 5 times greater in confined settings than in unconfined (29.7% versus 6.4%). This indicates that conduit permeability in confined aquifers is appreciably easier to target with drilling than the widely spaced conduits in unconfined aquifers.

  9. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. To that end, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust estimates of length and the proportion of juveniles. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, whether for on-board or inland biometric measurements.
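    The question posed above, how many fish to measure per fishing set, can be explored with a simple resampling sketch: for a hypothetical length distribution, estimate how the error in the juvenile proportion shrinks as the per-set sample size grows. The length threshold and distribution below are invented for illustration and are not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical catch in one fishing set: lengths in cm; juveniles are < 12 cm here.
catch_lengths = rng.normal(loc=13.0, scale=2.0, size=20_000)
true_juvenile_prop = np.mean(catch_lengths < 12.0)

def prop_error(sample_size, n_rep=2_000):
    """Mean absolute error of the estimated juvenile proportion for a given sample size."""
    errors = []
    for _ in range(n_rep):
        sample = rng.choice(catch_lengths, size=sample_size, replace=False)
        errors.append(abs(np.mean(sample < 12.0) - true_juvenile_prop))
    return float(np.mean(errors))

for n in (25, 50, 100, 200, 400):
    print(f"n = {n:4d}  mean abs. error = {prop_error(n):.3f}")
```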

  10. Consistency of color representation in smart phones.

    Science.gov (United States)

    Dain, Stephen J; Kwan, Benjamin; Wong, Leslie

    2016-03-01

    One of the barriers to the construction of consistent computer-based color vision tests has been the variety of monitors and computers. Consistency of color on a variety of screens has necessitated calibration of each setup individually. Color vision examination with a carefully controlled display has, as a consequence, been a laboratory rather than a clinical activity. Inevitably, smart phones have become a vehicle for color vision tests. They have the advantage that the processor and screen are associated, and there are fewer models of smart phones than permutations of computers and monitors. Colorimetric consistency of display within a model may be a given. It may extend across models from the same manufacturer but is unlikely to extend between manufacturers, especially where technologies vary. In this study, we measured the same set of colors in a JPEG file displayed on 11 samples of each of four models of smart phone (iPhone 4s, iPhone 5, Samsung Galaxy S3, and Samsung Galaxy S4) using a Photo Research PR-730. The iPhones use white-LED-backlit LCDs and the Samsungs use OLEDs. The color gamut varies between models; comparison with the sRGB space shows coverage of 61%, 85%, 117%, and 110%, respectively. The iPhones differ markedly from the Samsungs and from one another. This indicates that model-specific color lookup tables will be needed. Within each model, the primaries were quite consistent (despite the age of phone varying within each sample). The worst case in each model was the blue primary; the 95th percentile limits in the v' coordinate were ±0.008 for the iPhone 4s and ±0.004 for the other three models. The u'v' variation in white points was ±0.004 for the iPhone 4s and ±0.002 for the others, although the spread of white points between models was u'v' ±0.007. The differences are essentially the same for primaries at low luminance. The variation of colors intermediate between the primaries (e.g., red-purple, orange) mirrors the variation in the primaries. The variation in

  11. Integration of morphological data sets for phylogenetic analysis of Amniota: the importance of integumentary characters and increased taxonomic sampling.

    Science.gov (United States)

    Hill, Robert V

    2005-08-01

    Several mutually exclusive hypotheses have been advanced to explain the phylogenetic position of turtles among amniotes. Traditional morphology-based analyses place turtles among extinct anapsids (reptiles with a solid skull roof), whereas more recent studies of both morphological and molecular data support an origin of turtles from within Diapsida (reptiles with a doubly fenestrated skull roof). Evaluation of these conflicting hypotheses has been hampered by nonoverlapping taxonomic samples and the exclusion of significant taxa from published analyses. Furthermore, although data from soft tissues and anatomical systems such as the integument may be particularly relevant to this problem, they are often excluded from large-scale analyses of morphological systematics. Here, conflicting hypotheses of turtle relationships are tested by (1) combining published data into a supermatrix of morphological characters to address issues of character conflict and missing data; (2) increasing taxonomic sampling by more than doubling the number of operational taxonomic units to test internal relationships within suprageneric ingroup taxa; and (3) increasing character sampling by approximately 25% by adding new data on the osteology and histology of the integument, an anatomical system that has been historically underrepresented in morphological systematics. The morphological data set assembled here represents the largest yet compiled for Amniota. Reevaluation of character data from prior studies of amniote phylogeny favors the hypothesis that turtles indeed have diapsid affinities. Addition of new ingroup taxa alone leads to a decrease in overall phylogenetic resolution, indicating that existing characters used for amniote phylogeny are insufficient to explain the evolution of more highly nested taxa. Incorporation of new data from the soft and osseous components of the integument, however, helps resolve relationships among both basal and highly nested amniote taxa. Analysis of a

  12. Results for five sets of forensic genetic markers studied in a Greek population sample.

    Science.gov (United States)

    Tomas, C; Skitsa, I; Steinmeier, E; Poulsen, L; Ampati, A; Børsting, C; Morling, N

    2015-05-01

    A population sample of 223 Greek individuals was typed for five sets of forensic genetic markers with the kits NGM SElect™, SNPforID 49plex, DIPplex®, Argus X-12 and PowerPlex® Y23. No significant deviation from Hardy-Weinberg expectations was observed for any of the studied markers after Holm-Šidák correction. The match probability ranged from 1 in 2×10(7) males, based on haplotype frequencies of four X-chromosome haplogroups, to 1 in 1.73×10(21) individuals for 16 autosomal STRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Sixteen-item Anxiety Sensitivity Index: Confirmatory factor analytic evidence, internal consistency, and construct validity in a young adult sample from the Netherlands

    NARCIS (Netherlands)

    Vujanovic, Anka A.; Arrindell, Willem A.; Bernstein, Amit; Norton, Peter J.; Zvolensky, Michael J.

    The present investigation examined the factor structure, internal consistency, and construct validity of the 16-item Anxiety Sensitivity Index (ASI; Reiss, Peterson, Gursky, & McNally, 1986) in a young adult sample (n = 420) from the Netherlands. Confirmatory factor analysis was used to comparatively

  14. Election Districts and Precincts, PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct boundaries., Published in 1991, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Election Districts and Precincts dataset current as of 1991. PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct...

  15. Transitioning from MODIS to VIIRS: an analysis of inter-consistency of NDVI data sets for agricultural monitoring.

    Science.gov (United States)

    Skakun, Sergii; Justice, Christopher O; Vermote, Eric; Roger, Jean-Claude

    2018-01-01

    The Visible/Infrared Imager/Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite was launched in 2011, in part to provide continuity with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the National Aeronautics and Space Administration's (NASA) Terra and Aqua remote sensing satellites. VIIRS will eventually replace MODIS for both land science and applications and add to the coarse-resolution, long-term data record. It is, therefore, important to provide the user community with an assessment of the consistency of equivalent products from the two sensors. For this study, we do this in the context of example agricultural monitoring applications. Surface reflectance that is routinely delivered within the M{O,Y}D09 and VNP09 series of products provides critical input for generating downstream products. Given the range of applications utilizing the normalized difference vegetation index (NDVI) generated from M{O,Y}D09 and VNP09 products and the inherent differences between MODIS and VIIRS sensors in calibration, spatial sampling, and spectral bands, the main objective of this study is to quantify uncertainties related to transitioning from MODIS-based to VIIRS-based NDVIs. In particular, we compare NDVIs derived from two sets of Level 3 MYD09 and VNP09 products with various spatial-temporal characteristics, namely 8-day composites at 500 m spatial resolution and daily Climate Modelling Grid (CMG) images at 0.05° spatial resolution. Spectral adjustment of VIIRS I1 (red) and I2 (near-infrared, NIR) bands to match MODIS/Aqua b1 (red) and b2 (NIR) bands is performed to remove a bias between MODIS- and VIIRS-based red, NIR, and NDVI estimates. Overall, red reflectance, NIR reflectance, and NDVI uncertainties were 0.014, 0.029 and 0.056, respectively, for the 500 m product and 0.013, 0.016 and 0.032 for the 0.05° product. The study shows that MODIS and VIIRS NDVI data can be used interchangeably for
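    For reference, NDVI is computed from the red and NIR surface reflectances, and the cross-sensor spectral adjustment mentioned above is typically a per-band linear correction. The sketch below shows both steps with made-up adjustment coefficients; the actual MODIS/VIIRS coefficients are not given in the abstract.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def adjust_band(reflectance, gain, offset):
    """Linear spectral band adjustment (hypothetical coefficients used below)."""
    return gain * reflectance + offset

# Toy VIIRS I1 (red) and I2 (NIR) surface reflectances.
viirs_red = np.array([0.08, 0.10, 0.12])
viirs_nir = np.array([0.35, 0.40, 0.45])

# Hypothetical gains/offsets mapping VIIRS bands onto MODIS/Aqua b1 and b2.
red_adj = adjust_band(viirs_red, gain=0.98, offset=0.002)
nir_adj = adjust_band(viirs_nir, gain=1.01, offset=-0.001)

print("raw NDVI     :", ndvi(viirs_nir, viirs_red))
print("adjusted NDVI:", ndvi(nir_adj, red_adj))
```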

  16. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable
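    A minimal iid importance-sampling sketch conveys the basic estimator the abstract builds on: samples from π1 are reweighted by π/π1 to estimate an expectation under π, and the CLT gives a routine standard error. The regenerative, multiple-chain machinery described in the paper is not reproduced here; the densities below are chosen only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Target pi: standard normal. Proposal pi_1: Student-t with 3 df (heavier tails).
# Estimate E_pi[h(X)] with h(x) = x**2, whose true value is 1.
n = 100_000
x = rng.standard_t(df=3, size=n)

weights = stats.norm.pdf(x) / stats.t.pdf(x, df=3)   # pi(x) / pi_1(x)
h = x ** 2

values = weights * h
estimate = values.mean()                              # strongly consistent estimator
std_error = values.std(ddof=1) / np.sqrt(n)           # CLT-based standard error (iid case)

print(f"estimate = {estimate:.4f} +/- {1.96 * std_error:.4f}")
```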

  17. Cone penetrometer testing and discrete-depth groundwater sampling techniques: A cost-effective method of site characterization in a multiple-aquifer setting

    International Nuclear Information System (INIS)

    Zemo, D.A.; Pierce, Y.G.; Gallinatti, J.D.

    1992-01-01

    Cone penetrometer testing (CPT), combined with discrete-depth groundwater sampling methods, can reduce significantly the time and expense required to characterize large sites that have multiple aquifers. Results from the screening site characterization can be used to design and install a cost-effective monitoring well network. At a site in northern California, it was necessary to characterize the stratigraphy and the distribution of volatile organic compounds (VOCs) to a depth of 80 feet within a 1/2 mile-by-1/4-mile residential and commercial area in a complex alluvial fan setting. To expedite site characterization, a five-week field screening program was implemented that consisted of a shallow groundwater survey, CPT soundings, and discrete-depth groundwater sampling. Based on continuous lithologic information provided by the CPT soundings, four coarse-grained water-yielding sedimentary packages were identified. Eighty-three discrete-depth groundwater samples were collected using shallow groundwater survey techniques, the BAT Enviroprobe, or the QED HydroPunch 1, depending on subsurface conditions. A 20-well monitoring network was designed and installed to monitor critical points within each sedimentary package. Understanding the vertical VOC distribution and concentrations produced substantial cost savings by minimizing the number of permanent monitoring wells and reducing the number of costly conductor casings to be installed. Significant long-term cost savings will result from reduced sampling costs. Where total VOC concentrations exceeded 20 µg/L in the screening samples, a good correlation was found between the discrete-depth screening data and data from monitoring wells. Using a screening program to characterize the site before installing monitoring wells resulted in an estimated 50-percent reduction in costs for site characterization, 65-percent reduction in time for site characterization, and 50-percent reduction in long-term monitoring costs

  18. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  19. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
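    The probability-machine idea, using a nonparametric regression machine on a 0/1 response to estimate individual probabilities, can be sketched with scikit-learn even though the paper points to R packages. The data below are synthetic and the forest settings are arbitrary; this is an illustration of the approach, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic binary outcome whose true probability depends on two covariates.
n = 5_000
X = rng.normal(size=(n, 2))
true_p = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
y = rng.binomial(1, true_p)

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(X, y, true_p, random_state=0)

# Regression forest on the 0/1 response: its predictions are probability estimates.
forest = RandomForestRegressor(n_estimators=300, min_samples_leaf=20, random_state=0)
forest.fit(X_tr, y_tr)
p_hat = forest.predict(X_te)

print("mean absolute error of probability estimates:", float(np.abs(p_hat - p_te).mean()))
```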

  20. Detection and genotyping of human papillomavirus in self-obtained cervicovaginal samples by using the FTA cartridge: new possibilities for cervical cancer screening.

    NARCIS (Netherlands)

    Lenselink, C.H.; Bie, R.P. de; Hamont, D. van; Bakkers, J.M.J.E.; Quint, W.G.V.; Massuger, L.F.A.G.; Bekkers, R.L.M.; Melchers, W.J.G.

    2009-01-01

    This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF(10)-Line Blot 25. Set 1

  1. On the consistent histories approach to quantum mechanics

    International Nuclear Information System (INIS)

    Dowker, F.; Kent, A.

    1996-01-01

    We review the consistent histories formulations of quantum mechanics developed by Griffiths, Omnes, Gell-Mann, and Hartle, and we describe the classifications of consistent sets. We illustrate some general features of consistent sets by a few lemmas and examples. We also consider various interpretations of the formalism, and we examine the new problems which arise in reconstructing the past and predicting the future. It is shown that Omnes' characterization of true statements (statements that can be deduced unconditionally in his interpretation) is incorrect. We examine critically Gell-Mann and Hartle's interpretation of the formalism, and in particular their discussions of communication, prediction, and retrodiction, and we conclude that their explanation of the apparent persistence of quasiclassicality relies on assumptions about an as-yet-unknown theory of experience. Our overall conclusion is that the consistent histories approach illustrates the need to supplement quantum mechanics by some selection principle in order to produce a fundamental theory capable of unconditional predictions

  2. Basic investigation of the laminated alginate impression technique: Setting time, permanent deformation, elastic deformation, consistency, and tensile bond strength tests.

    Science.gov (United States)

    Kitamura, Aya; Kawai, Yasuhiko

    2015-01-01

    The laminated alginate impression technique for edentulous patients is simple and time-efficient compared to the border molding technique. The purpose of this study was to examine the clinical applicability of the laminated alginate impression by measuring the effects of different water/powder (W/P) ratios and mixing methods, and of different bonding methods between the primary and secondary impressions. Three W/P ratios (the manufacturer-designated mixing water amount (standard), 1.5-fold (1.5×) and 1.75-fold (1.75×) water amounts) were mixed by manual and automatic mixing methods. Initial and complete setting time, permanent and elastic deformation, and consistency of the secondary impression were investigated (n=10). Additionally, tensile bond strength between the primary and secondary impression was measured after the following surface treatments: air blow only (A), surface baking (B), and alginate impression material bonding agent (ALGI-BOND: AB) (n=12). Initial setting times shortened significantly with automatic mixing for all W/P ratios (p<0.05). Permanent deformation decreased and elastic deformation increased at higher W/P ratios, regardless of the mixing method. Elastic deformation was significantly reduced at 1.5× and 1.75× with automatic mixing (p<0.05). All of these properties were within the JIS standards. For all W/P ratios, AB showed a significantly higher bonding strength than A and B (p<0.01). The increase of mixing water to 1.5× and 1.75× kept the setting time within the JIS standards, suggesting its applicability in a clinical setting. The use of an automatic mixing device decreased elastic deformation and shortened the setting time. For the secondary impression, applying adhesive to the primary impression gives secure adhesion. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  3. Recommended ALIs and DACs for 10 CFR Part 20: A consistent numerical set

    Energy Technology Data Exchange (ETDEWEB)

    Eckerman, K.F.

    1996-05-01

    Appendix B to 10 CFR Part 20 contains numerical data for controlling the intake of radionuclides in the workplace or in the environment. These data, derived from the recommendations of the International Commission on Radiological Protection (ICRP), do not provide a numerically consistent basis for demonstrating compliance with the limitation on dose stated in the regulation. This situation is largely a consequence of the numerical procedures used by the ICRP, which did not maintain, in a strict numerical sense, the hierarchical relationship among the radiation protection quantities. In this work, recommended values of the quantities in Appendix B to 10 CFR Part 20 are developed using the dose coefficients of the applicable ICRP publications and a numerical procedure which ensures that the tabulated quantities are numerically consistent.
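    The arithmetic underlying such tables can be illustrated with a short calculation: an annual limit on intake (ALI) is the annual dose limit divided by the committed dose per unit intake, and a derived air concentration (DAC) spreads the ALI over the air a reference worker breathes in a working year (about 2,400 m3). The dose coefficient below is a made-up placeholder, not a value from the report or from ICRP tables.

```python
# Sketch of the ALI/DAC arithmetic (stochastic limit only), assuming a
# hypothetical inhalation dose coefficient; not a value from the report.
ANNUAL_LIMIT_SV = 0.05          # stochastic annual dose limit, Sv
BREATHING_VOLUME_M3 = 2.4e3     # 2,000 h/year x 1.2 m3/h for a reference worker

dose_coefficient_sv_per_bq = 1.0e-8   # hypothetical committed effective dose per Bq inhaled

ali_bq = ANNUAL_LIMIT_SV / dose_coefficient_sv_per_bq   # annual limit on intake, Bq
dac_bq_per_m3 = ali_bq / BREATHING_VOLUME_M3            # derived air concentration, Bq/m3

print(f"ALI = {ali_bq:.2e} Bq, DAC = {dac_bq_per_m3:.2e} Bq/m^3")
```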

  4. Experimental set-up for electrical resistivity measurements at low temperature in amorphous and crystalline metallic samples

    International Nuclear Information System (INIS)

    Rodriquez Fernandez, J.M.; Lopez Sanchez, R.J.; Gomez-Sal, J.C.

    1986-01-01

    An experimental set-up to measure the thermal variation of the electrical resistivity between 10.5 K and 300 K has been developed. A four-probe AC method with a synchronous-detection (lock-in) technique was found suitable for our purposes. We have designed a new type of pressure sample holder adapted to the CS-202 cryostat. Measurements performed on already-characterised samples have allowed us to determine the sensitivity of our experiments, which is Δρ/ρ = 2×10⁻⁴. Measurements performed on the new Y3Rh2Si2 compound, which shows no magnetic ordering at 10 K, are also presented. (author)
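    The quantity being measured can be made concrete with the basic four-probe relation ρ = (V/I)·(A/l) for a bar-shaped sample; the dimensions and readings below are illustrative only, not values from the apparatus described.

```python
# Four-probe resistivity of a bar-shaped sample: rho = (V / I) * (A / l).
# All numbers below are illustrative, not from the apparatus described.
voltage_v = 2.5e-6         # voltage measured across the inner probes, V
current_a = 1.0e-3         # excitation current through the outer probes, A
cross_section_m2 = 1.0e-6  # sample cross-sectional area, m^2
probe_spacing_m = 5.0e-3   # distance between the inner voltage probes, m

resistance_ohm = voltage_v / current_a
resistivity_ohm_m = resistance_ohm * cross_section_m2 / probe_spacing_m

print(f"R = {resistance_ohm:.3e} ohm, rho = {resistivity_ohm_m:.3e} ohm*m")
```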

  5. Detection and genotyping of human papillomavirus in self-obtained cervicovaginal samples by using the FTA cartridge: new possibilities for cervical cancer screening.

    Science.gov (United States)

    Lenselink, Charlotte H; de Bie, Roosmarie P; van Hamont, Dennis; Bakkers, Judith M J E; Quint, Wim G V; Massuger, Leon F A G; Bekkers, Ruud L M; Melchers, Willem J G

    2009-08-01

    This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF(10)-Line Blot 25. Set 1 consisted of 45 women attending the gynecologist; all obtained a self-sampled cervicovaginal smear, which was applied to an FTA cartridge. HPV results were compared to a cervical smear (liquid based) taken by a trained physician. Set 2 consisted of 51 women who obtained a self-sampled cervicovaginal smear at home, which was applied to an FTA cartridge and to a liquid-based medium. DNA was obtained from the FTA cartridges by simple elution as well as extraction. Of all self-obtained samples of set 1, 62.2% tested HPV positive. The overall agreement between self- and physician-obtained samples was 93.3%, in favor of the self-obtained samples. In sample set 2, 25.5% tested HPV positive. The overall agreement for high-risk HPV presence between the FTA cartridge and liquid-based medium and between DNA elution and extraction was 100%. This study shows that HPV detection and genotyping in self-obtained cervicovaginal samples applied to an FTA cartridge is highly reliable. It shows a high level of overall agreement with HPV detection and genotyping in physician-obtained cervical smears and liquid-based self-samples. DNA can be obtained by simple elution and is therefore easy, cheap, and fast. Furthermore, the FTA cartridge is a convenient medium for collection and safe transport at ambient temperatures. Therefore, this method may contribute to a new way of cervical cancer screening.

  6. Quasi-Particle Self-Consistent GW for Molecules.

    Science.gov (United States)

    Kaplan, F; Harding, M E; Seiler, C; Weigend, F; Evers, F; van Setten, M J

    2016-06-14

    We present the formalism and implementation of quasi-particle self-consistent GW (qsGW) and eigenvalue only quasi-particle self-consistent GW (evGW) adapted to standard quantum chemistry packages. Our implementation is benchmarked against high-level quantum chemistry computations (coupled-cluster theory) and experimental results using a representative set of molecules. Furthermore, we compare the qsGW approach for five molecules relevant for organic photovoltaics to self-consistent GW results (scGW) and analyze the effects of the self-consistency on the ground state density by comparing calculated dipole moments to their experimental values. We show that qsGW makes a significant improvement over conventional G0W0 and that partially self-consistent flavors (in particular evGW) can be excellent alternatives.

  7. Objective past of a quantum universe: Redundant records of consistent histories

    Science.gov (United States)

    Riedel, C. Jess; Zurek, Wojciech H.; Zwolak, Michael

    2016-03-01

    Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. The information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.

  8. Multiphase flows of N immiscible incompressible fluids: A reduction-consistent and thermodynamically-consistent formulation and associated algorithm

    Science.gov (United States)

    Dong, S.

    2018-05-01

    We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.

  9. Evaluation of setting time and flow properties of self-synthesize alginate impressions

    Science.gov (United States)

    Halim, Calista; Cahyanto, Arief; Sriwidodo, Harsatiningsih, Zulia

    2018-02-01

    Alginate is an elastic hydrocolloid dental impression material used to obtain a negative reproduction of the oral mucosa, for example to record soft-tissue and occlusal relationships. The aim of the present study was to synthesize alginate and to determine its setting time and flow properties. There were five groups of alginate comprising fifty samples of self-synthesized alginate and a commercial alginate impression product. The fifty samples were divided between two tests, twenty-five samples each for the setting time and flow tests. Setting time was recorded in seconds (s); flow was recorded in mm2. The fastest setting time was in group three (148.8 s) and the slowest in group four. The highest flow was in group three (69.70 mm2) and the lowest in group one (58.34 mm2). Results were analyzed statistically by one-way ANOVA (α = 0.05), which showed a statistically significant difference in setting time, but not in flow properties, between the self-synthesized alginates and the commercial alginate impression product. In conclusion, the alginate impression material was successfully self-synthesized, and variation in composition influences setting time and flow properties. The group whose setting time most resembles the control group is group three; the group whose flow most resembles the control is group four.

  10. Methods for sampling geographically mobile female traders in an East African market setting

    Science.gov (United States)

    Achiro, Lillian; Kwena, Zachary A.; McFarland, Willi; Neilands, Torsten B.; Cohen, Craig R.; Bukusi, Elizabeth A.; Camlin, Carol S.

    2018-01-01

    Background: The role of migration in the spread of HIV in sub-Saharan Africa is well-documented. Yet migration and HIV research have often focused on HIV risks to male migrants and their partners, or migrants overall, often failing to measure the risks to women via their direct involvement in migration. Inconsistent measures of mobility, gender biases in those measures, and limited data sources for sex-specific population-based estimates of mobility have contributed to a paucity of research on the HIV prevention and care needs of migrant and highly mobile women. This study addresses an urgent need for novel methods for developing probability-based, systematic samples of highly mobile women, focusing on a population of female traders operating out of one of the largest open-air markets in East Africa. Our method involves three stages: 1.) identification and mapping of all market stall locations using Global Positioning System (GPS) coordinates; 2.) using female market vendor stall GPS coordinates to build the sampling frame using replicates; and 3.) using maps and GPS data for recruitment of study participants. Results: The locations of 6,390 vendor stalls were mapped using GPS. Of these, 4,064 stalls occupied by women (63.6%) were used to draw four replicates of 128 stalls each, and a fifth replicate of 15 pre-selected random alternates, for a total of 527 stalls assigned to one of five replicates. Staff visited 323 stalls from the first three replicates and from these successfully recruited 306 female vendors into the study, for a participation rate of 94.7%. Mobilization strategies and involving traders' association representatives in participant recruitment were critical to the study's success. Conclusion: The study's high participation rate suggests that this geospatial sampling method holds promise for development of probability-based samples in other settings that serve as transport hubs for highly mobile populations. PMID:29324780
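    Stage 2 of the method described above, building a sampling frame of female-occupied stalls and splitting it into replicates, can be sketched as a simple random partition. The stall counts mirror those reported in the abstract, but the code is only an illustration of the approach, not the study's sampling software.

```python
import numpy as np

rng = np.random.default_rng(2018)

# Stage 1 output (illustrative): IDs of GPS-mapped stalls, flagged by vendor sex.
n_stalls = 6390
stall_ids = np.arange(n_stalls)
occupied_by_women = rng.random(n_stalls) < 0.636   # ~63.6% of stalls, as in the abstract

# Stage 2: sampling frame = female-occupied stalls, shuffled and cut into replicates.
frame = rng.permutation(stall_ids[occupied_by_women])
replicate_sizes = [128, 128, 128, 128, 15]          # four replicates plus alternates
edges = np.cumsum(replicate_sizes)
replicates = np.split(frame[:edges[-1]], edges[:-1])

for i, rep in enumerate(replicates, start=1):
    print(f"replicate {i}: {len(rep)} stalls, first IDs {rep[:3]}")
```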

  11. Orthology and paralogy constraints: satisfiability and consistency.

    Science.gov (United States)

    Lafond, Manuel; El-Mabrouk, Nadia

    2014-01-01

    A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided in two parts: (1) Is C satisfiable, i.e. can we find an event-labeled gene tree G inducing C? (2) Is there such a G which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the Graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm for the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.

  12. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  13. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
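    For the simplest case the two abstracts above mention, where clustering can be ignored and the standard binomial model applies, an LQAS decision rule can be computed directly: choose the smallest threshold d such that a lot whose true coverage sits at the lower standard is accepted with acceptably low probability. The coverage standards, sample size, and error level below are illustrative assumptions, not values from either paper.

```python
from scipy.stats import binom

def lqas_rule(n, p_high, p_low, max_beta=0.10):
    """Smallest decision rule d such that a lot with true coverage p_low is
    (wrongly) classified as acceptable with probability at most max_beta.
    The lot is classified as acceptable when the number of successes is >= d."""
    for d in range(n + 1):
        beta = 1.0 - binom.cdf(d - 1, n, p_low)   # P(X >= d | coverage = p_low)
        if beta <= max_beta:
            alpha = binom.cdf(d - 1, n, p_high)   # P(X < d | coverage = p_high)
            return d, alpha, beta
    return None

# Illustrative vaccination-coverage standards: 80% acceptable, 50% unacceptable.
d, alpha, beta = lqas_rule(n=19, p_high=0.80, p_low=0.50)
print(f"decision rule d = {d}, alpha = {alpha:.3f}, beta = {beta:.3f}")
```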

  14. Detection and Genotyping of Human Papillomavirus in Self-Obtained Cervicovaginal Samples by Using the FTA Cartridge: New Possibilities for Cervical Cancer Screening ▿

    Science.gov (United States)

    Lenselink, Charlotte H.; de Bie, Roosmarie P.; van Hamont, Dennis; Bakkers, Judith M. J. E.; Quint, Wim G. V.; Massuger, Leon F. A. G.; Bekkers, Ruud L. M.; Melchers, Willem J. G.

    2009-01-01

    This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF10-Line Blot 25. Set 1 consisted of 45 women attending the gynecologist; all obtained a self-sampled cervicovaginal smear, which was applied to an FTA cartridge. HPV results were compared to a cervical smear (liquid based) taken by a trained physician. Set 2 consisted of 51 women who obtained a self-sampled cervicovaginal smear at home, which was applied to an FTA cartridge and to a liquid-based medium. DNA was obtained from the FTA cartridges by simple elution as well as extraction. Of all self-obtained samples of set 1, 62.2% tested HPV positive. The overall agreement between self- and physician-obtained samples was 93.3%, in favor of the self-obtained samples. In sample set 2, 25.5% tested HPV positive. The overall agreement for high-risk HPV presence between the FTA cartridge and liquid-based medium and between DNA elution and extraction was 100%. This study shows that HPV detection and genotyping in self-obtained cervicovaginal samples applied to an FTA cartridge is highly reliable. It shows a high level of overall agreement with HPV detection and genotyping in physician-obtained cervical smears and liquid-based self-samples. DNA can be obtained by simple elution and is therefore easy, cheap, and fast. Furthermore, the FTA cartridge is a convenient medium for collection and safe transport at ambient temperatures. Therefore, this method may contribute to a new way of cervical cancer screening. PMID:19553570

  15. Results for five sets of forensic genetic markers studied in a Greek population sample

    DEFF Research Database (Denmark)

    Tomas Mas, Carmen; Skitsa, I; Steinmeier, E

    2015-01-01

    A population sample of 223 Greek individuals was typed for five sets of forensic genetic markers with the kits NGM SElect™, SNPforID 49plex, DIPplex®, Argus X-12 and PowerPlex® Y23. No significant deviation from Hardy-Weinberg expectations was observed for any of the studied markers after Holm...... origin. The Greek population grouped closely to the other European populations as measured by FST* distances. The match probability ranged from 1 in 2×10^7 males, by using haplotype frequencies of four X-chromosome haplogroups in males, to 1 in 1.73×10^21 individuals for 16 autosomal STRs....

  16. The Effect of Financing Policy, Dividends and Firm Profitability on the Investment Opportunity Set (IOS)

    Directory of Open Access Journals (Sweden)

    Akhmad Adi Saputro

    2015-12-01

    Full Text Available The objective of this research is to examine the association of a firm's financing policy, dividend policy and profitability with its investment opportunity set. The sample consists of 28 firms listed on the Jakarta Stock Exchange over the period 1999-2004. Common factor analysis was conducted to construct composite measures, which were then ranked to classify the growth of the sampled firms. Logit regression analysis was used to examine the impact of financing policy, dividend policy and profitability on the investment opportunity set of high- and low-growth firms. The results indicate that the impact of financing policy, proxied by the book value of debt to equity and the market value of debt to equity, on the investment opportunity set is significantly negative. The impact of dividend policy, proxied by dividend yield, on the investment opportunity set is significantly negative, but the dividend payout ratio is not significant. The impact of profitability, proxied by return on assets, on the investment opportunity set is significantly positive.

  17. Developing consistent pronunciation models for phonemic variants

    CSIR Research Space (South Africa)

    Davel, M

    2006-09-01

    Full Text Available Pronunciation lexicons often contain pronunciation variants. This can create two problems: It can be difficult to define these variants in an internally consistent way and it can also be difficult to extract generalised grapheme-to-phoneme rule sets...

  18. Reliability of attitude and knowledge items and behavioral consistency in the validated sun exposure questionnaire in a Danish population based sample

    DEFF Research Database (Denmark)

    Køster, Brian; Søndergaard, Jens; Nielsen, Jesper Bo

    2018-01-01

    An important feature of questionnaire validation is reliability. To be able to measure a given concept by questionnaire validly, the reliability needs to be high. The objectives of this study were to examine reliability of attitude and knowledge and behavioral consistency of sunburn in a developed questionnaire for monitoring and evaluating population sun-related behavior. Sun-related behavior, attitude and knowledge were measured weekly by a questionnaire in the summer of 2013 among 664 Danes. Reliability was tested in a test-retest design. Consistency of behavioral information was tested similarly ... in protection behavior was low. To our knowledge, this is the first study to report reliability for a completely validated questionnaire on sun-related behavior in a national random population-based sample. Further, we show that attitude and knowledge questions confirmed their validity with good reliability.

  19. Consistence of Network Filtering Rules

    Institute of Scientific and Technical Information of China (English)

    SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian

    2004-01-01

    Inconsistency in firewall/VPN (Virtual Private Network) rules leads to high maintenance costs. With the growth of multinational companies, SOHO offices and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables on stand-alone devices or across a network will grow geometrically. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and lay a theoretical foundation for intelligent management of rule tables. In this paper, a formalization of host rules and network rules for automatic rule validation, based on set theory, is proposed, and a rule-validation scheme is defined. The analysis results show the superior performance of the methods and demonstrate their potential for intelligent management based on rule tables.
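
    The following short Python sketch illustrates the kind of set-based consistency check the abstract describes, flagging a later rule that is completely covered (shadowed) by an earlier rule with a different action. The rule fields and the shadowing test are simplifying assumptions for illustration, not the formal model of the paper.

```python
from ipaddress import ip_network
from dataclasses import dataclass

@dataclass
class Rule:
    src: str          # source prefix, e.g. "10.0.0.0/8"
    ports: range      # destination port range
    action: str       # "accept" or "deny"

def covers(r1: Rule, r2: Rule) -> bool:
    """True if every packet matched by r2 is also matched by r1 (r1's match set contains r2's)."""
    net1, net2 = ip_network(r1.src), ip_network(r2.src)
    return (net2.subnet_of(net1)
            and r2.ports.start >= r1.ports.start
            and r2.ports.stop <= r1.ports.stop)

def find_shadowed(rules):
    """A later rule is shadowed (inconsistent) if an earlier rule covers it with a different action."""
    issues = []
    for i, earlier in enumerate(rules):
        for later in rules[i + 1:]:
            if covers(earlier, later) and earlier.action != later.action:
                issues.append((earlier, later))
    return issues

table = [Rule("10.0.0.0/8", range(0, 65536), "deny"),
         Rule("10.1.2.0/24", range(80, 81), "accept")]   # never reached: shadowed by the first rule
print(find_shadowed(table))
```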

  20. Malaria PCR Detection in Cambodian Low-Transmission Settings: Dried Blood Spots versus Venous Blood Samples

    Science.gov (United States)

    Canier, Lydie; Khim, Nimol; Kim, Saorin; Eam, Rotha; Khean, Chanra; Loch, Kaknika; Ken, Malen; Pannus, Pieter; Bosman, Philippe; Stassijns, Jorgen; Nackers, Fabienne; Alipon, SweetC; Char, Meng Chuor; Chea, Nguon; Etienne, William; De Smet, Martin; Kindermans, Jean-Marie; Ménard, Didier

    2015-01-01

    In the context of malaria elimination, novel strategies for detecting very low malaria parasite densities in asymptomatic individuals are needed. One of the major limitations of current malaria parasite detection methods is the volume of the blood samples being analyzed. The objective of the study was to compare the diagnostic accuracy of a malaria polymerase chain reaction assay from dried blood spots (DBS, 5 μL) and from different volumes of venous blood (50 μL, 200 μL, and 1 mL). The limit of detection of the polymerase chain reaction assay, determined using calibrated Plasmodium falciparum blood dilutions, showed that venous blood samples (50 μL, 200 μL, 1 mL) combined with Qiagen extraction methods gave a similar threshold of 100 parasites/mL, ∼100-fold lower than the 5 μL DBS/Instagene method. On a set of 521 field samples collected in two different transmission areas in northern Cambodia, no significant difference was found in the proportion of parasite carriers, regardless of the method used. The 5 μL DBS method missed 27% of the samples detected by the 1 mL venous blood method, but most of the missed parasite carriers were infected by Plasmodium vivax (84%). The remaining missed P. falciparum parasite carriers (N = 3) were only detected in high-transmission areas. PMID:25561570

  1. Consistent application of codes and standards

    International Nuclear Information System (INIS)

    Scott, M.A.

    1989-01-01

    The guidelines presented in the US Department of Energy General Design Criteria (DOE 6430.1A) and the Design and Evaluation Guidelines for Department of Energy Facilities Subject to Natural Phenomena Hazards (UCRL-15910) provide a consistent and well-defined approach to determining the natural phenomena hazards loads for US Department of Energy site facilities. The guidelines for the application of load combinations and allowables criteria are not as well defined and are more flexible in interpretation. This flexibility in the interpretation of load combinations can lead to conflict between the designer and overseer. The establishment of an efficient set of acceptable design criteria, based on US Department of Energy guidelines, provides a consistent baseline for analysis, design, and review. Additionally, the proposed method should not limit the design and analytical innovation necessary to analyze or qualify a unique structure. This paper investigates the consistent application of load combinations, analytical methods, and load allowables and suggests a reference path consistent with the US Department of Energy guidelines.

  2. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Science.gov (United States)

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 design for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design are appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency
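
    The precision comparison above can be sketched with the standard design-effect approximation DEFF = 1 + (m - 1) * ICC, where m is the number of observations per cluster and ICC is the intra-cluster correlation. The prevalence and ICC values in the short Python example below are illustrative assumptions, not the Darfur survey estimates.

```python
import math

def ci_half_width(p, clusters, per_cluster, icc, z=1.96):
    """Approximate 95% CI half-width for a prevalence p under a cluster design."""
    n = clusters * per_cluster
    deff = 1.0 + (per_cluster - 1) * icc
    se = math.sqrt(deff * p * (1.0 - p) / n)
    return z * se

for label, c, m in [("30 x 30", 30, 30), ("33 x 6", 33, 6), ("67 x 3", 67, 3)]:
    hw = ci_half_width(p=0.20, clusters=c, per_cluster=m, icc=0.05)
    print(f"{label}: n = {c * m:4d}, 95% CI half-width ~ {hw:.3f}")
```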

  3. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Directory of Open Access Journals (Sweden)

    Deconinck Hedwig

    2008-05-01

    Full Text Available Abstract The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster and a 67 × 3 cluster design (67 clusters, 3 observations per cluster. Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals than the 33 × 6 and 67 × 3 design for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design are appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data

  4. Sample Set (SE): SE57 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available ...etabolite accumulation patterns in plants. We optimized the MRM conditions for specific compounds by performing automate...applied to high-throughput automated analysis of biological samples using TQMS coupled with ultra performanc...ies, and family-specific metabolites could be predicted using a batch-learning self-organizing map analysis. Thus, the automate...

  5. Coupled Dyson-Schwinger equations and effects of self-consistency

    International Nuclear Information System (INIS)

    Wu, S.S.; Zhang, H.X.; Yao, Y.J.

    2001-01-01

    Using the σ-ω model as an effective tool, the effects of self-consistency are studied in some detail. A coupled set of Dyson-Schwinger equations for the renormalized baryon and meson propagators in the σ-ω model is solved self-consistently according to the dressed Hartree-Fock scheme, where the hadron propagators in both the baryon and meson self-energies are required to also satisfy this coupled set of equations. It is found that the self-consistency affects the baryon spectral function noticeably, if only the interaction with σ mesons is considered. However, there is a cancellation between the effects due to the σ and ω mesons and the additional contribution of ω mesons makes the above effect insignificant. In both the σ and σ-ω cases the effects of self-consistency on meson spectral function are perceptible, but they can nevertheless be taken account of without a self-consistent calculation. Our study indicates that to include the meson propagators in the self-consistency requirement is unnecessary and one can stop at an early step of an iteration procedure to obtain a good approximation to the fully self-consistent results of all the hadron propagators in the model, if an appropriate initial input is chosen. Vertex corrections and their effects on ghost poles are also studied

  6. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance in glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, and a fault was defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also the end of the fault when the fermentation conditions were back to normal. The proposed approach used only a small sample set from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
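
    A minimal sketch of the GAM-plus-bootstrap idea described above is given below, assuming the third-party pygam package is available. The smooth terms, the number of bootstrap replicates and the 95% band are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from pygam import LinearGAM, s

def fit_gam(X, y):
    # one smooth term per fermentation parameter (time, DO, OUR, CER)
    return LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)

def bootstrap_band(X_train, y_train, X_new, n_boot=200, seed=0):
    """Percentile-bootstrap 95% band for predicted glutamate production at X_new."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)    # resample training points with replacement
        preds.append(fit_gam(X_train[idx], y_train[idx]).predict(X_new))
    preds = np.array(preds)
    return np.percentile(preds, 2.5, axis=0), np.percentile(preds, 97.5, axis=0)

def flag_faults(y_observed, lower, upper):
    """A time point is flagged as a fault when the observation leaves the 95% band."""
    return (y_observed < lower) | (y_observed > upper)
```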

  7. Implementation of antimicrobial peptides for sample preparation prior to nucleic acid amplification in point-of-care settings.

    Science.gov (United States)

    Krõlov, Katrin; Uusna, Julia; Grellier, Tiia; Andresen, Liis; Jevtuševskaja, Jekaterina; Tulp, Indrek; Langel, Ülo

    2017-12-01

    A variety of sample preparation techniques are used prior to nucleic acid amplification. However, their efficiency is not always sufficient and nucleic acid purification remains the preferred method for template preparation. Purification is difficult and costly to apply in point-of-care (POC) settings and there is a strong need for more robust, rapid, and efficient biological sample preparation techniques in molecular diagnostics. Here, the authors applied antimicrobial peptides (AMPs) for urine sample preparation prior to isothermal loop-mediated amplification (LAMP). AMPs bind to many microorganisms such as bacteria, fungi, protozoa and viruses causing disruption of their membrane integrity and facilitate nucleic acid release. The authors show that incubation of E. coli with antimicrobial peptide cecropin P1 for 5 min had a significant effect on the availability of template DNA compared with untreated or even heat treated samples resulting in up to six times increase of the amplification efficiency. These results show that AMPs treatment is a very efficient sample preparation technique that is suitable for application prior to nucleic acid amplification directly within biological samples. Furthermore, the entire process of AMPs treatment was performed at room temperature for 5 min thereby making it a good candidate for use in POC applications.

  8. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots.

    Science.gov (United States)

    Gilbert, Hunter B; Webster, Robert J

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C.
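
    As a toy illustration of the temperature regulation discussed above, the self-contained simulation below applies simple on/off (hysteresis) control of Joule heating around a shape-setting target temperature. The thermal constants are made-up illustrative values, not parameters of the system in the paper.

```python
def simulate(target=500.0, band=5.0, ambient=25.0, steps=2000, dt=0.01):
    """First-order thermal model with bang-bang control of the heating current."""
    temp, heating = ambient, False
    heat_rate, cool_coeff = 400.0, 0.2          # degC/s heating power, 1/s cooling constant
    history = []
    for _ in range(steps):
        # switch with hysteresis so the heater does not chatter around the setpoint
        if temp < target - band:
            heating = True
        elif temp > target + band:
            heating = False
        dT = (heat_rate if heating else 0.0) - cool_coeff * (temp - ambient)
        temp += dT * dt
        history.append(temp)
    return history

temps = simulate()
print(f"final temperature: {temps[-1]:.1f} degC")
```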

  9. Consistency and Communication in Committees

    OpenAIRE

    Inga Deimen; Felix Ketelaar; Mark T. Le Quement

    2013-01-01

    This paper analyzes truthtelling incentives in pre-vote communication in heterogeneous committees. We generalize the classical Condorcet jury model by introducing a new informational structure that captures consistency of information. In contrast to the impossibility result shown by Coughlan (2000) for the classical model, full pooling of information followed by sincere voting is an equilibrium outcome of our model for a large set of parameter values implying the possibility of ex post confli...

  10. Parquet equations for numerical self-consistent-field theory

    International Nuclear Information System (INIS)

    Bickers, N.E.

    1991-01-01

    In recent years increases in computational power have provided new motivation for the study of self-consistent-field theories for interacting electrons. In this set of notes, the so-called parquet equations for electron systems are derived pedagogically. The principal advantages of the parquet approach are outlined, and its relationship to simpler self-consistent-field methods, including the Baym-Kadanoff technique, is discussed in detail. (author). 14 refs, 9 figs

  11. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially ra tional strategy profile. Consistent beliefs are limits of Bayes ratio nal beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationaliz ed by some single conjecture concerning opponents' strategies. Consis tent beliefs are not necessarily structurally consistent, notwithstan ding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  12. Personalized recommendation based on unbiased consistence

    Science.gov (United States)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite network provide an efficient solution by automatically pushing possible relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms just focus on unidirectional mass diffusion from objects having been collected to those which should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases, a user's interests are stable, and thus bidirectional mass diffusion abilities, no matter originated from objects having been collected or from those which should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming the state-of-the-art recommendation algorithms in disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
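
    The NumPy sketch below illustrates standard mass-diffusion (ProbS-style) scoring on a user-item bipartite network and a simple bidirectional variant that averages forward and reverse diffusion scores. The combination rule is an illustrative assumption, not the exact algorithm proposed in the letter.

```python
import numpy as np

def diffusion_matrix(A):
    """Item-to-item mass-diffusion transfer matrix for a binary user-item matrix A."""
    user_deg = A.sum(axis=1, keepdims=True)        # k_u
    item_deg = A.sum(axis=0, keepdims=True)        # k_i
    user_deg[user_deg == 0] = 1
    item_deg[item_deg == 0] = 1
    # W[i, j]: resource flowing from item j to item i through common users
    return (A / user_deg).T @ A / item_deg

def recommend(A, user, bidirectional=True):
    a_u = A[user]                                  # items the user has collected
    W = diffusion_matrix(A)
    forward = W @ a_u                              # diffusion from collected items to candidates
    scores = 0.5 * (forward + W.T @ a_u) if bidirectional else forward
    scores[a_u > 0] = -np.inf                      # do not re-recommend collected items
    return np.argsort(-scores)

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(recommend(A, user=0))
```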

  13. Consistently low prevalence of syphilis among female sex workers in Jinan, China: findings from two consecutive respondent driven sampling surveys.

    Directory of Open Access Journals (Sweden)

    Meizhen Liao

    Full Text Available BACKGROUND: Routine surveillance using convenience sampling found a low prevalence of HIV and syphilis among female sex workers in China. Two consecutive surveys using respondent driven sampling were conducted in 2008 and 2009 to examine the prevalence of HIV and syphilis among female sex workers in Jinan, China. METHODS: A face-to-face interview was conducted to collect demographic, behavioral and service utilization information using a structured questionnaire. Blood samples were drawn for serological tests of HIV-1 antibody and syphilis antibody. The Respondent Driven Sampling Analysis Tool was used to generate population-level estimates. RESULTS: In 2008 and 2009, 363 and 432 subjects were recruited and surveyed, respectively. The prevalence of syphilis was 2.8% in 2008 and 2.2% in 2009, while no HIV cases were found in either year. Results are comparable to those from the routine sentinel surveillance system in the city. Only 60.8% of subjects in 2008 and 48.3% in 2009 reported consistent condom use with clients during the past month. Over 50% of subjects had not been covered by any HIV-related services in the past year, with only 15.6% of subjects in 2008 and 13.1% in 2009 ever tested for HIV. CONCLUSIONS: Despite the low prevalence of syphilis and HIV, risk behaviors are common. Targeted interventions to promote safe sex and the utilization of existing intervention services are still needed to keep the epidemic from growing.

  14. Sampling on Quasicrystals

    OpenAIRE

    Grepstad, Sigrid

    2011-01-01

    We prove that quasicrystals are universal sets of stable sampling in any dimension. Necessary and sufficient density conditions for stable sampling and interpolation sets in one dimension are studied in detail.

  15. Using the Perceptron Algorithm to Find Consistent Hypotheses

    OpenAIRE

    Anthony, M.; Shawe-Taylor, J.

    1993-01-01

    The perceptron learning algorithm yields quite naturally an algorithm for finding a linearly separable boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that this algorithm is not efficient, in general.
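
    For concreteness, the sketch below runs the perceptron update rule until it finds a linear threshold function consistent with a linearly separable boolean sample, which is the procedure this record refers to (the inefficiency result concerns worst-case running time, not correctness).

```python
import numpy as np

def perceptron_consistent(X, y, max_epochs=10_000):
    """X: (n, d) array of boolean examples (0/1); y: labels in {-1, +1}.
    Returns a weight vector (bias as the last component) consistent with the sample,
    provided the sample is linearly separable and max_epochs is large enough."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # absorb the bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:                 # misclassified (or on the boundary)
                w += yi * xi                       # perceptron update
                mistakes += 1
        if mistakes == 0:
            return w                               # consistent with every example
    raise RuntimeError("no consistent hypothesis found within the epoch budget")

# Example: the boolean OR function is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, 1, 1, 1])
print(perceptron_consistent(X, y))
```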

  16. Consistency of blood pressure differences between the left and right arms.

    Science.gov (United States)

    Eguchi, Kazuo; Yacoub, Mona; Jhalani, Juhee; Gerin, William; Schwartz, Joseph E; Pickering, Thomas G

    2007-02-26

    It is unclear to what extent interarm blood pressure (BP) differences are reproducible vs the result of random error. The present study was designed to resolve this issue. We enrolled 147 consecutive patients from a hypertension clinic. Three sets of 3 BP readings were recorded, first using 2 oscillometric devices simultaneously in the 2 arms (set 1); next, 3 readings were taken sequentially for each arm using a standard mercury sphygmomanometer (set 2); finally, the readings as performed for set 1 were repeated (set 3). The protocol was repeated at a second visit for 91 patients. Large interarm systolic BP differences were consistently seen in 2 patients with obstructive arterial disease. In the remaining patients, the systolic BP and the diastolic BP, respectively, were slightly higher in the right arm than in the left arm by 2 to 3 mm Hg and by 1 mm Hg for all 3 sets (Pdifference of more than 5 mm Hg were 11 (7.5%) and 4 (2.7%) across all 3 sets of readings. Among patients who repeated the test, none had a consistent interarm BP difference of more than 5 mm Hg across the 2 visits. The interarm BP difference was consistent only when obstructive arterial disease was present. Although BP in the right arm tended to be higher than in the left arm, clinically meaningful interarm differences were not reproducible in the absence of obstructive arterial disease and are attributable to random variation.

  17. Square-wave anodic-stripping voltammetric determination of Cd, Pb and Cu in wine: Set-up and optimization of sample pre-treatment and instrumental parameters

    International Nuclear Information System (INIS)

    Illuminati, Silvia; Annibaldi, Anna; Truzzi, Cristina; Finale, Carolina; Scarponi, Giuseppe

    2013-01-01

    For the first time, square-wave anodic-stripping voltammetry (SWASV) was set up and optimized for the determination of Cd, Pb and Cu in white wine after UV photo-oxidative digestion of the sample. The best procedure for the sample pre-treatment consisted in a 6-h UV irradiation of diluted, acidified wine, with the addition of ultrapure H2O2 (three sequential additions during the irradiation). Due to metal concentration differences, separate measurements were carried out for Cd (deposition potential −950 mV vs. Ag/AgCl/3 M KCl, deposition time 15 min) and simultaneously for Pb and Cu (E_d −750 mV, t_d 30 s). The optimum settings of the main instrumental parameters, evaluated also in terms of the signal-to-noise ratio, were as follows: E_SW 20 mV, f 100 Hz, ΔE_step 8 mV, t_step 100 ms, t_wait 60 ms, t_delay 2 ms, t_meas 3 ms. The electrochemical behaviour was reversible bielectronic for Cd and Pb, and kinetically controlled monoelectronic for Cu. Good accuracy was found both when the recovery procedure was used and when the results were compared with data obtained by differential pulse anodic stripping voltammetry. The linearity of the response was verified up to ∼4 μg L−1 for Cd and Pb and ∼15 μg L−1 for Cu. The detection limits for t_d = 5 min in the 10-times diluted, UV-digested sample were (ng L−1): Cd 7.0, Pb 1.2 and Cu 6.6, which are well below those of currently applied methods. Application to a Verdicchio dei Castelli di Jesi white wine revealed concentration levels of Cd ∼0.2, Pb ∼10 and Cu ∼30 μg L−1, with repeatabilities of (±RSD%) Cd ±6%, Pb ±5% and Cu ±10%.

  18. Parton Distributions based on a Maximally Consistent Dataset

    Science.gov (United States)

    Rojo, Juan

    2016-04-01

    The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this has to do with the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective, definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be mutually in agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of their implications for LHC phenomenology, finding also good consistency with the global fit result. These results provide a non-trivial validation test of the new NNPDF3.0 fitting methodology, and indicate that possible inconsistencies in the fitted dataset do not affect substantially the global fit PDFs.

  19. The aug-cc-pVnZ-F12 basis set family: Correlation consistent basis sets for explicitly correlated benchmark calculations on anions and noncovalent complexes.

    Science.gov (United States)

    Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L

    2017-10-07

    We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.

  20. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad

    2016-12-09

    Efficient and accurate representation of a collection of images, that belong to the same class, is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
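
    As a minimal illustration of the building block used here, the sketch below implements a single-layer extreme learning machine: a fixed random hidden layer followed by a closed-form ridge solution for the output weights. Layer sizes, the activation and the regularizer are illustrative choices, not the deep architecture of the paper.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=256, reg=1e-3, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def fit(self, X, T):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights (never trained)
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                         # hidden activations
        # ridge-regularized least squares for the output weights (the only learned parameters)
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ T)
        return self

    def transform(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage: learn a compact representation of an "image set" (here random vectors) by
# reconstructing the inputs, as an autoencoder-style ELM layer would.
X = np.random.default_rng(1).normal(size=(50, 32))
codes = ELM(n_hidden=64).fit(X, X).transform(X)
print(codes.shape)
```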

  1. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad; Shafait, Faisal; Ghanem, Bernard; Mian, Ajmal

    2016-01-01

    Efficient and accurate representation of a collection of images, that belong to the same class, is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.

  2. A new set-up for simultaneous high-precision measurements of CO2, δ13C-CO2 and δ18O-CO2 on small ice core samples

    Science.gov (United States)

    Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas

    2016-08-01

    Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement

  3. Comparison of established and novel purity tests for the quality control of heparin by means of a set of 177 heparin samples.

    Science.gov (United States)

    Alban, Susanne; Lühn, Susanne; Schiemann, Simone; Beyer, Tanja; Norwig, Jochen; Schilling, Claudia; Rädler, Oliver; Wolf, Bernhard; Matz, Magnus; Baumann, Knut; Holzgrabe, Ulrike

    2011-01-01

    The widespread occurrence of heparin contaminated with oversulfated chrondroitin sulfate (OSCS) in 2008 initiated a comprehensive revision process of the Pharmacopoeial heparin monographs and stimulated research in analytical techniques for the quality control of heparin. Here, a set of 177 heparin samples from the market in 2008 as well as pure heparin sodium spiked with defined amounts of OSCS and DS were used to evaluate established and novel methods for the quality control of heparin. Besides (1)H nuclear magnetic resonance spectroscopy (NMR), the assessment included two further spectroscopic methods, i.e., attenuated total reflection-infrared spectroscopy (ATR-IR) and Raman spectroscopy, three coagulation assays, i.e., activated partial thromboplastin time (aPTT) performed with both sheep and human plasma and the prothrombin time (PT), and finally two novel purity assays, each consisting of an incubation step with heparinase I followed by either a fluorescence measurement (Inc-PolyH-assay) or by a chromogenic aXa-assay (Inc-aXa-assay). NMR was shown to allow not only sensitive detection, but also quantification of OSCS by using the peak-height method and a response factor determined by calibration. Chemometric evaluation of the NMR, ATR-IR, and Raman spectra by statistical classification techniques turned out to be best with NMR spectra concerning the detection of OSCS. The validity of the aPTT, the current EP assay, could be considerably improved by replacing the sheep plasma by human plasma. In this way, most of the contaminated heparin samples did not meet the novel potency limit of 180 IU/mg. However, also more than 50% of the uncontaminated samples had interpretation of the results.

  4. Determination of Cs-134 and Cs-137 in rain water samples

    International Nuclear Information System (INIS)

    Lima, M.F.; Mazzilli, B.

    1988-01-01

    In order to set up an environmental monitoring program at IPEN, a fast and simple methodology was developed for the concentration of Cs-134 and Cs-137 in rain water. This procedure consists of the precipitation of cesium and other cations of its family (NH4+, K+ and Rb+) by ammonium molybdophosphate. The disintegration rates of Cs-134 and Cs-137 were measured by gamma spectrometry with a Ge(Li) detector. After setting up the ideal experimental conditions, the procedure was used to analyze four samples of rain water. (author) [pt

  5. Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.

    Science.gov (United States)

    Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S

    2011-01-01

    Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and whether or not concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L) based on combinations of the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and the somatic cell count were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species, except Staphylococcus aureus, the sensitivity of all definitions was definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97
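
    In the spirit of the evaluation described above, the small helper below computes the sensitivity and specificity of a single-sample IMI definition against a gold-standard classification based on triplicate samples; the example vectors are invented for illustration only.

```python
def sens_spec(test_positive, gold_positive):
    """Both arguments are equal-length sequences of booleans, one entry per quarter."""
    tp = sum(t and g for t, g in zip(test_positive, gold_positive))
    fn = sum((not t) and g for t, g in zip(test_positive, gold_positive))
    tn = sum((not t) and (not g) for t, g in zip(test_positive, gold_positive))
    fp = sum(t and (not g) for t, g in zip(test_positive, gold_positive))
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

print(sens_spec([True, True, False, False, True], [True, False, False, False, True]))
```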

  6. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    Science.gov (United States)

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.
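
    For reference, the effect sizes reported above are Cohen's d values; a minimal pooled-standard-deviation implementation is sketched below with invented example scores (not data from the study).

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d with a pooled standard deviation (e.g., over-reporters vs. comparison cases)."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

print(cohens_d([72, 80, 95, 88, 77], [55, 60, 58, 66, 52]))
```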

  7. Sample Set (SE): SE47 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE47 Metabolomic correlation-network modules in Arabidopsis based on a graph-cluste...as a system. Typical metabolomics data show a few but significant correlations among metabolite levels when ...itions. Although several studies have assessed topologies in metabolomic correlation networks, it remains un... (mto1), and transparent testa4 (tt4) to compare systematically the metabolomic correlation...s in samples of roots and aerial parts. We then applied graph clustering to the constructed correlation
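
    A generic sketch of the workflow this record outlines is given below: build a correlation network from metabolite profiles, keep strong edges, and detect modules with graph clustering. The correlation threshold and the community-detection routine are illustrative choices, with the networkx package assumed available.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def correlation_modules(profiles, metabolite_names, threshold=0.7):
    """profiles: (samples, metabolites) array of metabolite levels."""
    corr = np.corrcoef(profiles, rowvar=False)
    G = nx.Graph()
    G.add_nodes_from(metabolite_names)
    n = len(metabolite_names)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                G.add_edge(metabolite_names[i], metabolite_names[j], weight=abs(corr[i, j]))
    return [set(c) for c in greedy_modularity_communities(G)]

# Toy usage with two strongly correlated metabolite pairs.
rng = np.random.default_rng(0)
base = rng.normal(size=(30, 2))
profiles = np.column_stack([base[:, 0], base[:, 0] * 1.1, base[:, 1], base[:, 1] * 0.9])
print(correlation_modules(profiles, ["m1", "m2", "m3", "m4"], threshold=0.7))
```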

  8. Consistency of variables in PCS and JASTRO great area database

    International Nuclear Information System (INIS)

    Nishino, Tomohiro; Teshima, Teruki; Abe, Mitsuyuki

    1998-01-01

    To examine whether the Patterns of Care Study (PCS) reflects the data for the major areas in Japan, the consistency of variables in the PCS and in the major area database of the Japanese Society for Therapeutic Radiology and Oncology (JASTRO) were compared. Patients with esophageal or uterine cervical cancer were sampled from the PCS and JASTRO databases. From the JASTRO database, 147 patients with esophageal cancer and 95 patients with uterine cervical cancer were selected according to the eligibility criteria for the PCS. From the PCS, 455 esophageal and 432 uterine cervical cancer patients were surveyed. Six items for esophageal cancer and five items for uterine cervical cancer were selected for a comparative analysis of PCS and JASTRO databases. Esophageal cancer: Age (p=.0777), combination of radiation and surgery (p=.2136), and energy of the external beam (p=.6400) were consistent for PCS and JASTRO. However, the dose of the external beam for the non-surgery group showed inconsistency (p=.0467). Uterine cervical cancer: Age (p=.6301) and clinical stage (p=.8555) were consistent for the two sets of data. However, the energy of the external beam (p<.0001), dose rate of brachytherapy (p<.0001), and brachytherapy utilization by clinical stage (p<.0001) showed inconsistencies. It appears possible that the JASTRO major area database could not account for all patients' backgrounds and factors and that both surveys might have an imbalance in the stratification of institutions including differences in equipment and staffing patterns. (author)

  9. Simple way to avoid underestimating uncertainties of the evaluated values for sets of consistent data: a proposal for an improvement of the evaluations

    International Nuclear Information System (INIS)

    Chechev, V.P.

    2001-01-01

    To avoid underestimating the uncertainty of the evaluated values for sets of consistent data the following rule is proposed: if the smallest of the input measurement uncertainties (σ min ) is more than the uncertainty obtained from statistical data processing, the σ min should be used as a final uncertainty of the evaluated value. This rule is justified by the fact that almost any measurement is indirect and the total uncertainty of any precise measurement includes mainly the systematic error of the measurement method. Exceptions can be only for measured data obtained by essentially different methods (for example, half life measurements by calorimetry and specific activity determination)
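
    A small numerical sketch of the proposed rule follows: combine consistent measurements with an inverse-variance weighted mean, then report the larger of the statistical uncertainty and the smallest input uncertainty. The input values are illustrative.

```python
import numpy as np

def evaluate(values, sigmas):
    values, sigmas = np.asarray(values, float), np.asarray(sigmas, float)
    w = 1.0 / sigmas**2
    mean = np.sum(w * values) / np.sum(w)
    sigma_stat = 1.0 / np.sqrt(np.sum(w))          # internal (statistical) uncertainty
    sigma_final = max(sigma_stat, sigmas.min())    # the rule proposed in this record
    return mean, sigma_final

print(evaluate([10.02, 10.05, 10.03], [0.03, 0.04, 0.05]))
```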

  10. Prediction consistency and clinical presentations of breast cancer molecular subtypes for Han Chinese population

    Directory of Open Access Journals (Sweden)

    Huang Chi-Cheng

    2012-09-01

    Full Text Available Abstract Background Breast cancer is a heterogeneous disease in terms of transcriptional aberrations; moreover, microarray gene expression profiles have defined 5 molecular subtypes based on certain intrinsic genes. This study aimed to evaluate the prediction consistency of breast cancer molecular subtypes from 3 distinct intrinsic gene sets (Sørlie 500, Hu 306 and PAM50) as well as the clinical presentations of each molecular subtype in the Han Chinese population. Methods In all, 169 breast cancer samples (44 from Taiwan and 125 from China) of Han Chinese population were gathered, and the gene expression features corresponding to the 3 distinct intrinsic gene sets (Sørlie 500, Hu 306 and PAM50) were retrieved for molecular subtype prediction. Results For the Sørlie 500 and Hu 306 intrinsic gene sets, mean-centring of genes and distance-weighted discrimination (DWD) remarkably reduced the number of unclassified cases. Regarding pairwise agreement, the highest predictive consistency was found between Hu 306 and PAM50. In all, 150 and 126 samples were assigned to identical subtypes by both Hu 306 and PAM50 genes, under mean-centring and DWD. Luminal B tended to show a higher nuclear grade and more HER2 over-expression than luminal A did. No basal-like breast tumours were ER positive, and most HER2-enriched breast tumours showed HER2 over-expression, whereas only two-thirds of ER-negative/HER2 over-expressing tumours were predicted as the HER2-enriched molecular subtype. For 44 Taiwanese breast cancers with survival data, a better prognosis of the luminal A than the luminal B subtype in ER-positive breast cancers and a better prognosis of the basal-like than the HER2-enriched subtype in ER-negative breast cancers was observed. Conclusions We suggest that the intrinsic signature Hu 306 or PAM50 be used for breast cancers in the Han Chinese population during molecular subtyping. For the prognostic value and decision making based on intrinsic subtypes, further prospective

  11. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  12. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    Science.gov (United States)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consist of conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory.

  13. International spinal cord injury musculoskeletal basic data set

    DEFF Research Database (Denmark)

    Biering-Sørensen, Fin; Burns, A S; Curt, A

    2012-01-01

    To develop an International Spinal Cord Injury (SCI) Musculoskeletal Basic Data Set as part of the International SCI Data Sets to facilitate consistent collection and reporting of basic musculoskeletal findings in the SCI population. Setting: International.

  14. Consistent Estimation of Continuous-Time Signals from Nonlinear Transformations of Noisy Samples,

    Science.gov (United States)

    1980-03-10

    t, then hn is given by (5) (with W = n) and represents the Szasz operator. Theorem 3.0, while guaranteeing mean-square consistency of the estimate Sw...t), provides no bounds on the rate of convergence. We shall derive such bounds for linear systems hW corresponding to the class of generalized Szasz ...operators [6] (see below) and to the Bernstein operator. While the Szasz operator (5) can be generated as in Proposition 3.0, the class of generalized

  15. A novel atmospheric tritium sampling system

    Science.gov (United States)

    Qin, Lailai; Xia, Zhenghai; Gu, Shaozhong; Zhang, Dongxun; Bao, Guangliang; Han, Xingbo; Ma, Yuhua; Deng, Ke; Liu, Jiayu; Zhang, Qin; Ma, Zhaowei; Yang, Guo; Liu, Wei; Liu, Guimin

    2018-06-01

    The health hazard of tritium is related to its chemical form. Sampling different chemical forms of tritium simultaneously therefore becomes significant. Here a novel atmospheric tritium sampling system (TS-212) was developed to collect tritiated water (HTO), tritiated hydrogen (HT) and tritiated methane (CH3T) simultaneously. It consisted of an air inlet system, three parallel-connected sampling channels, a hydrogen supply module, a methane supply module and a remote control system. It worked at an air flow rate of 1 L/min to 5 L/min, with the catalyst furnace temperature at 200 °C for HT sampling and 400 °C for CH3T sampling. Conversion rates of both HT and CH3T to HTO were larger than 99%. The collection efficiency of the two-stage trap sets for HTO was larger than 96% over a 12 h working time without blockage. Therefore, the collection efficiencies of TS-212 are larger than 95% for the different chemical forms of tritium in the environment. In addition, the remote control system made sampling more intelligent and reduced the operator's workload. Based on the performance parameters described above, the TS-212 can be used to sample atmospheric tritium in its different chemical forms.

  16. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    Monte Carlo sampling schemes of available evaluation methods. The second improvement concerns Bayesian evaluation methods based on a certain simplification of the nuclear model. These methods were restricted to the consistent evaluation of tens of thousands of observables. In this thesis, a new evaluation scheme has been developed, which is mathematically equivalent to existing methods, but allows the consistent evaluation of dozens of millions of observables. The new scheme is suited for the implementation as a database application. The realization of such an application with public access can help to accelerate the production of reliable nuclear data sets. Furthermore, in combination with the novel treatment of model deficiencies, problems of the model and the experimental data can be tracked down without user interaction. This feature can foster the development of nuclear models with high predictive power. (author) [de

  17. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    Science.gov (United States)

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Consistently Showing Your Best Side? Intra-individual Consistency in #Selfie Pose Orientation

    Science.gov (United States)

    Lindell, Annukka K.

    2017-01-01

    Painted and photographic portraits of others show an asymmetric bias: people favor their left cheek. Both experimental and database studies confirm that the left cheek bias extends to selfies. To date all such selfie studies have been cross-sectional; whether individual selfie-takers tend to consistently favor the same pose orientation, or switch between multiple poses, remains to be determined. The present study thus examined intra-individual consistency in selfie pose orientations. Two hundred selfie-taking participants (100 male and 100 female) were identified by searching #selfie on Instagram. The most recent 10 single-subject selfies for each of the participants were selected and coded for type of selfie (normal; mirror) and pose orientation (left, midline, right), resulting in a sample of 2000 selfies. Results indicated that selfie-takers do tend to consistently adopt a preferred pose orientation (α = 0.72), with more participants showing an overall left cheek bias (41%) than would be expected by chance (overall right cheek bias = 31.5%; overall midline bias = 19.5%; no overall bias = 8%). Logistic regression modelling, controlling for the repeated measure of participant identity, indicated that sex did not affect pose orientation. However, selfie type proved a significant predictor when comparing left and right cheek poses, with a stronger left cheek bias for mirror than normal selfies. Overall, these novel findings indicate that selfie-takers show intra-individual consistency in pose orientation and, in addition, replicate the previously reported left cheek bias for selfies and other types of portrait, confirming that the left cheek bias also presents within individuals' selfie corpora. PMID:28270790

  19. Sets in Coq, Coq in Sets

    Directory of Open Access Journals (Sweden)

    Bruno Barras

    2010-01-01

    Full Text Available This work is about formalizing models of various type theories of the Calculus of Constructions family. Here we focus on set-theoretical models. The long-term goal is to build a formal set-theoretical model of the Calculus of Inductive Constructions, so we can be sure that Coq is consistent with the language used by most mathematicians. One aspect of this work is to axiomatize several set theories: ZF, possibly with inaccessible cardinals, and HF, the theory of hereditarily finite sets. On top of these theories we have developed a piece of the usual set-theoretical construction of functions, ordinals and fixpoint theory. We then proved sound several models of the Calculus of Constructions, its extension with an infinite hierarchy of universes, and its extension with the inductive type of natural numbers where recursion follows the type-based termination approach. The other aspect is to try and discharge (most of) these assumptions. The goal here is rather to compare the theoretical strengths of all these formalisms. As already noticed by Werner, the replacement axiom of ZF in its general form seems to require a type-theoretical axiom of choice (TTAC).

  20. Reformulating classical and quantum mechanics in terms of a unified set of consistency conditions

    International Nuclear Information System (INIS)

    Bordley, R.F.

    1983-01-01

    This paper imposes consistency conditions on the path of a particle and shows that they imply Hamilton's principle in classical contexts and Schroedinger's equation in quantum mechanical contexts. Thus this paper provides a common, intuitive foundation for classical and quantum mechanics. It also provides a very new perspective on quantum mechanics. (author)

  1. Vibrational frequency scaling factors for correlation consistent basis sets and the methods CC2 and MP2 and their spin-scaled SCS and SOS variants

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)

    2014-11-21

    We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2), with and without spin-component scaling (SCS) or spin-opposite scaling (SOS). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to a slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order and their spin-component scaled variants.

  2. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.

    Science.gov (United States)

    Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly

    2015-09-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

  3. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

    Science.gov (United States)

    Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly

    2013-01-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818

  4. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    Science.gov (United States)

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. Simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  5. Optimal consistency in microRNA expression analysis using reference-gene-based normalization.

    Science.gov (United States)

    Wang, Xi; Gardiner, Erin J; Cairns, Murray J

    2015-05-01

    Normalization of high-throughput molecular expression profiles secures differential expression analysis between samples of different phenotypes or biological conditions, and facilitates comparison between experimental batches. While the same general principles apply to microRNA (miRNA) normalization, there is mounting evidence that global shifts in their expression patterns occur in specific circumstances, which pose a challenge for normalizing miRNA expression data. As an alternative to global normalization, which has the propensity to flatten large trends, normalization against constitutively expressed reference genes presents an advantage through their relative independence. Here we investigated the performance of reference-gene-based (RGB) normalization for differential miRNA expression analysis of microarray expression data, and compared the results with other normalization methods, including: quantile, variance stabilization, robust spline, simple scaling, rank invariant, and Loess regression. The comparative analyses were executed using miRNA expression in tissue samples derived from subjects with schizophrenia and non-psychiatric controls. We proposed a consistency criterion for evaluating methods by examining the overlapping of differentially expressed miRNAs detected using different partitions of the whole data. Based on this criterion, we found that RGB normalization generally outperformed global normalization methods. Thus we recommend the application of RGB normalization for miRNA expression data sets, and believe that this will yield a more consistent and useful readout of differentially expressed miRNAs, particularly in biological conditions characterized by large shifts in miRNA expression.
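
    As a concrete illustration of the reference-gene-based (RGB) approach discussed above, the following is a minimal sketch, assuming a samples-by-miRNA matrix of log2 intensities and a known set of reference-gene columns (the indices used here are purely illustrative); it is not the authors' pipeline.

        import numpy as np

        def rgb_normalize(expr, ref_idx):
            """Reference-gene-based normalization of a (samples x miRNAs) log2 expression matrix.

            expr    : 2-D array of log2 intensities, one row per sample
            ref_idx : column indices of the constitutively expressed reference genes
            Each sample is shifted so that the mean log2 level of its reference genes becomes zero,
            which corresponds to dividing by the reference signal on the linear scale.
            """
            ref_level = expr[:, ref_idx].mean(axis=1, keepdims=True)  # per-sample reference level
            return expr - ref_level

        # toy example: 4 samples, 6 miRNAs, columns 4 and 5 treated as reference genes
        rng = np.random.default_rng(0)
        expr = rng.normal(8.0, 1.0, size=(4, 6))
        normalized = rgb_normalize(expr, ref_idx=[4, 5])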

  6. Invariant sets for Windows

    CERN Document Server

    Morozov, Albert D; Dragunov, Timothy N; Malysheva, Olga V

    1999-01-01

    This book deals with the visualization and exploration of invariant sets (fractals, strange attractors, resonance structures, patterns etc.) for various kinds of nonlinear dynamical systems. The authors have created a special Windows 95 application called WInSet, which allows one to visualize the invariant sets. A WInSet installation disk is enclosed with the book.The book consists of two parts. Part I contains a description of WInSet and a list of the built-in invariant sets which can be plotted using the program. This part is intended for a wide audience with interests ranging from dynamical

  7. Irreducible descriptive sets of attributes for information systems

    KAUST Repository

    Moshkov, Mikhail

    2010-01-01

    The maximal consistent extension Ext(S) of a given information system S consists of all objects corresponding to attribute values from S which are consistent with all true and realizable rules extracted from the original information system S. An irreducible descriptive set for the considered information system S is a minimal (relative to the inclusion) set B of attributes which defines exactly the set Ext(S) by means of true and realizable rules constructed over attributes from the considered set B. We show that there exists only one irreducible descriptive set of attributes. We present a polynomial algorithm for this set construction. We also study relationships between the cardinality of irreducible descriptive set of attributes and the number of attributes in S. The obtained results will be useful for the design of concurrent data models from experimental data. © 2010 Springer-Verlag.

  8. A game on the universe of sets

    International Nuclear Information System (INIS)

    Saveliev, D I

    2008-01-01

    Working in set theory without the axiom of regularity, we consider a two-person game on the universe of sets. In this game, the players choose in turn an element of a given set, an element of this element and so on. A player wins if he leaves his opponent no possibility of making a move, that is, if he has chosen the empty set. Winning sets (those admitting a winning strategy for one of the players) form a natural hierarchy with levels indexed by ordinals (in the finite case, the ordinal indicates the shortest length of a winning strategy). We show that the class of hereditarily winning sets is an inner model containing all well-founded sets and that each of the four possible relations between the universe, the class of hereditarily winning sets, and the class of well-founded sets is consistent. As far as the class of winning sets is concerned, either it is equal to the whole universe, or many of the axioms of set theory cannot hold on this class. Somewhat surprisingly, this does not apply to the axiom of regularity: we show that the failure of this axiom is consistent with its relativization to winning sets. We then establish more subtle properties of winning non-well-founded sets. We describe all classes of ordinals for which the following is consistent: winning sets without minimal elements (in the sense of membership) occur exactly at the levels indexed by the ordinals of this class. In particular, we show that if an even level of the hierarchy of winning sets contains a set without minimal elements, then all higher levels contain such sets. We show that the failure of the axiom of regularity implies that all odd levels contain sets without minimal elements, but it is consistent with the absence of such sets at all even levels as well as with their appearance at an arbitrary even non-limit or countable-cofinal level. To obtain consistency results, we propose a new method for obtaining models with non-well-founded sets. Finally, we study how long this game can

  9. Contrasting lexical similarity and formal definitions in SNOMED CT: consistency and implications.

    Science.gov (United States)

    Agrawal, Ankur; Elhanan, Gai

    2014-02-01

    To quantify the presence of and evaluate an approach for detection of inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method. Utilizing SCT's Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure of their fully specified name. We formulated five random samples, each with 50 similarity sets, based on the same parameter: number of parents, attributes, groups, all the former, as well as a randomly selected control sample. All samples' sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional. For the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38% (Control) to 70% (Different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to Control, as was the number of attribute assignment and hierarchical inconsistencies within their respective samples. While, at this time of the HITECH initiative, the formal definitions of SCT are only a minor consideration, they are essential in the grand scheme of sophisticated, meaningful use of captured clinical data. However, a significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, are modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Evaluation of Two Lyophilized Molecular Assays to Rapidly Detect Foot-and-Mouth Disease Virus Directly from Clinical Samples in Field Settings.

    Science.gov (United States)

    Howson, E L A; Armson, B; Madi, M; Kasanga, C J; Kandusi, S; Sallu, R; Chepkwony, E; Siddle, A; Martin, P; Wood, J; Mioulet, V; King, D P; Lembo, T; Cleaveland, S; Fowler, V L

    2017-06-01

    Accurate, timely diagnosis is essential for the control, monitoring and eradication of foot-and-mouth disease (FMD). Clinical samples from suspect cases are normally tested at reference laboratories. However, transport of samples to these centralized facilities can be a lengthy process that can impose delays on critical decision making. These concerns have motivated work to evaluate simple-to-use technologies, including molecular-based diagnostic platforms, that can be deployed closer to suspect cases of FMD. In this context, FMD virus (FMDV)-specific reverse transcription loop-mediated isothermal amplification (RT-LAMP) and real-time RT-PCR (rRT-PCR) assays, compatible with simple sample preparation methods and in situ visualization, have been developed which share equivalent analytical sensitivity with laboratory-based rRT-PCR. However, the lack of robust 'ready-to-use kits' that utilize stabilized reagents limits the deployment of these tests into field settings. To address this gap, this study describes the performance of lyophilized rRT-PCR and RT-LAMP assays to detect FMDV. Both of these assays are compatible with the use of fluorescence to monitor amplification in real-time, and for the RT-LAMP assays end point detection could also be achieved using molecular lateral flow devices. Lyophilization of reagents did not adversely affect the performance of the assays. Importantly, when these assays were deployed into challenging laboratory and field settings within East Africa they proved to be reliable in their ability to detect FMDV in a range of clinical samples from acutely infected as well as convalescent cattle. These data support the use of highly sensitive molecular assays into field settings for simple and rapid detection of FMDV. © 2015 The Authors. Transboundary and Emerging Diseases Published by Blackwell Verlag GmbH.

  11. Consistency Check for the Bin Packing Constraint Revisited

    Science.gov (United States)

    Dupuis, Julien; Schaus, Pierre; Deville, Yves

    The bin packing problem (BP) consists in finding the minimum number of bins necessary to pack a set of items so that the total size of the items in each bin does not exceed the bin capacity C. The bin capacity is common for all the bins.
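
    To make the constraint concrete, the following is a minimal sketch of a cheap consistency (feasibility) check for a partially assigned bin packing with a common capacity C; it uses only an overflow test and the standard ceil(total size / C) lower bound, not the pruning rules revisited in the paper.

        import math

        def bin_packing_consistent(assigned, unassigned, capacity, n_bins):
            """Cheap consistency check for a partial bin packing.

            assigned   : dict bin_index -> list of item sizes already placed in that bin
            unassigned : list of item sizes not yet placed
            capacity   : common bin capacity C
            n_bins     : number of bins available
            Returns False only if the partial assignment is provably infeasible.
            """
            loads = [sum(items) for items in assigned.values()]
            if any(load > capacity for load in loads):         # some bin already overflows
                return False
            total_size = sum(loads) + sum(unassigned)
            return math.ceil(total_size / capacity) <= n_bins  # volume lower bound on bins needed

        # example: 3 bins of capacity 10, two of them partially filled
        print(bin_packing_consistent({0: [6, 3], 1: [7]}, [5, 4, 2], capacity=10, n_bins=3))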

  12. Temporal and Geographic variation in the validity and internal consistency of the Nursing Home Resident Assessment Minimum Data Set 2.0.

    Science.gov (United States)

    Mor, Vincent; Intrator, Orna; Unruh, Mark Aaron; Cai, Shubing

    2011-04-15

    The Minimum Data Set (MDS) for nursing home resident assessment has been required in all U.S. nursing homes since 1990 and has been universally computerized since 1998. Initially intended to structure clinical care planning, uses of the MDS expanded to include policy applications such as case-mix reimbursement, quality monitoring and research. The purpose of this paper is to summarize a series of analyses examining the internal consistency and predictive validity of the MDS data as used in the "real world" in all U.S. nursing homes between 1999 and 2007. We used person level linked MDS and Medicare denominator and all institutional claim files including inpatient (hospital and skilled nursing facilities) for all Medicare fee-for-service beneficiaries entering U.S. nursing homes during the period 1999 to 2007. We calculated the sensitivity and positive predictive value (PPV) of diagnoses taken from Medicare hospital claims and from the MDS among all new admissions from hospitals to nursing homes and the internal consistency (alpha reliability) of pairs of items within the MDS that logically should be related. We also tested the internal consistency of commonly used MDS based multi-item scales and examined the predictive validity of an MDS based severity measure viz. one year survival. Finally, we examined the correspondence of the MDS discharge record to hospitalizations and deaths seen in Medicare claims, and the completeness of MDS assessments upon skilled nursing facility (SNF) admission. Each year there were some 800,000 new admissions directly from hospital to US nursing homes and some 900,000 uninterrupted SNF stays. Comparing Medicare enrollment records and claims with MDS records revealed reasonably good correspondence that improved over time (by 2006 only 3% of deaths had no MDS discharge record, only 5% of SNF stays had no MDS, but over 20% of MDS discharges indicating hospitalization had no associated Medicare claim). The PPV and sensitivity levels of
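
    For readers unfamiliar with the validation metrics mentioned above, the following is a minimal sketch of the positive predictive value (PPV) and sensitivity calculation, assuming two equal-length 0/1 vectors flagging whether a given diagnosis appears in the MDS record and in the Medicare hospital claim for each admission (the data layout is illustrative).

        def ppv_and_sensitivity(mds_flag, claim_flag):
            """PPV and sensitivity of MDS diagnoses, with hospital claims as the reference standard."""
            tp = sum(1 for m, c in zip(mds_flag, claim_flag) if m and c)
            fp = sum(1 for m, c in zip(mds_flag, claim_flag) if m and not c)
            fn = sum(1 for m, c in zip(mds_flag, claim_flag) if not m and c)
            ppv = tp / (tp + fp) if (tp + fp) else float("nan")
            sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
            return ppv, sensitivity

        # toy example: 6 admissions
        print(ppv_and_sensitivity([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0]))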

  13. Basic set theory

    CERN Document Server

    Levy, Azriel

    2002-01-01

    An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An

  14. Under-utilized Important Data Sets from Barrow, Alaska

    Science.gov (United States)

    Jensen, A. M.; Misarti, N.

    2012-12-01

    The Barrow region has a number of high resolution data sets of high quality and high scientific and stakeholder relevance. Many are described as being of long duration, yet span mere decades. Here we highlight the fact that there are data sets available in the Barrow area that span considerably greater periods of time (centuries to millennia), at varying degrees of resolution. When used appropriately, these data sets can contribute to the study and understanding of the changing Arctic. However, because these types of data are generally acquired as part of archaeological projects, funded through Arctic Social Science and similar programs, their use in other sciences has been limited. Archaeologists focus on analyzing these data sets in ways designed to answer particular anthropological questions. That in no way precludes archaeological collaboration with other types of scientists nor the analysis of these data sets in new and innovative ways, in order to look at questions of Arctic change over a time span beginning well before the Industrial Revolution introduced complicating factors. One major data group consists of zooarchaeological data from sites in the Barrow area. This consists of faunal remains of human subsistence activities, recovered either from middens (refuse deposits) or dwellings. In effect, occupants of a site were sampling their environment as it existed at the time of occupation, although not in a random or systematic way. When analyzed to correct for biases introduced by taphonomic and human behavioral factors, such data sets are used by archaeologists to understand past people's subsistence practices, and how such practices changed through time. However, there is much additional information that can be obtained from these collections. Certain species have fairly specific habitat requirements, and their presence in significant numbers at a site indicates that such conditions existed relatively nearby at a particular time in the past, and

  15. Cosmological consistency tests of gravity theory and cosmic acceleration

    Science.gov (United States)

    Ishak-Boushaki, Mustapha B.

    2017-01-01

    Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among the important objectives targeted by incoming and future astronomical surveys and experiments. I present our recent results on consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use statistical measures, the rate of cosmic expansion, the growth rate of large scale structure, and the physical consistency of these probes with one another.

  16. Individual consistency in the behaviors of newly-settled reef fish.

    Science.gov (United States)

    White, James R; Meekan, Mark G; McCormick, Mark I

    2015-01-01

    Flexibility in behavior is advantageous for organisms that transition between stages of a complex life history. However, various constraints can set limits on plasticity, giving rise to the existence of personalities that have associated costs and benefits. Here, we document a field and laboratory experiment that examines the consistency of measures of boldness, activity, and aggressive behavior in the young of a tropical reef fish, Pomacentrus amboinensis (Pomacentridae) immediately following their transition between pelagic larval and benthic juvenile habitats. Newly-settled fish were observed in aquaria and in the field on replicated patches of natural habitat cleared of resident fishes. Seven behavioral traits representing aspects of boldness, activity and aggression were monitored directly and via video camera over short (minutes), medium (hours), and long (3 days) time scales. With the exception of aggression, these behaviors were found to be moderately or highly consistent over all time scales in both laboratory and field settings, implying that these fish show stable personalities within various settings. Our study is the first to examine the temporal constancy of behaviors in both field and laboratory settings over various time scales at a critically important phase during the life cycle of a reef fish.

  17. Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods

    Directory of Open Access Journals (Sweden)

    David P. Griesheimer

    2017-09-01

    Full Text Available The application of Monte Carlo (MC) to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS) method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
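
    The following is a minimal sketch of how a weight window is typically enforced on a single Monte Carlo particle, assuming only a scalar statistical weight and illustrative window bounds; it shows generic splitting and Russian roulette, not the CADIS machinery or the inconsistent-source analysis of the paper.

        import random

        def apply_weight_window(weight, w_low, w_high, rng=random):
            """Return the list of post-window weights produced from one particle.

            weight         : current statistical weight of the particle
            w_low, w_high  : lower and upper bounds of the weight window
            Particles above the window are split; particles below it undergo Russian roulette.
            Both operations preserve the expected total weight.
            """
            if weight > w_high:                        # split into n copies of reduced weight
                n = int(weight / w_high) + 1
                return [weight / n] * n
            if weight < w_low:                         # roulette: survive with probability weight / w_survive
                w_survive = 0.5 * (w_low + w_high)
                return [w_survive] if rng.random() < weight / w_survive else []
            return [weight]                            # inside the window: leave unchanged

        # examples: a heavy particle is split, a light one is rouletted
        print(apply_weight_window(4.0, w_low=0.5, w_high=2.0))
        print(apply_weight_window(0.1, w_low=0.5, w_high=2.0))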

  18. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    Science.gov (United States)

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania (“original sample”) to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called “strudel effect” is discussed in the paper. PMID:24360650
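
    The following is a minimal sketch of a chain-referral recruitment of the kind simulated above, assuming the PWID network is given as an adjacency list and that each participant receives a fixed number of recruitment coupons; it is a generic RDS-style walk, not the authors' simulation code. With seeds drawn from the large component, the sample stays inside that component, which is the behaviour underlying the “strudel effect”.

        import random
        from collections import deque

        def simulate_rds(adjacency, seeds, coupons=3, sample_size=200, rng=random):
            """Simulate respondent-driven, coupon-based recruitment on an undirected graph.

            adjacency   : dict node -> list of neighbouring nodes (the sociometric network)
            seeds       : initial participants
            coupons     : maximum number of recruits per participant
            sample_size : stop once this many participants have been recruited
            """
            sampled = set(seeds)
            queue = deque(seeds)
            while queue and len(sampled) < sample_size:
                recruiter = queue.popleft()
                candidates = [n for n in adjacency[recruiter] if n not in sampled]
                rng.shuffle(candidates)
                for recruit in candidates[:coupons]:
                    if len(sampled) >= sample_size:
                        break
                    sampled.add(recruit)
                    queue.append(recruit)
            return sampled

        # toy graph: one large component (nodes 0-5) plus an isolated dyad (6, 7)
        g = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2], 5: [3], 6: [7], 7: [6]}
        print(simulate_rds(g, seeds=[0], sample_size=6))   # never reaches nodes 6 and 7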

  19. Consistency of Use and Effectiveness of Household Water Treatment among Indian Households Claiming to Treat Their Water.

    Science.gov (United States)

    Rosa, Ghislaine; Clasen, Thomas

    2017-07-01

    Household water treatment (HWT) can improve drinking water quality and prevent disease if used correctly and consistently by populations at risk. Current international monitoring estimates by the Joint Monitoring Programme for water and sanitation suggest that at least 1.1 billion people practice HWT. These estimates, however, are based on surveys that may overstate the level of consistent use and do not address microbial effectiveness. We sought to assess how HWT is practiced among households identified as HWT users according to these monitoring standards. After a baseline survey (urban: 189 households, rural: 210 households) to identify HWT users, 83 urban and 90 rural households were followed up for 6 weeks. Consistency of reported HWT practices was high in both urban (100%) and rural (93.3%) settings, as was availability of treated water (based on self-report) in all three sampling points (urban: 98.8%, rural: 76.0%). Nevertheless, only 13.7% of urban and 25.8% of rural households identified at baseline as users of adequate HWT had water free of thermotolerant coliforms at all three water sampling points. Our findings raise questions about the value of the data gathered through the international monitoring of HWT as predictors of water quality in the home, as well as questioning the ability of HWT, as actually practiced by vulnerable populations, to reduce exposure to waterborne diseases.

  20. The cluster bootstrap consistency in generalized estimating equations

    KAUST Repository

    Cheng, Guang

    2013-03-01

    The cluster bootstrap resamples clusters or subjects instead of individual observations in order to preserve the dependence within each cluster or subject. In this paper, we provide a theoretical justification of using the cluster bootstrap for the inferences of the generalized estimating equations (GEE) for clustered/longitudinal data. Under the general exchangeable bootstrap weights, we show that the cluster bootstrap yields a consistent approximation of the distribution of the regression estimate, and a consistent approximation of the confidence sets. We also show that a computationally more efficient one-step version of the cluster bootstrap provides asymptotically equivalent inference. © 2012.
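
    The resampling step itself is simple; the following is a minimal sketch, assuming the clustered data are stored per subject and representing the GEE refit by an arbitrary estimator function (the mean used below is only a placeholder).

        import random

        def cluster_bootstrap(clusters, estimator, n_boot=1000, rng=random):
            """Cluster bootstrap: resample whole clusters (subjects) with replacement.

            clusters  : dict cluster_id -> list of observations for that cluster
            estimator : function mapping a list of clusters to a statistic, e.g. a GEE
                        regression estimate refit on the resampled data
            Returns the bootstrap replicates of the statistic.
            """
            ids = list(clusters)
            replicates = []
            for _ in range(n_boot):
                # sampling whole clusters preserves the within-cluster dependence
                resampled = [clusters[rng.choice(ids)] for _ in ids]
                replicates.append(estimator(resampled))
            return replicates

        # toy example: overall mean outcome from 3 subjects with repeated measurements
        data = {"s1": [1.0, 1.2, 0.9], "s2": [2.1, 1.8], "s3": [0.5, 0.7, 0.6, 0.4]}
        overall_mean = lambda cl: sum(x for c in cl for x in c) / sum(len(c) for c in cl)
        replicates = cluster_bootstrap(data, overall_mean, n_boot=200)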

  1. A comparative study of different feature sets for recognition of handwritten Arabic numerals using a Multi Layer Perceptron

    OpenAIRE

    Das, Nibaran; Mollah, Ayatullah Faruk; Sarkar, Ram; Basu, Subhadip

    2010-01-01

    The work presents a comparative assessment of seven different feature sets for recognition of handwritten Arabic numerals using a Multi Layer Perceptron (MLP) based classifier. The seven feature sets employed here consist of shadow features, octant centroids, longest runs, angular distances, effective spans, dynamic centers of gravity, and some of their combinations. On experimentation with a database of 3000 samples, the maximum recognition rate of 95.80% is observed with both of two separat...

  2. Self-consistent electrodynamic scattering in the symmetric Bragg case

    International Nuclear Information System (INIS)

    Campos, H.S.

    1988-01-01

    We have analyzed the symmetric Bragg case, introducing a model of self-consistent scattering for two elliptically polarized beams. The crystal is taken as a set of mathematical planes, each of them defined by a surface density of dipoles. We have considered the mesofield and the epifield differently from Ewald's theory, and we assumed a plane of dipoles and the associated fields as a self-consistent scattering unit. The exact analytical treatment, when applied to any two neighbouring planes, results in a general and self-consistent Bragg equation in terms of the amplitude and phase variations. The generalized solution for the set of N planes was obtained after introducing an absorption factor in the incident radiation, in two ways: (i) analytically, through a rule of field similarity, which says that the incidence occurs on both faces of all the crystal planes, and also through a matricial development with the Chebyshev polynomials; (ii) using the numerical solution, we calculated, iteratively, the reflectivity, the reflection phase, the transmissivity, the transmission phase and the energy. The results are shown through reflection and transmission curves, which are characteristic of both kinematical and dynamical theories. The conservation of energy resulting from Ewald's self-consistency principle is used. In the absorption case, the results show that absorption is not the only cause of the asymmetric form of the reflection curves. The model contains the basic elements for a unified, microscopic, self-consistent, vectorial and exact formulation for interpreting X-ray diffraction in perfect crystals. (author)

  3. Chemometrical exploration of an isotopic ratio data set of acetylsalicylic acid

    International Nuclear Information System (INIS)

    Stanimirova, I.; Daszykowski, M.; Van Gyseghem, E.; Bensaid, F.F.; Lees, M.; Smeyers-Verbeke, J.; Massart, D.L.; Vander Heyden, Y.

    2005-01-01

    A data set consisting of fourteen isotopic ratios or quantities derived from such ratios for samples of acetylsalicylic acid (aspirin), commercialized by various pharmaceutical companies from different countries, was analyzed. The goal of the data analysis was to explore whether results can be linked to geographical origin or other features such as different manufacturing processes, of the samples. The methods of data analysis used were principal component analysis (PCA), robust principal component analysis (RPCA), projection pursuit (PP) and multiple factor analysis (MFA). The results do not seem to depend on geographic origin, except for some samples from India. They do depend on the pharmaceutical companies. Moreover, it seems that the samples from certain pharmaceutical companies form clusters of similar samples, suggesting that there is some common feature between those pharmaceutical companies. Variable selection performed by means of MFA showed that the number of variables can be reduced to five without loss of information

  4. Chemometrical exploration of an isotopic ratio data set of acetylsalicylic acid

    Energy Technology Data Exchange (ETDEWEB)

    Stanimirova, I. [ChemoAC, FABI, Analytical Chemistry and Pharmaceutical Technology, Vrije Universiteit Brussel-VUB, Laarbeeklaan 103, B-1090 Brussels (Belgium); Daszykowski, M. [ChemoAC, FABI, Analytical Chemistry and Pharmaceutical Technology, Vrije Universiteit Brussel-VUB, Laarbeeklaan 103, B-1090 Brussels (Belgium); Van Gyseghem, E. [Eurofins Scientific Analytics, Rue Pierre Adolphe Bobierre, 44323 Nantes Cedex 3 (France); Bensaid, F.F. [Eurofins Scientific Analytics, Rue Pierre Adolphe Bobierre, 44323 Nantes Cedex 3 (France); Lees, M. [Eurofins Scientific Analytics, Rue Pierre Adolphe Bobierre, 44323 Nantes Cedex 3 (France); Smeyers-Verbeke, J. [ChemoAC, FABI, Analytical Chemistry and Pharmaceutical Technology, Vrije Universiteit Brussel-VUB, Laarbeeklaan 103, B-1090 Brussels (Belgium); Massart, D.L. [ChemoAC, FABI, Analytical Chemistry and Pharmaceutical Technology, Vrije Universiteit Brussel-VUB, Laarbeeklaan 103, B-1090 Brussels (Belgium); Vander Heyden, Y. [ChemoAC, FABI, Analytical Chemistry and Pharmaceutical Technology, Vrije Universiteit Brussel-VUB, Laarbeeklaan 103, B-1090 Brussels (Belgium)]. E-mail: yvanvdh@vub.ac.be

    2005-11-03

    A data set consisting of fourteen isotopic ratios or quantities derived from such ratios for samples of acetylsalicylic acid (aspirin), commercialized by various pharmaceutical companies from different countries, was analyzed. The goal of the data analysis was to explore whether results can be linked to geographical origin or other features such as different manufacturing processes, of the samples. The methods of data analysis used were principal component analysis (PCA), robust principal component analysis (RPCA), projection pursuit (PP) and multiple factor analysis (MFA). The results do not seem to depend on geographic origin, except for some samples from India. They do depend on the pharmaceutical companies. Moreover, it seems that the samples from certain pharmaceutical companies form clusters of similar samples, suggesting that there is some common feature between those pharmaceutical companies. Variable selection performed by means of MFA showed that the number of variables can be reduced to five without loss of information.
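
    Of the methods listed, classical PCA is the easiest to illustrate; the following is a minimal sketch, assuming a samples-by-isotopic-ratio matrix (the data below are random stand-ins), and it does not reproduce the robust PCA, projection pursuit or MFA steps.

        import numpy as np

        def pca_scores(X, n_components=2):
            """Classical PCA via SVD of the column-centred data matrix.

            X : 2-D array, one row per aspirin sample, one column per isotopic variable.
            Returns the sample scores and the explained-variance ratio of the leading components.
            """
            Xc = X - X.mean(axis=0)                       # centre each variable
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            scores = U[:, :n_components] * s[:n_components]
            explained = (s**2 / np.sum(s**2))[:n_components]
            return scores, explained

        # toy example: 10 samples described by 14 isotopic ratios
        rng = np.random.default_rng(1)
        X = rng.normal(size=(10, 14))
        scores, explained = pca_scores(X)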

  5. A replica exchange transition interface sampling method with multiple interface sets for investigating networks of rare events

    Science.gov (United States)

    Swenson, David W. H.; Bolhuis, Peter G.

    2014-07-01

    The multiple state transition interface sampling (TIS) framework in principle allows the simulation of a large network of complex rare event transitions, but in practice suffers from convergence problems. To improve convergence, we combine multiple state TIS [J. Rogal and P. G. Bolhuis, J. Chem. Phys. 129, 224107 (2008)] with replica exchange TIS [T. S. van Erp, Phys. Rev. Lett. 98, 268301 (2007)]. In addition, we introduce multiple interface sets, which allow more than one order parameter to be defined for each state. We illustrate the methodology on a model system of multiple independent dimers, each with two states. For reaction networks with up to 64 microstates, we determine the kinetics in the microcanonical ensemble, and discuss the convergence properties of the sampling scheme. For this model, we find that the kinetics depend on the instantaneous composition of the system. We explain this dependence in terms of the system's potential and kinetic energy.

  6. Self-consistent approximations beyond the CPA: Part II

    International Nuclear Information System (INIS)

    Kaplan, T.; Gray, L.J.

    1982-01-01

    This paper concentrates on a self-consistent approximation for random alloys developed by Kaplan, Leath, Gray, and Diehl. The construction of the augmented space formalism for a binary alloy is sketched, and the notation to be used derived. Using the operator methods of the augmented space, the self-consistent approximation is derived for the average Green's function, and for evaluating the self-energy, taking into account the scattering by clusters of excitations. The particular cluster approximation desired is derived by treating the scattering by the excitations with S_T exactly. Fourier transforms on the disorder-space cluster-site labels solve the self-consistent set of equations. Expansion to short range order in the alloy is also discussed. A method to reduce the problem to a computationally tractable form is described.

  7. Self-consistent model of confinement

    International Nuclear Information System (INIS)

    Swift, A.R.

    1988-01-01

    A model of the large-spatial-distance, zero--three-momentum, limit of QCD is developed from the hypothesis that there is an infrared singularity. Single quarks and gluons do not propagate because they have infinite energy after renormalization. The Hamiltonian formulation of the path integral is used to quantize QCD with physical, nonpropagating fields. Perturbation theory in the infrared limit is simplified by the absence of self-energy insertions and by the suppression of large classes of diagrams due to vanishing propagators. Remaining terms in the perturbation series are resummed to produce a set of nonlinear, renormalizable integral equations which fix both the confining interaction and the physical propagators. Solutions demonstrate the self-consistency of the concepts of an infrared singularity and nonpropagating fields. The Wilson loop is calculated to provide a general proof of confinement. Bethe-Salpeter equations for quark-antiquark pairs and for two gluons have finite-energy solutions in the color-singlet channel. The choice of gauge is addressed in detail. Large classes of corrections to the model are discussed and shown to support self-consistency

  8. Self-consistent asset pricing models

    Science.gov (United States)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    self-consistency condition derives a risk-factor decomposition in the multi-factor case which is identical to the principal component analysis (PCA), thus providing a direct link between model-driven and data-driven constructions of risk factors. This correspondence shows that PCA will therefore suffer from the same limitations as the CAPM and its multi-factor generalization, namely lack of out-of-sample explanatory power and predictability. In the multi-period context, the self-consistency conditions force the betas to be time-dependent with specific constraints.

  9. Self-consistent modeling of amorphous silicon devices

    International Nuclear Information System (INIS)

    Hack, M.

    1987-01-01

    The authors developed a computer model to describe the steady-state behaviour of a range of amorphous silicon devices. It is based on the complete set of transport equations and takes into account the important role played by the continuous distribution of localized states in the mobility gap of amorphous silicon. Using one set of parameters they have been able to self-consistently simulate the current-voltage characteristics of p-i-n (or n-i-p) solar cells under illumination, the dark behaviour of field-effect transistors, p-i-n diodes and n-i-n diodes in both the ohmic and space charge limited regimes. This model also describes the steady-state photoconductivity of amorphous silicon, in particular, its dependence on temperature, doping and illumination intensity

  10. On the structure of the set of coincidence points

    Energy Technology Data Exchange (ETDEWEB)

    Arutyunov, A V [Peoples Friendship University of Russia, Moscow (Russian Federation); Gel' man, B D [Voronezh State University (Russian Federation)

    2015-03-31

    We consider the set of coincidence points for two maps between metric spaces. Cardinality, metric and topological properties of the coincidence set are studied. We obtain conditions which guarantee that this set (a) consists of at least two points; (b) consists of at least n points; (c) contains a countable subset; (d) is uncountable. The results are applied to study the structure of the double point set and the fixed point set for multivalued contractions. Bibliography: 12 titles.

  11. Shedding consistency of strongyle-type eggs in dutch boarding horses

    NARCIS (Netherlands)

    Dopfer, D.D.V.; Kerssens, C.M.; Meijer, Y.G.M.; Boersema, J.H.; Eysker, M.

    2004-01-01

    Faeces of 484 horses were sampled twice with an interval of 6 weeks while anthelmintic therapy was halted. Faecal egg counts revealed that 267 (55.2%) horses had consistently low numbers of eggs per gram of faeces (EPG ≤ 100), 155 (32.0%) horses had consistently high EPGs (EPG >

  12. Price Setting Transactions and the Role of Denominating Currency in FX Markets

    OpenAIRE

    Friberg, Richard; Wilander, Fredrik

    2007-01-01

    This report, commissioned by Sveriges Riksbank, examines the role of currency denomination in international trade transactions. It is divided into two parts. The first part consists of a survey of the price setting and payment practices of a large sample of Swedish exporting firms. The second part analyzes payments data from the Swedish settlement reports from 1999-2002. We examine whether invoicing patterns of Swedish and European companies changed following the creation of the EMU and how the...

  13. Synchronizing data from irregularly sampled sensors

    Science.gov (United States)

    Uluyol, Onder

    2017-07-11

    A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
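
    The following is a minimal sketch of the re-sample-then-synchronize idea described in the claim, assuming each sensor delivers (timestamp, value) pairs and using linear interpolation onto a common grid that is faster than any sensor's native rate; the interpolation scheme is an assumption for illustration, not the patented method.

        import numpy as np

        def synchronize(sensors, rate_hz):
            """Resample irregularly sampled sensors onto one common, higher-rate time grid.

            sensors : dict name -> (timestamps, values); timestamps in seconds, unevenly spaced
            rate_hz : common output rate, chosen higher than every sensor's native rate
            Returns the common grid and the resampled, time-aligned values per sensor.
            """
            t_start = min(t[0] for t, _ in sensors.values())
            t_end = max(t[-1] for t, _ in sensors.values())
            grid = np.arange(t_start, t_end, 1.0 / rate_hz)
            resampled = {name: np.interp(grid, np.asarray(t), np.asarray(v))
                         for name, (t, v) in sensors.items()}
            return grid, resampled

        # toy example: two sensors sampled at irregular, different rates
        sensors = {"temp": ([0.0, 0.7, 1.9, 3.0], [20.0, 20.4, 21.1, 21.0]),
                   "pressure": ([0.1, 1.1, 2.2, 2.9], [101.2, 101.3, 101.1, 101.0])}
        grid, synced = synchronize(sensors, rate_hz=10)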

  14. Time clustered sampling can inflate the inferred substitution rate in foot-and-mouth disease virus analyses

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.

    2015-01-01

    abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale...... through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer...... to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...

  15. Internal consistency, concurrent validity, and discriminant validity of a measure of public support for policies for active living in transportation (PAL-T) in a population-based sample of adults.

    Science.gov (United States)

    Fuller, Daniel; Gauvin, Lise; Fournier, Michel; Kestens, Yan; Daniel, Mark; Morency, Patrick; Drouin, Louis

    2012-04-01

    Active living is a broad conceptualization of physical activity that incorporates domains of exercise; recreational, household, and occupational activities; and active transportation. Policy makers develop and implement a variety of transportation policies that can influence choices about how to travel from one location to another. In making such decisions, policy makers act in part in response to public opinion or support for proposed policies. Measures of the public's support for policies aimed at promoting active transportation can inform researchers and policy makers. This study examined the internal consistency and the concurrent and discriminant validity of a newly developed measure of the public's support for policies for active living in transportation (PAL-T). A series of 17 items representing potential policies for promoting active transportation was generated. Two samples of participants (n = 2,001 and n = 2,502) from Montreal, Canada, were recruited via random digit dialling. Analyses were conducted on the combined data set (n = 4,503). Participants were aged 18 through 94 years (58% female). The concurrent and discriminant validity of the PAL-T was assessed by examining relationships with physical activity and smoking. To explore the usability of the PAL-T, predicted scale scores were compared to the summed values of responses. Results showed that the internal consistency of the PAL-T was 0.70. Multilevel regression demonstrated no relationship between the PAL-T and smoking status (p > 0.05) but significant relationships with utilitarian walking (p < 0.05). Measuring public opinion can inform policy makers and support advocacy efforts aimed at making built environments more suitable for active transportation while allowing researchers to examine the antecedents and consequences of public support for policies.
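
    The internal-consistency figure reported above is a Cronbach's alpha; the following is a minimal sketch of that calculation, assuming a complete respondents-by-items matrix of numeric responses (the toy data are invented).

        import numpy as np

        def cronbach_alpha(responses):
            """Cronbach's alpha for a (respondents x items) matrix of numeric responses."""
            responses = np.asarray(responses, dtype=float)
            n_items = responses.shape[1]
            item_vars = responses.var(axis=0, ddof=1).sum()      # sum of item variances
            total_var = responses.sum(axis=1).var(ddof=1)        # variance of total scores
            return n_items / (n_items - 1) * (1.0 - item_vars / total_var)

        # toy example: 5 respondents answering 4 Likert-type items
        data = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 3, 4], [1, 2, 2, 1]]
        print(round(cronbach_alpha(data), 2))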

  16. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
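
    A stripped-down, non-parametric version of such an assessment can be sketched as follows: a dense (here synthetic, hourly) rain series is sub-sampled at a regular visit interval from every possible phase offset, and the spread of the sub-sampled means around the full mean is reported. This is only an illustration of the idea, not the radar-based procedure of the study.

        import numpy as np

        def sampling_uncertainty(rain, interval):
            """Relative RMS error of means computed from regular sub-samples of a dense series.

            rain     : 1-D array of (e.g. hourly) area-averaged rain rates
            interval : sampling interval in array steps (e.g. 12 for 12-hourly satellite visits)
            """
            truth = rain.mean()
            sub_means = np.array([rain[offset::interval].mean() for offset in range(interval)])
            rmse = np.sqrt(np.mean((sub_means - truth) ** 2))
            return rmse / truth if truth > 0 else float("nan")

        # toy example: 30 days of synthetic hourly rain, sampled every 12 h
        rng = np.random.default_rng(2)
        rain = rng.gamma(shape=0.2, scale=2.0, size=30 * 24)
        print(sampling_uncertainty(rain, interval=12))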

  17. Consistency Checking of Web Service Contracts

    DEFF Research Database (Denmark)

    Cambronero, M. Emilia; Okika, Joseph C.; Ravn, Anders Peter

    2008-01-01

    Behavioural properties are analyzed for web service contracts formulated in Business Process Execution Language (BPEL) and Choreography Description Language (CDL). The key result reported is an automated technique to check consistency between protocol aspects of the contracts. The contracts... are abstracted to (timed) automata and from there a simulation is set up, which is checked using automated tools for analyzing networks of finite state processes. Here we use the Concurrency Workbench. The proposed techniques are illustrated with a case study that includes otherwise difficult to analyze fault

  18. Diagnostic language consistency among multicultural English-speaking nurses.

    Science.gov (United States)

    Wieck, K L

    1996-01-01

    Cultural differences among nurses may influence the choice of terminology applicable to use of a nursing diagnostic statement. This study explored whether defining characteristics are consistently applied by culturally varied nurses in an English language setting. Two diagnoses, pain, and high risk for altered skin integrity, were studied within six cultures: African, Asian, Filipino, East Indian, African-American, and Anglo-American nurses. Overall, there was consistency between the cultural groups. Analysis of variance for the pain scale demonstrated differences among cultures on two characteristics of pain, restlessness and grimace. The only difference on the high risk for altered skin integrity scale was found on the distractor, supple skin.

  19. Guideline for Sampling and Analysis of Tar and Particles in Biomass Producer Gases. Version 3.3

    Energy Technology Data Exchange (ETDEWEB)

    Neeft, J.P.A.; Knoef, H.A.M.; Zielke, U.; Sjoestroem, K.; Hasler, P.; Simell, P.A.; Dorrington, M.A.; Thomas, L.; Abatzoglou, N.; Deutch, S.; Greil, C.; Buffinga, G.J.; Brage, C.; Suomalainen, M.

    2002-07-01

    This Guideline provides a set of procedures for the measurement of organic contaminants and particles in producer gases from biomass gasifiers. The procedures are designed to cover different gasifier types (updraft or downdraft fixed bed or fluidised bed gasifiers), operating conditions (0-900 °C and 0.6-60 bar) and concentration ranges (1 mg/mₙ³ to 300 g/mₙ³). The Guideline describes a modular sampling train, and a set of procedures, which include: planning and preparation of the sampling, sampling and post-sampling, analysis, calculations, error analysis and reporting. The modular sampling train consists of 4 modules. Module 1 is a preconditioning module for isokinetic sampling and gas cooling. Module 2 is a particle collection module including a heated filter. Module 3 is a tar collection module with a gas quench (optionally by circulating a liquid), impinger bottles and a backup adsorber. Module 4 is a volume-sampling module consisting of a pump, a rotameter, a gas flow meter and pressure and temperature indicators. The equipment and materials that are required for procuring this modular sampling train are given in the Guideline. The sampling procedures consist of a description for isokinetic sampling, a leakage test prior to sampling, the actual sampling and its duration, how the equipment is cleaned after the sampling, and how the samples are prepared and stored. Analysis of the samples is performed via three procedures. Prior to these procedures, the sample is prepared by Soxhlet extraction of the tars on the particle filter and by collection of all tars in one bulk solution. The first procedure describes the weighing of the particle filter to obtain the concentration of particles in the biomass producer gas. The bulk tar solution is used for two purposes: for determination of gravimetric tar and for analysis of individual compounds. The second procedure describes how to determine the gravimetric tar mass from the bulk solution. The

  20. Self-consistency and sum-rule tests in the Kramers-Kronig analysis of optical data: Applications to aluminum

    International Nuclear Information System (INIS)

    Shiles, E.; Sasaki, T.; Inokuti, M.; Smith, D.Y.

    1980-01-01

    An iterative, self-consistent procedure for the Kramers-Kronig analysis of data from reflectance, ellipsometric, transmission, and electron-energy-loss measurements is presented. This procedure has been developed for practical dispersion analysis since experimentally no single optical function can be readily measured over the entire range of frequencies as required by the Kramers-Kronig relations. The present technique is applied to metallic aluminum as an example. The results are then examined for internal consistency and for systematic errors by various optical sum rules. The present procedure affords a systematic means of preparing a self-consistent set of optical functions provided some optical or energy-loss data are available in all important spectral regions. The analysis of aluminum discloses that currently available data exhibit an excess oscillator strength, apparently in the vicinity of the L edge. A possible explanation is a systematic experimental error in the absorption-coefficient measurements resulting from surface layers (possibly oxides) present in thin-film transmission samples. A revised set of optical functions has been prepared by an ad hoc reduction of the reported absorption coefficient above the L edge by 14%. These revised data lead to a total oscillator strength consistent with the known electron density and are in agreement with dc-conductivity and stopping-power measurements as well as with absorption coefficients inferred from the cross sections of neighboring elements in the periodic table. The optical functions resulting from this study show evidence for both the redistribution of oscillator strength between energy levels and the effects on real transitions of the shielding of conduction electrons by virtual processes in the core states
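
    As a reminder of the transform at the heart of such an analysis, the following is a crude numerical sketch of the Kramers-Kronig relation for the real part of the dielectric function, using a principal-value sum that simply skips the singular grid point; the iterative, multi-data-set self-consistency procedure and the sum-rule checks of the paper are not reproduced.

        import numpy as np

        def kk_real_part(omega, eps2):
            """Real part of the dielectric function from its imaginary part via Kramers-Kronig.

            omega : 1-D array of angular frequencies (uniformly spaced, strictly positive)
            eps2  : imaginary part of the dielectric function on that grid
            Implements eps1(w) = 1 + (2/pi) P∫ w' eps2(w') / (w'^2 - w^2) dw' as a discrete sum.
            """
            d_omega = omega[1] - omega[0]
            eps1 = np.ones_like(omega)
            for i, w in enumerate(omega):
                integrand = omega * eps2 / (omega**2 - w**2)
                integrand[i] = 0.0                      # crude principal-value treatment: skip the pole
                eps1[i] += (2.0 / np.pi) * np.sum(integrand) * d_omega
            return eps1

        # toy example: a single Lorentz-like absorption peak
        omega = np.linspace(0.01, 10.0, 2000)
        eps2 = 0.5 / ((omega - 3.0) ** 2 + 0.1)
        eps1 = kk_real_part(omega, eps2)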

  1. Consistency in performance evaluation reports and medical records.

    Science.gov (United States)

    Lu, Mingshan; Ma, Ching-to Albert

    2002-12-01

    In the health care market managed care has become the latest innovation for the delivery of services. For efficient implementation, the managed care organization relies on accurate information. So clinicians are often asked to report on patients before referrals are approved, treatments authorized, or insurance claims processed. What are clinicians' responses to solicitation for information by managed care organizations? The existing health literature has already pointed out the importance of provider gaming, sincere reporting, nudging, and dodging the rules. We assess the consistency of clinicians' reports on clients across administrative data and clinical records. For about 1,000 alcohol abuse treatment episodes, we compare clinicians' reports across two data sets. The first one, the Maine Addiction Treatment System (MATS), was an administrative data set; the state government used it for program performance monitoring and evaluation. The second was a set of medical record abstracts, taken directly from the clinical records of treatment episodes. A clinician's reporting practice exhibits an inconsistency if the information reported in MATS differs from the information reported in the medical record in a statistically significant way. We look for evidence of inconsistencies in five categories: admission alcohol use frequency, discharge alcohol use frequency, termination status, admission employment status, and discharge employment status. Chi-square tests, Kappa statistics, and sensitivity and specificity tests are used for hypothesis testing. Multiple imputation methods are employed to address the problem of missing values in the record abstract data set. For admission and discharge alcohol use frequency measures, we find, respectively, strong and supporting evidence for inconsistencies. We find equally strong evidence for consistency in reports of admission and discharge employment status, and mixed evidence on report consistency on termination status. Patterns of

  2. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff then a process energy dependence of this cutoff must be admitted. Precision data from at least two energy scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e+e- colliders. We pay attention to the special role of tau polarization experiments that can be sensitive to the 'Higgs mass' for a sample of ∼10⁸ produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions.

  3. International spinal cord injury pulmonary function basic data set

    DEFF Research Database (Denmark)

    Biering-Sørensen, Fin; Krassioukov, A; Alexander, M S

    2012-01-01

    To develop the International Spinal Cord Injury (SCI) Pulmonary Function Basic Data Set within the framework of the International SCI Data Sets in order to facilitate consistent collection and reporting of basic bronchopulmonary findings in the SCI population.

  4. Subgame consistent cooperation a comprehensive treatise

    CERN Document Server

    Yeung, David W K

    2016-01-01

    Strategic behavior in the human and social world has been increasingly recognized in theory and practice. It is well known that non-cooperative behavior could lead to suboptimal or even highly undesirable outcomes. Cooperation suggests the possibility of obtaining socially optimal solutions and the calls for cooperation are prevalent in real-life problems. Dynamic cooperation cannot be sustained if there is no guarantee that the optimality principle agreed upon at the outset is maintained throughout the duration of the cooperation. It is the lack of such guarantees that causes cooperative schemes to break down before the end or even to fail to get started. The property of subgame consistency in cooperative dynamic games and the corresponding solution mechanism resolve this “classic” problem in game theory. This book is a comprehensive treatise on subgame consistent dynamic cooperation covering up-to-date, state-of-the-art analyses of this important topic. It sets out to provide the theory, solution tec...

  5. Closed sets of nonlocal correlations

    International Nuclear Information System (INIS)

    Allcock, Jonathan; Linden, Noah; Brunner, Nicolas; Popescu, Sandu; Skrzypczyk, Paul; Vertesi, Tamas

    2009-01-01

    We present a fundamental concept - closed sets of correlations - for studying nonlocal correlations. We argue that sets of correlations corresponding to information-theoretic principles, or more generally to consistent physical theories, must be closed under a natural set of operations. Hence, studying the closure of sets of correlations gives insight into which information-theoretic principles are genuinely different, and which are ultimately equivalent. This concept also has implications for understanding why quantum nonlocality is limited, and for finding constraints on physical theories beyond quantum mechanics.

  6. Training Classifiers under Covariate Shift by Constructing the Maximum Consistent Distribution Subset

    OpenAIRE

    Yu, Xu; Yu, Miao; Xu, Li-xun; Yang, Jing; Xie, Zhi-qiang

    2015-01-01

    The assumption that the training and testing samples are drawn from the same distribution is violated under the covariate shift setting, and most algorithms for the covariate shift setting first estimate the distributions and then reweight samples based on the estimated distributions. Due to the difficulty of estimating a correct distribution, previous methods cannot achieve good classification performance. In this paper, we first present two types of covariate shift problems. Rather than estim...

  7. Proteomic Biomarker Discovery in 1000 Human Plasma Samples with Mass Spectrometry.

    Science.gov (United States)

    Cominetti, Ornella; Núñez Galindo, Antonio; Corthésy, John; Oller Moreno, Sergio; Irincheeva, Irina; Valsesia, Armand; Astrup, Arne; Saris, Wim H M; Hager, Jörg; Kussmann, Martin; Dayon, Loïc

    2016-02-05

    The overall impact of proteomics on clinical research and its translation has lagged behind expectations. One recognized caveat is the limited size (subject numbers) of (pre)clinical studies performed at the discovery stage, the findings of which fail to be replicated in larger verification/validation trials. Compromised study designs and insufficient statistical power are consequences of the to-date still limited capacity of mass spectrometry (MS)-based workflows to handle large numbers of samples in a realistic time frame, while delivering comprehensive proteome coverages. We developed a highly automated proteomic biomarker discovery workflow. Herein, we have applied this approach to analyze 1000 plasma samples from the multicentered human dietary intervention study "DiOGenes". Study design, sample randomization, tracking, and logistics were the foundations of our large-scale study. We checked the quality of the MS data and provided descriptive statistics. The data set was interrogated for proteins with most stable expression levels in that set of plasma samples. We evaluated standard clinical variables that typically impact forthcoming results and assessed body mass index-associated and gender-specific proteins at two time points. We demonstrate that analyzing a large number of human plasma samples for biomarker discovery with MS using isobaric tagging is feasible, providing robust and consistent biological results.

  8. The upgraded external-beam PIXE/PIGE set-up at LABEC for very fast measurements on aerosol samples

    Energy Technology Data Exchange (ETDEWEB)

    Lucarelli, F., E-mail: lucarelli@fi.infn.it; Calzolai, G.; Chiari, M.; Giannoni, M.; Mochi, D.; Nava, S.; Carraresi, L.

    2014-01-01

    At the 3 MV Tandetron accelerator of the LABEC laboratory of INFN in Florence, an external beam facility is fully dedicated to measurements of elemental composition of atmospheric aerosol. The experimental set-up hitherto used for this kind of application has been upgraded with the replacement of a traditional Si(Li) detector for the detection of medium–high Z elements with a silicon drift detector (SDD) with a large active area (80 mm²) and 450 μm thickness, with the aim of obtaining better minimum detection limits (MDL) and reducing measuring times. The Upilex extraction window has been replaced by a more resistant one (Si₃N₄). A comparison between the old Si(Li) and the new SDD for aerosol samples collected on different substrata like Teflon, Kapton and Nuclepore demonstrated the better performance of the SDD. It allows obtaining better results (higher counting statistics, lower MDLs) even in shorter measuring times, thus allowing very fast analysis of both daily and hourly samples.

  9. Nanostructured and nanolayer coatings based on nitrides of the metals structure study and structure and composition standard samples set development

    Directory of Open Access Journals (Sweden)

    E. B. Chabina

    2014-01-01

    Full Text Available Research by analytical microscopy and X-ray analysis has allowed the development of a set of standard samples of composition and structure for the control of strengthening nanostructured and nanolayer coatings based on metal nitrides, which are used to protect critical compressor parts of gas turbine engines from dust erosion, corrosion and oxidation.

  10. Consistency and Reconciliation Model In Regional Development Planning

    Directory of Open Access Journals (Sweden)

    Dina Suryawati

    2016-10-01

    Full Text Available The aim of this study was to identify the problems and determine the conceptual model of regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models that were successfully constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that is well-integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in the region involves a technocratic system, that is, both top-down and bottom-up systems of participation. Both must be balanced, must not overlap, and must not dominate each other. Keywords: regional, development, planning, consistency, reconciliation.

  11. An algebraic method for constructing stable and consistent autoregressive filters

    International Nuclear Information System (INIS)

    Harlim, John; Hong, Hoon; Robbins, Jacob L.

    2015-01-01

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR-models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern
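    The following sketch illustrates only the two ingredients named above in their classical form: obtaining AR coefficients from long-time average statistics (autocovariances, via the Yule-Walker equations) and checking the classical stability condition. It is not the authors' algebraic construction and omits the Adams-Bashforth order-two consistency constraint; all names and numbers are illustrative.

```python
import numpy as np

def yule_walker_ar(acov):
    """Solve the Yule-Walker equations for AR coefficients of order p = len(acov) - 1,
    using only (long-time average) autocovariances -- no training series is needed."""
    p = len(acov) - 1
    R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
    r = np.array(acov[1:p + 1])
    phi = np.linalg.solve(R, r)
    noise_var = acov[0] - phi @ r
    return phi, noise_var

def is_stable(phi):
    """Classical AR stability: all roots of 1 - phi_1 z - ... - phi_p z^p lie
    outside the unit circle."""
    return np.all(np.abs(np.roots(np.r_[1.0, -phi][::-1])) > 1.0)

# Toy usage: exact autocovariances of an AR(1) process with coefficient a = 0.7.
a, sigma2 = 0.7, 1.0
acov = [sigma2 / (1 - a**2) * a**k for k in range(3)]
phi, var = yule_walker_ar(acov)
print(phi, is_stable(phi))   # phi ~ [0.7, 0.0], stable
```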

  12. Adult age differences in memory for schema-consistent and schema-inconsistent objects in a real-world setting.

    Science.gov (United States)

    Prull, Matthew W

    2015-01-01

    The present study examined age-related differences in the inconsistency effect, in which memory is enhanced for schema-inconsistent information compared to schema-consistent information. Young and older adults studied schema-consistent and schema-inconsistent objects in an academic office under either intentional or incidental encoding instructions, and were given two recognition tests either immediately or after 48 hr: A yes/no item recognition test that included modified remember/know judgments and a token recognition test that required determining whether an original object was replaced with a different object with the same name. Young and older adults showed equivalent inconsistency effects in both item and token recognition tests, although older adults reported phenomenologically less rich memories of schema-inconsistent objects relative to young adults. These findings run counter to previous reports suggesting that aging is associated with processing declines at encoding that impair memory for details of schema-inconsistent or distinctive events. The results are consistent with a retrieval-based account in which age-related difficulties in retrieving contextual details can be offset by environmental support.

  13. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    Full Text Available We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
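    A minimal sketch of the cut-and-project idea in one dimension is given below (the paper builds two-dimensional sampling patterns; this fragment only shows the construction principle). The window choice and patch size are assumptions for illustration.

```python
import numpy as np

TAU = (1 + np.sqrt(5)) / 2  # golden ratio

def cut_and_project_1d(n_range=30, window=None):
    """1D quasicrystal (Fibonacci-type) point set by cut-and-project: take Z^2 lattice
    points, keep those whose component along the 'internal' direction falls inside a
    window, and return their components along the physical direction of slope 1/TAU."""
    norm = np.sqrt(TAU**2 + 1)
    e_par = np.array([TAU, 1.0]) / norm     # physical (parallel) direction
    e_perp = np.array([-1.0, TAU]) / norm   # internal (perpendicular) direction
    if window is None:
        window = (1 + TAU) / norm           # canonical window: unit square projected onto e_perp
    pts = []
    for m in range(-n_range, n_range + 1):
        for n in range(-n_range, n_range + 1):
            v = np.array([m, n], dtype=float)
            if abs(v @ e_perp) <= window / 2:
                pts.append(v @ e_par)
    return np.sort(np.array(pts))

qc = cut_and_project_1d()
qc = qc[np.abs(qc) < 20]              # keep the central part, away from the finite-patch boundary
gaps = np.unique(np.round(np.diff(qc), 6))
print(gaps, gaps.max() / gaps.min())  # two gap lengths, long/short ratio close to the golden ratio
```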

  14. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known...... to be inconsistent. Our method restores consistency of the nonparametric bootstrap by altering the shape of the criterion function defining the estimator whose distribution we seek to approximate. This modification leads to a generic and easy-to-implement resampling method for inference that is conceptually distinct...... from other available distributional approximations based on some form of modified bootstrap. We offer simulation evidence showcasing the performance of our inference method in finite samples. An extension of our methodology to general M-estimation problems is also discussed....

  15. Long-term data set of small mammals from owl pellets in the Atlantic-Mediterranean transition area.

    Science.gov (United States)

    Escribano, Nora; Galicia, David; Ariño, Arturo H; Escala, Carmen

    2016-09-27

    We describe the pellet sampling data set from the Vertebrate Collection of the Museum of Zoology of the University of Navarra. This data set compiles all information about small mammals obtained from the analysis of owl pellets. The collection consists of skulls, mandibles, and some skeletons from more than 72,000 georeferenced specimens of 36 species. These specimens come from the Iberian Peninsula, although most samples were collected in Navarra, a highly diverse transitional area of 10,000 square kilometres sitting across three biogeographical regions. The collection spans more than forty years and is still growing as a result of the establishment of a barn owl pellet monitoring network in 2015. The program will provide critical information about the evolution of the small mammals' community in this transition zone as it changes over time.

  16. Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.

    Science.gov (United States)

    Heislbetz, Sandra; Rauhut, Guntram

    2010-03-28

    A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach vibrational complete active space self-consistent field calculations will be discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.

  17. Parcels and Land Ownership, This data set consists of digital map files containing parcel-level cadastral information obtained from property descriptions. Cadastral features contained in the data set include real property boundary lines, rights-of-way boundaries, property dimensions, Published in Not Provided, 1:2400 (1in=200ft) scale, Racine County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Parcels and Land Ownership dataset current as of unknown. This data set consists of digital map files containing parcel-level cadastral information obtained from...

  18. Probabilistic Open Set Recognition

    Science.gov (United States)

    Jain, Lalit Prithviraj

    support vector machines. Building from the success of statistical EVT-based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
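    The fragment below is a loose, hypothetical sketch of the EVT ingredient described above: fitting a Weibull distribution to the distances from a positive training sample to its closest negatives and using the survival function as an inclusion probability that decays away from the positive class. It is not the EVLB algorithm; the helper names, the use of scipy's weibull_min, and all data are assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_tail_model(positive, negatives, k_tail=10):
    """For one positive training sample, fit a Weibull (EVT) distribution to the
    distances of its k_tail closest negative samples. Hypothetical sketch only."""
    d = np.sort(np.linalg.norm(negatives - positive, axis=1))[:k_tail]
    shape, loc, scale = weibull_min.fit(d, floc=0.0)   # fix the location at zero
    return shape, scale

def inclusion_probability(x, positive, shape, scale):
    """Probability-like score that decays as x moves away from the positive sample:
    the Weibull survival function evaluated at the distance."""
    d = np.linalg.norm(x - positive)
    return float(weibull_min.sf(d, shape, loc=0.0, scale=scale))

rng = np.random.default_rng(0)
pos = np.zeros(2)
neg = rng.normal(loc=3.0, scale=1.0, size=(200, 2))
shape, scale = fit_tail_model(pos, neg)
print(inclusion_probability(np.array([0.5, 0.0]), pos, shape, scale))  # close -> high
print(inclusion_probability(np.array([5.0, 5.0]), pos, shape, scale))  # far   -> low
```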

  19. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  20. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  1. Culture, cross-role consistency, and adjustment: testing trait and cultural psychology perspectives.

    Science.gov (United States)

    Church, A Timothy; Anderson-Harumi, Cheryl A; del Prado, Alicia M; Curtis, Guy J; Tanaka-Matsumi, Junko; Valdez Medina, José L; Mastor, Khairul A; White, Fiona A; Miramontes, Lilia A; Katigbak, Marcia S

    2008-09-01

    Trait and cultural psychology perspectives on cross-role consistency and its relation to adjustment were examined in 2 individualistic cultures, the United States (N=231) and Australia (N=195), and 4 collectivistic cultures, Mexico (N=199), the Philippines (N=195), Malaysia (N=217), and Japan (N=180). Cross-role consistency in trait ratings was evident in all cultures, supporting trait perspectives. Cultural comparisons of mean consistency provided support for cultural psychology perspectives as applied to East Asian cultures (i.e., Japan) but not collectivistic cultures more generally. Some but not all of the hypothesized predictors of consistency were supported across cultures. Cross-role consistency predicted aspects of adjustment in all cultures, but prediction was most reliable in the U.S. sample and weakest in the Japanese sample. Alternative constructs proposed by cultural psychologists--personality coherence, social appraisal, and relationship harmony--predicted adjustment in all cultures but were not, as hypothesized, better predictors of adjustment in collectivistic cultures than in individualistic cultures.

  2. The consistency assessment of topological relations in cartographic generalization

    Science.gov (United States)

    Zheng, Chunyan; Guo, Qingsheng; Du, Xiaochu

    2006-10-01

    The field of research in the generalization assessment has been less studied than the generalization process itself, and it is very important to keep topological relation consistency for meeting generalization quality. This paper proposes a methodology to assess the quality of generalized map from topological relations consistency. Taking roads (including railway) and residential areas for examples, from the viewpoint of the spatial cognition, some issues about topological consistency in different map scales are analyzed. The statistic information about the inconsistent topological relations can be obtained by comparing the two matrices: one is the matrix for the topological relations in the generalized map; the other is the theoretical matrix for the topological relations that should be maintained after generalization. Based on the fuzzy set theory and the classification of map object types, the consistency evaluation model of topological relations is established. The paper proves the feasibility of the method through the example about how to evaluate the local topological relations between simple roads and residential area finally.

  3. EVALUATION OF SETTING TIME OF MINERAL TRIOXIDE AGGREGATE AND BIODENTINE IN THE PRESENCE OF HUMAN BLOOD AND MINIMAL ESSENTIAL MEDIA - AN IN VITRO STUDY

    Directory of Open Access Journals (Sweden)

    Gopi Krishna Reddy Moosani

    2017-12-01

    Full Text Available BACKGROUND The aim of this study was to compare the ability of MTA and Biodentine to set in the presence of human blood and minimal essential media. MATERIALS AND METHODS Eighty 1 × 3 inch plexiglass sheets were taken. In each sheet, 10 wells were created and divided into 10 groups. Odd-numbered groups were filled with MTA and even groups were filled with Biodentine. Within these groups 4 groups were control groups and the remaining 6 groups were experimental groups (i.e., blood, minimal essential media, and blood plus minimal essential media). Each block was submerged for 4, 5, 6, 8, 24, 36, and 48 hours in an experimental liquid at 37°C with 100% humidity. RESULTS The setting times varied for the 2 materials, with contrasting differences in the setting times between MTA and Biodentine samples. The majority of the MTA samples had not set by 24 hours, but by 36 hours all MTA samples had set, while all Biodentine samples had set by 6 hours. There is a significant difference in setting time between MTA and Biodentine. CONCLUSION This outcome draws into question the proposed setting time given by each respective manufacturer. Furthermore, despite Biodentine being marketed as a direct competitor to MTA with superior handling properties, MTA consistently set at a faster rate under the conditions of this study.

  4. Pi sampling: a methodical and flexible approach to initial macromolecular crystallization screening

    International Nuclear Information System (INIS)

    Gorrec, Fabrice; Palmer, Colin M.; Lebon, Guillaume; Warne, Tony

    2011-01-01

    Pi sampling, derived from the incomplete factorial approach, is an effort to maximize the diversity of macromolecular crystallization conditions and to facilitate the preparation of 96-condition initial screens. The Pi sampling method is derived from the incomplete factorial approach to macromolecular crystallization screen design. The resulting ‘Pi screens’ have a modular distribution of a given set of up to 36 stock solutions. Maximally diverse conditions can be produced by taking into account the properties of the chemicals used in the formulation and the concentrations of the corresponding solutions. The Pi sampling method has been implemented in a web-based application that generates screen formulations and recipes. It is particularly adapted to screens consisting of 96 different conditions. The flexibility and efficiency of Pi sampling is demonstrated by the crystallization of soluble proteins and of an integral membrane-protein sample

  5. Consistency in color parameters of a commonly used shade guide.

    Science.gov (United States)

    Tashkandi, Esam

    2010-01-01

    The use of shade guides to assess the color of natural teeth subjectively remains one of the most common means for dental shade assessment. Any variation in the color parameters of the different shade guides may lead to significant clinical implications. Particularly, since the communication between the clinic and the dental laboratory is based on using the shade guide designation. The purpose of this study was to investigate the consistency of the L∗a∗b∗ color parameters of a sample of a commonly used shade guide. The color parameters of a total of 100 VITAPAN Classical Vacuum shade guide (VITA Zahnfabrik, Bad Säckingen, Germany(were measured using a X-Rite ColorEye 7000A Spectrophotometer (Grand Rapids, Michigan, USA). Each shade guide consists of 16 tabs with different designations. Each shade tab was measured five times and the average values were calculated. The ΔE between the average L∗a∗b∗ value for each shade tab and the average of the 100 shade tabs of the same designation was calculated. Using the Student t-test analysis, no significant differences were found among the measured sample. There is a high consistency level in terms of color parameters of the measured VITAPAN Classical Vacuum shade guide sample tested.
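    The abstract does not state which colour-difference formula was used; assuming the Euclidean CIE76 form, the ΔE computation it describes reduces to the following (all L*a*b* values below are made up for illustration).

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two CIELAB triplets (L*, a*, b*)."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Hypothetical measurements: five repeated readings of one shade tab.
readings = np.array([
    [78.2, 1.1, 18.4],
    [78.0, 1.2, 18.6],
    [78.3, 1.0, 18.5],
    [78.1, 1.1, 18.3],
    [78.2, 1.2, 18.5],
])
tab_mean = readings.mean(axis=0)            # average L*a*b* for this tab
group_mean = np.array([78.5, 1.0, 18.0])    # hypothetical mean of all tabs with that designation
print(delta_e_cie76(tab_mean, group_mean))  # Delta E between the tab and the group average
```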

  6. Assessing the Consistency and Microbiological Effectiveness of Household Water Treatment Practices by Urban and Rural Populations Claiming to Treat Their Water at Home: A Case Study in Peru

    Science.gov (United States)

    Rosa, Ghislaine; Huaylinos, Maria L.; Gil, Ana; Lanata, Claudio; Clasen, Thomas

    2014-01-01

    Background Household water treatment (HWT) can improve drinking water quality and prevent disease if used correctly and consistently by vulnerable populations. Over 1.1 billion people report treating their water prior to drinking it. These estimates, however, are based on responses to household surveys that may exaggerate the consistency and microbiological performance of the practice—key factors for reducing pathogen exposure and achieving health benefits. The objective of this study was to examine how HWT practices are actually performed by households identified as HWT users, according to international monitoring standards. Methods and Findings We conducted a 6-month case study in urban (n = 117 households) and rural (n = 115 households) Peru, a country in which 82.8% of households report treating their water at home. We used direct observation, in-depth interviews, surveys, spot-checks, and water sampling to assess water treatment practices among households that claimed to treat their drinking water at home. While consistency of reported practices was high in both urban (94.8%) and rural (85.3%) settings, availability of treated water (based on self-report) at time of collection was low, with 67.1% and 23.0% of urban and rural households having treated water at all three sampling visits. Self-reported consumption of untreated water in the home among adults and children water of self-reported users was significantly better than source water in the urban setting and negligible but significantly better in the rural setting. However, only 46.3% and 31.6% of households had drinking water water quality. The lack of consistency and sub-optimal microbiological effectiveness also raises questions about the potential of HWT to prevent waterborne diseases. PMID:25522371

  7. Two-Sample Two-Stage Least Squares (TSTSLS estimates of earnings mobility: how consistent are they?

    Directory of Open Access Journals (Sweden)

    John Jerrim

    2016-08-01

    Full Text Available Academics and policymakers have shown great interest in cross-national comparisons of intergenerational earnings mobility. However, producing consistent and comparable estimates of earnings mobility is not a trivial task. In most countries researchers are unable to observe earnings information for two generations. They are thus forced to rely upon imputed data from different surveys instead. This paper builds upon previous work by considering the consistency of the intergenerational correlation (ρ) as well as the elasticity (β), how this changes when using a range of different instrumental (imputer) variables, and highlighting an important but infrequently discussed measurement issue. Our key finding is that, while TSTSLS estimates of β and ρ are both likely to be inconsistent, the magnitude of this problem is much greater for the former than it is for the latter. We conclude by offering advice on estimating earnings mobility using this methodology.
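    A toy sketch of the TSTSLS idea follows, under assumptions of this edit rather than the paper's data or estimator: the first stage is fitted on an auxiliary sample that contains parents' earnings and an imputer variable, parents' earnings are then imputed in the main sample, and the second stage regresses children's earnings on the imputed values. Which dispersion is used when rescaling β into ρ is exactly the kind of measurement issue the paper highlights.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# --- Hypothetical data (illustration only) ---
rng = np.random.default_rng(1)
n_a, n_b = 2000, 2000
educ_a = rng.normal(size=n_a)                                # auxiliary sample A: imputer variable
parent_a = 0.8 * educ_a + rng.normal(scale=0.6, size=n_a)    # parents' earnings observed in A
educ_b = rng.normal(size=n_b)                                # main sample B: imputer variable
parent_b = 0.8 * educ_b + rng.normal(scale=0.6, size=n_b)    # unobserved in practice
child_b = 0.4 * parent_b + rng.normal(scale=0.5, size=n_b)   # children's earnings observed in B

# --- TSTSLS ---
first_stage = ols(educ_a, parent_a)                          # stage 1 on sample A
parent_hat = first_stage[0] + first_stage[1] * educ_b        # impute parents' earnings in sample B
beta = ols(parent_hat, child_b)[1]                           # stage 2: elasticity estimate
rho = beta * parent_a.std() / child_b.std()                  # rescale with dispersion from sample A
print(beta, rho)
```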

  8. Syneresis and rheological behaviors of set yogurt containing green tea and green coffee powders.

    Science.gov (United States)

    Dönmez, Özge; Mogol, Burçe Ataç; Gökmen, Vural

    2017-02-01

    This study aimed to investigate the effect of added green coffee powder (GCP) and green tea powder (GTP) on syneresis behavior and consistency of set yogurts. Adding GCP (1 or 2%) decreased syneresis rate. The effect of GTP on the syneresis rate was concentration dependent. In comparison to the control, GTP decreased syneresis rate when it was added at 0.02%, but it caused an increase when added at 2%. No significant difference was observed in the syneresis rates when GTP was added at 1 and 0.01%, until 14 and 7 d of storage, respectively. The Herschel-Bulkley model parameters indicated that the consistency of control was considerably lower than that of GCP yogurts during 14 d, whereas it was higher at the end of storage. The GTP yogurt results showed that the consistency coefficients of GTP yogurts were different from those of control samples until 14 d of storage. In conclusion, GTP and GCP behaved differently in acidified gel networks of set yogurt, modifying its rheological behavior, as they have different profiles and concentrations of polyphenols. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
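    For reference, the Herschel-Bulkley parameters mentioned above are typically obtained by fitting shear stress versus shear rate; a minimal sketch with made-up flow-curve numbers (not the study's data) is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(shear_rate, tau0, K, n):
    """Herschel-Bulkley model: shear stress = yield stress + K * shear_rate**n."""
    return tau0 + K * shear_rate**n

# Hypothetical flow-curve data for one yogurt sample (shear rate in 1/s, stress in Pa).
shear_rate = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
stress = np.array([12.1, 13.0, 14.9, 16.8, 19.5, 24.9, 30.2])

params, _ = curve_fit(herschel_bulkley, shear_rate, stress, p0=[10.0, 1.0, 0.5])
tau0, K, n = params
print(f"yield stress={tau0:.2f} Pa, consistency coefficient K={K:.2f}, flow index n={n:.2f}")
```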

  9. Varying Associations Between Body Mass Index and Physical and Cognitive Function in Three Samples of Older Adults Living in Different Settings.

    Science.gov (United States)

    Kiesswetter, Eva; Schrader, Eva; Diekmann, Rebecca; Sieber, Cornel Christian; Volkert, Dorothee

    2015-10-01

    The study investigates variations in the associations between body mass index (BMI) and (a) physical and (b) cognitive function across three samples of older adults living in different settings, and moreover determines if the association between BMI and physical function is confounded by cognitive abilities. One hundred ninety-five patients of a geriatric day hospital, 322 persons receiving home care (HC), and 183 nursing home (NH) residents were examined regarding BMI, cognitive (Mini-Mental State Examination), and physical function (Barthel Index for activities of daily living). Differences in Mini-Mental State Examination and activities of daily living scores between BMI groups (Examination impairments increased from the geriatric day hospital over the HC to the NH sample, whereas prevalence rates of obesity and severe obesity (35%, 33%, 25%) decreased. In geriatric day hospital patients cognitive and physical function did not differ between BMI groups. In the HC and NH samples, cognitive abilities were highest in obese and severely obese subjects. Unadjusted mean activities of daily living scores differed between BMI groups in HC receivers (51.6±32.2, 61.8±26.1, 67.5±28.3, 72.0±23.4, 66.2±24.2, p = .002) and NH residents (35.6±28.6, 48.1±25.7, 39.9±28.7, 50.8±24.0, 57.1±28.2, p = .029). In both samples significance was lost after adjustment indicating cognitive function as dominant confounder. In older adults the associations between BMI and physical and cognitive function were dependent on the health and care status corresponding to the setting. In the HC and the NH samples, cognitive status, as measured by the Mini-Mental State Examination, emerged as an important confounder within the association between BMI and physical function. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    Science.gov (United States)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model
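    A stripped-down sketch of the Direct Sampling idea for a single time series is given below: each new value is copied from the training data at a randomly scanned position whose preceding values resemble the last simulated ones. This is an illustration of the principle, not the published DS implementation; the distance measure, threshold and scan limit are assumptions.

```python
import numpy as np

def direct_sampling_1d(train, length, n_neighbors=5, threshold=0.05, max_scan=1000, seed=0):
    """Stripped-down Direct Sampling for a 1D series: each new value is copied from the
    training series at the first randomly scanned position whose preceding n_neighbors
    values are sufficiently similar to the last simulated ones."""
    rng = np.random.default_rng(seed)
    scale = train.max() - train.min()
    sim = list(train[:n_neighbors])                 # seed the simulation with a training snippet
    for _ in range(length - n_neighbors):
        pattern = np.array(sim[-n_neighbors:])
        best_idx, best_dist = None, np.inf
        for idx in rng.integers(n_neighbors, len(train) - 1, size=max_scan):
            cand = train[idx - n_neighbors:idx]
            dist = np.mean(np.abs(cand - pattern)) / scale
            if dist < best_dist:
                best_idx, best_dist = idx, dist
            if dist <= threshold:                   # accept the first sufficiently similar event
                best_idx = idx
                break
        sim.append(train[best_idx])                 # copy the value following the matched pattern
    return np.array(sim)

# Toy usage on a synthetic "rainfall-like" series with dry spells and wet bursts.
rng = np.random.default_rng(42)
train = np.maximum(0.0, rng.gamma(0.4, 8.0, size=3000) - 2.0)
sim = direct_sampling_1d(train, length=500)
print(sim[:20])
```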

  11. Personality and Situation Predictors of Consistent Eating Patterns.

    Science.gov (United States)

    Vainik, Uku; Dubé, Laurette; Lu, Ji; Fellows, Lesley K

    2015-01-01

    A consistent eating style might be beneficial to avoid overeating in a food-rich environment. Eating consistency entails maintaining a similar dietary pattern across different eating situations. This construct is relatively under-studied, but the available evidence suggests that eating consistency supports successful weight maintenance and decreases risk for metabolic syndrome and cardiovascular disease. Yet, personality and situation predictors of consistency have not been studied. A community-based sample of 164 women completed various personality tests, and 139 of them also reported their eating behaviour 6 times/day over 10 observational days. We focused on observations with meals (breakfast, lunch, or dinner). The participants indicated if their momentary eating patterns were consistent with their own baseline eating patterns in terms of healthiness or size of the meal. Further, participants described various characteristics of each eating situation. Eating consistency was positively predicted by trait self-control. Eating consistency was undermined by eating in the evening, eating with others, eating away from home, having consumed alcohol and having undertaken physical exercise. Interactions emerged between personality traits and situations, including punishment sensitivity, restraint, physical activity and alcohol consumption. Trait self-control and several eating situation variables were related to eating consistency. These findings provide a starting point for targeting interventions to improve consistency, suggesting that a focus on self-control skills, together with addressing contextual factors such as social situations and time of day, may be most promising. This work is a first step to provide people with the tools they need to maintain a consistently healthy lifestyle in a food-rich environment.

  12. Personality and Situation Predictors of Consistent Eating Patterns.

    Directory of Open Access Journals (Sweden)

    Uku Vainik

    Full Text Available A consistent eating style might be beneficial to avoid overeating in a food-rich environment. Eating consistency entails maintaining a similar dietary pattern across different eating situations. This construct is relatively under-studied, but the available evidence suggests that eating consistency supports successful weight maintenance and decreases risk for metabolic syndrome and cardiovascular disease. Yet, personality and situation predictors of consistency have not been studied.A community-based sample of 164 women completed various personality tests, and 139 of them also reported their eating behaviour 6 times/day over 10 observational days. We focused on observations with meals (breakfast, lunch, or dinner. The participants indicated if their momentary eating patterns were consistent with their own baseline eating patterns in terms of healthiness or size of the meal. Further, participants described various characteristics of each eating situation.Eating consistency was positively predicted by trait self-control. Eating consistency was undermined by eating in the evening, eating with others, eating away from home, having consumed alcohol and having undertaken physical exercise. Interactions emerged between personality traits and situations, including punishment sensitivity, restraint, physical activity and alcohol consumption.Trait self-control and several eating situation variables were related to eating consistency. These findings provide a starting point for targeting interventions to improve consistency, suggesting that a focus on self-control skills, together with addressing contextual factors such as social situations and time of day, may be most promising. This work is a first step to provide people with the tools they need to maintain a consistently healthy lifestyle in a food-rich environment.

  13. IGSA: Individual Gene Sets Analysis, including Enrichment and Clustering.

    Science.gov (United States)

    Wu, Lingxiang; Chen, Xiujie; Zhang, Denan; Zhang, Wubing; Liu, Lei; Ma, Hongzhe; Yang, Jingbo; Xie, Hongbo; Liu, Bo; Jin, Qing

    2016-01-01

    Analysis of gene sets has been widely applied in various high-throughput biological studies. One weakness of the traditional methods is that they neglect the heterogeneity of gene expression across samples, which may lead to the omission of some specific and important gene sets. It is also difficult for them to reflect the severity of disease and provide expression profiles of gene sets for individuals. We developed a software application called IGSA that provides powerful analytical capacity for gene set enrichment and sample clustering. IGSA calculates gene set expression scores for each sample and uses an accumulating clustering strategy that groups samples according to the progression of disease from mild to severe. We evaluated the performance of IGSA on gastric, pancreatic and ovarian cancer data sets. We also compared the results of IGSA for KEGG pathway enrichment with DAVID, GSEA, SPIA and ssGSEA, and analyzed the results of IGSA clustering under different similarity measures. Notably, IGSA proved to be more sensitive and specific in finding significant pathways, and can indicate related changes in pathways with the severity of disease. In addition, IGSA provides a significant gene set profile for each sample.
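    The abstract does not specify how IGSA computes its per-sample gene set expression scores; as a stand-in, the sketch below uses the mean z-score of member genes, which conveys the idea of scoring gene sets per individual sample. Function names and toy data are assumptions.

```python
import numpy as np

def gene_set_scores(expr, gene_index, gene_sets):
    """Per-sample gene-set expression scores as the mean z-score of member genes.
    expr: genes x samples matrix; gene_index: gene name -> row; gene_sets: name -> gene list.
    A simple stand-in score, not the scoring used by IGSA itself."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    scores = {}
    for set_name, genes in gene_sets.items():
        rows = [gene_index[g] for g in genes if g in gene_index]
        scores[set_name] = z[rows].mean(axis=0)     # one score per sample
    return scores

# Hypothetical toy data: 4 genes x 6 samples, two small gene sets.
rng = np.random.default_rng(0)
expr = rng.normal(loc=5.0, scale=1.0, size=(4, 6))
gene_index = {"G1": 0, "G2": 1, "G3": 2, "G4": 3}
gene_sets = {"pathway_A": ["G1", "G2"], "pathway_B": ["G3", "G4"]}
for name, s in gene_set_scores(expr, gene_index, gene_sets).items():
    print(name, np.round(s, 2))
```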

  14. Consistent two-dimensional visualization of protein-ligand complex series

    Directory of Open Access Journals (Sweden)

    Stierand Katrin

    2011-06-01

    Full Text Available Abstract Background The comparative two-dimensional graphical representation of protein-ligand complex series featuring different ligands bound to the same active site offers a quick insight in their binding mode differences. In comparison to arbitrary orientations of the residue molecules in the individual complex depictions a consistent placement improves the legibility and comparability within the series. The automatic generation of such consistent layouts offers the possibility to apply it to large data sets originating from computer-aided drug design methods. Results We developed a new approach, which automatically generates a consistent layout of interacting residues for a given series of complexes. Based on the structural three-dimensional input information, a global two-dimensional layout for all residues of the complex ensemble is computed. The algorithm incorporates the three-dimensional adjacencies of the active site residues in order to find an universally valid circular arrangement of the residues around the ligand. Subsequent to a two-dimensional ligand superimposition step, a global placement for each residue is derived from the set of already placed ligands. The method generates high-quality layouts, showing mostly overlap-free solutions with molecules which are displayed as structure diagrams providing interaction information in atomic detail. Application examples document an improved legibility compared to series of diagrams whose layouts are calculated independently from each other. Conclusions The presented method extends the field of complex series visualizations. A series of molecules binding to the same protein active site is drawn in a graphically consistent way. Compared to existing approaches these drawings substantially simplify the visual analysis of large compound series.

  15. Effects of goal-setting skills on students’academic performance in english language in Enugu Nigeria

    Directory of Open Access Journals (Sweden)

    Abe Iyabo Idowu

    2014-07-01

    Full Text Available The study investigated the effectiveness of goal-setting skills among Senior Secondary II students’ academic performance in English language in Enugu Metropolis, Enugu state, Nigeria. Quasi-experimental pre-test, post- test control group design was adopted for the study. The initial sample was 147 participants (male and female Senior Secondary School II students drawn from two public schools in Enugu zone of Enugu Metropolis. The final sample for the intervention consisted of 80 participants. This sample satisfied the condition for selection from the baseline data. Two research hypotheses were formulated and tested at 0.05 level of significance. Data generated were analyzed using the mean, standard deviation and t-test statistical method. The findings showed that performance in English language was enhanced among participants exposed to goal-setting intervention compared to those in the control group. The study also showed that there is a significant gender difference in students’ performance with female participants recording a higher mean score than males. Parental level of education was also found to be related to performance in English Language. Based on the findings, goal-setting intervention was recommended as a strategy to enhancing students’ academic performance particularly in English Language. 

  16. Performance and consistency of indicator groups in two biodiversity hotspots.

    Directory of Open Access Journals (Sweden)

    Joaquim Trindade-Filho

    Full Text Available In a world limited by data availability and limited funds for conservation, scientists and practitioners must use indicator groups to define spatial conservation priorities. Several studies have evaluated the effectiveness of indicator groups, but still little is known about the consistency in performance of these groups in different regions, which would allow their a priori selection.We systematically examined the effectiveness and the consistency of nine indicator groups in representing mammal species in two top-ranked Biodiversity Hotspots (BH: the Brazilian Cerrado and the Atlantic Forest. To test for group effectiveness we first found the best sets of sites able to maximize the representation of each indicator group in the BH and then calculated the average representation of different target species by the indicator groups in the BH. We considered consistent indicator groups whose representation of target species was not statistically different between BH. We called effective those groups that outperformed the target-species representation achieved by random sets of species. Effective indicator groups required the selection of less than 2% of the BH area for representing target species. Restricted-range species were the most effective indicators for the representation of all mammal diversity as well as target species. It was also the only group with high consistency.We show that several indicator groups could be applied as shortcuts for representing mammal species in the Cerrado and the Atlantic Forest to develop conservation plans, however, only restricted-range species consistently held as the most effective indicator group for such a task. This group is of particular importance in conservation planning as it captures high diversity of endemic and endangered species.

  17. Performance and consistency of indicator groups in two biodiversity hotspots.

    Science.gov (United States)

    Trindade-Filho, Joaquim; Loyola, Rafael Dias

    2011-01-01

    In a world limited by data availability and limited funds for conservation, scientists and practitioners must use indicator groups to define spatial conservation priorities. Several studies have evaluated the effectiveness of indicator groups, but still little is known about the consistency in performance of these groups in different regions, which would allow their a priori selection. We systematically examined the effectiveness and the consistency of nine indicator groups in representing mammal species in two top-ranked Biodiversity Hotspots (BH): the Brazilian Cerrado and the Atlantic Forest. To test for group effectiveness we first found the best sets of sites able to maximize the representation of each indicator group in the BH and then calculated the average representation of different target species by the indicator groups in the BH. We considered consistent indicator groups whose representation of target species was not statistically different between BH. We called effective those groups that outperformed the target-species representation achieved by random sets of species. Effective indicator groups required the selection of less than 2% of the BH area for representing target species. Restricted-range species were the most effective indicators for the representation of all mammal diversity as well as target species. It was also the only group with high consistency. We show that several indicator groups could be applied as shortcuts for representing mammal species in the Cerrado and the Atlantic Forest to develop conservation plans, however, only restricted-range species consistently held as the most effective indicator group for such a task. This group is of particular importance in conservation planning as it captures high diversity of endemic and endangered species.

  18. Can survival prediction be improved by merging gene expression data sets?

    Directory of Open Access Journals (Sweden)

    Haleh Yasrebi

    Full Text Available BACKGROUND: High-throughput gene expression profiling technologies, generating a wealth of data, are increasingly used for characterization of tumor biopsies for clinical trials. By applying machine learning algorithms to such clinically documented data sets, one hopes to improve tumor diagnosis, prognosis, as well as prediction of treatment response. However, the limited number of patients enrolled in a single trial study limits the power of machine learning approaches due to over-fitting. One could partially overcome this limitation by merging data from different studies. Nevertheless, such data sets differ from each other with regard to technical biases, patient selection criteria and follow-up treatment. It is therefore not clear at all whether the advantage of increased sample size outweighs the disadvantage of higher heterogeneity of merged data sets. Here, we present a systematic study to answer this question specifically for breast cancer data sets. We use survival prediction based on Cox regression as an assay to measure the added value of merged data sets. RESULTS: Using time-dependent Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) and hazard ratio as performance measures, we see overall no significant improvement or deterioration of survival prediction with merged data sets as compared to individual data sets. This apparently was due to the fact that a few genes with strong prognostic power were not available on all microarray platforms and thus were not retained in the merged data sets. Surprisingly, we found that the overall best performance was achieved with a single-gene predictor consisting of CYB5D1. CONCLUSIONS: Merging did not deteriorate performance on average despite (a) the diversity of microarray platforms used, (b) the heterogeneity of patient cohorts, (c) the heterogeneity of breast cancer disease, (d) substantial variation of time to death or relapse, and (e) the reduced number of genes in the merged data

  19. Cognitive consistency and math-gender stereotypes in Singaporean children.

    Science.gov (United States)

    Cvencek, Dario; Meltzoff, Andrew N; Kapur, Manu

    2014-01-01

    In social psychology, cognitive consistency is a powerful principle for organizing psychological concepts. There have been few tests of cognitive consistency in children and no research about cognitive consistency in children from Asian cultures, who pose an interesting developmental case. A sample of 172 Singaporean elementary school children completed implicit and explicit measures of math-gender stereotype (male=math), gender identity (me=male), and math self-concept (me=math). Results showed strong evidence for cognitive consistency; the strength of children's math-gender stereotypes, together with their gender identity, significantly predicted their math self-concepts. Cognitive consistency may be culturally universal and a key mechanism for developmental change in social cognition. We also discovered that Singaporean children's math-gender stereotypes increased as a function of age and that boys identified with math more strongly than did girls despite Singaporean girls' excelling in math. The results reveal both cultural universals and cultural variation in developing social cognition. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distribu......A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple...... distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low...... for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most...
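    The single-site building block behind such protocols is classical reservoir sampling, which maintains a uniform sample from a stream of unknown length; a minimal version is sketched below (the communication-efficient distributed coordination described in the article is not shown).

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Maintain a uniform random sample of size k (without replacement) from a stream
    of unknown length -- classic reservoir sampling (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for n, item in enumerate(stream):
        if n < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, n)       # item n+1 kept with probability k / (n + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=10))
```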

  1. RNA-seq reveals more consistent reference genes for gene expression studies in human non-melanoma skin cancers

    Directory of Open Access Journals (Sweden)

    Van L.T. Hoang

    2017-08-01

    Full Text Available Identification of appropriate reference genes (RGs) is critical to accurate data interpretation in quantitative real-time PCR (qPCR) experiments. In this study, we have utilised next generation RNA sequencing (RNA-seq) to analyse the transcriptome of a panel of non-melanoma skin cancer lesions, identifying genes that are consistently expressed across all samples. Genes encoding ribosomal proteins were amongst the most stable in this dataset. This RNA-seq data set was validated using qPCR to confirm the suitability of a set of highly stable genes for use as qPCR RGs. These genes will provide a valuable resource for the normalisation of qPCR data for the analysis of non-melanoma skin cancer.
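    One simple screening criterion consistent with the idea above is to rank genes by their coefficient of variation across samples on the log scale; the sketch below illustrates this, with made-up counts and gene names, and is not the full RNA-seq plus qPCR analysis performed in the study.

```python
import numpy as np

def rank_reference_candidates(expr, gene_names, min_mean=1.0):
    """Rank genes by expression stability across samples using the coefficient of
    variation (CV) on log2 counts: low CV and adequate expression = good RG candidate.
    A simple screening criterion, not the study's full analysis."""
    log_expr = np.log2(expr + 1.0)
    mean = log_expr.mean(axis=1)
    cv = log_expr.std(axis=1) / np.maximum(mean, 1e-9)
    order = np.argsort(cv)
    return [(gene_names[i], float(cv[i])) for i in order if mean[i] >= min_mean]

# Hypothetical toy matrix: 5 genes x 8 samples of normalized counts.
rng = np.random.default_rng(3)
expr = np.abs(np.vstack([
    rng.normal(500, 20, 8),    # stable, highly expressed -> good candidate
    rng.normal(500, 200, 8),   # variable
    rng.normal(50, 5, 8),      # stable, moderately expressed
    rng.normal(5, 4, 8),       # low and noisy
    rng.normal(2000, 60, 8),   # stable housekeeping-like
]))
print(rank_reference_candidates(expr, ["RPL13A", "GENE2", "GENE3", "GENE4", "RPLP0"]))
```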

  2. Initial Self-Consistent 3D Electron-Cloud Simulations of the LHC Beam with the Code WARP+POSINST

    International Nuclear Information System (INIS)

    Vay, J; Furman, M A; Cohen, R H; Friedman, A; Grote, D P

    2005-01-01

    We present initial results for the self-consistent beam-cloud dynamics simulations for a sample LHC beam, using a newly developed set of modeling capability based on a merge [1] of the three-dimensional parallel Particle-In-Cell (PIC) accelerator code WARP [2] and the electron-cloud code POSINST [3]. Although the storage ring model we use as a test bed to contain the beam is much simpler and shorter than the LHC, its lattice elements are realistically modeled, as is the beam and the electron cloud dynamics. The simulated mechanisms for generation and absorption of the electrons at the walls are based on previously validated models available in POSINST [3, 4

  3. Renormalization in self-consistent approximation schemes at finite temperature I: theory

    International Nuclear Information System (INIS)

    Hees, H. van; Knoll, J.

    2001-07-01

Within finite temperature field theory, we show that truncated non-perturbative self-consistent Dyson resummation schemes can be renormalized with local counter-terms defined at the vacuum level. The requirements are that the underlying theory is renormalizable and that the self-consistent scheme follows Baym's Φ-derivable concept. The scheme generates both the renormalized self-consistent equations of motion and the closed equations for the infinite set of counter-terms. At the same time, the corresponding 2PI-generating functional and the thermodynamic potential can be renormalized, in consistency with the equations of motion. This guarantees that the standard Φ-derivable properties, such as thermodynamic consistency and exact conservation laws, also hold for the renormalized approximation scheme. The proof uses the techniques of BPHZ-renormalization to cope with the explicit and the hidden overlapping vacuum divergences. (orig.)

  4. Sample Selection for Training Cascade Detectors.

    Science.gov (United States)

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
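The core idea, selecting the most informative false positives from one stage to train the next, can be sketched as a simple hard-negative mining step. The code below is a schematic illustration under assumed inputs (any scoring classifier would do), not the authors' cascade implementation.

```python
import numpy as np

def select_hard_negatives(scores, labels, n_select):
    """Pick the negatives that the current stage scores highest (most confusing).

    scores  : detector scores for a pool of candidate windows
    labels  : 0 for true negatives, 1 for positives (positives are excluded)
    n_select: how many false positives to feed into the next stage
    """
    neg_idx = np.where(labels == 0)[0]
    # Sort negatives by descending score: the highest-scoring negatives are the
    # most informative false positives for the next cascade stage.
    order = neg_idx[np.argsort(-scores[neg_idx])]
    return order[:n_select]

# Toy example: 8 candidate windows, 2 of them positives.
scores = np.array([0.9, 0.2, 0.7, 0.1, 0.85, 0.4, 0.95, 0.3])
labels = np.array([1,   0,   0,   0,   0,    0,   1,    0 ])
print(select_hard_negatives(scores, labels, n_select=3))  # -> [4 2 5]
```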

  5. Effects of manual threshold setting on image analysis results of a sandstone sample structural characterization by X-ray microtomography

    International Nuclear Information System (INIS)

    Moreira, Anderson C.; Fernandes, Celso P.; Fernandes, Jaquiel S.; Marques, Leonardo C.; Appoloni, Carlos R.; Nagata, Rodrigo

    2009-01-01

X-ray microtomography is a nondestructive nuclear technique widely applied to the structural characterization of samples. The methodology permits the investigation of the porous phase of materials, without special sample preparation, generating bidimensional images of the irradiated sample. The images are generated by mapping the linear attenuation coefficient of the sample. In order to perform a quantitative characterization, the images have to be binarized, separating the porous phase from the material matrix. The choice of the correct threshold in the grey-level histogram is a critical procedure for the creation of the binary images: slight variations of the threshold level lead to substantial variations in the determination of physical parameters, such as porosity and pore size distribution values. The aim of this work is to evaluate these variations for several manual threshold settings. Employing the Imago image analysis software, four operators determined the porosity and pore size distribution of a sandstone sample by image analysis. The microtomography measurements were accomplished with the following scan conditions: 60 kV, 165 μA, 1 mm Al filter, 0.45 deg step size and 180.0 deg total rotation angle, with 3.8 μm and 11 μm spatial resolution. The global average porosity values determined by the operators range from 27.8 to 32.4% for 3.8 μm spatial resolution and 12.3 to 28.3% for 11 μm spatial resolution. Percentage differences among the pore size distributions were also found: for the same pore size range, 5.5% and 17.1% for 3.8 μm and 11 μm spatial resolutions, respectively. (author)
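The sensitivity described here is easy to reproduce on any grayscale tomogram: porosity is simply the fraction of voxels on the pore side of the chosen gray-level threshold, so shifting the threshold shifts the porosity directly. The NumPy sketch below is a generic illustration with synthetic data, not the Imago workflow used by the authors.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for a grayscale micro-CT slice (0-255); real data would be loaded from file.
image = rng.integers(0, 256, size=(512, 512))

def porosity(gray_image, threshold):
    """Binarize: voxels darker than the threshold are counted as pore space."""
    pores = gray_image < threshold
    return pores.mean()  # fraction of pore voxels = porosity estimate

# Show how a modest change in the manual threshold moves the porosity value.
for t in (90, 100, 110):
    print(f"threshold {t}: porosity = {porosity(image, t):.3f}")
```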

  6. Brief report: Assessing youth well-being in global emergency settings: Early results from the Emergency Developmental Assets Profile.

    Science.gov (United States)

    Scales, Peter C; Roehlkepartain, Eugene C; Wallace, Teresa; Inselman, Ashley; Stephenson, Paul; Rodriguez, Michael

    2015-12-01

    The 13-item Emergency Developmental Assets Profile measures the well-being of children and youth in emergency settings such as refugee camps and armed conflict zones, assessing whether young people are experiencing adequate positive relationships and opportunities, and developing positive values, skills, and self-perceptions, despite being in crisis circumstances. The instrument was found to have acceptable and nearly identical internal consistency reliability in 22 administrations in non-emergency samples in 15 countries (.75), and in 4 samples of youth ages 10-18 (n = 1550) in the emergency settings (war refugees and typhoon victims, .74) that are the measure's focus, and evidence of convergent validity. Confirmatory Factor Analysis showed acceptable model fit among those youth in emergency settings. Measures of model fit showed that the Em-DAP has configural and metric invariance across all emergency contexts and scalar invariance across some. The Em-DAP is a promising brief cross-cultural tool for assessing the developmental quality of life as reported by samples of youth in a current humanitarian crisis situation. The results can help to inform international relief program decisions about services and activities to be provided for children, youth, and families in emergency settings. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
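The internal consistency figures quoted here (about .74-.75) are typically Cronbach's alpha coefficients. For readers unfamiliar with the statistic, the sketch below computes alpha for a small made-up item-response matrix; the data are illustrative and unrelated to the Em-DAP samples.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy data: 6 respondents answering 4 Likert-type items.
responses = np.array([
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 3, 4, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```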

  7. Identification of a set of endogenous reference genes for miRNA expression studies in Parkinson's disease blood samples.

    Science.gov (United States)

    Serafin, Alice; Foco, Luisa; Blankenburg, Hagen; Picard, Anne; Zanigni, Stefano; Zanon, Alessandra; Pramstaller, Peter P; Hicks, Andrew A; Schwienbacher, Christine

    2014-10-10

    Research on microRNAs (miRNAs) is becoming an increasingly attractive field, as these small RNA molecules are involved in several physiological functions and diseases. To date, only few studies have assessed the expression of blood miRNAs related to Parkinson's disease (PD) using microarray and quantitative real-time PCR (qRT-PCR). Measuring miRNA expression involves normalization of qRT-PCR data using endogenous reference genes for calibration, but their choice remains a delicate problem with serious impact on the resulting expression levels. The aim of the present study was to evaluate the suitability of a set of commonly used small RNAs as normalizers and to identify which of these miRNAs might be considered reliable reference genes in qRT-PCR expression analyses on PD blood samples. Commonly used reference genes snoRNA RNU24, snRNA RNU6B, snoRNA Z30 and miR-103a-3p were selected from the literature. We then analyzed the effect of using these genes as reference, alone or in any possible combination, on the measured expression levels of the target genes miR-30b-5p and miR-29a-3p, which have been previously reported to be deregulated in PD blood samples. We identified RNU24 and Z30 as a reliable and stable pair of reference genes in PD blood samples.

  8. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu

    2014-06-19

    Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive since temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.

  9. An in situ neutron diffraction study of shape setting shape memory NiTi

    International Nuclear Information System (INIS)

    Benafan, O.; Padula, S.A.; Noebe, R.D.; Brown, D.W.; Clausen, B.; Vaidyanathan, R.

    2013-01-01

A bulk polycrystalline Ni49.9Ti50.1 (at.%) shape memory alloy specimen was shape set while neutron diffraction spectra were simultaneously acquired. The objective was to correlate internal stress, phase volume fraction, and texture measurements (from neutron diffraction spectra) with the macroscopic stress and shape changes (from load cell and extensometry measurements) during the shape setting procedure and subsequent shape recovery. Experimental results showed the evolution of the martensitic transformation (lattice strains, phase fractions and texture) against external constraints during both heating and cooling. Constrained heating resulted in a build-up of stresses during the martensite to austenite transformation, followed by stress relaxation due to thermal expansion, final conversion of retained martensite, and recovery processes. Constrained cooling also resulted in stress build-up arising from thermal contraction and early formation of martensite, followed by relaxation as the austenite fully transformed to martensite. Comparisons were also made between specimens pre-shape set and post-shape set with and without external constraints. The specimens displayed similar shape memory behavior consistent with the microstructure of the shape set sample, which was mostly unchanged by the shape setting process and similar to that of the as-received material.

  10. Estimating the re-identification risk of clinical data sets

    Directory of Open Access Journals (Sweden)

    Dankar Fida

    2012-07-01

Full Text Available Abstract Background De-identification is a common way to protect patient privacy when disclosing clinical data for secondary purposes, such as research. One type of attack that de-identification protects against is linking the disclosed patient data with public and semi-public registries. Uniqueness is a commonly used measure of re-identification risk under this attack. If uniqueness can be measured accurately then the risk from this kind of attack can be managed. In practice, it is often not possible to measure uniqueness directly, therefore it must be estimated. Methods We evaluated the accuracy of uniqueness estimators on clinically relevant data sets. Four candidate estimators were identified because they were evaluated in the past and found to have good accuracy or because they were new and not evaluated comparatively before: the Zayatz estimator, slide negative binomial estimator, Pitman’s estimator, and mu-argus. A Monte Carlo simulation was performed to evaluate the uniqueness estimators on six clinically relevant data sets. We varied the sampling fraction and the uniqueness in the population (the value being estimated). The median relative error and inter-quartile range of the uniqueness estimates was measured across 1000 runs. Results There was no single estimator that performed well across all of the conditions. We developed a decision rule which selected between the Pitman, slide negative binomial and Zayatz estimators depending on the sampling fraction and the difference between estimates. This decision rule had the best consistent median relative error across multiple conditions and data sets. Conclusion This study identified an accurate decision rule that can be used by health privacy researchers and disclosure control professionals to estimate uniqueness in clinical data sets. The decision rule provides a reliable way to measure re-identification risk.
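Sample uniqueness itself, the quantity the estimators above extrapolate to the population, is straightforward to compute from a disclosed data set: it is the proportion of records whose combination of quasi-identifiers occurs exactly once. The pandas sketch below illustrates only that step; the quasi-identifier columns are hypothetical, and the population-level estimators discussed in the article are not implemented here.

```python
import pandas as pd

# Hypothetical disclosed data set with three quasi-identifiers.
df = pd.DataFrame({
    "year_of_birth": [1950, 1950, 1982, 1982, 1990, 1973],
    "gender":        ["F",  "F",  "M",  "M",  "F",  "M"],
    "region":        ["N",  "N",  "S",  "E",  "S",  "W"],
})

quasi_identifiers = ["year_of_birth", "gender", "region"]

# Count how many records share each quasi-identifier combination.
group_sizes = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")

sample_uniques = (group_sizes == 1).sum()
print(f"sample uniqueness = {sample_uniques / len(df):.2f}")  # 4 of 6 records -> 0.67
```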

  11. On the Creation of Hypertext Links in Full-Text Documents: Measurement of Inter-Linker Consistency.

    Science.gov (United States)

    Ellis, David; And Others

    1994-01-01

    Describes a study in which several different sets of hypertext links are inserted by different people in full-text documents. The degree of similarity between the sets is measured using coefficients and topological indices. As in comparable studies of inter-indexer consistency, the sets of links used by different people showed little similarity.…

  12. Self-consistent Bayesian analysis of space-time symmetry studies

    International Nuclear Information System (INIS)

    Davis, E.D.

    1996-01-01

    We introduce a Bayesian method for the analysis of epithermal neutron transmission data on space-time symmetries in which unique assignment of the prior is achieved by maximisation of the cross entropy and the imposition of a self-consistency criterion. Unlike the maximum likelihood method used in previous analyses of parity-violation data, our method is freed of an ad hoc cutoff parameter. Monte Carlo studies indicate that our self-consistent Bayesian analysis is superior to the maximum likelihood method when applied to the small data samples typical of symmetry studies. (orig.)

  13. A Consistent Phylogenetic Backbone for the Fungi

    Science.gov (United States)

    Ebersberger, Ingo; de Matos Simoes, Ricardo; Kupczok, Anne; Gube, Matthias; Kothe, Erika; Voigt, Kerstin; von Haeseler, Arndt

    2012-01-01

    The kingdom of fungi provides model organisms for biotechnology, cell biology, genetics, and life sciences in general. Only when their phylogenetic relationships are stably resolved, can individual results from fungal research be integrated into a holistic picture of biology. However, and despite recent progress, many deep relationships within the fungi remain unclear. Here, we present the first phylogenomic study of an entire eukaryotic kingdom that uses a consistency criterion to strengthen phylogenetic conclusions. We reason that branches (splits) recovered with independent data and different tree reconstruction methods are likely to reflect true evolutionary relationships. Two complementary phylogenomic data sets based on 99 fungal genomes and 109 fungal expressed sequence tag (EST) sets analyzed with four different tree reconstruction methods shed light from different angles on the fungal tree of life. Eleven additional data sets address specifically the phylogenetic position of Blastocladiomycota, Ustilaginomycotina, and Dothideomycetes, respectively. The combined evidence from the resulting trees supports the deep-level stability of the fungal groups toward a comprehensive natural system of the fungi. In addition, our analysis reveals methodologically interesting aspects. Enrichment for EST encoded data—a common practice in phylogenomic analyses—introduces a strong bias toward slowly evolving and functionally correlated genes. Consequently, the generalization of phylogenomic data sets as collections of randomly selected genes cannot be taken for granted. A thorough characterization of the data to assess possible influences on the tree reconstruction should therefore become a standard in phylogenomic analyses. PMID:22114356

  14. Improving the Functionality of Dictionary Definitions for Lexical Sets: The Role of Definitional Templates, Definitional Consistency, Definitional Coherence and the Incorporation of Lexical Conceptual Models

    Directory of Open Access Journals (Sweden)

    Piet Swanepoel

    2011-10-01

    Full Text Available

ABSTRACT: This article focuses on some of the problems raised by Atkins and Rundell's (2008) approach to the design of lexicographic definitions for members of lexical sets. The questions raised are how to define and identify lexical sets, how lexical conceptual models (LCMs) can support definitional consistency and coherence in defining members of lexical sets, and what the ideal content and structure of LCMs could be. Although similarity of meaning is proposed as the defining feature of lexical sets, similarity of meaning is only one dimension of the broader concept of lexical coherence. The argument is presented that numerous conceptual lexical models (e.g. taxonomies, folk models, frames, etc.) in fact indicate, justify or explain how lexical items cohere (and thus form sets). In support of Fillmore's (2003) suggestion that definitions of the lexical items of cohering sets should be linked to such explanatory models, additional functionally-orientated arguments are presented for the incorporation of conceptual lexical models in electronic monolingual learners' dictionaries. Numerous resources exist to support the design of LCMs which can improve the functionality of definitions of members of lexical sets. A few examples are discussed of how such resources can be used to design functionally justified LCMs.

SUMMARY (translated from Afrikaans): Improving the functionality of dictionary definitions for lexical sets: The role of definitional templates, definitional consistency, definitional coherence and the incorporation of lexical conceptual models. This article focuses on some of the problems raised by Atkins and Rundell's (2008) approach to the design of lexicographic definitions for members of lexical sets. The questions raised are how lexical sets should be defined and identified, and how lexical conceptual models (LCMs) can support definitional consistency and coherence in defining members...

  15. MultiSIMNRA: A computational tool for self-consistent ion beam analysis using SIMNRA

    International Nuclear Information System (INIS)

    Silva, T.F.; Rodrigues, C.L.; Mayer, M.; Moro, M.V.; Trindade, G.F.; Aguirre, F.R.; Added, N.; Rizzutto, M.A.; Tabacniks, M.H.

    2016-01-01

Highlights: • MultiSIMNRA enables the self-consistent analysis of multiple ion beam techniques. • Self-consistent analysis enables unequivocal and reliable modeling of the sample. • Four different computational algorithms are available for model optimization. • Defining constraints makes it possible to include prior knowledge in the analysis. - Abstract: SIMNRA is widely adopted by the scientific community of ion beam analysis for the simulation and interpretation of nuclear scattering techniques for material characterization. Taking advantage of its recognized reliability and the quality of its simulations, we developed a computer program that uses multiple parallel sessions of SIMNRA to perform self-consistent analysis of data obtained by different ion beam techniques or under different experimental conditions for a given sample. In this paper, we present a result using MultiSIMNRA for a self-consistent multi-elemental analysis of a thin film produced by magnetron sputtering. The results demonstrate the potential of self-consistent analysis and its feasibility using MultiSIMNRA.

  16. Data Set for the manuscript entitled, "Sample Processing Approach for Detection of Ricin in Surface Samples."

    Data.gov (United States)

    U.S. Environmental Protection Agency — Figure. This dataset is associated with the following publication: Shah, S., S. Kane, A.M. Erler, and T. Alfaro. Sample Processing Approach for Detection of Ricin in...

  17. Consistent measurements comparing the drift features of noble gas mixtures

    CERN Document Server

    Becker, U; Fortunato, E M; Kirchner, J; Rosera, K; Uchida, Y

    1999-01-01

    We present a consistent set of measurements of electron drift velocities and Lorentz deflection angles for all noble gases with methane and ethane as quenchers in magnetic fields up to 0.8 T. Empirical descriptions are also presented. Details on the World Wide Web allow for guided design and optimization of future detectors.

  18. Sample Selection for Training Cascade Detectors.

    Directory of Open Access Journals (Sweden)

    Noelia Vállez

    Full Text Available Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.

  19. Water-borne pollutant sampling using porous suction samplers

    International Nuclear Information System (INIS)

    Baig, M.A.

    1997-01-01

The common standard method of sampling water-borne pollutants in the vadose zone is core sampling followed by extraction of pore fluid. This method does not allow repeated sampling at the same location at a later time. An alternative approach for sampling fluids (water-borne pollutants) from both saturated and unsaturated regions of the vadose zone uses porous suction samplers. There are three types of porous suction samplers: vacuum-operated samplers, pressure-vacuum lysimeters, and high pressure-vacuum samplers. The suction samplers are operated in the range of 0-70 centibars and usually consist of ceramic or polytetrafluoroethylene (PTFE); the operating range of PTFE cups is higher than that of ceramic cups. These samplers are well suited for in situ and repeated sampling from the same location. This paper discusses the physical properties and operating conditions of such samplers for use in environmental sampling. (author)

  20. CHANG-ES. IX. Radio scale heights and scale lengths of a consistent sample of 13 spiral galaxies seen edge-on and their correlations

    Science.gov (United States)

    Krause, Marita; Irwin, Judith; Wiegert, Theresa; Miskolczi, Arpad; Damas-Segovia, Ancor; Beck, Rainer; Li, Jiang-Tao; Heald, George; Müller, Peter; Stein, Yelena; Rand, Richard J.; Heesen, Volker; Walterbos, Rene A. M.; Dettmar, Ralf-Jürgen; Vargas, Carlos J.; English, Jayanne; Murphy, Eric J.

    2018-03-01

    Aim. The vertical halo scale height is a crucial parameter to understand the transport of cosmic-ray electrons (CRE) and their energy loss mechanisms in spiral galaxies. Until now, the radio scale height could only be determined for a few edge-on galaxies because of missing sensitivity at high resolution. Methods: We developed a sophisticated method for the scale height determination of edge-on galaxies. With this we determined the scale heights and radial scale lengths for a sample of 13 galaxies from the CHANG-ES radio continuum survey in two frequency bands. Results: The sample average values for the radio scale heights of the halo are 1.1 ± 0.3 kpc in C-band and 1.4 ± 0.7 kpc in L-band. From the frequency dependence analysis of the halo scale heights we found that the wind velocities (estimated using the adiabatic loss time) are above the escape velocity. We found that the halo scale heights increase linearly with the radio diameters. In order to exclude the diameter dependence, we defined a normalized scale height h˜ which is quite similar for all sample galaxies at both frequency bands and does not depend on the star formation rate or the magnetic field strength. However, h˜ shows a tight anticorrelation with the mass surface density. Conclusions: The sample galaxies with smaller scale lengths are more spherical in the radio emission, while those with larger scale lengths are flatter. The radio scale height depends mainly on the radio diameter of the galaxy. The sample galaxies are consistent with an escape-dominated radio halo with convective cosmic ray propagation, indicating that galactic winds are a widespread phenomenon in spiral galaxies. While a higher star formation rate or star formation surface density does not lead to a higher wind velocity, we found for the first time observational evidence of a gravitational deceleration of CRE outflow, e.g. a lowering of the wind velocity from the galactic disk.

  1. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    Science.gov (United States)

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factorial discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of its structural discordance.

  2. ‘Sampling the reference set’ revisited

    NARCIS (Netherlands)

    Berkum, van E.E.M.; Linssen, H.N.; Overdijk, D.A.

    1998-01-01

    The confidence level of an inference table is defined as a weighted truth probability of the inference when sampling the reference set. The reference set is recognized by conditioning on the values of maximal partially ancillary statistics. In the sampling experiment values of incidental parameters

  3. Set-Asides and Subsidies in Auctions

    OpenAIRE

    Susan Athey; Dominic Coey; Jonathan Levin

    2011-01-01

    Set-asides and subsidies are used extensively in government procurement and natural resource sales. We analyze these policies in an empirical model of U.S. Forest Service timber auctions. The model fits the data well both within the sample of unrestricted sales where we estimate the model, and when we predict (out of sample) bidder entry and prices for small business set-asides. Our estimates suggest that restricting entry to small businesses substantially reduces efficiency and revenue, alth...

  4. Validating the Copenhagen Psychosocial Questionnaire (COPSOQ-II) Using Set-ESEM: Identifying Psychosocial Risk Factors in a Sample of School Principals.

    Science.gov (United States)

    Dicke, Theresa; Marsh, Herbert W; Riley, Philip; Parker, Philip D; Guo, Jiesi; Horwood, Marcus

    2018-01-01

    School principals world-wide report high levels of strain and attrition resulting in a shortage of qualified principals. It is thus crucial to identify psychosocial risk factors that reflect principals' occupational wellbeing. For this purpose, we used the Copenhagen Psychosocial Questionnaire (COPSOQ-II), a widely used self-report measure covering multiple psychosocial factors identified by leading occupational stress theories. We evaluated the COPSOQ-II regarding factor structure and longitudinal, discriminant, and convergent validity using latent structural equation modeling in a large sample of Australian school principals ( N = 2,049). Results reveal that confirmatory factor analysis produced marginally acceptable model fit. A novel approach we call set exploratory structural equation modeling (set-ESEM), where cross-loadings were only allowed within a priori defined sets of factors, fit well, and was more parsimonious than a full ESEM. Further multitrait-multimethod models based on the set-ESEM confirm the importance of a principal's psychosocial risk factors; Stressors and depression were related to demands and ill-being, while confidence and autonomy were related to wellbeing. We also show that working in the private sector was beneficial for showing a low psychosocial risk, while other demographics have little effects. Finally, we identify five latent risk profiles (high risk to no risk) of school principals based on all psychosocial factors. Overall the research presented here closes the theory application gap of a strong multi-dimensional measure of psychosocial risk-factors.

  5. The prediction of semantic consistency in self-descriptions: characteristics of persons and of terms that affect the consistency of responses to synonym and antonym pairs.

    Science.gov (United States)

    Goldberg, L R; Kilkowski, J M

    1985-01-01

    Subjects described themselves, using an alphabetically ordered list of 191 trait adjectives, which included sets of synonyms and antonyms, half of each type more difficult than the other half. Subjects were randomly assigned to one of two experimental conditions. In one condition, each adjective was listed with its dictionary definition; in the other condition, only the adjectives were listed. All subjects were administered a battery of demographic, cognitive, and personality measures. We analyzed both the relative consistency elicited by different pairs of terms and the individual differences in semantic consistency displayed by different sorts of subjects. Although the provision of definitions served to increase consistency (especially for the difficult antonyms), it did not decrease the range of consistency values across either synonym or antonym pairs. And, although interpair differences in semantic consistency were as difficult to predict in this study as in previous ones, individual differences were highly predictable. The implications of our many findings are discussed in the context of various hypotheses about semantic inconsistency in self-reports.

  6. A review of 20 Ne structure in a full microscopic self-consistent shell ...

    African Journals Online (AJOL)

A set of single-particle energies together with a set of two-body matrix elements derived in a self-consistent manner from the Reid soft-core potential are used to calculate the energy levels of 20Ne. We used a harmonic oscillator wave function folded with two-body correlation functions in our calculation. It is found that the ...

  7. Stability of DREEM in a Sample of Medical Students: A Prospective Study

    Directory of Open Access Journals (Sweden)

    Muhamad Saiful Bahri Yusoff

    2012-01-01

Full Text Available Background. Over the last 15 years, the DREEM has been applied in various educational settings to appraise educational climate. So far, no article has reported its stability in Malaysian medical students. Objective. To determine the stability of the DREEM in measuring educational climate at different times and occasions using a sample of medical students. Methodology. A prospective cohort study was done on 196 first-year medical students. The DREEM was administered to the medical students at four different intervals. Cronbach's alpha and intraclass correlation analyses were applied to measure internal consistency and the level of agreement across the intervals. The analysis was done using SPSS 18. Results. A total of 186 (94.9%) medical students responded completely to the DREEM inventory. The overall Cronbach's alpha value of the DREEM at the four measurements ranged between 0.91 and 0.94. The average Cronbach's alpha values of the five subscales ranged between 0.45 and 0.83. The ICC coefficient for the DREEM total score was 0.67, and the coefficients for its subscales ranged between 0.51 and 0.62. Conclusion. This study supported satisfactory levels of stability and internal consistency of the DREEM for measuring educational climate over multiple observations in a sample of Malaysian medical students. Continued research is required to optimise its psychometric credentials across educational settings.

  8. The International Spinal Cord Injury Pain Basic Data Set

    DEFF Research Database (Denmark)

    Widerstrom-Noga, E.; Bryce, T.; Cardenas, D.D.

    2008-01-01

Objective: To develop a basic pain data set (International Spinal Cord Injury Basic Pain Data Set, ISCIPDS:B) within the framework of the International spinal cord injury (SCI) data sets that would facilitate consistent collection and reporting of pain in the SCI population. Setting: International. Methods: The ISCIPDS:B was developed by a working group consisting of individuals with published evidence of expertise in SCI-related pain regarding taxonomy, psychophysics, psychology, epidemiology and assessment, and one representative of the Executive Committee of the International SCI Standards and Data Sets ... on suggestions from members of the Executive Committee of the International SCI Standards and Data Sets, the ISCoS Scientific Committee, ASIA and APS Boards, and the Neuropathic Pain Special Interest Group of the IASP, individual reviewers and societies and the ISCoS Council. Results: The final ISCIPDS:B contains...

  9. Road and Street Centerlines, Street-The data set is a line feature consisting of 13948 line segments representing streets. It was created to maintain the location of city and county based streets., Published in 1989, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Road and Street Centerlines dataset current as of 1989. Street-The data set is a line feature consisting of 13948 line segments representing streets. It was created...

  10. Fully self-consistent GW calculations for molecules

    DEFF Research Database (Denmark)

    Rostgaard, Carsten; Jacobsen, Karsten Wedel; Thygesen, Kristian Sommer

    2010-01-01

We calculate single-particle excitation energies for a series of 34 molecules using fully self-consistent GW, one-shot G0W0, Hartree-Fock (HF), and hybrid density-functional theory (DFT). All calculations are performed within the projector-augmented wave method using a basis set of Wannier functions augmented by numerical atomic orbitals. The GW self-energy is calculated on the real frequency axis including its full frequency dependence and off-diagonal matrix elements. The mean absolute error of the ionization potential (IP) with respect to experiment is found to be 4.4, 2.6, 0.8, 0.4, and 0...

  11. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set
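For readers unfamiliar with ranked set sampling (RSS), one of the schemes listed above, the sketch below simulates one cycle of balanced RSS: draw k sets of k units, rank each set, and keep the i-th ranked unit from the i-th set. It is a generic illustration under the assumption of perfect ranking, not the charting methodology of the article.

```python
import numpy as np

def ranked_set_sample(population, k, rng):
    """One cycle of balanced ranked set sampling of size k."""
    sample = []
    for i in range(k):
        # Draw a set of k units and keep the (i+1)-th smallest (perfect ranking assumed).
        candidates = rng.choice(population, size=k, replace=False)
        sample.append(np.sort(candidates)[i])
    return np.array(sample)

rng = np.random.default_rng(1)
population = rng.normal(loc=50.0, scale=5.0, size=10_000)

rss = ranked_set_sample(population, k=5, rng=rng)
srs = rng.choice(population, size=5, replace=False)   # simple random sample for comparison
print("RSS mean:", rss.mean().round(2), " SRS mean:", srs.mean().round(2))
```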

  12. Velocity Drives Greater Power Observed During Back Squat Using Cluster Sets.

    Science.gov (United States)

    Oliver, Jonathan M; Kreutzer, Andreas; Jenke, Shane C; Phillips, Melody D; Mitchell, Joel B; Jones, Margaret T

    2016-01-01

    This investigation compared the kinetics and kinematics of cluster sets (CLU) and traditional sets (TRD) during back squat in trained (RT) and untrained (UT) men. Twenty-four participants (RT = 12, 25 ± 1 year, 179.1 ± 2.2 cm, 84.6 ± 2.1 kg; UT = 12, 25 ± 1 year, 180.1 ± 1.8 cm, 85.4 ± 3.8 kg) performed TRD (4 × 10, 120-second rest) and CLU (4 × (2 × 5) 30 seconds between clusters; 90 seconds between sets) with 70% one repetition maximum, randomly. Kinematics and kinetics were sampled through force plate and linear position transducers. Resistance-trained produced greater overall force, velocity, and power; however, similar patterns were observed in all variables when comparing conditions. Cluster sets produced significantly greater force in isolated repetitions in sets 1-3, while consistently producing greater force due to a required reduction in load during set 4 resulting in greater total volume load (CLU, 3302.4 ± 102.7 kg; TRD, 3274.8 ± 102.8 kg). Velocity loss was lessened in CLU resulting in significantly higher velocities in sets 2 through 4. Furthermore, higher velocities were produced by CLU during later repetitions of each set. Cluster sets produced greater power output for an increasing number of repetitions in each set (set 1, 5 repetitions; sets 2 and 3, 6 repetitions; set 4, 8 repetitions), and the difference between conditions increased over subsequent sets. Time under tension increased over each set and was greater in TRD. This study demonstrates greater power output is driven by greater velocity when back squatting during CLU; therefore, velocity may be a useful measure by which to assess power.

  13. Development of a direct observation Measure of Environmental Qualities of Activity Settings.

    Science.gov (United States)

    King, Gillian; Rigby, Patty; Batorowicz, Beata; McMain-Klein, Margot; Petrenchik, Theresa; Thompson, Laura; Gibson, Michelle

    2014-08-01

    The aim of this study was to develop an observer-rated measure of aesthetic, physical, social, and opportunity-related qualities of leisure activity settings for young people (with or without disabilities). Eighty questionnaires were completed by sets of raters who independently rated 22 community/home activity settings. The scales of the 32-item Measure of Environmental Qualities of Activity Settings (MEQAS; Opportunities for Social Activities, Opportunities for Physical Activities, Pleasant Physical Environment, Opportunities for Choice, Opportunities for Personal Growth, and Opportunities to Interact with Adults) were determined using principal components analyses. Test-retest reliability was determined for eight activity settings, rated twice (4-6wk interval) by a trained rater. The factor structure accounted for 80% of the variance. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy was 0.73. Cronbach's alphas for the scales ranged from 0.76 to 0.96, and interrater reliabilities (ICCs) ranged from 0.60 to 0.93. Test-retest reliabilities ranged from 0.70 to 0.90. Results suggest that the MEQAS has a sound factor structure and preliminary evidence of internal consistency, interrater, and test-retest reliability. The MEQAS is the first observer-completed measure of environmental qualities of activity settings. The MEQAS allows researchers to assess comprehensively qualities and affordances of activity settings, and can be used to design and assess environmental qualities of programs for young people. © 2014 Mac Keith Press.

  14. The Effect of Asymmetrical Sample Training on Retention Functions for Hedonic Samples in Rats

    Science.gov (United States)

    Simmons, Sabrina; Santi, Angelo

    2012-01-01

    Rats were trained in a symbolic delayed matching-to-sample task to discriminate sample stimuli that consisted of the presence of food or the absence of food. Asymmetrical sample training was provided in which one group was initially trained with only the food sample and the other group was initially trained with only the no-food sample. In…

  15. Principal component analysis applied to Fourier transform infrared spectroscopy for the design of calibration sets for glycerol prediction models in wine and for the detection and classification of outlier samples.

    Science.gov (United States)

    Nieuwoudt, Helene H; Prior, Bernard A; Pretorius, Isak S; Manley, Marena; Bauer, Florian F

    2004-06-16

    Principal component analysis (PCA) was used to identify the main sources of variation in the Fourier transform infrared (FT-IR) spectra of 329 wines of various styles. The FT-IR spectra were gathered using a specialized WineScan instrument. The main sources of variation included the reducing sugar and alcohol content of the samples, as well as the stage of fermentation and the maturation period of the wines. The implications of the variation between the different wine styles for the design of calibration models with accurate predictive abilities were investigated using glycerol calibration in wine as a model system. PCA enabled the identification and interpretation of samples that were poorly predicted by the calibration models, as well as the detection of individual samples in the sample set that had atypical spectra (i.e., outlier samples). The Soft Independent Modeling of Class Analogy (SIMCA) approach was used to establish a model for the classification of the outlier samples. A glycerol calibration for wine was developed (reducing sugar content 8% v/v) with satisfactory predictive ability (SEP = 0.40 g/L). The RPD value (ratio of the standard deviation of the data to the standard error of prediction) was 5.6, indicating that the calibration is suitable for quantification purposes. A calibration for glycerol in special late harvest and noble late harvest wines (RS 31-147 g/L, alcohol > 11.6% v/v) with a prediction error SECV = 0.65 g/L, was also established. This study yielded an analytical strategy that combined the careful design of calibration sets with measures that facilitated the early detection and interpretation of poorly predicted samples and outlier samples in a sample set. The strategy provided a powerful means of quality control, which is necessary for the generation of accurate prediction data and therefore for the successful implementation of FT-IR in the routine analytical laboratory.
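The outlier-screening step described here can be mimicked with a standard PCA score-distance check: project the spectra onto a few principal components and flag samples whose scores lie far from the centre. The scikit-learn sketch below uses synthetic "spectra" and a simple empirical cut-off; it illustrates the general approach, not the authors' WineScan/SIMCA workflow.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in for FT-IR spectra: 50 samples x 200 wavenumbers.
spectra = rng.normal(size=(50, 200))
spectra[0] += 6.0          # make the first sample an obvious outlier

pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)

# Squared distance in standardized score space (components are uncorrelated).
d2 = ((scores / scores.std(axis=0, ddof=1)) ** 2).sum(axis=1)
cutoff = np.percentile(d2, 97.5)            # simple empirical cut-off

outliers = np.where(d2 > cutoff)[0]
print("flagged samples:", outliers)
```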

  16. Domain Adaptation for Pedestrian Detection Based on Prediction Consistency

    Directory of Open Access Journals (Sweden)

    Yu Li-ping

    2014-01-01

Full Text Available Pedestrian detection is an active area of research in computer vision. It remains a challenging problem in many applications where various factors cause a mismatch between the source dataset used to train the pedestrian detector and the samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from the consistency of predictions with the source classifications, to selectively choose the source domain samples showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient.
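The entropy-based selection idea can be sketched very simply: source samples whose predicted class distribution has low entropy (consistent, confident predictions) are kept, while high-entropy samples are dropped or down-weighted. The code below is a schematic illustration under assumed inputs, not the boosting framework of the paper.

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy (base 2) of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log2(p)).sum(axis=1)

def select_transferable(source_probs, keep_fraction=0.5):
    """Keep the source samples with the most consistent (lowest-entropy) predictions."""
    h = prediction_entropy(source_probs)
    n_keep = max(1, int(len(h) * keep_fraction))
    return np.argsort(h)[:n_keep]            # indices of the most transferable samples

# Toy predicted probabilities (pedestrian vs. background) for 5 source samples.
probs = np.array([
    [0.95, 0.05],   # confident, consistent -> low entropy
    [0.55, 0.45],   # ambiguous -> high entropy
    [0.10, 0.90],
    [0.50, 0.50],
    [0.80, 0.20],
])
print(select_transferable(probs, keep_fraction=0.6))   # -> [0 2 4]
```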

  17. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
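The construction studied here is easy to experiment with using networkx, which provides a randomized maximal-independent-set routine and a dominating-set check. The sketch below builds a small scale-free (Barabási-Albert) test graph and verifies that a maximal independent set indeed dominates it; the graph parameters are illustrative.

```python
import networkx as nx

# Small scale-free test graph (Barabási-Albert preferential attachment).
G = nx.barabasi_albert_graph(n=500, m=3, seed=7)

# A maximal independent set: no two members are adjacent, and no node can be added.
mis = nx.maximal_independent_set(G, seed=7)

print("MIS size:", len(mis), "of", G.number_of_nodes(), "nodes")
print("is independent set:",
      all(not G.has_edge(u, v) for u in mis for v in mis if u != v))
print("is dominating set:", nx.is_dominating_set(G, mis))  # every maximal IS dominates
```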

  18. Time-consistent and market-consistent evaluations

    NARCIS (Netherlands)

    Pelsser, A.; Stadje, M.A.

    2014-01-01

    We consider evaluation methods for payoffs with an inherent financial risk as encountered for instance for portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent to market prices typically involves combining actuarial techniques with methods from

  19. Multiple testing issues in discriminating compound-related peaks and chromatograms from high frequency noise, spikes and solvent-based noise in LC-MS data sets

    NARCIS (Netherlands)

    Nyangoma, Stephen O.; van Kampen, Antoine A. H. C.; Reijmers, Theo H.; Govorukhina, Natalia I.; van der Zee, Ate G. J.; Billingham, Lucinda J.; Bischoff, Rainer; Jansen, Ritsert C.

    2007-01-01

    Liquid Chromatography - Mass Spectrometry (LC-MS) is a powerful method for sensitive detection and quantification of proteins and peptides in complex biological fluids like serum. LC-MS produces complex data sets, consisting of some hundreds of millions of data points per sample at a resolution of

  20. International Spinal Cord Injury Upper Extremity Basic Data Set

    DEFF Research Database (Denmark)

    Biering-Sørensen, F; Bryden, A; Curt, A

    2014-01-01

    OBJECTIVE: To develop an International Spinal Cord Injury (SCI) Upper Extremity Basic Data Set as part of the International SCI Data Sets, which facilitates consistent collection and reporting of basic upper extremity findings in the SCI population. SETTING: International. METHODS: A first draft...

  1. The redshift distribution of cosmological samples: a forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina, E-mail: joerg.herbel@phys.ethz.ch, E-mail: tomasz.kacprzak@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch, E-mail: claudio.bruderer@phys.ethz.ch, E-mail: andrina.nicola@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)

    2017-08-01

Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
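The ABC step used inside the MCCL framework can be illustrated in miniature: propose parameters from a prior, simulate data with the forward model, and accept parameters whose simulated summary statistic lies within a tolerance of the observed one. The sketch below is a generic rejection-ABC toy example (estimating the mean of a Gaussian), not the image-simulation pipeline of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data and its summary statistic.
observed = rng.normal(loc=0.8, scale=1.0, size=200)
obs_stat = observed.mean()

def simulate(mu, n=200):
    """Forward model: draw a synthetic data set given the parameter mu."""
    return rng.normal(loc=mu, scale=1.0, size=n)

# Rejection ABC: sample from the prior, keep parameters whose simulated
# summary statistic is within the tolerance of the observed one.
tolerance = 0.05
prior_draws = rng.uniform(-2.0, 2.0, size=20_000)
accepted = [mu for mu in prior_draws
            if abs(simulate(mu).mean() - obs_stat) < tolerance]

print(f"accepted {len(accepted)} draws, posterior mean ~ {np.mean(accepted):.2f}")
```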

  2. The redshift distribution of cosmological samples: a forward modeling approach

    Science.gov (United States)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \\textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  3. The redshift distribution of cosmological samples: a forward modeling approach

    International Nuclear Information System (INIS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-01-01

Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  4. STP: A mathematically and physically consistent library of steam properties

    International Nuclear Information System (INIS)

    Aguilar, F.; Hutter, A.C.; Tuttle, P.G.

    1982-01-01

    A new FORTRAN library of subroutines has been developed from the fundamental equation of Keenan et al. to evaluate a large set of water properties including derivatives such as sound speed and isothermal compressibility. The STP library uses the true saturation envelope of the Keenan et al. fundamental equation. The evaluation of the true envelope by a continuation method is explained. This envelope, along with other design features, imparts an exceptionally high degree of thermodynamic and mathematical consistency to the STP library, even at the critical point. Accuracy and smoothness, library self-consistency, and designed user convenience make the STP library a reliable and versatile water property package

  5. 5 CFR 531.214 - Setting pay upon promotion.

    Science.gov (United States)

    2010-01-01

§ 531.214 Setting pay upon promotion. (a) General. An agency must set an employee's payable rate of basic pay upon promotion following the rules in this section, consistent with 5 U.S.C. 5334(b...

  6. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  7. Supplementing electronic health records through sample collection and patient diaries: A study set within a primary care research database.

    Science.gov (United States)

    Joseph, Rebecca M; Soames, Jamie; Wright, Mark; Sultana, Kirin; van Staa, Tjeerd P; Dixon, William G

    2018-02-01

    To describe a novel observational study that supplemented primary care electronic health record (EHR) data with sample collection and patient diaries. The study was set in primary care in England. A list of 3974 potentially eligible patients was compiled using data from the Clinical Practice Research Datalink. Interested general practices opted into the study then confirmed patient suitability and sent out postal invitations. Participants completed a drug-use diary and provided saliva samples to the research team to combine with EHR data. Of 252 practices contacted to participate, 66 (26%) mailed invitations to patients. Of the 3974 potentially eligible patients, 859 (22%) were at participating practices, and 526 (13%) were sent invitations. Of those invited, 117 (22%) consented to participate of whom 86 (74%) completed the study. We have confirmed the feasibility of supplementing EHR with data collected directly from patients. Although the present study successfully collected essential data from patients, it also underlined the requirement for improved engagement with both patients and general practitioners to support similar studies. © 2017 The Authors. Pharmacoepidemiology & Drug Safety published by John Wiley & Sons Ltd.

  8. An eddy-permitting, dynamically consistent adjoint-based assimilation system for the tropical Pacific: Hindcast experiments in 2000

    KAUST Repository

    Hoteit, Ibrahim

    2010-03-02

    An eddy-permitting adjoint-based assimilation system has been implemented to estimate the state of the tropical Pacific Ocean. The system uses the Massachusetts Institute of Technology's general circulation model and its adjoint. The adjoint method is used to adjust the model to observations by controlling the initial temperature and salinity; temperature, salinity, and horizontal velocities at the open boundaries; and surface fluxes of momentum, heat, and freshwater. The model is constrained with most of the available data sets in the tropical Pacific, including Tropical Atmosphere and Ocean, ARGO, expendable bathythermograph, and satellite SST and sea surface height data, and climatologies. Results of hindcast experiments in 2000 suggest that the iterated adjoint-based descent is able to significantly improve the model consistency with the multivariate data sets, providing a dynamically consistent realization of the tropical Pacific circulation that generally matches the observations to within specified errors. The estimated model state is evaluated both by comparisons with observations and by checking the controls, the momentum balances, and the representation of small-scale features that were not well sampled by the observations used in the assimilation. As part of these checks, the estimated controls are smoothed and applied in independent model runs to check that small changes in the controls do not greatly change the model hindcast. This is a simple ensemble-based uncertainty analysis. In addition, the original and smoothed controls are applied to a version of the model with doubled horizontal resolution resulting in a broadly similar “downscaled” hindcast, showing that the adjustments are not tuned to a single configuration (meaning resolution, topography, and parameter settings). The time-evolving model state and the adjusted controls should be useful for analysis or to supply the forcing, initial, and boundary conditions for runs of other models.

  9. Introduction to set theory and topology

    CERN Document Server

    Kuratowski, Kazimierz; Stark, M

    1972-01-01

    Introduction to Set Theory and Topology describes the fundamental concepts of set theory and topology as well as its applicability to analysis, geometry, and other branches of mathematics, including algebra and probability theory. Concepts such as inverse limit, lattice, ideal, filter, commutative diagram, quotient-spaces, completely regular spaces, quasicomponents, and cartesian products of topological spaces are considered. This volume consists of 21 chapters organized into two sections and begins with an introduction to set theory, with emphasis on the propositional calculus and its applica

  10. Device for sampling liquid radioactive materials

    International Nuclear Information System (INIS)

    Vlasak, L.

    1987-01-01

    Remote sampling of radioactive materials in the process of radioactive waste treatment is claimed by the Czechoslovak Patent Document 238599. The existing difficulties, which consist in the complex remote control of sampling requiring both sliding and rotary movements of the sampling device, are eliminated. The new device consists of a vertical pipe with an opening provided with a cover. A bend is provided above the opening level housing flow distributors. A sampling tray is pivoted in the cover. In sampling, the tray is tilted in the vertical pipe space while it tilts back when filled. The sample flows into a vessel below the tray. Only rotary movement is thus sufficient for controlling the tray. (Z.M.)

  11. Two new integrable couplings of the soliton hierarchies with self-consistent sources

    International Nuclear Information System (INIS)

    Tie-Cheng, Xia

    2010-01-01

    A kind of integrable coupling of soliton equation hierarchies with self-consistent sources associated with s̃l(4) has been presented (Yu F J and Li L 2009 Appl. Math. Comput. 207 171; Yu F J 2008 Phys. Lett. A 372 6613). Based on this method, we construct two integrable couplings of the soliton hierarchy with self-consistent sources by using the loop algebra s̃l(4). In this paper, we also point out that there are some errors in these references; we have corrected these errors and set up a new formula. The method can be generalized to other soliton hierarchies with self-consistent sources. (general)

  12. Toward a consistent model for glass dissolution

    International Nuclear Information System (INIS)

    Strachan, D.M.; McGrail, B.P.; Bourcier, W.L.

    1994-01-01

    Understanding the process of glass dissolution in aqueous media has advanced significantly over the last 10 years through the efforts of many scientists around the world. Mathematical models describing the glass dissolution process have also advanced from simple empirical functions to structured models based on fundamental principles of physics, chemistry, and thermodynamics. Although borosilicate glass has been selected as the waste form for disposal of high-level wastes in at least 5 countries, there is no international consensus on the fundamental methodology for modeling glass dissolution that could be used in assessing the long term performance of waste glasses in a geologic repository setting. Each repository program is developing their own model and supporting experimental data. In this paper, we critically evaluate a selected set of these structured models and show that a consistent methodology for modeling glass dissolution processes is available. We also propose a strategy for a future coordinated effort to obtain the model input parameters that are needed for long-term performance assessments of glass in a geologic repository. (author) 4 figs., tabs., 75 refs

  13. Macroscopic self-consistent model for external-reflection near-field microscopy

    International Nuclear Information System (INIS)

    Berntsen, S.; Bozhevolnaya, E.; Bozhevolnyi, S.

    1993-01-01

    The self-consistent macroscopic approach based on the Maxwell equations in two-dimensional geometry is developed to describe tip-surface interaction in external-reflection near-field microscopy. The problem is reduced to a single one-dimensional integral equation in terms of the Fourier components of the field at the plane of the sample surface. This equation is extended to take into account a pointlike scatterer placed on the sample surface. The power of light propagating toward the detector as the fiber mode is expressed by using the self-consistent field at the tip surface. Numerical results for trapezium-shaped tips are presented. The authors show that the sharper tip and the more confined fiber mode result in better resolution of the near-field microscope. Moreover, it is found that the tip-surface distance should not be too small so that better resolution is ensured. 14 refs., 10 figs

  14. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    Science.gov (United States)

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  15. The Role of Cognitive Factors in Predicting Balance and Fall Risk in a Neuro-Rehabilitation Setting

    OpenAIRE

    Saverino, A.; Waller, D.; Rantell, K.; Parry, R.; Moriarty, A.; Playford, E. D.

    2016-01-01

    Introduction: There is a consistent body of evidence supporting the role of cognitive functions, particularly executive function, in the elderly and in neurological conditions which become more frequent with ageing. The aim of our study was to assess the role of different domains of cognitive functions to predict balance and fall risk in a sample of adults with various neurological conditions in a rehabilitation setting. Methods: This was a prospective, cohort study conduct...

  16. Multiplicative renormalizability and self-consistent treatments of the Schwinger-Dyson equations

    International Nuclear Information System (INIS)

    Brown, N.; Dorey, N.

    1989-11-01

    Many approximations to the Schwinger-Dyson equations place constraints on the renormalization constants of a theory. The requirement that the solutions to the equations be multiplicatively renormalizable also places constraints on these constants. Demanding that these two sets of constraints be compatible is an important test of the self-consistency of the approximations made. We illustrate this idea by considering the equation for the fermion propagator in massless quenched quantum electrodynamics, (QED), checking the consistency of various approximations. In particular, we show that the much used 'ladder' approximation is self-consistent, provided that the coupling constant is renormalized in a particular way. We also propose another approximation which satisfies this self-consistency test, but requires that the coupling be unrenormalized, as should be the case in the full quenched approximation. This new approximation admits an exact solution, which also satisfies the renormalization group equation for the quenched approximation. (author)

  17. A nanocomposite consisting of graphene oxide and Fe3O4 magnetic nanoparticles for the extraction of flavonoids from tea, wine and urine samples

    International Nuclear Information System (INIS)

    Wu, Jianrong; Xiao, Deli; Peng, Jun; Wang, Cuixia; Zhang, Chan; He, Jia; Zhao, Hongyan; He, Hua

    2015-01-01

    We describe a single-step solvothermal method for the preparation of nanocomposites consisting of graphene oxide and Fe3O4 nanoparticles (GO/Fe3O4). This material is shown to be useful as a magnetic sorbent for the extraction of flavonoids from green tea, red wine, and urine samples. The nanocomposite takes advantage of the high surface area of GO and the magnetic phase separation feature of the magnetic sorbent. The nanocomposite is recyclable and was applied to the extraction of flavonoids prior to their determination by HPLC. The effects of the amount of surfactant, the pH value of the sample solution, the extraction time, and the desorption conditions on the extraction efficiency, as well as the regeneration conditions, were optimized. The limits of detection for luteolin, quercetin and kaempferol range from 0.2 to 0.5 ng∙mL−1 in urine, from 3.0 to 6.0 ng∙mL−1 in green tea, and from 1.0 to 2.5 ng∙mL−1 in red wine. The recoveries are between 82.0 and 101.4 %, with relative standard deviations of <9.3 %. (author)

  18. Retrieving unobserved consideration sets from household panel data

    NARCIS (Netherlands)

    J.E.M. van Nierop; R. Paap (Richard); B. Bronnenberg; Ph.H.B.F. Franses (Philip Hans); M. Wedel (Michel)

    2005-01-01

    We propose a new model to describe consideration, consisting of a multivariate probit model component for consideration and a multinomial probit model component for choice, given consideration. The approach allows one to analyze stated consideration set data, revealed consideration set

  19. Hybrid Compensatory-Noncompensatory Choice Sets in Semicompensatory Models

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Bekhor, Shlomo; Shiftan, Yoram

    2013-01-01

    Semicompensatory models represent a choice process consisting of an elimination-based choice set formation on satisfaction of criterion thresholds and a utility-based choice. Current semicompensatory models assume a purely noncompensatory choice set formation and therefore do not support multinom...

  20. Consistent Set of Experiments from ICSBEP Handbook for Evaluation of Criticality Calculation Prediction of Apparatus of External Fuel Cycle with Different Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Golovko, Yury E. [FSUE ' SSC RF-IPPE' , 249033, Bondarenko Square 1, Obninsk (Russian Federation)

    2008-07-01

    Experiments with plutonium, low-enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those were selected that seem most relevant to evaluating the uncertainty of the critical mass of mixtures of plutonium, low-enriched uranium or uranium-233 with light water. All selected experiments were examined, covariance matrices of the criticality uncertainties were developed, and some uncertainties were revised. A statistical analysis of these experiments was performed, and some contradictions were discovered and eliminated. The accuracy of criticality calculation predictions was then evaluated using the internally consistent set of experiments with plutonium, low-enriched uranium and uranium-233 that remained after the statistical analysis. The application objects for the evaluation of the calculational prediction of criticality were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium, low-enriched uranium or uranium-233 at different concentrations, which are simplified models of apparatus of the external fuel cycle. It is shown that the procedure allows a considerable reduction of the uncertainty in k(eff) caused by the uncertainties in neutron cross-sections. It is also shown that the results are practically independent of the initial covariance matrices of nuclear data uncertainties. (authors)

  1. Changes in pectins and product consistency during the concentration of tomato juice to paste.

    Science.gov (United States)

    Anthon, Gordon E; Diaz, Jerome V; Barrett, Diane M

    2008-08-27

    Concentrating tomato juice to paste during the tomato season allows for preservation and long-term storage, but subsequent dilution for formulation of value-added products is known to result in a loss of consistency. To understand the reasons for this, samples of unconcentrated juice, processing intermediates, and concentrated paste were collected from an industrial processing plant during normal commercial production. All samples were diluted with water to 5 degrees Brix and then analyzed for consistency and pectin content. Whole juice consistency, measured with a Bostwick consistometer, decreased through the course of juice concentration, with the largest change occurring early in the process, as the juice was concentrated from 5 to 10 degrees Brix. This decrease in consistency occurred during the production of paste from both hot- and cold-break juices. The change in Bostwick value was correlated with a decrease in the precipitate weight ratio. The loss of consistency during commercial processing was not the direct result of water removal because a sample of this same 5 degrees Brix juice could be concentrated 2-fold in a vacuum oven and then diluted back to 5 degrees Brix with no change in consistency or precipitate ratio. Total pectin content did not change as the juice was concentrated to paste, but the proportion of the total pectin that was water soluble increased. The greatest increases in pectin solubility occurred during the hot break and late in the process where the evaporator temperature was the highest.

  2. Design of a groundwater sampling network for Minnesota

    International Nuclear Information System (INIS)

    Kanivetsky, R.

    1977-01-01

    This folio was compiled to facilitate the use of groundwater as a sampling medium to aid in exploration for hitherto undiscovered deposits of uranium in the subsurface rocks of Minnesota. The report consists of the following sheets of the hydrogeologic map of Minnesota: (1) map of bedrock hydrogeology, (2) generalized cross sections of the hydrogeologic map of Minnesota, showing both Quaternary deposits and bedrock, (3) map of waterwells that penetrate Precambrian rocks in Minnesota. A list of these wells, showing locations, names of owners, type of Precambrian aquifers penetrated, lithologic material of the aquifers, and well depths is provided in the appendix to this report. Structural settings, locations, and composition of the bedrock aquifers, movement of groundwater, and preliminary suggestions for a sampling program are discussed below under the heading Bedrock Hydrogeology of Minnesota. The map sheet showing Quaternary hydrogeology is not included in this report because the chemistry of groundwater in these deposits is not directly related to bedrock mineralization

  3. A CVAR scenario for a standard monetary model using theory-consistent expectations

    DEFF Research Database (Denmark)

    Juselius, Katarina

    2017-01-01

    A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...

  4. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly arising in mathematical geosciences. Although the book often refers to original contributions, the authors made them accessible to (graduate) students and scientists not only from mathematics but also from geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic, geophysical as well as other scientific branches like neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  5. Automatic data acquisition and on-line analysis of trace element concentration in serum samples

    International Nuclear Information System (INIS)

    Lecomte, R.; Paradis, P.; Monaro, S.

    1978-01-01

    A completely automated system has been developed to determine the trace element concentration in biological samples by measuring charged particle induced X-rays. A CDC-3100 computer with ADC and CAMAC interface is employed to control the data collection apparatus, acquire data and perform simultaneously the analysis. The experimental set-up consists of a large square plexiglass chamber in which a commercially available 750H Kodak Carousel is suitably arranged as a computer controlled sample changer. A method of extracting trace element concentrations using reference spectra is presented and an on-line program has been developed to easily and conveniently obtain final results at the end of each run. (Auth.)

  6. 45 CFR 1356.84 - Sampling.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Sampling. 1356.84 Section 1356.84 Public Welfare....84 Sampling. (a) The State agency may collect and report the information required in section 1356.83(e) of this part on a sample of the baseline population consistent with the sampling requirements...

  7. Out-of-Sample Generalizations for Supervised Manifold Learning for Classification.

    Science.gov (United States)

    Vural, Elif; Guillemot, Christine

    2016-03-01

    Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with an iterative process. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
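
    A hedged sketch of the general idea (not the authors' exact algorithm, which also includes class-separation and direction-dependent smoothness terms): a Gaussian RBF interpolator is fitted from training points to their precomputed embedding and then applied to novel points; the data, kernel width and ridge term are illustrative assumptions.

      import numpy as np

      def rbf_out_of_sample(X_train, Y_embed, X_new, sigma=1.0, lam=1e-6):
          def kernel(A, B):
              d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
              return np.exp(-d2 / (2.0 * sigma ** 2))
          K = kernel(X_train, X_train)
          # Ridge-regularized solve for the RBF coefficients (controls smoothness)
          W = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_embed)
          return kernel(X_new, X_train) @ W

      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 5))        # training samples
      Y = rng.normal(size=(50, 2))        # their (precomputed) 2-D embedding
      print(rbf_out_of_sample(X, Y, rng.normal(size=(2, 5))).shape)  # -> (2, 2)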

  8. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows one to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
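
    What any stratification search must evaluate is the sample size (and hence cost) needed to meet the precision constraints for a candidate partition. A minimal single-variable illustration of that evaluation, using Neyman allocation under a target coefficient of variation, is sketched below; the stratum counts N_h, standard deviations S_h and the CV target are made-up numbers, and this is not the SamplingStrata API.

      import numpy as np

      def required_sample_size(N_h, S_h, y_mean, cv_target):
          N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
          V_target = (cv_target * y_mean) ** 2              # allowed variance of the estimator
          a = (N_h * S_h).sum() ** 2
          b = (N_h * S_h ** 2).sum()
          n = a / (V_target * N_h.sum() ** 2 + b)           # classic Neyman-allocation result
          return n, n * N_h * S_h / (N_h * S_h).sum()       # total size and per-stratum allocation

      n, alloc = required_sample_size(N_h=[4000, 2500, 1500], S_h=[3.0, 7.5, 12.0],
                                      y_mean=20.0, cv_target=0.02)
      print(round(n), np.round(alloc))

    A genetic algorithm can then explore alternative merges of the atomic strata, retaining the partitions for which this required sample size is smallest.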

  9. An experimental set-up to test heat-moisture exchangers

    NARCIS (Netherlands)

    N. Ünal (N.); J.C. Pompe (Jan); W.P. Holland (Wim); I. Gultuna; P.E.M. Huygen; K. Jabaaij (K.); C. Ince (Can); B. Saygin (B.); H.A. Bruining (Hajo)

    1995-01-01

    Objectives: The purpose of this study was to build an experimental set-up to assess continuously the humidification, heating and resistance properties of heat-moisture exchangers (HMEs) under clinical conditions. Design: The experimental set-up consists of a patient model, measurement

  10. Solvent Hold Tank Sample Results for MCU-16-991-992-993: July 2016 Monthly sample and MCU-16-1033-1034-1035: July 2016 Superwashed Sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-11-25

    SRNL received one set of SHT samples (MCU-16-991, MCU-16-992 and MCU-16-993), pulled on 07/13/2016, and another set of SHT samples (MCU-16-1033, MCU-16-1034, and MCU-16-1035) that were pulled on 07/24/2016, after the solvent was superwashed with 300 mM sodium hydroxide, for analysis. Samples MCU-16-991, MCU-16-992, and MCU-16-993 were combined into one sample (MCU-16-991-992-993), and samples MCU-16-1033, MCU-16-1034, and MCU-16-1035 were combined into one sample (MCU-16-1033-1034-1035). Of the two composite samples, MCU-16-1033-1034-1035 represents the current chemical state of the solvent at MCU. All analytical conclusions are based on the chemical analysis of MCU-16-1033-1034-1035. There were no chemical differences between MCU-16-991-992-993 and the superwashed MCU-16-1033-1034-1035.

  11. Choice of Sample Split in Out-of-Sample Forecast Evaluation

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Timmermann, Allan

    Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values of the split point might have been considered. ..., while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined.
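
    A minimal sketch of the sensitivity being studied (synthetic data; not the authors' robust test statistic): the out-of-sample mean squared error of a simple, recursively re-estimated forecast is computed for several candidate split points.

      import numpy as np

      def oos_mse(y, x, split):
          # Recursive one-step-ahead forecasts of y from x, starting at `split`
          errors = []
          for t in range(split, len(y)):
              beta = np.polyfit(x[:t], y[:t], 1)       # re-estimate on data up to t
              errors.append(y[t] - np.polyval(beta, x[t]))
          return float(np.mean(np.square(errors)))

      rng = np.random.default_rng(1)
      x = rng.normal(size=200)
      y = 0.3 * x + rng.normal(size=200)
      for split in (50, 100, 150):                     # three candidate split points
          print(split, round(oos_mse(y, x, split), 3))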

  12. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    Science.gov (United States)

    2013-01-01

    Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability of establishing bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (set differences ( > 0.3 log units RMSE difference and >0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-Scales (3) combined with an average Z-Scale value for each target, while ProtFP (PCA8), ST-Scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information consistently leads to small but consistent improvement in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared thereby underlining that

  13. Use of the SSF equations in the Kojima-Moon-Ochi thermodynamic consistency test of isothermal vapour-liquid equilibrium data

    Directory of Open Access Journals (Sweden)

    SLOBODAN P. SERBANOVIC

    2000-12-01

    The Kojima-Moon-Ochi (KMO) thermodynamic consistency test of vapour–liquid equilibrium (VLE) measurements was applied to 32 isothermal data sets of binary systems of various complexity using two fitting equations: the Redlich-Kister equation and the Sum of Symmetrical Functions. It was shown that the enhanced reliability of the fitting of the experimental data can change the conclusions drawn about their thermodynamic consistency for those VLE data sets that are estimated to be near the border of consistency.
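
    For illustration, a least-squares fit of the Redlich-Kister expansion, G_E/RT = x1*x2*sum_k A_k*(x1 - x2)^k, to synthetic data is sketched below; the data and expansion order are assumptions, and the consistency test itself is not reproduced.

      import numpy as np

      def fit_redlich_kister(x1, gE_RT, order=3):
          x1 = np.asarray(x1, float)
          x2 = 1.0 - x1
          # Column k of the design matrix is x1*x2*(x1 - x2)**k
          M = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order)])
          A, *_ = np.linalg.lstsq(M, np.asarray(gE_RT, float), rcond=None)
          return A

      x1 = np.linspace(0.05, 0.95, 10)
      gE = 0.8 * x1 * (1 - x1) + 0.1 * x1 * (1 - x1) * (2 * x1 - 1)   # synthetic G_E/RT data
      print(np.round(fit_redlich_kister(x1, gE), 3))                  # approx. [0.8, 0.1, 0.0]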

  14. Maintaining consistency in distributed systems

    Science.gov (United States)

    Birman, Kenneth P.

    1991-01-01

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operation are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

  15. Perceived climate in physical activity settings.

    Science.gov (United States)

    Gill, Diane L; Morrow, Ronald G; Collins, Karen E; Lucey, Allison B; Schultz, Allison M

    2010-01-01

    This study focused on the perceived climate for LGBT youth and other minority groups in physical activity settings. A large sample of undergraduates and a selected sample including student teachers/interns and a campus Pride group completed a school climate survey and rated the climate in three physical activity settings (physical education, organized sport, exercise). Overall, school climate survey results paralleled the results with national samples revealing high levels of homophobic remarks and low levels of intervention. Physical activity climate ratings were mid-range, but multivariate analysis of variance (MANOVA) revealed clear differences with all settings rated more inclusive for racial/ethnic minorities and most exclusive for gays/lesbians and people with disabilities. The results are in line with national surveys and research suggesting sexual orientation and physical characteristics are often the basis for harassment and exclusion in sport and physical activity. The current results also indicate that future physical activity professionals recognize exclusion, suggesting they could benefit from programs that move beyond awareness to skills and strategies for creating more inclusive programs.

  16. Identification of self-consistent modulons from bacterial microarray expression data with the help of structured regulon gene sets

    KAUST Repository

    Permina, Elizaveta A.; Medvedeva, Yulia; Baeck, Pia M.; Hegde, Shubhada R.; Mande, Shekhar C.; Makeev, Vsevolod J.

    2013-01-01

    interactions helps to evaluate parameters for regulatory subnetwork inference. We suggest a procedure for modulon construction where a seed regulon is iteratively updated with genes having expression patterns similar to those for regulon member genes. A set
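
    The iterative step described, a seed regulon grown by expression similarity, can be sketched roughly as follows; the correlation threshold, data layout and gene names are assumptions, not the authors' exact procedure.

      import numpy as np
      import pandas as pd

      def grow_modulon(expr, seed_genes, r_min=0.8, max_iter=10):
          members = set(seed_genes)
          for _ in range(max_iter):
              profile = expr.loc[sorted(members)].mean(axis=0)    # mean expression of members
              corr = expr.T.corrwith(profile)                     # correlation of every gene
              new = set(corr[corr >= r_min].index) - members      # genes passing the threshold
              if not new:
                  break
              members |= new
          return members

      rng = np.random.default_rng(6)
      base = rng.normal(size=8)
      expr = pd.DataFrame({"gA": base + 0.1 * rng.normal(size=8),
                           "gB": base + 0.1 * rng.normal(size=8),
                           "gC": rng.normal(size=8)}).T            # genes x conditions
      print(grow_modulon(expr, seed_genes=["gA"]))                 # likely {'gA', 'gB'}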

  17. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

    Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general, the simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via a simple random sampling procedure, though it makes calculations much simpler, is often violated in practice as the data does not always yield a simple random ...
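
    As a point of reference for the quantities being estimated, the sketch below computes a plug-in Gini index from a simple random sample of a synthetic income distribution; the ranked set and systematic sampling variants discussed in the paper are not shown.

      import numpy as np

      def gini(y):
          y = np.sort(np.asarray(y, float))
          n = len(y)
          ranks = np.arange(1, n + 1)
          return 2.0 * (ranks * y).sum() / (n * y.sum()) - (n + 1.0) / n

      rng = np.random.default_rng(2)
      population = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)
      sample = rng.choice(population, size=500, replace=False)   # simple random sample
      print(round(gini(population), 3), round(gini(sample), 3))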

  18. G-Consistent Subsets and Reduced Dynamical Quantum Maps

    Science.gov (United States)

    Ceballos, Russell R.

    A quantum system which evolves in time while interacting with an external environment is said to be an open quantum system (OQS), and the influence of the environment on the unperturbed unitary evolution of the system generally leads to non-unitary dynamics. This kind of open system dynamical evolution has been typically modeled by a Standard Prescription (SP) which assumes that the state of the OQS is initially uncorrelated with the environment state. It is here shown that when a minimal set of physically motivated assumptions are adopted, not only do there exist constraints on the reduced dynamics of an OQS such that this SP does not always accurately describe the possible initial correlations existing between the OQS and environment, but such initial correlations, and even entanglement, can be witnessed when a particular class of reduced state transformations, termed purity extractions, is observed. Furthermore, as part of a more fundamental investigation to better understand the minimal set of assumptions required to formulate well defined reduced dynamical quantum maps, it is demonstrated that there exists a one-to-one correspondence between the set of initial reduced states and the set of admissible initial system-environment composite states when G-consistency is enforced. Given the discussions surrounding the requirement of complete positivity and the reliance on the SP, the results presented here may well be found valuable for determining the basic properties of reduced dynamical maps, and when restrictions on the OQS dynamics naturally emerge.

  19. Simplified DFT methods for consistent structures and energies of large systems

    Science.gov (United States)

    Caldeweyher, Eike; Gerit Brandenburg, Jan

    2018-05-01

    Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview on the methods design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.

  20. Implicit leadership theories in applied settings: factor structure, generalizability, and stability over time.

    Science.gov (United States)

    Epitropaki, Olga; Martin, Robin

    2004-04-01

    The present empirical investigation had a 3-fold purpose: (a) to cross-validate L. R. Offermann, J. K. Kennedy, and P. W. Wirtz's (1994) scale of Implicit Leadership Theories (ILTs) in several organizational settings and to further provide a shorter scale of ILTs in organizations; (b) to assess the generalizability of ILTs across different employee groups, and (c) to evaluate ILTs' change over time. Two independent samples were used for the scale validation (N1 = 500 and N2 = 439). A 6-factor structure (Sensitivity, Intelligence, Dedication, Dynamism, Tyranny, and Masculinity) was found to most accurately represent ILTs in organizational settings. Regarding the generalizability of ILTs, although the 6-factor structure was consistent across different employee groups, there was only partial support for total factorial invariance. Finally, evaluation of gamma, beta, and alpha change provided support for ILTs' stability over time.

  1. Sludge characterization: the role of physical consistency

    Energy Technology Data Exchange (ETDEWEB)

    Spinosa, Ludovico; Wichmann, Knut

    2003-07-01

    Physical consistency is an important parameter in sewage sludge characterization as it strongly affects almost all treatment, utilization and disposal operations. In addition, many European Directives refer to physical consistency as a characteristic to be evaluated for fulfilling regulatory requirements. Further, in many analytical methods for sludge, different procedures are indicated depending on whether a sample is liquid or not, solid or not. Three physical behaviours (liquid, paste-like and solid) can be observed with sludges, so the development of analytical procedures to define the boundary limit between liquid and paste-like behaviours (flowability) and that between solid and paste-like ones (solidity) is of growing interest. Several devices can be used for evaluating the flowability and solidity properties, but they are often costly and difficult to operate in the field. Tests have been carried out to evaluate the possibility of adopting a simple extrusion procedure for flowability measurements, and a Vicat needle for solidity ones. (author)

  2. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
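
    A rough re-creation of the kind of Monte Carlo setup described, with illustrative parameters: 6-point ratings are simulated for signal and noise cases and the AUC is estimated. Note that the nonparametric (Mann-Whitney) estimate used here, unlike the fitted ROC curves of the study, is not expected to reproduce the reported small-sample bias; the sketch only shows the simulation scaffolding.

      import numpy as np

      def simulated_auc(n_per_class, rng):
          cuts = np.linspace(-1.5, 1.5, 5)                       # 6-point rating scale
          noise = np.digitize(rng.normal(0.0, 1.0, n_per_class), cuts)
          signal = np.digitize(rng.normal(1.0, 1.0, n_per_class), cuts)
          diff = signal[:, None] - noise[None, :]
          # Mann-Whitney estimate of P(signal rating > noise rating), ties count 1/2
          return (diff > 0).mean() + 0.5 * (diff == 0).mean()

      rng = np.random.default_rng(3)
      for n in (15, 25, 50, 100):                                # sample sizes from the study
          aucs = [simulated_auc(n, rng) for _ in range(2000)]    # 2000 runs per condition
          print(n, round(float(np.mean(aucs)), 4))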

  3. 1999 Baseline Sampling and Analysis Sampling Locations, Geographic NAD83, LOSCO (2004) [BSA_1999_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis program coordinated by the Louisiana Oil Spill Coordinator's Office. This...

  4. 1997 Baseline Sampling and Analysis Sample Locations, Geographic NAD83, LOSCO (2004) [BSA_1997_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis (BSA) program coordinated by the Louisiana Oil Spill Coordinator's Office....

  5. 1998 Baseline Sampling and Analysis Sampling Locations, Geographic NAD83, LOSCO (2004) [BSA_1998_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis program coordinated by the Louisiana Oil Spill Coordinator's Office. This...

  6. Flood damage curves for consistent global risk assessments

    Science.gov (United States)

    de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek

    2016-04-01

    Assessing potential damage of flood events is an important component in flood risk management. Determining direct flood damage is commonly done using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves which are based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments in those regions. Moreover, due to different methodologies employed for various damage models in different countries, damage assessments cannot be directly compared with each other, obstructing also supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting percent of damage as a function of water depth as well as maximum damage values for a variety of assets and land use classes (i.e. residential, commercial, agriculture). Based on an extensive literature survey concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is also given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations, use of specific building material, etc. This dataset can be used for consistent supra
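
    A minimal sketch of how such a depth-damage curve is applied (curve points and the maximum damage figure are made up for illustration): the damage fraction at the flood depth is interpolated and multiplied by the asset's maximum damage value.

      import numpy as np

      # Depth (m) vs. fraction of maximum damage for one land-use class (made-up values)
      depths   = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 6.0])
      fraction = np.array([0.00, 0.25, 0.40, 0.60, 0.75, 1.00])   # concave curve shape

      def flood_damage(depth_m, max_damage):
          f = np.interp(depth_m, depths, fraction)   # clamped linear interpolation
          return f * max_damage

      print(flood_damage(1.4, max_damage=250_000))   # estimated damage for a 1.4 m flood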

  7. The effect on dose accumulation accuracy of inverse-consistency and transitivity error reduced deformation maps

    International Nuclear Information System (INIS)

    Hardcastle, Nicholas; Bender, Edward T.; Tomé, Wolfgang A.

    2014-01-01

    It has previously been shown that deformable image registrations (DIRs) often result in deformation maps that are neither inverse-consistent nor transitive, and that the dose accumulation based on these deformation maps can be inconsistent if different image pathways are used for dose accumulation. A method presented to reduce inverse consistency and transitivity errors has been shown to result in more consistent dose accumulation, regardless of the image pathway selected for dose accumulation. The present study investigates the effect on the dose accumulation accuracy of deformation maps processed to reduce inverse consistency and transitivity errors. A set of lung 4DCT phases were analysed, consisting of four images on which a dose grid was created. Dose to 75 corresponding anatomical locations was manually tracked. Dose accumulation was performed between all image sets with Demons derived deformation maps as well as deformation maps processed to reduce inverse consistency and transitivity errors. The ground truth accumulated dose was then compared with the accumulated dose derived from DIR. Two dose accumulation image pathways were considered. The post-processing method to reduce inverse consistency and transitivity errors had minimal effect on the dose accumulation accuracy. There was a statistically significant improvement in dose accumulation accuracy for one pathway, but for the other pathway there was no statistically significant difference. A post-processing technique to reduce inverse consistency and transitivity errors has a positive, yet minimal effect on the dose accumulation accuracy. Thus the post-processing technique improves consistency of dose accumulation with minimal effect on dose accumulation accuracy.
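
    As a toy, one-dimensional illustration of the inverse-consistency error concept (not the study's 4DCT data or post-processing method): a forward deformation is composed with an imperfect backward deformation and the deviation from the identity is measured.

      import numpy as np

      def t_ab(p):                       # deformation mapping image A coordinates to B
          return p + 0.05 * np.sin(2 * np.pi * p)

      def t_ba(p):                       # an imperfect numerical inverse, B back to A
          return p - 0.048 * np.sin(2 * np.pi * p)

      x = np.linspace(0.0, 1.0, 101)               # normalized voxel coordinates
      ic_error = np.abs(t_ba(t_ab(x)) - x)         # deviation of the composition from identity
      print(round(float(ic_error.max()), 4))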

  8. Sampling in practice

    DEFF Research Database (Denmark)

    Esbensen, Kim Harry; Petersen, Lars

    2005-01-01

    A basic knowledge of the Theory of Sampling (TOS) and a set of only eight sampling unit operations is all the practical sampler needs to ensure representativeness of samples extracted from all kinds of lots: production batches, - truckloads, - barrels, sub-division in the laboratory, sampling...... in nature and in the field (environmental sampling, forestry, geology, biology), from raw materials or manufactory processes etc. We here can only give a brief introduction to the Fundamental Sampling Principle (FSP) and these eight Sampling Unit Operations (SUO’s). Always respecting FSP and invoking only...... the necessary SUO’s (dependent on the practical situation) is the only prerequisite needed for eliminating all sampling bias and simultaneously minimizing sampling variance, and this is in addition a sure guarantee for making the final analytical results trustworthy. No reliable conclusions can be made unless...

  9. A sampling approach for predicting the eating quality of apples using visible-near infrared spectroscopy.

    Science.gov (United States)

    Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B

    2013-12-01

    Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity prediction, in the wavelength range 400-1100 nm. A total of 196 middle-early season and 219 late season apple (Malus domestica Borkh.) samples of cvs 'Aroma' and 'Holsteiner Cox' were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CV(SSC) = 13%. The model showed consistently low errors and bias (PLS/EN: R2(cal) = 0.60/0.60; SEC = 0.88/0.88 °Brix; Bias(cal) = 0.00/0.00; R2(val) = 0.33/0.44; SEP = 1.14/1.03; Bias(val) = 0.04/0.03). However, the prediction of acidity and of SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results as compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance of these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models. The implication
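
    A hedged sketch of a PLS calibration of the kind compared in the study, using synthetic spectra rather than the study's data; the number of latent components, the noise level and the split are arbitrary assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import mean_squared_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      X = rng.normal(size=(200, 350))              # 200 synthetic "spectra", 350 wavelengths
      ssc = 12 + X[:, 50] - 0.5 * X[:, 200] + rng.normal(scale=0.5, size=200)

      X_tr, X_te, y_tr, y_te = train_test_split(X, ssc, test_size=0.3, random_state=0)
      model = PLSRegression(n_components=5).fit(X_tr, y_tr)
      sep = mean_squared_error(y_te, model.predict(X_te).ravel()) ** 0.5   # error of prediction
      print(round(sep, 2))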

  10. Auditory proactive interference in monkeys: the roles of stimulus set size and intertrial interval.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2013-09-01

    We conducted two experiments to examine the influences of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (nonmatch trials). In Experiment 1, we randomly assigned stimulus set sizes of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on nonmatch trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but it was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on nonmatch trials. Taken together, Experiments 1 and 2 showed that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses of the data from Experiment 1 suggested that monkeys may make same-different judgments on the basis of a familiarity criterion that is adjusted by error-related feedback.

  11. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
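
    As a minimal illustration of one of the listed methods, the sketch below generates a Latin hypercube sample: each parameter range is divided into n equal-probability strata, exactly one value is drawn per stratum, and strata are randomly paired across parameters; the dimensions and sample size are illustrative.

      import numpy as np

      def latin_hypercube(n, k, rng):
          # One draw from each of n equal-probability strata, per parameter
          u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
          for j in range(k):
              u[:, j] = u[rng.permutation(n), j]   # random pairing of strata across parameters
          return u                                 # points in [0, 1)^k

      rng = np.random.default_rng(5)
      samples = latin_hypercube(n=10, k=3, rng=rng)
      print(np.sort(np.floor(samples[:, 0] * 10)))  # each of the 10 strata hit exactly once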

  12. Context-specific metabolic networks are consistent with experiments.

    Directory of Open Access Journals (Sweden)

    Scott A Becker

    2008-05-01

    Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.

  13. A set of triple-resonance nuclear magnetic resonance experiments for structural characterization of organophosphorus compounds in mixture samples

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Harri, E-mail: Harri.T.Koskela@helsinki.fi [VERIFIN, University of Helsinki, P.O. Box 55, FIN-00014 Helsinki (Finland)

    2012-11-02

    Highlights: • New 1H, 13C, 31P triple-resonance NMR pulse experiments. • Analysis of organophosphorus (OP) compounds in complex matrix. • Selective extraction of 1H, 31P, and 13C chemical shifts and connectivities. • More precise NMR identification of OP nerve agents and their degradation products. Abstract: The 1H, 13C correlation NMR spectroscopy utilizes J(CH) couplings in molecules, and provides important structural information from small organic molecules in the form of carbon chemical shifts and carbon-proton connectivities. The full potential of the 1H, 13C correlation NMR spectroscopy has not been realized in the Chemical Weapons Convention (CWC) related verification analyses due to the sample matrix, which usually contains a high amount of non-related compounds obscuring the correlations of the relevant compounds. Here, the results of the application of 1H, 13C, 31P triple-resonance NMR spectroscopy in characterization of OP compounds related to the CWC are presented. With a set of two-dimensional triple-resonance experiments the J(HP), J(CH) and J(PC) couplings are utilized to map the connectivities of the atoms in OP compounds and to extract the carbon chemical shift information. With the use of the proposed pulse sequences the correlations from the OP compounds can be recorded without significant artifacts from the non-OP compound impurities in the sample. Further selectivity of the observed correlations is achieved with the application of phosphorus band-selective pulse in the pulse sequences to assist the analysis of multiple OP compounds in mixture samples. The use of the triple-resonance experiments in the analysis of a complex sample is shown with a test mixture containing typical scheduled OP compounds, including the characteristic degradation

  14. Road and Street Centerlines, StreetLabels-The data set is a text feature consisting of 6329 label points representing street names. It was created to show the names of city and county based streets., Published in 1989, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Road and Street Centerlines dataset current as of 1989. StreetLabels-The data set is a text feature consisting of 6329 label points representing street names. It was...

  15. A paradigm shift toward a consistent modeling framework to assess climate impacts

    Science.gov (United States)

    Monier, E.; Paltsev, S.; Sokolov, A. P.; Fant, C.; Chen, H.; Gao, X.; Schlosser, C. A.; Scott, J. R.; Dutkiewicz, S.; Ejaz, Q.; Couzo, E. A.; Prinn, R. G.; Haigh, M.

    2017-12-01

    Estimates of physical and economic impacts of future climate change are subject to substantial challenges. To enrich the currently popular approaches of assessing climate impacts by evaluating a damage function or by multi-model comparisons based on the Representative Concentration Pathways (RCPs), we focus here on integrating impacts into a self-consistent coupled human and Earth system modeling framework that includes modules that represent multiple physical impacts. In a sample application we show that this framework is capable of investigating the physical impacts of climate change and socio-economic stressors. The projected climate impacts vary dramatically across the globe in a set of scenarios with global mean warming ranging between 2.4°C and 3.6°C above pre-industrial by 2100. Unabated emissions lead to substantial sea level rise, acidification that impacts the base of the oceanic food chain, air pollution that exceeds health standards by tenfold, water stress that impacts an additional 1 to 2 billion people globally and agricultural productivity that decreases substantially in many parts of the world. We compare the outcomes from these forward-looking scenarios against the common goal described by the target-driven scenario of 2°C, which results in much smaller impacts. It is challenging for large internationally coordinated exercises to respond quickly to new policy targets. We propose that a paradigm shift toward a self-consistent modeling framework to assess climate impacts is needed to produce information relevant to evolving global climate policy and mitigation strategies in a timely way.

  16. NODC Standard Product: World Ocean Atlas 2001 (6 disc set) (NODC Accession 0095600)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The World Ocean Atlas 2001 (WOA01) Series consists of two sets of products. The first set of products consists of three DATA CD-ROMs containing global data...

  17. The computer system for the express-analysis of the irradiation samples

    International Nuclear Information System (INIS)

    Vzorov, I.K.; Kalmykov, A.V.; Korenev, S.A.; Minashkin, V.F.; Sikolenko, V.V.

    1999-01-01

    The computer system for the express-analysis (SEA) of the irradiation samples is described. The system works together with a pulsed high-current electron and ion source, and it allows the irradiation regime to be corrected in real time. The SEA system automatically measures volt-ampere and volt-farad characteristics, sample resistance by the 'four-probe' method, and sample capacitor parameters. Its parameters are: in the volt-ampere measuring regime - U_max = 200 V, minimal voltage step U_sh = 0.05 V, voltage accuracy 0.25%; in the capacity measuring regime - capacity measurement range 0 - 1600 pF, working frequency range 1 - 150 kHz, capacity accuracy 0.5%, voltage shifting range 1 - 200 V, minimal step of voltage shifting U_sh = 0.05 V. The SEA is managed by an IBM/AT computer. The control and measuring apparatus was realized in the CAMAC standard. The software consists of procedures for display, control, data treatment, and output of information. (author)

  18. Comparing two sampling methods to engage hard-to-reach communities in research priority setting.

    Science.gov (United States)

    Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J

    2016-10-28

    Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities' stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 %) consented, 52 (95 %) attended the first meeting, and 36 (65 %) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 %) consented, 36 (58 %) attended the first meeting, and 26 (42 %) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements/transportation services (P = 0.004), which was higher for the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers

  19. Improving the functionality of dictionary definitions for lexical sets ...

    African Journals Online (AJOL)

    2008) approach to the design of lexicographic definitions for members of lexical sets. The questions raised are how to define and identify lexical sets, how lexical conceptual models (LCMs) can support definitional consistency and coherence in ...

  20. Comparison of culture based methods for the isolation of Clostridium difficile from stool samples in a research setting.

    Science.gov (United States)

    Lister, Michelle; Stevenson, Emma; Heeg, Daniela; Minton, Nigel P; Kuehne, Sarah A

    2014-08-01

    Effective isolation of Clostridium difficile from stool samples is important in the research setting, especially where low numbers of spores/vegetative cells may be present within a sample. In this study, three protocols for stool culture were investigated to find a sensitive, cost-effective and timely method of C. difficile isolation. For the initial enrichment step, the effectiveness of two different rich media, cycloserine-cefoxitin fructose broth (CCFB) and cycloserine-cefoxitin mannitol broth with taurocholate and lysozyme (CCMB-TAL), was compared. Four different selective solid media were compared, with and without preceding broth enrichment: cycloserine-cefoxitin fructose agar (CCFA), cycloserine-cefoxitin egg yolk agar (CCEY), ChromID C. difficile, and tryptone soy agar (TSA) with 5% sheep's blood. As a means to enable differentiation between C. difficile and other fecal flora, the effectiveness of including a pH indicator (1% Neutral Red) was also evaluated. The data indicated that CCFB is more sensitive than CCMB-TAL; however, the latter had an improved recovery rate. A broth enrichment step had reduced sensitivity compared with direct plating. ChromID C. difficile showed the best recovery rate, whereas CCEY egg yolk agar was the most sensitive of the four. The addition of 1% Neutral Red did not show sufficient colour change when added to CCEY egg yolk agar to be used as a differential medium. For a low-cost, timely and sensitive method of isolating C. difficile from stool samples we recommend direct plating onto CCEY egg yolk agar after heat shock. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS......, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine...... the MCS of the best in terms of in-sample likelihood criteria....

  2. The Effect of Goal Setting on Performance: A Theoretical Review (Pengaruh Goal Setting terhadap Performance: Tinjauan Teoritis)

    OpenAIRE

    Ginting, Surya Dharma; Ariani, D. Wahyu

    2004-01-01

    This article is a conceptual view of goal setting theory and the effects of goal setting on individual performance. Goal setting is recognized as a major theory of work motivation. Difficult goals have consistently been shown to lead to higher levels of performance than easy goals. If there is no commitment, a goal can have no motivational effect. Goals are central to current treatments of work motivation, and goal commitment is a necessary condition for difficult goals to result in higher...

  3. Setting Priorities Personal Values, Organizational Results

    CERN Document Server

    (CCL), Center for Creative Leadership

    2011-01-01

    To be a successful leader, you need to get results. To get results, you need to set priorities. This book can help you do a better job of setting priorities, recognizing the personal values that motivate your decision making, the probable trade-offs and consequences of your decisions, and the importance of aligning your priorities with your organization's expectations. In this way you can successfully meet organizational objectives and consistently produce results.

  4. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
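
    For intuition, the simpler single-group Bayesian formulation that the authors describe as a special case of their model can be sketched in a few lines: place a Beta prior on the per-item probability of acceptability, update it with the n sampled items (all observed to be acceptable), and evaluate the beta-binomial posterior predictive probability that a large fraction of the unsampled items is also acceptable. The sketch below is illustrative only; the prior, population size, sample size and 95 % threshold are assumptions, not values from the paper.

```python
# Minimal single-group Bayesian acceptance-sampling sketch (a special case of
# the two-group model described above). The prior, population size, sample
# size and 95% threshold are illustrative assumptions, not the paper's values.
import math
from scipy.stats import betabinom

N = 1000          # population size
n = 59            # items sampled, all observed to be acceptable
a, b = 1.0, 1.0   # uniform Beta prior on the per-item acceptability probability

# After n acceptable observations the posterior is Beta(a + n, b); the number
# of acceptable items among the N - n unsampled items then follows a
# beta-binomial posterior predictive distribution.
remaining = N - n
threshold = math.ceil(0.95 * remaining)   # "large percentage" = 95% here

# P(at least 95% of the unsampled items are acceptable)
prob = betabinom.sf(threshold - 1, remaining, a + n, b)
print(f"P(>= 95% of unsampled items acceptable) = {prob:.3f}")
```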

  5. On the matched pairs sign test using bivariate ranked set sampling ...

    African Journals Online (AJOL)

    BVRSS) is introduced and investigated. We show that this test is asymptotically more efficient than its counterpart sign test based on a bivariate simple random sample (BVSRS). The asymptotic null distribution and the efficiency of the test are derived.

  6. Bioremediation of PAH contaminated soil samples

    International Nuclear Information System (INIS)

    Joshi, M.M.; Lee, S.

    1994-01-01

    Soils contaminated with polynuclear aromatic hydrocarbons (PAHs) pose a hazard to life. The remediation of such sites can be done using physical, chemical, and biological treatment methods or a combination of them. It is of interest to study the decontamination of soil using bioremediation. The experiments were conducted using Acinetobacter (ATCC 31012) at room temperature without pH or temperature control. In the first series of experiments, contaminated soil samples obtained from Alberta Research Council were analyzed to determine the toxic contaminant and their composition in the soil. These samples were then treated using aerobic fermentation and removal efficiency for each contaminant was determined. In the second series of experiments, a single contaminant was used to prepare a synthetic soil sample. This sample of known composition was then treated using aerobic fermentation in continuously stirred flasks. In one set of flasks, contaminant was the only carbon source and in the other set, starch was an additional carbon source. In the third series of experiments, the synthetic contaminated soil sample was treated in continuously stirred flasks in the first set and in fixed bed in the second set and the removal efficiencies were compared. The removal efficiencies obtained indicated the extent of biodegradation for various contaminants, the effect of additional carbon source, and performance in fixed bed without external aeration

  7. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    Science.gov (United States)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule for contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We show how the introduced model can be applied to decision problems.

  8. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
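
    The contrast between the two sampling assumptions is easy to reproduce in a toy Bayesian learner. The sketch below (not the authors' experimental task) compares a narrow and a broad hypothesis about which constructions are grammatical: under strong sampling the likelihood follows the size principle, so repeatedly hearing only one construction favours the narrow hypothesis, while under weak sampling the unheard constructions carry no evidence. All hypothesis sets and counts are made up for illustration.

```python
# Toy illustration of how strong vs. weak sampling assumptions change the
# evidence carried by unobserved constructions. All numbers are made up.
hypotheses = {
    "narrow": {"a", "b"},            # does NOT license constructions c, d
    "broad":  {"a", "b", "c", "d"},  # licenses everything
}
data = ["a"] * 10                    # ten occurrences of construction 'a'

def likelihood(h_set, data, strong):
    if not all(d in h_set for d in data):
        return 0.0
    if strong:
        # Strong sampling: each datum is drawn from the hypothesis's own
        # distribution over licensed constructions (size principle).
        return (1.0 / len(h_set)) ** len(data)
    # Weak sampling: data are generated independently of the hypothesis and
    # merely checked for consistency, so any consistent hypothesis scores 1.
    return 1.0

for strong in (True, False):
    like = {h: likelihood(s, data, strong) for h, s in hypotheses.items()}
    total = sum(like.values())
    posterior = {h: l / total for h, l in like.items()}  # uniform prior
    print("strong" if strong else "weak", posterior)
# Under strong sampling the narrow hypothesis wins (the absence of c and d is
# evidence); under weak sampling both hypotheses remain equally plausible.
```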

  9. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  10. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  11. A multi-year data set on aerosol-cloud-precipitation-meteorology interactions for marine stratocumulus clouds.

    Science.gov (United States)

    Sorooshian, Armin; MacDonald, Alexander B; Dadashazar, Hossein; Bates, Kelvin H; Coggon, Matthew M; Craven, Jill S; Crosbie, Ewan; Hersey, Scott P; Hodas, Natasha; Lin, Jack J; Negrón Marty, Arnaldo; Maudlin, Lindsay C; Metcalf, Andrew R; Murphy, Shane M; Padró, Luz T; Prabhakar, Gouri; Rissman, Tracey A; Shingler, Taylor; Varutbangkul, Varuntida; Wang, Zhen; Woods, Roy K; Chuang, Patrick Y; Nenes, Athanasios; Jonsson, Haflidi H; Flagan, Richard C; Seinfeld, John H

    2018-02-27

    Airborne measurements of meteorological, aerosol, and stratocumulus cloud properties have been harmonized from six field campaigns during July-August months between 2005 and 2016 off the California coast. A consistent set of core instruments was deployed on the Center for Interdisciplinary Remotely-Piloted Aircraft Studies Twin Otter for 113 flight days, amounting to 514 flight hours. A unique aspect of the compiled data set is detailed measurements of aerosol microphysical properties (size distribution, composition, bioaerosol detection, hygroscopicity, optical), cloud water composition, and different sampling inlets to distinguish between clear air aerosol, interstitial in-cloud aerosol, and droplet residual particles in cloud. Measurements and data analysis follow documented methods for quality assurance. The data set is suitable for studies associated with aerosol-cloud-precipitation-meteorology-radiation interactions, especially owing to sharp aerosol perturbations from ship traffic and biomass burning. The data set can be used for model initialization and synergistic application with meteorological models and remote sensing data to improve understanding of the very interactions that comprise the largest uncertainty in the effect of anthropogenic emissions on radiative forcing.

  12. Improved and consistent determination of the nuclear inventory of spent PWR-fuel on the basis of time-dependent cell-calculations with KORIGEN

    International Nuclear Information System (INIS)

    Fischer, U.; Wiese, H.W.

    1983-01-01

    For safe handling, processing and storage of spent nuclear fuel a reliable, experimentally validated method is needed to determine fuel and waste characteristics: composition, radioactivity, heat and radiation. For PWR's, a cell-burnup procedure has been developed which is able to calculate the inventory in consistency with cell geometry, initial enrichment, and reactor control. Routine calculations can be performed with KORIGEN using consistent cross-section sets - burnup-dependent and based on the latest Karlsruhe evaluations for actinides - which were calculated previously with the cell-burnup procedure. Extensive comparisons between calculations and experiments validate the presented procedure. For the use of the KORIGEN code the input description and sample problems are added. Improvements in the calculational method and in data are described, results from KORIGEN, ORIGEN and ORIGEN2 calculations are compared. Fuel and waste inventories are given for BIBLIS-type fuel of different burnup. (orig.) [de

  13. Comparing two sampling methods to engage hard-to-reach communities in research priority setting

    Directory of Open Access Journals (Sweden)

    Melissa A. Valerio

    2016-10-01

    Full Text Available Abstract Background Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. Methods In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities’ stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Results Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 %) consented, 52 (95 %) attended the first meeting, and 36 (65 %) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 %) consented, 36 (58 %) attended the first meeting, and 26 (42 %) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements

  14. Consistency of Teacher-Reported Problems for Students in 21 Countries

    Science.gov (United States)

    Rescorla, Leslie A.; Achenbach, Thomas M.; Ginzburg, Sofia; Ivanova, Masha; Dumenci, Levent; Almqvist, Fredrik; Bathiche, Marie; Bilenberg, Niels; Bird, Hector; Domuta, Anca; Erol, Nese; Fombonne, Eric; Fonseca, Antonio; Frigerio, Alessandra; Kanbayashi, Yasuko; Lambert, Michael C.; Liu, Xianchen; Leung, Patrick; Minaei, Asghar; Roussos, Alexandra; Simsek, Zeynep; Weintraub, Sheila; Weisz, John; Wolanczyk, Tomasz; Zubrick, Stephen R.; Zukauskiene, Rita; Verhulst, Frank

    2007-01-01

    This study compared teachers' ratings of behavioral and emotional problems on the Teacher's Report Form for general population samples in 21 countries (N = 30,957). Correlations between internal consistency coefficients in different countries averaged 0.90. Effects of country on scale scores ranged from 3% to 13%. Gender effects ranged from less…

  15. German Value Set for the EQ-5D-5L.

    Science.gov (United States)

    Ludwig, Kristina; Graf von der Schulenburg, J-Matthias; Greiner, Wolfgang

    2018-06-01

    The objective of this study was to develop a value set for EQ-5D-5L based on the societal preferences of the German population. As the first country to do so, the study design used the improved EQ-5D-5L valuation protocol 2.0 developed by the EuroQol Group, including a feedback module as internal validation and a quality control process that was missing in the first wave of EQ-5D-5L valuation studies. A representative sample of the general German population (n = 1158) was interviewed using a composite time trade-off and a discrete choice experiment under close quality control. Econometric modeling was used to estimate values for all 3125 possible health states described by EQ-5D-5L. The value set was based on a hybrid model including all available information from the composite time trade-off and discrete choice experiment valuations without any exclusions due to data issues. The final German value set was constructed from a combination of a conditional logit model for the discrete choice experiment data and a censored at -1 Tobit model for the composite time trade-off data, correcting for heteroskedasticity. The value set had logically consistent parameter estimates. This study provides a value set for the German version of EQ-5D-5L representing the preferences of the German population. The study successfully employed for the first time worldwide the improved protocol 2.0. The value set enables the use of the EQ-5D-5L instrument in economic evaluations and in clinical studies.

  16. Probabilistic generation of quantum contextual sets

    International Nuclear Information System (INIS)

    Megill, Norman D.; Fresl, Kresimir; Waegell, Mordecai; Aravind, P.K.; Pavicic, Mladen

    2011-01-01

    We give a method for exhaustive generation of a huge number of Kochen-Specker contextual sets, based on the 600-cell, for possible experiments and quantum gates. The method is complementary to our previous parity proof generation of these sets, and it gives all sets while the parity proof method gives only sets with an odd number of edges in their hypergraph representation. Thus we obtain 35 new kinds of critical KS sets with an even number of edges. We also give a statistical estimate of the number of sets that might be obtained in an eventual exhaustive enumeration. -- Highlights: → We generate millions of new Kochen-Specker noncontextual sets. → We find thousands of novel critical Kochen-Specker (KS) sets. → We give algorithms for generating KS sets from a new 4-dim class. → We represent KS sets by means of hypergraphs and their figures. → We give a new exact estimation method for random sampling of sets.

  17. Exploring nurses' and patients' perspectives of limit setting in a forensic mental health setting.

    Science.gov (United States)

    Maguire, Tessa; Daffern, Michael; Martin, Trish

    2014-04-01

    Limit setting is an intervention that is frequently used by mental health nurses. However, limit setting is poorly conceptualized, its purpose is unclear, and there are few evidence-based guidelines to assist nurses to set limits in a safe and effective manner. What is known is that the manner in which nurses set limits influences patients' perceptions of the interactions and their emotional and behavioural responses. In this qualitative study, 12 nurses and 12 patients participated in personal, semistructured interviews that aimed to explore limit setting and to propose principles to guide practice. The findings suggested that: (i) limit setting is important to safety in mental health hospitals; (ii) engaging patients in an empathic manner is necessary when setting limits (when nurses engage in an empathic manner, the therapeutic relationship is more likely to be preserved and the risk of aggressive responses is reduced); and (iii) an authoritative (fair, respectful, consistent, and knowledgeable), rather than authoritarian (controlling and indifferent), limit-setting style enhances positive outcomes with regards to adherence, reduced likelihood of aggression, and preservation of the therapeutic relationship. In conclusion, a limit-setting style characterized by empathic responding and an authoritative, rather than authoritarian interpersonal, style is recommended. Elucidating the components of this style is critical for effective training and best practice of mental health nurses, and to reduce aggressive responses from limit setting. © 2013 Australian College of Mental Health Nurses Inc.

  18. Sample container for neutron activation analysis

    International Nuclear Information System (INIS)

    Lersmacher, B.; Verheijke, M.L.; Jaspers, H.J.

    1983-01-01

    The sample container avoids contaminating the sample substance by diffusion of foreign matter from the wall of the sample container into the sample. It cannot be activated, so that the results of measurements are not falsified by a radioactive container wall. It consists of solid carbon. (orig./HP) [de

  19. The internal consistency and validity of the Self-assessment Parkinson's Disease Disability Scale.

    NARCIS (Netherlands)

    Biemans, M.A.J.E.; Dekker, J.; Woude, L.H.V. van der

    2001-01-01

    OBJECTIVE: To test the consistency and validity of the Self-assessment Parkinson's Disease Disability Scale in patients with Parkinson's disease living at home. DESIGN: Patients with Parkinson's disease responded to a set of questionnaires. In addition, an observation of the performance of daily

  20. Internal consistency and validity of the self-assessment Parkinson's Disease disability scale. Abstract.

    NARCIS (Netherlands)

    Dekker, J.; Biemans, M.A.J.E.; Woude, L.H.V. van der

    2000-01-01

    OBJECTIVE: To test the consistency and validity of the Self-assessment Parkinson's Disease Disability Scale in patients with Parkinson's disease living at home. DESIGN: Patients with Parkinson's disease responded to a set of questionnaires. In addition, an observation of the performance of daily

  1. The upgraded external-beam PIXE/PIGE set-up at LABEC for very fast measurements on aerosol samples

    Energy Technology Data Exchange (ETDEWEB)

    Lucarelli, F.; Calzolai, G.; Chiari, M.; Mochi, D.; Nava, S. [Department of Physics, University of Florence and INFN, Florence (Italy)

    2013-07-01

    Full text: The Particle Induced X-ray Emission (PIXE) technique has been widely used since its birth for the study of the aerosol composition, and for a long time it has been the dominating technique for its elemental analysis. However, it now has to compete with other techniques, like Inductively Coupled Plasma with detection by Atomic Emission Spectroscopy (ICP-AES) or Mass Spectrometry (ICP-MS), or Synchrotron Radiation XRF (SR-XRF). To remain competitive, a proper experimental set-up is important to fully exploit PIXE capabilities. At LABEC, an external beam line is fully dedicated to PIXE-PIGE measurements of atmospheric aerosols [1]. Recently, SDDs (Silicon Drift Detectors) have been introduced for X-ray detection thanks to their better resolution with respect to Si(Li) detectors and the possibility of managing high counting rates (up to 50 kHz at 0.5 μs shaping time). This implies, in turn, the possibility of using very high beam currents, thus drastically reducing the measurement time. However, their use for a complete characterization of X-rays was limited by the small thicknesses and surface areas available. Now SDDs with a thickness of 500 μm and 80 mm{sup 2} area have been introduced on the market. We have therefore replaced the Si(Li) detector used so far for the detection of medium-high Z elements with such an SDD. A comparison of the two detectors has been carried out; PIXE minimum detection limits (MDLs) at different proton beam energies have been studied to find out the best energy for PIXE measurements on aerosol samples collected on different substrata, namely Teflon, Kapton, Nuclepore and Kimfol, used for daily or hourly sampling or for cascade impactors. In particular, in the case of Teflon filters, the production of γ-rays by F in the Teflon filter limits the current which may be used and the Compton γ-ray background worsens the MDLs. Due to the lower thickness of the SDD detector with respect to a typical Si(Li) detector, these problems are reduced

  2. Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not.

    Science.gov (United States)

    MacDorman, Karl F; Chattopadhyay, Debaleena

    2016-01-01

    Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: The most ambiguous representations-those eliciting the greatest category uncertainty-were neither the eeriest nor the coldest. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  3. The Cross-Cultural Consistency of Marital Communication Associated with Marital Distress.

    Science.gov (United States)

    Halford, W. Kim; And Others

    1990-01-01

    Compared problem-solving behaviors of four samples of couples, sorted by marital happiness/distress and culture (German and Australian). Results showed cultural differences in frequency and functional significance of negative verbal communication, along with cross-culturally consistent marital behaviors associated with marital distress. (Author/TE)

  4. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  5. Consistent Quantum Histories: Towards a Universal Language of Physics

    International Nuclear Information System (INIS)

    Grygiel, W.P.

    2007-01-01

    The consistent histories interpretation of quantum mechanics is a reformulation of the standard Copenhagen interpretation that aims at incorporating quantum probabilities as part of the axiomatic foundations of the theory. It is not only supposed to equip quantum mechanics with clear criteria of its own experimental verification but, first and foremost, to alleviate one of the stumbling blocks of the theory - the measurement problem. Since the consistent histories interpretation operates with a series of quantum events integrated into one quantum history, the measurement problem is naturally absorbed as one of the events that build up a history. The interpretation rests upon the two following assumptions, proposed already by J. von Neumann: (1) both the microscopic and macroscopic regimes are subject to the same set of quantum laws and (2) a projector operator that is assigned to each event within a history permits the history to be transcribed into a set of propositions that relate the entire course of quantum events. Based on this, a universal language of physics is expected to emerge that will bring the quantum apparatus back to common-sense propositional logic. The basic philosophical issue raised by this study is whether one should justify quantum mechanics by means of what emerges from it, that is, the properties of the macroscopic world, or use the axioms of quantum mechanics to demonstrate the mechanisms by which the macroscopic world comes about from the quantum regime. (author)

  6. Nonlinear and self-consistent treatment of ECRH

    Energy Technology Data Exchange (ETDEWEB)

    Tsironis, C.; Vlahos, L.

    2005-07-01

    A self-consistent formulation for the nonlinear interaction of electromagnetic waves with relativistic magnetized electrons is applied to the description of ECRH. In general, electron-cyclotron absorption is the result of resonances between the cyclotron harmonics and the Doppler-shifted wave frequency. The resonant interaction results in an intense wave-particle energy exchange and an electron acceleration, and for that reason it is widely applied in fusion experiments for plasma heating and current drive. The linear theory for the wave absorption, as well as the quasilinear theory for the electron distribution function, are the most frequently used tools for the study of wave-particle interactions. However, in many cases the validity of these theories is violated, namely in cases where nonlinear effects, such as particle trapping in the wave field, are dominant in the particle phase-space. Our model consists of electrons streaming and gyrating in a tokamak plasma slab, which is finite in the directions perpendicular to the main magnetic field. The particles interact with an electromagnetic electron-cyclotron wave of the ordinary (O-) or the extraordinary (X-) mode. A set of nonlinear and relativistic equations is derived, which takes into account the effects of the charged particle motions on the wave. These consist of the equations of motion for the plasma electrons in the slab, as well as the wave equation in terms of the vector potential. The effect of the electron motions on the temporal evolution of the wave is reflected in the current density source term. (Author)

  7. Nonlinear and self-consistent treatment of ECRH

    International Nuclear Information System (INIS)

    Tsironis, C.; Vlahos, L.

    2005-01-01

    A self-consistent formulation for the nonlinear interaction of electromagnetic waves with relativistic magnetized electrons is applied to the description of ECRH. In general, electron-cyclotron absorption is the result of resonances between the cyclotron harmonics and the Doppler-shifted wave frequency. The resonant interaction results in an intense wave-particle energy exchange and an electron acceleration, and for that reason it is widely applied in fusion experiments for plasma heating and current drive. The linear theory for the wave absorption, as well as the quasilinear theory for the electron distribution function, are the most frequently used tools for the study of wave-particle interactions. However, in many cases the validity of these theories is violated, namely in cases where nonlinear effects, such as particle trapping in the wave field, are dominant in the particle phase-space. Our model consists of electrons streaming and gyrating in a tokamak plasma slab, which is finite in the directions perpendicular to the main magnetic field. The particles interact with an electromagnetic electron-cyclotron wave of the ordinary (O-) or the extraordinary (X-) mode. A set of nonlinear and relativistic equations is derived, which takes into account the effects of the charged particle motions on the wave. These consist of the equations of motion for the plasma electrons in the slab, as well as the wave equation in terms of the vector potential. The effect of the electron motions on the temporal evolution of the wave is reflected in the current density source term. (Author)

  8. 16 CFR 305.6 - Sampling.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Sampling. 305.6 Section 305.6 Commercial... ENERGY POLICY AND CONSERVATION ACT ("APPLIANCE LABELING RULE") Testing § 305.6 Sampling. (a) For any... based upon the sampling procedures set forth in § 430.24 of 10 CFR part 430, subpart B. (b) For any...

  9. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classifaction

    Science.gov (United States)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    random sampling method is used to conduct the field investigation, achieving an overall accuracy of 90.31 %, with a Kappa coefficient of 0.88. The classification method based on decision tree threshold values and the rule set developed from the repository outperforms the results obtained from the traditional methodology. Our decision tree repository and rule-set based object-oriented classification technique was an effective method for producing comparable and consistent wetlands data sets.

  10. Training set optimization and classifier performance in a top-down diabetic retinopathy screening system

    Science.gov (United States)

    Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.

    2013-03-01

    Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and k-NN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve gives evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
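
    The evaluation loop described above (a fixed set of DR cases, a varying number of normal controls, and 10 randomized training sets per size, scored by test-set AUC) can be sketched as follows. Synthetic features and a k-NN classifier stand in for the authors' retinal-image features and classifier suite, so all numbers are purely illustrative.

```python
# Hedged sketch of the study's evaluation loop: vary the number of control
# cases in the training set while keeping the positive (DR) cases fixed,
# repeat with randomized training sets, and track the test-set AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=745, weights=[0.8, 0.2], random_state=0)

# Hold out a fixed test set; the rest forms the pools we draw training sets from
test_idx = rng.choice(len(y), size=200, replace=False)
train_pool = np.setdiff1d(np.arange(len(y)), test_idx)
pos_pool = train_pool[y[train_pool] == 1]   # DR cases, always all included
neg_pool = train_pool[y[train_pool] == 0]   # controls, subsampled below

for n_controls in (50, 100, 200, 300):
    aucs = []
    for _ in range(10):  # 10 randomized training sets, as in the study design
        neg = rng.choice(neg_pool, size=n_controls, replace=False)
        train = np.concatenate([pos_pool, neg])
        clf = KNeighborsClassifier(n_neighbors=5).fit(X[train], y[train])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    print(f"controls={n_controls:3d}  mean AUC={np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```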

  11. Is LambdaCDM consistent with the Tully-Fisher relation?

    Science.gov (United States)

    Reyes, Reinabelle; Gunn, J. E.; Mandelbaum, R.

    2013-07-01

    We consider the question of the origin of the Tully-Fisher relation in LambdaCDM cosmology. Reproducing the observed tight relation between stellar masses and rotation velocities of disk galaxies presents a challenge for semi-analytical models and hydrodynamic simulations of galaxy formation. Here, our goal is to construct a suite of galaxy mass models that is fully consistent with observations, and that also reproduces the observed Tully-Fisher relation. We take advantage of a well-defined sample of disk galaxies in SDSS with measured rotation velocities (from long-slit spectroscopy of H-alpha), stellar bulge and disk profiles (from fits to SDSS images), and average dark matter halo masses (from stacked weak lensing of a larger, similarly-selected sample). The primary remaining freedom in the mass models come from the final dark matter halo profile (after contraction from baryon infall and, possibly, feedback) and the stellar IMF. We find that the observed velocities are reproduced by models with Kroupa IMF and NFW (i.e., unmodified) dark matter haloes for galaxies with stellar masses 10^9-10^10 M_sun. For higher stellar masses, models with contracted NFW haloes are favored. A scenario in which the amount of halo contraction varies with stellar mass is able to reproduce the observed Tully-Fisher relation over the full stellar mass range of our sample from 10^9 to 10^11 M_sun. We present this as a proof-of-concept for consistency between LambdaCDM and the Tully-Fisher relation.

  12. Evaluation of endogenous control genes for gene expression studies across multiple tissues and in the specific sets of fat- and muscle-type samples of the pig.

    Science.gov (United States)

    Gu, Y R; Li, M Z; Zhang, K; Chen, L; Jiang, A A; Wang, J Y; Li, X W

    2011-08-01

    To normalize a set of quantitative real-time PCR (q-PCR) data, it is essential to determine an optimal number/set of housekeeping genes, as the abundance of housekeeping genes can vary across tissues or cells during different developmental stages, or even under certain environmental conditions. In this study, of the 20 commonly used endogenous control genes, 13, 18 and 17 genes exhibited credible stability in 56 different tissues, 10 types of adipose tissue and five types of muscle tissue, respectively. Our analysis clearly showed that three optimal housekeeping genes are adequate for an accurate normalization, which correlated well with the theoretical optimal number (r ≥ 0.94). In terms of economical and experimental feasibility, we recommend the use of the three most stable housekeeping genes for calculating the normalization factor. Based on our results, the three most stable housekeeping genes in all analysed samples (TOP2B, HSPCB and YWHAZ) are recommended for accurate normalization of q-PCR data. We also suggest that two different sets of housekeeping genes are appropriate for 10 types of adipose tissue (the HSPCB, ALDOA and GAPDH genes) and five types of muscle tissue (the TOP2B, HSPCB and YWHAZ genes), respectively. Our report will serve as a valuable reference for other studies aimed at measuring tissue-specific mRNA abundance in porcine samples. © 2011 Blackwell Verlag GmbH.
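
    In practice, the recommended reference genes are combined into a per-sample normalization factor, typically the geometric mean of their relative quantities as in the geNorm approach, by which the quantity of each gene of interest is divided. A minimal sketch with hypothetical relative-quantity values is shown below.

```python
# Minimal sketch of normalizing q-PCR data with the three recommended
# reference genes (TOP2B, HSPCB, YWHAZ): the normalization factor for each
# sample is the geometric mean of the reference-gene quantities. The
# relative-quantity values below are hypothetical.
import numpy as np

# Relative quantities (e.g. 2^-dCt) per sample, hypothetical numbers
reference = {
    "TOP2B": np.array([1.00, 0.82, 1.15]),
    "HSPCB": np.array([0.95, 0.90, 1.05]),
    "YWHAZ": np.array([1.10, 0.78, 1.20]),
}
target = np.array([2.4, 1.1, 3.0])   # gene of interest, same three samples

ref_matrix = np.vstack(list(reference.values()))
norm_factor = np.exp(np.mean(np.log(ref_matrix), axis=0))  # geometric mean per sample
normalized_target = target / norm_factor
print("normalization factor:", norm_factor)
print("normalized target   :", normalized_target)
```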

  13. Uncertainty representation of grey numbers and grey sets.

    Science.gov (United States)

    Yang, Yingjie; Liu, Sifeng; John, Robert

    2014-09-01

    In the literature, there is a presumption that a grey set and an interval-valued fuzzy set are equivalent. This presumption ignores the existence of discrete components in a grey number. In this paper, new measurements of uncertainties of grey numbers and grey sets, consisting of both absolute and relative uncertainties, are defined to give a comprehensive representation of uncertainties in a grey number and a grey set. Some simple examples are provided to illustrate that the proposed uncertainty measurement can give an effective representation of both absolute and relative uncertainties in a grey number and a grey set. The relationships between grey sets and interval-valued fuzzy sets are also analyzed from the point of view of the proposed uncertainty representation. The analysis demonstrates that grey sets and interval-valued fuzzy sets provide different but overlapping models for uncertainty representation in sets.

  14. Wavelet bidomain sample entropy analysis to predict spontaneous termination of atrial fibrillation

    International Nuclear Information System (INIS)

    Alcaraz, Raúl; Rieta, José Joaquín

    2008-01-01

    The ability to predict if an atrial fibrillation (AF) episode terminates spontaneously or not through non-invasive techniques is a challenging problem of great clinical interest. This fact could avoid useless therapeutic interventions and minimize the risks for the patient. The present work introduces a robust AF prediction methodology carried out by estimating, through sample entropy (SampEn), the atrial activity (AA) organization increase prior to AF termination from the surface electrocardiogram (ECG). This regularity variation appears as a consequence of the decrease in the number of reentries wandering throughout the atrial tissue. AA was obtained from surface ECG recordings by applying a QRST cancellation technique. Next, a robust and reliable classification process for terminating and non-terminating AF episodes was developed, making use of two different wavelet decomposition strategies. Finally, the AA organization both in time and wavelet domains (bidomain) was estimated via SampEn. The methodology was validated using a training set consisting of 20 AF recordings with known termination properties and a test set of 30 recordings. All the training signals and 93.33% of the test set were correctly classified into terminating and sustained AF, obtaining 93.75% sensitivity and 92.86% specificity. It can be concluded that spontaneous AF termination can be reliably and noninvasively predicted by applying wavelet bidomain sample entropy
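
    Sample entropy itself is straightforward to compute: count template matches of length m and m + 1 within a tolerance r and take the negative logarithm of their ratio, with lower values indicating a more organized signal. The sketch below is a generic SampEn implementation applied to a toy signal, not the paper's wavelet-bidomain pipeline; m = 2 and r = 0.2 times the standard deviation are common default choices assumed here.

```python
# Minimal sample entropy (SampEn) sketch for a 1-D series; lower SampEn means
# a more organized signal, consistent with the organization increase reported
# before AF termination. m, r and the test signals are illustrative choices.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)               # common default tolerance
    n = len(x)

    def count_matches(length):
        # Use n - m templates for both lengths so the two counts are comparable
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates (self-matches excluded)
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    B = count_matches(m)       # matches of length m
    A = count_matches(m + 1)   # matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # organized signal
irregular = rng.standard_normal(1000)                # disorganized signal
print(sample_entropy(regular), sample_entropy(irregular))
```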

  15. A vocabulary for the identification and delineation of teratoma tissue components in hematoxylin and eosin-stained samples

    Directory of Open Access Journals (Sweden)

    Ramamurthy Bhagavatula

    2014-01-01

    Full Text Available We propose a methodology for the design of features mimicking the visual cues used by pathologists when identifying tissues in hematoxylin and eosin (H&E-stained samples. Background: H&E staining is the gold standard in clinical histology; it is cheap and universally used, producing a vast number of histopathological samples. While pathologists accurately and consistently identify tissues and their pathologies, it is a time-consuming and expensive task, establishing the need for automated algorithms for improved throughput and robustness. Methods: We use an iterative feedback process to design a histopathology vocabulary (HV, a concise set of features that mimic the visual cues used by pathologists, e.g. "cytoplasm color" or "nucleus density." These features are based in histology and understood by both pathologists and engineers. We compare our HV to several generic texture-feature sets in a pixel-level classification algorithm. Results: Results on delineating and identifying tissues in teratoma tumor samples validate our expert knowledge-based approach. Conclusions: The HV can be an effective tool for identifying and delineating teratoma components from images of H&E-stained tissue samples.

  16. Concentration of ions in selected bottled water samples sold in Malaysia

    Science.gov (United States)

    Aris, Ahmad Zaharin; Kam, Ryan Chuan Yang; Lim, Ai Phing; Praveena, Sarva Mangala

    2013-03-01

    Many consumers around the world, including Malaysians, have turned to bottled water as their main source of drinking water. The aim of this study is to determine the physical and chemical properties of bottled water samples sold in Selangor, Malaysia. A total of 20 bottled water brands consisting of `natural mineral (NM)' and `packaged drinking (PD)' types were randomly collected and analyzed for their physical-chemical characteristics: hydrogen ion concentration (pH), electrical conductivity (EC) and total dissolved solids (TDS), selected major ions: calcium (Ca), potassium (K), magnesium (Mg) and sodium (Na), and minor trace constituents: copper (Cu) and zinc (Zn) to ascertain their suitability for human consumption. The results obtained were compared with guideline values recommended by World Health Organization (WHO) and Malaysian Ministry of Health (MMOH), respectively. It was found that all bottled water samples were in accordance with the guidelines set by WHO and MMOH except for one sample (D3) which was below the pH limit of 6.5. Both NM and PD bottled water were dominated by Na + K > Ca > Mg. Low values for EC and TDS in the bottled water samples showed that water was deficient in essential elements, likely an indication that these were removed by water treatment. Minerals like major ions were present in very low concentrations which could pose a risk to individuals who consume this water on a regular basis. Generally, the overall quality of the supplied bottled water was in accordance to standards and guidelines set by WHO and MMOH and safe for consumption.

  17. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  18. An experimental set-up to test heat-moisture exchangers

    NARCIS (Netherlands)

    Unal, N.; Pompe, J. C.; Holland, W. P.; Gültuna, I.; Huygen, P. E.; Jabaaij, K.; Ince, C.; Saygin, B.; Bruining, H. A.

    1995-01-01

    The purpose of this study was to build an experimental set-up to assess continuously the humidification, heating and resistance properties of heat-moisture exchangers (HMEs) under clinical conditions. The experimental set-up consists of a patient model, measurement systems and a ventilator. Surgical

  19. Self-consistent electronic-structure calculations for interface geometries

    International Nuclear Information System (INIS)

    Sowa, E.C.; Gonis, A.; MacLaren, J.M.; Zhang, X.G.

    1992-01-01

    This paper describes a technique for computing self-consistent electronic structures and total energies of planar defects, such as interfaces, which are embedded in an otherwise perfect crystal. As in the Layer Korringa-Kohn-Rostoker approach, the solid is treated as a set of coupled layers of atoms, using Bloch's theorem to take advantage of the two-dimensional periodicity of the individual layers. The layers are coupled using the techniques of the Real-Space Multiple-Scattering Theory, avoiding artificial slab or supercell boundary conditions. A total-energy calculation on a Cu crystal, which has been split apart at a (111) plane, is used to illustrate the method

  20. Wide baseline stereo matching based on double topological relationship consistency

    Science.gov (United States)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches in computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. Here, a novel scheme is presented called double topological relationship consistency (DCTR). The combination of double topological configuration includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods thanks to its strong invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been located in very different orientations. Also, epipolar geometry can be recovered using RANSAC, by far the most widely adopted method. With this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of this method are demonstrated in wide-baseline experiments on the image pairs.
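
    The RANSAC-based recovery of epipolar geometry mentioned above is a standard baseline that can be sketched with off-the-shelf tools (this is not the proposed DCTR scheme): match local features between the two views, estimate the fundamental matrix with RANSAC, and keep only the matches consistent with it. The image file names below are placeholders.

```python
# Standard RANSAC baseline for wide-baseline matching (not the DCTR method):
# SIFT features, ratio-test matching, and fundamental-matrix estimation.
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC fundamental-matrix estimation; the inlier mask discards matches
# that violate the recovered epipolar geometry.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(f"{len(inliers)} / {len(good)} matches consistent with the epipolar geometry")
```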

  1. Consistently Trained Artificial Neural Network for Automatic Ship Berthing Control

    Directory of Open Access Journals (Sweden)

    Y.A. Ahmed

    2015-09-01

    Full Text Available In this paper, a consistently trained Artificial Neural Network controller for automatic ship berthing is discussed. A minimum-time course-changing manoeuvre is utilised to ensure such consistency, and a new concept named ‘virtual window’ is introduced. Such consistent teaching data are then used to train two separate multi-layered feed-forward neural networks for command rudder and propeller revolution output. After proper training, several known and unknown conditions are tested to judge the effectiveness of the proposed controller using Monte Carlo simulations. After obtaining acceptable percentages of success, the trained networks are implemented in the free-running experiment system to judge the networks' real-time response for the Esso Osaka 3-m model ship. The networks' behaviour during such experiments is also investigated for the possible effect of initial conditions as well as wind disturbances. Moreover, since the final goal point of the proposed controller is set at some distance from the actual pier to ensure safety, a study on automatic tug assistance is also discussed for the final alignment of the ship with the actual pier.

  2. Towards the harmonization between National Forest Inventory and Forest Condition Monitoring. Consistency of plot allocation and effect of tree selection methods on sample statistics in Italy.

    Science.gov (United States)

    Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco

    2013-07-01

    In the frame of a process aiming at harmonizing the National Forest Inventory (NFI) and ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In case of a shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees will be abandoned, and 1,473 new trees will be selected. The balance between exclusion of former sample trees and inclusion of new ones will be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less so for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameter at breast height (DBH) and defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. Our results suggest that both a revision of FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample

  3. Adaptive maximal poisson-disk sampling on surfaces

    KAUST Repository

    Yan, Dongming

    2012-01-01

    In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state of the art. © 2012 ACM.
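
    To make the underlying idea concrete, the sketch below implements plain dart-throwing maximal Poisson-disk sampling with a constant radius in a planar rectangle; it only illustrates the disk-set concept, not the varying-radius, surface-based method or the power-diagram gap analysis of the paper. All names and parameter values are illustrative.

```python
import math
import random

def poisson_disk_sample(radius, width=1.0, height=1.0, max_attempts=10000, seed=0):
    """Naive dart-throwing Poisson-disk sampling in a planar rectangle.

    A candidate point is accepted only if it lies at least `radius` away from
    every previously accepted point; sampling stops after `max_attempts`
    consecutive rejections, which approximates a maximal disk set.
    """
    rng = random.Random(seed)
    points = []
    rejections = 0
    while rejections < max_attempts:
        candidate = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(candidate, p) >= radius for p in points):
            points.append(candidate)
            rejections = 0
        else:
            rejections += 1
    return points

print(len(poisson_disk_sample(radius=0.05)))
```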

  4. Algorithms for assessing person-based consistency among linked records for the investigation of maternal use of medications and safety

    Directory of Open Access Journals (Sweden)

    Duong Tran

    2017-04-01

    Quality assessment indicated high consistency among linked records. The set of algorithms developed in this project can be applied to similar linked perinatal datasets to promote a consistent approach and comparability across studies.

  5. 40 CFR 761.312 - Compositing of samples.

    Science.gov (United States)

    2010-07-01

    ... to composite surface wipe test samples and to use the composite measurement to represent the PCB concentration of the entire surface. Composite samples consist of more than one sample gauze extracted and... arithmetic mean of the composited samples. (a) Compositing samples from surfaces to be used or reused. For...

  6. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    Science.gov (United States)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

    Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. The microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, the Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting the unbiased estimator into an improved, biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
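
    A minimal sketch of the general idea, assuming a simple convex-combination shrinkage towards a diagonal target and ordinary (not robust trimmed) means; the paper's estimator integrates a robust trimmed mean and a data-driven shrinkage intensity, so the code below is only illustrative.

```python
import numpy as np

def shrinkage_covariance(X, shrinkage=0.5):
    """Shrink the sample covariance towards a diagonal target.

    X is an (n_samples, n_genes) matrix with n_genes >> n_samples, so the
    sample covariance alone is singular; the convex combination with the
    positive-definite diagonal target is invertible.
    """
    S = np.cov(X, rowvar=False)
    return (1.0 - shrinkage) * S + shrinkage * np.diag(np.diag(S))

def hotelling_t2(X1, X2, shrinkage=0.5):
    """Two-sample Hotelling's T2 statistic with a shrinkage covariance."""
    n1, n2 = len(X1), len(X2)
    diff = X1.mean(axis=0) - X2.mean(axis=0)
    pooled = ((n1 - 1) * shrinkage_covariance(X1, shrinkage)
              + (n2 - 1) * shrinkage_covariance(X2, shrinkage)) / (n1 + n2 - 2)
    return (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(pooled, diff)

# toy gene set: 50 genes measured in 10 and 12 samples
rng = np.random.default_rng(0)
group1 = rng.normal(size=(10, 50))
group2 = rng.normal(loc=0.3, size=(12, 50))
print(hotelling_t2(group1, group2))
```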

  7. Consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of the series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  8. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover the missing links in snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate prediction performance, the known edges in the sampled snapshot are divided into a training set and a probe set randomly, without considering the underlying sampling approach. However, different sampling methods might lead to different missing links, especially biased ones. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we re-evaluate the performance of local information-based link predictions through a division of the training set and the probe set governed by the sampling method. Interestingly, we find that each prediction approach performs unevenly across different sampling methods. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods might have been overestimated in prior works.
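
    A small illustration of the evaluation idea: the probe set is whatever the sampling process missed, rather than a random split, and candidate links are ranked by a local-information score (common neighbours). The uniform edge sampler and all parameters below are stand-ins; a biased sampler would simply produce a different observed/probe division.

```python
import itertools
import random

def common_neighbors(adj, u, v):
    """Local-information score: number of common neighbours of u and v."""
    return len(adj[u] & adj[v])

rng = random.Random(1)
nodes = range(60)
full_edges = {e for e in itertools.combinations(nodes, 2) if rng.random() < 0.08}

# "Snapshot" obtained by an edge-sampling process (here uniform for brevity);
# a biased sampler would change which links end up in the probe set.
observed = {e for e in full_edges if rng.random() < 0.8}   # training snapshot
probe = full_edges - observed                              # links the sampler missed

adj = {v: set() for v in nodes}
for u, v in observed:
    adj[u].add(v)
    adj[v].add(u)

candidates = [e for e in itertools.combinations(nodes, 2) if e not in observed]
ranked = sorted(candidates, key=lambda e: common_neighbors(adj, *e), reverse=True)
hits = sum(1 for e in ranked[:len(probe)] if e in probe)
print(f"precision at |probe|: {hits / len(probe):.2f}")
```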

  9. Protective Factors, Risk Indicators, and Contraceptive Consistency Among College Women.

    Science.gov (United States)

    Morrison, Leslie F; Sieving, Renee E; Pettingell, Sandra L; Hellerstedt, Wendy L; McMorris, Barbara J; Bearinger, Linda H

    2016-01-01

    To explore risk and protective factors associated with consistent contraceptive use among emerging adult female college students and whether effects of risk indicators were moderated by protective factors. Secondary analysis of National Longitudinal Study of Adolescent to Adult Health Wave III data. Data collected through in-home interviews in 2001 and 2002. National sample of 18- to 25-year-old women (N = 842) attending 4-year colleges. We examined relationships between protective factors, risk indicators, and consistent contraceptive use. Consistent contraceptive use was defined as use all of the time during intercourse in the past 12 months. Protective factors included external supports of parental closeness and relationship with caring nonparental adult and internal assets of self-esteem, confidence, independence, and life satisfaction. Risk indicators included heavy episodic drinking, marijuana use, and depression symptoms. Multivariable logistic regression models were used to evaluate relationships between protective factors and consistent contraceptive use and between risk indicators and contraceptive use. Self-esteem, confidence, independence, and life satisfaction were significantly associated with more consistent contraceptive use. In a final model including all internal assets, life satisfaction was significantly related to consistent contraceptive use. Marijuana use and depression symptoms were significantly associated with less consistent use. With one exception, protective factors did not moderate relationships between risk indicators and consistent use. Based on our findings, we suggest that risk and protective factors may have largely independent influences on consistent contraceptive use among college women. A focus on risk and protective factors may improve contraceptive use rates and thereby reduce unintended pregnancy among college students. Copyright © 2016 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published

  10. Standard Practice for Ensuring Test Consistency in Neutron-Induced Displacement Damage of Electronic Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 This practice sets forth requirements to ensure consistency in neutron-induced displacement damage testing of silicon and gallium arsenide electronic piece parts. This requires controls on facility, dosimetry, tester, and communications processes that affect the accuracy and reproducibility of these tests. It provides background information on the technical basis for the requirements and additional recommendations on neutron testing. In addition to neutrons, reactors are used to provide gamma-ray pulses of intensities and durations that are not achievable elsewhere. This practice also provides background information and recommendations on gamma-ray testing of electronics using nuclear reactors. 1.2 Methods are presented for ensuring and validating consistency in neutron displacement damage testing of electronic parts such as integrated circuits, transistors, and diodes. The issues identified and the controls set forth in this practice address the characterization and suitability of the radiation environm...

  11. Self-consistent studies of magnetic thin film Ni (001)

    International Nuclear Information System (INIS)

    Wang, C.S.; Freeman, A.J.

    1979-01-01

    Advances in experimental methods for studying surface phenomena have provided the stimulus to develop theoretical methods capable of interpreting this wealth of new information. Of particular interest have been the relative roles of bulk and surface contributions since in several important cases agreement between experiment and bulk self-consistent (SC) calculations within the local spin density functional formalism (LSDF) is lacking. We discuss our recent extension of the (LSDF) approach to the study of thin films (slabs) and the role of surface effects on magnetic properties. Results are described for Ni (001) films using our new SC numerical basis set LCAO method. Self-consistency within the superposition of overlapping spherical atomic charge density model is obtained iteratively with the atomic configuration as the adjustable parameter. Results are presented for the electronic charge densities and local density of states. The origin and role of (magnetic) surface states is discussed by comparison with results of earlier bulk calculations

  12. An Interdisciplinary Method for the Visualization of Novel High-Resolution Precision Photography and Micro-XCT Data Sets of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Create Combined Research-Grade 3D Virtual Samples for the Benefit of Astromaterials Collections Conservation, Curation, Scientific Research and Education

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2016-01-01

    New technologies make possible the advancement of documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. With increasing demands for accessibility to updated comprehensive data, and with new sample return missions on the horizon, it is of primary importance to develop new standards for contemporary documentation and visualization methodologies. Our interdisciplinary team has expertise in the fields of heritage conservation practices, professional photography, photogrammetry, imaging science, application engineering, data curation, geoscience, and astromaterials curation. Our objective is to create virtual 3D reconstructions of Apollo Lunar and Antarctic Meteorite samples that are a fusion of two state-of-the-art data sets: the interior view of the sample by collecting Micro-XCT data and the exterior view of the sample by collecting high-resolution precision photography data. These new data provide researchers an information-rich visualization of both compositional and textural information prior to any physical sub-sampling. Since January 2013 we have developed a process that resulted in the successful creation of the first image-based 3D reconstruction of an Apollo Lunar Sample correlated to a 3D reconstruction of the same sample's Micro- XCT data, illustrating that this technique is both operationally possible and functionally beneficial. In May of 2016 we began a 3-year research period during which we aim to produce Virtual Astromaterials Samples for 60 high-priority Apollo Lunar and Antarctic Meteorite samples and serve them on NASA's Astromaterials Acquisition and Curation website. Our research demonstrates that research-grade Virtual Astromaterials Samples are beneficial in preserving for posterity a precise 3D reconstruction of the sample prior to sub-sampling, which greatly improves documentation practices, provides unique and novel visualization of the sample's interior and

  13. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, the most phenotypic variation captured by a sampling method in the TRS is desirable. The wheat dataset showed mild population structure, and CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
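
    As a point of reference for the comparison above, the sketch below shows the plainest of the five strategies, stratified sampling proportional to sub-population size; the CDmean/PEVmean criteria would instead score candidate training sets with a model-based criterion. All names and sizes are hypothetical.

```python
import numpy as np

def stratified_training_set(line_ids, cluster_labels, n_train, seed=0):
    """Draw a training set proportionally from each sub-population (stratum)."""
    rng = np.random.default_rng(seed)
    ids = np.asarray(line_ids)
    labels = np.asarray(cluster_labels)
    chosen = []
    for cluster in np.unique(labels):
        members = ids[labels == cluster]
        k = max(1, round(n_train * len(members) / len(ids)))
        chosen.extend(rng.choice(members, size=min(k, len(members)), replace=False))
    return chosen

# hypothetical panel: 100 breeding lines falling into three sub-populations
lines = [f"line{i:03d}" for i in range(100)]
clusters = [i % 3 for i in range(100)]
trs = stratified_training_set(lines, clusters, n_train=30)
print(len(trs), trs[:5])
```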

  14. Fast, kinetically self-consistent simulation of RF modulated plasma boundary sheaths

    International Nuclear Information System (INIS)

    Shihab, Mohammed; Ziegler, Dennis; Brinkmann, Ralf Peter

    2012-01-01

    A mathematical model is presented which enables the efficient, kinetically self-consistent simulation of RF modulated plasma boundary sheaths in all technically relevant discharge regimes. It is defined on a one-dimensional geometry where a Cartesian x-axis points from the electrode or wall at x_E ≡ 0 towards the plasma bulk. An arbitrary endpoint x_B is chosen ‘deep in the bulk’. The model consists of a set of kinetic equations for the ions, Boltzmann's relation for the electrons and Poisson's equation for the electrical field. Boundary conditions specify the ion flux at x_B and a periodically—not necessarily harmonically—modulated sheath voltage V(t) or sheath charge Q(t). The equations are solved in a statistical sense. However, it is not the well-known particle-in-cell (PIC) scheme that is employed, but an alternative iterative algorithm termed ensemble-in-spacetime (EST). The basis of the scheme is a discretization of the spacetime, the product of the domain [x_E, x_B] and the RF period [0, T]. Three modules are called in a sequence. A Monte Carlo module calculates the trajectories of a large set of ions from their start at x_B until they reach the electrode at x_E, utilizing the potential values on the nodes of the spatio-temporal grid. A harmonic analysis module reconstructs the Fourier modes n_im(x) of the ion density n_i(x, t) from the calculated trajectories. A field module finally solves the Boltzmann-Poisson equation with the calculated ion densities to generate an updated set of potential values for the spatio-temporal grid. The iteration is started with the potential values of a self-consistent fluid model and terminates when the updates become sufficiently small, i.e. when self-consistency is achieved. A subsequent post-processing determines important quantities, in particular the phase-resolved and phase-averaged values of the ion energy and angular distributions and the total energy flux at the electrode. A drastic reduction of the

  15. Microwave Enhanced Cotunneling in SET Transistors

    DEFF Research Database (Denmark)

    Manscher, Martin; Savolainen, M.; Mygind, Jesper

    2003-01-01

    Cotunneling in single electron tunneling (SET) devices is an error process which may severely limit their electronic and metrologic applications. Here is presented an experimental investigation of the theory for adiabatic enhancement of cotunneling by coherent microwaves. Cotunneling in SET...... transistors has been measured as function of temperature, gate voltage, frequency, and applied microwave power. At low temperatures and applied power levels, including also sequential tunneling, the results can be made consistent with theory using the unknown damping in the microwave line as the only free...

  16. Self-consistent theory of hadron-nucleus scattering. Application to pion physics

    International Nuclear Information System (INIS)

    Johnson, M.B.

    1981-01-01

    The first part of this set of two seminars will consist of a review of several of the important accomplishments made in the last few years in the field of pion-nucleus physics. Next I discuss some questions raised by these accomplishments and show that for some very natural reasons the commonly employed theoretical methods cannot be applied to answer these questions. This situation leads to the idea of self-consistency, which is first explained in a general context. The remainder of the seminars is devoted to illustrating the idea within a simple multiple-scattering model for the case of pion scattering. An evaluation of the effectiveness of the self-consistency requirement to produce a solution to the model is made, and a few of the questions raised by recent accomplishments in the field of pion physics are addressed in the model. Finally, the results of the model calculation are compared to experimental data and implications of the results discussed. (orig./HSI)

  17. Identification of self-consistent modulons from bacterial microarray expression data with the help of structured regulon gene sets

    KAUST Repository

    Permina, Elizaveta A.

    2013-01-01

    Identification of bacterial modulons from series of gene expression measurements on microarrays is a principal problem, especially relevant for inadequately studied but practically important species. Usage of a priori information on regulatory interactions helps to evaluate parameters for regulatory subnetwork inference. We suggest a procedure for modulon construction where a seed regulon is iteratively updated with genes having expression patterns similar to those for regulon member genes. A set of genes essential for a regulon is used to control modulon updating. Essential genes for a regulon were selected as a subset of regulon genes highly related by different measures to each other. Using Escherichia coli as a model, we studied how modulon identification depends on the data, including the microarray experiments set, the adopted relevance measure and the regulon itself. We have found that results of modulon identification are highly dependent on all parameters studied and thus the resulting modulon varies substantially depending on the identification procedure. Yet, modulons that were identified correctly displayed higher stability during iterations, which allows developing a procedure for reliable modulon identification in the case of less studied species where the known regulatory interactions are sparse. Copyright © 2013 Taylor & Francis.
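
    A schematic of the iterative modulon-construction procedure described above, assuming mean Pearson correlation as the relevance measure and a fixed cutoff; the essential-gene set controls whether an update is accepted. This is a simplified illustration with synthetic data, not the authors' implementation.

```python
import numpy as np

def expand_modulon(expr, gene_names, seed_regulon, essential, corr_cutoff=0.6, max_iter=10):
    """Iteratively grow a modulon from a seed regulon (schematic sketch).

    expr: (n_genes, n_experiments) expression matrix, rows ordered as gene_names.
    A gene joins when its mean Pearson correlation with the current members
    exceeds corr_cutoff; an update is accepted only while every gene in
    `essential` stays a member (the control step described above).
    """
    idx = {g: i for i, g in enumerate(gene_names)}
    corr = np.corrcoef(expr)                       # gene-by-gene correlation matrix
    members = set(seed_regulon)
    for _ in range(max_iter):
        member_cols = [idx[g] for g in members]
        mean_corr = corr[:, member_cols].mean(axis=1)
        updated = {g for g in gene_names if mean_corr[idx[g]] >= corr_cutoff}
        if not set(essential) <= updated:          # essential genes lost: stop updating
            break
        if updated == members:                     # converged
            break
        members = updated
    return members

# toy data: 30 genes and 20 experiments, genes g0-g4 share a common signal
rng = np.random.default_rng(0)
signal = rng.normal(size=20)
expr = rng.normal(size=(30, 20))
expr[:5] += 2 * signal
genes = [f"g{i}" for i in range(30)]
print(sorted(expand_modulon(expr, genes, seed_regulon=["g0", "g1"], essential=["g0"])))
```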

  18. Electron beam charging of insulators: A self-consistent flight-drift model

    International Nuclear Information System (INIS)

    Touzin, M.; Goeuriot, D.; Guerret-Piecourt, C.; Juve, D.; Treheux, D.; Fitting, H.-J.

    2006-01-01

    Electron beam irradiation and the self-consistent charge transport in bulk insulating samples are described by means of a new flight-drift model and an iterative computer simulation. Ballistic secondary electron and hole transport is followed by electron and hole drifts, their possible recombination and/or trapping in shallow and deep traps. The trap capture cross sections are of the Poole-Frenkel type, i.e. temperature and field dependent. As a main result, the spatial distributions of the currents j(x,t), charges ρ(x,t), the field F(x,t), and the potential slope V(x,t) are obtained in a self-consistent procedure, as well as the time-dependent secondary electron emission rate σ(t) and the surface potential V_0(t). For bulk insulating samples the time-dependent distributions approach the final stationary state with j(x,t) = const = 0 and σ = 1. Especially for low electron beam energies E_0, the surface potential is limited by the potential of a vacuum grid in front of the target surface. For high beam energies E_0 = 10, 20, and 30 keV, high negative surface potentials V_0 = -4, -14, and -24 kV are obtained, respectively. Besides open nonconductive samples, positive ion-covered samples and targets with a conducting and grounded layer (metal or carbon) on the surface have also been considered, as used in environmental scanning electron microscopy and common SEM in order to prevent charging. Indeed, the potential distributions V(x) are considerably small in magnitude and do not affect the incident electron beam, neither by retarding field effects in front of the surface nor within the bulk insulating sample. Thus the spatial scattering and excitation distributions are almost not affected

  19. SKATE: a docking program that decouples systematic sampling from scoring.

    Science.gov (United States)

    Feng, Jianwen A; Marshall, Garland R

    2010-11-15

    SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CDCC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CDCC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.

  20. POLYP: an automatic device for drawing sequential samples of gas

    Energy Technology Data Exchange (ETDEWEB)

    Gaglione, P; Koechler, C; Stanchi, L

    1974-12-01

    POLYP is an automatic device consisting of electronic equipment which sequentially drives 8 small pumps for drawing samples of gas. The electronic circuit is driven by a quartz oscillator and allows for the preselection of a waiting time in such a manner that a set of similar instruments placed in suitable positions in the open country will start simultaneously. At the same time, the first pump of each instrument will inflate a plastic bag for a preset time. The other seven pumps will then sequentially inflate the other bags. The instrument is powered by rechargeable batteries and realized with C-MOS integrated circuits for nearly negligible power consumption. As it is foreseen for field operation, it is waterproof.

  1. POLYP: an automatic device for drawing sequential samples of gas

    International Nuclear Information System (INIS)

    Gaglione, P.; Koechler, C.; Stanchi, L.

    1974-12-01

    POLYP is an automatic device consisting of electronic equipment which sequentially drives 8 small pumps for drawing samples of gas. The electronic circuit is driven by a quartz oscillator and allows for the preselection of a waiting time in such a manner that a set of similar instruments placed in suitable positions in the open country will start simultaneously. At the same time, the first pump of each instrument will inflate a plastic bag for a preset time. Thereafter the other seven pumps will sequentially inflate the other bags. The instrument is powered by rechargeable batteries and realized with C-MOS integrated circuits for nearly negligible power consumption. As it is foreseen for field operation, it is waterproof

  2. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  3. Utilizing the Zero-One Linear Programming Constraints to Draw Multiple Sets of Matched Samples from a Non-Treatment Population as Control Groups for the Quasi-Experimental Design

    Science.gov (United States)

    Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar

    2005-01-01

    The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
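
    As an illustration of the matching idea, the sketch below solves the single-control-group special case as an assignment problem (itself a zero-one linear program whose relaxation has an integral optimum) using SciPy; drawing multiple non-overlapping matched sets, as in the report, would require additional 0-1 constraints. All data and names are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_control_sample(treated_X, pool_X):
    """Pick one matched control per treated unit by minimising the total
    covariate distance between treated units and selected controls."""
    # pairwise Euclidean distances between treated units and candidate controls
    dist = np.linalg.norm(treated_X[:, None, :] - pool_X[None, :, :], axis=2)
    _, control_idx = linear_sum_assignment(dist)   # optimal 0-1 assignment
    return control_idx

rng = np.random.default_rng(0)
treated = rng.normal(0.5, 1.0, size=(20, 3))   # 20 treated students, 3 covariates
pool = rng.normal(0.0, 1.0, size=(200, 3))     # 200 candidate comparison students
print(matched_control_sample(treated, pool))
```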

  4. Science and art of setting performance standards and cutoff scores in kinesiology.

    Science.gov (United States)

    Zhu, Weimo

    2013-12-01

    Setting standards and cutoff scores is essential to any measurement and evaluation practice. Two evaluation frameworks, norm-referenced (NR) and criterion-referenced (CR), have often been used for setting standards. Although setting fitness standards based on NR evaluation is relatively easy as long as a nationally representative sample can be obtained and regularly updated, it has several limitations, namely time dependency, population dependency, discouraging low-level performers, and favoring advantaged or punishing disadvantaged individuals. Fortunately, these limitations can be largely eliminated by employing CR evaluation, which was introduced to kinesiology by Safrit and colleagues in the 1980s and has been successfully applied to some practical problems (e.g., setting health-related fitness standards for FITNESSGRAM). Yet, CR evaluation has its own challenges, e.g., selecting an appropriate measure of a criterion behavior, cases where the expected relationship between the criterion behavior and a predictive measure is not clear, and cases where standards are not consistent among multiple field measures. Some of these challenges can be addressed by employing the latest statistical methods (e.g., test equating). This article provides a comprehensive review of the science and art of setting standards and cutoff scores in kinesiology. After a brief historical overview of standard-setting practice in kinesiology, a case analysis of a successful CR evaluation, along with related challenges, is described. Lessons learned from past and current practice, as well as how to develop a defensible standard, are described. Finally, future research needs and directions are outlined.

  5. Self-assessment: Strategy for higher standards, consistency, and performance

    International Nuclear Information System (INIS)

    Ide, W.E.

    1996-01-01

    In late 1994, Palo Verde operations underwent a transformation from a unitized structure to a single functional unit. It was necessary to build consistency in watchstanding practices and create a shared mission. Because there was a lack of focus on actual plant operations and because personnel were deeply involved with administrative tasks, command and control of evolutions were weak. Improvement was needed. Consistent performance standards have been set for all three operating units. These expectations focus on nuclear, radiological, and industrial safety. Straightforward descriptions of watchstanding and monitoring practices have been provided to all department personnel. The desired professional and leadership qualities for employee conduct have been defined and communicated thoroughly. A healthy and competitive atmosphere developed with the successful implementation of these standards. Overall performance improved. The auxiliary operators demonstrated increased pride and ownership in the performance of their work activities. In addition, their morale improved. Crew teamwork improved, as did the quality of shift briefs. There was a decrease in the noise level and in administrative functions in the control room. The use of self-assessment helped to anchor and define higher and more consistent standards. The proof of Palo Verde's success was evident when an Institute of Nuclear Power Operations finding was turned into a strength within 1 yr

  6. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and the areas around them sampled. (Authors: Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln Laboratory.) … systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.
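
    A minimal sketch of "snowball"-style random area sampling as described in the fragment above: random seed vertices are chosen and the graph is expanded breadth-first around them until a node budget is reached. The toy graph and parameters are illustrative only.

```python
from collections import deque
import random

def random_area_sample(adj, n_seeds, max_nodes, seed=0):
    """'Snowball'-style area sampling: choose random seed vertices and expand
    breadth-first around them until the node budget is exhausted."""
    rng = random.Random(seed)
    seeds = rng.sample(list(adj), n_seeds)
    sampled, queue = set(seeds), deque(seeds)
    while queue and len(sampled) < max_nodes:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in sampled and len(sampled) < max_nodes:
                sampled.add(nbr)
                queue.append(nbr)
    return sampled

# toy graph: a ring of 200 vertices with 50 random chords
adj = {i: {(i - 1) % 200, (i + 1) % 200} for i in range(200)}
rng = random.Random(1)
for _ in range(50):
    u, v = rng.sample(range(200), 2)
    adj[u].add(v)
    adj[v].add(u)
print(len(random_area_sample(adj, n_seeds=5, max_nodes=60)))
```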

  7. Multi-instance learning based on instance consistency for image retrieval

    Science.gov (United States)

    Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie

    2017-07-01

    Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches cannot select positive instances correctly from positive bags, which may result in low accuracy. In this paper, we propose a new image retrieval approach called multiple-instance learning based on instance consistency (MILIC) to mitigate this issue. First, we select potential positive instances effectively in each positive bag by ranking the instance-consistency (IC) values of instances. Then, we design a feature representation scheme, based on the potential positive instances, that can represent the relationship among bags and instances and convert a bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.

  8. Sampling solution traces for the problem of sorting permutations by signed reversals

    Science.gov (United States)

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of big permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results

  9. Sampling and analyses report for June 1992 semiannual postburn sampling at the RM1 UCG site, Hanna, Wyoming

    International Nuclear Information System (INIS)

    Lindblom, S.R.

    1992-08-01

    The Rocky Mountain 1 (RM1) underground coal gasification (UCG) test was conducted from November 16, 1987 through February 26, 1988 (United Engineers and Constructors 1989) at a site approximately one mile south of Hanna, Wyoming. The test consisted of dual module operation to evaluate the controlled retracting injection point (CRIP) technology, the elongated linked well (ELW) technology, and the interaction of closely spaced modules operating simultaneously. The test caused two cavities to be formed in the Hanna No. 1 coal seam and associated overburden. The Hanna No. 1 coal seam is approximately 30 ft thick and lies at depths between 350 ft and 365 ft below the surface in the test area. The coal seam is overlain by sandstones, siltstones and claystones deposited by various fluvial environments. The groundwater monitoring was designed to satisfy the requirements of the Wyoming Department of Environmental Quality (WDEQ) in addition to providing research data toward the development of UCG technology that minimizes environmental impacts. The June 1992 semiannual groundwater sampling took place from June 10 through June 13, 1992. This event occurred nearly 34 months after the second groundwater restoration at the RM1 site and was the fifteenth sampling event since UCG operations ceased. Samples were collected for analyses of a limited suite of parameters as listed in Table 1. With a few exceptions, the groundwater is near baseline conditions. Data from the field measurements and analysis of samples are presented. Benzene concentrations in the groundwater were below analytical detection limits

  10. Poster Abstract: Towards NILM for Industrial Settings

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2015-01-01

    Industry consumes a large share of the worldwide electricity consumption. Disaggregated information about electricity consumption enables better decision-making and feedback tools to optimize electricity consumption. In industrial settings electricity loads consist of a variety of equipment, which … consumption for six months, at an industrial site. In this poster abstract we provide initial results on how industrial equipment challenges NILM algorithms. These results thereby open up the possibility of evaluating the use of NILM in industrial settings.

  11. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline for accounting for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling seems to be the most efficient method for correcting sampling bias and should be advised in most cases.
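
    One common way to implement the "systematic sampling of records" correction is to keep at most one presence record per grid cell before model fitting. The sketch below shows that idea with assumed coordinates and cell size; it is an illustration, not the exact procedure used in the paper.

```python
import random

def systematic_thin(records, cell_size, seed=0):
    """Keep at most one occurrence record per grid cell.

    records: iterable of (longitude, latitude) pairs in decimal degrees.
    """
    rng = random.Random(seed)
    cells = {}
    for lon, lat in records:
        key = (int(lon // cell_size), int(lat // cell_size))
        cells.setdefault(key, []).append((lon, lat))
    return [rng.choice(points) for points in cells.values()]

# a heavily oversampled corner of a hypothetical 10 x 10 degree study area
rng = random.Random(42)
biased = [(rng.uniform(0, 2), rng.uniform(0, 2)) for _ in range(500)]
biased += [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(100)]
print(len(biased), "->", len(systematic_thin(biased, cell_size=0.5)))
```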

  12. Screening experiments of ecstasy street samples using near infrared spectroscopy.

    Science.gov (United States)

    Sondermann, N; Kovar, K A

    1999-12-20

    Twelve different sets of confiscated ecstasy samples were analysed applying both near infrared spectroscopy in reflectance mode (1100-2500 nm) and high-performance liquid chromatography (HPLC). The sets showed a large variance in composition. A calibration data set was generated based on the theory of factorial designs. It contained 221 N-methyl-3,4-methylenedioxyamphetamine (MDMA) samples, 167 N-ethyl-3,4-methylenedioxyamphetamine (MDE), 111 amphetamine and 106 samples without a controlled substance, which will be called placebo samples thereafter. From this data set, PLS-1 models were calculated and were successfully applied for validation of various external laboratory test sets. The transferability of these results to confiscated tablets is demonstrated here. It is shown that differentiation into placebo, amphetamine and ecstasy samples is possible. Analysis of intact tablets is practicable. However, more reliable results are obtained from pulverised samples. This is due to ill-defined production procedures. The use of mathematically pretreated spectra improves the prediction quality of all the PLS-1 models studied. It is possible to improve discrimination between MDE and MDMA with the help of a second model based on raw spectra. Alternative strategies are briefly discussed.
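
    A minimal sketch of a PLS-1 style calibration, assuming scikit-learn's PLSRegression, a 0/1 response for one target class, and random stand-in spectra; the actual models were built from mathematically pretreated NIR reflectance spectra, with one model per substance class.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Stand-in data: 605 calibration "spectra" with 700 wavelength points and a
# 0/1 response coding "contains MDMA"; real work would use measured spectra.
rng = np.random.default_rng(0)
X_cal = rng.normal(size=(605, 700))
y_cal = rng.integers(0, 2, size=605).astype(float)

pls = PLSRegression(n_components=10)
pls.fit(X_cal, y_cal)

X_new = rng.normal(size=(20, 700))           # e.g. spectra of pulverised tablets
y_hat = pls.predict(X_new).ravel()
print((y_hat > 0.5).astype(int))             # threshold the continuous PLS-1 output
```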

  13. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...

  14. Anticipatory vigilance: A grounded theory study of minimising risk within the perioperative setting.

    Science.gov (United States)

    O'Brien, Brid; Andrews, Tom; Savage, Eileen

    2018-01-01

    To explore and explain how nurses minimise risk in the perioperative setting. Perioperative nurses care for patients who are having surgery or other invasive explorative procedures. Perioperative care is increasingly focused on how to improve patient safety. Safety and risk management is a global priority for health services in reducing risk. Many studies have explored safety within healthcare settings. However, little is known about how nurses minimise risk in the perioperative setting. Classic grounded theory. Ethical approval was granted for all aspects of the study. Thirty-seven nurses working in 11 different perioperative settings in Ireland were interviewed and 33 hr of nonparticipant observation was undertaken. Concurrent data collection and analysis was undertaken using theoretical sampling. The constant comparative method, coding and memoing were used to analyse the data. Participants' main concern was how to minimise risk. Participants resolved this through engaging in anticipatory vigilance (core category). This strategy consisted of orchestrating, routinising and momentary adapting. Understanding the strategies of anticipatory vigilance extends and provides an in-depth explanation of how nurses' behaviour ensures that risk is minimised in a complex high-risk perioperative setting. This is the first theory situated in the perioperative area for nurses. This theory provides a guide and understanding for nurses working in the perioperative setting on how to minimise risk. It makes perioperative nursing visible, enabling positive patient outcomes. This research suggests the need for training and education in maintaining safety and minimising risk in the perioperative setting. © 2017 John Wiley & Sons Ltd.

  15. Demonstrating the efficacy of the FoneAstra pasteurization monitor for human milk pasteurization in resource-limited settings.

    Science.gov (United States)

    Naicker, Mageshree; Coutsoudis, Anna; Israel-Ballard, Kiersten; Chaudhri, Rohit; Perin, Noah; Mlisana, Koleka

    2015-03-01

    Human milk provides crucial nutrition and immunologic protection for infants. When a mother's own milk is unavailable, donated human milk, pasteurized to destroy bacteria and viruses, is a lifesaving replacement. Flash-heat pasteurization is a simple, low-cost, and commonly used method to make milk safe, but currently there is no system to monitor milk temperature, which challenges quality control. FoneAstra, a smartphone-based mobile pasteurization monitor, removes this barrier by guiding users through pasteurization and documenting consistent and safe practice. This study evaluated FoneAstra's efficacy as a quality control system, particularly in resource-limited settings, by comparing bacterial growth in donor milk flash-heated with and without the device at a neonatal intensive care unit in Durban, South Africa. For 100 samples of donor milk, one aliquot each of prepasteurized milk, milk flash-heated without FoneAstra, and milk pasteurized with FoneAstra was cultured on routine agar for bacterial growth. Isolated bacteria were identified and enumerated. In total, 300 samples (three from each donor sample) were analyzed. Bacterial growth was found in 86 of the 100 samples before any pasteurization and one of the 100 postpasteurized samples without FoneAstra. None of the samples pasteurized using FoneAstra showed bacterial growth. Both pasteurization methods were safe and effective. FoneAstra, however, provides the additional benefits of user-guided temperature monitoring and data tracking. By improving quality assurance and standardizing the pasteurization process, FoneAstra can support wide-scale implementation of human milk banks in resource-limited settings, increasing access and saving lives.

  16. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    Science.gov (United States)

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K(+)), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, poststorage at six hours at 21 °C and six days at 2-8°C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time, and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred in the reaction vessels, causing an increase in the K(+), LD and Pi above outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10(9)/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K(+), LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories to balance the need to provide the most accurate results in the best turnaround time.

  17. Using lot quality assurance sampling to assess access to water, sanitation and hygiene services in a refugee camp setting in South Sudan: a feasibility study.

    Science.gov (United States)

    Harding, Elizabeth; Beckworth, Colin; Fesselet, Jean-Francois; Lenglet, Annick; Lako, Richard; Valadez, Joseph J

    2017-08-08

    Humanitarian agencies working in refugee camp settings require rapid assessment methods to measure the needs of the populations they serve. Due to the high level of dependency of refugees, agencies need to carry out these assessments. Lot Quality Assurance Sampling (LQAS) is a method commonly used in development settings to assess populations living in a project catchment area to identify their greatest needs. LQAS could be well suited to serve the needs of refugee populations, but it has rarely been used in humanitarian settings. We adapted and implemented an LQAS survey design in Batil refugee camp, South Sudan in May 2013 to measure the added value of using it for sub-camp level assessment. Using pre-existing divisions within the camp, we divided the Batil catchment area into six contiguous segments, called 'supervision areas' (SA). Six teams of two data collectors randomly selected 19 respondents in each SA, who they interviewed to collect information on water, sanitation, hygiene, and diarrhoea prevalence. These findings were aggregated into a stratified random sample of 114 respondents, and the results were analysed to produce a coverage estimate with 95% confidence interval for the camp and to prioritize SAs within the camp. The survey provided coverage estimates on WASH indicators as well as evidence that areas of the camp closer to the main road, to clinics and to the market were better served than areas at the periphery of the camp. This assumption did not hold for all services, however, as sanitation services were uniformly high regardless of location. While it was necessary to adapt the standard LQAS protocol used in low-resource communities, the LQAS model proved to be feasible in a refugee camp setting, and program managers found the results useful at both the catchment area and SA level. This study, one of the few adaptations of LQAS for a camp setting, shows that it is a feasible method for regular monitoring, with the added value of enabling camp
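
    For readers unfamiliar with LQAS, the sketch below constructs the classic binomial decision rule for a 19-respondent sample: it searches for a threshold d such that both misclassification risks stay below 10%. The coverage benchmarks (80% target, 50% lower threshold) are illustrative assumptions; the study itself used the SA-level results mainly for prioritisation and pooled them into a stratified sample for coverage estimation.

```python
from scipy.stats import binom

def lqas_decision_rule(n, upper, lower, max_risk=0.10):
    """Find a decision threshold d for an LQAS sample of size n.

    An area is classified as reaching the `upper` coverage benchmark when at
    least d of the n respondents report the service.  d is acceptable when
    both misclassification risks stay below max_risk:
      alpha: an area truly at `upper` coverage is classified as failing,
      beta : an area truly at `lower` coverage is classified as passing.
    """
    for d in range(n + 1):
        alpha = binom.cdf(d - 1, n, upper)   # P(fewer than d successes | p = upper)
        beta = binom.sf(d - 1, n, lower)     # P(at least d successes  | p = lower)
        if alpha <= max_risk and beta <= max_risk:
            return d, round(alpha, 3), round(beta, 3)
    return None

# the classic 19-respondent LQAS sample per supervision area
print(lqas_decision_rule(n=19, upper=0.80, lower=0.50))
```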

  18. Using lot quality assurance sampling to assess access to water, sanitation and hygiene services in a refugee camp setting in South Sudan: a feasibility study

    Directory of Open Access Journals (Sweden)

    Elizabeth Harding

    2017-08-01

    Full Text Available Abstract Background Humanitarian agencies working in refugee camp settings require rapid assessment methods to measure the needs of the populations they serve. Due to the high level of dependency of refugees, agencies need to carry out these assessments. Lot Quality Assurance Sampling (LQAS) is a method commonly used in development settings to assess populations living in a project catchment area to identify their greatest needs. LQAS could be well suited to serve the needs of refugee populations, but it has rarely been used in humanitarian settings. We adapted and implemented an LQAS survey design in Batil refugee camp, South Sudan in May 2013 to measure the added value of using it for sub-camp level assessment. Methods Using pre-existing divisions within the camp, we divided the Batil catchment area into six contiguous segments, called ‘supervision areas’ (SAs). Six teams of two data collectors randomly selected 19 respondents in each SA, who they interviewed to collect information on water, sanitation, hygiene, and diarrhoea prevalence. These findings were aggregated into a stratified random sample of 114 respondents, and the results were analysed to produce a coverage estimate with a 95% confidence interval for the camp and to prioritize SAs within the camp. Results The survey provided coverage estimates on WASH indicators as well as evidence that areas of the camp closer to the main road, to clinics and to the market were better served than areas at the periphery of the camp. This assumption did not hold for all services, however, as sanitation services were uniformly high regardless of location. While it was necessary to adapt the standard LQAS protocol used in low-resource communities, the LQAS model proved to be feasible in a refugee camp setting, and program managers found the results useful at both the catchment area and SA level. Conclusions This study, one of the few adaptations of LQAS for a camp setting, shows that it is a feasible

  19. Social Set Visualizer

    DEFF Research Database (Denmark)

    Flesch, Benjamin; Hussain, Abid; Vatrapu, Ravi

    2015-01-01

    This paper presents a state-of-the-art visual analytics dashboard, Social Set Visualizer (SoSeVi), of approximately 90 million Facebook actions from 11 different companies that have been mentioned in the traditional media in relation to garment factory accidents in Bangladesh. The enterprise … cutting-edge open source visual analytics libraries from D3.js and creation of new visualizations (actor mobility across time, conversational comets etc). Evaluation of the dashboard, consisting of technical testing, usability testing, and domain-specific testing with CSR students, yielded positive results.

  20. Spatial part-set cuing facilitation.

    Science.gov (United States)

    Kelley, Matthew R; Parasiuk, Yuri; Salgado-Benz, Jennifer; Crocco, Megan

    2016-07-01

    Cole, Reysen, and Kelley [2013. Part-set cuing facilitation for spatial information. Journal of Experimental Psychology: Learning, Memory, & Cognition, 39, 1615-1620] reported robust part-set cuing facilitation for spatial information using snap circuits (a colour-coded electronics kit designed for children to create rudimentary circuit boards). In contrast, Drinkwater, Dagnall, and Parker [2006. Effects of part-set cuing on experienced and novice chess players' reconstruction of a typical chess midgame position. Perceptual and Motor Skills, 102(3), 645-653] and Watkins, Schwartz, and Lane [1984. Does part-set cuing test for memory organization? Evidence from reconstructions of chess positions. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 38(3), 498-503] showed no influence of part-set cuing for spatial information when using chess boards. One key difference between the two procedures was that the snap circuit stimuli were explicitly connected to one another, whereas chess pieces were not. Two experiments examined the effects of connection type (connected vs. unconnected) and cue type (cued vs. uncued) on memory for spatial information. Using chess boards (Experiment 1) and snap circuits (Experiment 2), part-set cuing facilitation only occurred when the stimuli were explicitly connected; there was no influence of cuing with unconnected stimuli. These results are potentially consistent with the retrieval strategy disruption hypothesis, as well as the two- and three-mechanism accounts of part-set cuing.

  1. Statistical sampling plans

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    In auditing and in inspection, one selects a number of items by some set of procedures and performs measurements which are compared with the operator's values. This session considers the problem of how to select the samples to be measured, and what kinds of measurements to make. In the inspection situation, the ultimate aim is to independently verify the operator's material balance. The effectiveness of the sample plan in achieving this objective is briefly considered. The discussion focuses on the model plant

  2. Relative amplitude preservation processing utilizing surface consistent amplitude correction. Part 3; Surface consistent amplitude correction wo mochiita sotai shinpuku hozon shori. 3

    Energy Technology Data Exchange (ETDEWEB)

    Saeki, T [Japan National Oil Corporation, Tokyo (Japan). Technology Research Center

    1996-10-01

    In the seismic reflection method conducted on the ground surface, the source (generator) and geophones are placed on the surface. The observed waveforms are therefore affected by the ground surface and the surface layer, and this influence must be removed before the physical properties of the deep subsurface can be discussed. In surface-consistent amplitude correction, the contributions of the source and geophone are removed by assuming that the observed waveforms can be expressed as convolutions. This correction method yields records that are not affected by surface conditions. Wavelet transforms were examined for the analysis and correction of the waveforms. Using the amplitude patterns after correction, the significant signal region, the noise-dominant region, and the surface-wave-dominant region can be separated from each other. Since the corrected amplitude values in the significant signal region show only small variation, a representative value can be assigned; this can be used for analyzing the surface-consistent amplitude correction. The efficiency of the process can be enhanced by considering the change of frequency. 3 refs., 5 figs.
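
    The core of surface-consistent amplitude correction can be illustrated with a two-term log-linear decomposition: each trace amplitude is modelled as the product of a source factor and a receiver factor, the logarithms are solved by least squares, and dividing the observed amplitudes by the estimated factors leaves the surface-independent part. The sketch below uses synthetic data and omits the offset/CMP terms and the wavelet-domain analysis discussed in the paper.

```python
import numpy as np

def surface_consistent_factors(amplitudes, src_idx, rcv_idx, n_src, n_rcv):
    """Estimate surface-consistent source and receiver amplitude factors.

    Simplified two-term model: log A_k = log s_src(k) + log r_rcv(k).
    The log-linear system is solved in a least-squares sense; dividing each
    trace amplitude by its source and receiver factors leaves the
    surface-independent (relative-amplitude-preserving) part.
    """
    n_traces = len(amplitudes)
    G = np.zeros((n_traces, n_src + n_rcv))
    G[np.arange(n_traces), np.asarray(src_idx)] = 1.0
    G[np.arange(n_traces), n_src + np.asarray(rcv_idx)] = 1.0
    logs, *_ = np.linalg.lstsq(G, np.log(amplitudes), rcond=None)
    return np.exp(logs[:n_src]), np.exp(logs[n_src:])

# synthetic survey: 15 shots x 20 receivers with known source/receiver factors
rng = np.random.default_rng(0)
true_s, true_r = rng.uniform(0.5, 2.0, 15), rng.uniform(0.5, 2.0, 20)
src, rcv = np.meshgrid(np.arange(15), np.arange(20), indexing="ij")
src, rcv = src.ravel(), rcv.ravel()
amps = true_s[src] * true_r[rcv] * rng.lognormal(0.0, 0.05, src.size)
s_est, r_est = surface_consistent_factors(amps, src, rcv, 15, 20)
corrected = amps / (s_est[src] * r_est[rcv])   # surface-consistent correction
print(np.corrcoef(s_est, true_s)[0, 1])        # close to 1 up to a global scale
```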

  3. Cell type specific DNA methylation in cord blood: A 450K-reference data set and cell count-based validation of estimated cell type composition.

    Science.gov (United States)

    Gervin, Kristina; Page, Christian Magnus; Aass, Hans Christian D; Jansen, Michelle A; Fjeldstad, Heidi Elisabeth; Andreassen, Bettina Kulle; Duijts, Liesbeth; van Meurs, Joyce B; van Zelm, Menno C; Jaddoe, Vincent W; Nordeng, Hedvig; Knudsen, Gunn Peggy; Magnus, Per; Nystad, Wenche; Staff, Anne Cathrine; Felix, Janine F; Lyle, Robert

    2016-09-01

    Epigenome-wide association studies of prenatal exposure to different environmental factors are becoming increasingly common. These studies are usually performed in umbilical cord blood. Since blood comprises multiple cell types with specific DNA methylation patterns, confounding caused by cellular heterogeneity is a major concern. This can be adjusted for using reference data consisting of DNA methylation signatures in cell types isolated from blood. However, the most commonly used reference data set is based on blood samples from adult males and is not representative of the cell type composition in neonatal cord blood. The aim of this study was to generate a reference data set from cord blood to enable correct adjustment of the cell type composition in samples collected at birth. The purity of the isolated cell types was very high for all samples (>97.1%), and clustering analyses showed distinct grouping of the cell types according to hematopoietic lineage. We explored whether this cord blood and the adult peripheral blood reference data sets impact the estimation of cell type composition in cord blood samples from an independent birth cohort (MoBa, n = 1092). This revealed significant differences for all cell types. Importantly, comparison of the cell type estimates against matched cell counts both in the cord blood reference samples (n = 11) and in another independent birth cohort (Generation R, n = 195), demonstrated moderate to high correlation of the data. This is the first cord blood reference data set with a comprehensive examination of the downstream application of the data through validation of estimated cell types against matched cell counts.
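    To make the downstream use of such a reference data set concrete, here is a minimal sketch of reference-based cell-type estimation as constrained regression of a sample's methylation profile onto reference profiles; this is a generic illustration (function name, non-negative least squares plus renormalisation), not the algorithm used in the study.

```python
# Minimal sketch: estimate cell-type proportions of a cord blood sample from a
# reference methylation matrix. Generic constrained regression, shown only to
# illustrate how a reference data set is applied.
import numpy as np
from scipy.optimize import nnls

def estimate_cell_fractions(sample_beta, reference_beta):
    """sample_beta: (n_cpg,) beta values; reference_beta: (n_cpg, n_celltypes)."""
    coefs, _ = nnls(reference_beta, sample_beta)   # non-negative least squares
    total = coefs.sum()
    return coefs / total if total > 0 else coefs   # normalise to proportions
```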

  4. Noise in NC-AFM measurements with significant tip–sample interaction

    Directory of Open Access Journals (Sweden)

    Jannis Lübbe

    2016-12-01

    The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip–sample interactions. The total noise power spectral density D_Δf(f_m) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip–sample interaction, by the coupling between the amplitude and tip–sample distance control loops of the NC-AFM system as well as by the characteristics of the phase locked loop (PLL) detector used for frequency demodulation. Here, we measure D_Δf(f_m) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip–sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops.
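    For orientation, the baseline contributions far from the surface (where the total is still approximately a sum) are often written with the standard textbook expressions sketched below; the symbols and formulas are assumptions of this sketch, not values or equations taken from the paper.

```python
# Minimal sketch: baseline NC-AFM frequency-shift noise densities far from the
# surface, where thermal and detection-system contributions simply add. With
# significant tip-sample interaction this additivity breaks down, which is the
# point of the study above. Standard textbook expressions are assumed here.
# Symbols: f0 resonance frequency [Hz], k stiffness [N/m], Q quality factor,
# A oscillation amplitude [m], nq deflection-sensor noise density [m/sqrt(Hz)].
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_asd(f0, k, Q, A, T=300.0):
    """Thermal frequency-noise amplitude spectral density (Hz/sqrt(Hz)), white in fm."""
    return np.sqrt(f0 * kB * T / (np.pi * k * A**2 * Q))

def detection_noise_asd(fm, nq, A):
    """Detection-system noise, rising linearly with modulation frequency fm."""
    return np.sqrt(2.0) * nq * fm / A

def baseline_total_psd(fm, f0, k, Q, A, nq, T=300.0):
    return thermal_noise_asd(f0, k, Q, A, T)**2 + detection_noise_asd(fm, nq, A)**2
```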

  5. Learning to reason from samples

    NARCIS (Netherlands)

    Ben-Zvi, Dani; Bakker, Arthur; Makar, Katie

    2015-01-01

    The goal of this article is to introduce the topic of learning to reason from samples, which is the focus of this special issue of Educational Studies in Mathematics on statistical reasoning. Samples are data sets, taken from some wider universe (e.g., a population or a process) using a particular

  6. Surface Sampling Collection and Culture Methods for Escherichia coli in Household Environments with High Fecal Contamination.

    Science.gov (United States)

    Exum, Natalie G; Kosek, Margaret N; Davis, Meghan F; Schwab, Kellogg J

    2017-08-22

    Empiric quantification of environmental fecal contamination is an important step toward understanding the impact that water, sanitation, and hygiene interventions have on reducing enteric infections. There is a need to standardize the methods used for surface sampling in field studies that examine fecal contamination in low-income settings. The dry cloth method presented in this manuscript improves upon the more commonly used swabbing technique that has been shown in the literature to have a low sampling efficiency. The recovery efficiency of a dry electrostatic cloth sampling method was evaluated using Escherichia coli and then applied to household surfaces in Iquitos, Peru, where there is high fecal contamination and enteric infection. Side-by-side measurements were taken from various floor locations within a household at the same time over a three-month period to compare for consistency of quantification of E. coli bacteria. The dry cloth sampling method in the laboratory setting showed 105% (95% Confidence Interval: 98%, 113%) E. coli recovery efficiency off of the cloths. The field application demonstrated strong agreement of side-by-side results (Pearson correlation coefficient 0.83 for dirt surfaces and 0.53 for the other surface samples). The method can be utilized in households with high bacterial loads using either continuous (quantitative) or categorical (semi-quantitative) data. The standardization of this low-cost, dry electrostatic cloth sampling method can be used to measure differences between households in intervention and non-intervention arms of randomized trials.

  7. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    Science.gov (United States)

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with incomplete taxon coverage, and in most of these the gene sampling was too sparse to reduce the terrace to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
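    The taxon coverage density used above has a direct computational form, sketched below for a boolean taxon-by-gene presence matrix; the function name and matrix layout are illustrative.

```python
# Minimal sketch: taxon coverage density, the proportion of taxon-by-gene cells
# with any sequence data present (as defined in the abstract above).
import numpy as np

def coverage_density(presence):
    """presence: boolean (n_taxa, n_genes) matrix, True where data exist."""
    presence = np.asarray(presence, dtype=bool)
    return presence.sum() / presence.size

# Example: a 3 taxa x 4 genes matrix with 9 filled cells gives a density of 0.75.
```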

  8. Factorial Validity and Internal Consistency of Malaysian Adapted Depression Anxiety Stress Scale - 21 in an Adolescent Sample

    OpenAIRE

    Hairul Anuar Hashim; Freddy Golok; Rosmatunisah Ali

    2011-01-01

    Background: A psychometrically sound measurement instrument is a fundamental requirement across a broad range of research areas. In negative affect research, the Depression Anxiety Stress Scale (DASS) has been identified as a psychometrically sound instrument to measure depression, anxiety and stress, especially the 21-item version. However, its psychometric properties in adolescents have been less consistent. Objectives: Thus, the present study sought to examine the factorial validity and internal consistency of the Malaysian adapted DASS-21 in an adolescent sample.
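    Internal consistency of a multi-item scale such as the DASS-21 is conventionally summarised with Cronbach's alpha; a minimal sketch of that computation follows (the function name and array layout are illustrative, and this is not code from the study).

```python
# Minimal sketch: Cronbach's alpha for a scale with k items scored per respondent.
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, k_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```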

  9. Thriving rough sets 10th anniversary : honoring professor Zdzisław Pawlak's life and legacy & 35 years of rough sets

    CERN Document Server

    Skowron, Andrzej; Yao, Yiyu; Ślęzak, Dominik; Polkowski, Lech

    2017-01-01

    This special book is dedicated to the memory of Professor Zdzisław Pawlak, the father of rough set theory, in order to commemorate both the 10th anniversary of his passing and 35 years of rough set theory. The book consists of 20 chapters distributed into four sections, which focus in turn on a historical review of Professor Zdzisław Pawlak and rough set theory; a review of the theory of rough sets; the state of the art of rough set theory; and major developments in rough set based data mining approaches. Apart from Professor Pawlak’s contributions to rough set theory, other areas he was interested in are also included. Moreover, recent theoretical studies and advances in applications are also presented. The book will offer a useful guide for researchers in Knowledge Engineering and Data Mining by suggesting new approaches to solving the problems they encounter.

  10. Sample Selection for Training Cascade Detectors

    OpenAIRE

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our a...

  11. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set theoretic approaches for maturity model research and provides demonstrations of it application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  12. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two step alternating procedure of active set update rules and hyperparameter optimization based upon marginal...... high impact to the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria, the first one prefers a model with interpretable active set parameters whereas the second puts computational complexity first, thus a model...... with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state...

  13. Work setting, community attachment, and satisfaction among rural and remote nurses.

    Science.gov (United States)

    Kulig, Judith C; Stewart, Norma; Penz, Kelly; Forbes, Dorothy; Morgan, Debra; Emerson, Paige

    2009-01-01

    To describe community satisfaction and attachment among rural and remote registered nurses (RNs) in Canada. Cross-sectional survey of rural and remote RNs in Canada as part of a multimethod study. The sample consisted of a stratified random sample of RNs living in rural areas of the western country and the total population of RNs who worked in three northern regional areas and those in outpost settings. A subset of 3,331 rural and remote RNs who mainly worked in acute care, long-term care, community health, home care, and primary care comprised the sample. The home community satisfaction scale measured community satisfaction, whereas single-item questions measured work community satisfaction and overall job satisfaction. Community variables were compared across practice areas using analysis of variance, whereas a thematic analysis was conducted of the open-ended questions. Home care and community health RNs were significantly more satisfied with their work community than RNs from other practice areas. RNs who grew up in rural communities were more satisfied with their current home community. Four themes emerged from the open-ended responses that describe community satisfaction and community attachment. Recruitment and retention strategies need to include mechanisms that focus on community satisfaction, which will enhance job satisfaction.

  14. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference of the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets by using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
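    The margin defined above can be made concrete with a simple sketch in which the set-to-set distance is the minimum pairwise Euclidean distance; the paper's affine-hull modelling and EM/APG learning are not reproduced, so the functions below only illustrate the margin idea.

```python
# Minimal sketch: margin of a probe image set = distance to the nearest gallery
# set of a different class minus distance to the nearest set of the same class.
import numpy as np

def set_distance(X, Y):
    """Minimum pairwise Euclidean distance between image sets X (n, d) and Y (m, d)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return d.min()

def set_margin(probe, gallery_sets, gallery_labels, probe_label):
    same = min(set_distance(probe, g)
               for g, lab in zip(gallery_sets, gallery_labels) if lab == probe_label)
    diff = min(set_distance(probe, g)
               for g, lab in zip(gallery_sets, gallery_labels) if lab != probe_label)
    return diff - same   # larger margin means a more confident classification
```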

  16. No Consistent Evidence for Advancing or Delaying Trends in Spring Phenology on the Tibetan Plateau

    Science.gov (United States)

    Wang, Xufeng; Xiao, Jingfeng; Li, Xin; Cheng, Guodong; Ma, Mingguo; Che, Tao; Dai, Liyun; Wang, Shaoying; Wu, Jinkui

    2017-12-01

    Vegetation phenology is a sensitive indicator of climate change and has significant effects on the exchange of carbon, water, and energy between the terrestrial biosphere and the atmosphere. The Tibetan Plateau, the Earth's "third pole," is a unique region for studying the long-term trends in vegetation phenology in response to climate change because of the sensitivity of its alpine ecosystems to climate and its low-level human disturbance. There has been a debate whether the trends in spring phenology over the Tibetan Plateau have been continuously advancing over the last two to three decades. In this study, we examine the trends in the start of growing season (SOS) for alpine meadow and steppe using the Global Inventory Modeling and Mapping Studies (GIMMS)3g normalized difference vegetation index (NDVI) data set (1982-2014), the GIMMS NDVI data set (1982-2006), the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI data set (2001-2014), the Satellite Pour l'Observation de la Terre Vegetation (SPOT-VEG) NDVI data set (1999-2013), and the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) NDVI data set (1998-2007). Both logistic and polynomial fitting methods are used to retrieve the SOS dates from the NDVI data sets. Our results show that the trends in spring phenology over the Tibetan Plateau depend on both the NDVI data set used and the method for retrieving the SOS date. There are large discrepancies in the SOS trends among the different NDVI data sets and between the two different retrieval methods. There is no consistent evidence that spring phenology ("green-up" dates) has been advancing or delaying over the Tibetan Plateau during the last two to three decades. Ground-based budburst data also indicate no consistent trends in spring phenology. The responses of SOS to environmental factors (air temperature, precipitation, soil temperature, and snow depth) also vary among NDVI data sets and phenology retrieval methods. The increases in winter and spring
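    As an illustration of the "logistic fitting" retrieval mentioned above, the sketch below fits a logistic curve to a spring NDVI time series and takes its inflection point as the SOS date; the functional form, initial guesses and function names are assumptions, not the study's implementation.

```python
# Minimal sketch: retrieve the start of growing season (SOS) as the inflection
# point of a logistic curve fitted to a spring NDVI time series.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, base, amp, k, t0):
    return base + amp / (1.0 + np.exp(-k * (t - t0)))

def retrieve_sos(doy, ndvi):
    """doy: day-of-year values; ndvi: corresponding NDVI values; returns SOS in days."""
    doy = np.asarray(doy, dtype=float)
    ndvi = np.asarray(ndvi, dtype=float)
    p0 = [ndvi.min(), ndvi.max() - ndvi.min(), 0.1, float(np.median(doy))]
    params, _ = curve_fit(logistic, doy, ndvi, p0=p0, maxfev=10000)
    return params[3]   # t0, the inflection (green-up) date
```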

  17. Settings: In a Variety of Place. . .

    Science.gov (United States)

    Cairo, Peter; And Others

    This document consists of the fourth section of a book of readings on issues related to adult career development. The four chapters in this fourth section focus on settings in which adult career development counseling may take place. "Career Planning and Development in Organizations" (Peter Cairo) discusses several concepts and definitions…

  18. An optimization approach for extracting and encoding consistent maps in a shape collection

    KAUST Repository

    Huang, Qi-Xing

    2012-11-01

    We introduce a novel approach for computing high quality point-to-point maps among a collection of related shapes. The proposed approach takes as input a sparse set of imperfect initial maps between pairs of shapes and builds a compact data structure which implicitly encodes an improved set of maps between all pairs of shapes. These maps align well with point correspondences selected from initial maps; they map neighboring points to neighboring points; and they provide cycle-consistency, so that map compositions along cycles approximate the identity map. The proposed approach is motivated by the fact that a complete set of maps between all pairs of shapes that admits nearly perfect cycle-consistency are highly redundant and can be represented by compositions of maps through a single base shape. In general, multiple base shapes are needed to adequately cover a diverse collection. Our algorithm sequentially extracts such a small collection of base shapes and creates correspondences from each of these base shapes to all other shapes. These correspondences are found by global optimization on candidate correspondences obtained by diffusing initial maps. These are then used to create a compact graphical data structure from which globally optimal cycle-consistent maps can be extracted using simple graph algorithms. Experimental results on benchmark datasets show that the proposed approach yields significantly better results than state-of-the-art data-driven shape matching methods. © 2012 ACM.
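    Cycle-consistency, the key property above, is easy to state operationally: composing maps around a closed cycle of shapes should approximate the identity. The sketch below scores that property for maps stored as index arrays; it illustrates the criterion only, not the paper's optimization.

```python
# Minimal sketch: score cycle-consistency of point-to-point maps represented as
# arrays of target indices (map_ab[i] = index on shape B of point i on shape A).
import numpy as np

def cycle_consistency_score(maps_in_cycle, n_points):
    """maps_in_cycle: e.g. [A->B, B->C, C->A]; returns the fraction of fixed points."""
    current = np.arange(n_points)
    for m in maps_in_cycle:
        current = np.asarray(m)[current]          # compose with the next map
    return float(np.mean(current == np.arange(n_points)))
```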

  19. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of a SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values, for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, in cases where the inhomogeneity was treated as not known, the results showed a reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to the identification of the needs for future development of LSNAA facilities for analysis of inhomogeneous samples. (author)

  20. The youth sports club as a health-promoting setting: An integrative review of research

    Science.gov (United States)

    Quennerstedt, Mikael; Eriksson, Charli

    2013-01-01

    Aims: The aims of this review are to compile and identify key issues in international research about youth sports clubs as health-promoting settings, and then to discuss the results of the review in terms of a framework for the youth sports club as a health-promoting setting. Methods: The framework guiding this review of research is the health-promoting settings approach introduced by the World Health Organization (WHO). The method used is the integrated review. Inclusion criteria were, first, that the studies concerned sports clubs for young people, not professional clubs; second, that it be a question of voluntary participation in some sort of ongoing organized athletics outside of the regular school curricula; third, that the studies consider issues about youth sports clubs in terms of health-promoting settings as described by WHO. The final sample for the review consists of 44 publications. Results: The review shows that youth sports clubs have plentiful opportunities to be or become health-promoting settings; however, this is not something that happens automatically. To do so, the club needs to include an emphasis on certain important elements in its strategies and daily practices. The youth sports club needs to be a supportive and healthy environment with activities designed for and adapted to the specific age-group or stage of development of the youth. Conclusions: To become a health-promoting setting, a youth sports club needs to take a comprehensive approach to its activities, aims, and purposes. PMID:23349167

  1. Health Outcomes Survey - Limited Data Set

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Health Outcomes Survey (HOS) limited data sets (LDS) are comprised of the entire national sample for a given 2-year cohort (including both respondents...

  2. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for the analysis of large liquid samples. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector and with gamma attenuation factors calculated using MCNP-5. Both the relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
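    The normalisation step described above reduces to a simple ratio before calibration; a minimal sketch follows, with all symbol names illustrative and the calibration constant assumed to come from a standard of known cadmium content.

```python
# Minimal sketch of the proposed normalisation: prompt-gamma count rates are
# divided by the measured thermal-neutron flux and by an MCNP-calculated gamma
# attenuation factor, then scaled by a calibration constant.
def normalised_rate(count_rate, thermal_flux, attenuation_factor):
    return count_rate / (thermal_flux * attenuation_factor)

def concentration(count_rate, thermal_flux, attenuation_factor, calib_constant):
    """calib_constant is obtained from a standard of known analyte content."""
    return calib_constant * normalised_rate(count_rate, thermal_flux, attenuation_factor)
```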

  3. Prediction and Cross-Situational Consistency of Daily Behavior across Cultures: Testing Trait and Cultural Psychology Perspectives

    Science.gov (United States)

    Church, A. Timothy; Katigbak, Marcia S.; Reyes, Jose Alberto S.; Salanga, Maria Guadalupe C.; Miramontes, Lilia A.; Adams, Nerissa B.

    2008-01-01

    Trait and cultural psychology perspectives on the cross-situational consistency of behavior, and the predictive validity of traits, were tested in a daily process study in the United States (N = 68), an individualistic culture, and the Philippines (N = 80), a collectivistic culture. Participants completed the Revised NEO Personality Inventory (Costa & McCrae, 1992) and a measure of self-monitoring, then reported their daily behaviors and associated situational contexts for approximately 30 days. Consistent with trait perspectives, the Big Five traits predicted daily behaviors in both cultures, and relative (interindividual) consistency was observed across many, although not all, situational contexts. The frequency of various Big Five behaviors varied across relevant situational contexts in both cultures and, consistent with cultural psychology perspectives, there was a tendency for Filipinos to exhibit greater situational variability than Americans. Self-monitoring showed some ability to account for individual differences in situational variability in the American sample, but not the Filipino sample. PMID:22146866

  4. Feasibility, internal consistency and covariates of TICS-m (telephone interview for cognitive status-modified) in a population-based sample: findings from the KORA-Age study.

    Science.gov (United States)

    Lacruz, Me; Emeny, Rt; Bickel, H; Linkohr, B; Ladwig, Kh

    2013-09-01

    Test the feasibility of the modified telephone interview for cognitive status (TICS-m) as a screening tool to detect cognitive impairment in a population-based sample of older subjects. Data were collected from 3,578 participants, age 65-94 years, of the KORA-Age study. We used analysis of covariance to test for significant sex, age and educational differences in raw TICS-m scores. Internal consistency was analysed by assessing Cronbach's alpha. Correction for education years was undertaken, and participants were divided into three subgroups following validated cut-offs. Finally, a logistic regression was performed to determine the impact of sex on cognition subgroups. Internal consistency of the TICS-m was 0.78. Study participants needed approximately 5.4 min to complete the interview. Lower raw TICS-m scores were associated with male sex, older age and lower education. After correction for education years, 2,851 (79%) participants had a non-impaired cognitive status (score >31). Male sex was independently associated with having a score equal to or below 27 and 31 (OR = 1.9, 95% CI 1.4-2.5 and OR = 1.5, 95% CI 1.2-1.7, respectively). The TICS-m is a feasible questionnaire for community-dwelling older adults with normal cognitive function or moderate cognitive impairment. Lower cognitive performance was associated with being a man, being older, and having fewer years of formal education. Copyright © 2012 John Wiley & Sons, Ltd.

  5. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration......Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models...

  6. Occlusion-Aware Fragment-Based Tracking With Spatial-Temporal Consistency.

    Science.gov (United States)

    Sun, Chong; Wang, Dong; Lu, Huchuan

    2016-08-01

    In this paper, we present a robust tracking method by exploiting a fragment-based appearance model with consideration of both temporal continuity and discontinuity information. From the perspective of probability theory, the proposed tracking algorithm can be viewed as a two-stage optimization problem. In the first stage, by adopting the estimated occlusion state as a prior, the optimal state of the tracked object can be obtained by solving an optimization problem, where the objective function is designed based on the classification score, occlusion prior, and temporal continuity information. In the second stage, we propose a discriminative occlusion model, which exploits both foreground and background information to detect the possible occlusion, and also models the consistency of occlusion labels among different frames. In addition, a simple yet effective training strategy is introduced during the model training (and updating) process, with which the effects of spatial-temporal consistency are properly weighted. The proposed tracker is evaluated by using the recent benchmark data set, on which the results demonstrate that our tracker performs favorably against other state-of-the-art tracking algorithms.

  7. Measuring process and knowledge consistency

    DEFF Research Database (Denmark)

    Edwards, Kasper; Jensen, Klaes Ladeby; Haug, Anders

    2007-01-01

    When implementing configuration systems, knowledge about products and processes is documented and replicated in the configuration system. This practice assumes that products are specified consistently, i.e. on the same rule base, and likewise for processes. However, consistency cannot be taken...... for granted; rather the contrary, and attempting to implement a configuration system may easily ignite a political battle. This is because stakes are high in the sense that the rules and processes chosen may only reflect one part of the practice, ignoring a majority of the employees. To avoid this situation......, this paper presents a methodology for measuring product and process consistency prior to implementing a configuration system. The methodology consists of two parts: 1) measuring knowledge consistency and 2) measuring process consistency. Knowledge consistency is measured by developing a questionnaire

  8. Bitcoin Meets Strong Consistency

    OpenAIRE

    Decker, Christian; Seidel, Jochen; Wattenhofer, Roger

    2014-01-01

    The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state...

  9. The Enhancement of Consistency of Interpretation Skills on the Newton’s Laws Concept

    Directory of Open Access Journals (Sweden)

    Yudi Kurniawan

    2018-03-01

    Conceptual understanding is more important for students to attain than achievement scores alone, and interpretation skill is one aspect of conceptual understanding. The aim of this paper is to examine the consistency of students' interpretation skills and, at the same time, to determine the levels of improvement in those skills. These variables were studied using common-sense Interactive Lecture Demonstrations (ILD). The method of this research is pre-experimental research with a one-group pretest-posttest design, with the sample taken by cluster random sampling. The results show that 16% of the students had perfect consistency of interpretation skill, and that interpretation skill improved for 84% of the students, moving from not knowing to understanding the concept. This finding could be used by future researchers to study other aspects of conceptual understanding.

  10. Identifying Early Childhood Personality Dimensions Using the California Child Q-Set and Prospective Associations With Behavioral and Psychosocial Development.

    Science.gov (United States)

    Wilson, Sylia; Schalet, Benjamin D; Hicks, Brian M; Zucker, Robert A

    2013-08-01

    The present study used an empirical, "bottom-up" approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood.

  11. Seeing the System through the End Users' Eyes: Shadow Expert Technique for Evaluating the Consistency of a Learning Management System

    Science.gov (United States)

    Holzinger, Andreas; Stickel, Christian; Fassold, Markus; Ebner, Martin

    Interface consistency is an important basic concept in web design and has an effect on the performance and satisfaction of end users. Consistency also has significant effects on the learning performance of both expert and novice end users. Consequently, the evaluation of consistency within an e-learning system, and the ensuing eradication of irritating discrepancies in the user interface redesign, is a big issue. In this paper, we report on our experiences with the Shadow Expert Technique (SET) during the evaluation of the consistency of the user interface of a large university learning management system. The main objective of this new usability evaluation method is to understand the interaction processes of end users with a specific system interface. Two teams of usability experts worked independently from each other in order to maximize the objectivity of the results. The outcome of this SET method is a list of recommended changes to improve the user interaction processes, hence to facilitate high consistency.

  12. RNA-Seq reveals spliceosome and proteasome genes as most consistent transcripts in human cancer cells.

    Directory of Open Access Journals (Sweden)

    Tara Macrae

    Accurate quantification of gene expression by qRT-PCR relies on normalization against a consistently expressed control gene. However, control genes in common use often vary greatly between samples, especially in cancer. The advent of Next Generation Sequencing technology offers the possibility to better select control genes with the least cell to cell variability in steady state transcript levels. Here we analyze the transcriptomes of 55 leukemia samples to identify the most consistent genes. This list is enriched for components of the proteasome (ex. PSMA1) and spliceosome (ex. SF3B2), and also includes the translation initiation factor EIF4H and many heterogeneous nuclear ribonucleoprotein genes (ex. HNRNPL). We have validated the consistency of our new control genes in 1933 cancer and normal tissues using publicly available RNA-seq data, and their usefulness in qRT-PCR analysis is clearly demonstrated.
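    A simple way to rank candidate control genes by sample-to-sample variability is sketched below using the coefficient of variation; the actual consistency metric used in the study may differ, so this is only an illustration.

```python
# Minimal sketch: rank candidate qRT-PCR control genes by expression variability
# across samples (smaller coefficient of variation = more consistent).
import numpy as np

def rank_by_stability(expr, gene_names):
    """expr: (n_genes, n_samples) normalised expression matrix."""
    expr = np.asarray(expr, dtype=float)
    cv = expr.std(axis=1, ddof=1) / expr.mean(axis=1)
    order = np.argsort(cv)
    return [(gene_names[i], float(cv[i])) for i in order]
```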

  13. Object-Oriented Hierarchy Radiation Consistency for Different Temporal and Different Sensor Images

    Directory of Open Access Journals (Sweden)

    Nan Su

    2018-02-01

    In this paper, we propose a novel object-oriented hierarchy radiation consistency method for dense matching of different temporal and different sensor data in 3D reconstruction. For different temporal images, an illumination consistency method is proposed to address both illumination uniformity within a single image and relative illumination normalization between image pairs. In the relative illumination normalization step in particular, singular value equalization and a linear relationship of the invariant pixels are used for the initial global illumination normalization and for the object-oriented refined illumination normalization, respectively. For different sensor images, we propose a union group sparse method, which improves upon the original group sparse model. The different sensor images are set to a similar smoothness level using the same singular value threshold from the union group matrix. Our method comprehensively considers the factors that influence dense matching of different temporal and different sensor stereoscopic image pairs, simultaneously improving illumination consistency and smoothness consistency. The radiation consistency experiments verify the effectiveness and superiority of the proposed method in comparison with two other methods. Moreover, in the dense matching experiment on the mixed stereoscopic image pairs, our method shows clear advantages for objects in the urban area.

  14. The effectiveness of multi-component goal setting interventions for changing physical activity behaviour: a systematic review and meta-analysis.

    Science.gov (United States)

    McEwan, Desmond; Harden, Samantha M; Zumbo, Bruno D; Sylvester, Benjamin D; Kaulius, Megan; Ruissen, Geralyn R; Dowd, A Justine; Beauchamp, Mark R

    2016-01-01

    Drawing from goal setting theory (Latham & Locke, 1991; Locke & Latham, 2002; Locke et al., 1981), the purpose of this study was to conduct a systematic review and meta-analysis of multi-component goal setting interventions for changing physical activity (PA) behaviour. A literature search returned 41,038 potential articles. Included studies consisted of controlled experimental trials wherein participants in the intervention conditions set PA goals and their PA behaviour was compared to participants in a control group who did not set goals. A meta-analysis was ultimately carried out across 45 articles (comprising 52 interventions, 126 effect sizes, n = 5912) that met eligibility criteria using a random-effects model. Overall, a medium, positive effect of multi-component goal setting interventions on PA behaviour was found (Cohen's d (SE) = .552 (.06), 95% CI = .43-.67, Z = 9.03). Moderator analyses across 20 variables revealed several noteworthy results with regard to features of the study, sample characteristics, PA goal content, and additional goal-related behaviour change techniques. In conclusion, multi-component goal setting interventions represent an effective method of fostering PA across a diverse range of populations and settings. Implications for effective goal setting interventions are discussed.

  15. The opinion of Greek parents on the advantages and disadvantages of the outpatient pediatric oncology setting.

    Science.gov (United States)

    Matziou, Vasiliki; Servitzoglou, Marina; Vlahioti, Efrosini; Deli, Haralampia; Matziou, Theodora; Megapanou, Efstathia; Perdikaris, Pantelis

    2013-12-01

    The aim of this study was to assess parental opinions on the advantages and disadvantages of a pediatric oncology outpatient setting in comparison to the inpatient oncology ward. The sample of the study consisted of 104 parents whose children were diagnosed and treated for pediatric cancer. The survey took place at the Pediatric Oncology Wards, as well as their respective outpatient settings, of the two General Children's Hospitals in Athens, Greece from May 2010 to August 2010. According to the parents' view, the outpatient setting was preferable due to the maintenance of their daily routine (χ2 = 75.9, p = 0.000), the maintenance of family life (χ2 = 90.1, p = 0.000) and the young patients' participation in activities (χ2 = 25.6, p = 0.000). Moreover, young patients were happier, less anxious and less scared when they were attending the day clinic (χ2 = 25.9, p = 0.000). According to the parents' view, the outpatient setting has many advantages. The judgment of children and parents on the services offered by the Pediatric Oncology Unit overall, in both the inpatient and outpatient settings, can give the necessary feedback to improve the quality of the care provided. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. International Comprehensive Ocean Atmosphere Data Set (ICOADS) And NCEI Global Marine Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — International Comprehensive Ocean Atmosphere Data Set (ICOADS) consists of digital data set DSI-1173, archived at the National Center for Environmental Information...

  17. Logical Discrete Event Systems in a trace theory based setting

    NARCIS (Netherlands)

    Smedinga, R.

    1993-01-01

    Discrete event systems can be modelled using a triple consisting of some alphabet (representing the events that might occur), and two trace sets (sets of possible strings) denoting the possible behaviour and the completed tasks of the system. Using this definition we are able to formulate and solve

  18. 49 CFR 536.10 - Treatment of dual-fuel and alternative fuel vehicles-consistency with 49 CFR part 538.

    Science.gov (United States)

    2010-10-01

    ... vehicles-consistency with 49 CFR part 538. 536.10 Section 536.10 Transportation Other Regulations Relating... vehicles—consistency with 49 CFR part 538. (a) Statutory alternative fuel and dual-fuel vehicle fuel... economy in a particular compliance category by more than the limits set forth in 49 U.S.C. 32906(a), the...

  19. A social preference valuations set for EQ-5D health states in Flanders, Belgium.

    Science.gov (United States)

    Cleemput, Irina

    2010-04-01

    This study aimed at deriving a preference valuation set for EQ-5D health states from the general Flemish public in Belgium. A EuroQol valuation instrument with 16 health states to be valued on a visual analogue scale was sent to a random sample of 2,754 adults. The initial response rate was 35%. Eventually, 548 (20%) respondents provided useable valuations for modeling. Valuations for 245 health states were modeled using a random effects model. The selection of the model was based on two criteria: health state valuations must be consistent, and the difference with the directly observed valuations must be small. A model including a value decrement if any health dimension of the EQ-5D is on the worst level was selected to construct the social health state valuation set. A comparison with health state valuations from other countries showed similarities, especially with those from New Zealand. The use of a single preference valuation set across different health economic evaluations within a country is highly preferable to increase their usability for policy makers. This study contributes to the standardization of outcome measurement in economic evaluations in Belgium.

  20. Consistent robustness analysis (CRA) identifies biologically relevant properties of regulatory network models.

    Science.gov (United States)

    Saithong, Treenut; Painter, Kevin J; Millar, Andrew J

    2010-12-16

    A number of studies have previously demonstrated that "goodness of fit" is insufficient in reliably classifying the credibility of a biological model. Robustness and/or sensitivity analysis is commonly employed as a secondary method for evaluating the suitability of a particular model. The results of such analyses invariably depend on the particular parameter set tested, yet many parameter values for biological models are uncertain. Here, we propose a novel robustness analysis that aims to determine the "common robustness" of the model with multiple, biologically plausible parameter sets, rather than the local robustness for a particular parameter set. Our method is applied to two published models of the Arabidopsis circadian clock (the one-loop [1] and two-loop [2] models). The results reinforce current findings suggesting the greater reliability of the two-loop model and pinpoint the crucial role of TOC1 in the circadian network. Consistent Robustness Analysis can indicate both the relative plausibility of different models and also the critical components and processes controlling each model.

  1. MASS CALIBRATION AND COSMOLOGICAL ANALYSIS OF THE SPT-SZ GALAXY CLUSTER SAMPLE USING VELOCITY DISPERSION σ_v AND X-RAY Y_X MEASUREMENTS

    International Nuclear Information System (INIS)

    Bocquet, S.; Saro, A.; Mohr, J. J.; Bazin, G.; Chiu, I.; Desai, S.; Aird, K. A.; Ashby, M. L. N.; Bayliss, M.; Bautz, M.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Crawford, T. M.; Crites, A. T.; Brodwin, M.; Cho, H. M.; Clocchiatti, A.; De Haan, T.

    2015-01-01

    We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg^2 of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ∼16% higher masses. We use the full SPT_CL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPT_CL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (∼44% and ∼23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = –1.007 ± 0.065, demonstrating that the

  2. Goal Setting to Promote a Health Lifestyle.

    Science.gov (United States)

    Paxton, Raheem J; Taylor, Wendell C; Hudnall, Gina Evans; Christie, Juliette

    2012-01-01

    The purpose of this parallel-group study was to determine whether a feasibility study based on newsletters and telephone counseling would improve goal-setting constructs; physical activity (PA); and fruit and vegetable (F & V) intake in a sample of older adults. Forty-three older adults (M age = 70 years, >70% Asian, 54% female) living in Honolulu, Hawaii were recruited and randomly assigned to either a PA or F & V intake condition. All participants completed measures of PA, F & V intake, and goal setting mechanisms (i.e., specificity, difficulty, effort, commitment, and persistence) at baseline and 8 weeks. Paired t-tests were used to evaluate changes across time. We found that F & V participants significantly increased F & V intake and mean scores of goal specificity, effort, commitment, and persistence, whereas no comparable improvements in goal setting mechanisms were observed for participants in the PA condition. Overall, our results show that a short-term intervention using newsletters and motivational calls based on goal-setting theory was effective in improving F & V intake; however, more research is needed to determine whether these strategies are effective for improving PA among a multiethnic sample of older adults.

  3. A New Heteroskedastic Consistent Covariance Matrix Estimator using Deviance Measure

    Directory of Open Access Journals (Sweden)

    Nuzhat Aftab

    2016-06-01

    In this article we propose a new heteroskedastic consistent covariance matrix estimator, HC6, based on a deviance measure. We study the finite-sample behavior of the new estimator and compare it with other estimators of this kind, HC1, HC3 and HC4m, which are used in the presence of leverage observations. A simulation study is conducted to examine the effect of various levels of heteroskedasticity on the size and power of the quasi-t test with HC estimators. Results show that the test statistic based on our newly suggested estimator has a better asymptotic approximation and less size distortion than the other estimators for small sample sizes when a high level of heteroskedasticity is present in the data.
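    For readers unfamiliar with this family of estimators, the sketch below computes HC3, whose form is standard; the proposed deviance-based HC6 weighting is not reproduced here and the function name is illustrative.

```python
# Minimal sketch: HC3 heteroskedasticity-consistent covariance matrix for OLS,
# (X'X)^-1 X' diag(e_i^2 / (1 - h_i)^2) X (X'X)^-1, shown as a representative of
# the HC family discussed above (HC6 itself is not reproduced).
import numpy as np

def hc3_cov(X, residuals):
    """X: (n, p) design matrix; residuals: (n,) OLS residuals."""
    X = np.asarray(X, dtype=float)
    residuals = np.asarray(residuals, dtype=float)
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)     # leverage values h_i
    omega = (residuals / (1.0 - h))**2              # HC3 weights
    meat = X.T @ (omega[:, None] * X)
    return XtX_inv @ meat @ XtX_inv
```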

  4. Study of phosphorus determination in biological samples

    International Nuclear Information System (INIS)

    Oliveira, Rosangela Magda de.

    1994-01-01

    In this paper, phosphorus determination by neutron activation analysis in milk and bone samples was studied, employing both instrumental and radiochemical separation methods. The analysis with radiochemical separation consisted of the simultaneous irradiation of the samples and standards for 30 minutes, dissolution of the samples with addition of a hold-back carrier, precipitation of phosphorus with ammonium phosphomolybdate (A.M.P.), and counting of the phosphorus-32 with a Geiger-Mueller detector. The instrumental analysis consisted of the simultaneous irradiation of the samples and standards for 30 minutes, transfer of the samples into a counting planchet and measurement of the beta radiation emitted by phosphorus-32, after a suitable decay period. After the phosphorus analysis methods were established they were applied to both commercial milk and animal bone samples, and the data obtained by the instrumental and radiochemical separation methods for each sample were compared. In this work, it became possible to obtain analysis methods for phosphorus that can be applied independently of the sample quantity available, the phosphorus content of the samples, or the interferences that may be present in them. (author). 51 refs., 7 figs., 4 tabs

  5. Posterior consistency for Bayesian inverse problems through stability and regression results

    International Nuclear Information System (INIS)

    Vollmer, Sebastian J

    2013-01-01

    In the Bayesian approach, the a priori knowledge about the input of a mathematical model is described via a probability measure. The joint distribution of the unknown input and the data is then conditioned, using Bayes’ formula, giving rise to the posterior distribution on the unknown input. In this setting we prove posterior consistency for nonlinear inverse problems: a sequence of data is considered, with diminishing fluctuations around a single truth and it is then of interest to show that the resulting sequence of posterior measures arising from this sequence of data concentrates around the truth used to generate the data. Posterior consistency justifies the use of the Bayesian approach very much in the same way as error bounds and convergence results for regularization techniques do. As a guiding example, we consider the inverse problem of reconstructing the diffusion coefficient from noisy observations of the solution to an elliptic PDE in divergence form. This problem is approached by splitting the forward operator into the underlying continuum model and a simpler observation operator based on the output of the model. In general, these splittings allow us to conclude posterior consistency provided a deterministic stability result for the underlying inverse problem and a posterior consistency result for the Bayesian regression problem with the push-forward prior. Moreover, we prove posterior consistency for the Bayesian regression problem based on the regularity, the tail behaviour and the small ball probabilities of the prior. (paper)
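    The notion of posterior consistency described above can be written compactly; the display below is a hedged paraphrase in generic notation, not a formula quoted from the paper.

```latex
% Posterior consistency (sketch): as the data y_n concentrate around data
% generated by a fixed truth u^\dagger, the posterior mass outside any
% neighbourhood of u^\dagger vanishes.
\[
  \mu^{y_n}\!\left(\{\, u : \|u - u^{\dagger}\| \ge \varepsilon \,\}\right)
  \;\longrightarrow\; 0
  \qquad (n \to \infty), \quad \text{for every } \varepsilon > 0,
\]
% where $\mu^{y_n}$ is the posterior measure obtained from Bayes' formula with data $y_n$.
```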

  6. Remaining useful life prediction based on variation coefficient consistency test of a Wiener process

    Directory of Open Access Journals (Sweden)

    Juan LI

    2018-01-01

    High-cost equipment is often reused after maintenance, and whether the information from before the maintenance can be used for the Remaining Useful Life (RUL) prediction after the maintenance is directly determined by the consistency of the degradation pattern before and after the maintenance. Aiming at this problem, an RUL prediction method based on a consistency test of a Wiener process is proposed. Firstly, the parameters of the Wiener process estimated by Maximum Likelihood Estimation (MLE) are proved to be biased, and a modified unbiased estimation method is proposed and verified by derivation and simulations. Then, the h statistic is constructed according to the reciprocal of the variation coefficient of the Wiener process, and its sampling distribution is derived. Meanwhile, a universal method for the consistency test is proposed based on the sampling distribution theorem, which is verified by simulation data and classical crack degradation data. Finally, based on the consistency test of the degradation model, a weighted fusion RUL prediction method is presented for the fuel pump of an airplane, and the validity of the presented method is verified by accurate computation results on real data, which provides theoretical and practical guidance for engineers to predict the RUL of equipment after maintenance.
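    To make the estimation step concrete, the sketch below fits the drift and diffusion of a Wiener degradation process from observed increments and forms the reciprocal variation coefficient; the divisor n-1 stands in for the paper's bias correction, whose exact form (and the h statistic's sampling distribution) is not reproduced.

```python
# Minimal sketch: fit X(t) = mu*t + sigma*B(t) from increments and form the
# reciprocal of the variation coefficient used for the consistency test.
import numpy as np

def wiener_fit(times, x):
    """times, x: monotone observation times and degradation values."""
    dt = np.diff(np.asarray(times, dtype=float))
    dx = np.diff(np.asarray(x, dtype=float))
    mu_hat = dx.sum() / dt.sum()                     # drift estimate
    resid = dx - mu_hat * dt
    n = dx.size
    sigma2_hat = np.sum(resid**2 / dt) / (n - 1)     # n-1 instead of the MLE's n
    return mu_hat, float(np.sqrt(sigma2_hat))

def h_statistic(mu_hat, sigma_hat):
    """Reciprocal of the variation coefficient of the fitted Wiener process."""
    return mu_hat / sigma_hat
```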

  7. PhysarumSoft: An update based on rough set theory

    Science.gov (United States)

    Schumann, Andrew; Pancerz, Krzysztof

    2017-07-01

    PhysarumSoft is a software tool consisting of two modules developed for programming Physarum machines and simulating Physarum games, respectively. The paper briefly discusses what has been added since the last version released in 2015. New elements in both modules are based on rough set theory. Rough sets are used to model behaviour of Physarum machines and to describe strategy games.

  8. Counting SET-free sets

    OpenAIRE

    Harman, Nate

    2016-01-01

    We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.

  9. Overcoming Barriers in Unhealthy Settings

    Directory of Open Access Journals (Sweden)

    Michael K. Lemke

    2016-03-01

    We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes.

  10. Sampling and sample preparation methods for the analysis of trace elements in biological material

    International Nuclear Information System (INIS)

    Sansoni, B.; Iyengar, V.

    1978-05-01

    The authors attempt to give as systematic a treatment as possible of sample taking and sample preparation of biological material (particularly in human medicine) for trace analysis (e.g. neutron activation analysis, atomic absorption spectrometry). Contamination and loss problems are discussed, as are the manifold problems arising from the different consistency of solid and liquid biological materials and the stabilization of the sample material. The processes of dry and wet ashing are dealt with in particular, and new methods are also described. (RB) [de

  11. Digitally programmable microfluidic automaton for multiscale combinatorial mixing and sample processing†

    Science.gov (United States)

    Jensen, Erik C.; Stockton, Amanda M.; Chiesl, Thomas N.; Kim, Jungkyu; Bera, Abhisek; Mathies, Richard A.

    2013-01-01

    A digitally programmable microfluidic Automaton consisting of a 2-dimensional array of pneumatically actuated microvalves is programmed to perform new multiscale mixing and sample processing operations. Large (µL-scale) volume processing operations are enabled by precise metering of multiple reagents within individual nL-scale valves followed by serial repetitive transfer to programmed locations in the array. A novel process exploiting new combining valve concepts is developed for continuous rapid and complete mixing of reagents in less than 800 ms. Mixing, transfer, storage, and rinsing operations are implemented combinatorially to achieve complex assay automation protocols. The practical utility of this technology is demonstrated by performing automated serial dilution for quantitative analysis as well as the first demonstration of on-chip fluorescent derivatization of biomarker targets (carboxylic acids) for microchip capillary electrophoresis on the Mars Organic Analyzer. A language is developed to describe how unit operations are combined to form a microfluidic program. Finally, this technology is used to develop a novel microfluidic 6-sample processor for combinatorial mixing of large sets (>26 unique combinations) of reagents. The digitally programmable microfluidic Automaton is a versatile programmable sample processor for a wide range of process volumes, for multiple samples, and for different types of analyses. PMID:23172232
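
    The record mentions a language for composing unit operations (metering, mixing, transfer, rinsing) into a microfluidic program. The sketch below shows how such a program might be expressed and traced in software; the operation names, valve addresses and the serial-dilution sequence are hypothetical illustrations, not the instrument's real instruction set.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical instruction set for a valve-array automaton; names and addresses are illustrative.
@dataclass
class Op:
    kind: str                 # "load", "mix", "transfer" or "rinse"
    cell: Tuple[int, int]     # (row, col) valve address acted on
    reagent: str = ""

def run_program(program: List[Op]):
    """Replay a program; a real controller would drive pneumatic valves, here we only trace."""
    for step, op in enumerate(program, 1):
        print(f"step {step}: {op.kind:8s} at {op.cell} {op.reagent}")

serial_dilution = [
    Op("load", (0, 0), "sample"),
    Op("load", (0, 1), "buffer"),
    Op("mix", (0, 1)),           # combine neighbouring nL-scale valve contents
    Op("transfer", (0, 2)),      # carry an aliquot to the next dilution stage
    Op("rinse", (0, 1)),
]
run_program(serial_dilution)
```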

  12. Optimising Mycobacterium tuberculosis detection in resource limited settings.

    Science.gov (United States)

    Alfred, Nwofor; Lovette, Lawson; Aliyu, Gambo; Olusegun, Obasanya; Meshak, Panwal; Jilang, Tunkat; Iwakun, Mosunmola; Nnamdi, Emenyonu; Olubunmi, Onuoha; Dakum, Patrick; Abimiku, Alash'le

    2014-03-03

    Light-emitting diode (LED) fluorescence microscopy has made acid-fast bacilli (AFB) detection faster and more efficient, although its optimal performance in resource-limited settings is still being studied. We assessed the optimal performances of light and fluorescence microscopy in routine conditions of a resource-limited setting and evaluated the digestion time for sputum samples for maximum yield of positive cultures. Cross-sectional study. Facility-based study involving samples from routine patients receiving tuberculosis treatment and care from the main tuberculosis case referral centre in northern Nigeria. The study included 450 sputum samples from 150 new patients with clinical diagnosis of pulmonary tuberculosis. The 450 samples were pooled into 150 specimens, examined independently with mercury vapour lamp (FM), LED CysCope (CY) and Primo Star iLED (PiLED) fluorescence microscopies, and with Ziehl-Neelsen (ZN) microscopy to assess the performance of each technique compared with liquid culture. The cultured specimens were decontaminated with BD Mycoprep (4% NaOH-1% NLAC and 2.9% sodium citrate) for 10, 15 and 20 min before incubation in the Mycobacterium growth indicator tube (MGIT) system, and growth was examined for AFB. Of the 150 specimens examined by direct microscopy: 44 (29%), 60 (40%), 49 (33%) and 64 (43%) were AFB positive by ZN, FM, CY and iLED microscopy, respectively. Digestion of sputum samples for 10, 15 and 20 min yielded mycobacterial growth in 72 (48%), 81 (54%) and 68 (45%) of the digested samples, respectively, after incubation in the MGIT system. In routine laboratory conditions of a resource-limited setting, our study has demonstrated the superiority of fluorescence microscopy over the conventional ZN technique. Digestion of sputum samples for 15 min yielded more positive cultures.

  13. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
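
    To make the two schemes concrete, the sketch below indexes a toy sequence with fixed sampling (every s-th k-mer position) and with minimizer sampling (the smallest k-mer in each window of w consecutive k-mers). Parameters and the lexicographic ordering are illustrative; real tools typically use hashed orderings and much larger k and w.

```python
def fixed_sampling(seq, k, s):
    """Index every s-th k-mer position: 0, s, 2s, ... (fixed sampling)."""
    return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, s)}

def minimizer_sampling(seq, k, w):
    """Index the lexicographically smallest k-mer (minimizer) in every window of w k-mers."""
    picked = {}
    for start in range(len(seq) - k - w + 2):
        window = [(seq[i:i + k], i) for i in range(start, start + w)]
        kmer, pos = min(window)            # ties broken by leftmost position
        picked[pos] = kmer                 # consecutive windows often share a minimizer
    return picked

seq = "ACGTACGTTGCAACGT"
print(sorted(fixed_sampling(seq, k=4, s=3)))      # sampled positions, evenly spaced
print(sorted(minimizer_sampling(seq, k=4, w=3)))  # sampled positions, data-dependent
```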

  14. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  15. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  16. Existence and uniqueness of consistent conjectural variation equilibrium in electricity markets

    International Nuclear Information System (INIS)

    Liu, Youfei; Cai, Bin; Ni, Y.X.; Wu, Felix F.

    2007-01-01

    Game-theoretic methods are widely applied to analyze market equilibrium and to study strategic behavior in oligopolistic electricity markets. Recently, the conjectural variation approach, one of the well-studied methods in game theory, has been used to model strategic behavior in deregulated electricity markets. Unfortunately, conjectural variation models have been criticized for logical inconsistency and for the possibility of multiple equilibria. Addressing this, the paper investigates the existence and uniqueness of the consistent conjectural variation equilibrium in electricity markets. Using several structural characteristics of the electricity market and an infinite-horizon optimization model, it is shown that the consistent conjectural variations satisfy a set of coupled nonlinear equations and that there is only one equilibrium. This result provides a foundation for further applications of the conjectural variation approach. (author)

  17. HBV infection in relation to consistent condom use: a population-based study in Peru.

    Science.gov (United States)

    Bernabe-Ortiz, Antonio; Carcamo, Cesar P; Scott, John D; Hughes, James P; Garcia, Patricia J; Holmes, King K

    2011-01-01

    Data on hepatitis B virus (HBV) prevalence are limited in developing countries. There is also limited information on the efficacy of consistent condom use for reducing HBV transmission at the population level. The study goal was to evaluate the prevalence and factors associated with HBV infection in Peru, and the relationship between anti-HBc positivity and consistent condom use. Data from two different surveys performed in 28 mid-sized Peruvian cities were analyzed. Participants aged 18-29 years were selected using multistage cluster sampling. Information was collected through a validated two-part questionnaire. The first part (face-to-face) concerned demographic data, while the second part (self-administered using handheld computers) concerned sexual behavior. Hepatitis B core antibody (anti-HBc) was tested in 7,000 blood samples. Prevalences and associations were adjusted for sample strata, primary sampling units and population weights. Anti-HBc prevalence was 5.0% (95%CI 4.1%-5.9%), with the highest prevalence among jungle cities: 16.3% (95%CI 13.8%-19.1%). In the multivariable analysis, anti-HBc positivity was directly associated with geographic region (highlands OR = 2.05; 95%CI 1.28-3.27, and jungle OR = 4.86; 95%CI 3.05-7.74; compared to coastal region); and inversely associated with age at sexual debut (OR = 0.90; 95%CI 0.85-0.97). Consistent condom use, evaluated in about 40% of participants, was associated with reduced prevalence (OR = 0.34; 95%CI 0.15-0.79) after adjusting for gender, geographic region, education level, lifetime number of sex partners, age at sexual debut and year of survey. Residence in highlands or jungle cities is associated with higher anti-HBc prevalences, whereas increasing age at sexual debut was associated with lower prevalences. Consistent condom use was associated with decreased risk of anti-HBc. Findings from this study emphasize the need for primary prevention programs (vaccination) especially in the jungle

  18. HBV infection in relation to consistent condom use: a population-based study in Peru.

    Directory of Open Access Journals (Sweden)

    Antonio Bernabe-Ortiz

    Full Text Available Data on hepatitis B virus (HBV) prevalence are limited in developing countries. There is also limited information on the efficacy of consistent condom use for reducing HBV transmission at the population level. The study goal was to evaluate the prevalence and factors associated with HBV infection in Peru, and the relationship between anti-HBc positivity and consistent condom use. Data from two different surveys performed in 28 mid-sized Peruvian cities were analyzed. Participants aged 18-29 years were selected using multistage cluster sampling. Information was collected through a validated two-part questionnaire. The first part (face-to-face) concerned demographic data, while the second part (self-administered using handheld computers) concerned sexual behavior. Hepatitis B core antibody (anti-HBc) was tested in 7,000 blood samples. Prevalences and associations were adjusted for sample strata, primary sampling units and population weights. Anti-HBc prevalence was 5.0% (95%CI 4.1%-5.9%), with the highest prevalence among jungle cities: 16.3% (95%CI 13.8%-19.1%). In the multivariable analysis, anti-HBc positivity was directly associated with geographic region (highlands OR = 2.05; 95%CI 1.28-3.27, and jungle OR = 4.86; 95%CI 3.05-7.74; compared to coastal region); and inversely associated with age at sexual debut (OR = 0.90; 95%CI 0.85-0.97). Consistent condom use, evaluated in about 40% of participants, was associated with reduced prevalence (OR = 0.34; 95%CI 0.15-0.79) after adjusting for gender, geographic region, education level, lifetime number of sex partners, age at sexual debut and year of survey. Residence in highlands or jungle cities is associated with higher anti-HBc prevalences, whereas increasing age at sexual debut was associated with lower prevalences. Consistent condom use was associated with decreased risk of anti-HBc. Findings from this study emphasize the need for primary prevention programs (vaccination), especially in the

  19. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
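
    The core idea can be illustrated in a few lines: instead of filtering intensities first and then applying the transfer function to the filtered value, keep a per-voxel pdf of the original intensities and apply the transfer function in expectation. The sketch below shows this on a single toy neighbourhood, with a histogram standing in for the paper's 4D Gaussian-mixture representation; the values and the threshold transfer function are illustrative only.

```python
import numpy as np

def transfer(v):
    # toy nonlinear transfer function mapping intensity to opacity
    return (v > 0.5).astype(float) if isinstance(v, np.ndarray) else float(v > 0.5)

# a 4x4 high-resolution neighbourhood collapsing to one low-resolution voxel
block = np.array([[0.1, 0.2, 0.9, 0.8],
                  [0.1, 0.1, 0.9, 0.9],
                  [0.2, 0.1, 0.8, 0.9],
                  [0.1, 0.2, 0.9, 0.8]])

# standard approach: low-pass filter (mean) first, then apply the transfer function
naive = transfer(block.mean())                       # -> 0.0 (mean 0.5 falls below threshold)

# pdf approach: keep a histogram (pdf) of the neighbourhood and take the expectation
hist, edges = np.histogram(block, bins=8, range=(0, 1))
pdf = hist / hist.sum()
centers = 0.5 * (edges[:-1] + edges[1:])
pdf_based = float((transfer(centers) * pdf).sum())   # -> 0.5, matching the full-resolution result

print(naive, pdf_based)
```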

  20. Consistently violating the non-Gaussian consistency relation

    International Nuclear Information System (INIS)

    Mooij, Sander; Palma, Gonzalo A.

    2015-01-01

    Non-attractor models of inflation are characterized by the super-horizon evolution of curvature perturbations, introducing a violation of the non-Gaussian consistency relation between the bispectrum's squeezed limit and the power spectrum's spectral index. In this work we show that the bispectrum's squeezed limit of non-attractor models continues to respect a relation dictated by the evolution of the background. We show how to derive this relation using only symmetry arguments, without ever needing to solve the equations of motion for the perturbations
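
    For orientation, the relation being violated is the standard single-field (Maldacena) consistency condition relating the squeezed bispectrum to the spectral index; in non-attractor (e.g. ultra-slow-roll) backgrounds the curvature perturbation evolves outside the horizon and the squeezed limit instead approaches an order-one value (5/2 in the simplest case). The formula below states only the standard attractor relation and is not reproduced from the paper.

```latex
% Standard single-field consistency relation (attractor case):
\lim_{k_3 \to 0} B_\zeta(k_1, k_2, k_3) = (1 - n_s)\, P_\zeta(k_1)\, P_\zeta(k_3)
\qquad \Longleftrightarrow \qquad
f_{NL}^{\mathrm{sq}} = \frac{5}{12}\,(1 - n_s)
```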

  1. Estimating True Short-Term Consistency in Vocational Interests: A Longitudinal SEM Approach

    Science.gov (United States)

    Gaudron, Jean-Philippe; Vautier, Stephane

    2007-01-01

    This study aimed at estimating the correlation between true scores (true consistency) of vocational interest over a short time span in a sample of 1089 adults. Participants were administered 54 items assessing vocational, family, and leisure interests twice over a 1-month period. Responses were analyzed with a multitrait (MT) model, which supposes…

  2. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

    The method was evaluated on a test set consisting of 8 corresponding data sets. The attained total-volume Dice coefficient and mean Hausdorff distance were 0.61 ± 0.20 and 15.6 ± 7.4 mm, respectively. The method has improvement potential which could be exploited in order to allow clinical introduction...
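
    The evaluation metrics quoted above are standard overlap and surface-distance measures. A minimal sketch of how such metrics are computed on binary masks is shown below, using toy data and a simple symmetric mean point-set distance; the paper's exact Hausdorff-type variant may differ.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(pts_a, pts_b):
    """Symmetric mean of directed point-set distances (a simple stand-in for the
    mean Hausdorff-type distance reported in segmentation papers)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

seg  = np.zeros((20, 20), bool); seg[5:15, 5:15] = True    # predicted parotid mask (toy)
gold = np.zeros((20, 20), bool); gold[6:16, 6:16] = True   # manual delineation (toy)
print(dice(seg, gold))                                     # 0.81 on this toy example
pa, pb = np.argwhere(seg).astype(float), np.argwhere(gold).astype(float)
print(mean_surface_distance(pa, pb))
```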

  3. Do Health Systems Have Consistent Performance Across Locations and Is Consistency Associated With Higher Performance?

    Science.gov (United States)

    Crespin, Daniel J; Christianson, Jon B; McCullough, Jeffrey S; Finch, Michael D

    This study addresses whether health systems have consistent diabetes care performance across their ambulatory clinics and whether increasing consistency is associated with improvements in clinic performance. Study data included 2007 to 2013 diabetes care intermediate outcome measures for 661 ambulatory clinics in Minnesota and bordering states. Health systems provided more consistent performance, as measured by the standard deviation of performance for clinics in a system, relative to propensity score-matched proxy systems created for comparison purposes. No evidence was found that improvements in consistency were associated with higher clinic performance. The combination of high performance and consistent care is likely to enhance a health system's brand reputation, allowing it to better mitigate the financial risks of consumers seeking care outside the organization. These results suggest that larger health systems are most likely to deliver the combination of consistent and high-performance care. Future research should explore the mechanisms that drive consistent care within health systems.

  4. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

    Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (Trademarked)... Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous... into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  5. Consistency of orthodox gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bellucci, S. [INFN, Frascati (Italy). Laboratori Nazionali di Frascati; Shiekh, A. [International Centre for Theoretical Physics, Trieste (Italy)

    1997-01-01

    A recent proposal for quantizing gravity is investigated for self-consistency. The existence of a fixed-point all-order solution is found, corresponding to a consistent quantum gravity. A criterion for unifying couplings is suggested by applying the argument to more complex systems.

  6. Collection and preparation of samples for gamma spectrometry

    International Nuclear Information System (INIS)

    Pan Jingquan

    1994-01-01

    The paper presents the basic principles of sample collection and preparation for gamma spectrometry: setting up a unified sampling program, methods and procedures, sample packing, transportation and storage, determination of sample quantity, sample pretreatment, and preparation of the samples to be analysed. The paper also briefly describes the main methods and particular issues of sampling and preparation for common environmental and biological samples, such as air, water, grass, soil and foods

  7. Chemometric classification of casework arson samples based on gasoline content.

    Science.gov (United States)

    Sinkov, Nikolai A; Sandercock, P Mark L; Harynuk, James J

    2014-02-01

    Detection and identification of ignitable liquids (ILs) in arson debris is a critical part of arson investigations. The challenge of this task is due to the complex and unpredictable chemical nature of arson debris, which also contains pyrolysis products from the fire. ILs, most commonly gasoline, are complex chemical mixtures containing hundreds of compounds that will be consumed or otherwise weathered by the fire to varying extents depending on factors such as temperature, air flow, the surface on which the IL was placed, etc. While methods such as ASTM E-1618 are effective, data interpretation can be a costly bottleneck in the analytical process for some laboratories. In this study, we address this issue through the application of chemometric tools. Prior to the application of chemometric tools such as PLS-DA and SIMCA, issues of chromatographic alignment and variable selection need to be addressed. Here we use an alignment strategy based on a ladder consisting of perdeuterated n-alkanes. Variable selection and model optimization were automated using a hybrid backward elimination (BE) and forward selection (FS) approach guided by the cluster resolution (CR) metric. In this work, we demonstrate the automated construction, optimization, and application of chemometric tools to casework arson data. The resulting PLS-DA and SIMCA classification models, trained with 165 training set samples, have provided classification of 55 validation set samples based on gasoline content with 100% specificity and sensitivity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
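
    PLS-DA is commonly implemented as PLS regression on a dummy-coded class variable followed by a decision threshold. The sketch below shows that pattern with scikit-learn on synthetic stand-in data; it is not the authors' pipeline, which also includes retention-time alignment against the deuterated-alkane ladder and CR-guided variable selection.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Toy stand-in for aligned chromatographic features: rows = debris extracts,
# columns = retention-aligned signal variables; label 1 = gasoline present, 0 = absent.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(40, 300))
y_train = np.r_[np.ones(20), np.zeros(20)]
X_train[:20, 50:60] += 2.0             # pretend these variables carry the gasoline signature

pls = PLSRegression(n_components=2).fit(X_train, y_train)

X_new = rng.normal(size=(5, 300))
X_new[:2, 50:60] += 2.0
scores = pls.predict(X_new).ravel()
labels = (scores > 0.5).astype(int)    # PLS-DA: threshold the continuous PLS prediction
print(labels)                          # expect [1, 1, 0, 0, 0] on this synthetic example
```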

  8. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  9. Local couplings, double insertions and the Weyl consistency condition

    International Nuclear Information System (INIS)

    Kraus, E.; Sibold, K.

    1992-05-01

    Within massless φ⁴ theory in four dimensions we set up the formalism needed when the coupling λ is permitted to become an external field, i.e. a function of space-time. In particular we have worked out the action of the corresponding Callan-Symanzik operator and of conformal transformations on the vertex functions, and furthermore how the Weyl transformations act on the theory with the energy-momentum tensor invariantly coupled. With the help of the Weyl consistency condition we have shown that in the limit of constant coupling the Weyl breaking can be written entirely in terms of differential operators, but that otherwise, for truly local coupling, new breaking terms survive. (orig.)

  10. A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator

    Directory of Open Access Journals (Sweden)

    Munir Ahmed

    2016-06-01

    Full Text Available In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance matrix estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduce a bias adjustment mechanism and give the modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism as proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use the adaptive HCCME rather than the conventional HCCME. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
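
    For reference, White's basic HC0 estimator and its simplest small-sample rescaling (HC1) are sketched below with NumPy. This is only the baseline estimator family the article starts from; the n/(n − k) factor shown here is not the Cribari-Neto-style bias adjustment or the adaptive estimator proposed in the article.

```python
import numpy as np

def ols_with_hc(X, y):
    """OLS coefficients with White's heteroscedasticity-consistent covariance (HC0)
    and the simple small-sample rescaling HC1 = n/(n-k) * HC0."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                                  # residuals
    meat = X.T @ (u[:, None] ** 2 * X)                # sum_i u_i^2 * x_i x_i'
    hc0 = XtX_inv @ meat @ XtX_inv
    hc1 = n / (n - k) * hc0                           # a basic small-sample correction
    return beta, hc0, hc1

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.2 + x, size=n)    # error variance grows with x
beta, hc0, hc1 = ols_with_hc(X, y)
print(beta, np.sqrt(np.diag(hc1)))                       # coefficients and robust standard errors
```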

  11. Consistency of policy instruments. How the EU could move to a -30% greenhouse gas reduction target

    Energy Technology Data Exchange (ETDEWEB)

    Hoehne, N.; Hagemann, M.; Moltmann, S.; Escalante, D. [Ecofys Germany, Berlin (Germany)

    2011-04-15

    The report provides options for how the EU could achieve a 30% reduction in greenhouse gas emissions from 1990 to 2020. The EU has agreed a set of goals (objectives) and related instruments for 2020. The most significant objectives are the 20% or 30% reduction of greenhouse gases (all emissions, both within and outside the Emissions Trading System), the 20% improvement in energy efficiency and the 20% share of renewable energy use by 2020. The stringency of the instruments used to reach these goals must be set carefully to ensure overall consistency. Particularly after the economic crisis of 2008/2009, the Climate and Energy Package has to be 'tuned' again to be fully consistent; e.g. fast take-up of renewable energy and lower emissions due to the recession may cause over-allocation in the EU ETS.

  12. Perceived impact on student engagement when learning middle school science in an outdoor setting

    Science.gov (United States)

    Abbatiello, James

    Human beings have an innate need to spend time outside, but in recent years children are spending less time outdoors. It is possible that this decline in time spent outdoors could have a negative impact on child development. Science teachers can combat the decline in the amount of time children spend outside by taking their science classes outdoors for regular classroom instruction. This study identified the potential impacts that learning in an outdoor setting might have on student engagement when learning middle school science. One sixth-grade middle school class participated in this case study, and students participated in outdoor intervention lessons where the instructional environment was a courtyard on the middle school campus. The outdoor lessons consisted of the same objectives and content as lessons delivered in an indoor setting during a middle school astronomy unit. Multiple sources of data were collected including questionnaires after each lesson, a focus group, student work samples, and researcher observations. The data was triangulated, and a vignette was written about the class' experiences learning in an outdoor setting. This study found that the feeling of autonomy and freedom gained by learning in an outdoor setting, and the novelty of the outdoor environment did increase student engagement for learning middle school science. In addition, as a result of this study, more work is needed to identify how peer to peer relationships are impacted by learning outdoors, how teachers could best utilize the outdoor setting for regular science instruction, and how learning in an outdoor setting might impact a feeling of stewardship for the environment in young adults.

  13. Design unbiased estimation in line intersect sampling using segmented transects

    Science.gov (United States)

    David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine; Harry T. Valentine

    2005-01-01

    In many applications of line intersect sampling, transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated, and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...

  14. Two methods of self-sampling compared to clinician sampling to detect reproductive tract infections in Gugulethu, South Africa

    NARCIS (Netherlands)

    van de Wijgert, Janneke; Altini, Lydia; Jones, Heidi; de Kock, Alana; Young, Taryn; Williamson, Anna-Lise; Hoosen, Anwar; Coetzee, Nicol

    2006-01-01

    To assess the validity, feasibility, and acceptability of 2 methods of self-sampling compared to clinician sampling during a speculum examination. To improve screening for reproductive tract infections (RTIs) in resource-poor settings. In a public clinic in Cape Town, 450 women underwent a speculum

  15. Neutron multicounter detector for investigation of content and spatial distribution of fission materials in large volume samples

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Starosta, W.; Zoltowski, T.

    1998-01-01

    The experimental device is a neutron coincidence well counter. It can be applied for the passive assay of fissile materials, especially plutonium-bearing ones. It consists of a set of ³He tubes placed inside a polyethylene moderator; outputs from the tubes, first processed by preamplifier/amplifier/discriminator circuits, are then analysed using a neutron correlator connected to a PC and correlation techniques implemented in software. Such a neutron counter allows determination of the plutonium mass (²⁴⁰Pu effective mass) in non-multiplying samples of fairly large volume (up to 0.14 m³). For determination of the neutron source distribution inside the sample, heuristic methods based on hierarchical cluster analysis are applied. As input parameters, the amplitudes and phases of the two-dimensional Fourier transforms of the count-profile matrices are taken, both for known point-source distributions and for the examined samples. Such matrices are collected by scanning the sample with the detection head. During the clustering process, count profiles of unknown samples are fitted into dendrograms using a 'proximity' criterion between the examined sample profile and the standard sample profiles. The distribution of neutron sources in an examined sample is then evaluated on the basis of comparison with standard source distributions. (author)
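
    The clustering step described above can be sketched in a few lines: extract the low-frequency amplitudes and phases of each count-profile matrix's 2-D Fourier transform and feed them to hierarchical clustering, so an unknown profile attaches to the most similar standard. The data, feature choice and linkage method below are illustrative, not the instrument's calibration set.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def fft_features(profile):
    """Amplitudes and phases of the 2-D Fourier transform of a count-profile matrix,
    keeping only the low-frequency block and flattening into one feature vector."""
    F = np.fft.fft2(profile)[:3, :3]          # low frequencies carry the coarse source geometry
    return np.r_[np.abs(F).ravel(), np.angle(F).ravel()]

rng = np.random.default_rng(2)
# toy count profiles: two "standard" source positions plus one unknown, noisy sample
centered = np.outer(np.hamming(8), np.hamming(8))
offset   = np.roll(centered, 3, axis=1)
unknown  = centered + 0.05 * rng.normal(size=(8, 8))

X = np.vstack([fft_features(p) for p in (centered, offset, unknown)])
Z = linkage(X, method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # the unknown should join the 'centered' cluster
```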

  16. Sampling by electro-erosion on irradiated materials

    International Nuclear Information System (INIS)

    Riviere, M.; Pizzanelli, J.P.

    1986-05-01

    Taking samples from irradiated materials, in particular for studying the mechanical properties of steels in the FAST NEUTRON program, required installing an electro-erosion (spark-erosion) machining device in a hot cell. This device allows the machining of fracture-toughness, tensile and impact test pieces [fr

  17. Public engagement in setting healthcare priorities: a ranking exercise in Cyprus.

    Science.gov (United States)

    Farmakas, Antonis; Theodorou, Mamas; Galanis, Petros; Karayiannis, Georgios; Ghobrial, Stefanos; Polyzos, Nikos; Papastavrou, Evridiki; Agapidaki, Eirini; Souliotis, Kyriakos

    2017-01-01

    In countries such as Cyprus, the financial crisis and the recession have severely affected the funding and priority setting of the health care system. There is evidence highlighting the importance of the population's preferences in designing priorities for health care settings. Although public preferences have been thoroughly analysed in many countries, there is a research gap in terms of simultaneously investigating the relative importance and the weight of differing and competing criteria for determining healthcare priority settings. The main objective of the study was to investigate public preferences for the relative utility and weight of differing and competing criteria for health care priority setting in Cyprus. The 'conjoint analysis' technique was applied to develop a ranking exercise. The aim of the study was to identify the preferences of the participants for alternative options. Participants were asked to rank, in priority order, 16 hypothetical case scenarios of patients with different diseases and diverse socio-economic characteristics awaiting treatment. The sample was purposive and consisted of 100 Cypriots, selected from public locations all over the country. It was revealed that the "severity of the disease" and the "age of the patient" were the key prioritization criteria. Participants assigned the smallest relative value to the criterion "healthy lifestyle". More precisely, participants older than 35 years assigned higher relative importance to "age", while younger participants gave more weight to the "severity of the disease". The "healthy lifestyle" criterion was assigned the lowest relative importance by all participants. In Cyprus, public participation in health care priority setting is almost nonexistent. Nonetheless, it seems that the public's participation in this process could lead to wider acceptance of the healthcare system, especially as a result of the financial crisis and the upcoming reforms implemented such as the establishment of the
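
    Conjoint ranking exercises like this are typically analysed by regressing (negated) ranks on dummy-coded attribute levels to obtain part-worth utilities, whose ranges then give each criterion's relative importance. The sketch below illustrates that arithmetic on a made-up two-attribute example; the attributes, levels and numbers are hypothetical and do not reproduce the study's data.

```python
import numpy as np

# Hypothetical ranking exercise: scenarios described by two binary attributes,
# severity (1 = severe) and age (1 = older); respondents rank 1 (highest priority) to 4.
scenarios = np.array([
    [1, 1],   # severe, older
    [1, 0],   # severe, younger
    [0, 1],   # mild, older
    [0, 0],   # mild, younger
])
mean_rank = np.array([1.2, 1.9, 2.9, 3.8])     # averaged over respondents (toy numbers)

# Part-worth utilities from OLS; lower rank = higher priority, so negate the ranks.
X = np.column_stack([np.ones(len(scenarios)), scenarios])
utilities, *_ = np.linalg.lstsq(X, -mean_rank, rcond=None)
print(dict(zip(["baseline", "severe_disease", "older_age"], utilities.round(2))))

# Relative importance of each attribute = its part-worth range over the total range.
ranges = np.abs(utilities[1:])
print((ranges / ranges.sum()).round(2))        # e.g. severity weighted more than age here
```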

  18. Moon-Mars simulation campaign in volcanic Eifel: Remote science support and sample analysis

    Science.gov (United States)

    Offringa, Marloes; Foing, Bernard H.; Kamps, Oscar

    2016-07-01

    Moon-Mars analogue missions using a mock-up lander that is part of the ESA/ILEWG ExoGeoLab project were conducted during Eifel field campaigns in 2009, 2015 and 2016 (Foing et al., 2010). In the last EuroMoonMars2016 campaign the lander was used to conduct reconnaissance experiments and in situ geological scientific analysis of samples, with a payload that mainly consisted of a telescope and a UV-VIS reflectance spectrometer. The aim of the campaign was to exhibit possibilities for the ExoGeoLab lander to perform remotely controlled experiments and test its applicability in the field by simulating the interaction with astronauts. The Eifel region in Germany where the experiments with the ExoGeoLab lander were conducted is a Moon-Mars analogue due to its geological setting and volcanic rock composition. The research conducted by analysis equipment on the lander could function in support of Moon-Mars sample return missions, by providing preliminary insight into characteristics of the analyzed samples. The set-up of the prototype lander was that of a telescope with camera and solar power equipment deployed on the top, the UV-VIS reflectance spectrometer together with computers and a sample webcam were situated in the middle compartment and to the side a sample analysis test bench was attached, attainable by astronauts from outside the lander. An alternative light source that illuminated the samples in case of insufficient daylight was placed on top of the lander and functioned on solar power. The telescope, teleoperated from a nearby stationed pressurized transport vehicle that functioned as a base control center, attained an overview of the sampling area and assisted the astronauts in their initial scouting pursuits. Locations of suitable sampling sites based on these obtained images were communicated to the astronauts, before being acquired during a simulated EVA. Sampled rocks and soils were remotely analyzed by the base control center, while the astronauts

  19. Improved phylogenomic taxon sampling noticeably affects nonbilaterian relationships.

    Science.gov (United States)

    Pick, K S; Philippe, H; Schreiber, F; Erpenbeck, D; Jackson, D J; Wrede, P; Wiens, M; Alié, A; Morgenstern, B; Manuel, M; Wörheide, G

    2010-09-01

    Despite expanding data sets and advances in phylogenomic methods, deep-level metazoan relationships remain highly controversial. Recent phylogenomic analyses depart from classical concepts in recovering ctenophores as the earliest branching metazoan taxon and propose a sister-group relationship between sponges and cnidarians (e.g., Dunn CW, Hejnol A, Matus DQ, et al. (18 co-authors). 2008. Broad phylogenomic sampling improves resolution of the animal tree of life. Nature 452:745-749). Here, we argue that these results are artifacts stemming from insufficient taxon sampling and long-branch attraction (LBA). By increasing taxon sampling from previously unsampled nonbilaterians and using an identical gene set to that reported by Dunn et al., we recover monophyletic Porifera as the sister group to all other Metazoa. This suggests that the basal position of the fast-evolving Ctenophora proposed by Dunn et al. was due to LBA and that broad taxon sampling is of fundamental importance to metazoan phylogenomic analyses. Additionally, saturation in the Dunn et al. character set is comparatively high, possibly contributing to the poor support for some nonbilaterian nodes.

  20. Selectivity and limitations of carbon sorption tubes for capturing siloxanes in biogas during field sampling.

    Science.gov (United States)

    Tansel, Berrin; Surita, Sharon C

    2016-06-01

    Siloxane levels in biogas can jeopardize the warranties of the engines used at biogas-to-energy facilities. The chemical structure of siloxanes consists of silicon and oxygen atoms, alternating in position, with hydrocarbon groups attached to the silicon side chain. Siloxanes can be either in cyclic (D) or linear (L) configuration and are referred to with a letter corresponding to their structure followed by a number corresponding to the number of silicon atoms present. When siloxanes are burned, the hydrocarbon fraction is lost and silicon is converted to silicates. The purpose of this study was to evaluate the adequacy of activated carbon gas samplers for quantitative analysis of siloxanes in biogas samples. Biogas samples were collected from a landfill and an anaerobic digester using multiple carbon sorbent tubes assembled in series. One set of samples was collected for 30 min (sampling 6 L of gas), and the second set was collected for 60 min (sampling 12 L of gas). Carbon particles were thermally desorbed and analyzed by Gas Chromatography Mass Spectrometry (GC/MS). The results showed that biogas sampling using a single tube would not adequately capture octamethyltrisiloxane (L3), hexamethylcyclotrisiloxane (D3), octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5) and dodecamethylcyclohexasiloxane (D6). Even when 4 tubes were used in series, D5 was not captured effectively. The single sorbent tube sampling method was adequate only for capturing trimethylsilanol (TMS) and hexamethyldisiloxane (L2). Affinity of siloxanes for activated carbon decreased with increasing molecular weight. Using multiple carbon sorbent tubes in series can be an appropriate method for developing a standard procedure for determining siloxane levels for low molecular weight siloxanes (up to D3). Appropriate quality assurance and quality control procedures should be developed for adequately quantifying the levels of the higher molecular weight siloxanes in biogas with sorbent tubes.