WorldWideScience

Sample records for sample set included

  1. IGSA: Individual Gene Sets Analysis, including Enrichment and Clustering.

    Science.gov (United States)

    Wu, Lingxiang; Chen, Xiujie; Zhang, Denan; Zhang, Wubing; Liu, Lei; Ma, Hongzhe; Yang, Jingbo; Xie, Hongbo; Liu, Bo; Jin, Qing

    2016-01-01

    Analysis of gene sets has been widely applied in various high-throughput biological studies. One weakness of the traditional methods is that they neglect the heterogeneity of gene expression across samples, which may lead to the omission of some specific and important gene sets. It is also difficult for them to reflect the severity of disease or to provide gene-set expression profiles for individuals. We developed an application software called IGSA that leverages a powerful analytical capacity in gene-set enrichment and sample clustering. IGSA calculates gene-set expression scores for each sample and uses an accumulating clustering strategy that groups samples according to disease progression from mild to severe. We evaluated the performance of IGSA on gastric, pancreatic and ovarian cancer data sets. We also compared the KEGG pathway enrichment results of IGSA with those of DAVID, GSEA, SPIA and ssGSEA, and analyzed the results of IGSA clustering under different similarity measures. Notably, IGSA proved to be more sensitive and specific in finding significant pathways, and can indicate changes in pathways related to the severity of disease. In addition, IGSA provides a significant gene-set profile for each sample.
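The abstract does not give IGSA's scoring formula, so the sketch below uses a simple stand-in: the per-sample score of a gene set is taken as the mean z-score of its member genes. The expression matrix, the gene-set names and the scoring rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gene_set_scores(expr, gene_sets):
    """Per-sample gene-set scores: mean z-score of the member genes (illustrative stand-in)."""
    # expr: genes x samples; z-score each gene across samples
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    return {name: z[idx].mean(axis=0) for name, idx in gene_sets.items()}

rng = np.random.default_rng(1)
expr = rng.normal(size=(100, 12))                 # 100 genes, 12 samples
expr[:10, 6:] += 2.0                              # genes 0-9 up-regulated in samples 6-11
sets = {"pathway_A": np.arange(10), "pathway_B": np.arange(50, 80)}
print({name: s.round(2) for name, s in gene_set_scores(expr, sets).items()})
```

Samples 6-11 receive clearly higher pathway_A scores, which is the kind of per-sample gene-set profile on which an accumulating clustering step could then operate.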

  2. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    Science.gov (United States)

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and able to provide a shorter response time than the classical analytical techniques now in use.

  3. Including Below Detection Limit Samples in Post Decommissioning Soil Sample Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Hwan; Yim, Man Sung [KAIST, Daejeon (Korea, Republic of)]

    2016-05-15

    To meet the required standards, the site owner has to show that the soil at the facility has been sufficiently cleaned up. To do this, one must know the contamination of the soil at the site prior to clean up. This involves sampling the soil to identify the degree of contamination. However, there is a technical difficulty in determining how much decontamination should be done. The problem arises when measured samples are below the detection limit. Regulatory guidelines for site reuse after decommissioning are commonly challenged because the majority of the activity in the soil is at or below the limit of detection. Using additional statistical analyses of contaminated soil after decommissioning is expected to have the following advantages: a better and more reliable probabilistic exposure assessment, better economics (lower project costs) and improved communication with the public. This research will develop an approach that defines an acceptable method for demonstrating compliance of decommissioned NPP sites and validates that compliance. Soil samples from NPPs often contain censored data. Conventional methods for dealing with censored data sets are statistically biased and limited in their usefulness.

  4. Data Stewardship in the Ocean Sciences Needs to Include Physical Samples

    Science.gov (United States)

    Carter, M.; Lehnert, K.

    2016-02-01

    Across the Ocean Sciences, research involves the collection and study of samples collected above, at, and below the seafloor, including but not limited to rocks, sediments, fluids, gases, and living organisms. Many domains in the Earth Sciences have recently expressed the need for better discovery, access, and sharing of scientific samples and collections (EarthCube End-User Domain workshops, 2012 and 2013, http://earthcube.org/info/about/end-user-workshops), as has the US government (OSTP Memo, March 2014). iSamples (Internet of Samples in the Earth Sciences) is a Research Coordination Network within the EarthCube program that aims to advance the use of innovative cyberinfrastructure to support and advance the utility of physical samples and sample collections for science and ensure reproducibility of sample-based data and research results. iSamples strives to build, grow, and foster a new community of practice, in which domain scientists, curators of sample repositories and collections, computer and information scientists, software developers and technology innovators engage in and collaborate on defining, articulating, and addressing the needs and challenges of physical samples as a critical component of digital data infrastructure. A primary goal of iSamples is to deliver a community-endorsed set of best practices and standards for the registration, description, identification, and citation of physical specimens and define an actionable plan for implementation. iSamples conducted a broad community survey about sample sharing and has created 5 different working groups to address the different challenges of developing the internet of samples - from metadata schemas and unique identifiers to an architecture for a shared cyberinfrastructure to manage collections, to digitization of existing collections, to education, and ultimately to establishing the physical infrastructure that will ensure preservation and access of the physical samples. Repositories that curate

  5. Self-sampling for human papillomavirus in a community setting: feasibility in Hispanic women.

    Science.gov (United States)

    De Alba, Israel; Anton-Culver, Hoda; Hubbell, F Allan; Ziogas, Argyrios; Hess, James R; Bracho, America; Arias, Caleb; Manetta, Alberto

    2008-08-01

    The aim of the study was (a) to assess the sensitivity and specificity of self-sampling in a community setting for identifying high-risk human papillomavirus (HPV) infection and abnormal Papanicolaou (Pap) smears and (b) to assess satisfaction with this collection method among Hispanic women. Lay health workers distributed self-collection kits to Hispanic women in the community. Participants collected an unsupervised vaginal sample at home or at the place and time of their preference. A total of 1,213 Hispanic women were included and provided a self-sample for HPV testing and were invited for a Pap smear; 662 (55%) of them had a Pap smear and the first 386 of these also had a physician-collected sample for HPV retesting. Using physician collection as the gold standard, unsupervised self-collection had a sensitivity of 90% and specificity of 88% for identifying high-risk HPV. Compared with physician sampling, self-sampling in a community setting had comparable sensitivity for identifying low-grade lesions or greater in the Pap smear (50% versus 55%; P = 0.45) but lower specificity (94% versus 79%). Overall experience with self-sampling was reported as excellent or very good by 64%, and only 2.6% reported a poor or fair experience. Unsupervised self-collection of vaginal samples for HPV testing in a community setting has high sensitivity for identifying high-risk HPV and high satisfaction among Hispanic women. This approach may benefit populations with limited access to health care or with cultural barriers to cervical cancer screening.

  6. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated in simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D ¹⁵N- and ¹³C-edited NOESY-HSQC spectra of human ubiquitin.
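As an illustration of the class of algorithm described (iteratively identifying strong peaks and removing their sampling artifacts from the residual), here is a CLEAN-style sketch on a simulated 1-D complex signal. It is not the authors' algorithm, which additionally uses statistical peak recognition and works on multidimensional spectra; the grid size, gain and thresholds are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 128                          # full grid size and number of sampled points
t = np.arange(n)

# Synthetic signal: two complex exponentials with a 100:1 intensity ratio
x = 1.0 * np.exp(2j * np.pi * 60 * t / n) + 0.01 * np.exp(2j * np.pi * 170 * t / n)

mask = np.zeros(n, bool)
mask[rng.choice(n, size=m, replace=False)] = True
residual = np.where(mask, x, 0)          # zero-filled, randomly sampled data
clean = np.zeros(n, complex)             # accumulated component spectrum (full-FFT scale)
gain = 0.5

for _ in range(500):
    spec = np.fft.fft(residual)
    k = int(np.argmax(np.abs(spec)))
    amp = spec[k] / m                    # amplitude estimate of the strongest component
    if abs(amp) < 1e-4:                  # stop once nothing significant remains
        break
    clean[k] += gain * amp * n           # record a fraction of the component...
    comp = gain * amp * np.exp(2j * np.pi * k * t / n)
    residual -= np.where(mask, comp, 0)  # ...and subtract it where data were measured

recovered = clean + np.fft.fft(residual)
print(np.sort(np.argsort(np.abs(recovered))[-2:]))   # expect bins 60 and 170
```

The weak peak at bin 170 is buried under the sampling artifacts of the strong peak in the raw zero-filled spectrum, but emerges once the strong component has been iteratively removed, which illustrates how such schemes preserve a high dynamic range of peak intensities.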

  7. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures are described in detail. Furthermore, as a case study, six different types of noise sample sets were selected to conduct listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The coefficient of determination (R²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased obviously after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.
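A hedged sketch of the calibration idea: the pink-noise references included in each sample set are used to map that experiment's annoyance scale onto a common standard curve. The standard-curve coefficients, the reference loudness levels and the linear mapping below are assumptions for illustration; the paper's actual curve and procedure are not reproduced here.

```python
import numpy as np

# Assumed standard curve: log10(mean annoyance) = STD_A + STD_B * loudness level (phon)
STD_A, STD_B = -1.30, 0.028              # hypothetical coefficients, not from the paper

def calibrate(loudness_refs, ann_refs, ann_samples):
    """Map raw annoyance ratings from one experiment onto the standard pink-noise scale.

    loudness_refs : loudness levels (phon) of the pink-noise reference samples
    ann_refs      : mean annoyance of those references observed in this experiment
    ann_samples   : mean annoyance of the test noise samples in the same experiment
    """
    target = STD_A + STD_B * np.asarray(loudness_refs, float)   # standard-curve values
    observed = np.log10(ann_refs)
    slope, intercept = np.polyfit(observed, target, 1)          # experiment -> standard
    return 10 ** (slope * np.log10(ann_samples) + intercept)

print(calibrate([50, 60, 70], [2.1, 3.4, 5.6], np.array([2.8, 4.9])).round(2))
```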

  8. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.

  9. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    Science.gov (United States)

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated in terms of power, false-positive rate, and receiver operating characteristic curves for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
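A minimal sketch of a gene-sampling (gene-permutation) gene-set test with and without the absolute statistic, assuming a two-sample t-statistic at the gene level and a mean gene-set score; the simulated data and set sizes are illustrative only.

```python
import numpy as np
from scipy import stats

def gene_sampling_pvalue(expr, labels, gene_set, n_perm=2000, absolute=True, seed=0):
    """Gene-sampling p-value: compare the set's mean gene statistic with random gene sets."""
    rng = np.random.default_rng(seed)
    t = stats.ttest_ind(expr[:, labels == 1], expr[:, labels == 0], axis=1).statistic
    stat = np.abs(t) if absolute else t
    observed = stat[gene_set].mean()
    null = np.array([stat[rng.choice(stat.size, size=len(gene_set), replace=False)].mean()
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

rng = np.random.default_rng(2)
expr = rng.normal(size=(1000, 6))                     # 1000 genes, 3 vs 3 samples
labels = np.array([0, 0, 0, 1, 1, 1])
expr[:20, labels == 1] += rng.choice([-2.0, 2.0], 20)[:, None]   # bidirectional change
print("absolute:", gene_sampling_pvalue(expr, labels, np.arange(20), absolute=True))
print("signed:  ", gene_sampling_pvalue(expr, labels, np.arange(20), absolute=False))
```

Because the affected genes move in both directions, the signed statistic largely cancels while the absolute statistic still flags the set, which is the bidirectional case the abstract refers to.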

  10. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    Science.gov (United States)

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...

  11. Automatic setting of the distance between sample and detector in gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An apparatus has been developed that automatically sets the distance from the sample to the detector according to the radioactivity of the sample. The distance-setting unit works in conjunction with an automatic sample changer, and is interconnected with other components so that the counting head automatically moves to the optimum distance for the analysis of a particular sample. The distance, which is indicated digitally in increments of 0.01 mm, can be set between 18 and 995 mm at count rates that can be preset between 1000 and 10 000 counts per second. On being tested, the instrument performed well within the desired range and accuracy. Under routine conditions, the spectra were much more accurate than before, especially when samples of different radioactivity were counted

  12. The Impact of Including Below Detection Limit Samples in Post Decommissioning Soil Sample Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Hwan; Yim, Man-Sung [KAIST, Daejeon (Korea, Republic of)]

    2016-10-15

    To meet the required standards, the site owner has to show that the soil at the facility has been sufficiently cleaned up. To do this, one must know the contamination of the soil at the site prior to clean up. This involves sampling the soil to identify the degree of contamination. However, there is a technical difficulty in determining how much decontamination should be done. The problem arises when measured samples are below the detection limit. Regulatory guidelines for site reuse after decommissioning are commonly challenged because the majority of the activity in the soil is at or below the limit of detection. Using additional statistical analyses of contaminated soil after decommissioning is expected to have the following advantages: a better and more reliable probabilistic exposure assessment, better economics (lower project costs) and improved communication with the public. This research will develop an approach that defines an acceptable method for demonstrating compliance of decommissioned NPP sites and validates that compliance. Soil samples from NPPs often contain censored data. Conventional methods for dealing with censored data sets are statistically biased and limited in their usefulness. In this research, additional methods are applied to real data from a monazite manufacturing factory.
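The abstract does not specify which statistical methods are applied to the censored (below-detection-limit) measurements, so the sketch below shows one standard option: maximum-likelihood estimation for left-censored lognormal data, compared with naive substitution of half the detection limit. All numbers are simulated and purely illustrative.

```python
import numpy as np
from scipy import stats, optimize

def lognormal_mle_left_censored(values, detected):
    """MLE of lognormal parameters when non-detects are only known to lie below the
    detection limit (for those rows, `values` holds the detection limit itself)."""
    x, d = np.asarray(values, float), np.asarray(detected, bool)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(np.log(x[d]), mu, sigma).sum()      # detected values
        ll += stats.norm.logcdf(np.log(x[~d]), mu, sigma).sum()    # censored values
        return -ll

    res = optimize.minimize(nll, x0=[np.log(x[d]).mean(), 0.0])
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(10)
true = rng.lognormal(mean=-1.0, sigma=0.8, size=200)   # simulated soil activities
lod = 0.4
detected = true > lod
observed = np.where(detected, true, lod)
mu, sigma = lognormal_mle_left_censored(observed, detected)
print("true mean:", true.mean().round(3),
      "MLE mean:", np.exp(mu + sigma**2 / 2).round(3),
      "LOD/2 substitution mean:", np.where(detected, true, lod / 2).mean().round(3))
```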

  13. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  14. Rapid Fractionation and Isolation of Whole Blood Components in Samples Obtained from a Community-based Setting.

    Science.gov (United States)

    Weckle, Amy; Aiello, Allison E; Uddin, Monica; Galea, Sandro; Coulborn, Rebecca M; Soliven, Richelo; Meier, Helen; Wildman, Derek E

    2015-11-30

    Collection and processing of whole blood samples in a non-clinical setting offers a unique opportunity to evaluate community-dwelling individuals both with and without preexisting conditions. Rapid processing of these samples is essential to avoid degradation of key cellular components. Included here are methods for simultaneous peripheral blood mononuclear cell (PBMC), DNA, RNA and serum isolation from a single blood draw performed in the homes of consenting participants across a metropolitan area, with processing initiated within 2 hr of collection. We have used these techniques to process over 1,600 blood specimens yielding consistent, high quality material, which has subsequently been used in successful DNA methylation, genotyping, gene expression and flow cytometry analyses. Some of the methods employed are standard; however, when combined in the described manner, they enable efficient processing of samples from participants of population- and/or community-based studies who would not normally be evaluated in a clinical setting. Therefore, this protocol has the potential to obtain samples (and subsequently data) that are more representative of the general population.

  15. An Optimized Set of Fluorescence In Situ Hybridization Probes for Detection of Pancreatobiliary Tract Cancer in Cytology Brush Samples.

    Science.gov (United States)

    Barr Fritcher, Emily G; Voss, Jesse S; Brankley, Shannon M; Campion, Michael B; Jenkins, Sarah M; Keeney, Matthew E; Henry, Michael R; Kerr, Sarah M; Chaiteerakij, Roongruedee; Pestova, Ekaterina V; Clayton, Amy C; Zhang, Jun; Roberts, Lewis R; Gores, Gregory J; Halling, Kevin C; Kipp, Benjamin R

    2015-12-01

    Pancreatobiliary cancer is detected by fluorescence in situ hybridization (FISH) of pancreatobiliary brush samples with UroVysion probes, originally designed to detect bladder cancer. We designed a set of new probes to detect pancreatobiliary cancer and compared its performance with that of UroVysion and routine cytology analysis. We tested a set of FISH probes on tumor tissues (cholangiocarcinoma or pancreatic carcinoma) and non-tumor tissues from 29 patients. We identified 4 probes that had high specificity for tumor vs non-tumor tissues; we called this set of probes pancreatobiliary FISH. We performed a retrospective analysis of brush samples from 272 patients who underwent endoscopic retrograde cholangiopancreatography for evaluation of malignancy at the Mayo Clinic; results were available from routine cytology and FISH with UroVysion probes. Archived residual specimens were retrieved and used to evaluate the pancreatobiliary FISH probes. Cutoff values for FISH with the pancreatobiliary probes were determined using 89 samples and validated in the remaining 183 samples. Clinical and pathologic evidence of malignancy in the pancreatobiliary tract within 2 years of brush sample collection was used as the standard; samples from patients without malignancies were used as negative controls. The validation cohort included 85 patients with malignancies (46.4%) and 114 patients with primary sclerosing cholangitis (62.3%). Samples containing cells above the cutoff for polysomy (copy number gain of ≥2 probes) were classified as positive in FISH with the UroVysion and pancreatobiliary probes. Multivariable logistic regression was used to estimate associations between clinical and pathology findings and results from FISH. The combination of FISH probes 1q21, 7p12, 8q24, and 9p21 identified cancer cells with 93% sensitivity and 100% specificity in pancreatobiliary tissue samples and were therefore included in the pancreatobiliary probe set. In the validation cohort of

  16. Methods to characterize environmental settings of stream and groundwater sampling sites for National Water-Quality Assessment

    Science.gov (United States)

    Nakagaki, Naomi; Hitt, Kerie J.; Price, Curtis V.; Falcone, James A.

    2012-01-01

    Characterization of natural and anthropogenic features that define the environmental settings of sampling sites for streams and groundwater, including drainage basins and groundwater study areas, is an essential component of water-quality and ecological investigations being conducted as part of the U.S. Geological Survey's National Water-Quality Assessment program. Quantitative characterization of environmental settings, combined with physical, chemical, and biological data collected at sampling sites, contributes to understanding the status of, and influences on, water-quality and ecological conditions. To support studies for the National Water-Quality Assessment program, a geographic information system (GIS) was used to develop a standard set of methods to consistently characterize the sites, drainage basins, and groundwater study areas across the nation. This report describes three methods used for characterization (simple overlay, area-weighted areal interpolation, and land-cover-weighted areal interpolation) and their appropriate applications to geographic analyses that have different objectives and data constraints. In addition, this document records the GIS thematic datasets that are used for the Program's national design and data analyses.

  17. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) from a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
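A minimal sketch of the bootstrap procedure described: repeatedly resample the stations at each sample size and examine how the leading PCA eigenvalues vary. The simulated 55 x 22 matrix stands in for the water-quality data, which are not reproduced here.

```python
import numpy as np

def bootstrap_eigenvalues(data, sample_size, n_boot=100, seed=0):
    """Bootstrap the PCA eigenvalues (correlation-matrix PCA) for a given sample size."""
    rng = np.random.default_rng(seed)
    eig = []
    for _ in range(n_boot):
        x = data[rng.choice(len(data), size=sample_size, replace=True)]
        x = (x - x.mean(0)) / x.std(0)               # standardize each variable
        eig.append(np.linalg.eigvalsh(np.cov(x, rowvar=False))[::-1])
    return np.array(eig)

rng = np.random.default_rng(3)
full = rng.normal(size=(55, 22))                     # 55 stations x 22 variables (simulated)
for n in (20, 30, 40, 50):
    e = bootstrap_eigenvalues(full, sample_size=n)
    print(n, "first 3 eigenvalues:", e[:, :3].mean(0).round(2), "+/-", e[:, :3].std(0).round(2))
```

The spread of the bootstrap eigenvalues typically narrows as the sample size grows, which is the stability question the study addresses.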

  18. Epiphytic bryozoans on Neptune grass - a sample-based data set.

    Science.gov (United States)

    Lepoint, Gilles; Heughebaert, André; Michel, Loïc N

    2016-01-01

    The seagrass Posidonia oceanica L. Delile, commonly known as Neptune grass, is an endemic species of the Mediterranean Sea. It hosts a distinctive and diverse epiphytic community, dominated by various macroalgal and animal organisms. Mediterranean bryozoans have been extensively studied, but quantitative data assessing temporal and spatial variability have rarely been documented. In Lepoint et al. (2014a, b), occurrence and abundance data of epiphytic bryozoan communities on leaves of Posidonia oceanica inhabiting Revellata Bay (Corsica, Mediterranean Sea) were reported and the trophic ecology of Electra posidoniae Gautier assessed. Here, metadata information is provided on the data set discussed in Lepoint et al. (2014a) and published on the GBIF portal as a sampling-event data set (http://ipt.biodiversity.be/resource?r=ulg_bryozoa&v=1.0). The data set is enriched by data concerning species settled on Posidonia scales (dead petioles of Posidonia leaves remaining after limb abscission).

  19. Different goodness of fit tests for Rayleigh distribution in ranked set sampling

    Directory of Open Access Journals (Sweden)

    Amer Al-Omari

    2016-03-01

    In this paper, different goodness of fit tests for the Rayleigh distribution are considered based on simple random sampling (SRS) and ranked set sampling (RSS) techniques. The performance of the suggested estimators is evaluated in terms of the power of the tests by using Monte Carlo simulation. It is found that the suggested RSS tests perform better than their counterparts in SRS.
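For readers unfamiliar with ranked set sampling, the sketch below shows how an RSS sample of a Rayleigh variable is constructed (perfect ranking assumed) and why it tends to be more informative than an SRS sample of the same size; it does not reproduce the paper's specific goodness-of-fit statistics or their RSS critical values.

```python
import numpy as np
from scipy import stats

def rss_sample(dist, k, cycles, rng):
    """Ranked set sample: in each cycle draw k sets of k units, rank each set,
    and keep the i-th order statistic from the i-th set (perfect ranking assumed)."""
    vals = []
    for _ in range(cycles):
        for i in range(k):
            vals.append(np.sort(dist.rvs(size=k, random_state=rng))[i])
    return np.array(vals)

rng = np.random.default_rng(4)
dist = stats.rayleigh(scale=1.0)
n, k = 20, 4                                          # 20 measured units in both designs
srs_means = [dist.rvs(size=n, random_state=rng).mean() for _ in range(2000)]
rss_means = [rss_sample(dist, k, n // k, rng).mean() for _ in range(2000)]
print("SE of the mean, SRS:", round(float(np.std(srs_means)), 4))
print("SE of the mean, RSS:", round(float(np.std(rss_means)), 4))   # noticeably smaller
```

Note that RSS ranks k² units per cycle but measures only k of them, so the gain comes from cheap ranking information rather than extra measurements.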

  20. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    Science.gov (United States)

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.

  1. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    Directory of Open Access Journals (Sweden)

    Emanuel Heinz

    2013-12-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry (WS-CRDS) system for stable water isotope analysis (δ²H and δ¹⁸O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content, and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control, and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season.

  2. Cognitive Sex Differences in Reasoning Tasks: Evidence from Brazilian Samples of Educational Settings

    Science.gov (United States)

    Flores-Mendoza, Carmen; Widaman, Keith F.; Rindermann, Heiner; Primi, Ricardo; Mansur-Alves, Marcela; Pena, Carla Couto

    2013-01-01

    Sex differences on the Attention Test (AC), the Raven's Standard Progressive Matrices (SPM), and the Brazilian Cognitive Battery (BPR5), were investigated using four large samples (total N=6780), residing in the states of Minas Gerais and Sao Paulo. The majority of samples used, which were obtained from educational settings, could be considered a…

  3. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    Science.gov (United States)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Different from most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application

  4. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    ) algorithm were used and compared. Both Puchwein and CADEX methods provide a calibration set equally distributed in space, and both methods require a minimum prior of knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed......) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with lowest RMSEP followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set...... effectively enhance the cost-effectiveness of NIR spectral analysis by reducing the number of analyzed samples in the calibration set by more than 80%, which substantially reduces the effort of laboratory analyses with no significant loss in prediction accuracy....
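The record names Puchwein's method and CADEX as the compared selection algorithms; the sketch below implements only the CADEX (Kennard-Stone) idea, on simulated spectra, as a hedged illustration of space-filling calibration-set selection.

```python
import numpy as np

def kennard_stone(X, n_select):
    """CADEX / Kennard-Stone selection: start from the two most distant samples, then
    repeatedly add the sample farthest from its nearest already-selected sample."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(d), d.shape))
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(nearest))])
    return selected

rng = np.random.default_rng(5)
spectra = rng.normal(size=(118, 50))      # e.g. 118 samples x 50 spectral variables (simulated)
print(kennard_stone(spectra, 20))         # indices of a 20-sample calibration set
```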

  5. Synchronizing data from irregularly sampled sensors

    Science.gov (United States)

    Uluyol, Onder

    2017-07-11

    A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
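A hedged sketch of the idea in the patent abstract: bring several irregularly timed sensor streams onto one common, higher-rate time base. Linear interpolation and the 50 Hz grid are arbitrary choices for illustration; the patent does not specify the interpolation scheme.

```python
import numpy as np

def synchronize(sensors, rate_hz):
    """Re-sample irregularly timed readings from several sensors onto one regular grid."""
    t_start = max(s["t"][0] for s in sensors.values())
    t_end = min(s["t"][-1] for s in sensors.values())
    grid = np.arange(t_start, t_end, 1.0 / rate_hz)
    return grid, {name: np.interp(grid, s["t"], s["y"]) for name, s in sensors.items()}

rng = np.random.default_rng(6)
sensors = {
    "temp":  {"t": np.sort(rng.uniform(0, 10, 40)),  "y": rng.normal(20, 1, 40)},
    "press": {"t": np.sort(rng.uniform(0, 10, 150)), "y": rng.normal(101, 2, 150)},
}
grid, synced = synchronize(sensors, rate_hz=50)   # 50 Hz exceeds both native rates
print(grid.shape, synced["temp"].shape, synced["press"].shape)
```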

  6. International Spinal Cord Injury Core Data Set (version 2.0)-including standardization of reporting

    NARCIS (Netherlands)

    Biering-Sorensen, F.; DeVivo, M. J.; Charlifue, S.; Chen, Y.; New, P. W.; Noonan, V.; Post, M. W. M.; Vogel, L.

    Study design: The study design includes expert opinion, feedback, revisions and final consensus. Objectives: The objective of the study was to present the new knowledge obtained since the International Spinal Cord Injury (SCI) Core Data Set (Version 1.0) published in 2006, and describe the

  7. International Spinal Cord Injury Core Data Set (version 2.0)-including standardization of reporting

    NARCIS (Netherlands)

    Biering-Sørensen, F; DeVivo, M J; Charlifue, Susan; Chen, Y; New, P.W.; Noonan, V.; Post, M W M; Vogel, L.

    STUDY DESIGN: The study design includes expert opinion, feedback, revisions and final consensus. OBJECTIVES: The objective of the study was to present the new knowledge obtained since the International Spinal Cord Injury (SCI) Core Data Set (Version 1.0) published in 2006, and describe the

  8. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    Science.gov (United States)

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  9. Handling missing data in ranked set sampling

    CERN Document Server

    Bouza-Herrera, Carlos N

    2013-01-01

    The existence of missing observations is a very important aspect to be considered in the application of survey sampling, for example. In human populations they may be caused by a refusal of some interviewees to give the true value for the variable of interest. Traditionally, simple random sampling is used to select samples. Most statistical models are supported by the use of samples selected by means of this design. In recent decades, an alternative design has started being used, which, in many cases, shows an improvement in terms of accuracy compared with traditional sampling. It is called R

  10. Characterization of full set material constants of piezoelectric materials based on ultrasonic method and inverse impedance spectroscopy using only one sample.

    Science.gov (United States)

    Li, Shiyang; Zheng, Limei; Jiang, Wenhua; Sahul, Raffi; Gopalan, Venkatraman; Cao, Wenwu

    2013-09-14

    The most difficult task in the characterization of complete-set material properties for piezoelectric materials is self-consistency. Because there are many independent elastic, dielectric, and piezoelectric constants, several samples are needed to obtain the full set of constants. Property variation from sample to sample often makes the obtained data set lack self-consistency. Here, we present a method, based on pulse-echo ultrasound and inverse impedance spectroscopy, to precisely determine the full set of physical properties of piezoelectric materials using only one small sample, which eliminates the sample-to-sample variation problem and guarantees self-consistency. The method has been applied to characterize the [001]c-poled Mn-modified 0.27Pb(In1/2Nb1/2)O3-0.46Pb(Mg1/3Nb2/3)O3-0.27PbTiO3 single crystal, and the validity of the measured data is confirmed by a previously established method. For the inverse calculations using the impedance spectrum, the stability of the reconstructed results is analyzed by fluctuation analysis of the input data. In contrast to conventional regression methods, our method takes full advantage of both ultrasonic and inverse impedance spectroscopy methods to extract all constants from only one small sample. The method provides a powerful tool for characterizing novel piezoelectric materials of small size and for generating the input data sets needed for device designs using finite element simulations.

  11. Preanalytical Blood Sampling Errors in Clinical Settings

    International Nuclear Information System (INIS)

    Zehra, N.; Malik, A. H.; Arshad, Q.; Sarwar, S.; Aslam, S.

    2016-01-01

    Background: Blood sampling is one of the common procedures done in every ward for disease diagnosis and prognosis. Hundreds of samples are collected daily from different wards, but a lack of appropriate knowledge of blood sampling among paramedical staff and accidental errors make some samples inappropriate for testing. Thus the need to avoid these errors for better results remains. We carried out this research with the aim of determining the common errors during blood sampling, finding the factors responsible, and proposing ways to reduce these errors. Methods: A cross-sectional descriptive study was carried out at the Military and Combined Military Hospital Rawalpindi during February and March 2014. A Venous Blood Sampling questionnaire (VBSQ) was filled in by the staff on a voluntary basis in front of the researchers. The staff were briefed on the purpose of the survey before filling in the questionnaire. The sample size was 228. Results were analysed using SPSS-21. Results: When asked in the questionnaire, around 61.6 percent of the paramedical staff stated that they cleaned the vein by moving the alcohol swab from inward to outward, while 20.8 percent of the staff reported that they felt the vein after disinfection. Contrary to WHO guidelines, 89.6 percent reported a habit of placing blood in the test tube while holding it in the other hand, whereas this should be done after inserting the tube into the stand. Although 86 percent thought that they had ample knowledge of the blood sampling process, they did not practice it properly. Conclusion: Pre-analytical blood sampling errors are common in our setup. Although 86 percent of participants thought they had adequate knowledge of blood sampling, most were not adhering to standard protocols. There is a need for continued education and refresher courses. (author)

  12. Acceptability of self-collection sampling for HPV-DNA testing in low-resource settings: a mixed methods approach.

    Science.gov (United States)

    Bansil, Pooja; Wittet, Scott; Lim, Jeanette L; Winkler, Jennifer L; Paul, Proma; Jeronimo, Jose

    2014-06-12

    Vaginal self-sampling with HPV-DNA tests is a promising primary screening method for cervical cancer. However, women's experiences, concerns and the acceptability of such tests in low-resource settings remain unknown. In India, Nicaragua, and Uganda, a mixed-method design was used to collect data from surveys (N = 3,863), qualitative interviews (N = 72; 20 providers and 52 women) and focus groups (N = 30 women) on women's and providers' experiences with self-sampling, women's opinions of sampling at home, and their future needs. Among surveyed women, 90% provided a self-collected sample. Of these, 75% reported it was easy, although 52% were initially concerned about hurting themselves and 24% were worried about not getting a good sample. Most surveyed women preferred self-sampling (78%). However, it was not clear if they responded to the privacy of self-sampling or the convenience of avoiding a pelvic examination, or both. In follow-up interviews, most women reported that they didn't mind self-sampling, but many preferred to have a provider collect the vaginal sample. Most women also preferred clinic-based screening (as opposed to home-based self-sampling), because the sample could be collected by a provider, women could receive treatment if needed, and the clinic was sanitary and provided privacy. Self-sampling acceptability was higher when providers prepared women through education, allowed women to examine the collection brush, and were present during the self-collection process. Among survey respondents, aids that would facilitate self-sampling in the future were: staff help (53%), additional images in the illustrated instructions (31%), and a chance to practice beforehand with a doll/model (26%). Self- and vaginal sampling are widely acceptable among women in low-resource settings. Providers have a unique opportunity to educate and prepare women for self-sampling and be flexible in accommodating women's preference for self-sampling.

  13. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    Directory of Open Access Journals (Sweden)

    Der-Chiang Li

    It is difficult for learning models to achieve high classification performance with imbalanced data sets, because when one of the classes is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, the classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distributed shape of the minority data, and then produce samples according to this distribution. Four real datasets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged into the D3C method (PPDP+D3C) with those of the one-sided selection (OSS) method, the well-known SMOTEBoost (SB) study, the normal distribution-based oversampling (NDO) approach, and the proposed data pre-processing (PPDP) method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
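A simplified sketch of the two operations the abstract describes, with stand-ins for the paper's specifics: box-and-whisker (IQR) filtering of the majority class, and synthetic minority samples drawn from a multivariate normal fit instead of the authors' Mega-Trend-Diffusion and distribution-shape hypothesis.

```python
import numpy as np

def iqr_filter(X):
    """Drop majority-class rows containing a box-and-whisker outlier in any feature."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    ok = np.all((X >= q1 - 1.5 * iqr) & (X <= q3 + 1.5 * iqr), axis=1)
    return X[ok]

def synthesize(X, n_new, rng):
    """Generate synthetic minority samples from a normal fit to the minority data
    (a simpler stand-in for the paper's distribution-estimation step)."""
    return rng.multivariate_normal(X.mean(0), np.cov(X, rowvar=False), size=n_new)

rng = np.random.default_rng(7)
majority = rng.normal(0.0, 1.0, size=(500, 4))
minority = rng.normal(2.0, 1.0, size=(25, 4))
majority_reduced = iqr_filter(majority)
minority_augmented = np.vstack([minority, synthesize(minority, 200, rng)])
print(majority_reduced.shape, minority_augmented.shape)   # classes now far less imbalanced
```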

  14. Evaluation of Two Lyophilized Molecular Assays to Rapidly Detect Foot-and-Mouth Disease Virus Directly from Clinical Samples in Field Settings.

    Science.gov (United States)

    Howson, E L A; Armson, B; Madi, M; Kasanga, C J; Kandusi, S; Sallu, R; Chepkwony, E; Siddle, A; Martin, P; Wood, J; Mioulet, V; King, D P; Lembo, T; Cleaveland, S; Fowler, V L

    2017-06-01

    Accurate, timely diagnosis is essential for the control, monitoring and eradication of foot-and-mouth disease (FMD). Clinical samples from suspect cases are normally tested at reference laboratories. However, transport of samples to these centralized facilities can be a lengthy process that can impose delays on critical decision making. These concerns have motivated work to evaluate simple-to-use technologies, including molecular-based diagnostic platforms, that can be deployed closer to suspect cases of FMD. In this context, FMD virus (FMDV)-specific reverse transcription loop-mediated isothermal amplification (RT-LAMP) and real-time RT-PCR (rRT-PCR) assays, compatible with simple sample preparation methods and in situ visualization, have been developed which share equivalent analytical sensitivity with laboratory-based rRT-PCR. However, the lack of robust 'ready-to-use kits' that utilize stabilized reagents limits the deployment of these tests into field settings. To address this gap, this study describes the performance of lyophilized rRT-PCR and RT-LAMP assays to detect FMDV. Both of these assays are compatible with the use of fluorescence to monitor amplification in real-time, and for the RT-LAMP assays end point detection could also be achieved using molecular lateral flow devices. Lyophilization of reagents did not adversely affect the performance of the assays. Importantly, when these assays were deployed into challenging laboratory and field settings within East Africa they proved to be reliable in their ability to detect FMDV in a range of clinical samples from acutely infected as well as convalescent cattle. These data support the use of highly sensitive molecular assays into field settings for simple and rapid detection of FMDV. © 2015 The Authors. Transboundary and Emerging Diseases Published by Blackwell Verlag GmbH.

  15. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    Science.gov (United States)

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
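To make the trade-off concrete, here is a plain O(kn²) dynamic-programming implementation of Fisher-Jenks 1-D classification applied to a random sample of a large attribute vector; whether the sample-based breaks are close enough to the full-enumeration breaks is exactly the question the simulations in the paper examine. The data and sample size are illustrative.

```python
import numpy as np

def jenks_breaks(values, k):
    """Fisher-Jenks optimal 1-D classification: minimise the total within-class sum of
    squared deviations by dynamic programming; returns the k class upper bounds."""
    x = np.sort(np.asarray(values, float))
    n = len(x)
    csum = np.insert(np.cumsum(x), 0, 0.0)
    csq = np.insert(np.cumsum(x * x), 0, 0.0)

    def sse(i, j):                       # cost of one class covering x[i..j] inclusive
        s, cnt = csum[j + 1] - csum[i], j - i + 1
        return csq[j + 1] - csq[i] - s * s / cnt

    cost = np.full((n, k), np.inf)
    split = np.zeros((n, k), int)
    for j in range(n):
        cost[j, 0] = sse(0, j)
    for c in range(1, k):
        for j in range(c, n):
            for i in range(c, j + 1):
                v = cost[i - 1, c - 1] + sse(i, j)
                if v < cost[j, c]:
                    cost[j, c], split[j, c] = v, i
    breaks, j = [], n - 1                # walk back through the optimal split points
    for c in range(k - 1, 0, -1):
        i = split[j, c]
        breaks.append(x[i - 1])
        j = i - 1
    return sorted(breaks) + [x[-1]]

rng = np.random.default_rng(8)
attribute = rng.lognormal(mean=2.0, sigma=0.8, size=20000)     # full attribute vector
sample = rng.choice(attribute, size=500, replace=False)        # 2.5% sample
print(np.round(jenks_breaks(sample, 5), 2))   # breaks estimated from the sample only
```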

  16. Selection Component Analysis of Natural Polymorphisms using Population Samples Including Mother-Offspring Combinations, II

    DEFF Research Database (Denmark)

    Jarmer, Hanne Østergaard; Christiansen, Freddy Bugge

    1981-01-01

    Population samples including mother-offspring combinations provide information on the selection components: zygotic selection, sexual selection, gametic selection and fecundity selection, on the mating pattern, and on the deviation from linkage equilibrium among the loci studied. The theory

  17. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
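As a small worked example of the kind of calculation the book covers, the classical sample size for estimating a population mean to within a chosen margin of error is n = (z·σ/E)², rounded up; the σ and margin below are arbitrary illustrative values.

```python
import math
from scipy import stats

def sample_size_for_mean(sigma, margin, confidence=0.95):
    """n = (z * sigma / margin)^2, rounded up to the next whole observation."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma / margin) ** 2)

print(sample_size_for_mean(sigma=15, margin=3))   # 95% confidence -> 97 observations
```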

  18. Health Outcomes Survey - Limited Data Set

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Health Outcomes Survey (HOS) limited data sets (LDS) are comprised of the entire national sample for a given 2-year cohort (including both respondents...

  19. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    The length distribution of catches represents a fundamental source of information for estimating growth and spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. For that purpose, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust size and juvenile estimations. Although this work is applied to the anchovy fishery, the procedure can be applied to any fishery, either for on-board or inland biometric measurements.
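A hedged sketch of how the question can be framed: bootstrap subsamples of different sizes from one fishing set's length data and watch how the uncertainty of the estimated juvenile proportion shrinks. The simulated lengths and the 12 cm juvenile cut-off are illustrative assumptions; the paper's actual criterion and data are not reproduced here.

```python
import numpy as np

def se_of_juvenile_proportion(lengths, cutoff_cm, sample_sizes, n_boot=2000, seed=0):
    """Bootstrap standard error of the estimated juvenile proportion versus sample size."""
    rng = np.random.default_rng(seed)
    lengths = np.asarray(lengths, float)
    out = {}
    for n in sample_sizes:
        p = [(rng.choice(lengths, size=n, replace=True) < cutoff_cm).mean()
             for _ in range(n_boot)]
        out[n] = round(float(np.std(p)), 3)
    return out

rng = np.random.default_rng(9)
catch = rng.normal(12.5, 1.8, size=5000)          # simulated lengths (cm) in one fishing set
print(se_of_juvenile_proportion(catch, cutoff_cm=12.0, sample_sizes=(30, 60, 120, 240)))
```

The optimum sample length is then a judgment about where extra measuring effort no longer buys a meaningful reduction in this error.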

  20. Results for five sets of forensic genetic markers studied in a Greek population sample.

    Science.gov (United States)

    Tomas, C; Skitsa, I; Steinmeier, E; Poulsen, L; Ampati, A; Børsting, C; Morling, N

    2015-05-01

    A population sample of 223 Greek individuals was typed for five sets of forensic genetic markers with the kits NGM SElect™, SNPforID 49plex, DIPplex®, Argus X-12 and PowerPlex® Y23. No significant deviation from Hardy-Weinberg expectations was observed for any of the studied markers after Holm-Šidák correction. Statistically significant (P21) individuals for 16 autosomal STRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. X fluorescence spectrometer including at least one toroidal monochromator with logarithmic spiral

    International Nuclear Information System (INIS)

    Florestan, J.

    1986-01-01

    This spectrometer includes an X-ray source, an entrance diaphragm, a revolution monochromator with monocrystal thin plates and a seal set in its center, an outer diaphragm and an X-ray detector. A second monochromator can be set between the source and the sample. The thin plates are arranged to form a toroidal ring whose cross section in an axial plane describes a logarithmic spiral.

  2. Perceived climate in physical activity settings.

    Science.gov (United States)

    Gill, Diane L; Morrow, Ronald G; Collins, Karen E; Lucey, Allison B; Schultz, Allison M

    2010-01-01

    This study focused on the perceived climate for LGBT youth and other minority groups in physical activity settings. A large sample of undergraduates and a selected sample including student teachers/interns and a campus Pride group completed a school climate survey and rated the climate in three physical activity settings (physical education, organized sport, exercise). Overall, school climate survey results paralleled the results from national samples, revealing high levels of homophobic remarks and low levels of intervention. Physical activity climate ratings were mid-range, but a multivariate analysis of variance (MANOVA) revealed clear differences, with all settings rated more inclusive for racial/ethnic minorities and most exclusive for gays/lesbians and people with disabilities. The results are in line with national surveys and research suggesting sexual orientation and physical characteristics are often the basis for harassment and exclusion in sport and physical activity. The current results also indicate that future physical activity professionals recognize exclusion, suggesting they could benefit from programs that move beyond awareness to skills and strategies for creating more inclusive programs.

  3. Experimental set-up for electrical resistivity measurements at low temperature in amorphous and crystalline metallic samples

    International Nuclear Information System (INIS)

    Rodriquez Fernandez, J.M.; Lopez Sanchez, R.J.; Gomez-Sal, J.C.

    1986-01-01

    An experimental set-up to measure the thermal variation of the electrical resistivity between 10.5 K and 300 K has been developed. A four-probe A.C. method with a synchronous-detection (lock-in) technique was found to be the most suitable for our purposes. We have designed a new type of pressure sample-holder adapted to the CS-202 type cryostat. Measurements performed on samples already known have allowed us to determine the sensitivity of our experiments, which is Δρ/ρ = 2×10⁻⁴. Measurements performed on the new Y₃Rh₂Si₂ compound, which has no magnetic ordering at 10 K, are also presented. (author)

  4. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    Science.gov (United States)

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  5. Principal component analysis applied to Fourier transform infrared spectroscopy for the design of calibration sets for glycerol prediction models in wine and for the detection and classification of outlier samples.

    Science.gov (United States)

    Nieuwoudt, Helene H; Prior, Bernard A; Pretorius, Isak S; Manley, Marena; Bauer, Florian F

    2004-06-16

    Principal component analysis (PCA) was used to identify the main sources of variation in the Fourier transform infrared (FT-IR) spectra of 329 wines of various styles. The FT-IR spectra were gathered using a specialized WineScan instrument. The main sources of variation included the reducing sugar and alcohol content of the samples, as well as the stage of fermentation and the maturation period of the wines. The implications of the variation between the different wine styles for the design of calibration models with accurate predictive abilities were investigated using glycerol calibration in wine as a model system. PCA enabled the identification and interpretation of samples that were poorly predicted by the calibration models, as well as the detection of individual samples in the sample set that had atypical spectra (i.e., outlier samples). The Soft Independent Modeling of Class Analogy (SIMCA) approach was used to establish a model for the classification of the outlier samples. A glycerol calibration for wine was developed (reducing sugar content 8% v/v) with satisfactory predictive ability (SEP = 0.40 g/L). The RPD value (ratio of the standard deviation of the data to the standard error of prediction) was 5.6, indicating that the calibration is suitable for quantification purposes. A calibration for glycerol in special late harvest and noble late harvest wines (RS 31-147 g/L, alcohol > 11.6% v/v) with a prediction error SECV = 0.65 g/L, was also established. This study yielded an analytical strategy that combined the careful design of calibration sets with measures that facilitated the early detection and interpretation of poorly predicted samples and outlier samples in a sample set. The strategy provided a powerful means of quality control, which is necessary for the generation of accurate prediction data and therefore for the successful implementation of FT-IR in the routine analytical laboratory.
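The RPD mentioned in the abstract is simply the ratio of the standard deviation of the reference data to the standard error of prediction; a quick helper makes the reported figures concrete. The synthetic reference and prediction values below are assumptions chosen to match the reported SEP of 0.40 g/L and RPD of 5.6, which together imply a glycerol reference standard deviation of about 2.2 g/L.

```python
import numpy as np

def rpd(reference_values, predictions):
    """RPD: standard deviation of the reference data over the standard error of prediction."""
    ref = np.asarray(reference_values, float)
    sep = np.sqrt(np.mean((ref - np.asarray(predictions, float)) ** 2))
    return np.std(ref, ddof=1) / sep

rng = np.random.default_rng(11)
ref = rng.normal(6.0, 2.24, size=100)          # hypothetical reference glycerol values, g/L
pred = ref + rng.normal(0.0, 0.40, size=100)   # predictions with SEP around 0.40 g/L
print(round(rpd(ref, pred), 1))                # close to the reported RPD of 5.6
```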

  6. Methods for sampling geographically mobile female traders in an East African market setting

    Science.gov (United States)

    Achiro, Lillian; Kwena, Zachary A.; McFarland, Willi; Neilands, Torsten B.; Cohen, Craig R.; Bukusi, Elizabeth A.; Camlin, Carol S.

    2018-01-01

    Background The role of migration in the spread of HIV in sub-Saharan Africa is well documented. Yet migration and HIV research have often focused on HIV risks to male migrants and their partners, or migrants overall, often failing to measure the risks to women via their direct involvement in migration. Inconsistent measures of mobility, gender biases in those measures, and limited data sources for sex-specific population-based estimates of mobility have contributed to a paucity of research on the HIV prevention and care needs of migrant and highly mobile women. This study addresses an urgent need for novel methods for developing probability-based, systematic samples of highly mobile women, focusing on a population of female traders operating out of one of the largest open-air markets in East Africa. Our method involves three stages: 1) identification and mapping of all market stall locations using Global Positioning System (GPS) coordinates; 2) using female market vendor stall GPS coordinates to build the sampling frame using replicates; and 3) using maps and GPS data for recruitment of study participants. Results The locations of 6,390 vendor stalls were mapped using GPS. Of these, 4,064 stalls occupied by women (63.6%) were used to draw four replicates of 128 stalls each, and a fifth replicate of 15 pre-selected random alternates, for a total of 527 stalls assigned to one of five replicates. Staff visited 323 stalls from the first three replicates and from these successfully recruited 306 female vendors into the study, for a participation rate of 94.7%. Mobilization strategies and involving traders' association representatives in participant recruitment were critical to the study's success. Conclusion The study's high participation rate suggests that this geospatial sampling method holds promise for the development of probability-based samples in other settings that serve as transport hubs for highly mobile populations. PMID:29324780
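
    The replicate-drawing step (stage 2) can be sketched in a few lines: restrict the mapped stalls to those occupied by women and split a shuffled frame into replicates of the sizes reported above. The record layout (stall ID, coordinates, a female-vendor flag) and the use of Python's random module are assumptions for illustration, not the study's actual implementation.

```python
import random

def build_replicates(stalls, sizes=(128, 128, 128, 128, 15), seed=1):
    frame = [s for s in stalls if s[3]]           # keep only stalls occupied by women
    rng = random.Random(seed)
    rng.shuffle(frame)
    replicates, start = [], 0
    for size in sizes:
        replicates.append(frame[start:start + size])
        start += size
    return replicates

# Usage with fabricated records: (stall_id, latitude, longitude, vendor_is_female)
stalls = [(i, -0.09 + i * 1e-5, 34.75 + i * 1e-5, i % 3 != 0) for i in range(6390)]
print([len(r) for r in build_replicates(stalls)])
```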

  7. Results for five sets of forensic genetic markers studied in a Greek population sample

    DEFF Research Database (Denmark)

    Tomas Mas, Carmen; Skitsa, I; Steinmeier, E

    2015-01-01

    A population sample of 223 Greek individuals was typed for five sets of forensic genetic markers with the kits NGM SElect™, SNPforID 49plex, DIPplex®, Argus X-12 and PowerPlex® Y23. No significant deviation from Hardy-Weinberg expectations was observed for any of the studied markers after Holm... origin. The Greek population grouped closely to the other European populations as measured by FST* distances. The match probability ranged from 1 in 2×10⁷ males, using haplotype frequencies of four X-chromosome haplogroups in males, to 1 in 1.73×10²¹ individuals for 16 autosomal STRs...

  8. BioSampling Data from LHP Cruises

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set includes separate bioSampling logs from each LHP Bottomfishing cruise both within and outside of the Main Hawaiian Islands, as well as a master file...

  9. Malaria PCR Detection in Cambodian Low-Transmission Settings: Dried Blood Spots versus Venous Blood Samples

    Science.gov (United States)

    Canier, Lydie; Khim, Nimol; Kim, Saorin; Eam, Rotha; Khean, Chanra; Loch, Kaknika; Ken, Malen; Pannus, Pieter; Bosman, Philippe; Stassijns, Jorgen; Nackers, Fabienne; Alipon, SweetC; Char, Meng Chuor; Chea, Nguon; Etienne, William; De Smet, Martin; Kindermans, Jean-Marie; Ménard, Didier

    2015-01-01

    In the context of malaria elimination, novel strategies for detecting very low malaria parasite densities in asymptomatic individuals are needed. One of the major limitations of malaria parasite detection methods is the volume of the blood sample being analyzed. The objective of the study was to compare the diagnostic accuracy of a malaria polymerase chain reaction assay using dried blood spots (DBS, 5 μL) and different volumes of venous blood (50 μL, 200 μL, and 1 mL). The limit of detection of the polymerase chain reaction assay, determined using calibrated Plasmodium falciparum blood dilutions, showed that venous blood samples (50 μL, 200 μL, 1 mL) combined with Qiagen extraction methods gave a similar threshold of 100 parasites/mL, ∼100-fold lower than the 5 μL DBS/Instagene method. On a set of 521 field samples collected in two different transmission areas in northern Cambodia, no significant difference was found in the proportion of parasite carriers, regardless of the method used. The 5 μL DBS method missed 27% of the samples detected by the 1 mL venous blood method, but most of the missed parasite carriers were infected by Plasmodium vivax (84%). The remaining missed P. falciparum parasite carriers (N = 3) were only detected in high-transmission areas. PMID:25561570

  10. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Science.gov (United States)

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings.
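
    The precision comparison above can be reasoned about with the usual design-effect approximation, DEFF = 1 + (m - 1) × ICC, where m is the number of observations per cluster and ICC the intra-cluster correlation. The sketch below computes an approximate 95% confidence-interval half-width for a prevalence under each design; the prevalence (15%) and ICC (0.05) are illustrative values, not estimates from the Darfur survey.

```python
import math

def ci_half_width(p, clusters, per_cluster, icc, z=1.96):
    n = clusters * per_cluster
    deff = 1 + (per_cluster - 1) * icc               # design effect
    return z * math.sqrt(deff * p * (1 - p) / n)

# Illustrative prevalence of 15% and intra-cluster correlation of 0.05
for clusters, per_cluster in [(30, 30), (33, 6), (67, 3)]:
    hw = ci_half_width(0.15, clusters, per_cluster, icc=0.05)
    print(f"{clusters} x {per_cluster}: +/- {hw:.3f}")
```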

  11. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Directory of Open Access Journals (Sweden)

    Deconinck Hedwig

    2008-05-01

    Full Text Available Abstract The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings.

  12. Sample Set (SE): SE57 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available ...metabolite accumulation patterns in plants. We optimized the MRM conditions for specific compounds by performing automate...applied to high-throughput automated analysis of biological samples using TQMS coupled with ultra performanc...ies, and family-specific metabolites could be predicted using a batch-learning self-organizing map analysis. Thus, the automate...

  13. The DSM-5 Dimensional Anxiety Scales in a Dutch non-clinical sample: psychometric properties including the adult separation anxiety disorder scale.

    Science.gov (United States)

    Möller, Eline L; Bögels, Susan M

    2016-09-01

    With DSM-5, the American Psychiatric Association encourages complementing categorical diagnoses with dimensional severity ratings. We therefore examined the psychometric properties of the DSM-5 Dimensional Anxiety Scales, a set of brief dimensional scales that are consistent in content and structure and assess DSM-5-based core features of anxiety disorders. Participants (285 males, 255 females) completed the DSM-5 Dimensional Anxiety Scales for social anxiety disorder, generalized anxiety disorder, specific phobia, agoraphobia, and panic disorder that were included in previous studies on the scales, and also for separation anxiety disorder, which is included in the DSM-5 chapter on anxiety disorders. Moreover, they completed the Screen for Child Anxiety Related Emotional Disorders Adult version (SCARED-A). The DSM-5 Dimensional Anxiety Scales demonstrated high internal consistency, and the scales correlated significantly and substantially with corresponding SCARED-A subscales, supporting convergent validity. Separation anxiety appeared present among adults, supporting the DSM-5 recognition of separation anxiety as an anxiety disorder across the life span. To conclude, the DSM-5 Dimensional Anxiety Scales are a valuable tool to screen for specific adult anxiety disorders, including separation anxiety. Research in more diverse and clinical samples with anxiety disorders is needed. © 2016 The Authors International Journal of Methods in Psychiatric Research Published by John Wiley & Sons Ltd.

  14. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6% of the variance in glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95% confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, and a fault was defined as the estimated production of glutamate falling outside the 95% confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of the fault in the fermentation process, but also the end of the fault when the fermentation conditions returned to normal. The proposed approach used only a small sample set from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
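
    The fault-flagging logic (estimate a 95% bootstrap band from normal batches, then flag observations outside it) can be sketched as follows. A quadratic polynomial fit stands in for the authors' GAM, the time series is synthetic, and the residual-based widening of the band and the simulated fault are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 30, 120)                       # fermentation time in hours (synthetic)
glutamate = 0.08 * t**2 + rng.normal(0.0, 1.0, t.size)

def fit_predict(x, y, x_new):
    coefs = np.polyfit(x, y, deg=2)               # simple stand-in for the fitted GAM
    return np.polyval(coefs, x_new)

# Bootstrap the normal-batch data to get a 95% band for the expected trajectory
boot = np.empty((500, t.size))
for b in range(boot.shape[0]):
    idx = rng.integers(0, t.size, t.size)
    boot[b] = fit_predict(t[idx], glutamate[idx], t)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
resid_sd = np.std(glutamate - fit_predict(t, glutamate, t), ddof=3)
lo, hi = lo - 2 * resid_sd, hi + 2 * resid_sd     # widen to a rough prediction band

# A new batch is flagged as faulty wherever it leaves the band (noise omitted for clarity)
new_batch = 0.08 * t**2 - 6.0 * (t > 20)          # simulated fault after 20 h
flags = (new_batch < lo) | (new_batch > hi)
print("first fault detected at t =", t[flags][0] if flags.any() else None)
```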

  15. Implementation of antimicrobial peptides for sample preparation prior to nucleic acid amplification in point-of-care settings.

    Science.gov (United States)

    Krõlov, Katrin; Uusna, Julia; Grellier, Tiia; Andresen, Liis; Jevtuševskaja, Jekaterina; Tulp, Indrek; Langel, Ülo

    2017-12-01

    A variety of sample preparation techniques are used prior to nucleic acid amplification. However, their efficiency is not always sufficient, and nucleic acid purification remains the preferred method for template preparation. Purification is difficult and costly to apply in point-of-care (POC) settings, and there is a strong need for more robust, rapid, and efficient biological sample preparation techniques in molecular diagnostics. Here, the authors applied antimicrobial peptides (AMPs) for urine sample preparation prior to isothermal loop-mediated amplification (LAMP). AMPs bind to many microorganisms such as bacteria, fungi, protozoa and viruses, disrupting their membrane integrity and facilitating nucleic acid release. The authors show that incubation of E. coli with the antimicrobial peptide cecropin P1 for 5 min had a significant effect on the availability of template DNA compared with untreated or even heat-treated samples, resulting in up to a six-fold increase in amplification efficiency. These results show that AMP treatment is a very efficient sample preparation technique that is suitable for application prior to nucleic acid amplification directly within biological samples. Furthermore, the entire process of AMP treatment was performed at room temperature within 5 min, making it a good candidate for use in POC applications.

  16. Sampling on Quasicrystals

    OpenAIRE

    Grepstad, Sigrid

    2011-01-01

    We prove that quasicrystals are universal sets of stable sampling in any dimension. Necessary and sufficient density conditions for stable sampling and interpolation sets in one dimension are studied in detail.

  17. Sampling a guide for internal auditors

    CERN Document Server

    Apostolou, Barbara

    2004-01-01

    While it is possible to examine 100 percent of an audit customer's data, the time and cost associated with such a study are often prohibitive. To obtain sufficient, reliable, and relevant information with a limited data set, sampling is an efficient and effective tool. It can help you evaluate the customer's assertions, as well as reach audit conclusions and provide reasonable assurance to your organization. This handbook will help you understand sampling. It also serves as a guide for auditors and students preparing for certification. Topics include: An overview of sampling. Statistical and nonstatistical sampling issues. Sampling selection methods and risks. The pros and cons of popular sampling plans.

  18. Tracer experiment data sets for the verification of local and meso-scale atmospheric dispersion models including topographic effects

    International Nuclear Information System (INIS)

    Sartori, E.; Schuler, W.

    1992-01-01

    Software and data for nuclear energy applications are acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. This article proposes more specifically a scheme for acquiring, storing and distributing atmospheric tracer experiment data (ATE) required for verification of atmospheric dispersion models especially the most advanced ones including topographic effects and specific to the local and meso-scale. These well documented data sets will form a valuable complement to the set of atmospheric dispersion computer codes distributed internationally. Modellers will be able to gain confidence in the predictive power of their models or to verify their modelling skills. (au)

  19. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...

  20. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated. Distribution types as required for performance assessment modeling of geologic nuclear waste disposal are provided. These are: - Uniform, - Log-uniform (base 10 or natural), - Normal, - Lognormal (base 10 or natural), - Exponential, - Bernoulli, - User defined continuous distribution. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions, whose usual definition has an infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
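
    The two ingredients named in the program description, a linear congruential generator for uniform variates and transformations to other distributions, can be sketched as below. The LCG constants are common textbook values (Numerical Recipes), not necessarily those used in BWIP-RANDOM-SAMPLING, and only two of the listed distributions are shown.

```python
import math

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator yielding uniform(0, 1) variates."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def exponential(u, rate=1.0):
    return -math.log(1.0 - u) / rate      # inverse CDF of the exponential distribution

def log_uniform(u, low=1e-3, high=1e2):
    # uniform in log10 space between the (truncated) limits, base 10
    return 10 ** (math.log10(low) + u * (math.log10(high) - math.log10(low)))

gen = lcg(seed=12345)
print([round(exponential(next(gen)), 3) for _ in range(3)])
print([round(log_uniform(next(gen)), 5) for _ in range(3)])
```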

  1. Including Students with Severe Disabilities in General Education Settings.

    Science.gov (United States)

    Wisniewski, Lech; Alper, Sandra

    1994-01-01

    This paper presents five systematic phases for bringing about successful regular education inclusion of students with severe disabilities. Phases include develop networks within the community, assess school and community resources, review strategies for integration, install strategies that lead to integration, and develop a system of feedback and…

  2. Sample Set (SE): SE47 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE47 Metabolomic correlation-network modules in Arabidopsis based on a graph-cluste...as a system. Typical metabolomics data show a few but significant correlations among metabolite levels when ...itions. Although several studies have assessed topologies in metabolomic correlation networks, it remains un... (mto1), and transparent testa4 (tt4) to compare systematically the metabolomic correlation...s in samples of roots and aerial parts. We then applied graph clustering to the constructed correlation

  3. A set of triple-resonance nuclear magnetic resonance experiments for structural characterization of organophosphorus compounds in mixture samples

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Harri, E-mail: Harri.T.Koskela@helsinki.fi [VERIFIN, University of Helsinki, P.O. Box 55, FIN-00014 Helsinki (Finland)

    2012-11-02

    Highlights: ▶ New ¹H, ¹³C, ³¹P triple-resonance NMR pulse experiments. ▶ Analysis of organophosphorus (OP) compounds in a complex matrix. ▶ Selective extraction of ¹H, ³¹P, and ¹³C chemical shifts and connectivities. ▶ More precise NMR identification of OP nerve agents and their degradation products. - Abstract: ¹H,¹³C correlation NMR spectroscopy utilizes J_CH couplings in molecules and provides important structural information on small organic molecules in the form of carbon chemical shifts and carbon-proton connectivities. The full potential of ¹H,¹³C correlation NMR spectroscopy has not been realized in Chemical Weapons Convention (CWC) related verification analyses because of the sample matrix, which usually contains a high amount of unrelated compounds obscuring the correlations of the relevant compounds. Here, the results of applying ¹H,¹³C,³¹P triple-resonance NMR spectroscopy to the characterization of OP compounds related to the CWC are presented. With a set of two-dimensional triple-resonance experiments, the J_HP, J_CH and J_PC couplings are utilized to map the connectivities of the atoms in OP compounds and to extract the carbon chemical shift information. With the proposed pulse sequences, the correlations from the OP compounds can be recorded without significant artifacts from non-OP impurities in the sample. Further selectivity of the observed correlations is achieved by applying a phosphorus band-selective pulse in the pulse sequences to assist the analysis of multiple OP compounds in mixture samples. The use of the triple-resonance experiments in the analysis of a complex sample is demonstrated with a test mixture containing typical scheduled OP compounds, including their characteristic degradation products.

  4. Sampled data CT system including analog filter and compensating digital filter

    International Nuclear Information System (INIS)

    Glover, G. H.; DallaPiazza, D. G.; Pelc, N. J.

    1985-01-01

    A CT scanner in which the amount of x-ray information acquired per unit time is substantially increased by using a continuously-on x-ray source and a sampled data system with the detector. An analog filter is used in the sampling system to band-limit the detector signal below the highest frequency of interest; being a practically realizable filter, however, it is non-ideal. A digital filter is applied to the detector data after digitization to compensate for the characteristics of the analog filter and to provide an overall filter characteristic closer to the ideal.
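
    The compensation idea can be illustrated with standard signal-processing tools: measure (or model) the non-ideal analog response, then design a digital FIR filter whose gain approximates its inverse within the band of interest. The sketch below uses SciPy with an illustrative 2nd-order Butterworth analog filter and arbitrary cutoff and sampling frequencies; none of these values are taken from the patented system.

```python
import numpy as np
from scipy import signal

fs = 2000.0                                  # sampling rate, Hz (illustrative)
f_interest = 400.0                           # highest frequency of interest, Hz

# Non-ideal analog anti-alias filter: 2nd-order Butterworth low-pass at 500 Hz
b_a, a_a = signal.butter(2, 2 * np.pi * 500.0, btype="low", analog=True)
f_grid = np.linspace(0.0, fs / 2, 256)
_, h_analog = signal.freqs(b_a, a_a, worN=2 * np.pi * f_grid)

# Compensating digital FIR: inverse gain inside the band of interest, zero above it
gain = np.where(f_grid <= f_interest, 1.0 / np.abs(h_analog), 0.0)
fir = signal.firwin2(numtaps=101, freq=f_grid, gain=gain, fs=fs)

_, h_fir = signal.freqz(fir, worN=f_grid, fs=fs)
combined = np.abs(h_analog) * np.abs(h_fir)
band = f_grid <= f_interest
print("combined passband gain ripple: %.3f" % np.ptp(combined[band]))
```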

  5. High-throughput miRNA profiling of human melanoma blood samples

    Directory of Open Access Journals (Sweden)

    Rass Knuth

    2010-06-01

    Full Text Available Abstract Background MicroRNA (miRNA) signatures are not only found in cancer tissue but also in the blood of cancer patients. Specifically, miRNA detection in blood offers the prospect of a non-invasive analysis tool. Methods Using a microarray-based approach we screened almost 900 human miRNAs to detect miRNAs that are deregulated in their expression in blood cells of melanoma patients. We analyzed 55 blood samples, including 20 samples of healthy individuals, 24 samples of melanoma patients as a test set, and 11 samples of melanoma patients as an independent validation set. Results A hypothesis-test-based approach detected 51 differentially regulated miRNAs, including 21 miRNAs that were downregulated and 30 miRNAs that were upregulated in blood cells of melanoma patients as compared to blood cells of healthy controls. The test set and the independent validation set of the melanoma samples showed a high correlation of fold changes (0.81). Applying hierarchical clustering and principal component analysis we found that blood samples of melanoma patients and healthy individuals can be well differentiated from each other based on miRNA expression analysis. Using a subset of 16 significantly deregulated miRNAs, we were able to reach a classification accuracy of 97.4%, a specificity of 95% and a sensitivity of 98.9% by supervised analysis. MiRNA microarray data were validated by qRT-PCR. Conclusions Our study provides strong evidence for miRNA expression signatures of blood cells as useful biomarkers for melanoma.
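
    The supervised-classification step can be mimicked with a generic cross-validated classifier on a small miRNA panel, as sketched below with synthetic expression values and scikit-learn. The linear SVM, sample counts, and effect size are placeholders; the original study's classifier and accuracy figures are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_cases, n_controls, n_mirnas = 35, 20, 16
X = np.vstack([
    rng.normal(0.8, 1.0, size=(n_cases, n_mirnas)),     # shifted expression in melanoma samples
    rng.normal(0.0, 1.0, size=(n_controls, n_mirnas)),  # healthy controls
])
y = np.array([1] * n_cases + [0] * n_controls)

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5, scoring="accuracy")
print("mean cross-validated accuracy: %.3f" % scores.mean())
```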

  6. LTRM Fish Sampling Strata, UMRS La Grange Reach

    Data.gov (United States)

    Department of the Interior — The data set includes delineation of sampling strata for the six study reaches of the UMRR Program’s LTRM element. Separate strata coverages exist for each of the...

  7. Potential High Priority Subaerial Environments for Mars Sample Return

    Science.gov (United States)

    iMOST Team; Bishop, J. L.; Horgan, B.; Benning, L. G.; Carrier, B. L.; Hausrath, E. M.; Altieri, F.; Amelin, Y.; Ammannito, E.; Anand, M.; Beaty, D. W.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Campbell, K. A.; Czaja, A. D.; Debaille, V.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Farmer, J. D.; Fernandez-Remolar, D. C.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Hallis, L. J.; Harrington, A. D.; Herd, C. D. K.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mangold, N.; Mackelprang, R.; Mayhew, L. E.; McCubbin, F. M.; Mccoy, J. T.; McLennan, S. M.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Ori, G. G.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Schmitz, N.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Tosca, N. J.; Usui, T.; Van Kranendonk, M. J.; Wadhwa, M.; Weiss, B. P.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.

    2018-04-01

    The highest-priority subaerial environments for Mars Sample Return include subaerial weathering settings (paleosols, periglacial/glacial environments, and rock coatings/rinds), wetlands (mineral precipitates, redox environments, and salt ponds), and cold spring settings.

  8. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    Science.gov (United States)

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  9. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  10. Systematic review including re-analyses of 1148 individual data sets of central venous pressure as a predictor of fluid responsiveness

    DEFF Research Database (Denmark)

    Eskesen, T G; Wetterslev, M; Perner, A

    2016-01-01

    PURPOSE: Central venous pressure (CVP) has been shown to have poor predictive value for fluid responsiveness in critically ill patients. We aimed to re-evaluate this in a larger sample subgrouped by baseline CVP values. METHODS: In April 2015, we systematically searched and included all clinical...

  11. The impact of screening-test negative samples not enumerated by MPN

    DEFF Research Database (Denmark)

    Corbellini, Luis Gustavo; Ribeiro Duarte, Ana Sofia; de Knegt, Leonardo

    2015-01-01

    that includes false negative results from the screening, and a third that considers the entire data set. The relative sensitivity of the screening test was also calculated assuming as gold standard samples with confirmed Salmonella. Salmonella was confirmed by a reference laboratory in 29 samples either...

  12. LTRM Water Quality Sampling Strata, UMRS La Grange Reach

    Data.gov (United States)

    Department of the Interior — The data set includes delineation of sampling strata for the six study reaches of the UMRR Program’s LTRM element. Separate strata coverages exist for each of the...

  13. A replica exchange transition interface sampling method with multiple interface sets for investigating networks of rare events

    Science.gov (United States)

    Swenson, David W. H.; Bolhuis, Peter G.

    2014-07-01

    The multiple state transition interface sampling (TIS) framework in principle allows the simulation of a large network of complex rare event transitions, but in practice suffers from convergence problems. To improve convergence, we combine multiple state TIS [J. Rogal and P. G. Bolhuis, J. Chem. Phys. 129, 224107 (2008)] with replica exchange TIS [T. S. van Erp, Phys. Rev. Lett. 98, 268301 (2007)]. In addition, we introduce multiple interface sets, which allow more than one order parameter to be defined for each state. We illustrate the methodology on a model system of multiple independent dimers, each with two states. For reaction networks with up to 64 microstates, we determine the kinetics in the microcanonical ensemble, and discuss the convergence properties of the sampling scheme. For this model, we find that the kinetics depend on the instantaneous composition of the system. We explain this dependence in terms of the system's potential and kinetic energy.

  14. Optimising Mycobacterium tuberculosis detection in resource limited settings.

    Science.gov (United States)

    Alfred, Nwofor; Lovette, Lawson; Aliyu, Gambo; Olusegun, Obasanya; Meshak, Panwal; Jilang, Tunkat; Iwakun, Mosunmola; Nnamdi, Emenyonu; Olubunmi, Onuoha; Dakum, Patrick; Abimiku, Alash'le

    2014-03-03

    Light-emitting diode (LED) fluorescence microscopy has made acid-fast bacilli (AFB) detection faster and more efficient, although its optimal performance in resource-limited settings is still being studied. We assessed the optimal performance of light and fluorescence microscopy under the routine conditions of a resource-limited setting and evaluated the digestion time for sputum samples that gives the maximum yield of positive cultures. Cross-sectional study. Facility-based study involving samples from routine patients receiving tuberculosis treatment and care at the main tuberculosis case referral centre in northern Nigeria. The study included 450 sputum samples from 150 new patients with a clinical diagnosis of pulmonary tuberculosis. The 450 samples were pooled into 150 specimens, examined independently with mercury vapour lamp (FM), LED CysCope (CY) and Primo Star iLED (PiLED) fluorescence microscopies, and with Ziehl-Neelsen (ZN) microscopy, to assess the performance of each technique compared with liquid culture. The cultured specimens were decontaminated with BD Mycoprep (4% NaOH-1% NLAC and 2.9% sodium citrate) for 10, 15 and 20 min before incubation in the Mycobacterium growth indicator tube (MGIT) system, and growth was examined for AFB. Of the 150 specimens examined by direct microscopy, 44 (29%), 60 (40%), 49 (33%) and 64 (43%) were AFB positive by ZN, FM, CY and iLED microscopy, respectively. Digestion of sputum samples for 10, 15 and 20 min yielded mycobacterial growth in 72 (48%), 81 (54%) and 68 (45%) of the digested samples, respectively, after incubation in the MGIT system. In the routine laboratory conditions of a resource-limited setting, our study has demonstrated the superiority of fluorescence microscopy over the conventional ZN technique. Digestion of sputum samples for 15 min yielded the most positive cultures.

  15. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  16. FT-midIR determination of fatty acid profiles, including trans fatty acids, in bakery products after focused microwave-assisted Soxhlet extraction.

    Science.gov (United States)

    Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D

    2006-08-01

    A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for the analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets used to develop the FT-midIR general equations (75%, 102 samples, and 25%, 36 samples, of the samples remaining after removal of spectral outliers) were built from 140 commercial and home-made bakery products. The concentration of the analytes in the samples used for this study is within the typical range found in these kinds of products. Both sets were independent; thus, the validation set was used only for testing the equations. The criterion used for the selection of the validation set was samples with the highest number of neighbours and the most separation between them. Calibration statistics ranged from R² ≥ 0.90 with SEP = 1-1.5 SEL to R² = 0.70-0.89 with SEP = 2-3 SEL. The results obtained with the proposed method were compared with those provided by the conventional method based on GC-MS. At the 95% significance level, the differences between the values obtained for the different fatty acids were within the experimental error.

  17. Sample Set (SE): SE48 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available onoids and eight rare flavonolignan isomers, were isolat... elucidated the structures of specialized metabolites from rice by using MS/MS and NMR. Thirty-six compounds, including five new flav

  18. The upgraded external-beam PIXE/PIGE set-up at LABEC for very fast measurements on aerosol samples

    Energy Technology Data Exchange (ETDEWEB)

    Lucarelli, F., E-mail: lucarelli@fi.infn.it; Calzolai, G.; Chiari, M.; Giannoni, M.; Mochi, D.; Nava, S.; Carraresi, L.

    2014-01-01

    At the 3 MV Tandetron accelerator of the LABEC laboratory of INFN in Florence, an external beam facility is fully dedicated to measurements of the elemental composition of atmospheric aerosol. The experimental set-up hitherto used for this kind of application has been upgraded by replacing the traditional Si(Li) detector for the detection of medium-high Z elements with a silicon drift detector (SDD) with a large active area (80 mm²) and 450 μm thickness, with the aim of obtaining better minimum detection limits (MDL) and reducing measuring times. The Upilex extraction window has been replaced by a more resistant one (Si₃N₄). A comparison between the old Si(Li) and the new SDD for aerosol samples collected on different substrata, such as Teflon, Kapton and Nuclepore, showed the better performance of the SDD. It yields better results (higher counting statistics, lower MDLs) even in shorter measuring times, thus allowing very fast analysis of both daily and hourly samples.

  19. Nanostructured and nanolayer coatings based on nitrides of the metals structure study and structure and composition standard samples set development

    Directory of Open Access Journals (Sweden)

    E. B. Chabina

    2014-01-01

    Full Text Available Analytical microscopy and X-ray analysis were used to develop a set of standard samples of the composition and structure of strengthening nanostructured and nanolayer coatings based on metal nitrides. These coatings are used to protect critical compressor parts of gas turbine engines from dust erosion, corrosion and oxidation, and the standard samples enable quality control of such coatings.

  20. Role of the urate transporter SLC2A9 gene in susceptibility to gout in New Zealand Māori, Pacific Island, and Caucasian case-control sample sets.

    Science.gov (United States)

    Hollis-Moffatt, Jade E; Xu, Xin; Dalbeth, Nicola; Merriman, Marilyn E; Topless, Ruth; Waddell, Chloe; Gow, Peter J; Harrison, Andrew A; Highton, John; Jones, Peter B B; Stamp, Lisa K; Merriman, Tony R

    2009-11-01

    To examine the role of genetic variation in the renal urate transporter SLC2A9 in gout in New Zealand sample sets of Māori, Pacific Island, and Caucasian ancestry and to determine if the Māori and Pacific Island samples could be useful for fine-mapping. Patients (n = 56 Māori, 69 Pacific Island, and 131 Caucasian) were recruited from rheumatology outpatient clinics and satisfied the American College of Rheumatology criteria for gout. The control samples comprised 125 Māori subjects, 41 Pacific Island subjects, and 568 Caucasian subjects without arthritis. SLC2A9 single-nucleotide polymorphisms rs16890979 (V253I), rs5028843, rs11942223, and rs12510549 were genotyped (possible etiologic variants in Caucasians). Association of the major allele of rs16890979, rs11942223, and rs5028843 with gout was observed in all sample sets (P = 3.7 × 10⁻⁷, 1.6 × 10⁻⁶, and 7.6 × 10⁻⁵ for rs11942223 in the Māori, Pacific Island, and Caucasian samples, respectively). One 4-marker haplotype (1/1/2/1; more prevalent in the Māori and Pacific Island control samples) was not observed in a single gout case. Our data confirm a role of SLC2A9 in gout susceptibility in a New Zealand Caucasian sample set, with the effect on risk (odds ratio > 2.0) greater than previous estimates. We also demonstrate association of SLC2A9 with gout in samples of Māori and Pacific Island ancestry and a consistent pattern of haplotype association. The presence of both alleles of rs16890979 on susceptibility and protective haplotypes in the Māori and Pacific Island samples is evidence against a role for this nonsynonymous variant as the sole etiologic agent. More extensive linkage disequilibrium in Māori and Pacific Island samples suggests that Caucasian samples may be more useful for fine-mapping.

  1. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    Science.gov (United States)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  2. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    Full Text Available We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
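
    A one-dimensional cut-and-project construction conveys the core idea: lattice points of Z² lying in a narrow strip are projected onto a line of irrational, golden-ratio-related slope, producing a uniformly discrete, non-periodic point set. The window width and extent below are illustrative; the paper's two-dimensional sampling patterns are built on the same principle.

```python
import math

TAU = (1 + math.sqrt(5)) / 2                      # golden ratio

def cut_and_project(n_max, width=1.0):
    # unit vector along the "physical" line with slope 1/TAU
    norm = math.hypot(1.0, 1.0 / TAU)
    ex, ey = 1.0 / norm, (1.0 / TAU) / norm
    points = []
    for i in range(-n_max, n_max + 1):
        for j in range(-n_max, n_max + 1):
            perp = -i * ey + j * ex               # distance from the line ("internal" space)
            if abs(perp) <= width / 2:
                points.append(i * ex + j * ey)    # projection onto the line
    return sorted(points)

pts = cut_and_project(20)
gaps = sorted({round(b - a, 3) for a, b in zip(pts, pts[1:])})
print(len(pts), gaps)                             # only a few distinct gap lengths: quasiperiodic, not random
```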

  3. Probabilistic Open Set Recognition

    Science.gov (United States)

    Jain, Lalit Prithviraj

    support vector machines. Building from the success of statistical EVT-based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between the positive training sample and the closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.

  4. Temperature Control Diagnostics for Sample Environments

    International Nuclear Information System (INIS)

    Santodonato, Louis J.; Walker, Lakeisha M.H.; Church, Andrew J.; Redmon, Christopher Mckenzie

    2010-01-01

    In a scientific laboratory setting, standard equipment such as cryocoolers are often used as part of a custom sample environment system designed to regulate temperature over a wide range. The end user may be more concerned with precise sample temperature control than with base temperature. But cryogenic systems tend to be specified mainly in terms of cooling capacity and base temperature. Technical staff at scientific user facilities (and perhaps elsewhere) often wonder how to best specify and evaluate temperature control capabilities. Here we describe test methods and give results obtained at a user facility that operates a large sample environment inventory. Although this inventory includes a wide variety of temperature, pressure, and magnetic field devices, the present work focuses on cryocooler-based systems.

  5. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts involved in estimating the sample size.
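
    For the two-group comparison described above, the standard normal-approximation formula gives the required sample size per group from the alpha level, the desired power and the expected proportions. The sketch below uses Python's statistics module; the example proportions (30% vs 45%) are illustrative only and are not taken from the article.

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)     # two-sided critical value
    z_b = NormalDist().inv_cdf(power)             # quantile for the desired power
    p_bar = (p1 + p2) / 2
    term = z_a * (2 * p_bar * (1 - p_bar)) ** 0.5 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5
    return term ** 2 / (p1 - p2) ** 2

# e.g. detecting a change from 30% to 45% with alpha = 0.05 and 80% power
print(round(n_per_group(0.30, 0.45)))             # subjects required in each group
```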

  6. Sample Set (SE): SE59 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE59 Alternation of flavonoid accumulation under drought stress in Arabidopsis thaliana. We reported that flavonoids, a class of specialized metabolites, including flavonols and anthocyanins, w... ...idopsis thaliana (Arabidopsis). However, the behavior of flavonoids during drought stress is still not well documented. Herein we investigated the time-series alternation of flavonoids in t...

  7. The Internet of Samples in the Earth Sciences (iSamples)

    Science.gov (United States)

    Carter, M. R.; Lehnert, K. A.

    2015-12-01

    Across most Earth Science disciplines, research depends on the availability of samples collected above, at, and beneath Earth's surface, on the moon and in space, or generated in experiments. Many domains in the Earth Sciences have recently expressed the need for better discovery, access, and sharing of scientific samples and collections (EarthCube End-User Domain workshops, 2012 and 2013, http://earthcube.org/info/about/end-user-workshops), as has the US government (OSTP Memo, March 2014). The Internet of Samples in the Earth Sciences (iSamples) is an initiative funded as a Research Coordination Network (RCN) within the EarthCube program to address this need. iSamples aims to advance the use of innovative cyberinfrastructure to connect physical samples and sample collections across the Earth Sciences with digital data infrastructures to revolutionize their utility for science. iSamples strives to build, grow, and foster a new community of practice, in which domain scientists, curators of sample repositories and collections, computer and information scientists, software developers and technology innovators engage in and collaborate on defining, articulating, and addressing the needs and challenges of physical samples as a critical component of digital data infrastructure. A primary goal of iSamples is to deliver a community-endorsed set of best practices and standards for the registration, description, identification, and citation of physical specimens and to define an actionable plan for implementation. iSamples conducted a broad community survey about sample sharing and has created 5 different working groups to address the different challenges of developing the internet of samples - from metadata schemas and unique identifiers to an architecture of a shared cyberinfrastructure for collections, to digitization of existing collections, to education, and ultimately to establishing the physical infrastructure that will ensure preservation of and access to the physical samples.

  8. Comparison of established and novel purity tests for the quality control of heparin by means of a set of 177 heparin samples.

    Science.gov (United States)

    Alban, Susanne; Lühn, Susanne; Schiemann, Simone; Beyer, Tanja; Norwig, Jochen; Schilling, Claudia; Rädler, Oliver; Wolf, Bernhard; Matz, Magnus; Baumann, Knut; Holzgrabe, Ulrike

    2011-01-01

    The widespread occurrence of heparin contaminated with oversulfated chondroitin sulfate (OSCS) in 2008 initiated a comprehensive revision process of the pharmacopoeial heparin monographs and stimulated research in analytical techniques for the quality control of heparin. Here, a set of 177 heparin samples from the market in 2008, as well as pure heparin sodium spiked with defined amounts of OSCS and DS, were used to evaluate established and novel methods for the quality control of heparin. Besides ¹H nuclear magnetic resonance (NMR) spectroscopy, the assessment included two further spectroscopic methods, i.e., attenuated total reflection-infrared spectroscopy (ATR-IR) and Raman spectroscopy, three coagulation assays, i.e., the activated partial thromboplastin time (aPTT) performed with both sheep and human plasma and the prothrombin time (PT), and finally two novel purity assays, each consisting of an incubation step with heparinase I followed by either a fluorescence measurement (Inc-PolyH-assay) or a chromogenic aXa-assay (Inc-aXa-assay). NMR was shown to allow not only sensitive detection but also quantification of OSCS by using the peak-height method and a response factor determined by calibration. Chemometric evaluation of the NMR, ATR-IR, and Raman spectra by statistical classification techniques turned out to be best with NMR spectra concerning the detection of OSCS. The validity of the aPTT, the current EP assay, could be considerably improved by replacing the sheep plasma with human plasma. In this way, most of the contaminated heparin samples did not meet the novel potency limit of 180 IU/mg. However, more than 50% of the uncontaminated samples also fell below this limit, which complicates the interpretation of the results.

  9. Varying Associations Between Body Mass Index and Physical and Cognitive Function in Three Samples of Older Adults Living in Different Settings.

    Science.gov (United States)

    Kiesswetter, Eva; Schrader, Eva; Diekmann, Rebecca; Sieber, Cornel Christian; Volkert, Dorothee

    2015-10-01

    The study investigates variations in the associations between body mass index (BMI) and (a) physical and (b) cognitive function across three samples of older adults living in different settings, and moreover determines if the association between BMI and physical function is confounded by cognitive abilities. One hundred ninety-five patients of a geriatric day hospital, 322 persons receiving home care (HC), and 183 nursing home (NH) residents were examined regarding BMI, cognitive function (Mini-Mental State Examination), and physical function (Barthel Index for activities of daily living). Differences in Mini-Mental State Examination and activities of daily living scores between BMI groups were examined. Mini-Mental State Examination impairments increased from the geriatric day hospital over the HC to the NH sample, whereas prevalence rates of obesity and severe obesity (35%, 33%, 25%) decreased. In geriatric day hospital patients, cognitive and physical function did not differ between BMI groups. In the HC and NH samples, cognitive abilities were highest in obese and severely obese subjects. Unadjusted mean activities of daily living scores differed between BMI groups in HC receivers (51.6±32.2, 61.8±26.1, 67.5±28.3, 72.0±23.4, 66.2±24.2, p = .002) and NH residents (35.6±28.6, 48.1±25.7, 39.9±28.7, 50.8±24.0, 57.1±28.2, p = .029). In both samples significance was lost after adjustment, indicating cognitive function as the dominant confounder. In older adults the associations between BMI and physical and cognitive function were dependent on the health and care status corresponding to the setting. In the HC and the NH samples, cognitive status, as measured by the Mini-Mental State Examination, emerged as an important confounder within the association between BMI and physical function. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    Science.gov (United States)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model
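
    A toy version of the Direct Sampling idea for a univariate time series is sketched below: each new value is copied from a scanned position in the training series whose preceding pattern of neighbors is sufficiently similar. This is a strong simplification of the actual DS algorithm described in [1] (no multivariate grids, no spatial simulation, only a simple scan-fraction limit), and all data and thresholds are synthetic.

```python
import numpy as np

def ds_simulate(train, n_sim, n_neigh=5, threshold=0.1, scan_fraction=0.5, seed=0):
    rng = np.random.default_rng(seed)
    sim = list(train[:n_neigh])                          # seed the simulation with training values
    candidates = np.arange(n_neigh, len(train))
    n_scan = int(scan_fraction * len(candidates))
    for _ in range(n_sim):
        pattern = np.array(sim[-n_neigh:])
        best_idx, best_dist = candidates[0], np.inf
        for idx in rng.permutation(candidates)[:n_scan]:
            dist = np.mean(np.abs(train[idx - n_neigh:idx] - pattern))
            if dist < best_dist:
                best_idx, best_dist = idx, dist
            if dist <= threshold:                        # accept the first sufficiently close match
                break
        sim.append(train[best_idx])                      # copy the value that followed the match
    return np.array(sim)

# Training series: a synthetic, positively skewed "rainfall-like" signal
rng = np.random.default_rng(1)
train = np.convolve(rng.exponential(1.0, 1000), np.ones(3) / 3, mode="same")
print(ds_simulate(train, n_sim=10).round(2))
```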

  11. Polymerase chain reaction system using magnetic beads for analyzing a sample that includes nucleic acid

    Science.gov (United States)

    Nasarabadi, Shanavaz [Livermore, CA

    2011-01-11

    A polymerase chain reaction system for analyzing a sample containing nucleic acid includes providing magnetic beads; providing a flow channel having a polymerase chain reaction chamber, a pre polymerase chain reaction magnet position adjacent the polymerase chain reaction chamber, and a post polymerase chain reaction magnet position adjacent the polymerase chain reaction chamber. The nucleic acid is bound to the magnetic beads. The magnetic beads with the nucleic acid flow to the pre polymerase chain reaction magnet position in the flow channel. The magnetic beads and the nucleic acid are washed with ethanol. The nucleic acid in the polymerase chain reaction chamber is amplified. The magnetic beads and the nucleic acid are separated into a waste stream containing the magnetic beads and a post polymerase chain reaction mix containing the nucleic acid. The reaction mix containing the nucleic acid flows to an analysis unit in the channel for analysis.

  12. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most...
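
    One classical way to maintain a uniform sample of fixed size from a single stream is reservoir sampling; the distributed protocols in the article address the harder problem of coordinating many such sites with low communication, which the sketch below does not attempt. Parameters and data are illustrative.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Maintain a uniform random sample of size k from a single stream.

    Classic reservoir sampling: item i (0-based) replaces a random slot
    with probability k / (i + 1), so every item ends up in the sample
    with equal probability regardless of stream length.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(10_000), k=5, seed=42))
```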

  13. Assessing the Alcohol-BMI Relationship in a US National Sample of College Students

    Science.gov (United States)

    Barry, Adam E.; Piazza-Gardner, Anna K.; Holton, M. Kim

    2015-01-01

    Objective: This study sought to assess the body mass index (BMI)-alcohol relationship among a US national sample of college students. Design: Secondary data analysis using the Fall 2011 National College Health Assessment (NCHA). Setting: A total of 44 US higher education institutions. Methods: Participants included a national sample of college…

  14. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can then be generalized to the target population.
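
    As a concrete illustration of the estimation step for a categorical outcome, the usual normal-approximation formula n = z^2 p(1-p)/d^2 (with an optional finite-population correction) can be computed directly; the prevalence, margin and population figures below are illustrative only.

```python
from math import ceil
from statistics import NormalDist

def sample_size_proportion(p, margin, confidence=0.95, population=None):
    """Required sample size to estimate a proportion within +/- margin."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:            # finite-population correction
        n = n / (1 + (n - 1) / population)
    return ceil(n)

# e.g. expected prevalence 30%, +/-5% precision, 95% confidence level
print(sample_size_proportion(0.30, 0.05))                   # about 323
print(sample_size_proportion(0.30, 0.05, population=2000))  # smaller, with FPC
```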

  15. The redshift distribution of cosmological samples: a forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina, E-mail: joerg.herbel@phys.ethz.ch, E-mail: tomasz.kacprzak@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch, E-mail: claudio.bruderer@phys.ethz.ch, E-mail: andrina.nicola@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
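
    The forward-modeling logic (simulate, compare to data, keep parameter sets that are statistically consistent) can be illustrated with a generic ABC rejection sketch on a toy model. This is not the UFig/MCCL pipeline: the "forward model", prior ranges and tolerance below are placeholders chosen only to show the mechanics of deriving a set of acceptable models.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(mean_z, sigma_z, n=1000):
    """Toy stand-in for an image/catalogue simulator: returns summary stats."""
    z = rng.normal(mean_z, sigma_z, size=n)
    return np.array([z.mean(), z.std()])

observed = forward_model(0.9, 0.3, n=5000)   # stand-in for the observed summary

# ABC rejection: keep parameter draws whose simulated summaries land close
# to the observed ones; the kept draws play the role of "acceptable models".
accepted = []
for _ in range(20000):
    mean_z = rng.uniform(0.2, 1.6)
    sigma_z = rng.uniform(0.05, 0.8)
    if np.linalg.norm(forward_model(mean_z, sigma_z) - observed) < 0.05:
        accepted.append((mean_z, sigma_z))

accepted = np.array(accepted)
print(len(accepted), accepted.mean(axis=0))
```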

  16. The redshift distribution of cosmological samples: a forward modeling approach

    Science.gov (United States)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \\textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  17. The redshift distribution of cosmological samples: a forward modeling approach

    International Nuclear Information System (INIS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-01-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  18. Sample Selection for Training Cascade Detectors.

    Science.gov (United States)

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are typically such that the positive set has few samples and/or the negative set must represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
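
    The stage-wise selection of informative false positives (hard-negative mining) can be sketched with a generic classifier standing in for a cascade stage; the synthetic data, stage count and 0.5 acceptance threshold below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: a small positive set and a much larger negative pool.
X_pos = rng.normal(loc=0.6, size=(200, 10))
X_neg_pool = rng.normal(loc=0.0, size=(20000, 10))

def train_cascade(X_pos, X_neg_pool, n_stages=3, negatives_per_stage=400):
    stages, pool = [], X_neg_pool
    for _ in range(n_stages):
        idx = rng.choice(len(pool), size=negatives_per_stage, replace=False)
        X = np.vstack([X_pos, pool[idx]])
        y = np.r_[np.ones(len(X_pos)), np.zeros(negatives_per_stage)]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        stages.append(clf)
        # Hard-negative mining: keep only negatives this stage still accepts
        # (false positives), ranked by score, to train the next stage.
        scores = clf.predict_proba(pool)[:, 1]
        order = np.argsort(-scores)
        pool = pool[order[scores[order] > 0.5]]
        if len(pool) < negatives_per_stage:
            break
    return stages

stages = train_cascade(X_pos, X_neg_pool)
print(f"trained {len(stages)} stage(s)")
```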

  19. Effects of manual threshold setting on image analysis results of a sandstone sample structural characterization by X-ray microtomography

    International Nuclear Information System (INIS)

    Moreira, Anderson C.; Fernandes, Celso P.; Fernandes, Jaquiel S.; Marques, Leonardo C.; Appoloni, Carlos R.; Nagata, Rodrigo

    2009-01-01

    X-ray microtomography is a nondestructive nuclear technique widely applied for structural characterization of samples. This methodology permits the investigation of the material's porous phase, without special sample preparation, generating bidimensional images of the irradiated sample. The images are generated by mapping the linear attenuation coefficient of the sample. In order to perform a quantitative characterization, the images have to be binarized, separating the porous phase from the solid matrix. The choice of the correct threshold in the grey level histogram is an important and discerning step in the creation of binary images. Slight variations of the threshold level lead to substantial variations in the determination of physical parameters, such as porosity and pore size distribution. The aim of this work is to evaluate these variations based on manual threshold setting. Employing the Imago image analysis software, four operators determined the porosity and pore size distribution of a sandstone sample by image analysis. The microtomography measurements were accomplished with the following scan conditions: 60 kV, 165 μA, 1 mm Al filter, 0.45 deg step size and 180.0 deg total rotation angle, with 3.8 μm and 11 μm spatial resolutions. The global average porosity values determined by the operators range from 27.8 to 32.4 % for 3.8 μm spatial resolution and 12.3 to 28.3 % for 11 μm spatial resolution. Percentage differences among the pore size distributions were also found: for the same pore size range, differences of 5.5 % and 17.1 % were noted for the 3.8 μm and 11 μm spatial resolutions, respectively. (author)
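
    The sensitivity to the manual threshold is easy to reproduce on a synthetic slice: porosity is simply the fraction of pixels falling below the chosen grey level, so nearby thresholds yield visibly different porosities. The image and threshold values below are synthetic stand-ins, not the sandstone data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8-bit "tomographic slice": dark pores inside a brighter matrix.
slice_img = rng.normal(160, 20, size=(512, 512))            # matrix grey levels
pores = rng.random((512, 512)) < 0.30                        # ~30% pore pixels
slice_img[pores] = rng.normal(80, 15, size=pores.sum())      # darker pore phase
slice_img = np.clip(slice_img, 0, 255)

def porosity(image, threshold):
    """Fraction of pixels classified as pore (grey level below threshold)."""
    return float((image < threshold).mean())

# Slightly different manual thresholds give noticeably different porosities.
for t in (110, 120, 130):
    print(f"threshold {t}: porosity = {porosity(slice_img, t):.3f}")
```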

  20. Identification of a set of endogenous reference genes for miRNA expression studies in Parkinson's disease blood samples.

    Science.gov (United States)

    Serafin, Alice; Foco, Luisa; Blankenburg, Hagen; Picard, Anne; Zanigni, Stefano; Zanon, Alessandra; Pramstaller, Peter P; Hicks, Andrew A; Schwienbacher, Christine

    2014-10-10

    Research on microRNAs (miRNAs) is becoming an increasingly attractive field, as these small RNA molecules are involved in several physiological functions and diseases. To date, only a few studies have assessed the expression of blood miRNAs related to Parkinson's disease (PD) using microarray and quantitative real-time PCR (qRT-PCR). Measuring miRNA expression involves normalization of qRT-PCR data using endogenous reference genes for calibration, but their choice remains a delicate problem with serious impact on the resulting expression levels. The aim of the present study was to evaluate the suitability of a set of commonly used small RNAs as normalizers and to identify which of these miRNAs might be considered reliable reference genes in qRT-PCR expression analyses on PD blood samples. Commonly used reference genes snoRNA RNU24, snRNA RNU6B, snoRNA Z30 and miR-103a-3p were selected from the literature. We then analyzed the effect of using these genes as reference, alone or in any possible combination, on the measured expression levels of the target genes miR-30b-5p and miR-29a-3p, which have been previously reported to be deregulated in PD blood samples. We identified RNU24 and Z30 as a reliable and stable pair of reference genes in PD blood samples.
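
    Once reference genes are chosen, normalization is commonly done with the comparative Ct (2^-ΔΔCt) method against the mean Ct of the references; a minimal sketch follows, with purely illustrative Ct values and the study's reference pair (RNU24, Z30) assumed as the normalizer.

```python
import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """2^-ddCt relative expression of a target miRNA versus a control sample,
    normalized to the mean Ct of the chosen reference genes."""
    d_ct = ct_target - np.mean(ct_refs)                  # sample delta-Ct
    d_ct_ctrl = ct_target_ctrl - np.mean(ct_refs_ctrl)   # control delta-Ct
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Illustrative Ct values: target miR-30b-5p vs. RNU24 and Z30 references.
print(relative_expression(27.1, [24.8, 25.2], 26.0, [24.9, 25.1]))
```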

  1. Strong smoker interest in 'setting an example to children' by quitting: national survey data.

    Science.gov (United States)

    Thomson, George; Wilson, Nick; Weerasekera, Deepa; Edwards, Richard

    2011-02-01

    To further explore smoker views on reasons to quit. As part of the multi-country ITC Project, a national sample of 1,376 New Zealand adult (18+ years) smokers was surveyed in 2007/08. This sample included boosted sampling of Māori, Pacific and Asian New Zealanders. 'Setting an example to children' was given as 'very much' a reason to quit by 51%, compared to 45% giving personal health concerns. However, the 'very much' and 'somewhat' responses (combined) were greater for personal health (81%) than 'setting an example to children' (74%). Price was the third-ranked reason (67%). In a multivariate analysis, women were significantly more likely to state that 'setting an example to children' was 'very much' or 'somewhat' a reason to quit; as were Māori and Pacific respondents compared with European respondents; and those suffering financial stress. The relatively high importance of 'example to children' as a reason to quit is an unusual finding, and may have arisen as a result of social marketing campaigns encouraging cessation to protect families in New Zealand. The policy implications could include a need for a greater emphasis on social reasons (e.g. 'example to children'), in pack warnings, and in social marketing for smoking cessation. © 2011 The Authors. ANZJPH © 2010 Public Health Association of Australia.

  2. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    Science.gov (United States)

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers, respectively. Nine suspicious outliers were detected from the calibration set, which included 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. The performance of this model was better than that of the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and it was more representative and stable than the model developed with all 9 suspicious samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
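
    One of the listed screens, the leverage and studentized residual test, can be sketched generically on a calibration fit: points with unusually high leverage or large studentized residuals are flagged as suspects. The data and cut-off values (|t| > 2.5, leverage > 3p/n) below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: model-predicted vs. reference soluble solids content.
y_ref = rng.normal(10, 1.5, size=85)
y_pred = y_ref + rng.normal(0, 0.6, size=85)
y_pred[::20] += 2.5                        # a few artificial outliers

# Leverage (hat-matrix diagonal) from a single-predictor linear calibration.
X = np.column_stack([np.ones_like(y_pred), y_pred])
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

# Internally studentized residuals of the reference values against the fit.
beta, *_ = np.linalg.lstsq(X, y_ref, rcond=None)
resid = y_ref - X @ beta
s = np.sqrt(np.sum(resid ** 2) / (len(y_ref) - X.shape[1]))
studentized = resid / (s * np.sqrt(1 - leverage))

suspects = np.where((np.abs(studentized) > 2.5) |
                    (leverage > 3 * X.shape[1] / len(y_ref)))[0]
print("suspected outliers:", suspects)
```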

  3. Least-squares resolution of gamma-ray spectra in environmental samples

    International Nuclear Information System (INIS)

    Kanipe, L.G.; Seale, S.K.; Liggett, W.S.

    1977-08-01

    The use of ALPHA-M, a least squares computer program for analyzing NaI (Tl) gamma spectra of environmental samples, is evaluated. Included is a comprehensive set of program instructions, listings, and flowcharts. Two other programs, GEN4 and SIMSPEC, are also described. GEN4 is used to create standard libraries for ALPHA-M, and SIMSPEC is used to simulate spectra for ALPHA-M analysis. Tests to evaluate the standard libraries selected for use in analyzing environmental samples are provided. An evaluation of the results of sample analyses is discussed
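
    The underlying least-squares resolution step, expressing a measured spectrum as a non-negative combination of standard-library spectra, can be sketched as follows; the synthetic library components and peak positions are illustrative, and this is not the ALPHA-M code.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy standard library: three reference gamma spectra over 256 channels.
channels = np.arange(256)
def peak(center, width, area=1.0):
    return area * np.exp(-0.5 * ((channels - center) / width) ** 2)

library = np.column_stack([peak(60, 5),    # first reference component
                           peak(120, 6),   # second reference component
                           peak(200, 8)])  # third reference component

# Synthetic environmental spectrum: a mixture of the standards plus noise.
true_activity = np.array([3.0, 0.5, 1.2])
sample = library @ true_activity + rng.normal(0, 0.02, size=channels.size)

# Non-negative least squares resolves the component activities.
activity, residual_norm = nnls(library, sample)
print(np.round(activity, 2), round(residual_norm, 3))
```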

  4. Data Set for the manuscript entitled, "Sample Processing Approach for Detection of Ricin in Surface Samples."

    Data.gov (United States)

    U.S. Environmental Protection Agency — Figure. This dataset is associated with the following publication: Shah, S., S. Kane, A.M. Erler, and T. Alfaro. Sample Processing Approach for Detection of Ricin in...

  5. Sample Selection for Training Cascade Detectors.

    Directory of Open Access Journals (Sweden)

    Noelia Vállez

    Full Text Available Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are typically such that the positive set has few samples and/or the negative set must represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.

  6. Nonlinear Dynamics of Cantilever-Sample Interactions in Atomic Force Microscopy

    Science.gov (United States)

    Cantrell, John H.; Cantrell, Sean A.

    2010-01-01

    The interaction of the cantilever tip of an atomic force microscope (AFM) with the sample surface is obtained by treating the cantilever and sample as independent systems coupled by a nonlinear force acting between the cantilever tip and a volume element of the sample surface. The volume element is subjected to a restoring force from the remainder of the sample that provides dynamical equilibrium for the combined systems. The model accounts for the positions on the cantilever of the cantilever tip, laser probe, and excitation force (if any) via a basis set of orthogonal functions that may be generalized to account for arbitrary cantilever shapes. The basis set is extended to include nonlinear cantilever modes. The model leads to a pair of coupled nonlinear differential equations that are solved analytically using a matrix iteration procedure. The effects of oscillatory excitation forces applied either to the cantilever or to the sample surface (or to both) are obtained from the solution set and applied to the assessment of phase and amplitude signals generated by various acoustic-atomic force microscope (A-AFM) modalities. The influence of bistable cantilever modes on AFM signal generation is discussed. The effects on the cantilever-sample surface dynamics of subsurface features embedded in the sample that are perturbed by surface-generated oscillatory excitation forces and carried to the cantilever via wave propagation are accounted for by the Bolef-Miller propagating wave model. Expressions pertaining to signal generation and image contrast in A-AFM are obtained and applied to amplitude modulation (intermittent contact) atomic force microscopy and resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM). The influence of phase accumulation in A-AFM on image contrast is discussed, as is the effect of hard contact and maximum nonlinearity regimes of A-AFM operation.
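
    A heavily simplified, single-mode version of such coupled dynamics can be integrated numerically: a driven, damped oscillator with a nonlinear tip-sample force term. The sketch below uses an illustrative van der Waals-type attraction and arbitrary cantilever parameters; it is a toy model, not the multi-mode matrix-iteration solution described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy single-mode cantilever with a nonlinear tip-sample force; all
# parameter values are illustrative, not fitted to any instrument.
k, f0, Q = 40.0, 300e3, 50.0                     # stiffness (N/m), resonance (Hz), Q
w0 = 2 * np.pi * f0
F_dr, w_dr = 2.0e-9, w0                          # drive amplitude (N) and frequency
z_rest, R, H, a0 = 5e-9, 10e-9, 2e-19, 0.3e-9    # gap, tip radius, Hamaker, cutoff

def tip_sample_force(gap):
    """Attractive sphere-plane van der Waals force, clipped below a0."""
    return -H * R / (6 * max(gap, a0) ** 2)

def rhs(t, y):
    x, v = y                                     # tip deflection and velocity
    force = F_dr * np.cos(w_dr * t) + tip_sample_force(z_rest + x)
    a = -w0 ** 2 * x - (w0 / Q) * v + (w0 ** 2 / k) * force
    return [v, a]

sol = solve_ivp(rhs, (0.0, 200 / f0), [0.0, 0.0], max_step=1 / (50 * f0))
steady = sol.y[0][sol.t > 150 / f0]
print(f"steady-state peak-to-peak amplitude ~ {np.ptp(steady):.2e} m")
```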

  7. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different
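
    For the scenario where a trial reports only the minimum a, median m, maximum b and sample size n, commonly cited approximations in this line of work are mean ≈ (a + 2m + b)/4 and SD ≈ (b − a) / (2 Φ⁻¹((n − 0.375)/(n + 0.25))), the latter dividing the range by the expected range of n standard normal draws. A minimal sketch, assuming roughly normal data and using these formulas as stated here rather than transcribed from the paper's summary table:

```python
from statistics import NormalDist

def estimate_mean_sd(a, m, b, n):
    """Approximate mean and SD from min (a), median (m), max (b) and size n,
    assuming approximately normal data."""
    mean = (a + 2 * m + b) / 4
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))   # expected range
    sd = (b - a) / xi
    return mean, sd

# Illustrative trial report: min 10, median 20, max 30, n = 25.
print(estimate_mean_sd(10, 20, 30, 25))   # roughly (20.0, 5.1)
```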

  8. Luna B. Leopold--pioneer setting the stage for modern hydrology

    Science.gov (United States)

    Hunt, Randall J.; Meine, Curt

    2012-01-01

    In 1986, during the first year of graduate school, the lead author was sampling the water from a pitcher pump in front of “The Shack,” the setting of the opening essays in Aldo Leopold's renowned book A Sand County Almanac. The sampling was part of my Master's work that included quarterly monitoring of water quality on the Leopold Memorial Reserve (LMR) near Baraboo, Wisconsin. The Shack was already a well-known landmark, and it was common to come upon visitors and hikers there. As such, I took no special note of the man who approached me as I was filling sample bottles and asked, as was typical, “What are you doing?”

  9. 'Sampling the reference set' revisited

    NARCIS (Netherlands)

    Berkum, van E.E.M.; Linssen, H.N.; Overdijk, D.A.

    1998-01-01

    The confidence level of an inference table is defined as a weighted truth probability of the inference when sampling the reference set. The reference set is recognized by conditioning on the values of maximal partially ancillary statistics. In the sampling experiment values of incidental parameters

  10. Set-Asides and Subsidies in Auctions

    OpenAIRE

    Susan Athey; Dominic Coey; Jonathan Levin

    2011-01-01

    Set-asides and subsidies are used extensively in government procurement and natural resource sales. We analyze these policies in an empirical model of U.S. Forest Service timber auctions. The model fits the data well both within the sample of unrestricted sales where we estimate the model, and when we predict (out of sample) bidder entry and prices for small business set-asides. Our estimates suggest that restricting entry to small businesses substantially reduces efficiency and revenue, alth...

  11. Validating the Copenhagen Psychosocial Questionnaire (COPSOQ-II) Using Set-ESEM: Identifying Psychosocial Risk Factors in a Sample of School Principals.

    Science.gov (United States)

    Dicke, Theresa; Marsh, Herbert W; Riley, Philip; Parker, Philip D; Guo, Jiesi; Horwood, Marcus

    2018-01-01

    School principals world-wide report high levels of strain and attrition resulting in a shortage of qualified principals. It is thus crucial to identify psychosocial risk factors that reflect principals' occupational wellbeing. For this purpose, we used the Copenhagen Psychosocial Questionnaire (COPSOQ-II), a widely used self-report measure covering multiple psychosocial factors identified by leading occupational stress theories. We evaluated the COPSOQ-II regarding factor structure and longitudinal, discriminant, and convergent validity using latent structural equation modeling in a large sample of Australian school principals (N = 2,049). Results reveal that confirmatory factor analysis produced marginally acceptable model fit. A novel approach we call set exploratory structural equation modeling (set-ESEM), where cross-loadings were only allowed within a priori defined sets of factors, fit well, and was more parsimonious than a full ESEM. Further multitrait-multimethod models based on the set-ESEM confirm the importance of a principal's psychosocial risk factors: stressors and depression were related to demands and ill-being, while confidence and autonomy were related to wellbeing. We also show that working in the private sector was beneficial for showing a low psychosocial risk, while other demographic variables had little effect. Finally, we identify five latent risk profiles (high risk to no risk) of school principals based on all psychosocial factors. Overall, the research presented here closes the theory application gap of a strong multi-dimensional measure of psychosocial risk-factors.

  12. Developing a set of consensus indicators to support maternity service quality improvement: using Core Outcome Set methodology including a Delphi process.

    Science.gov (United States)

    Bunch, K J; Allin, B; Jolly, M; Hardie, T; Knight, M

    2018-05-16

    To develop a core metric set to monitor the quality of maternity care. Delphi process followed by a face-to-face consensus meeting. English maternity units. Three representative expert panels: service designers, providers and users. Maternity care metrics judged important by participants. Participants were asked to complete a two-phase Delphi process, scoring metrics from existing local maternity dashboards. A consensus meeting discussed the results and re-scored the metrics. In all, 125 distinct metrics across six domains were identified from existing dashboards. Following the consensus meeting, 14 metrics met the inclusion criteria for the final core set: smoking rate at booking; rate of birth without intervention; caesarean section delivery rate in Robson group 1 women; caesarean section delivery rate in Robson group 2 women; caesarean section delivery rate in Robson group 5 women; third- and fourth-degree tear rate among women delivering vaginally; rate of postpartum haemorrhage of ≥1500 ml; rate of successful vaginal birth after a single previous caesarean section; smoking rate at delivery; proportion of babies born at term with an Apgar score improvement. Achieving consensus on core metrics for monitoring the quality of maternity care. © 2018 The Authors. BJOG: An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.

  13. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L{sup α} two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol{sup –1}. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol{sup –1}.
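
    The two-point extrapolation itself is a one-liner once the exponent is fixed: solving E(L) = E_CBS + B/L^α for two cardinal numbers L1 and L2 gives E_CBS = (L2^α E2 − L1^α E1)/(L2^α − L1^α). The sketch below assumes a global α = 3 and purely illustrative DZ/TZ correlation energies.

```python
def cbs_two_point(e_low, e_high, l_low=2, l_high=3, alpha=3.0):
    """Two-point extrapolation of the correlation energy using
    E(L) = E_CBS + B / L**alpha, with L the basis-set cardinal number."""
    num = e_high * l_high ** alpha - e_low * l_low ** alpha
    den = l_high ** alpha - l_low ** alpha
    return num / den

# Illustrative CCSD correlation energies (hartree) with DZ and TZ basis sets.
e_dz, e_tz = -0.2150, -0.2480
print(f"CBS estimate: {cbs_two_point(e_dz, e_tz):.4f} hartree")
```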

  14. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    International Nuclear Information System (INIS)

    Spackman, Peter R.; Karton, Amir

    2015-01-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^–1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^–1.

  15. Assessing Exhaustiveness of Stochastic Sampling for Integrative Modeling of Macromolecular Structures.

    Science.gov (United States)

    Viswanath, Shruthi; Chemmama, Ilan E; Cimermancic, Peter; Sali, Andrej

    2017-12-05

    Modeling of macromolecular structures involves structural sampling guided by a scoring function, resulting in an ensemble of good-scoring models. By necessity, the sampling is often stochastic, and must be exhaustive at a precision sufficient for accurate modeling and assessment of model uncertainty. Therefore, the very first step in analyzing the ensemble is an estimation of the highest precision at which the sampling is exhaustive. Here, we present an objective and automated method for this task. As a proxy for sampling exhaustiveness, we evaluate whether two independently and stochastically generated sets of models are sufficiently similar. The protocol includes testing 1) convergence of the model score, 2) whether model scores for the two samples were drawn from the same parent distribution, 3) whether each structural cluster includes models from each sample proportionally to its size, and 4) whether there is sufficient structural similarity between the two model samples in each cluster. The evaluation also provides the sampling precision, defined as the smallest clustering threshold that satisfies the third, most stringent test. We validate the protocol with the aid of enumerated good-scoring models for five illustrative cases of binary protein complexes. Passing the proposed four tests is necessary, but not sufficient for thorough sampling. The protocol is general in nature and can be applied to the stochastic sampling of any set of models, not just structural models. In addition, the tests can be used to stop stochastic sampling as soon as exhaustiveness at desired precision is reached, thereby improving sampling efficiency; they may also help in selecting a model representation that is sufficiently detailed to be informative, yet also sufficiently coarse for sampling to be exhaustive. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
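
    Two of the four checks lend themselves to a compact sketch: a two-sample Kolmogorov-Smirnov test on the model scores and a chi-square test on cluster membership counts. The data below are toy stand-ins, and the remaining parts of the protocol (score convergence and within-cluster structural similarity at a given precision) are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy "model scores" from two independently generated sampling runs.
scores_a = rng.normal(10.0, 1.0, size=500)
scores_b = rng.normal(10.1, 1.0, size=500)

# Check 2: were the two score sets drawn from the same parent distribution?
ks_stat, ks_p = stats.ks_2samp(scores_a, scores_b)

# Check 3: does each structural cluster contain models from both runs
# in proportion to its size?  (Toy cluster assignments.)
clusters_a = rng.integers(0, 4, size=500)
clusters_b = rng.integers(0, 4, size=500)
table = np.array([[np.sum(clusters_a == c) for c in range(4)],
                  [np.sum(clusters_b == c) for c in range(4)]])
chi2, chi_p, _, _ = stats.chi2_contingency(table)

print(f"KS p-value: {ks_p:.3f}, chi-square p-value: {chi_p:.3f}")
```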

  16. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set

  17. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
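
    A maximal independent set can be grown greedily; because no further node can be added, every node outside the set has a neighbor inside it, so the MIS is also a dominating set. A small sketch on a Barabási-Albert graph follows (networkx assumed available; graph size and parameters illustrative), not the analysis pipeline of the study.

```python
import networkx as nx

def greedy_mis(graph):
    """Greedy maximal independent set: repeatedly take a lowest-degree node
    and remove it together with its neighbors.  Every removed neighbor is
    dominated by the chosen node, so the resulting set is also dominating."""
    g = graph.copy()
    mis = set()
    while g.number_of_nodes() > 0:
        node = min(g.nodes, key=g.degree)
        mis.add(node)
        g.remove_nodes_from(list(g.neighbors(node)) + [node])
    return mis

G = nx.barabasi_albert_graph(1000, 3, seed=1)   # toy scale-free network
mis = greedy_mis(G)
print(len(mis), nx.is_dominating_set(G, mis))
```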

  18. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    Energy Technology Data Exchange (ETDEWEB)

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples
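
    The PCA-then-LDA workflow generalizes readily to any spectral matrix; the sketch below runs it on synthetic "spectra" with scikit-learn purely to show the shape of the pipeline (component count and class shifts are arbitrary), not the ToF-SIMS processing used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "mass spectra": 3 sample classes, 200 peak intensities each.
n_per_class, n_peaks = 40, 200
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_class, n_peaks))
               for shift in (0.0, 0.4, 0.8)])
y = np.repeat([0, 1, 2], n_per_class)

# PCA reduces the spectra to a few scores; LDA classifies in that space.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```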

  19. On-chip acoustophoretic isolation of microflora including S. typhimurium from raw chicken, beef and blood samples.

    Science.gov (United States)

    Ngamsom, Bongkot; Lopez-Martinez, Maria J; Raymond, Jean-Claude; Broyer, Patrick; Patel, Pradip; Pamme, Nicole

    2016-04-01

    Pathogen analysis in food samples routinely involves lengthy growth-based pre-enrichment and selective enrichment of food matrices to increase the ratio of pathogen to background flora. Similarly, for blood culture analysis, pathogens must be isolated and enriched from a large excess of blood cells to allow further analysis. Conventional techniques of centrifugation and filtration are cumbersome, suffer from low sample throughput, are not readily amenable to automation and carry a risk of damaging biological samples. We report on-chip acoustophoresis as a pre-analytical technique for the resolution of total microbial flora from food and blood samples. The resulting 'clarified' sample is expected to increase the performance of downstream systems for the specific detection of the pathogens. A microfluidic chip with three inlets, a central separation channel and three outlets was utilized. Samples were introduced through the side inlets, and buffer solution through the central inlet. Upon ultrasound actuation, large debris particles (10-100 μm) from meat samples were continuously partitioned into the central buffer channel, leaving the 'clarified' outer sample streams containing both the pathogenic cells and the background flora (ca. 1 μm) to be collected over a 30 min operation cycle before further analysis. The system was successfully tested with Salmonella typhimurium-spiked (ca. 10^3 CFU mL^-1) samples of chicken and minced beef, demonstrating a high level of pathogen recovery (60-90%). When applied to S. typhimurium contaminated blood samples (10^7 CFU mL^-1), acoustophoresis resulted in a high depletion (99.8%) of the red blood cells (RBC), which partitioned into the buffer stream, whilst sufficient numbers of viable S. typhimurium remained in the outer channels for further analysis. These results indicate that the technology may provide a generic approach for pre-analytical sample preparation prior to integrated and automated downstream detection of

  20. Calibration of a liquid scintillation counter to assess tritium levels in various samples

    CERN Document Server

    Al-Haddad, M N; Abu-Jarad, F A

    1999-01-01

    An LKB-Wallac 1217 Liquid Scintillation Counter (LSC) was calibrated with a newly adopted cocktail. The LSC was then used to measure tritium levels in various samples to assess the compliance of tritium levels with the recommended international levels. The counter was calibrated to measure both biological and operational samples for personnel and for an accelerator facility at KFUPM. The biological samples include the bioassay (urine), saliva, and nasal tests. The operational samples of the light ion linear accelerator include target cooling water, organic oil, fomblin oil, and smear samples. Sets of standards, which simulate various samples, were fabricated using traceable certified tritium standards. The efficiency of the counter was obtained for each sample. The typical range of the efficiencies varied from 33% for smear samples down to 1.5% for organic oil samples. A quenching curve for each sample is presented. The minimum detectable activity for each sample was established. Typical tritium levels in bio...
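
    The minimum detectable activity for such a counter is commonly approximated with Currie's detection limit, L_D ≈ 2.71 + 4.65√B counts (B being the background counts accumulated over the counting time), divided by efficiency and counting time. The sketch below uses that approximation with illustrative background, efficiency and counting-time values, not the instrument's calibration data.

```python
from math import sqrt

def minimum_detectable_activity(background_cpm, count_time_min, efficiency,
                                sample_volume_ml=1.0):
    """Approximate MDA (Bq/mL) from Currie's detection limit
    L_D = 2.71 + 4.65*sqrt(B), with B the background counts in the count time."""
    b_counts = background_cpm * count_time_min
    ld_counts = 2.71 + 4.65 * sqrt(b_counts)
    dpm = ld_counts / (efficiency * count_time_min)   # disintegrations per minute
    return dpm / 60.0 / sample_volume_ml              # becquerel per millilitre

# e.g. 20 cpm background, 60 min count, 25% efficiency, 1 mL sample
print(f"MDA ~ {minimum_detectable_activity(20, 60, 0.25):.3f} Bq/mL")
```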

  1. Application of Karasek's demand/control model in a Canadian occupational setting including shift workers during a period of reorganization and downsizing.

    Science.gov (United States)

    Schechter, J; Green, L W; Olsen, L; Kruse, K; Cargo, M

    1997-01-01

    To apply Karasek's Job Content Model to an analysis of the relationships between job type and perceived stress and stress behaviors in a large company during a period of reorganization and downsizing. Cross-sectional mail-out, mail-back survey. A large Canadian telephone/telecommunications company. Stratified random sample (stratified by job category) of 2200 out of 13,000 employees with a response rate of 48.8%. Responses to 25 of Karasek's core questions were utilized to define four job types: low demand and high control = "relaxed"; high demand and high control = "active"; low demand and low control = "passive"; and high demand and low control = "high strain." These job types were compared against self-reported stress levels, perceived general level of health, absenteeism, alcohol use, exercise level, and use of medications and drugs. Similar analyses were performed to assess the influence of shift work. Employees with "passive" or "high strain" job types reported higher levels of stress (trend test). The model of Karasek and Theorell was validated in this setting with respect to stress and some stress-associated attitudes and behaviors.
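
    The quadrant construction itself is a simple cut-point rule on the demand and control scales; a minimal sketch follows (hypothetical column names, scores and median-split cut-offs) purely to illustrate how the four job types are assigned.

```python
import pandas as pd

# Illustrative respondent scores (hypothetical scales and values).
df = pd.DataFrame({"demand": [3.1, 4.2, 2.5, 4.6, 3.8],
                   "control": [4.0, 2.1, 2.2, 4.4, 3.0]})

def karasek_type(row, demand_cut, control_cut):
    high_d = row["demand"] > demand_cut
    high_c = row["control"] > control_cut
    if high_d and high_c:
        return "active"
    if high_d:
        return "high strain"
    if high_c:
        return "relaxed"
    return "passive"

# Median splits on demand and control define the four quadrants.
d_cut, c_cut = df["demand"].median(), df["control"].median()
df["job_type"] = df.apply(karasek_type, axis=1, args=(d_cut, c_cut))
print(df)
```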

  2. Supplementing electronic health records through sample collection and patient diaries: A study set within a primary care research database.

    Science.gov (United States)

    Joseph, Rebecca M; Soames, Jamie; Wright, Mark; Sultana, Kirin; van Staa, Tjeerd P; Dixon, William G

    2018-02-01

    To describe a novel observational study that supplemented primary care electronic health record (EHR) data with sample collection and patient diaries. The study was set in primary care in England. A list of 3974 potentially eligible patients was compiled using data from the Clinical Practice Research Datalink. Interested general practices opted into the study then confirmed patient suitability and sent out postal invitations. Participants completed a drug-use diary and provided saliva samples to the research team to combine with EHR data. Of 252 practices contacted to participate, 66 (26%) mailed invitations to patients. Of the 3974 potentially eligible patients, 859 (22%) were at participating practices, and 526 (13%) were sent invitations. Of those invited, 117 (22%) consented to participate of whom 86 (74%) completed the study. We have confirmed the feasibility of supplementing EHR with data collected directly from patients. Although the present study successfully collected essential data from patients, it also underlined the requirement for improved engagement with both patients and general practitioners to support similar studies. © 2017 The Authors. Pharmacoepidemiology & Drug Safety published by John Wiley & Sons Ltd.

  3. Suitability of public use secondary data sets to study multiple activities.

    Science.gov (United States)

    Putnam, Michelle; Morrow-Howell, Nancy; Inoue, Megumi; Greenfield, Jennifer C; Chen, Huajuan; Lee, YungSoo

    2014-10-01

    The aims of this study were to inventory activity items within and across U.S. public use data sets, to identify gaps in represented activity domains and challenges in interpreting domains, and to assess the potential for studying multiple activity engagement among older adults using existing data. We engaged in content analysis of activity measures of 5 U.S. public use data sets with nationally representative samples of older adults. Data sets included the Health & Retirement Survey (HRS), Americans' Changing Lives Survey (ACL), Midlife in the United States Survey (MIDUS), the National Health Interview Survey (NHIS), and the Panel Study of Income Dynamics survey (PSID). Two waves of each data set were analyzed. We identified 13 distinct activity domains across the 5 data sets, with substantial differences in representation of those domains among the data sets, and variance in the number and type of activity measures included in each. Our findings indicate that although it is possible to study multiple activity engagement within existing data sets, fuller sets of activity measures need to be developed in order to evaluate the portfolio of activities older adults engage in and the relationship of these portfolios to health and wellness outcomes. Importantly, clearer conceptual models of activity broadly conceived are required to guide this work. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Integration of morphological data sets for phylogenetic analysis of Amniota: the importance of integumentary characters and increased taxonomic sampling.

    Science.gov (United States)

    Hill, Robert V

    2005-08-01

    Several mutually exclusive hypotheses have been advanced to explain the phylogenetic position of turtles among amniotes. Traditional morphology-based analyses place turtles among extinct anapsids (reptiles with a solid skull roof), whereas more recent studies of both morphological and molecular data support an origin of turtles from within Diapsida (reptiles with a doubly fenestrated skull roof). Evaluation of these conflicting hypotheses has been hampered by nonoverlapping taxonomic samples and the exclusion of significant taxa from published analyses. Furthermore, although data from soft tissues and anatomical systems such as the integument may be particularly relevant to this problem, they are often excluded from large-scale analyses of morphological systematics. Here, conflicting hypotheses of turtle relationships are tested by (1) combining published data into a supermatrix of morphological characters to address issues of character conflict and missing data; (2) increasing taxonomic sampling by more than doubling the number of operational taxonomic units to test internal relationships within suprageneric ingroup taxa; and (3) increasing character sampling by approximately 25% by adding new data on the osteology and histology of the integument, an anatomical system that has been historically underrepresented in morphological systematics. The morphological data set assembled here represents the largest yet compiled for Amniota. Reevaluation of character data from prior studies of amniote phylogeny favors the hypothesis that turtles indeed have diapsid affinities. Addition of new ingroup taxa alone leads to a decrease in overall phylogenetic resolution, indicating that existing characters used for amniote phylogeny are insufficient to explain the evolution of more highly nested taxa. Incorporation of new data from the soft and osseous components of the integument, however, helps resolve relationships among both basal and highly nested amniote taxa. Analysis of a

  5. Data release for intermediate-density hydrogeochemical and stream sediment sampling in the Vallecito Creek Special Study Area, Colorado, including concentrations of uranium and forty-six additional elements

    International Nuclear Information System (INIS)

    Warren, R.G.

    1981-04-01

    A sediment sample and two water samples were collected at each location about a kilometer apart from small tributary streams within the area. One of the two water samples collected at each location was filtered in the field and the other was not. Both samples were acidified to a pH of < 1; field data and uranium concentrations are listed first for the filtered sample (sample type = 07) and followed by the unfiltered sample (sample type = 27) for each location in Appendix I-A. Uranium concentrations are higher in unfiltered samples than in filtered samples for most locations. Measured uranium concentrations in control standards analyzed with the water samples are listed in Appendix II. All sediments were air dried and the fraction finer than 100 mesh was separated and analyzed for uranium and forty-six additional elements. Field data and analytical results for each sediment sample are listed in Appendix I-B. Analytical procedures for both water and sediment samples are briefly described in Appendix III. Most bedrock units within the sampled area are of Precambrian age. Three Precambrian units are known or potential hosts for uranium deposits; the Trimble granite is associated with the recently discovered Florida Mountain vein deposit, the Uncompahgre formation hosts a vein-type occurrence in Elk Park near the contact with the Irving formation, and the Vallecito conglomerate has received some attention as a possible host for a quartz pebble conglomerate deposit. Nearly all sediment samples collected downslope from exposures of Trimble granite (geologic unit symbol ''T'' in Appendix I) contain unusually high uranium concentrations. High uranium concentrations in sediment also occur for an individual sample location that has a geologic setting similar to the Elk Park occurrence and for a sample associated with the Vallecito conglomerate

  6. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number IGSN, a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible

  7. Determination of Glucocorticoids in UPLC-MS in Environmental Samples from an Occupational Setting

    Directory of Open Access Journals (Sweden)

    Enrico Oddone

    2015-01-01

    Full Text Available Occupational exposures to glucocorticoids are still a neglected issue in some work environments, including pharmaceutical plants. We developed an analytical method to quantify simultaneously 21 glucocorticoids using UPLC coupled with mass spectrometry to provide a basis to carry out environmental monitoring. Samples were taken from air, hand-washing tests, pad-tests and wipe-tests. This paper reports the contents of the analytical methodology, along with the results of this extensive environmental and personal monitoring of glucocorticoids. The method in UPLC-MS turned out to be suitable and effective for the aim of the study. Wipe-test and pad-test desorption was carried out using 50 mL syringes, a simple technique that saves time without adversely affecting analyte recovery. Results showed a widespread environmental pollution due to glucocorticoids. This is of particular concern. Evaluation of the dose absorbed by each worker and identification of a biomarker for occupational exposure will contribute to assessment and prevention of occupational exposure.

  8. CAsubtype: An R Package to Identify Gene Sets Predictive of Cancer Subtypes and Clinical Outcomes.

    Science.gov (United States)

    Kong, Hualei; Tong, Pan; Zhao, Xiaodong; Sun, Jielin; Li, Hua

    2018-03-01

    In the past decade, molecular classification of cancer has gained high popularity owing to its high predictive power on clinical outcomes as compared with traditional methods commonly used in clinical practice. In particular, using gene expression profiles, recent studies have successfully identified a number of gene sets for the delineation of cancer subtypes that are associated with distinct prognosis. However, identification of such gene sets remains a laborious task due to the lack of tools with flexibility, integration and ease of use. To reduce the burden, we have developed an R package, CAsubtype, to efficiently identify gene sets predictive of cancer subtypes and clinical outcomes. By integrating more than 13,000 annotated gene sets, CAsubtype provides a comprehensive repertoire of candidates for new cancer subtype identification. For easy data access, CAsubtype further includes the gene expression and clinical data of more than 2000 cancer patients from TCGA. CAsubtype first employs principal component analysis to identify gene sets (from user-provided or package-integrated ones) with robust principal components representing significantly large variation between cancer samples. Based on these principal components, CAsubtype visualizes the sample distribution in low-dimensional space for better understanding of the distinction between samples and classifies samples into subgroups with prevalent clustering algorithms. Finally, CAsubtype performs survival analysis to compare the clinical outcomes between the identified subgroups, assessing their clinical value as potentially novel cancer subtypes. In conclusion, CAsubtype is a flexible and well-integrated tool in the R environment to identify gene sets for cancer subtype identification and clinical outcome prediction. Its simple R commands and comprehensive data sets enable efficient examination of the clinical value of any given gene set, thus facilitating hypothesis generating and testing in biological and
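
    A minimal sketch of the workflow described above (PCA on the expression of one candidate gene set, clustering in the reduced space, and a survival comparison between the resulting subgroups), assuming numpy, scikit-learn, and lifelines are available and using simulated data; it is not the CAsubtype implementation itself.

```python
# Sketch of the subtype-discovery workflow: PCA on a candidate gene set,
# clustering, then survival comparison. Data are simulated; this is not the
# CAsubtype R package, and lifelines/scikit-learn are assumed available.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n_samples, gene_set_size = 200, 50            # samples x genes in one candidate set
expr = rng.normal(size=(n_samples, gene_set_size))
expr[:100] += 1.5                             # build in two expression subgroups

# 1) PCA: keep components capturing large between-sample variation
pcs = PCA(n_components=2).fit_transform(expr)

# 2) cluster samples in the reduced space into putative subtypes
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)

# 3) compare clinical outcomes between subgroups (simulated survival times)
time = rng.exponential(scale=np.where(labels == 0, 60.0, 30.0))
event = rng.integers(0, 2, size=n_samples).astype(bool)
res = logrank_test(time[labels == 0], time[labels == 1],
                   event_observed_A=event[labels == 0],
                   event_observed_B=event[labels == 1])
print(f"log-rank p-value: {res.p_value:.3g}")
```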

  9. Ranking metrics in gene set enrichment analysis: do they matter?

    Science.gov (United States)

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using k-means clustering algorithm a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established i.e. absolute value of Moderated Welch Test statistic, Minimum Significant Difference, absolute value of Signal-To-Noise ratio and Baumgartner-Weiss-Schindler test statistic. In case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In case of sensitivity, the absolute value of Moderated Welch Test statistic and absolute value of Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on results of pathway enrichment analysis. The absolute value of Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner
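
    As an illustration of what a ranking metric does, the sketch below orders genes by the absolute signal-to-noise ratio between two phenotype groups on simulated data; it is only one of the metrics evaluated in the study and is not the authors' MATLAB code.

```python
# Rank genes by the absolute signal-to-noise ratio between two groups.
# Simulated expression data; purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_genes = 1000
group_a = rng.normal(size=(n_genes, 30))      # expression: genes x samples
group_b = rng.normal(size=(n_genes, 25))
group_b[:20] += 2.0                           # a handful of truly shifted genes

def abs_snr(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """|signal-to-noise|: |mean_a - mean_b| / (std_a + std_b), per gene."""
    eps = 1e-8                                # guard against zero variance
    return np.abs(a.mean(axis=1) - b.mean(axis=1)) / (
        a.std(axis=1, ddof=1) + b.std(axis=1, ddof=1) + eps)

scores = abs_snr(group_a, group_b)
ranking = np.argsort(scores)[::-1]            # most differential genes first
print("top-ranked gene indices:", ranking[:10])
```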

  10. Setting-up of a direct reading emission spectrometer and its adaptation for plutonium handling

    International Nuclear Information System (INIS)

    Page, A.G.; Godbole, S.V.; Kulkarni, M.J.; Porwal, N.K.; Thulasidas, S.K.; Sastry, M.D.; Srinivasan, P.S.

    1986-01-01

    A Jarrell-Ash 750 AtomComp 1100 series direct reading emission spectrometer was set up and its performance features were checked with regard to analysis of uranium-based samples using d.c. arc/inductively coupled argon plasma excitation techniques. The instrument has been subsequently modified to enable handling of plutonium-based samples. The modifications include building up of a specially designed glove-box around the excitation sources and consequent changes in the electro-mechanical controls associated with them. The modified system was extensively used for the trace metal assay of FBTR fuel sample. (author)

  11. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, usually constraints are set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, makes it possible to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
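
    To make the allocation step concrete, the sketch below computes, for one fixed stratification, the minimum total sample size and Neyman allocation that meet a coefficient-of-variation constraint under stratified simple random sampling; the genetic search over stratifications performed by SamplingStrata is not reproduced, and the numbers are illustrative.

```python
# Minimum sample size and Neyman allocation for a fixed stratification,
# under a coefficient-of-variation constraint. Illustrative numbers only.
import numpy as np

N_h = np.array([5000, 3000, 2000])     # stratum population sizes
S_h = np.array([12.0, 30.0, 55.0])     # stratum standard deviations of Y
ybar = 100.0                           # population mean of Y
cv_target = 0.02                       # required coefficient of variation

N = N_h.sum()
W_h = N_h / N
V = (cv_target * ybar) ** 2            # allowed variance of the estimator

# minimum n under Neyman allocation: n = (sum W_h S_h)^2 / (V + sum(W_h S_h^2)/N)
n = (W_h @ S_h) ** 2 / (V + (W_h @ S_h**2) / N)
n_h = np.ceil(n * W_h * S_h / (W_h @ S_h)).astype(int)
n_h = np.minimum(n_h, N_h)             # cannot sample more than the stratum size

print(f"total sample size ~ {int(np.ceil(n))}, allocation by stratum: {n_h}")
```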

  12. Data Sets from Major NCI Initiatives

    Science.gov (United States)

    The NCI Data Catalog includes links to data collections produced by major NCI initiatives and other widely used data sets, including animal models, human tumor cell lines, epidemiology data sets, genomics data sets from TCGA, TARGET, COSMIC, GSK, NCI60.

  13. Solvent Hold Tank Sample Results for MCU-16-991-992-993: July 2016 Monthly sample and MCU-16-1033-1034-1035: July 2016 Superwashed Sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-11-25

    SRNL received for analysis one set of SHT samples (MCU-16-991, MCU-16-992, and MCU-16-993), pulled on 07/13/2016, and another set of SHT samples (MCU-16-1033, MCU-16-1034, and MCU-16-1035), pulled on 07/24/2016 after the solvent was superwashed with 300 mM sodium hydroxide. Samples MCU-16-991, MCU-16-992, and MCU-16-993 were combined into one sample (MCU-16-991-992-993) and samples MCU-16-1033, MCU-16-1034, and MCU-16-1035 were combined into one sample (MCU-16-1033-1034-1035). Of the two composite samples, MCU-16-1033-1034-1035 represents the current chemical state of the solvent at MCU. All analytical conclusions are based on the chemical analysis of MCU-16-1033-1034-1035. There were no chemical differences between MCU-16-991-992-993 and superwashed MCU-16-1033-1034-1035.

  14. Sparse sampling and reconstruction for electron and scanning probe microscope imaging

    Science.gov (United States)

    Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.

    2015-07-28

    Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
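
    A rough sketch of the undersampling-and-reconstruction idea, with plain interpolation standing in for the recovery algorithm described in the patent and a synthetic image in place of detector data.

```python
# Visit a random subset of pixel locations, then interpolate a full image from
# the sparse visits. Simple interpolation is a stand-in for the more
# sophisticated reconstruction; the "sample" image is synthetic.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
truth = np.sin(xx / 10.0) * np.cos(yy / 15.0)        # synthetic image

# randomly designate ~15% of pixel locations for the beam/probe to visit
n_visits = int(0.15 * nx * ny)
idx = rng.choice(nx * ny, size=n_visits, replace=False)
points = np.column_stack([yy.ravel()[idx], xx.ravel()[idx]])
values = truth.ravel()[idx]                          # detector readings at visits

# reconstruct the full image from the sparse measurements
recon = griddata(points, values, (yy, xx), method="linear")
recon = np.where(np.isnan(recon),
                 griddata(points, values, (yy, xx), method="nearest"),
                 recon)
rmse = np.sqrt(np.mean((recon - truth) ** 2))
print(f"reconstruction RMSE from {n_visits} of {nx * ny} pixels: {rmse:.4f}")
```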

  15. Choice of Sample Split in Out-of-Sample Forecast Evaluation

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Timmermann, Allan

    Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values ..., while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined.
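
    A toy illustration of this dependence: the out-of-sample mean squared error of a simple AR(1) forecast against a no-change benchmark, evaluated at several candidate split points of a simulated series. This only shows sensitivity to the split point; it is not the robust test statistic proposed in the paper.

```python
# Evaluate a simple AR(1) forecast versus a no-change forecast across several
# estimation/evaluation split points of one simulated series.
import numpy as np

rng = np.random.default_rng(3)
T = 400
y = np.zeros(T)
for t in range(1, T):                       # simulate a persistent series
    y[t] = 0.3 * y[t - 1] + rng.normal()

for split in (100, 200, 300):               # candidate split points
    est, ev = y[:split], y[split:]
    phi = np.polyfit(est[:-1], est[1:], deg=1)[0]     # crude AR(1) slope estimate
    ar_err = ev[1:] - phi * ev[:-1]                   # one-step AR forecasts
    rw_err = ev[1:] - ev[:-1]                         # no-change benchmark
    print(f"split at t={split}: AR MSE={np.mean(ar_err**2):.3f}, "
          f"benchmark MSE={np.mean(rw_err**2):.3f}")
```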

  16. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.

    Science.gov (United States)

    Wjst, Matthias

    2010-12-29

    Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.

  17. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets

    Directory of Open Access Journals (Sweden)

    Wjst Matthias

    2010-12-01

    Full Text Available Abstract Background Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Summary Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.

  18. Updated 34-band Photometry for the SINGS/KINGFISH Samples of Nearby Galaxies

    International Nuclear Information System (INIS)

    Dale, D. A.; Turner, J. A.; Cook, D. O.; Roussel, H.; Armus, L.; Helou, G.; Bolatto, A. D.; Boquien, M.; Brown, M. J. I.; Calzetti, D.; Looze, I. De; Galametz, M.; Gordon, K. D.; Groves, B. A.; Jarrett, T. H.; Herrera-Camus, R.; Hinz, J. L.; Hunt, L. K.; Kennicutt, R. C.; Murphy, E. J.

    2017-01-01

    We present an update to the ultraviolet-to-radio database of global broadband photometry for the 79 nearby galaxies that comprise the union of the KINGFISH (Key Insights on Nearby Galaxies: A Far-Infrared Survey with Herschel) and SINGS (Spitzer Infrared Nearby Galaxies Survey) samples. The 34-band data set presented here includes contributions from observational work carried out with a variety of facilities including GALEX, SDSS, Pan-STARRS1, NOAO, 2MASS, Wide-Field Infrared Survey Explorer, Spitzer, Herschel, Planck, JCMT, and the VLA. Improvements of note include recalibrations of previously published SINGS BVR_C I_C and KINGFISH far-infrared/submillimeter photometry. Similar to previous results in the literature, an excess of submillimeter emission above model predictions is seen primarily for low-metallicity dwarf or irregular galaxies. This 33-band photometric data set for the combined KINGFISH+SINGS sample serves as an important multiwavelength reference for the variety of galaxies observed at low redshift. A thorough analysis of the observed spectral energy distributions is carried out in a companion paper.

  19. Updated 34-band Photometry for the SINGS/KINGFISH Samples of Nearby Galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Dale, D. A.; Turner, J. A. [Department of Physics and Astronomy, University of Wyoming, Laramie WY (United States); Cook, D. O. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena CA (United States); Roussel, H. [Institut d’Astrophysique de Paris, Sorbonne Universités, Paris (France); Armus, L.; Helou, G. [Spitzer Science Center, California Institute of Technology, Pasadena, CA (United States); Bolatto, A. D. [Department of Astronomy, University of Maryland, College Park, MD (United States); Boquien, M. [Unidad de Astronomía, Universidad de Antofagasta, Antofagasta (Chile); Brown, M. J. I. [School of Physics and Astronomy, Monash University, Victoria 3800 (Australia); Calzetti, D. [Department of Astronomy, University of Massachusetts, Amherst MA (United States); Looze, I. De [Sterrenkundig Observatorium, Universiteit Gent, Gent (Belgium); Galametz, M. [European Southern Observatory, Garching (Germany); Gordon, K. D. [Space Telescope Science Institute, Baltimore MD (United States); Groves, B. A. [Research School of Astronomy and Astrophysics, Australian National University, Canberra (Australia); Jarrett, T. H. [Astronomy Department, University of Capetown, Rondebosch (South Africa); Herrera-Camus, R. [Max-Planck-Institut für Extraterrestrische Physik, Garching (Germany); Hinz, J. L. [Steward Observatory, University of Arizona, Tucson AZ (United States); Hunt, L. K. [INAF—Osservatorio Astrofisico di Arcetri, Firenze (Italy); Kennicutt, R. C. [Institute of Astronomy, University of Cambridge, Cambridge (United Kingdom); Murphy, E. J., E-mail: ddale@uwyo.edu [National Radio Astronomy Observatory, Charlottesville, VA (United States); and others

    2017-03-01

    We present an update to the ultraviolet-to-radio database of global broadband photometry for the 79 nearby galaxies that comprise the union of the KINGFISH (Key Insights on Nearby Galaxies: A Far-Infrared Survey with Herschel ) and SINGS ( Spitzer Infrared Nearby Galaxies Survey) samples. The 34-band data set presented here includes contributions from observational work carried out with a variety of facilities including GALEX , SDSS, Pan-STARRS1, NOAO , 2MASS, Wide-Field Infrared Survey Explorer , Spitzer , Herschel , Planck , JCMT , and the VLA. Improvements of note include recalibrations of previously published SINGS BVR {sub C} I {sub C} and KINGFISH far-infrared/submillimeter photometry. Similar to previous results in the literature, an excess of submillimeter emission above model predictions is seen primarily for low-metallicity dwarf or irregular galaxies. This 33-band photometric data set for the combined KINGFISH+SINGS sample serves as an important multiwavelength reference for the variety of galaxies observed at low redshift. A thorough analysis of the observed spectral energy distributions is carried out in a companion paper.

  20. Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping.

    Science.gov (United States)

    Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia

    2017-04-01

    The Latin Hypercube Sampling (LHS) approach to assist with Digital Soil Mapping has been in development for some time; the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and with variability in the number of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a Digital Elevation Model (DEM) and subsequent covariate datasets produced by a Digital Terrain Analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and Slope, which are continuous data. The number of points created by LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of the covariates used in the work ranged from 5 to 30 m. The iterations within the LHS sampling were run at a level where the LHS model provided a good spatial representation of the environmental attributes within the watershed. Categorical covariates, such as external Surficial Geology data, were also included in the Latin Hypercube Sampling approach. Initial results of the work include using 1000 iterations within the LHS model; 1000 iterations was consistently a reasonable value for producing sampling points that provided a good spatial representation of the environmental
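
    A much-simplified sketch of covariate-based LHS point selection: draw LHS targets in the quantile space of each covariate and pick the nearest candidate cell for each target. The covariates here are synthetic stand-ins for layers such as TWI and slope, and the iterative optimisation of full conditioned LHS is omitted.

```python
# Simplified covariate-based Latin hypercube point selection on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_points = 50_000, 100
covariates = np.column_stack([rng.gamma(2.0, 2.0, n_cells),       # "TWI"
                              rng.beta(2.0, 5.0, n_cells) * 45])  # "slope"
d = covariates.shape[1]

# one LHS target per point: a permuted stratum per covariate, jittered
strata = np.stack([rng.permutation(n_points) for _ in range(d)], axis=1)
targets_q = (strata + rng.random((n_points, d))) / n_points       # quantiles in (0,1)

# map quantile targets to covariate values, then standardise both sides
target_vals = np.column_stack([np.quantile(covariates[:, j], targets_q[:, j])
                               for j in range(d)])
mu, sd = covariates.mean(axis=0), covariates.std(axis=0)
cov_z, tgt_z = (covariates - mu) / sd, (target_vals - mu) / sd

# choose, for each target, the nearest candidate cell not already chosen
chosen = []
for t in tgt_z:
    dist = np.linalg.norm(cov_z - t, axis=1)
    dist[chosen] = np.inf
    chosen.append(int(np.argmin(dist)))
print("selected cell indices:", chosen[:10])
```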

  1. Sample processing, protocol, and statistical analysis of the time-of-flight secondary ion mass spectrometry (ToF-SIMS) of protein, cell, and tissue samples.

    Science.gov (United States)

    Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia

    2014-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses ToF-SIMS principle and instrumentation including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues and extracellular matrix for the ToF-SIMS are presented. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
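
    A small sketch of the preprocessing steps listed above (peak selection, total-count normalization, mean-centering and scaling) followed by PCA, run on simulated spectra with illustrative thresholds.

```python
# Preprocess simulated ToF-SIMS-like spectra, then run PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_spectra, n_peaks = 40, 300
counts = rng.poisson(lam=rng.uniform(1, 50, n_peaks), size=(n_spectra, n_peaks))

# 1) peak selection: drop peaks with negligible mean intensity
keep = counts.mean(axis=0) > 5.0
X = counts[:, keep].astype(float)

# 2) normalisation to the total ion count of each spectrum
X = X / X.sum(axis=1, keepdims=True)

# 3) mean-centering and unit-variance scaling per peak
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

# 4) multivariate analysis (here PCA)
scores = PCA(n_components=2).fit_transform(X)
print("retained peaks:", int(keep.sum()), "score matrix shape:", scores.shape)
```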

  2. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

    The Gini index, Bonferroni index, and Absolute Lorenz index are popular indices of inequality, each highlighting different features of inequality measurement. In general, a simple random sampling procedure is commonly used to estimate the inequality indices and for the related inference. The condition that samples be drawn via a simple random sampling procedure makes calculations much simpler, but this assumption is often violated in practice, as the data does not always yield simple random ...
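
    For reference, the sketch below computes the plain sample Gini index from a simple random sample; the ranked set and systematic sampling estimators studied in the paper are not reproduced.

```python
# Sample Gini index: mean absolute difference divided by twice the mean.
import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini index of a sample, using the sorted-data identity for the MAD sum."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # sum_i sum_j |x_i - x_j| = 2 * sum_i (2i - n - 1) * x_(i), with i = 1..n
    mad_sum = 2.0 * np.sum((2 * np.arange(1, n + 1) - n - 1) * x)
    return mad_sum / (2.0 * n * n * x.mean())

rng = np.random.default_rng(6)
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=5_000)   # skewed "incomes"
print(f"estimated Gini index: {gini(incomes):.3f}")
```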

  3. 1999 Baseline Sampling and Analysis Sampling Locations, Geographic NAD83, LOSCO (2004) [BSA_1999_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis program coordinated by the Louisiana Oil Spill Coordinator's Office. This...

  4. 1997 Baseline Sampling and Analysis Sample Locations, Geographic NAD83, LOSCO (2004) [BSA_1997_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis (BSA) program coordinated by the Louisiana Oil Spill Coordinator's Office....

  5. 1998 Baseline Sampling and Analysis Sampling Locations, Geographic NAD83, LOSCO (2004) [BSA_1998_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis program coordinated by the Louisiana Oil Spill Coordinator's Office. This...

  6. Sampling in practice

    DEFF Research Database (Denmark)

    Esbensen, Kim Harry; Petersen, Lars

    2005-01-01

    A basic knowledge of the Theory of Sampling (TOS) and a set of only eight sampling unit operations is all the practical sampler needs to ensure representativeness of samples extracted from all kinds of lots: production batches, - truckloads, - barrels, sub-division in the laboratory, sampling...... in nature and in the field (environmental sampling, forestry, geology, biology), from raw materials or manufactory processes etc. We here can only give a brief introduction to the Fundamental Sampling Principle (FSP) and these eight Sampling Unit Operations (SUO’s). Always respecting FSP and invoking only...... the necessary SUO’s (dependent on the practical situation) is the only prerequisite needed for eliminating all sampling bias and simultaneously minimizing sampling variance, and this is in addition a sure guarantee for making the final analytical results trustworthy. No reliable conclusions can be made unless...

  7. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
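
    A small sketch contrasting two of the selection methods mentioned above, simple random sampling versus Latin hypercube sampling for a two-parameter model, with a post-selection transformation that maps uniform draws through an inverse CDF (here a normal). It is illustrative only and not a systems-assessment code implementation.

```python
# Simple random sampling vs Latin hypercube sampling of parameter sets,
# followed by an inverse-CDF transformation of the uniform draws.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n, d = 20, 2

srs = rng.random((n, d))                              # simple random sampling

def latin_hypercube(n: int, d: int, rng) -> np.ndarray:
    """One uniform draw per (permuted) stratum and dimension."""
    return (np.stack([rng.permutation(n) for _ in range(d)], axis=1)
            + rng.random((n, d))) / n

lhs = latin_hypercube(n, d, rng)

# transformation: push uniforms through the target distribution's quantile fn
params_srs = norm.ppf(np.clip(srs, 1e-12, 1 - 1e-12))
params_lhs = norm.ppf(np.clip(lhs, 1e-12, 1 - 1e-12))

# LHS covers every stratum exactly once in each dimension
print("LHS strata covered per dimension:",
      [len(np.unique(np.floor(lhs[:, j] * n))) for j in range(d)])
```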

  8. Sample management implementation plan: Salt Repository Project

    International Nuclear Information System (INIS)

    1987-01-01

    The purpose of the Sample Management Implementation Plan is to define management controls and building requirements for handling materials collected during the site characterization of the Deaf Smith County, Texas, site. This work will be conducted for the US Department of Energy Salt Repository Project Office (SRPO). The plan provides for controls mandated by the US Nuclear Regulatory Commission and the US Environmental Protection Agency. Salt Repository Project (SRP) Sample Management will interface with program participants who request, collect, and test samples. SRP Sample Management will be responsible for the following: (1) preparing samples; (2) ensuring documentation control; (3) providing for uniform forms, labels, data formats, and transportation and storage requirements; and (4) identifying sample specifications to ensure sample quality. The SRP Sample Management Facility will be operated under a set of procedures that will impact numerous program participants. Requesters of samples will be responsible for definition of requirements in advance of collection. Sample requests for field activities will be approved by the SRPO, aided by an advisory group, the SRP Sample Allocation Committee. This document details the staffing, building, storage, and transportation requirements for establishing an SRP Sample Management Facility. Materials to be managed in the facility include rock core and rock discontinuities, soils, fluids, biota, air particulates, cultural artifacts, and crop and food stuffs. 39 refs., 3 figs., 11 tabs

  9. Comparing two sampling methods to engage hard-to-reach communities in research priority setting.

    Science.gov (United States)

    Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J

    2016-10-28

    Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities' stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 %) consented, 52 (95 %) attended the first meeting, and 36 (65 %) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 %) consented, 36 (58 %) attended the first meeting, and 26 (42 %) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045) which was higher for the purposive/convenience sampling group and for city improvements/transportation services (P = 0.004) which was higher for the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers

  10. Survey indicated that core outcome set development is increasingly including patients, being conducted internationally and using Delphi surveys.

    Science.gov (United States)

    Biggane, Alice M; Brading, Lucy; Ravaud, Philippe; Young, Bridget; Williamson, Paula R

    2018-02-17

    There are numerous challenges in including patients in a core outcome set (COS) study, these can vary depending on the patient group. This study describes current efforts to include patients in the development of COS, with the aim of identifying areas for further improvement and study. Using the COMET database, corresponding authors of COS projects registered or published from 1 January 2013 to 2 February 2017 were invited via a personalised email to participate in a short online survey. The survey and emails were constructed to maximise the response rate by following the academic literature on enhancing survey responses. Personalised reminder emails were sent to non-responders. This survey explored the frequency of patient input in COS studies, who was involved, what methods were used and whether or not the COS development was international. One hundred and ninety-two COS developers were sent the survey. Responses were collected from 21 February 2017 until 7 May 2017. One hundred and forty-six unique developers responded, yielding a 76% response rate and data in relation to 195 unique COSs (as some developers had worked on multiple COSs). Of focus here are their responses regarding 162 COSs at the published, completed or ongoing stages of development. Inclusion of patient participants was indicated in 87% (141/162) of COSs in the published completed or ongoing stages and over 94% (65/69) of ongoing COS projects. Nearly half (65/135) of COSs included patient participants from two or more countries and 22% (30/135) included patient participants from five or more countries. The Delphi survey was reported as being used singularly or in combination with other methods in 85% (119/140) of projects. Almost a quarter (16/65) of ongoing studies reported using a combination of qualitative interviews, Delphi survey and consensus meeting. These findings indicated that the Delphi survey is the most popular method of facilitating patient participation, while the combination of

  11. Comparison of culture based methods for the isolation of Clostridium difficile from stool samples in a research setting.

    Science.gov (United States)

    Lister, Michelle; Stevenson, Emma; Heeg, Daniela; Minton, Nigel P; Kuehne, Sarah A

    2014-08-01

    Effective isolation of Clostridium difficile from stool samples is important in the research setting, especially where low numbers of spores/vegetative cells may be present within a sample. In this study, three protocols for stool culture were investigated to find a sensitive, cost-effective and timely method of C. difficile isolation. For the initial enrichment step, the effectiveness of two different rich media, cycloserine-cefoxitin fructose broth (CCFB) and cycloserine-cefoxitin mannitol broth with taurocholate and lysozyme (CCMB-TAL), was compared. For the comparison of selective solid media, four media were used with and without preceding broth enrichment: cycloserine-cefoxitin fructose agar (CCFA), cycloserine-cefoxitin egg yolk agar (CCEY), ChromID C. difficile, and tryptone soy agar (TSA) with 5% sheep's blood. As a means to enable differentiation between C. difficile and other fecal flora, the effectiveness of the inclusion of a pH indicator (1% Neutral Red) was also evaluated. The data derived indicated that CCFB is more sensitive than CCMB-TAL; however, the latter had an improved recovery rate. A broth enrichment step had a reduced sensitivity over direct plating. ChromID C. difficile showed the best recovery rate whereas CCEY egg yolk agar was the most sensitive of the four. The addition of 1% Neutral Red did not show sufficient colour change when added to CCEY egg yolk agar to be used as a differential medium. For a low cost, timely and sensitive method of isolating C. difficile from stool samples we recommend direct plating onto CCEY egg yolk agar after heat shock. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS......, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine...... the MCS of the best in terms of in-sample likelihood criteria....

  13. On the matched pairs sign test using bivariate ranked set sampling ...

    African Journals Online (AJOL)

    BVRSS) is introduced and investigated. We show that this test is asymptotically more efficient than its counterpart sign test based on a bivariate simple random sample (BVSRS). The asymptotic null distribution and the efficiency of the test are derived.
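
    For comparison, the sketch below implements the ordinary matched-pairs sign test (the simple-random-sampling baseline), computing an exact two-sided binomial p-value; the BVRSS variant studied in the paper is not reproduced.

```python
# Exact matched-pairs sign test: count positive paired differences and compute
# a two-sided binomial p-value under Binomial(n, 0.5). Simulated paired data.
import math
import numpy as np

def sign_test(before: np.ndarray, after: np.ndarray) -> float:
    """Two-sided exact sign test p-value for paired samples (ties dropped)."""
    diff = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    diff = diff[diff != 0]
    n, k = diff.size, int(np.sum(diff > 0))
    lo, hi = min(k, n - k), max(k, n - k)
    # P(X <= lo) + P(X >= hi) under Binomial(n, 0.5), capped at 1
    tail = sum(math.comb(n, i) for i in range(0, lo + 1))
    tail += sum(math.comb(n, i) for i in range(hi, n + 1))
    return min(1.0, tail / 2.0 ** n)

rng = np.random.default_rng(8)
before = rng.normal(size=30)
after = before + 0.5 + rng.normal(scale=0.8, size=30)   # shifted pairs
print(f"sign test p-value: {sign_test(before, after):.4f}")
```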

  14. Bioremediation of PAH contaminated soil samples

    International Nuclear Information System (INIS)

    Joshi, M.M.; Lee, S.

    1994-01-01

    Soils contaminated with polynuclear aromatic hydrocarbons (PAHs) pose a hazard to life. The remediation of such sites can be done using physical, chemical, and biological treatment methods or a combination of them. It is of interest to study the decontamination of soil using bioremediation. The experiments were conducted using Acinetobacter (ATCC 31012) at room temperature without pH or temperature control. In the first series of experiments, contaminated soil samples obtained from Alberta Research Council were analyzed to determine the toxic contaminant and their composition in the soil. These samples were then treated using aerobic fermentation and removal efficiency for each contaminant was determined. In the second series of experiments, a single contaminant was used to prepare a synthetic soil sample. This sample of known composition was then treated using aerobic fermentation in continuously stirred flasks. In one set of flasks, contaminant was the only carbon source and in the other set, starch was an additional carbon source. In the third series of experiments, the synthetic contaminated soil sample was treated in continuously stirred flasks in the first set and in fixed bed in the second set and the removal efficiencies were compared. The removal efficiencies obtained indicated the extent of biodegradation for various contaminants, the effect of additional carbon source, and performance in fixed bed without external aeration

  15. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    KAUST Repository

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).

  16. Information overload or search-amplified risk? Set size and order effects on decisions from experience.

    Science.gov (United States)

    Hills, Thomas T; Noguchi, Takao; Gibbert, Michael

    2013-10-01

    How do changes in choice-set size influence information search and subsequent decisions? Moreover, does information overload influence information processing with larger choice sets? We investigated these questions by letting people freely explore sets of gambles before choosing one of them, with the choice sets either increasing or decreasing in number for each participant (from two to 32 gambles). Set size influenced information search, with participants taking more samples overall, but sampling a smaller proportion of gambles and taking fewer samples per gamble, when set sizes were larger. The order of choice sets also influenced search, with participants sampling from more gambles and taking more samples overall if they started with smaller as opposed to larger choice sets. Inconsistent with information overload, information processing appeared consistent across set sizes and choice order conditions, reliably favoring gambles with higher sample means. Despite the lack of evidence for information overload, changes in information search did lead to systematic changes in choice: People who started with smaller choice sets were more likely to choose gambles with the highest expected values, but only for small set sizes. For large set sizes, the increase in total samples increased the likelihood of encountering rare events at the same time that the reduction in samples per gamble amplified the effect of these rare events when they occurred-what we call search-amplified risk. This led to riskier choices for individuals whose choices most closely followed the sample mean.
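
    A tiny simulation of the amplification half of this mechanism, using made-up payoffs: with only a few samples per gamble a rare loss is encountered less often, but when it does appear it dominates that gamble's sample mean.

```python
# Effect of samples-per-gamble on (a) the chance of encountering a rare loss
# and (b) how strongly that loss distorts the sample mean when encountered.
# Payoffs and probabilities are illustrative, not the study's data.
import numpy as np

rng = np.random.default_rng(9)
p_rare, rare_payoff, common_payoff = 0.05, -32.0, 3.0

for samples_per_gamble in (2, 5, 20):
    draws = rng.random((100_000, samples_per_gamble)) < p_rare
    payoffs = np.where(draws, rare_payoff, common_payoff)
    saw_rare = draws.any(axis=1)
    print(f"{samples_per_gamble:>2} samples/gamble: "
          f"P(see rare event)={saw_rare.mean():.2f}, "
          f"mean estimate given seen={payoffs[saw_rare].mean(axis=1).mean():.2f}")
```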

  17. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  18. Evaluation of rotational set-up errors in patients with thoracic neoplasms

    International Nuclear Information System (INIS)

    Wang Yanyang; Fu Xiaolong; Xia Bing; Fan Min; Yang Huanjun; Ren Jun; Xu Zhiyong; Jiang Guoliang

    2010-01-01

    Objective: To assess the rotational set-up errors in patients with thoracic neoplasms. Methods: 224 kilovoltage cone-beam computed tomography (KVCBCT) scans from 20 thoracic tumor patients were evaluated retrospectively. All these patients were involved in the research of 'Evaluation of the residual set-up error for online kilovoltage cone-beam CT guided thoracic tumor radiation'. Rotational set-up errors, including pitch, roll and yaw, were calculated by aligning the KVCBCT with the planning CT, using the semi-automatic alignment method. Results: The average rotational set-up errors were -0.28 degree ± 1.52 degree, 0.21 degree ± 0.91 degree and 0.27 degree ± 0.78 degree in the left-right, superior-inferior and anterior-posterior axis, respectively. The maximal rotational errors of pitch, roll and yaw were 3.5 degree, 2.7 degree and 2.2 degree, respectively. After correction for translational set-up errors, no statistically significant changes in rotational error were observed. Conclusions: The rotational set-up errors in patients with thoracic neoplasms were all small in magnitude. Rotational errors may not change after the correction for translational set-up errors alone, which should be evaluated with a larger sample in the future. (authors)

  19. Comparing two sampling methods to engage hard-to-reach communities in research priority setting

    Directory of Open Access Journals (Sweden)

    Melissa A. Valerio

    2016-10-01

    Full Text Available Abstract Background Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. Methods In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1 snowball sampling, a chain- referral method or 2 purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community. Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities’ stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Results Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 % consented, 52 (95 % attended the first meeting, and 36 (65 % attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 % consented, 36 (58 % attended the first meeting, and 26 (42 % attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05. Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045 which was higher for the purposive/convenience sampling group and for city improvements

  20. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    variance was close to the parametric value. Only rather strong trends in genetic variance deviated statistically significantly from zero in setting S. Results also showed that the new method was sensitive to the quality of the approximated reliabilities of breeding values used in calculating the prediction error variance. Thus, we recommend that only animals with a reliability of Mendelian sampling higher than 0.1 be included in the test and that low heritability traits be analyzed using bull data sets only. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  1. Why Do Phylogenomic Data Sets Yield Conflicting Trees? Data Type Influences the Avian Tree of Life more than Taxon Sampling.

    Science.gov (United States)

    Reddy, Sushma; Kimball, Rebecca T; Pandey, Akanksha; Hosner, Peter A; Braun, Michael J; Hackett, Shannon J; Han, Kin-Lan; Harshman, John; Huddleston, Christopher J; Kingston, Sarah; Marks, Ben D; Miglia, Kathleen J; Moore, William S; Sheldon, Frederick H; Witt, Christopher C; Yuri, Tamaki; Braun, Edward L

    2017-09-01

    Phylogenomics, the use of large-scale data matrices in phylogenetic analyses, has been viewed as the ultimate solution to the problem of resolving difficult nodes in the tree of life. However, it has become clear that analyses of these large genomic data sets can also result in conflicting estimates of phylogeny. Here, we use the early divergences in Neoaves, the largest clade of extant birds, as a "model system" to understand the basis for incongruence among phylogenomic trees. We were motivated by the observation that trees from two recent avian phylogenomic studies exhibit conflicts. Those studies used different strategies: 1) collecting many characters [~42 mega base pairs (Mbp) of sequence data] from 48 birds, sometimes including only one taxon for each major clade; and 2) collecting fewer characters (~0.4 Mbp) from 198 birds, selected to subdivide long branches. However, the studies also used different data types: the taxon-poor data matrix comprised 68% non-coding sequences whereas coding exons dominated the taxon-rich data matrix. This difference raises the question of whether the primary reason for incongruence is the number of sites, the number of taxa, or the data type. To test among these alternative hypotheses, we assembled a novel, large-scale data matrix comprising 90% non-coding sequences from 235 bird species. Although increased taxon sampling appeared to have a positive impact on phylogenetic analyses, the most important variable was data type. Indeed, by analyzing different subsets of the taxa in our data matrix, we found that increased taxon sampling actually resulted in increased congruence with the tree from the previous taxon-poor study (which had a majority of non-coding data) instead of the taxon-rich study (which largely used coding data). We suggest that the observed differences in the estimates of topology for these studies reflect data-type effects due to violations of the models used in phylogenetic analyses, some of which

  2. Analytic device including nanostructures

    KAUST Repository

    Di Fabrizio, Enzo M.; Fratalocchi, Andrea; Totero Gongora, Juan Sebastian; Coluccio, Maria Laura; Candeloro, Patrizio; Cuda, Gianni

    2015-01-01

    A device for detecting an analyte in a sample comprising: an array including a plurality of pixels, each pixel including a nanochain comprising: a first nanostructure, a second nanostructure, and a third nanostructure, wherein size of the first nanostructure is larger than that of the second nanostructure, and size of the second nanostructure is larger than that of the third nanostructure, and wherein the first nanostructure, the second nanostructure, and the third nanostructure are positioned on a substrate such that when the nanochain is excited by an energy, an optical field between the second nanostructure and the third nanostructure is stronger than an optical field between the first nanostructure and the second nanostructure, wherein the array is configured to receive a sample; and a detector arranged to collect spectral data from a plurality of pixels of the array.

  3. Theory of random sets

    CERN Document Server

    Molchanov, Ilya

    2017-01-01

    This monograph, now in a thoroughly revised second edition, offers the latest research on random sets. It has been extended to include substantial developments achieved since 2005, some of them motivated by applications of random sets to econometrics and finance. The present volume builds on the foundations laid by Matheron and others, including the vast advances in stochastic geometry, probability theory, set-valued analysis, and statistical inference. It shows the various interdisciplinary relationships of random set theory within other parts of mathematics, and at the same time fixes terminology and notation that often vary in the literature, establishing it as a natural part of modern probability theory and providing a platform for future development. It is completely self-contained, systematic and exhaustive, with the full proofs that are necessary to gain insight. Aimed at research level, Theory of Random Sets will be an invaluable reference for probabilists; mathematicians working in convex and integ...

  4. Probabilistic generation of quantum contextual sets

    International Nuclear Information System (INIS)

    Megill, Norman D.; Fresl, Kresimir; Waegell, Mordecai; Aravind, P.K.; Pavicic, Mladen

    2011-01-01

    We give a method for exhaustive generation of a huge number of Kochen-Specker contextual sets, based on the 600-cell, for possible experiments and quantum gates. The method is complementary to our previous parity proof generation of these sets, and it gives all sets while the parity proof method gives only sets with an odd number of edges in their hypergraph representation. Thus we obtain 35 new kinds of critical KS sets with an even number of edges. We also give a statistical estimate of the number of sets that might be obtained in an eventual exhaustive enumeration. -- Highlights: → We generate millions of new Kochen-Specker contextual sets. → We find thousands of novel critical Kochen-Specker (KS) sets. → We give algorithms for generating KS sets from a new 4-dim class. → We represent KS sets by means of hypergraphs and their figures. → We give a new exact estimation method for random sampling of sets.

  5. Rapid DNA analysis for automated processing and interpretation of low DNA content samples.

    Science.gov (United States)

    Turingan, Rosemary S; Vasantgadkar, Sameer; Palombo, Luke; Hogan, Catherine; Jiang, Hua; Tan, Eugene; Selden, Richard F

    2016-01-01

    Casework samples with low DNA content that are subjected to short tandem repeat (STR) analysis include those resulting from the transfer of epithelial cells from the skin to an object (e.g., cells on a water bottle, or brim of a cap), blood spatter stains, and small bone and tissue fragments. Low DNA content (LDC) samples are important in a wide range of settings, including disaster response teams to assist in victim identification and family reunification, military operations to identify friend or foe, criminal forensics to identify suspects and exonerate the innocent, and medical examiner and coroner offices to identify missing persons. Processing LDC samples requires experienced laboratory personnel, isolated workstations, and sophisticated equipment, requires transport time, and involves complex procedures. We present a rapid DNA analysis system designed specifically to generate STR profiles from LDC samples in field-forward settings by non-technical operators. By performing STR analysis in the field, close to the site of collection, rapid DNA analysis has the potential to increase throughput and to provide actionable information in real time. A Low DNA Content BioChipSet (LDC BCS) was developed and manufactured by injection molding. It was designed to function in the fully integrated Accelerated Nuclear DNA Equipment (ANDE) instrument previously designed for analysis of buccal swab and other high DNA content samples (Investigative Genet. 4(1):1-15, 2013). The LDC BCS performs efficient DNA purification followed by microfluidic ultrafiltration of the purified DNA, maximizing the quantity of DNA available for subsequent amplification and electrophoretic separation and detection of amplified fragments. The system demonstrates accuracy, precision, resolution, signal strength, and peak height ratios appropriate for casework analysis. The LDC rapid DNA analysis system is effective for the generation of STR profiles from a wide range of sample types. The technology broadens the range of sample

  6. Long-Term Ecological Monitoring Field Sampling Plan for 2007

    International Nuclear Information System (INIS)

    T. Haney R. VanHorn

    2007-01-01

    This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for the subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality

  7. Long-Term Ecological Monitoring Field Sampling Plan for 2007

    Energy Technology Data Exchange (ETDEWEB)

    T. Haney

    2007-07-31

    This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for the subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality.

  8. The upgraded external-beam PIXE/PIGE set-up at LABEC for very fast measurements on aerosol samples

    Energy Technology Data Exchange (ETDEWEB)

    Lucarelli, F.; Calzolai, G.; Chiari, M.; Mochi, D.; Nava, S. [Department of Physics, University of Florence and INFN, Florence (Italy)

    2013-07-01

    Full text: The Particle Induced X-ray Emission (PIXE) technique has been widely used since its birth for the study of aerosol composition, and for a long time it has been the dominating technique for its elemental analysis. However, it now has to compete with other techniques, like Inductively Coupled Plasma with detection by Atomic Emission Spectroscopy (ICP-AES) or Mass Spectrometry (ICP-MS), or Synchrotron Radiation XRF (SR-XRF). To remain competitive, a proper experimental set-up is important to fully exploit PIXE capabilities. At LABEC, an external beam line is fully dedicated to PIXE-PIGE measurements of atmospheric aerosols [1]. Recently, Silicon Drift Detectors (SDDs) have been introduced for X-ray detection thanks to their better resolution with respect to Si(Li) detectors and the possibility of managing high counting rates (up to 50 kHz at 0.5 μsec shaping time). This implies, in turn, the possibility of using very high beam currents, thus drastically reducing the measurement time. However, their use for a complete characterization of X-rays was limited by the small thickness and surface areas available. Now SDDs with a thickness of 500 μm and an 80 mm{sup 2} area are available on the market. We have therefore replaced the Si(Li) detector used so far for the detection of medium-high Z elements with such an SDD. A comparison of the two detectors has been carried out; PIXE minimum detection limits (MDLs) at different proton beam energies have been studied to find out the best energy for PIXE measurements on aerosol samples collected on different substrata, namely Teflon, Kapton, Nuclepore and Kimfol, used for daily or hourly sampling or for cascade impactors. In particular, in the case of Teflon filters, the production of γ-rays by F in the Teflon filter limits the current which may be used and the Compton γ-ray background worsens the MDLs. Due to the lower thickness of the SDD detector with respect to a typical Si(Li) detector, these problems are reduced.

  9. Setting-related influences on physical inactivity of older adults in residential care settings: a review.

    Science.gov (United States)

    Douma, Johanna G; Volkers, Karin M; Engels, Gwenda; Sonneveld, Marieke H; Goossens, Richard H M; Scherder, Erik J A

    2017-04-28

    Despite the detrimental effects of physical inactivity for older adults, aged residents of residential care settings in particular may spend much of their time in inactive behavior. This may be partly due to their poorer physical condition; however, other, setting-related factors may also influence the amount of inactivity. The aim of this review was to identify setting-related factors (including the social and physical environment) that may contribute to the amount of older adults' physical inactivity in a wide range of residential care settings (e.g., nursing homes, assisted care facilities). Five databases were systematically searched for eligible studies, using the key words 'inactivity', 'care facilities', and 'older adults', including their synonyms and MeSH terms. Additional studies were selected from references used in articles included from the search. Based on specific eligibility criteria, a total of 12 studies were included. Quality of the included studies was assessed using the Mixed Methods Appraisal Tool (MMAT). Based on studies using different methodologies (e.g., interviews and observations) and of different quality (assessed quality range: 25-100%), we report several aspects related to the physical environment and caregivers. Factors of the physical environment that may be related to physical inactivity included, among others, the environment's compatibility with the abilities of a resident, the presence of equipment, the accessibility, security, comfort, and aesthetics of the environment/corridors, and possibly the presence of some specific areas. Caregiver-related factors included staffing levels, the available time, and the amount and type of care being provided. Inactivity levels in residential care settings may be reduced by improving several features of the physical environment and with the help of caregivers. Intervention studies could be performed in order to gain more insight into causal effects of improving setting-related factors on

  10. System Administrator for LCS Development Sets

    Science.gov (United States)

    Garcia, Aaron

    2013-01-01

    The Spaceport Command and Control System Project is creating a Checkout and Control System that will eventually launch the next generation of vehicles from Kennedy Space Center. KSC has a large set of Development and Operational equipment already deployed in several facilities, including the Launch Control Center, which requires support. The position of System Administrator will complete tasks across multiple platforms (Linux/Windows), many of them virtual. The Hardware Branch of the Control and Data Systems Division at the Kennedy Space Center uses system administrators for a variety of tasks. The position of system administrator comes with many responsibilities: maintaining computer systems, repairing or setting up hardware, installing software, creating backups, and recovering drive images are a sample of the jobs one must complete. Other duties may include working with clients in person or over the phone and resolving their computer system needs. Training is a major part of learning how an organization functions and operates, and NASA is no exception. Training on how to better protect the NASA computer infrastructure will be a topic to learn, followed by NASA work policies. Attending meetings and discussing progress will be expected. A system administrator will have an account with root access. Root access gives a user full access to a computer system and/or network. System admins can remove critical system files and recover files using a tape backup. Problem solving will be an important skill to develop in order to complete the many tasks.

  11. Set theory essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Set Theory includes elementary logic, sets, relations, functions, denumerable and non-denumerable sets, cardinal numbers, Cantor's theorem, axiom of choice, and order relations.

  12. Yoga in school settings: a research review.

    Science.gov (United States)

    Khalsa, Sat Bir S; Butzer, Bethany

    2016-06-01

    Research on the efficacy of yoga for improving mental, emotional, physical, and behavioral health characteristics in school settings is a recent but growing field of inquiry. This systematic review of research on school-based yoga interventions published in peer-reviewed journals offers a bibliometric analysis that identified 47 publications. The studies from these publications have been conducted primarily in the United States (n = 30) and India (n = 15) since 2005, with the majority of studies (n = 41) conducted from 2010 onward. About half of the publications were of studies at elementary schools; most (85%) were conducted within the school curriculum, and most (62%) also implemented a formal school-based yoga program. There was a high degree of variability in yoga intervention characteristics, including overall duration, and the number and duration of sessions. Most of these published research trials are preliminary in nature, with numerous study design limitations, including limited sample sizes (median = 74; range = 20-660) and relatively weak research designs (57% randomized controlled trials, 19% uncontrolled trials), as would be expected in an infant research field. Nevertheless, these publications suggest that yoga in the school setting is a viable and potentially efficacious strategy for improving child and adolescent health and therefore worthy of continued research. © 2016 New York Academy of Sciences.

  13. Mining big data sets of plankton images: a zero-shot learning approach to retrieve labels without training data

    Science.gov (United States)

    Orenstein, E. C.; Morgado, P. M.; Peacock, E.; Sosik, H. M.; Jaffe, J. S.

    2016-02-01

    Technological advances in instrumentation and computing have allowed oceanographers to develop imaging systems capable of collecting extremely large data sets. With the advent of in situ plankton imaging systems, scientists must now commonly deal with "big data" sets containing tens of millions of samples spanning hundreds of classes, making manual classification untenable. Automated annotation methods are now considered to be the bottleneck between collection and interpretation. Typically, such classifiers learn to approximate a function that predicts a predefined set of classes for which a considerable amount of labeled training data is available. The requirement that the training data span all the classes of concern is problematic for plankton imaging systems since they sample such diverse, rapidly changing populations. These data sets may contain relatively rare, sparsely distributed, taxa that will not have associated training data; a classifier trained on a limited set of classes will miss these samples. The computer vision community, leveraging advances in Convolutional Neural Networks (CNNs), has recently attempted to tackle such problems using "zero-shot" object categorization methods. Under a zero-shot framework, a classifier is trained to map samples onto a set of attributes rather than a class label. These attributes can include visual and non-visual information such as what an organism is made out of, where it is distributed globally, or how it reproduces. A second stage classifier is then used to extrapolate a class. In this work, we demonstrate a zero-shot classifier, implemented with a CNN, to retrieve out-of-training-set labels from images. This method is applied to data from two continuously imaging, moored instruments: the Scripps Plankton Camera System (SPCS) and the Imaging FlowCytobot (IFCB). Results from simulated deployment scenarios indicate zero-shot classifiers could be successful at recovering samples of rare taxa in image sets. This
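
    The two-stage idea described above (predict attributes first, then map attribute vectors to a class, so classes absent from training can still be assigned) can be illustrated with a minimal sketch. This is not the authors' CNN pipeline: the features, attribute names and class signatures below are invented placeholders.

```python
# Minimal zero-shot sketch: stage 1 predicts a small attribute vector from
# image features; stage 2 assigns the class whose attribute "signature" is
# closest, so a class never seen in training can still be recovered.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: image features and binary attributes (e.g. "has flagella").
X_train = rng.normal(size=(200, 16))
A_train = (rng.random(size=(200, 4)) > 0.5).astype(int)

# Stage 1: one attribute predictor per attribute dimension.
attribute_models = [LogisticRegression(max_iter=1000).fit(X_train, A_train[:, j])
                    for j in range(A_train.shape[1])]

# Attribute signatures for classes, including one absent from the training set.
class_signatures = {
    "diatom":       np.array([1, 0, 1, 0]),
    "ciliate":      np.array([0, 1, 0, 1]),
    "rare_copepod": np.array([1, 1, 0, 0]),   # out-of-training-set class
}

def zero_shot_predict(x):
    """Stage 2: nearest class signature in attribute space."""
    a_hat = np.array([m.predict_proba(x.reshape(1, -1))[0, 1] for m in attribute_models])
    return min(class_signatures, key=lambda c: np.linalg.norm(a_hat - class_signatures[c]))

print(zero_shot_predict(rng.normal(size=16)))
```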

  14. 16 CFR 305.6 - Sampling.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Sampling. 305.6 Section 305.6 Commercial... ENERGY POLICY AND CONSERVATION ACT ("APPLIANCE LABELING RULE") Testing § 305.6 Sampling. (a) For any... based upon the sampling procedures set forth in § 430.24 of 10 CFR part 430, subpart B. (b) For any...

  15. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed.

  16. Evaluation of endogenous control genes for gene expression studies across multiple tissues and in the specific sets of fat- and muscle-type samples of the pig.

    Science.gov (United States)

    Gu, Y R; Li, M Z; Zhang, K; Chen, L; Jiang, A A; Wang, J Y; Li, X W

    2011-08-01

    To normalize a set of quantitative real-time PCR (q-PCR) data, it is essential to determine an optimal number/set of housekeeping genes, as the abundance of housekeeping genes can vary across tissues or cells during different developmental stages, or even under certain environmental conditions. In this study, of the 20 commonly used endogenous control genes, 13, 18 and 17 genes exhibited credible stability in 56 different tissues, 10 types of adipose tissue and five types of muscle tissue, respectively. Our analysis clearly showed that three optimal housekeeping genes are adequate for an accurate normalization, which correlated well with the theoretical optimal number (r ≥ 0.94). In terms of economical and experimental feasibility, we recommend the use of the three most stable housekeeping genes for calculating the normalization factor. Based on our results, the three most stable housekeeping genes in all analysed samples (TOP2B, HSPCB and YWHAZ) are recommended for accurate normalization of q-PCR data. We also suggest that two different sets of housekeeping genes are appropriate for 10 types of adipose tissue (the HSPCB, ALDOA and GAPDH genes) and five types of muscle tissue (the TOP2B, HSPCB and YWHAZ genes), respectively. Our report will serve as a valuable reference for other studies aimed at measuring tissue-specific mRNA abundance in porcine samples. © 2011 Blackwell Verlag GmbH.

  17. Improving small RNA-seq by using a synthetic spike-in set for size-range quality control together with a set for data normalization.

    Science.gov (United States)

    Locati, Mauro D; Terpstra, Inez; de Leeuw, Wim C; Kuzak, Mateusz; Rauwerda, Han; Ensink, Wim A; van Leeuwen, Selina; Nehrdich, Ulrike; Spaink, Herman P; Jonker, Martijs J; Breit, Timo M; Dekker, Rob J

    2015-08-18

    There is an increasing interest in complementing RNA-seq experiments with small-RNA (sRNA) expression data to obtain a comprehensive view of a transcriptome. Currently, two main experimental challenges concerning sRNA-seq exist: how to check the size distribution of isolated sRNAs, given the sensitive size-selection steps in the protocol; and how to normalize data between samples, given the low complexity of sRNA types. We here present two separate sets of synthetic RNA spike-ins for monitoring size-selection and for performing data normalization in sRNA-seq. The size-range quality control (SRQC) spike-in set, consisting of 11 oligoribonucleotides (10-70 nucleotides), was tested by intentionally altering the size-selection protocol and verified via several comparative experiments. We demonstrate that the SRQC set is useful to reproducibly track down biases in the size-selection in sRNA-seq. The external reference for data-normalization (ERDN) spike-in set, consisting of 19 oligoribonucleotides, was developed for sample-to-sample normalization in differential-expression analysis of sRNA-seq data. Testing and applying the ERDN set showed that it can reproducibly detect differential expression over a dynamic range of 2^18. Hence, biological variation in sRNA composition and content between samples is preserved while technical variation is effectively minimized. Together, both spike-in sets can significantly improve the technical reproducibility of sRNA-seq. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
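
    The normalization role of a spike-in set like ERDN can be sketched in a few lines: scale each sample so its total spike-in signal matches a common reference and apply that factor to the endogenous counts. This is a generic illustration under that assumption, not the published ERDN workflow; the row and column names are made up.

```python
# Spike-in based sample-to-sample normalization sketch.
import pandas as pd

counts = pd.DataFrame(
    {"sample_1": [1200, 300, 50_000, 8_000],
     "sample_2": [2400, 610, 40_000, 30_000]},
    index=["spike_in_A", "spike_in_B", "miR-21", "miR-122"])

spike_rows = counts.index.str.startswith("spike_in")
spike_totals = counts[spike_rows].sum()

# Size factor per sample: spike-in total relative to the mean spike-in total.
size_factors = spike_totals / spike_totals.mean()
normalized = counts[~spike_rows].div(size_factors, axis=1)

print(size_factors)
print(normalized)  # endogenous counts on a common spike-in scale
```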

  18. Basic set theory

    CERN Document Server

    Levy, Azriel

    2002-01-01

    An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An

  19. Evaluation of Suitability of Selected Set of Coal Plant Sites for Repowering with Small Modular Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Belles, Randy [ORNL; Copinger, Donald A [ORNL; Mays, Gary T [ORNL; Omitaomu, Olufemi A [ORNL; Poore III, Willis P [ORNL

    2013-03-01

    This report summarizes the approach that ORNL developed for screening a sample set of small coal stations for possible repowering with SMRs; the methodology employed, including spatial modeling; and initial results for these sample plants. The objective in conducting this type of siting evaluation is to demonstrate the capability to characterize specific sample coal plant sites to identify any particular issues associated with repowering existing coal stations with SMRs using OR-SAGE; it is not intended to be a definitive assessment per se as to the absolute suitability of any particular site.

  20. Adaptive maximal poisson-disk sampling on surfaces

    KAUST Repository

    Yan, Dongming

    2012-01-01

    In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM.
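
    The basic Poisson-disk property (no two samples closer than a prescribed radius) is easy to illustrate with plain dart throwing in the unit square. The sketch below is only that baseline: the paper's contribution, varying radii on surfaces with power diagrams and explicit gap detection to guarantee maximality, is not reproduced here.

```python
# Simple dart-throwing sketch of Poisson-disk sampling in the unit square.
import random, math

def poisson_disk_2d(r, max_trials=10_000):
    samples = []
    for _ in range(max_trials):
        p = (random.random(), random.random())
        # accept only if the candidate keeps its distance to every sample
        if all(math.dist(p, q) >= r for q in samples):
            samples.append(p)
    return samples  # maximality would require explicit gap detection

pts = poisson_disk_2d(r=0.1)
print(len(pts), "accepted samples")
```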

  1. Content validation of the international classification of functioning, disability and health core set for stroke from gender perspective using a qualitative approach.

    Science.gov (United States)

    Glässel, A; Coenen, M; Kollerits, B; Cieza, A

    2014-06-01

    The extended ICF Core Set for stroke is an application of the International Classification of Functioning, Disability and Health (ICF) of the World Health Organisation (WHO) with the purpose to represent the typical spectrum of functioning of persons with stroke. The objective of the study is to add evidence to the content validity of the extended ICF Core Set for stroke from persons after stroke taking into account gender perspective. A qualitative study design was conducted by using individual interviews with women and men after stroke in an in- and outpatient rehabilitation setting. The sampling followed the maximum variation strategy. Sample size was determined by saturation. Concepts from qualitative data analysis were linked to ICF categories and compared to the extended ICF Core Set for stroke. Twelve women and 12 men participated in 24 individual interviews. In total, 143 out of 166 ICF categories included in the extended ICF Core Set for stroke were confirmed (women: N.=13; men: N.=17; both genders: N.=113). Thirty-eight additional categories that are not yet included in the extended ICF Core Set for stroke were raised by women and men. This study confirms that the experience of functioning and disability after stroke shows communalities and differences for women and men. The validity of the extended ICF Core Set for stroke could be mostly confirmed, since it does not only include those areas of functioning and disability relevant to both genders but also those exclusively relevant to either women or men. Further research is needed on ICF categories not yet included in the extended ICF Core Set for stroke.

  2. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover the missing links in the snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate the performance of the prediction, known edges in the sampled snapshot are divided into the training set and the probe set randomly, without considering the underlying sampling approaches. However, different sampling methods might lead to different missing links, especially for the biased ways. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we try to re-evaluate the performance of local information-based link predictions through sampling-method-governed division of the training set and the probe set. Interestingly, we find that each prediction approach performs unevenly across different sampling methods. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods might have been overestimated in prior works.
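
    The evaluation idea, that the probe set should consist of the true edges a given sampling method failed to observe rather than a random split, can be sketched as follows. This is a simplified illustration: the sampling rule (random edge sampling), the common-neighbours score and the top-k check are generic choices, not the paper's exact protocol.

```python
# Probe set defined by what the sampling missed, scored with common neighbours.
import itertools, random
import networkx as nx

G = nx.erdos_renyi_graph(60, 0.15, seed=1)            # "true" network
observed = random.Random(1).sample(list(G.edges()), int(0.8 * G.number_of_edges()))
G_obs = nx.Graph(observed)                             # sampled snapshot
probe = {frozenset(e) for e in G.edges()} - {frozenset(e) for e in G_obs.edges()}

def common_neighbours(g, u, v):
    return len(set(g.neighbors(u)) & set(g.neighbors(v))) if u in g and v in g else 0

# Rank all unobserved pairs and count probe links recovered in the top-k.
candidates = [p for p in itertools.combinations(G.nodes(), 2) if not G_obs.has_edge(*p)]
ranked = sorted(candidates, key=lambda p: common_neighbours(G_obs, *p), reverse=True)
top_k = set(map(frozenset, ranked[:len(probe)]))
hits = sum(1 for e in probe if e in top_k)
print(f"{hits}/{len(probe)} probe links recovered in the top-{len(probe)}")
```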

  3. An Interdisciplinary Method for the Visualization of Novel High-Resolution Precision Photography and Micro-XCT Data Sets of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Create Combined Research-Grade 3D Virtual Samples for the Benefit of Astromaterials Collections Conservation, Curation, Scientific Research and Education

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2016-01-01

    New technologies make possible the advancement of documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. With increasing demands for accessibility to updated comprehensive data, and with new sample return missions on the horizon, it is of primary importance to develop new standards for contemporary documentation and visualization methodologies. Our interdisciplinary team has expertise in the fields of heritage conservation practices, professional photography, photogrammetry, imaging science, application engineering, data curation, geoscience, and astromaterials curation. Our objective is to create virtual 3D reconstructions of Apollo Lunar and Antarctic Meteorite samples that are a fusion of two state-of-the-art data sets: the interior view of the sample by collecting Micro-XCT data and the exterior view of the sample by collecting high-resolution precision photography data. These new data provide researchers an information-rich visualization of both compositional and textural information prior to any physical sub-sampling. Since January 2013 we have developed a process that resulted in the successful creation of the first image-based 3D reconstruction of an Apollo Lunar Sample correlated to a 3D reconstruction of the same sample's Micro-XCT data, illustrating that this technique is both operationally possible and functionally beneficial. In May of 2016 we began a 3-year research period during which we aim to produce Virtual Astromaterials Samples for 60 high-priority Apollo Lunar and Antarctic Meteorite samples and serve them on NASA's Astromaterials Acquisition and Curation website. Our research demonstrates that research-grade Virtual Astromaterials Samples are beneficial in preserving for posterity a precise 3D reconstruction of the sample prior to sub-sampling, which greatly improves documentation practices, provides unique and novel visualization of the sample's interior and

  4. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms (stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling) were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
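
    The stratified-sampling baseline discussed above can be sketched by clustering the genotypes (a simple proxy for population structure) and drawing training-set candidates proportionally from every cluster. The CDmean and PEVmean criteria need the full mixed-model relationship matrix and are not reproduced here; all data below are synthetic.

```python
# Stratified training-set (TRS) sampling under population structure.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
markers = rng.integers(0, 3, size=(300, 500))      # 300 lines x 500 SNPs coded 0/1/2

groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(markers)

def stratified_training_set(groups, n_train):
    chosen = []
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        n_g = round(n_train * len(idx) / len(groups))   # proportional allocation
        chosen.extend(rng.choice(idx, size=min(n_g, len(idx)), replace=False))
    return np.array(chosen)

trs = stratified_training_set(groups, n_train=100)
print(len(trs), "lines selected; per-cluster counts:", np.bincount(groups[trs]))
```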

  5. German Value Set for the EQ-5D-5L.

    Science.gov (United States)

    Ludwig, Kristina; Graf von der Schulenburg, J-Matthias; Greiner, Wolfgang

    2018-06-01

    The objective of this study was to develop a value set for EQ-5D-5L based on the societal preferences of the German population. As the first country to do so, the study design used the improved EQ-5D-5L valuation protocol 2.0 developed by the EuroQol Group, including a feedback module as internal validation and a quality control process that was missing in the first wave of EQ-5D-5L valuation studies. A representative sample of the general German population (n = 1158) was interviewed using a composite time trade-off and a discrete choice experiment under close quality control. Econometric modeling was used to estimate values for all 3125 possible health states described by EQ-5D-5L. The value set was based on a hybrid model including all available information from the composite time trade-off and discrete choice experiment valuations without any exclusions due to data issues. The final German value set was constructed from a combination of a conditional logit model for the discrete choice experiment data and a censored at -1 Tobit model for the composite time trade-off data, correcting for heteroskedasticity. The value set had logically consistent parameter estimates (p German version of EQ-5D-5L representing the preferences of the German population. The study successfully employed for the first time worldwide the improved protocol 2.0. The value set enables the use of the EQ-5D-5L instrument in economic evaluations and in clinical studies.

  6. Comparing Data Sets: Implicit Summaries of the Statistical Properties of Number Sets

    Science.gov (United States)

    Morris, Bradley J.; Masnick, Amy M.

    2015-01-01

    Comparing datasets, that is, sets of numbers in context, is a critical skill in higher order cognition. Although much is known about how people compare single numbers, little is known about how number sets are represented and compared. We investigated how subjects compared datasets that varied in their statistical properties, including ratio of…

  7. Cone penetrometer testing and discrete-depth groundwater sampling techniques: A cost-effective method of site characterization in a multiple-aquifer setting

    International Nuclear Information System (INIS)

    Zemo, D.A.; Pierce, Y.G.; Gallinatti, J.D.

    1992-01-01

    Cone penetrometer testing (CPT), combined with discrete-depth groundwater sampling methods, can reduce significantly the time and expense required to characterize large sites that have multiple aquifers. Results from the screening site characterization can be used to design and install a cost-effective monitoring well network. At a site in northern California, it was necessary to characterize the stratigraphy and the distribution of volatile organic compounds (VOCs) to a depth of 80 feet within a 1/2-mile-by-1/4-mile residential and commercial area in a complex alluvial fan setting. To expedite site characterization, a five-week field screening program was implemented that consisted of a shallow groundwater survey, CPT soundings, and discrete-depth groundwater sampling. Based on continuous lithologic information provided by the CPT soundings, four coarse-grained water-yielding sedimentary packages were identified. Eighty-three discrete-depth groundwater samples were collected using shallow groundwater survey techniques, the BAT Enviroprobe, or the QED HydroPunch 1, depending on subsurface conditions. A 20-well monitoring network was designed and installed to monitor critical points within each sedimentary package. Understanding the vertical VOC distribution and concentrations produced substantial cost savings by minimizing the number of permanent monitoring wells and reducing the number of costly conductor casings to be installed. Significant long-term cost savings will result from reduced sampling costs. Where total VOC concentrations exceeded 20 µg/L in the screening samples, a good correlation was found between the discrete-depth screening data and data from monitoring wells. Using a screening program to characterize the site before installing monitoring wells resulted in an estimated 50-percent reduction in costs for site characterization, 65-percent reduction in time for site characterization, and 50-percent reduction in long-term monitoring costs.

  8. SKATE: a docking program that decouples systematic sampling from scoring.

    Science.gov (United States)

    Feng, Jianwen A; Marshall, Garland R

    2010-11-15

    SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CDCC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CDCC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.

  9. Utilizing the Zero-One Linear Programming Constraints to Draw Multiple Sets of Matched Samples from a Non-Treatment Population as Control Groups for the Quasi-Experimental Design

    Science.gov (United States)

    Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar

    2005-01-01

    The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…

  10. Set discrimination of quantum states

    International Nuclear Information System (INIS)

    Zhang Shengyu; Ying Mingsheng

    2002-01-01

    We introduce a notion of set discrimination, which is an interesting extension of quantum state discrimination. A state is secretly chosen from a number of quantum states, which are partitioned into some disjoint sets. A set discrimination is required to identify which set the given state belongs to. Several essential problems are addressed in this paper, including the condition of perfect set discrimination, unambiguous set discrimination, and in the latter case, the efficiency of the discrimination. This generalizes some important results on quantum state discrimination in the literature. A combination of state and set discrimination and the efficiency are also studied

  11. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas... Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.

  12. Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.

    Science.gov (United States)

    Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S

    2011-01-01

    Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and whether or not concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L) based on combinations of the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and the somatic cell count were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species, except Staphylococcus aureus, the sensitivity of all definitions was definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97
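
    The evaluation logic described above, judging a single-sample definition by its sensitivity and specificity against a gold standard derived from a triplicate of samples, reduces to a 2x2 confusion table. The sketch below shows that bookkeeping with made-up counts; the definition labels are illustrative, not the paper's definitions A-L.

```python
# Sensitivity and specificity of a single-sample IMI definition
# against a triplicate-based gold standard.
def sensitivity_specificity(records):
    """records: iterable of (gold_positive, test_positive) booleans."""
    tp = sum(g and t for g, t in records)
    fn = sum(g and not t for g, t in records)
    tn = sum(not g and not t for g, t in records)
    fp = sum(not g and t for g, t in records)
    return tp / (tp + fn), tn / (tn + fp)

# (gold standard from 3 samples, single-sample rule such as ">=1 colony in pure culture")
example = [(True, True)] * 70 + [(True, False)] * 30 + [(False, False)] * 880 + [(False, True)] * 20
se, sp = sensitivity_specificity(example)
print(f"sensitivity = {se:.2f}, specificity = {sp:.2f}")   # 0.70 and 0.98 for these counts
```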

  13. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm used starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree to obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the number of path sets, while an OR gate alone always increases the number of cut sets and increases the size of path sets. Other types of logic gates must be described in terms of AND and OR logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates.
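
    The top-down resolution described above can be sketched compactly: start from the top event, repeatedly rewrite gates (an OR gate splits one cut set into several, an AND gate enlarges a cut set), and finally discard non-minimal sets. This is a toy illustration of the idea, not the MOCUS code; the fault tree below is invented.

```python
# Top-down minimal cut set expansion on a tiny invented fault tree.
tree = {
    "TOP": ("AND", ["G1", "E3"]),
    "G1":  ("OR",  ["E1", "E2"]),
}

def minimal_cut_sets(tree, top="TOP"):
    cut_sets = [{top}]
    changed = True
    while changed:
        changed = False
        new = []
        for cs in cut_sets:
            gate = next((g for g in cs if g in tree), None)
            if gate is None:          # only primary events left
                new.append(cs)
                continue
            changed = True
            kind, children = tree[gate]
            rest = cs - {gate}
            if kind == "AND":         # AND: enlarge the cut set
                new.append(rest | set(children))
            else:                     # OR: one new cut set per child
                new.extend(rest | {c} for c in children)
        cut_sets = new
    # keep only minimal sets
    return [s for s in cut_sets if not any(t < s for t in cut_sets)]

print(minimal_cut_sets(tree))   # [{'E1', 'E3'}, {'E2', 'E3'}]
```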

  14. Goal-setting in clinical medicine.

    Science.gov (United States)

    Bradley, E H; Bogardus, S T; Tinetti, M E; Inouye, S K

    1999-07-01

    The process of setting goals for medical care in the context of chronic disease has received little attention in the medical literature, despite the importance of goal-setting in the achievement of desired outcomes. Using qualitative research methods, this paper develops a theory of goal-setting in the care of patients with dementia. The theory posits several propositions. First, goals are generated from embedded values but are distinct from values. Goals vary based on specific circumstances and alternatives whereas values are person-specific and relatively stable in the face of changing circumstances. Second, goals are hierarchical in nature, with complex mappings between general and specific goals. Third, there are a number of factors that modify the goal-setting process, by affecting the generation of goals from values or the translation of general goals to specific goals. Modifying factors related to individuals include their degree of risk-taking, perceived self-efficacy, and acceptance of the disease. Disease factors that modify the goal-setting process include the urgency and irreversibility of the medical condition. Pertinent characteristics of the patient-family-clinician interaction include the level of participation, control, and trust among patients, family members, and clinicians. The research suggests that the goal-setting process in clinical medicine is complex, and the potential for disagreements regarding goals substantial. The nature of the goal-setting process suggests that explicit discussion of goals for care may be necessary to promote effective patient-family-clinician communication and adequate care planning.

  15. Screening experiments of ecstasy street samples using near infrared spectroscopy.

    Science.gov (United States)

    Sondermann, N; Kovar, K A

    1999-12-20

    Twelve different sets of confiscated ecstasy samples were analysed applying both near infrared spectroscopy in reflectance mode (1100-2500 nm) and high-performance liquid chromatography (HPLC). The sets showed a large variance in composition. A calibration data set was generated based on the theory of factorial designs. It contained 221 N-methyl-3,4-methylenedioxyamphetamine (MDMA) samples, 167 N-ethyl-3,4-methylenedioxyamphetamine (MDE), 111 amphetamine and 106 samples without a controlled substance, which will be called placebo samples thereafter. From this data set, PLS-1 models were calculated and were successfully applied for validation of various external laboratory test sets. The transferability of these results to confiscated tablets is demonstrated here. It is shown that differentiation into placebo, amphetamine and ecstasy samples is possible. Analysis of intact tablets is practicable. However, more reliable results are obtained from pulverised samples. This is due to ill-defined production procedures. The use of mathematically pretreated spectra improves the prediction quality of all the PLS-1 models studied. It is possible to improve discrimination between MDE and MDMA with the help of a second model based on raw spectra. Alternative strategies are briefly discussed.
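
    A PLS-1 style screen of the kind described, one latent-variable regression per target class with a thresholded prediction, can be sketched as follows. The spectra here are synthetic stand-ins; the real calibration used (optionally pretreated) NIR reflectance spectra (1100-2500 nm) of pulverised tablets, and the 0.5 threshold is an assumption.

```python
# PLS-1 style classification sketch: regress a 0/1 class label on spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_wavelengths = 350
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 120) / 15.0) ** 2)  # fake absorption band

# Fake calibration set: class-1 spectra carry the extra band.
X = rng.normal(size=(120, n_wavelengths)) * 0.05 + rng.random((120, 1))
y = np.repeat([1, 0], 60)                        # 1 = contains MDMA, 0 = placebo
X[y == 1] += 0.3 * band

pls = PLSRegression(n_components=5).fit(X, y)

X_test = rng.normal(size=(4, n_wavelengths)) * 0.05 + 0.3 * band      # "suspect" tablets
y_hat = pls.predict(X_test).ravel()
print(["MDMA-positive" if v > 0.5 else "negative" for v in y_hat])
```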

  16. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...

  17. Functional Multiple-Set Canonical Correlation Analysis

    Science.gov (United States)

    Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.

    2012-01-01

    We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…

  18. Device including a contact detector

    DEFF Research Database (Denmark)

    2011-01-01

    The present invention relates to a probe for determining an electrical property of an area of a surface of a test sample, the probe being intended to be in a specific orientation relative to the test sample. The probe may comprise a supporting body defining a first surface. A plurality of cantilever arms (12) may extend from the supporting body in co-planar relationship with the first surface. The plurality of cantilever arms (12) may extend substantially parallel to each other, and each of the plurality of cantilever arms (12) may include an electrically conductive tip for contacting the area of the test sample by movement of the probe relative to the surface of the test sample into the specific orientation. The probe may further comprise a contact detector (14) extending from the supporting body arranged so as to contact the surface of the test sample prior to any one of the plurality

  19. Using lot quality assurance sampling to assess access to water, sanitation and hygiene services in a refugee camp setting in South Sudan: a feasibility study.

    Science.gov (United States)

    Harding, Elizabeth; Beckworth, Colin; Fesselet, Jean-Francois; Lenglet, Annick; Lako, Richard; Valadez, Joseph J

    2017-08-08

    Humanitarian agencies working in refugee camp settings require rapid assessment methods to measure the needs of the populations they serve. Due to the high level of dependency of refugees, agencies need to carry out these assessments. Lot Quality Assurance Sampling (LQAS) is a method commonly used in development settings to assess populations living in a project catchment area to identify their greatest needs. LQAS could be well suited to serve the needs of refugee populations, but it has rarely been used in humanitarian settings. We adapted and implemented an LQAS survey design in Batil refugee camp, South Sudan in May 2013 to measure the added value of using it for sub-camp level assessment. Using pre-existing divisions within the camp, we divided the Batil catchment area into six contiguous segments, called 'supervision areas' (SA). Six teams of two data collectors randomly selected 19 respondents in each SA, who they interviewed to collect information on water, sanitation, hygiene, and diarrhoea prevalence. These findings were aggregated into a stratified random sample of 114 respondents, and the results were analysed to produce a coverage estimate with 95% confidence interval for the camp and to prioritize SAs within the camp. The survey provided coverage estimates on WASH indicators as well as evidence that areas of the camp closer to the main road, to clinics and to the market were better served than areas at the periphery of the camp. This assumption did not hold for all services, however, as sanitation services were uniformly high regardless of location. While it was necessary to adapt the standard LQAS protocol used in low-resource communities, the LQAS model proved to be feasible in a refugee camp setting, and program managers found the results useful at both the catchment area and SA level. This study, one of the few adaptations of LQAS for a camp setting, shows that it is a feasible method for regular monitoring, with the added value of enabling camp
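
    The two outputs described, a camp-level coverage estimate with a 95% confidence interval from the stratified 6 x 19 sample, and a per-supervision-area judgement against a target, can be sketched as below. The counts, the equal-weight strata, the normal-approximation interval and the 13-of-19 decision rule are all illustrative assumptions, not figures from the study.

```python
# LQAS-style summary for 6 supervision areas of 19 respondents each.
import math

sa_yes = {"SA1": 15, "SA2": 17, "SA3": 11, "SA4": 16, "SA5": 12, "SA6": 18}
n_per_sa = 19

# Camp-level estimate (equal-weight strata) with a normal-approximation 95% CI.
p_hat = sum(y / n_per_sa for y in sa_yes.values()) / len(sa_yes)
se = math.sqrt(p_hat * (1 - p_hat) / (len(sa_yes) * n_per_sa))
print(f"camp coverage ~ {p_hat:.2f} (95% CI {p_hat - 1.96*se:.2f}-{p_hat + 1.96*se:.2f})")

# Per-SA classification: "pass" if at least d of 19 respondents report access.
decision_rule_d = 13
for sa, y in sa_yes.items():
    print(sa, "pass" if y >= decision_rule_d else "prioritise")
```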

  20. Using lot quality assurance sampling to assess access to water, sanitation and hygiene services in a refugee camp setting in South Sudan: a feasibility study

    Directory of Open Access Journals (Sweden)

    Elizabeth Harding

    2017-08-01

    Full Text Available Abstract Background Humanitarian agencies working in refugee camp settings require rapid assessment methods to measure the needs of the populations they serve. Due to the high level of dependency of refugees, agencies need to carry out these assessments. Lot Quality Assurance Sampling (LQAS is a method commonly used in development settings to assess populations living in a project catchment area to identify their greatest needs. LQAS could be well suited to serve the needs of refugee populations, but it has rarely been used in humanitarian settings. We adapted and implemented an LQAS survey design in Batil refugee camp, South Sudan in May 2013 to measure the added value of using it for sub-camp level assessment. Methods Using pre-existing divisions within the camp, we divided the Batil catchment area into six contiguous segments, called ‘supervision areas’ (SA. Six teams of two data collectors randomly selected 19 respondents in each SA, who they interviewed to collect information on water, sanitation, hygiene, and diarrhoea prevalence. These findings were aggregated into a stratified random sample of 114 respondents, and the results were analysed to produce a coverage estimate with 95% confidence interval for the camp and to prioritize SAs within the camp. Results The survey provided coverage estimates on WASH indicators as well as evidence that areas of the camp closer to the main road, to clinics and to the market were better served than areas at the periphery of the camp. This assumption did not hold for all services, however, as sanitation services were uniformly high regardless of location. While it was necessary to adapt the standard LQAS protocol used in low-resource communities, the LQAS model proved to be feasible in a refugee camp setting, and program managers found the results useful at both the catchment area and SA level. Conclusions This study, one of the few adaptations of LQAS for a camp setting, shows that it is a feasible

  1. Spatial variation of contaminant elements of roadside dust samples from Budapest (Hungary) and Seoul (Republic of Korea), including Pt, Pd and Ir.

    Science.gov (United States)

    Sager, Manfred; Chon, Hyo-Taek; Marton, Laszlo

    2015-02-01

    Roadside dusts were studied to explain the spatial variation and present levels of contaminant elements, including Pt, Pd and Ir, in the urban environment in and around Budapest (Hungary) and Seoul (Republic of Korea). The samples were collected from six sites of high traffic volume in Seoul metropolitan city and from two control sites within the suburbs of Seoul, for comparison. Similarly, road dust samples were obtained two times from traffic focal points in Budapest, from the large bridges across the River Danube, from Margitsziget (an island in the Danube in the northern part of Budapest, used for recreation) as well as from main roads (no highways) outside Budapest. The samples were analysed for contaminant elements by ICP-AES and for Pt, Pd and Ir by ICP-MS. The highest Pt, Pd and Ir levels in road dusts were found along major roads with high traffic volume, but correlations with other contaminant elements were low. This points to automobile catalytic converters as an important source. To summarize the obtained multi-element results, the pollution index, contamination index and geo-accumulation index were calculated. Finally, the obtained data were compared with total concentrations encountered in dust samples from Madrid, Oslo, Tokyo and Muscat (Oman). Dust samples from Seoul reached top-level concentrations for Cd-Zn-As-Co-Cr-Cu-Mo-Ni-Sn. Only Pb was rather low, because unleaded gasoline became compulsory in 1993. Concentrations in Budapest dust samples were lower than those from Seoul, except for Pb and Mg. Compared with Madrid as another continental site, Budapest was higher in Co-V-Zn. Dust from Oslo, a smaller city, contained more Mn-Na-Sr than dust from the other towns, but less of the other metals.
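
    The geo-accumulation index mentioned above is commonly computed as Igeo = log2(Cn / (1.5 * Bn)), with Cn the measured concentration and Bn a geochemical background value. A minimal sketch follows; the background and measured values are placeholders, not those used by the authors.

```python
# Geo-accumulation index (Igeo) for a few elements of an example dust sample.
import math

background_mg_kg = {"Pb": 20.0, "Zn": 70.0, "Cu": 30.0}      # assumed backgrounds
measured_mg_kg   = {"Pb": 180.0, "Zn": 640.0, "Cu": 95.0}    # example measurements

for element, c_n in measured_mg_kg.items():
    i_geo = math.log2(c_n / (1.5 * background_mg_kg[element]))
    print(f"{element}: Igeo = {i_geo:.2f}")
```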

  2. Statistical sampling plans

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    In auditing and in inspection, one selects a number of items by some set of procedures and performs measurements which are compared with the operator's values. This session considers the problem of how to select the samples to be measured, and what kinds of measurements to make. In the inspection situation, the ultimate aim is to independently verify the operator's material balance. The effectiveness of the sample plan in achieving this objective is briefly considered. The discussion focuses on the model plant

  3. Learning to reason from samples

    NARCIS (Netherlands)

    Ben-Zvi, Dani; Bakker, Arthur; Makar, Katie

    2015-01-01

    The goal of this article is to introduce the topic of learning to reason from samples, which is the focus of this special issue of Educational Studies in Mathematics on statistical reasoning. Samples are data sets, taken from some wider universe (e.g., a population or a process) using a particular

  4. Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies

    Directory of Open Access Journals (Sweden)

    Luby Stephen P

    2010-08-01

    Full Text Available Abstract Background Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow up. Longitudinal prevalence (LP), the proportion-of-time-ill estimated by repeated prevalence measurements, is an alternative measure to incidence of recurrent infections. In contrast to incidence which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods We developed a set of four empirical simulation models representing low and high risk settings with short or long episode durations. The model was used to evaluate different sampling strategies with different assumptions on recall period and recall error. Results The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g. 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient, valid
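
    A toy version of the kind of simulation described can be written in a few lines: generate day-level illness histories with clustering of episodes in individuals, then compare the longitudinal prevalence estimated from continuous (daily) follow-up with the estimate from intermittent visits (e.g. 12 per year). All parameters below are invented and do not reproduce the paper's four empirical models.

```python
# Continuous vs intermittent estimation of longitudinal prevalence (LP).
import numpy as np

rng = np.random.default_rng(7)
n_children, n_days = 500, 365

# Individual frailty creates clustering: some children are ill much more often.
frailty = rng.gamma(shape=1.0, scale=1.0, size=n_children)
p_ill = np.clip(0.05 * frailty, 0, 0.8)
ill = rng.random((n_children, n_days)) < p_ill[:, None]

lp_continuous = ill.mean()                               # daily follow-up
visit_days = np.linspace(0, n_days - 1, 12, dtype=int)   # 12 visits per year
lp_intermittent = ill[:, visit_days].mean()

print(f"LP (daily): {lp_continuous:.3f}   LP (12 visits): {lp_intermittent:.3f}")
```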

  5. Sample Selection for Training Cascade Detectors

    OpenAIRE

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our a...

  6. OpenMSI Arrayed Analysis Toolkit: Analyzing Spatially Defined Samples Using Mass Spectrometry Imaging

    Energy Technology Data Exchange (ETDEWEB)

    de Raad, Markus [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); de Rond, Tristan [Univ. of California, Berkeley, CA (United States); Rübel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Keasling, Jay D. [Univ. of California, Berkeley, CA (United States); Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Technical Univ. of Denmark, Lyngby (Denmark); Northen, Trent R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Bowen, Benjamin P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States)

    2017-05-03

    Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI for comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. In this paper, we present the OpenMSI Arrayed Analysis Toolkit (OMAAT), a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based Python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated by screening the metabolic activities of different sized soil particles, including hydrolysis of sugars, revealing a pattern of size-dependent activities. These results establish OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat.

  7. Sampling design for long-term regional trends in marine rocky intertidal communities

    Science.gov (United States)

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  8. Ligand pose and orientational sampling in molecular docking.

    Directory of Open Access Journals (Sweden)

    Ryan G Coleman

    Full Text Available Molecular docking remains an important tool for structure-based screening to find new ligands and chemical probes. As docking ambitions grow to include new scoring function terms, and to address ever more targets, the reliability and extendability of the orientation sampling, and the throughput of the method, become pressing. Here we explore sampling techniques that eliminate stochastic behavior in DOCK3.6, allowing us to optimize the method for regularly variable sampling of orientations. This also enabled a focused effort to optimize the code for efficiency, with a three-fold increase in the speed of the program. This, in turn, facilitated extensive testing of the method on the 102 targets, 22,805 ligands and 1,411,214 decoys of the Directory of Useful Decoys-Enhanced (DUD-E benchmarking set, at multiple levels of sampling. Encouragingly, we observe that as sampling increases from 50 to 500 to 2000 to 5000 to 20,000 molecular orientations in the binding site (and so from about 1×10(10 to 4×10(10 to 1×10(11 to 2×10(11 to 5×10(11 mean atoms scored per target, since multiple conformations are sampled per orientation, the enrichment of ligands over decoys monotonically increases for most DUD-E targets. Meanwhile, including internal electrostatics in the evaluation ligand conformational energies, and restricting aromatic hydroxyls to low energy rotamers, further improved enrichment values. Several of the strategies used here to improve the efficiency of the code are broadly applicable in the field.

  9. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treatment

  10. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
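
    The margin definition quoted above can be illustrated numerically. The sketch below computes, for a query image set, the distance to the nearest gallery set of a different class minus the distance to the nearest set of the same class, using plain Euclidean distances between set means; the paper's affine-hull modeling and EM/APG optimization are not reproduced, and all data and names are illustrative.

```python
# Minimal numeric sketch of the image-set margin: distance to the nearest set
# of a different class minus distance to the nearest set of the same class.
import numpy as np

def set_margin(query_images, gallery_sets, gallery_labels, query_label):
    q = query_images.mean(axis=0)                       # crude set representative
    dists = np.array([np.linalg.norm(q - s.mean(axis=0)) for s in gallery_sets])
    labels = np.array(gallery_labels)
    same = dists[labels == query_label]
    other = dists[labels != query_label]
    return other.min() - same.min()                     # positive = well separated

rng = np.random.default_rng(0)
gallery = [rng.normal(loc=c, size=(20, 64)) for c in (0.0, 0.0, 3.0, 3.0)]
labels = ["A", "A", "B", "B"]
query = rng.normal(loc=0.0, size=(15, 64))
print("margin for class A query:", round(set_margin(query, gallery, labels, "A"), 2))
```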

  11. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  12. Sample Results From Tank 48H Samples HTF-48-14-158, -159, -169, and -170

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-04-28

Savannah River National Laboratory (SRNL) analyzed samples from Tank 48H in support of determining the cause of the unusually high dose rates at the sampling points for this tank. A set of two samples was taken from the quiescent tank, and two additional samples were taken after the contents of the tank were mixed. The results of the analyses of all the samples show that the contents of the tank have changed very little since the analysis of the previous sample in 2012. The solids are almost exclusively composed of tetraphenylborate (TPB) salts, and there is no indication of acceleration in the TPB decomposition. The filtrate composition shows a moderate increase in salt concentration and density, which is attributable to the addition of NaOH for the purposes of corrosion control. An older modeling simulation of the TPB degradation was updated, and the supernate results from a 2012 sample were run in the model. This result was compared to the recent 2014 sample results reported in this document. The model indicates there is no change in the TPB degradation from 2012 to 2014. SRNL measured the buoyancy of the TPB solids in Tank 48H simulant solutions. It was determined that a solution of density 1.279 g/mL (~6.5M sodium) was capable of indefinitely suspending the TPB solids evenly throughout the solution. A solution of density 1.296 g/mL (~7M sodium) caused a significant fraction of the solids to float on the solution surface. As the experiments could not include the effect of additional buoyancy elements such as benzene or hydrogen generation, the buoyancy measurements provide an upper bound estimate of the density in Tank 48H required to float the solids.

  13. Sampling of ore

    International Nuclear Information System (INIS)

    Boehme, R.C.; Nicholas, B.L.

    1987-01-01

This invention relates to a method of and an apparatus for ore sampling. The method includes the steps of periodically removing a sample of the output material of a sorting machine, weighing each sample so that each is of the same weight, measuring a characteristic such as the radioactivity, magnetivity or the like of each sample, subjecting at least an equal portion of each sample to chemical analysis to determine the mineral content of the sample, and comparing the characteristic measurement with the mineral content of the chemically analysed portion of the sample to determine the characteristic/mineral ratio of the sample. The apparatus includes an ore sample collector, a deflector for deflecting a sample of ore particles from the output of an ore sorter into the collector, and means for moving the deflector from a first position in which it is clear of the particle path from the sorter to a second position in which it is in the particle path, at predetermined time intervals and for predetermined time periods, to deflect the sample particles into the collector. The apparatus conveniently includes an ore crusher for comminuting the sample particles, a sample hopper, means for weighing the hopper, a detector in the hopper for measuring a characteristic such as radioactivity, magnetivity or the like of particles in the hopper, a discharge outlet from the hopper, and means for feeding the particles from the collector to the crusher and then to the hopper.

  14. Sample summary report for KOR1 pressure tube sample

    International Nuclear Information System (INIS)

    Lee, Hee Jong; Nam, Min Woo; Choi, Young Ha

    2006-01-01

This summary report basically includes the following: - The FLAW CHARACTERIZATION TABLE of the KOR1 sample and supporting documentation. - The CROSS REFERENCE TABLES for each investigator, i.e., the SAMPLE INSPECTION TABLES that cross-reference the FLAW CHARACTERIZATION TABLE. - Each Sample Inspection Report, included as appendices.

  15. Planning for the Collection and Analysis of Samples of Martian Granular Materials Potentially to be Returned by Mars Sample Return

    Science.gov (United States)

    Carrier, B. L.; Beaty, D. W.

    2017-12-01

NASA's Mars 2020 rover is scheduled to land on Mars in 2021 and will be equipped with a sampling system capable of collecting rock cores, as well as a specialized drill bit for collecting unconsolidated granular material. A key mission objective is to collect a set of samples that have enough scientific merit to justify returning to Earth. In the case of granular materials, we would like to catalyze community discussion on what we would do with these samples if they arrived in our laboratories, as input to decision-making related to sampling the regolith. Numerous scientific objectives have been identified which could be achieved or significantly advanced via the analysis of martian rocks, "regolith," and gas samples. The term "regolith" has more than one definition, including one that is general and one that is much more specific. For the purpose of this analysis we use the term "granular materials" to encompass the most general meaning and restrict "regolith" to a subset of that. Our working taxonomy includes the following: 1) globally sourced airfall dust (dust); 2) saltation-sized particles (sand); 3) locally sourced decomposed rock (regolith); 4) crater ejecta (ejecta); and, 5) other. Analysis of martian granular materials could serve to advance our understanding of areas including habitability and astrobiology, surface-atmosphere interactions, chemistry, mineralogy, geology and environmental processes. Results of these analyses would also provide input into planning for future human exploration of Mars, elucidating possible health and mechanical hazards caused by the martian surface material, as well as providing valuable information regarding available resources for ISRU and civil engineering purposes. Results would also be relevant to matters of planetary protection and ground-truthing orbital observations. We will present a preliminary analysis of the following, in order to generate community discussion and feedback on all issues relating to: What are the

  16. Nursing students’ experiences of clinical education setting

    Directory of Open Access Journals (Sweden)

    Rahnama M

    2015-08-01

Full Text Available Background and Objective: An appropriate clinical environment plays an important role in preparing students to apply learned knowledge in practice by providing learning opportunities. Since students' experiences in the clinical setting affect the quality of their learning, the current study aimed to explain the experiences of nursing students concerning the clinical education setting. Materials and Method: The study was conducted using conventional content analysis. Sampling was purposive, and the participants were 13 final-year nursing students at Zabol Nursing and Midwifery School in 2013-2014. Data were collected through in-depth semi-structured interviews and analysed using a qualitative content analysis approach. Results: Five major categories (threats, vision, dual forces, mindset, and students' actions toward clinical education) and 10 subcategories were identified. Conclusion: Since the formation of students' experiences in these environments is one of the factors predicting their learning and facilitating the professionalization process, the attention of managers in clinical settings is very important for decreasing threats and concerns for students. In this way, the marred prospects of the profession can be recovered by meeting students' expectations, the attractiveness of the profession can be increased, and positive beliefs, actions and feelings can be created in students.

  17. Under-utilized Important Data Sets from Barrow, Alaska

    Science.gov (United States)

    Jensen, A. M.; Misarti, N.

    2012-12-01

    The Barrow region has a number of high resolution data sets of high quality and high scientific and stakeholder relevance. Many are described as being of long duration, yet span mere decades. Here we highlight the fact that there are data sets available in the Barrow area that span considerably greater periods of time (centuries to millennia), at varying degrees of resolution. When used appropriately, these data sets can contribute to the study and understanding of the changing Arctic. However, because these types of data are generally acquired as part of archaeological projects, funded through Arctic Social Science and similar programs, their use in other sciences has been limited. Archaeologists focus on analyzing these data sets in ways designed to answer particular anthropological questions. That in no way precludes archaeological collaboration with other types of scientists nor the analysis of these data sets in new and innovative ways, in order to look at questions of Arctic change over a time span beginning well before the Industrial Revolution introduced complicating factors. One major data group consists of zooarchaeological data from sites in the Barrow area. This consists of faunal remains of human subsistence activities, recovered either from middens (refuse deposits) or dwellings. In effect, occupants of a site were sampling their environment as it existed at the time of occupation, although not in a random or systematic way. When analyzed to correct for biases introduced by taphonomic and human behavioral factors, such data sets are used by archaeologists to understand past people's subsistence practices, and how such practices changed through time. However, there is much additional information that can be obtained from these collections. Certain species have fairly specific habitat requirements, and their presence in significant numbers at a site indicates that such conditions existed relatively nearby at a particular time in the past, and

  18. Mock Quasar-Lyman-α forest data-sets for the SDSS-III Baryon Oscillation Spectroscopic Survey

    Energy Technology Data Exchange (ETDEWEB)

    Bautista, Julian E.; Busca, Nicolas G. [APC, Université Paris Diderot-Paris 7, CNRS/IN2P3, CEA, Observatoire de Paris, 10, rue A. Domon and L. Duquet, Paris (France); Bailey, Stephen; Font-Ribera, Andreu; Schlegel, David [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA (United States); Pieri, Matthew M. [Aix Marseille Université, CNRS, LAM (Laboratoire d' Astrophysique de Marseille) UMR 7326, 38 rue Frédéric Joliot-Curie, 13388, Marseille (France); Miralda-Escudé, Jordi; Gontcho, Satya Gontcho A. [Institut de Ciències del Cosmos, Universitat de Barcelona/IEEC, 1 Martí i Franquès, Barcelona 08028, Catalonia (Spain); Palanque-Delabrouille, Nathalie; Rich, James; Goff, Jean Marc Le [CEA, Centre de Saclay, Irfu/SPP, D128, F-91191 Gif-sur-Yvette (France); Dawson, Kyle [Department of Physics and Astronomy, University of Utah, 115 S 100 E, RM 201, Salt Lake City, UT 84112 (United States); Feng, Yu; Ho, Shirley [McWilliams Center for Cosmology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213 (United States); Ge, Jian [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611-2055 (United States); Noterdaeme, Pasquier; Pâris, Isabelle [Université Paris 6 et CNRS, Institut d' Astrophysique de Paris, 98bis blvd. Arago, 75014 Paris (France); Rossi, Graziano, E-mail: bautista@astro.utah.edu [Department of Astronomy and Space Science, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul, 143-747 (Korea, Republic of)

    2015-05-01

    We describe mock data-sets generated to simulate the high-redshift quasar sample in Data Release 11 (DR11) of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). The mock spectra contain Lyα forest correlations useful for studying the 3D correlation function including Baryon Acoustic Oscillations (BAO). They also include astrophysical effects such as quasar continuum diversity and high-density absorbers, instrumental effects such as noise and spectral resolution, as well as imperfections introduced by the SDSS pipeline treatment of the raw data. The Lyα forest BAO analysis of the BOSS collaboration, described in Delubac et al. 2014, has used these mock data-sets to develop and cross-check analysis procedures prior to performing the BAO analysis on real data, and for continued systematic cross checks. Tests presented here show that the simulations reproduce sufficiently well important characteristics of real spectra. These mock data-sets will be made available together with the data at the time of the Data Release 11.

  19. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reasons relate to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively
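
    The Q-Q inspection described above can be sketched with standard tools: kinks (slope changes) in a normal or lognormal Q-Q plot suggest mixed subpopulations. The example below uses simulated concentrations, not the NGS survey data.

```python
# Minimal sketch of normal vs. lognormal Q-Q inspection for subpopulation
# detection. Data are simulated: a lognormal background plus a small
# enriched subpopulation, standing in for a trace element.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
background = rng.lognormal(mean=3.0, sigma=0.5, size=9500)
enriched = rng.lognormal(mean=5.5, sigma=0.4, size=500)
conc = np.concatenate([background, enriched])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
stats.probplot(conc, dist="norm", plot=ax1)              # normal Q-Q
ax1.set_title("Normal Q-Q")
stats.probplot(np.log10(conc), dist="norm", plot=ax2)    # lognormal Q-Q
ax2.set_title("Lognormal Q-Q (log10 data)")
fig.tight_layout()
fig.savefig("qq_plots.png")
```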

  20. Setting conservation priorities.

    Science.gov (United States)

    Wilson, Kerrie A; Carwardine, Josie; Possingham, Hugh P

    2009-04-01

    A generic framework for setting conservation priorities based on the principles of classic decision theory is provided. This framework encapsulates the key elements of any problem, including the objective, the constraints, and knowledge of the system. Within the context of this framework the broad array of approaches for setting conservation priorities are reviewed. While some approaches prioritize assets or locations for conservation investment, it is concluded here that prioritization is incomplete without consideration of the conservation actions required to conserve the assets at particular locations. The challenges associated with prioritizing investments through time in the face of threats (and also spatially and temporally heterogeneous costs) can be aided by proper problem definition. Using the authors' general framework for setting conservation priorities, multiple criteria can be rationally integrated and where, how, and when to invest conservation resources can be scheduled. Trade-offs are unavoidable in priority setting when there are multiple considerations, and budgets are almost always finite. The authors discuss how trade-offs, risks, uncertainty, feedbacks, and learning can be explicitly evaluated within their generic framework for setting conservation priorities. Finally, they suggest ways that current priority-setting approaches may be improved.

  1. Electron beam weld parameter set development and cavity cost

    International Nuclear Information System (INIS)

    John Brawley; John Mammossor; Larry Philips

    1997-01-01

Various methods have recently been considered for use in the cost-effective manufacturing of large numbers of niobium cavities. A method commonly assumed to be too expensive is the joining of half cells by electron beam welding (EBW), as has been done with multipurpose EBW equipment for producing small numbers of cavities at accelerator laboratories. The authors have begun to investigate the advantages that would be available if a single-purpose, task-specific EBW processing tool were used to produce cavities in a high-volume commercial-industrial context. For such a tool and context they have sought to define an EBW parameter set that is cost-effective not only in terms of per-cavity production cost, but also in terms of the minimization of quench-producing weld defects. That is, they define cavity cost-effectiveness to include both production and performance costs. For such an EBW parameter set, they have developed a set of ideal characteristics, produced and tested samples and a complete cavity, studied the weld-defect question, and obtained industrial estimates of cavity high-volume production costs. The investigation is ongoing. This paper reports preliminary findings.

  2. Goal Setting to Promote a Health Lifestyle.

    Science.gov (United States)

    Paxton, Raheem J; Taylor, Wendell C; Hudnall, Gina Evans; Christie, Juliette

    2012-01-01

The purpose of this parallel-group study was to determine whether a feasibility study based on newsletters and telephone counseling would improve goal-setting constructs; physical activity (PA); and fruit and vegetable (F & V) intake in a sample of older adults. Forty-three older adults (M age = 70 years, >70% Asian, 54% female) living in Honolulu, Hawaii were recruited and randomly assigned to either a PA or F & V intake condition. All participants completed measures of PA, F & V intake, and goal-setting mechanisms (i.e., specificity, difficulty, effort, commitment, and persistence) at baseline and 8 weeks. Paired t-tests were used to evaluate changes across time. We found that F & V participants significantly increased F & V intake and mean scores of goal specificity, effort, commitment, and persistence (all p < .05). No comparable improvements in goal-setting mechanisms were observed for participants in the PA condition. Overall, our results show that a short-term intervention using newsletters and motivational calls based on goal-setting theory was effective in improving F & V intake; however, more research is needed to determine whether these strategies are effective for improving PA among a multiethnic sample of older adults.
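
    A minimal sketch of the paired baseline-versus-8-week comparison described above is given below, using a paired t-test on invented serving counts; the numbers are illustrative, not the study's data.

```python
# Minimal sketch of a baseline vs. 8-week paired comparison with a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_fv = rng.normal(3.0, 1.0, size=21)                # servings/day at baseline
week8_fv = baseline_fv + rng.normal(0.8, 0.7, size=21)     # servings/day at 8 weeks

t, p = stats.ttest_rel(week8_fv, baseline_fv)
print(f"paired t = {t:.2f}, p = {p:.4f}, mean change = {np.mean(week8_fv - baseline_fv):.2f}")
```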

  3. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
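
    To make the quantity concrete: a self-shielding correction factor is essentially the ratio of the flux averaged over the sample to the unperturbed flux. The sketch below uses a textbook analytic slab approximation purely as an illustration of that definition; the study itself derives its factors from full three-dimensional MCNP transport simulations, which this sketch does not reproduce.

```python
# Illustrative analytic slab self-shielding factor (not the MCNP-derived
# factors of the study): f = (1 - exp(-Sigma*t)) / (Sigma*t), the flux
# depression in a purely absorbing slab relative to the unperturbed flux.
import math

def slab_self_shielding(sigma_total_cm1, thickness_cm):
    tau = sigma_total_cm1 * thickness_cm
    return 1.0 if tau == 0 else (1.0 - math.exp(-tau)) / tau

# illustrative numbers only (macroscopic cross section in 1/cm, thickness in cm)
for sigma, t in [(0.05, 2.0), (0.5, 2.0), (2.0, 2.0)]:
    print(f"Sigma={sigma:4.2f}/cm, t={t} cm -> f = {slab_self_shielding(sigma, t):.3f}")
```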

  4. An integrated and accessible sample data library for Mars sample return science

    Science.gov (United States)

    Tuite, M. L., Jr.; Williford, K. H.

    2015-12-01

Over the course of the next decade or more, many thousands of geological samples will be collected and analyzed in a variety of ways by researchers at the Jet Propulsion Laboratory (California Institute of Technology) in order to facilitate discovery and contextualize observations made of Mars rocks both in situ and here on Earth if samples are eventually returned. Integration of data from multiple analyses of samples including petrography, thin section and SEM imaging, isotope and organic geochemistry, XRF, XRD, and Raman spectrometry is a challenge and a potential obstacle to discoveries that require supporting lines of evidence. We report the development of a web-accessible repository, the Sample Data Library (SDL), for the sample-based data that are generated by the laboratories and instruments that comprise JPL's Center for Analysis of Returned Samples (CARS) in order to facilitate collaborative interpretation of potential biosignatures in Mars-analog geological samples. The SDL is constructed using low-cost, open-standards-based Amazon Web Services (AWS), including web-accessible storage, relational data base services, and a virtual web server. The data structure is sample-centered with a shared registry for assigning unique identifiers to all samples including International Geo-Sample Numbers. Both raw and derived data produced by instruments and post-processing workflows are automatically uploaded to online storage and linked via the unique identifiers. Through the web interface, users are able to find all the analyses associated with a single sample or search across features shared by multiple samples, sample localities, and analysis types. Planned features include more sophisticated search and analytical interfaces as well as data discoverability through NSF's EarthCube program.
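
    A minimal sketch of a sample-centered registry of the kind described above is shown below: every analysis record points back to a unique sample identifier, so all analyses of one sample can be retrieved with a single query. The table and column names, and the identifier format, are hypothetical; this is not the actual Sample Data Library schema.

```python
# Sketch of a sample-centric registry: analyses reference a unique sample ID.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE samples (
    sample_id   TEXT PRIMARY KEY,        -- e.g. an IGSN-style unique identifier
    locality    TEXT,
    description TEXT
);
CREATE TABLE analyses (
    analysis_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    sample_id     TEXT NOT NULL REFERENCES samples(sample_id),
    analysis_type TEXT,                   -- e.g. 'XRD', 'Raman', 'SEM image'
    data_uri      TEXT                    -- pointer to raw/derived file in storage
);
""")
conn.execute("INSERT INTO samples VALUES ('SAMPLE-0001', 'Mars-analog outcrop A', 'carbonate-bearing mudstone')")
conn.execute("INSERT INTO analyses (sample_id, analysis_type, data_uri) VALUES ('SAMPLE-0001', 'Raman', 's3://bucket/raman/0001.csv')")
conn.execute("INSERT INTO analyses (sample_id, analysis_type, data_uri) VALUES ('SAMPLE-0001', 'XRD', 's3://bucket/xrd/0001.xy')")

# all analyses associated with a single sample
for row in conn.execute("SELECT analysis_type, data_uri FROM analyses WHERE sample_id = 'SAMPLE-0001'"):
    print(row)
```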

  5. Counting SET-free sets

    OpenAIRE

    Harman, Nate

    2016-01-01

    We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.
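
    For small parameters the count can be checked by brute force: cards are vectors in F_3^n, and three cards form a SET exactly when they sum to zero coordinatewise (equivalently, each attribute is all-same or all-different). The sketch below enumerates k-subsets directly; it illustrates the problem statement only and does not reproduce the paper's algebraic approach.

```python
# Brute-force count of k-element SET-free subsets of an n-dimensional SET deck.
# Practical only for small n and k.
from itertools import product, combinations

def count_setfree(n, k):
    deck = list(product(range(3), repeat=n))
    def is_set(a, b, c):
        return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))
    total = 0
    for cards in combinations(deck, k):
        if not any(is_set(*triple) for triple in combinations(cards, 3)):
            total += 1
    return total

print(count_setfree(2, 3))   # 3-card SET-free sets in the 9-card (n = 2) deck
print(count_setfree(2, 4))
```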

  6. Overcoming Barriers in Unhealthy Settings

    Directory of Open Access Journals (Sweden)

    Michael K. Lemke

    2016-03-01

Full Text Available We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes.

  7. Using Google Glass in Surgical Settings: Systematic Review.

    Science.gov (United States)

    Wei, Nancy J; Dougherty, Bryn; Myers, Aundria; Badawy, Sherif M

    2018-03-06

In recent years, wearable devices have become increasingly attractive and the health care industry has been especially drawn to Google Glass because of its ability to serve as a head-mounted wearable device. The use of Google Glass in surgical settings is of particular interest due to the hands-free device's potential to streamline workflow and maintain sterile conditions in an operating room environment. The aim is to conduct a systematic evaluation of the literature on the feasibility and acceptability of using Google Glass in surgical settings and to assess the potential benefits and limitations of its application. The literature was searched for articles published between January 2013 and May 2017. The search included the following databases: PubMed MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature, PsycINFO (EBSCO), and IEEE Xplore. Two reviewers independently screened titles and abstracts and assessed full-text articles. Original research articles that evaluated the feasibility, usability, or acceptability of using Google Glass in surgical settings were included. This review was completed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Of the 520 records obtained, 31 met all predefined criteria and were included in this review. Google Glass was used in various surgical specialties. Most studies were in the United States (23/31, 74%) and all were conducted in hospital settings: 29 in adult hospitals (29/31, 94%) and two in children's hospitals (2/31, 7%). Sample sizes of participants who wore Google Glass ranged from 1 to 40. Of the 31 studies, 25 (81%) were conducted under real-time conditions or actual clinical care settings, whereas the other six (19%) were conducted in a simulated environment. Twenty-six studies were pilot or feasibility studies (84%), three were case studies (10%), and two were randomized controlled trials (6%). The majority of studies examined the potential use of

  8. Determinants in the development of advanced nursing practice: a case study of primary-care settings in Hong Kong.

    Science.gov (United States)

    Twinn, Sheila; Thompson, David R; Lopez, Violeta; Lee, Diana T F; Shiu, Ann T Y

    2005-01-01

Different factors have been shown to influence the development of models of advanced nursing practice (ANP) in primary-care settings. Although ANP is being developed in hospitals in Hong Kong, China, it remains undeveloped in primary care and little is known about the factors determining the development of such a model. The aims of the present study were to investigate the contribution of different models of nursing practice to the care provided in primary-care settings in Hong Kong, and to examine the determinants influencing the development of a model of ANP in such settings. A multiple case study design was selected using both qualitative and quantitative methods of data collection. Sampling methods reflected the population groups and stage of the case study. Sampling included a total population of 41 nurses from whom a secondary volunteer sample was drawn for face-to-face interviews. In each case study, a convenience sample of 70 patients was recruited, from whom 10 were selected purposively for a semi-structured telephone interview. An opportunistic sample of healthcare professionals was also selected. The within-case and cross-case analysis demonstrated four major determinants influencing the development of ANP: (1) current models of nursing practice; (2) the use of skills mix; (3) the perceived contribution of ANP to patient care; and (4) patients' expectations of care. The level of autonomy of individual nurses was considered particularly important. These determinants were used to develop a model of ANP for a primary-care setting. In conclusion, although the findings highlight the complexity determining the development and implementation of ANP in primary care, the proposed model suggests that definitions of advanced practice are appropriate to a range of practice models and cultural settings. However, the findings highlight the importance of assessing the effectiveness of such models in terms of cost and long-term patient outcomes.

  9. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
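
    The two sampling schemes compared above can be sketched in a few lines: fixed sampling keeps every s-th k-mer of the reference, while minimizer sampling keeps, in each window of w consecutive k-mers, the (here lexicographically) smallest one, so query k-mers can be sampled the same way. Parameters and the toy sequence below are illustrative only; this is not the authors' benchmarking code.

```python
# Minimal sketches of fixed sampling and minimizer sampling of k-mers.
def fixed_sampling(seq, k, step):
    return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, step)}

def minimizer_sampling(seq, k, w):
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = {}
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        j = min(range(w), key=lambda x: window[x])   # leftmost smallest k-mer
        picked[start + j] = window[j]
    return picked

seq = "ACGTACGGTACGTTACGATACGT"
print("fixed    :", sorted(fixed_sampling(seq, k=5, step=4).items()))
print("minimizer:", sorted(minimizer_sampling(seq, k=5, w=4).items()))
```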

  10. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  11. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  12. Comparing Microbiome Sampling Methods in a Wild Mammal: Fecal and Intestinal Samples Record Different Signals of Host Ecology, Evolution.

    Science.gov (United States)

    Ingala, Melissa R; Simmons, Nancy B; Wultsch, Claudia; Krampis, Konstantinos; Speer, Kelly A; Perkins, Susan L

    2018-01-01

    The gut microbiome is a community of host-associated symbiotic microbes that fulfills multiple key roles in host metabolism, immune function, and tissue development. Given the ability of the microbiome to impact host fitness, there is increasing interest in studying the microbiome of wild animals to better understand these communities in the context of host ecology and evolution. Human microbiome research protocols are well established, but wildlife microbiome research is still a developing field. Currently, there is no standardized set of best practices guiding the collection of microbiome samples from wildlife. Gut microflora are typically sampled either by fecal collection, rectal swabbing, or by destructively sampling the intestinal contents of the host animal. Studies rarely include more than one sampling technique and no comparison of these methods currently exists for a wild mammal. Although some studies have hypothesized that the fecal microbiome is a nested subset of the intestinal microbiome, this hypothesis has not been formally tested. To address these issues, we examined guano (feces) and distal intestinal mucosa from 19 species of free-ranging bats from Lamanai, Belize, using 16S rRNA amplicon sequencing to compare microbial communities across sample types. We found that the diversity and composition of intestine and guano samples differed substantially. In addition, we conclude that signatures of host evolution are retained by studying gut microbiomes based on mucosal tissue samples, but not fecal samples. Conversely, fecal samples retained more signal of host diet than intestinal samples. These results suggest that fecal and intestinal sampling methods are not interchangeable, and that these two microbiotas record different information about the host from which they are isolated.

  13. Comparing Microbiome Sampling Methods in a Wild Mammal: Fecal and Intestinal Samples Record Different Signals of Host Ecology, Evolution

    Directory of Open Access Journals (Sweden)

    Melissa R. Ingala

    2018-05-01

Full Text Available The gut microbiome is a community of host-associated symbiotic microbes that fulfills multiple key roles in host metabolism, immune function, and tissue development. Given the ability of the microbiome to impact host fitness, there is increasing interest in studying the microbiome of wild animals to better understand these communities in the context of host ecology and evolution. Human microbiome research protocols are well established, but wildlife microbiome research is still a developing field. Currently, there is no standardized set of best practices guiding the collection of microbiome samples from wildlife. Gut microflora are typically sampled either by fecal collection, rectal swabbing, or by destructively sampling the intestinal contents of the host animal. Studies rarely include more than one sampling technique and no comparison of these methods currently exists for a wild mammal. Although some studies have hypothesized that the fecal microbiome is a nested subset of the intestinal microbiome, this hypothesis has not been formally tested. To address these issues, we examined guano (feces) and distal intestinal mucosa from 19 species of free-ranging bats from Lamanai, Belize, using 16S rRNA amplicon sequencing to compare microbial communities across sample types. We found that the diversity and composition of intestine and guano samples differed substantially. In addition, we conclude that signatures of host evolution are retained by studying gut microbiomes based on mucosal tissue samples, but not fecal samples. Conversely, fecal samples retained more signal of host diet than intestinal samples. These results suggest that fecal and intestinal sampling methods are not interchangeable, and that these two microbiotas record different information about the host from which they are isolated.

  14. Apology in the criminal justice setting: evidence for including apology as an additional component in the legal system.

    Science.gov (United States)

    Petrucci, Carrie J

    2002-01-01

    The criminal justice system has reached unprecedented scope in the United States, with over 6.4 million people under some type of supervision. Remedies that have the potential to reduce this number are continually being sought. This article analyzes an innovative strategy currently being reconsidered in criminal justice: the apology. Despite a legal system that only sporadically acknowledges it, evidence for the use of apology is supported by social science research, current criminal justice theories, case law, and empirical studies. Social psychological, sociological and socio-legal studies pinpoint the elements and function of apology, what makes apologies effective, and concerns about apology if it were implemented in the criminal justice system. Theoretical evidence is examined (including restorative justice, therapeutic jurisprudence, crime, shame, and reintegration) to explore the process of apology in the criminal justice context. Attribution theory and social conduct theory are used to explain the apology process specifically for victims and offenders. A brief examination of case law reveals that though apology has no formal place in criminal law, it has surfaced recently under the federal sentencing guidelines. Finally, empirical evidence in criminal justice settings reveals that offenders want to apologize and victims desire an apology. Moreover, by directly addressing the harmful act, apology may be the link to reduced recidivism for offenders, as well as empowerment for victims. This evidence combined suggests that apology is worthy of further study as a potentially valuable addition to the criminal justice process. Copyright 2002 John Wiley & Sons, Ltd.

  15. Infection Control Practices in Dental Settings - A Review

    Directory of Open Access Journals (Sweden)

    Mohammad Mukhit Kazi

    2012-01-01

Full Text Available In the era of HIV/AIDS it is essential to follow infection prevention protocols in all health care settings, including dental settings. The present review article highlights the various preventive protocols to be followed in dental settings, ranging from simple hand hygiene to biomedical waste segregation.

  16. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

Multi-increment sampling (MIS) is a technique of combining many increments of soil from a number of points within an exposure area, developed by Enviro Stat (trademarked). The project demonstrates a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous settings. The area of interest is divided into a set of decision (exposure) units, and one or several discrete or small-scale composite soil samples are collected to represent each decision unit.

  17. Family Adversity and Resilience Measures in Pediatric Acute Care Settings.

    Science.gov (United States)

    O'Malley, Donna M; Randell, Kimberly A; Dowd, M Denise

    2016-01-01

Adverse childhood experiences (ACEs) impact health across the life course. The purpose of this study was to identify caregiver ACEs, current adversity, and resilience in families seeking care in pediatric acute care settings. Study aims included identifying demographic characteristics, current adversities, and resilience measures associated with caregiver ACEs ≥4. A cross-sectional survey study design was used and a convenience sample (n = 470) recruited at emergency and urgent care settings of a large Midwest pediatric hospital system. Measures were self-reported. The original 10-item ACEs questionnaire measured caregiver past adversity. Current adversity was measured using the 10-item IHELP. The six-item Brief Resiliency Scale measured resilience, and the WHO-5 Well-Being Index was used to measure depressive affect. Compared to participants with an ACEs score of 0-3, participants with ACEs ≥4 were more likely to have multiple current adversities, increased risk of depression, and lower resilience. Caregivers using pediatric acute care settings carry a high burden of ACEs and current adversities. Caregiver ACEs are associated with current child experiences of adversity. Caregivers' socioeconomic status and education level may not be an accurate indicator of a family's risks or needs. Pediatric acute care settings offer opportunities to access, intervene, and prevent childhood adversity. © 2016 Wiley Periodicals, Inc.

  18. Optimizing sampling strategy for radiocarbon dating of Holocene fluvial systems in a vertically aggrading setting

    International Nuclear Information System (INIS)

    Toernqvist, T.E.; Dijk, G.J. Van

    1993-01-01

The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available data base consists of almost 100 14C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, 14C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, 14C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 14C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs

  19. Gram-negative and -positive bacteria differentiation in blood culture samples by headspace volatile compound analysis.

    Science.gov (United States)

    Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören

    2016-12-01

    Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or the analysis of headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of volatile compound based microorganism differentiation into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood culture samples were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content analysis was carried out via an auto sampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One-hundred-fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In anaerobic samples, ten discriminators were identified by the random forest method allowing for bacteria differentiation into Gram-negative and -positive (error rate: 16.7 % in validation data set). For aerobic samples the error rate was not better than random. In anaerobic blood culture samples of patients IMR-MS based headspace VC composition analysis facilitates bacteria differentiation into Gram-negative and -positive.
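
    The analysis pattern described above, training a random forest on headspace VC intensities with a random split into training and validation data, can be sketched as below. The feature matrix is simulated; it is not the IMR-MS data set, and the discriminative channels are invented.

```python
# Minimal sketch of random-forest classification of Gram-negative vs.
# Gram-positive cultures from headspace VC intensities, with a random
# train/validation split. All data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_masses = 130, 40                      # e.g. intensities for 40 m/z channels
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_masses))
y = rng.integers(0, 2, size=n_samples)             # 0 = Gram-negative, 1 = Gram-positive
X[y == 1, :5] *= 2.0                               # pretend a few VCs are discriminative

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)
error_rate = 1.0 - clf.score(X_va, y_va)
print(f"validation error rate: {error_rate:.1%}")
# feature importances point at the most discriminating masses ("discriminators")
print("top discriminator indices:", np.argsort(clf.feature_importances_)[::-1][:5])
```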

  20. Collection and preparation of samples for gamma spectrometry

    International Nuclear Information System (INIS)

    Pan Jingquan

    1994-01-01

The paper presents the basic principles of sample collection and preparation for gamma spectrometry: setting up a unified sampling program, methods and procedures, sample packing, transportation and storage, determination of sample quantity, and sample pretreatment and preparation of the samples to be analysed. The paper also briefly describes the main methods and special issues of sampling and preparation for common environmental and biological samples, such as air, water, grass, soil and foods.

  1. Urine drug screening in the medical setting.

    Science.gov (United States)

    Hammett-Stabler, Catherine A; Pesce, Amadeo J; Cannon, Donald J

    2002-01-01

The term drug screen is a misnomer since it implies screening for all drugs, which is not possible. Current practice is to limit the testing to the examination of serum for several drugs such as ethanol, acetaminophen, salicylate, and of urine for several specific drugs or classes of drugs. In the emergency setting the screen should be performed in less than one hour. Controversies continue to exist regarding the value of urine drug testing in the medical setting. The reasons for these include the drugs involved, the sample, the methods utilized to perform the tests, and the level of understanding of the physician using the data, each of which is closely related to the others. Current automated methods provide the rapid results demanded in emergency situations, but are often designed for, or adapted from, workplace testing and are not necessarily optimized for clinical applications. Furthermore, the use of these methods without consideration of the frequency with which the drugs are found in a given area is not cost-effective. The laboratory must understand the limitations of the assays used and provide this information to the physician. Additionally, the laboratory and the physicians using the data must cooperate to determine which drugs are appropriate and necessary to measure for their institution and clinical setting. In doing so it should be remembered that for many drugs, the sample, urine, contains the end product(s) of drug metabolism, not the parent drug. Furthermore, it is necessary to understand the pharmacokinetic parameters of the drug of interest when interpreting data. Finally, while testing for some drugs may not appear cost-effective, the prevention or reduction of morbidity and mortality may offset any laboratory costs. While the literature is replete with studies concerning new methods and a few regarding physician understanding, there are none that we could find that thoroughly, objectively, and fully addressed the issues of utility and cost-effectiveness.

  2. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation
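
    The core idea, deriving view angles from the Cartesian grid so that rays pass exactly through grid points, can be sketched by taking angles arctan(q/p) for small coprime integer pairs (p, q), with a per-view detector spacing of d/sqrt(p^2 + q^2) for pixel spacing d. This is an illustration of the general principle, not necessarily the exact IFSS angle-selection rule of the paper.

```python
# Sketch: projection angles derived from the grid geometry (rational slopes),
# each paired with the detector sampling interval that keeps samples on grid points.
import math

def grid_angles(max_int=4, d=1.0):
    views = []
    for p in range(1, max_int + 1):
        for q in range(0, max_int + 1):
            if math.gcd(p, q) == 1:
                theta = math.degrees(math.atan2(q, p))
                views.append((theta, d / math.hypot(p, q)))  # (angle, sample spacing)
    return sorted(views)

for theta, ds in grid_angles(max_int=3):
    print(f"theta = {theta:6.2f} deg, detector spacing = {ds:.3f}")
```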

  3. Data Validation Package April 2016 Groundwater and Surface Water Sampling at the Monticello, Utah, Disposal and Processing Sites August 2016

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Jason [USDOE Office of Legacy Management, Washington, DC (United States); Smith, Fred [Navarro Research and Engineering, Oak Ridge, TN (United States)

    2016-08-01

This semiannual event includes sampling groundwater and surface water at the Monticello Disposal and Processing Sites. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated) and Program Directive MNT-2016-01. Complete sample sets were collected from 42 of 48 planned locations (9 of 9 former mill site wells, 13 of 13 downgradient wells, 7 of 9 downgradient permeable reactive barrier wells, 4 of 7 seeps and wetlands, and 9 of 10 surface water locations). Planned monitoring locations are shown in Attachment 1, Sampling and Analysis Work Order. Locations R6-M3, SW00-01, Seep 1, Seep 2, and Seep 5 were not sampled due to insufficient water availability. A partial sample was collected at location R4-M3 due to insufficient water. All samples from the permeable reactive barrier wells were filtered as specified in the program directive. Duplicate samples were collected from surface water location Sorenson and from monitoring wells 92-07 and R10-M1. Water levels were measured at all sampled wells and an additional set of wells. See Attachment 2, Trip Report, for additional details. The contaminants of concern (COCs) for the Monticello sites are arsenic, manganese, molybdenum, nitrate + nitrite as nitrogen (nitrate + nitrite as N), selenium, uranium, and vanadium. Locations with COCs that exceeded remediation goals are listed in Table 1 and Table 2. Time-concentration graphs of the COCs for all groundwater and surface water locations are included in Attachment 3, Data Presentation. An assessment of anomalous data is included in Attachment 4.

  4. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  5. Adaptive Angular Sampling for SPECT Imaging

    OpenAIRE

    Li, Nan; Meng, Ling-Jian

    2011-01-01

This paper presents an analytical approach for performing adaptive angular sampling in single photon emission computed tomography (SPECT) imaging. It allows for a rapid determination of the optimum sampling strategy that minimizes image variance in regions-of-interest (ROIs). The proposed method consists of three key components: (a) a set of closed-form equations for evaluating image variance and resolution attainable with a given sampling strategy, (b) a gradient-based algor...

  6. Autonomous sample switcher for Mössbauer spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    López, J. H., E-mail: jolobotero@gmail.com; Restrepo, J., E-mail: jrestre@gmail.com [University of Antioquia, Group of Magnetism and Simulation, Institute of Physics (Colombia); Barrero, C. A., E-mail: cesar.barrero.meneses@gmail.com [University of Antioquia, Group of Solid State Physics, Institute of Physics (Colombia); Tobón, J. E., E-mail: nobotj@gmail.com; Ramírez, L. F., E-mail: luisf.ramirez@udea.edu.co; Jaramillo, J., E-mail: jdex87@gmail.com [University of Antioquia, Group of Scientific Instrumentation and Microelectronics, Institute of Physics (Colombia)

    2017-11-15

    In this work we show the design and implementation of an autonomous sample switcher to be used as part of the experimental setup in transmission Mössbauer spectroscopy; the device can be extended to other spectroscopic techniques employing radioactive sources. The changer is intended to minimize radiation exposure times for users and technical staff and to optimize the use of radioactive sources without compromising the resolution of the measurements or spectra. The proposal is motivated firstly by the potential hazards arising from the use of radioactive sources and secondly by their high cost and, in some cases, short lifetimes, which make suitable and optimal use of the sources crucial. The switcher system includes a PIC microcontroller for simple tasks involving sample displacement and positioning, in addition to a virtual instrument developed using LabView. The samples are shuffled sequentially, with the number of counts and the signal-to-noise ratio as the selection criteria, while the virtual instrument allows remote monitoring of the status of the spectra from a PC via the Internet and supports control decisions. As an example, we show a case study involving a series of akaganeite samples. An efficiency and economic analysis is finally presented and discussed.
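
    The abstract describes the switching criteria only in general terms (accumulated counts and signal-to-noise ratio). A minimal sketch of such a decision loop is given below; the threshold values, the SNR estimate, and the `acquire`/`move_to_position` hooks are all invented placeholders standing in for the PIC/LabView layer, not details from the paper.

```python
# Illustrative thresholds; the actual stop criteria used by the instrument
# described above are not specified in this sketch.
MIN_COUNTS = 500_000   # total counts before a spectrum is considered finished
MIN_SNR = 30.0         # signal-to-noise ratio required before moving on

def snr(spectrum):
    """Crude signal-to-noise estimate: absorption depth over baseline noise."""
    baseline = sum(spectrum[:50]) / 50.0 if spectrum[:50] else 0.0
    noise = baseline ** 0.5 or 1.0           # Poisson noise of the baseline
    return (baseline - min(spectrum)) / noise

def run_changer(samples, acquire, move_to_position):
    """Cycle through samples, advancing only once the stop criteria are met.

    `acquire(seconds)` must return the spectrum (list of channel counts)
    accumulated so far for the current sample; `move_to_position(i)` drives
    the positioning hardware.  Both are hypothetical hardware hooks.
    """
    for position, name in enumerate(samples):
        move_to_position(position)
        spectrum = acquire(seconds=600)       # first 10-minute slice
        while sum(spectrum) < MIN_COUNTS or snr(spectrum) < MIN_SNR:
            spectrum = acquire(seconds=600)   # keep accumulating
        print(f"{name}: counts={sum(spectrum)}, SNR={snr(spectrum):.1f}")
```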

  7. Soot on snow in Iceland: First results on black carbon and organic carbon in Iceland 2016 snow and ice samples, including the glacier Solheimajökull

    Science.gov (United States)

    Meinander, Outi; Dagsson-Waldhauserova, Pavla; Gritsevich, Maria; Aurela, Minna; Arnalds, Olafur; Dragosics, Monika; Virkkula, Aki; Svensson, Jonas; Peltoniemi, Jouni; Kontu, Anna; Kivekäs, Niku; Leppäranta, Matti; de Leeuw, Gerrit; Laaksonen, Ari; Lihavainen, Heikki; Arslan, Ali N.; Paatero, Jussi

    2017-04-01

    New results on black carbon (BC) and organic carbon (OC) on snow and ice in Iceland in 2016 will be presented in connection with our earlier results on BC and OC on the Arctic seasonal snow surface, and with our 2013 and 2016 experiments on the effects of light-absorbing impurities, including Icelandic dust, on snow albedo, melt and density. Our sampling included the glacier Solheimajökull in Iceland. The mass balance of this glacier is negative, and over the last 20 years it has retreated by 900 meters at its southwestern corner. Icelandic snow and ice samples were not expected to contain high concentrations of BC, as domestic renewable hydro and geothermal sources cover 80 % of total energy consumption in Iceland. Our BC results on filters analyzed with a Thermal/Optical Carbon Aerosol Analyzer (OC/EC) confirm this assumption. Other potential soot sources in Iceland include agricultural burning, industry (aluminum and ferroalloy production and the fishing industry), open burning, residential heating and transport (shipping, road traffic, aviation). In contrast to the low BC, we found high concentrations of organic carbon in our Iceland 2016 samples; some of the possible reasons for this will be discussed in this presentation. Earlier, we measured and reported unexpectedly low snow albedo values of Arctic seasonally melting snow in Sodankylä, north of the Arctic Circle. Our low albedo results for melting snow have been confirmed by three independent data sets. We have attributed these low values to: (i) large snow grain sizes, up to 3 mm in diameter (seasonally melting snow); (ii) meltwater surrounding the grains and increasing the effective grain size; and (iii) absorption caused by impurities in the snow, with concentrations in snow of 87 ppb elemental carbon (black carbon) and 2894 ppb organic carbon. The high concentrations of carbon were due to air masses originating from the Kola Peninsula, Russia.

  8. Sample representativeness verification of the FADN CZ farm business sample

    Directory of Open Access Journals (Sweden)

    Marie Prášilová

    2011-01-01

    Full Text Available Sample representativeness verification is one of the key stages of statistical work. After joining the European Union, the Czech Republic also joined the Farm Accountancy Data Network (FADN) system of the Union. This is a sample of bodies and companies doing business in agriculture. Detailed production and economic data on the results of farming business are collected from that sample annually, and results for the entire population of the country's farms are then estimated and assessed. It is important, therefore, that the sample be representative. Representativeness is assessed both in terms of the number of farms included in the survey and in terms of how well the measures and indices agree with those of the population. The paper deals with the statistical techniques and methods used to verify the representativeness of the FADN CZ sample, including the procedure for stating the necessary sample size. The Czech farm population data have been obtained from the Czech Statistical Office data bank.

  9. Two methods of self-sampling compared to clinician sampling to detect reproductive tract infections in Gugulethu, South Africa

    NARCIS (Netherlands)

    van de Wijgert, Janneke; Altini, Lydia; Jones, Heidi; de Kock, Alana; Young, Taryn; Williamson, Anna-Lise; Hoosen, Anwar; Coetzee, Nicol

    2006-01-01

    To assess the validity, feasibility, and acceptability of 2 methods of self-sampling compared to clinician sampling during a speculum examination. To improve screening for reproductive tract infections (RTIs) in resource-poor settings. In a public clinic in Cape Town, 450 women underwent a speculum

  10. Sampling by electro-erosion on irradiated materials

    International Nuclear Information System (INIS)

    Riviere, M.; Pizzanelli, J.P.

    1986-05-01

    Sampling of irradiated materials, in particular for the study of the mechanical properties of steels in the FAST NEUTRON program, required the installation in a hot cell of an electro-erosion machining device. This device allows the machining of fracture-toughness, tensile and resilience (impact) test pieces [fr]

  11. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.

  12. Improved phylogenomic taxon sampling noticeably affects nonbilaterian relationships.

    Science.gov (United States)

    Pick, K S; Philippe, H; Schreiber, F; Erpenbeck, D; Jackson, D J; Wrede, P; Wiens, M; Alié, A; Morgenstern, B; Manuel, M; Wörheide, G

    2010-09-01

    Despite expanding data sets and advances in phylogenomic methods, deep-level metazoan relationships remain highly controversial. Recent phylogenomic analyses depart from classical concepts in recovering ctenophores as the earliest branching metazoan taxon and propose a sister-group relationship between sponges and cnidarians (e.g., Dunn CW, Hejnol A, Matus DQ, et al. (18 co-authors). 2008. Broad phylogenomic sampling improves resolution of the animal tree of life. Nature 452:745-749). Here, we argue that these results are artifacts stemming from insufficient taxon sampling and long-branch attraction (LBA). By increasing taxon sampling from previously unsampled nonbilaterians and using an identical gene set to that reported by Dunn et al., we recover monophyletic Porifera as the sister group to all other Metazoa. This suggests that the basal position of the fast-evolving Ctenophora proposed by Dunn et al. was due to LBA and that broad taxon sampling is of fundamental importance to metazoan phylogenomic analyses. Additionally, saturation in the Dunn et al. character set is comparatively high, possibly contributing to the poor support for some nonbilaterian nodes.

  13. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  14. Causal Set Generator and Action Computer

    OpenAIRE

    Cunningham, William; Krioukov, Dmitri

    2017-01-01

    The causal set approach to quantum gravity has gained traction over the past three decades, but numerical experiments involving causal sets have been limited to relatively small scales. The software suite presented here provides a new framework for the generation and study of causal sets. Its efficiency surpasses previous implementations by several orders of magnitude. We highlight several important features of the code, including the compact data structures, the $O(N^2)$ causal set generatio...

  15. Monitoring human papillomavirus prevalence in urine samples: a review

    Directory of Open Access Journals (Sweden)

    Enerly E

    2013-03-01

    Full Text Available Espen Enerly, Cecilia Olofsson, Mari Nygård. Department of Research, Cancer Registry of Norway, Oslo, Norway. Abstract: Human papillomavirus (HPV) is the main cause of cervical cancer, and many countries now offer vaccination against HPV to girls by way of government-funded national immunization programs. Monitoring HPV prevalence in adolescents could offer a near-term biological measure of vaccine impact, and urine sampling may be an attractive large-scale method that could be used for this purpose. Our objective was to provide an overview of the literature on HPV DNA detection in urine samples, with an emphasis on adolescents. We searched the PubMed database using the terms “HPV” and “urine” and identified 21 female and 14 male study populations in which HPV prevalence in urine samples was reported, four of which included only asymptomatic female adolescents. We provide herein an overview of the recruitment setting, age, urine sampling procedure, lesion type, HPV assay, and HPV prevalence in urine samples and other urogenital samples for the studies included in this review. In female study populations, concordance for any HPV type and type-specific concordance in paired urine and cervical samples are provided in addition to sensitivity and specificity. We concluded that few studies on HPV prevalence in urine samples have been performed in asymptomatic female adolescent populations but that urine samples may be a useful alternative to cervical samples to monitor changes in HPV prevalence in females in the post-HPV vaccination era. However, care should be taken when extrapolating HPV findings from urine samples to the cervix. In males, urine samples do not seem to be optimal for monitoring HPV prevalence due to a low human genomic DNA content and HPV DNA detection rate compared to other urogenital sites. In each situation the costs and benefits of HPV DNA detection in urine compared to alternative monitoring options should be carefully

  16. The COG database: an updated version includes eukaryotes

    Directory of Open Access Journals (Sweden)

    Sverdlov Alexander V

    2003-09-01

    Full Text Available Abstract Background The availability of multiple, essentially complete genome sequences of prokaryotes and eukaryotes spurred both the demand and the opportunity for the construction of an evolutionary classification of genes from these genomes. Such a classification system based on orthologous relationships between genes appears to be a natural framework for comparative genomics and should facilitate both functional annotation of genomes and large-scale evolutionary studies. Results We describe here a major update of the previously developed system for delineation of Clusters of Orthologous Groups of proteins (COGs) from the sequenced genomes of prokaryotes and unicellular eukaryotes and the construction of clusters of predicted orthologs for 7 eukaryotic genomes, which we named KOGs after eukaryotic orthologous groups. The COG collection currently consists of 138,458 proteins, which form 4873 COGs and comprise 75% of the 185,505 predicted proteins encoded in 66 genomes of unicellular organisms. The eukaryotic orthologous groups (KOGs) include proteins from 7 eukaryotic genomes: three animals (the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster and Homo sapiens), one plant (Arabidopsis thaliana), two fungi (Saccharomyces cerevisiae and Schizosaccharomyces pombe), and the intracellular microsporidian parasite Encephalitozoon cuniculi. The current KOG set consists of 4852 clusters of orthologs, which include 59,838 proteins, or ~54% of the 110,655 analyzed eukaryotic gene products. Compared to the coverage of the prokaryotic genomes with COGs, a considerably smaller fraction of eukaryotic genes could be included into the KOGs; the addition of new eukaryotic genomes is expected to result in a substantial increase in the coverage of eukaryotic genomes with KOGs. Examination of the phyletic patterns of KOGs reveals a conserved core represented in all analyzed species and consisting of ~20% of the KOG set. This conserved portion of the

  17. FUZZY ACCEPTANCE SAMPLING AND CHARACTERISTIC CURVES

    Directory of Open Access Journals (Sweden)

    Ebru Turanoğlu

    2012-02-01

    Full Text Available Acceptance sampling is primarily used for the inspection of incoming or outgoing lots. Acceptance sampling refers to the application of specific sampling plans to a designated lot or sequence of lots. The parameters of acceptance sampling plans are sample sizes and acceptance numbers. In some cases, it may not be possible to define acceptance sampling parameters as crisp values. These parameters can be expressed by linguistic variables. The fuzzy set theory can be successfully used to cope with the vagueness in these linguistic expressions for acceptance sampling. In this paper, the main distributions of acceptance sampling plans are handled with fuzzy parameters and their acceptance probability functions are derived. Then the characteristic curves of acceptance sampling are examined under fuzziness. Illustrative examples are given.
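
    As a generic illustration of the idea described above (not the paper's fuzzy formalism), when the lot's defect fraction is only known as an interval rather than a crisp value, the acceptance probability of a single-sampling plan (n, c) becomes a band rather than a single operating-characteristic curve. The plan parameters and interval below are invented for the example.

```python
from scipy.stats import binom

def acceptance_band(n, c, p_low, p_high, steps=50):
    """Operating-characteristic band for a single-sampling plan (n, c).

    A lot is accepted if at most c defectives are found among n inspected
    items, so P(accept | p) = Binom(n, p).cdf(c).  With the defect fraction
    only known to lie in [p_low, p_high] (a crude interval stand-in for a
    fuzzy parameter), the plan yields an interval of acceptance probabilities.
    """
    curve = [(p_low + (p_high - p_low) * i / steps,
              binom.cdf(c, n, p_low + (p_high - p_low) * i / steps))
             for i in range(steps + 1)]
    # P(accept) decreases in p, so the band edges sit at the interval ends.
    return binom.cdf(c, n, p_high), binom.cdf(c, n, p_low), curve

if __name__ == "__main__":
    lower, upper, _ = acceptance_band(n=80, c=2, p_low=0.01, p_high=0.03)
    print(f"P(accept) lies between {lower:.3f} and {upper:.3f}")
```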

  18. Square-wave anodic-stripping voltammetric determination of Cd, Pb and Cu in wine: Set-up and optimization of sample pre-treatment and instrumental parameters

    International Nuclear Information System (INIS)

    Illuminati, Silvia; Annibaldi, Anna; Truzzi, Cristina; Finale, Carolina; Scarponi, Giuseppe

    2013-01-01

    For the first time, square-wave anodic-stripping voltammetry (SWASV) was set up and optimized for the determination of Cd, Pb and Cu in white wine after UV photo-oxidative digestion of the sample. The best procedure for the sample pre-treatment consisted in a 6-h UV irradiation of diluted, acidified wine, with the addition of ultrapure H2O2 (three sequential additions during the irradiation). Due to differences in metal concentration, separate measurements were carried out for Cd (deposition potential −950 mV vs. Ag/AgCl/3 M KCl, deposition time 15 min) and simultaneously for Pb and Cu (E_d −750 mV, t_d 30 s). The optimum settings of the main instrumental parameters, evaluated also in terms of the signal-to-noise ratio, were as follows: E_SW 20 mV, f 100 Hz, ΔE_step 8 mV, t_step 100 ms, t_wait 60 ms, t_delay 2 ms, t_meas 3 ms. The electrochemical behaviour was reversible and bielectronic for Cd and Pb, and kinetically controlled and monoelectronic for Cu. Good accuracy was found both when the recovery procedure was used and when the results were compared with data obtained by differential pulse anodic stripping voltammetry. The linearity of the response was verified up to ∼4 μg/L for Cd and Pb and ∼15 μg/L for Cu. The detection limits for t_d = 5 min in the 10-times-diluted, UV-digested sample were (ng/L): Cd 7.0, Pb 1.2 and Cu 6.6, which are well below those of currently applied methods. Application to a Verdicchio dei Castelli di Jesi white wine revealed concentration levels of Cd ∼0.2, Pb ∼10 and Cu ∼30 μg/L, with repeatabilities (±RSD%) of Cd ±6%, Pb ±5% and Cu ±10%.

  19. Irradiation chamber and sample changer for biological samples

    International Nuclear Information System (INIS)

    Kraft, G.; Daues, H.W.; Fischer, B.; Kopf, U.; Liebold, H.P.; Quis, D.; Stelzer, H.; Kiefer, J.; Schoepfer, F.; Schneider, E.

    1980-01-01

    This paper describes an irradiation system with which living cells of different origin are irradiated with heavy ion beams (18 ≤ Z ≤ 92) at energies up to 10 MeV/amu. The system consists of a beam monitor connected to the vacuum system of the accelerator and the irradiation chamber, containing the biological samples under atmospheric pressure. The requirements and aims of the setup are discussed. The first results with Saccharomyces cerevisiae and Chinese hamster tissue cells are presented. (orig.)

  20. Uranium isotope ratio measurements in field settings

    International Nuclear Information System (INIS)

    Shaw, R.W.; Barshick, C.M.; Young, J.P.; Ramsey, J.M.

    1997-01-01

    The authors have developed a technique for uranium isotope ratio measurements of powder samples in field settings. Such a method will be invaluable for environmental studies, radioactive waste operations, and decommissioning and decontamination operations. Immediate field data can help guide an ongoing sampling campaign. The measurement encompasses glow discharge sputtering from pressed sample hollow cathodes, high resolution laser spectroscopy using conveniently tunable diode lasers, and optogalvanic detection. At 10% 235U enrichment and above, the measurement precision for 235U/(235U + 238U) isotope ratios was ±3%; it declined to ±15% for 0.3% (i.e., depleted) samples. A prototype instrument was constructed and is described.

  1. Development of new auxiliary basis functions of the Karlsruhe segmented contracted basis sets including diffuse basis functions (def2-SVPD, def2-TZVPPD, and def2-QZVPPD) for RI-MP2 and RI-CC calculations.

    Science.gov (United States)

    Hellweg, Arnim; Rappoport, Dmitrij

    2015-01-14

    We report optimized auxiliary basis sets for use with the Karlsruhe segmented contracted basis sets including moderately diffuse basis functions (Rappoport and Furche, J. Chem. Phys., 2010, 133, 134105) in resolution-of-the-identity (RI) post-self-consistent field (post-SCF) computations for the elements H-Rn (except lanthanides). The errors of the RI approximation using the optimized auxiliary basis sets are analyzed on a comprehensive test set of molecules containing the most common oxidation states of each element and do not exceed those of the corresponding unaugmented basis sets. During these studies, unsatisfactory performance of the def2-SVP and def2-QZVPP auxiliary basis sets for barium was found, and improved sets are provided. We establish the versatility of the def2-SVPD, def2-TZVPPD, and def2-QZVPPD basis sets for RI-MP2 and RI-CC (coupled-cluster) energy and property calculations. The influence of diffuse basis functions on correlation energy, basis set superposition error, atomic electron affinity, dipole moments, and computational timings is evaluated at different levels of theory using benchmark sets and showcase examples.

  2. Model Selection and Evaluation Based on Emerging Infectious Disease Data Sets including A/H1N1 and Ebola

    Directory of Open Access Journals (Sweden)

    Wendi Liu

    2015-01-01

    Full Text Available The aim of the present study is to apply simple ODE models to the spread of emerging infectious diseases and to show the importance of model selection in estimating parameters, the basic reproduction number, the turning point, and the final size. To quantify the plausibility of each model, given the data and the set of four models (Logistic, Gompertz, Rosenzweig, and Richards), Bayes factors are calculated, and precise estimates of the best-fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI 1.3506-1.3537), 1.2101 (95% CI 1.2084-1.2119), 3.0234 (95% CI 2.6063-3.4881), and 1.9018 (95% CI 1.8565-1.9478), the turning points are November 7, November 17, October 2, and November 3, 2014, and the final sizes until December 2015 are 25794 (95% CI 25630-25958), 3916 (95% CI 3865-3967), 9886 (95% CI 9740-10031), and 12633 (95% CI 12515-12750) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing emerging infectious diseases, and that arbitrarily picking a model without any consideration of alternatives is problematic.

  3. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
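
    The fitting step described above can be sketched in a few lines (an illustration, not the authors' implementation): an inverse power law of the form acc(n) = a - b·n^(-c) is fitted to observed learning-curve points by weighted nonlinear least squares and then extrapolated. The functional form, the example points, and their standard deviations are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_power_law(n, a, b, c):
    """Classifier accuracy as a function of training-set size: a - b * n**(-c)."""
    return a - b * np.power(n, -c)

# Hypothetical learning-curve points: (training size, observed accuracy, std. dev.)
sizes = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
acc = np.array([0.71, 0.76, 0.80, 0.83, 0.85])
sd = np.array([0.05, 0.04, 0.03, 0.02, 0.02])

# Weighted fit: points estimated with smaller variance pull harder on the curve.
params, cov = curve_fit(inv_power_law, sizes, acc, p0=[0.9, 1.0, 0.5],
                        sigma=sd, absolute_sigma=True, maxfev=10000)

for target in (800, 1600):
    print(f"predicted accuracy at n={target}: {inv_power_law(target, *params):.3f}")
```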

  4. New sampling strategy using a Bayesian approach to assess iohexol clearance in kidney transplant recipients.

    Science.gov (United States)

    Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourcʼh, P; Blasco, H

    2012-06-01

    Glomerular filtration rate (GFR) measurement is a major issue in kidney transplant recipients for clinicians. GFR can be determined by estimating the plasma clearance of iohexol, a nonradiolabeled compound. For practical and convenient application for patients and caregivers, it is important that a minimal number of samples be drawn. The aim of this study was to develop and validate a Bayesian model with fewer samples for reliable prediction of GFR in kidney transplant recipients. Iohexol plasma concentration-time curves from 95 patients were divided into an index (n = 63) and a validation set (n = 32). Samples (n = 4-6 per patient) were obtained during the elimination phase, that is, between 120 and 270 minutes. Individual reference values of iohexol clearance (CL(iohexol)) were calculated from k (elimination slope) and V (volume of distribution from intercept). Individual CL(iohexol) values were then introduced into the Bröchner-Mortensen equation to obtain the GFR (reference value). A population pharmacokinetic model was developed from the index set and validated using standard methods. For the validation set, we tested various combinations of 1, 2, or 3 sampling times to estimate CL(iohexol). According to the different combinations tested, a maximum a posteriori Bayesian estimation of CL(iohexol) was obtained from population parameters. Individual estimates of GFR were compared with individual reference values through analysis of bias and precision. A capability analysis allowed us to determine the best sampling strategy for Bayesian estimation. A one-compartment model best described our data. Covariate analysis showed that uremia, serum creatinine, and age were significantly associated with k(e), and weight with V. The strategy including samples drawn at 120 and 270 minutes allowed accurate prediction of GFR (mean bias: -3.71%, mean imprecision: 7.77%). With this strategy, about 20% of individual predictions were outside the bounds of acceptance set at ± 10
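
    The reference calculation described above (a slope-intercept clearance from elimination-phase samples, before the Bröchner-Mortensen correction) can be sketched as follows. The dose, sampling times, and concentrations in the example are invented, and the correction step is only indicated in a comment; this is not the authors' population model.

```python
import numpy as np

def slope_intercept_clearance(dose_mg, times_min, conc_mg_per_l):
    """One-compartment iohexol clearance from elimination-phase samples.

    Fit log(concentration) against time to get the elimination slope k and the
    back-extrapolated concentration C0; then V = dose / C0 and CL = k * V.
    A Broechner-Mortensen-type correction would normally be applied to this
    slope-intercept clearance before reporting GFR (not shown here).
    """
    slope, intercept = np.polyfit(times_min, np.log(conc_mg_per_l), 1)
    k = -slope                       # elimination rate constant (1/min)
    c0 = np.exp(intercept)           # extrapolated concentration at t = 0 (mg/L)
    volume_l = dose_mg / c0          # apparent volume of distribution (L)
    return k * volume_l              # clearance in L/min

# Invented example: 3235 mg iohexol, samples at 120, 180 and 270 min
cl_l_per_min = slope_intercept_clearance(3235.0, [120, 180, 270], [55.0, 43.0, 30.0])
print(f"slope-intercept CL ~ {cl_l_per_min * 1000:.0f} mL/min")
```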

  5. Solvent Hold Tank Sample Results for MCU-16-701-702-703: May 2016 Monthly Sample and MCU-16-710-711-712: May 2016 Superwashed Sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-08-30

    The Savannah River National Laboratory (SRNL) received one set of Solvent Hold Tank (SHT) samples (MCU-16-701, MCU-16-702 and MCU-16-703), pulled on 05/23/2016, and another set of SHT samples (MCU-16-710, MCU-16-711, and MCU-16-712), pulled on 05/28/2016 after the solvent was superwashed with 300 mM sodium hydroxide, for analysis. Samples MCU-16-701, MCU-16-702, and MCU-16-703 were combined into one sample (MCU-16-701-702-703), and samples MCU-16-710, MCU-16-711, and MCU-16-712 were combined into one sample (MCU-16-710-711-712). Of the two composite samples, MCU-16-710-711-712 represents the current chemical state of the solvent at MCU. All analytical conclusions are based on the chemical analysis of MCU-16-710-711-712. There were no chemical differences between MCU-16-701-702-703 and the superwashed MCU-16-710-711-712. Analysis of the composite sample MCU-16-710-711-712 indicated the Isopar™L concentration is above its nominal level (102%). The modifier (CS-7SB) is 16% below its nominal concentration, while the TiDG and MaxCalix concentrations are at and above their nominal concentrations, respectively. The TiDG level has begun to decrease, and it is 7% below its nominal level as of May 28, 2016. Based on this current analysis, the levels of TiDG, Isopar™L, MaxCalix, and modifier are sufficient for continuing operation but are expected to decrease with time. Periodic characterization and trimming additions to the solvent are recommended.

  6. Reconstructing the temporal ordering of biological samples using microarray data.

    Science.gov (United States)

    Magwene, Paul M; Lizardi, Paul; Kim, Junhyong

    2003-05-01

    Accurate time series for biological processes are difficult to estimate due to problems of synchronization, temporal sampling and rate heterogeneity. Methods are needed that can utilize multi-dimensional data, such as those resulting from DNA microarray experiments, in order to reconstruct time series from unordered or poorly ordered sets of observations. We present a set of algorithms for estimating temporal orderings from unordered sets of sample elements. The techniques we describe are based on modifications of a minimum-spanning tree calculated from a weighted, undirected graph. We demonstrate the efficacy of our approach by applying these techniques to an artificial data set as well as several gene expression data sets derived from DNA microarray experiments. In addition to estimating orderings, the techniques we describe also provide useful heuristics for assessing relevant properties of sample datasets such as noise and sampling intensity, and we show how a data structure called a PQ-tree can be used to represent uncertainty in a reconstructed ordering. Academic implementations of the ordering algorithms are available as source code (in the programming language Python) on our web site, along with documentation on their use. The artificial 'jelly roll' data set upon which the algorithm was tested is also available from this web site. The publicly available gene expression data may be found at http://genome-www.stanford.edu/cellcycle/ and http://caulobacter.stanford.edu/CellCycle/.
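
    A toy version of the minimum-spanning-tree idea described above (not the authors' released Python code) can be written with scipy: compute pairwise distances between expression profiles, take the MST of the resulting weighted graph, and read a candidate ordering off the tree's longest path. Samples that hang off that backbone, and the PQ-tree representation of ambiguity, are ignored in this sketch.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

def diameter_path_ordering(profiles):
    """Order samples along the longest path (diameter) of their distance MST.

    `profiles` is an (n_samples, n_features) array.  The returned list contains
    the indices of the samples lying on the diameter path, in path order.
    """
    diff = profiles[:, None, :] - profiles[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))            # Euclidean distance matrix
    mst = minimum_spanning_tree(dist)                  # sparse tree over all samples
    # Path lengths along the tree (treated as undirected), with predecessors.
    d, pred = shortest_path(mst, directed=False, return_predecessors=True)
    i, j = np.unravel_index(np.argmax(d), d.shape)     # endpoints of the diameter
    order, node = [], j
    while node != i and node >= 0:                     # walk back from j to i
        order.append(int(node))
        node = pred[i, node]
    order.append(int(i))
    return order[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 1.0, 20))             # hidden "sampling times"
    data = np.column_stack([np.sin(2 * np.pi * t), t]) + rng.normal(0, 0.02, (20, 2))
    print(diameter_path_ordering(data))
```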

  7. Nurses' reflections on pain management in a nursing home setting.

    Science.gov (United States)

    Clark, Lauren; Fink, Regina; Pennington, Karen; Jones, Katherine

    2006-06-01

    Achieving optimal and safe pain-management practices in the nursing home setting continues to challenge administrators, nurses, physicians, and other health care providers. Several factors in nursing home settings complicate the conduct of clinical process improvement research. The purpose of this qualitative study was to explore the perceptions of a sample of Colorado nursing home staff who participated in a study to develop and evaluate a multifaceted pain-management intervention. Semistructured interviews were conducted with 103 staff from treatment and control nursing homes, audiotaped, and content analyzed. Staff identified changes in their knowledge and attitudes about pain and their pain-assessment and management practices. Progressive solutions and suggestions for changing practice include establishing an internal pain team and incorporating nursing assistants into the care planning process. Quality improvement strategies can accommodate the special circumstances of nursing home care and build the capacity of the nursing homes to initiate and monitor their own process-improvement programs using a participatory research approach.

  8. White matter tracts associated with set-shifting in healthy aging.

    Science.gov (United States)

    Perry, Michele E; McDonald, Carrie R; Hagler, Donald J; Gharapetian, Lusineh; Kuperman, Joshua M; Koyama, Alain K; Dale, Anders M; McEvoy, Linda K

    2009-11-01

    Attentional set-shifting ability, commonly assessed with the Trail Making Test (TMT), decreases with increasing age in adults. Since set-shifting performance relies on activity in widespread brain regions, deterioration of the white matter tracts that connect these regions may underlie the age-related decrease in performance. We used an automated fiber tracking method to investigate the relationship between white matter integrity in several cortical association tracts and TMT performance in a sample of 24 healthy adults, 21-80 years. Diffusion tensor images were used to compute average fractional anisotropy (FA) for five cortical association tracts, the corpus callosum (CC), and the corticospinal tract (CST), which served as a control. Results showed that advancing age was associated with declines in set-shifting performance and with decreased FA in the CC and in association tracts that connect frontal cortex to more posterior brain regions, including the inferior fronto-occipital fasciculus (IFOF), uncinate fasciculus (UF), and superior longitudinal fasciculus (SLF). Declines in average FA in these tracts, and in average FA of the right inferior longitudinal fasciculus (ILF), were associated with increased time to completion on the set-shifting subtask of the TMT but not with the simple sequencing subtask. FA values in these tracts were strong mediators of the effect of age on set-shifting performance. Automated tractography methods can enhance our understanding of the fiber systems involved in performance of specific cognitive tasks and of the functional consequences of age-related changes in those systems.

  9. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei; Yan, Dongming; Chen, Li; Zhang, Xiaopeng; Deussen, Oliver; Wonka, Peter

    2016-01-01

    -distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform
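
    The record above is only a fragment, but the core operation it refers to, maximal Poisson-disk sampling (MPS), can be illustrated with a plain dart-throwing sketch: accept uniformly drawn candidates only if they keep a minimum distance r from all accepted points, and stop after many consecutive rejections as a crude stand-in for true maximality. The paper's method is far more efficient and also handles the domain boundary, which this sketch ignores.

```python
import random

def poisson_disk_sample(r, bounds=(1.0, 1.0, 1.0), max_failures=2000, seed=0):
    """Naive dart throwing for (approximately maximal) Poisson-disk sampling.

    Points are drawn uniformly in an axis-aligned box and kept only if they lie
    at least r away from every previously accepted point.
    """
    rng = random.Random(seed)
    points, failures = [], 0
    while failures < max_failures:
        cand = tuple(rng.uniform(0.0, b) for b in bounds)
        if all(sum((c - p) ** 2 for c, p in zip(cand, q)) >= r * r for q in points):
            points.append(cand)
            failures = 0
        else:
            failures += 1
    return points

if __name__ == "__main__":
    pts = poisson_disk_sample(r=0.15)
    print(f"accepted {len(pts)} interior samples")
```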

  10. Effect of a polyethylene-lined administration set on the availability of diazepam injection.

    Science.gov (United States)

    Hancock, B G; Black, C D

    1985-02-01

    Delivery of diazepam through a polyethylene-lined i.v. administration set and through a polyvinyl chloride (PVC) set was compared. Diazepam was prepared in concentrations of 50 mg/500 mL and 100 mg/500 mL in 0.9% sodium chloride injection and 5% dextrose injection in glass containers. Diazepam concentrations were measured by high-performance liquid chromatography at 0 through 5 hours in samples collected simultaneously from the glass solution containers and from the distal ends of a PVC administration set and a polyethylene-lined (non-PVC) set. Flow rates of 50 and 100 mL/hr were tested. For the non-PVC sets, diazepam concentration in the infusate was not significantly different from concentration in the glass container at any sampling time. The overall percentage of diazepam recovered was 100.7 ± 6.8%. For the PVC sets, diazepam concentration in the infusate was less than in the container at all sampling times, and the overall percentage of diazepam recovered was 65.4 ± 13.3% (significantly different from delivery for the non-PVC sets). Delivery through the non-PVC sets was not affected by flow rate, type of solution, or concentration of diazepam. For infusion periods of up to five hours, delivery of diazepam through polyethylene-lined i.v. administration sets was superior to delivery through polyvinyl chloride sets.

  11. Estimates and sampling schemes for the instrumentation of accountability systems

    International Nuclear Information System (INIS)

    Jewell, W.S.; Kwiatkowski, J.W.

    1976-10-01

    The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-group error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered.
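
    The flavour of such estimators can be illustrated, under the simplifying assumption of a single level of grouping with known variances, by precision weighting that shrinks the combined group means toward the known prior mean. The sketch below is only an illustration of that principle, with invented numbers, not the report's hierarchical estimator.

```python
def shrinkage_estimate(group_means, group_variances, prior_mean, prior_variance):
    """Combine group means and a prior by inverse-variance (precision) weighting.

    Each group mean is weighted by 1/variance; the prior mean enters as one
    more "observation" with weight 1/prior_variance.  Under Gaussian
    assumptions with known variances this linear combination minimizes the
    mean squared error of the estimate.
    """
    weights = [1.0 / v for v in group_variances] + [1.0 / prior_variance]
    values = list(group_means) + [prior_mean]
    estimate = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    return estimate, 1.0 / sum(weights)      # estimate and its variance

# Invented measurement groups with unequal within-group variances
est, var = shrinkage_estimate(
    group_means=[10.2, 9.7, 10.5],
    group_variances=[0.04, 0.09, 0.25],
    prior_mean=10.0,
    prior_variance=0.5,
)
print(f"estimate = {est:.3f}, variance = {var:.4f}")
```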

  12. Estimates of laboratory accuracy and precision on Hanford waste tank samples

    International Nuclear Information System (INIS)

    Dodd, D.A.

    1995-01-01

    A review was performed on three sets of analyses generated by Battelle, Pacific Northwest Laboratories and three sets generated by the Westinghouse Hanford Company 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The data set used to generate these estimates is of limited size but does include the physical forms, liquid and solid, that are representative of samples from tanks to be characterized. This estimate was published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates in the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel when establishing bounding conditions. Programmatic requirements different from those listed may be reliably met on specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.

  13. Automated SEM and TEM sample preparation applied to copper/low k materials

    Science.gov (United States)

    Reyes, R.; Shaapur, F.; Griffiths, D.; Diebold, A. C.; Foran, B.; Raz, E.

    2001-01-01

    We describe the use of automated microcleaving for the preparation of both SEM and TEM samples as done by SELA's new MC500 and TEMstation tools. The MC500 is an automated microcleaving tool capable of producing cleaves with 0.25 μm accuracy, resulting in SEM-ready samples. The TEMstation is capable of taking a sample output from the MC500 (or from SELA's earlier MC200 tool) and producing a FIB-ready slice of 25±5 μm, mounted on a TEM washer and ready for FIB thinning to electron transparency for TEM analysis. The materials selected for the tool set evaluation mainly included the Cu/TaN/HOSP low-k system. The paper is divided into three sections: experimental approach, SEM preparation and analysis of HOSP low-k, and TEM preparation and analysis of Cu/TaN/HOSP low-k samples. For the samples discussed, data is presented to show the quality of preparation provided by these new automated tools.

  14. Multi-index Monte Carlo: when sparsity meets sampling

    KAUST Repository

    Haji Ali, Abdul Lateef

    2015-06-27

    We propose and analyze a novel multi-index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, we use in MIMC high-order mixed differences instead of using first-order differences as in MLMC to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis and which increase the domain of the problem parameters for which we achieve the optimal convergence, O(TOL^-2). Moreover, in MIMC, the rate of increase of required memory with respect to TOL is independent of the number of directions up to a logarithmic term which allows far more accurate solutions to be calculated for higher dimensions than what is possible when using MLMC. We motivate the setting of MIMC by first focusing on a simple full tensor index set. We then propose a systematic construction of optimal sets of indices for MIMC based on properly defined profits that in turn depend on the average cost per sample and the corresponding weak error and variance. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be the total degree type. In some cases, using optimal index sets, MIMC achieves a better rate for the computational complexity than the corresponding rate when using full tensor index sets. We also show the asymptotic normality of the statistical error in the resulting MIMC estimator and justify in this way our error estimate, which allows both the required accuracy and the confidence level in our computational
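
    The structural difference from MLMC that the abstract emphasizes, mixed first differences over several discretization indices summed over a (total-degree) index set, can be sketched as follows. The payoff function, the per-index sample counts, and the index set size below are placeholders chosen only to expose the structure, not the optimized choices derived in the paper.

```python
import itertools
import random

def mixed_difference(payoff, alpha, num_samples, rng):
    """Monte Carlo estimate of E[Delta_1 Delta_2 payoff] at index alpha = (l1, l2).

    The mixed first difference combines the corners (l1, l2), (l1-1, l2),
    (l1, l2-1), (l1-1, l2-1) with signs +, -, -, +; corners with a negative
    index are dropped, which reproduces plain values on the coarsest levels.
    All corners share the same random input so the difference has low variance.
    """
    total = 0.0
    for _ in range(num_samples):
        omega = rng.random()
        for shift in itertools.product((0, 1), repeat=2):
            idx = tuple(a - s for a, s in zip(alpha, shift))
            if min(idx) >= 0:
                total += (-1) ** sum(shift) * payoff(idx, omega)
    return total / num_samples

def mimc_estimate(payoff, max_total_degree, samples_per_index, seed=0):
    """Sum the mixed-difference estimates over a total-degree index set."""
    rng = random.Random(seed)
    index_set = [a for a in itertools.product(range(max_total_degree + 1), repeat=2)
                 if sum(a) <= max_total_degree]
    return sum(mixed_difference(payoff, a, samples_per_index, rng) for a in index_set)

# Placeholder "discretized quantity": it converges to omega as both levels are
# refined, so the estimate should approach E[omega] = 0.5 as the index set grows.
def payoff(idx, omega):
    h1, h2 = 2.0 ** -idx[0], 2.0 ** -idx[1]
    return omega + 0.3 * h1 + 0.2 * h2 + 0.1 * h1 * h2

print(mimc_estimate(payoff, max_total_degree=4, samples_per_index=20000))
```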

  15. Classification of sand samples according to radioactivity content by the use of euclidean and rough sets techniques

    International Nuclear Information System (INIS)

    Abd El-Monsef, M.M.; Kozae, A.M.; Seddeek, M.K.; Medhat, T.; Sharshar, T.; Badran, H.M.

    2004-01-01

    From the geological point of view, the origin and transport of black and normal sands are particularly important. Both black and normal sands reached their present locations along the Mediterranean coast after transport by natural processes, and the two types of sand have different radiological properties. This study therefore attempts to use mathematical methods to classify Egyptian sand samples, collected from 42 locations in an area of 40 × 19 km², based on their radioactivity contents. Using all of the information resulting from the experimental measurements of radioactivity contents, as well as some other parameters, can be a time- and effort-consuming task, so the process of eliminating unnecessary attributes is of prime importance. This elimination of superfluous attributes that cannot affect the decision was carried out. Topological techniques were then applied to classify the information systems resulting from the radioactivity measurements. These techniques were applied in the Euclidean and quasi-discrete topological cases; while the former has some applications in environmental radioactivity, the use of the quasi-discrete case in so-called rough set information analysis is new in such a study. The mathematical methods are summarized, and the results and their radiological implications are discussed. Generally, the results indicate no radiological anomaly and support the previously suggested hypothesis of the presence of two types of sand in the studied area.
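
    The rough set step referred to above can be illustrated generically: rows that agree on all retained (discretized) attributes are indiscernible, and a decision class is described by its lower approximation (equivalence classes wholly inside it) and upper approximation (classes that touch it). The attribute names, discretization, and sample rows below are invented and are not the study's data.

```python
from collections import defaultdict

def rough_approximations(table, attributes, decision_value, decision_key="class"):
    """Lower and upper approximations of the set {rows with the given decision}.

    Rows that agree on all listed attributes form one equivalence class.  A
    class belongs to the lower approximation if every member has the decision
    value, and to the upper approximation if at least one member does.
    """
    classes = defaultdict(list)
    for row in table:
        classes[tuple(row[a] for a in attributes)].append(row)
    lower, upper = [], []
    for members in classes.values():
        hits = [m for m in members if m[decision_key] == decision_value]
        if hits:
            upper.extend(members)
            if len(hits) == len(members):
                lower.extend(members)
    return lower, upper

# Invented sand samples with discretized radioactivity levels
samples = [
    {"U": "high", "Th": "high", "K": "low",  "class": "black"},
    {"U": "high", "Th": "high", "K": "low",  "class": "black"},
    {"U": "low",  "Th": "low",  "K": "high", "class": "normal"},
    {"U": "high", "Th": "low",  "K": "high", "class": "normal"},
    {"U": "high", "Th": "low",  "K": "high", "class": "black"},
]
low, up = rough_approximations(samples, ["U", "Th"], "black")
print(f"lower approximation: {len(low)} samples, upper approximation: {len(up)} samples")
```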

  16. Aqueous samples from B-Plant, Tank 9-1

    International Nuclear Information System (INIS)

    Bell, K.E.

    1995-01-01

    This document is the final report for the B-Plant Tank 9-1 sampling and analysis program. This report is divided into three parts: first, a narrative about the history, sampling effort, quality control, sample tracking/laboratory identification, and a summary of the analysis; second, sampling and custody data; and lastly, a set of compiled data from the laboratory analysis

  17. Determination of total alpha index in samples of sea water by coprecipitation method

    International Nuclear Information System (INIS)

    Suarez-Navarro, J.A.; Pujol, L.; Pozuelo, M.; Pablo, A. de

    1998-01-01

    An environmental radiological monitoring network for Spanish sea waters was set up by CEDEX in 1993. Water radioactivity is determined quarterly at eleven sampling points along the Spanish coast, and gross alpha activity is one of the parameters to be determined. The usual method for monitoring gross alpha activity involves evaporating the sample to dryness on a disk and counting with a ZnS(Ag) scintillation detector. Nevertheless, gross alpha determination in saline waters, such as sea water, is troublesome, because mass attenuation is high and only a very small volume of water can be used (0.2 ml). The coprecipitation method allows 500 ml water samples to be analyzed, so the detection limit is reduced and sensitivity is improved. In this work, the coprecipitation method was used to determine the gross alpha activity in the radiological network of Spanish coastal sea waters during 1996 and 1997. Gross alpha activity was very homogeneous: it averaged 0.0844 ± 0.0086 Bq/l and ranged from 0.062 to 0.102 Bq/l. In collaboration with CIEMAT, a set of samples was analyzed; these averaged 0.0689 ± 0.0074 Bq/l and ranged from 0.056 to 0.082 Bq/l. (Author) 5 refs.

  18. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form, A→βγ, to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing the positive samples. Instead of the inductive CYK algorithm used in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the lacking part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that obtained by the minimum set search, and for some CFLs the system can find no appropriate grammar by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
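
    The bottom-up parsing that drives Synapse's rule generation operates on (extended) Chomsky Normal Form grammars. As a generic illustration of that bottom-up table construction, and not of Synapse's own parser or bridging step, a plain CYK recognizer for ordinary CNF rules (A→BC and A→a) looks like this:

```python
from itertools import product

def cyk_accepts(word, binary_rules, terminal_rules, start="S"):
    """CYK recognition for a grammar in Chomsky Normal Form.

    `terminal_rules` maps a terminal to the nonterminals deriving it, e.g.
    {"a": {"A"}}; `binary_rules` maps a pair (B, C) to the nonterminals A with
    a rule A -> B C.  table[i][j] holds the nonterminals deriving word[i:i+j+1].
    """
    n = len(word)
    if n == 0:
        return False
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = set(terminal_rules.get(ch, set()))
    for span in range(2, n + 1):                    # substring length
        for i in range(n - span + 1):               # start position
            for split in range(1, span):            # length of the left part
                lefts = table[i][split - 1]
                rights = table[i + split][span - split - 1]
                for b, c in product(lefts, rights):
                    table[i][span - 1] |= binary_rules.get((b, c), set())
    return start in table[0][n - 1]

# Toy CNF grammar for the language a^n b^n (n >= 1):
#   S -> A X | A B,  X -> S B,  A -> a,  B -> b
binary = {("A", "X"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"X"}}
terminal = {"a": {"A"}, "b": {"B"}}
for w in ("ab", "aabb", "aab"):
    print(w, cyk_accepts(w, binary, terminal))
```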

  19. Mechanism-based biomarker gene sets for glutathione depletion-related hepatotoxicity in rats

    International Nuclear Information System (INIS)

    Gao Weihua; Mizukawa, Yumiko; Nakatsu, Noriyuki; Minowa, Yosuke; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2010-01-01

    Chemical-induced glutathione depletion is thought to be caused by two types of toxicological mechanisms: PHO-type glutathione depletion [glutathione conjugated with chemicals such as phorone (PHO) or diethyl maleate (DEM)], and BSO-type glutathione depletion [i.e., glutathione synthesis inhibited by chemicals such as L-buthionine-sulfoximine (BSO)]. In order to identify mechanism-based biomarker gene sets for glutathione depletion in rat liver, male SD rats were treated with various chemicals including PHO (40, 120 and 400 mg/kg), DEM (80, 240 and 800 mg/kg), BSO (150, 450 and 1500 mg/kg), and bromobenzene (BBZ, 10, 100 and 300 mg/kg). Liver samples were taken 3, 6, 9 and 24 h after administration and examined for hepatic glutathione content, physiological and pathological changes, and gene expression changes using Affymetrix GeneChip Arrays. To identify differentially expressed probe sets in response to glutathione depletion, we focused on the following two courses of events for the two types of mechanisms of glutathione depletion: a) gene expression changes occurring simultaneously in response to glutathione depletion, and b) gene expression changes after glutathione was depleted. The gene expression profiles of the identified probe sets for the two types of glutathione depletion differed markedly at times during and after glutathione depletion, whereas Srxn1 was markedly increased for both types as glutathione was depleted, suggesting that Srxn1 is a key molecule in oxidative stress related to glutathione. The extracted probe sets were refined and verified using various compounds including 13 additional positive or negative compounds, and they established two useful marker sets. One contained three probe sets (Akr7a3, Trib3 and Gstp1) that could detect conjugation-type glutathione depletors any time within 24 h after dosing, and the other contained 14 probe sets that could detect glutathione depletors by any mechanism. These two sets, with appropriate scoring

  20. Default settings of computerized physician order entry system order sets drive ordering habits.

    Science.gov (United States)

    Olson, Jordan; Hollenbeak, Christopher; Donaldson, Keri; Abendroth, Thomas; Castellani, William

    2015-01-01

    laboratory tests. Careful consideration by all stakeholders, including clinicians and pathologists, should be obtained when establishing default settings in order sets.

  1. Representative process sampling for reliable data analysis

    DEFF Research Database (Denmark)

    Julius, Lars Petersen; Esbensen, Kim

    2005-01-01

    (sampling variances) can be reduced greatly however, and sampling biases can be eliminated completely, by respecting a simple set of rules and guidelines provided by TOS. A systematic approach for description of process heterogeneity furnishes in-depth knowledge about the specific variability of any 1-D lot...

  2. OVERVIEW OF BERYLLIUM SAMPLING AND ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Brisson, M

    2009-04-01

    Because of its unique properties as a lightweight metal with high tensile strength, beryllium is widely used in applications including cell phones, golf clubs, aerospace, and nuclear weapons. Beryllium is also encountered in industries such as aluminium manufacturing, and in environmental remediation projects. Workplace exposure to beryllium particulates is a growing concern, as exposure to minute quantities of anthropogenic forms of beryllium may lead to sensitization and to chronic beryllium disease, which can be fatal and for which no cure is currently known. Furthermore, there is no known exposure-response relationship with which to establish a 'safe' maximum level of beryllium exposure. As a result, the current trend is toward ever lower occupational exposure limits, which in turn make exposure assessment, both in terms of sampling and analysis, more challenging. The problems are exacerbated by difficulties in sample preparation for refractory forms of beryllium, such as beryllium oxide, and by indications that some beryllium forms may be more toxic than others. This chapter provides an overview of sources and uses of beryllium, health risks, and occupational exposure limits. It also provides a general overview of sampling, analysis, and data evaluation issues that will be explored in greater depth in the remaining chapters. The goal of this book is to provide a comprehensive resource to aid personnel in a wide variety of disciplines in selecting sampling and analysis methods that will facilitate informed decision-making in workplace and environmental settings.

  3. Priority setting: what constitutes success? A conceptual framework for successful priority setting.

    Science.gov (United States)

    Sibbald, Shannon L; Singer, Peter A; Upshur, Ross; Martin, Douglas K

    2009-03-05

    The sustainability of healthcare systems worldwide is threatened by a growing demand for services and expensive innovative technologies. Decision makers struggle in this environment to set priorities appropriately, particularly because they lack consensus about which values should guide their decisions. One way to approach this problem is to determine what all relevant stakeholders understand successful priority setting to mean. The goal of this research was to develop a conceptual framework for successful priority setting. Three separate empirical studies were completed using qualitative data collection methods (one-on-one interviews with healthcare decision makers from across Canada; focus groups with representation of patients, caregivers and policy makers; and Delphi study including scholars and decision makers from five countries). This paper synthesizes the findings from three studies into a framework of ten separate but interconnected elements germane to successful priority setting: stakeholder understanding, shifted priorities/reallocation of resources, decision making quality, stakeholder acceptance and satisfaction, positive externalities, stakeholder engagement, use of explicit process, information management, consideration of values and context, and revision or appeals mechanism. The ten elements specify both quantitative and qualitative dimensions of priority setting and relate to both process and outcome components. To our knowledge, this is the first framework that describes successful priority setting. The ten elements identified in this research provide guidance for decision makers and a common language to discuss priority setting success and work toward improving priority setting efforts.

  4. Guided goal setting: effectiveness in a dietary and physical activity intervention with low-income adolescents.

    Science.gov (United States)

    Shilts, Mical Kay; Horowitz, Marcel; Townsend, Marilyn S

    2009-01-01

    Determining the effectiveness of the guided goal setting strategy on changing adolescents' dietary and physical activity self-efficacy and behaviors. Adolescents were individually assigned to treatment (intervention with guided goal setting) or control conditions (intervention without guided goal setting) with data collected before and after the education intervention. Urban middle school in a low-income community in Central California. Ethnically diverse middle school students (n = 94, 55% male) who were participants of a USDA nutrition education program. Driven by the Social Cognitive Theory, the intervention targeted dietary and physical activity behaviors of adolescents. Dietary self-efficacy and behavior; physical activity self-efficacy and behavior; goal effort and spontaneous goal setting. ANCOVA and path analysis were performed using the full sample and a sub-sample informed by Locke's recommendations (accounting for goal effort and spontaneous goal setting). No significant differences were found between groups using the full sample. Using the sub-sample, greater gains in dietary behavior were found. After accounting for goal effort and spontaneous goal setting, this study provides some evidence that the use of guided goal setting with adolescents may be a viable strategy to promote dietary and physical activity behavior change.

  5. How large a training set is needed to develop a classifier for microarray data?

    Science.gov (United States)

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.

  6. Comparison of radon and radon-daughter grab samples obtained during the winter and summer

    International Nuclear Information System (INIS)

    Karp, K.E.

    1987-08-01

    The Technical Measurements Center (TMC), under the auspices of the US Department of Energy (DOE) Uranium Mill Tailings Remedial Action (UMTRA) program, is investigating short-term methods for estimating annual average indoor radon-daughter concentrations (RDC). A field study at 40 sample locations in 26 residential structures in Grand Junction, Colorado, was conducted once in the winter and once in the summer. The short-term methods investigated as part of this study include ten-minute radon and radon-daughter grab sampling and hourly RDC measurements. The results of the field study indicate that ten-minute radon grab samples from basement locations are reproducible over different seasons during controlled sampling conditions. Nonbasement radon and RDC grab samples are highly variable even when the use of the location by the occupant is controlled and the ventilation rate is restricted. The grab sampling was performed under controlled occupied conditions. These results confirm that a short-term radon or RDC measurement in a nonbasement location in a house is not a standardized measurement that can be used to infer an annual average concentration. The hourly RDC measurements were performed under three sets of conditions over a 72-hour period. The three sets of conditions were uncontrolled occupied, controlled occupied, and controlled unoccupied. These results indicate that it is not necessary to relocate the occupants during the time of grab sampling. 8 refs., 8 figs., 10 tabs

  7. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  8. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
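    For context, the following sketch implements the standard (non-clustered) binomial LQAS decision rule that both records above contrast with cluster-adjusted designs: it searches for the smallest sample size n and decision rule d meeting assumed misclassification error targets. The thresholds and error levels are illustrative assumptions, not values from the paper.

```python
# Standard binomial LQAS design search (clustering ignored).  A lot "passes" when the
# number of successes (e.g. vaccinated children) in the sample is >= d.
from scipy.stats import binom

def lqas_design(p_upper=0.80, p_lower=0.50, alpha=0.10, beta=0.10, n_max=100):
    """Smallest (n, d) such that P(fail | coverage = p_upper) <= alpha and
    P(pass | coverage = p_lower) <= beta."""
    for n in range(1, n_max + 1):
        for d in range(0, n + 1):
            p_fail_good = binom.cdf(d - 1, n, p_upper)       # misclassify a good lot
            p_pass_bad = 1.0 - binom.cdf(d - 1, n, p_lower)  # misclassify a bad lot
            if p_fail_good <= alpha and p_pass_bad <= beta:
                return n, d, p_fail_good, p_pass_bad
    return None

print(lqas_design())
```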

  9. Evaluation of setting time and flow properties of self-synthesize alginate impressions

    Science.gov (United States)

    Halim, Calista; Cahyanto, Arief; Sriwidodo, Harsatiningsih, Zulia

    2018-02-01

    Alginate is an elastic hydrocolloid dental impression material used to obtain a negative reproduction of the oral mucosa, for example to record soft-tissue and occlusal relationships. The aim of the present study was to synthesize alginate and to determine its setting time and flow properties. Five groups of alginate were prepared, comprising fifty samples of self-synthesized alginate and a commercial alginate impression product. The fifty samples were divided between two tests, twenty-five each for the setting time and flow tests. Setting time was recorded in seconds (s), while flow was recorded in mm2. The fastest setting time was found in group three (148.8 s) and the slowest in group four. The highest flow was found in group three (69.70 mm2) and the lowest in group one (58.34 mm2). Results were analyzed statistically by one-way ANOVA (α = 0.05), which showed a statistically significant difference in setting time but no statistically significant difference in flow properties between the self-synthesized alginate and the commercial alginate impression product. In conclusion, the alginate impression material was successfully self-synthesized, and variations in composition influence setting time and flow properties. Group three most closely resembled the control group in setting time, and group four most closely resembled it in flow.
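    The one-way ANOVA comparison described above (α = 0.05) can be reproduced in outline as follows; the setting-time values in this sketch are made-up placeholders rather than the study's measurements.

```python
# One-way ANOVA across five alginate groups; data values are illustrative only.
from scipy.stats import f_oneway

setting_time_s = {
    "group_1": [162.0, 158.5, 165.0, 160.2, 163.8],
    "group_2": [155.1, 152.4, 157.9, 154.0, 156.3],
    "group_3": [148.8, 147.2, 150.1, 149.0, 148.3],
    "group_4": [171.5, 169.8, 173.2, 170.4, 172.0],
    "control": [150.3, 151.9, 149.5, 152.2, 150.8],
}

stat, p_value = f_oneway(*setting_time_s.values())
print(f"F = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group's mean setting time differs significantly.")
```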

  10. Comprehensive processing of high-throughput small RNA sequencing data including quality checking, normalization, and differential expression analysis using the UEA sRNA Workbench.

    Science.gov (United States)

    Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent

    2017-06-01

    Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on an H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. © 2017 Beckers et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
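    The pipeline itself is part of the UEA sRNA Workbench; as a generic illustration of one preprocessing step such pipelines apply before differential-expression analysis, the sketch below normalizes raw sRNA read counts to reads per million so that samples of different sequencing depth are comparable. It is not the Workbench's own code.

```python
# Reads-per-million (RPM) normalization of raw sRNA counts; values are illustrative.
import numpy as np

counts = np.array([                # rows: sRNA sequences, columns: samples
    [150,  30, 200],
    [ 20,  10,  15],
    [500, 120, 640],
    [  5,   2,   9],
], dtype=float)

library_sizes = counts.sum(axis=0)                            # total reads per sample
rpm = counts / library_sizes * 1e6                            # reads per million
log_fold_change = np.log2((rpm[:, 2] + 1) / (rpm[:, 0] + 1))  # sample 3 vs sample 1
print(np.round(log_fold_change, 2))
```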

  11. Introduction to Fuzzy Set Theory

    Science.gov (United States)

    Kosko, Bart

    1990-01-01

    An introduction to fuzzy set theory is described. Topics covered include: neural networks and fuzzy systems; the dynamical systems approach to machine intelligence; intelligent behavior as adaptive model-free estimation; fuzziness versus probability; fuzzy sets; the entropy-subsethood theorem; adaptive fuzzy systems for backing up a truck-and-trailer; product-space clustering with differential competitive learning; and adaptive fuzzy system for target tracking.
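    As a small, self-contained illustration of two of the listed topics (fuzzy sets and the entropy-subsethood theme), the sketch below computes fuzzy complement, intersection, and union with max/min operations and Kosko's fuzzy entropy, defined as the ratio of the sigma-counts of A AND not-A to A OR not-A. The membership values are arbitrary.

```python
# Basic fuzzy-set operations and Kosko's fuzzy entropy for a small example set A.
import numpy as np

a = np.array([0.1, 0.4, 0.8, 1.0, 0.5])    # membership degrees of a fuzzy set A

complement = 1.0 - a
intersection = np.minimum(a, complement)    # A AND not-A (non-empty for fuzzy sets)
union = np.maximum(a, complement)           # A OR not-A (not the whole space)

entropy = intersection.sum() / union.sum()  # 0 for crisp sets, 1 for maximal fuzziness
print(f"fuzzy entropy E(A) = {entropy:.3f}")
```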

  12. Superfund Site Information - Site Sampling Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset includes Superfund site-specific sampling information including location of samples, types of samples, and analytical chemistry characteristics of...

  13. A new MCNP trademark test set

    International Nuclear Information System (INIS)

    Brockhoff, R.C.; Hendricks, J.S.

    1994-09-01

    The MCNP test set is used to test the MCNP code after installation on various computer platforms. For MCNP4 and MCNP4A this test set included 25 test problems designed to test as many features of the MCNP code as possible. A new and better test set has been devised to increase coverage of the code from 85% to 97% with 28 problems. The new test set is as fast as and shorter than the MCNP4A test set. The authors describe the methodology for devising the new test set, the features that were not covered in the MCNP4A test set, and the changes in the MCNP4A test set that have been made for MCNP4B and its developmental versions. Finally, new bugs uncovered by the new test set and a compilation of all known MCNP4A bugs are presented

  14. Solvent hold tank sample results for MCU-16-1363-1364-1365: November 2016 monthly sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-06-16

    Savannah River National Laboratory (SRNL) received one set of three Solvent Hold Tank (SHT) samples (MCU-16-1363-1364-1365), pulled on 11/15/2016 for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-16-1363-1364-1365 indicated the Isopar™L concentration is at its nominal level (100%). The extractant (MaxCalix) and the modifier (CS-7SB) are 8% and 2% below their nominal concentrations. The suppressor (TiDG) is 7% below its nominal concentration. A summary of the concentration of the relevant solvent components is shown below.

  15. Solvent hold tank sample results for MCU-16-1317-1318-1319: September 2016 monthly sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-06-16

    Savannah River National Laboratory (SRNL) received one set of three Solvent Hold Tank (SHT) samples (MCU-16-1317-1318-1319), pulled on 09/12/2016 for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-16-1317-1318-1319 indicated the Isopar™L concentration is above its nominal level (102%). The extractant (MaxCalix) and the modifier (CS-7SB) are 5% and 9% below their nominal concentrations. The suppressor (TiDG) is 76% below its nominal concentration. A summary of the concentration of the relevant solvent components is shown below.

  16. Solvent hold tank sample results for MCU-16-1247-1248-1249: August 2016 monthly sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-06-16

    Savannah River National Laboratory (SRNL) received one set of Solvent Hold Tank (SHT) samples (MCU-16-1247-1248-1249), pulled on 08/22/2016 for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-16-1247-1248-1249 indicated the Isopar™L concentration is above its nominal level (101%). The extractant (MaxCalix) and the modifier (CS-7SB) are 7% and 9% below their nominal concentrations. The suppressor (TiDG) is 63% below its nominal concentration. A summary of the concentration of the relevant solvent components is shown below.

  17. Genetic Sample Inventory

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected primarily from the U.S. east coast. The collection includes samples from field programs,...

  18. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    Science.gov (United States)

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
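    The three pooling strategies compared in the abstract can be illustrated as follows on a matrix of frame-level feature activations; the random activations stand in for learned sparse features and are purely illustrative.

```python
# Max-, average-, and standard-deviation pooling over frame-level feature activations
# (rows = frames, columns = dictionary atoms).
import numpy as np

rng = np.random.default_rng(0)
activations = np.abs(rng.normal(size=(200, 64)))   # 200 frames, 64-atom dictionary

max_pooled = activations.max(axis=0)     # max-pooling
avg_pooled = activations.mean(axis=0)    # average-pooling
std_pooled = activations.std(axis=0)     # standard-deviation pooling (as in the paper)

clip_feature = np.concatenate([max_pooled, avg_pooled, std_pooled])
print(clip_feature.shape)                # one fixed-length vector per recording
```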

  19. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or one-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next one-step-ahead forecast rolls forward by adding the most recently derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is efficient, with the advantages of improved forecasting accuracy, applicability to limited and unstable data situations, and little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
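    A minimal sketch of the rolling one-step-ahead idea (assumed details, not the paper's exact AR formulation): an AR(p) model is refitted on a fixed-length window, the next value is predicted, and the window then rolls forward by appending that prediction and dropping its oldest value.

```python
# Rolling one-step-ahead AR(p) forecasting; the settlement series below is illustrative.
import numpy as np

def fit_ar(window, p):
    """Least-squares fit of an AR(p) model with intercept; returns coefficients."""
    y = window[p:]
    X = np.column_stack([window[p - k - 1:len(window) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def rolling_ar_forecast(series, p=2, horizon=5):
    window = list(series)
    preds = []
    for _ in range(horizon):
        coef = fit_ar(np.array(window), p)
        lags = window[-1:-p - 1:-1]                  # most recent p values, newest first
        pred = coef[0] + float(np.dot(coef[1:], lags))
        preds.append(pred)
        window.append(pred)                          # roll: add the new prediction...
        window.pop(0)                                # ...and drop the oldest value
    return preds

settlement_mm = [2.1, 3.4, 4.2, 4.9, 5.3, 5.8, 6.0, 6.3, 6.5, 6.6]  # illustrative data
print(rolling_ar_forecast(settlement_mm, p=2, horizon=3))
```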

  20. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    Science.gov (United States)

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cells libraries highlighting interesting dependencies of PV properties on MO compositions.
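    As a generic illustration of the core mechanism (not the authors' material-informatics pipeline), the sketch below uses scikit-learn's RANSACRegressor to fit a regression model while automatically excluding outlying samples; the synthetic descriptors are placeholders for real solar-cell descriptors.

```python
# RANSAC regression on synthetic data with injected outliers.
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))                 # three synthetic descriptors
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)
y[:20] += rng.normal(0, 2.0, 20)                     # inject 20 outlying samples

ransac = RANSACRegressor(residual_threshold=0.2, random_state=0)  # linear base model
ransac.fit(X, y)

print("inliers kept:", int(ransac.inlier_mask_.sum()), "of", len(y))
print("coefficients:", ransac.estimator_.coef_.round(2))
```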

  1. A simple vibrating sample magnetometer for macroscopic samples

    Science.gov (United States)

    Lopez-Dominguez, V.; Quesada, A.; Guzmán-Mínguez, J. C.; Moreno, L.; Lere, M.; Spottorno, J.; Giacomone, F.; Fernández, J. F.; Hernando, A.; García, M. A.

    2018-03-01

    We here present a simple model of a vibrating sample magnetometer (VSM). The system allows recording magnetization curves at room temperature with a resolution of the order of 0.01 emu and is appropriate for macroscopic samples. The setup can be mounted in different configurations depending on the requirements of the sample to be measured (mass, saturation magnetization, saturation field, etc.). We also include examples of curves obtained with our setup and comparison curves measured with a standard commercial VSM, which confirm the reliability of our device.

  2. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  3. Distribution-Preserving Stratified Sampling for Learning Problems.

    Science.gov (United States)

    Cervellera, Cristiano; Maccio, Danilo

    2017-06-09

    The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
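    A simplified sketch, loosely inspired by (but not identical to) the recursive binary partition idea described above: the input space is split recursively at the median of its widest dimension, and points are then drawn from each leaf cell in proportion to its occupancy, so the subsample roughly preserves the original distribution.

```python
# Distribution-preserving subsampling via recursive binary partition of the input space.
import numpy as np

def partition(indices, X, leaf_size):
    """Recursively split the indices until each leaf holds at most leaf_size points."""
    if len(indices) <= leaf_size:
        return [indices]
    dim = np.argmax(X[indices].max(axis=0) - X[indices].min(axis=0))
    median = np.median(X[indices, dim])
    left = indices[X[indices, dim] <= median]
    right = indices[X[indices, dim] > median]
    if len(left) == 0 or len(right) == 0:            # degenerate split, stop here
        return [indices]
    return partition(left, X, leaf_size) + partition(right, X, leaf_size)

def stratified_subsample(X, n_out, leaf_size=50, seed=0):
    rng = np.random.default_rng(seed)
    leaves = partition(np.arange(len(X)), X, leaf_size)
    chosen = []
    for leaf in leaves:
        k = int(round(n_out * len(leaf) / len(X)))   # proportional allocation per cell
        chosen.extend(rng.choice(leaf, size=min(k, len(leaf)), replace=False))
    return np.array(chosen)

X = np.random.default_rng(1).normal(size=(10_000, 2))
idx = stratified_subsample(X, n_out=500)
print(len(idx), X[idx].mean(axis=0).round(3))        # subsample mean close to original
```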

  4. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.
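    A generic simulated-annealing sketch of the optimization step described above; the objective used here (mean distance from every candidate location to its nearest selected site) is an illustrative stand-in for the study's criterion, and the candidate coordinates are synthetic.

```python
# Simulated annealing over a fixed-size set of sampling sites: perturb one site at a
# time and accept worse configurations with a temperature-dependent probability.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 100, size=(400, 2))      # candidate (road-accessible) points

def objective(selected):
    """Mean distance from every candidate to its nearest selected sampling site."""
    d = np.linalg.norm(candidates[:, None, :] - candidates[selected][None, :, :], axis=2)
    return d.min(axis=1).mean()

def anneal(n_sites=13, n_iter=2000, t0=5.0, cooling=0.995):
    current = rng.choice(len(candidates), n_sites, replace=False)
    cost = objective(current)
    best, best_cost, temp = current.copy(), cost, t0
    for _ in range(n_iter):
        proposal = current.copy()
        proposal[rng.integers(n_sites)] = rng.integers(len(candidates))  # move one site
        if len(set(proposal)) < n_sites:
            continue
        new_cost = objective(proposal)
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            current, cost = proposal, new_cost
            if cost < best_cost:
                best, best_cost = current.copy(), cost
        temp *= cooling
    return best, best_cost

sites, cost = anneal()
print(f"best mean nearest-site distance: {cost:.2f}")
```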

  5. Method of extruding and packaging a thin sample of reactive material including forming the extrusion die

    International Nuclear Information System (INIS)

    Lewandowski, E.F.; Peterson, L.L.

    1985-01-01

    This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby the oil can flow through the holes and slot to flush the material eroded away as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of solid highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon or the like as in a glovebox. The invention further teaches a method of stamping out sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high purity reactive material that can have a long shelf life

  6. Functions with disconnected spectrum sampling, interpolation, translates

    CERN Document Server

    Olevskii, Alexander M

    2016-01-01

    The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...

  7. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.

  8. On Setting Priorities among Human Rights

    NARCIS (Netherlands)

    Philips, Jos

    2014-01-01

    Should conflicts among human rights be dealt with by including general principles for priority setting at some prominent place in the practice of human rights? This essay argues that neither setting prominent and principled priorities nor a case-by-case approach are likely to be defensible as

  9. A combinatory approach for analysis of protein sets in barley sieve-tube samples using EDTA-facilitated exudation and aphid stylectomy.

    Science.gov (United States)

    Gaupels, Frank; Knauer, Torsten; van Bel, Aart J E

    2008-01-01

    This study investigated advantages and drawbacks of two sieve-tube sap sampling methods for comparison of phloem proteins in powdery mildew-infested vs. non-infested Hordeum vulgare plants. In one approach, sieve tube sap was collected by stylectomy. Aphid stylets were cut and immediately covered with silicon oil to prevent any contamination or modification of exudates. In this way, a maximum of 1 µL of pure phloem sap could be obtained per hour. Interestingly, after pathogen infection exudation from microcauterized stylets was reduced to less than 40% of control plants, suggesting that powdery mildew induced sieve tube-occlusion mechanisms. In contrast to the laborious stylectomy, facilitated exudation using EDTA to prevent calcium-mediated callose formation is quick and easy with a large volume yield. After two-dimensional (2D) electrophoresis, a digital overlay of the protein sets extracted from EDTA solutions and stylet exudates showed that some major spots were the same with both sampling techniques. However, EDTA exudates also contained large amounts of contaminative proteins of unknown origin. A combinatory approach may be most favourable for studies in which the protein composition of phloem sap is compared between control and pathogen-infected plants. Facilitated exudation may be applied for subtractive identification of differentially expressed proteins by 2D/mass spectrometry, which requires large amounts of protein. A reference gel loaded with pure phloem sap from stylectomy may be useful for confirmation of phloem origin of candidate spots by digital overlay. The method provides a novel opportunity to study differential expression of phloem proteins in monocotyledonous plant species.

  10. Stability of mercury concentration measurements in archived soil and peat samples

    Science.gov (United States)

    Navrátil, Tomáš; Burns, Douglas; Nováková, Tereza; Kaňa, Jiří; Rohovec, Jan; Roll, Michal; Ettler, Vojtěch

    2018-01-01

    Archived soil samples can provide important information on the history of environmental contamination and by comparison with recently collected samples, temporal trends can be inferred. Little previous work has addressed whether mercury (Hg) concentrations in soil samples are stable with long-term storage under standard laboratory conditions. In this study, we have re-analyzed using cold vapor atomic absorption spectroscopy a set of archived soil samples that ranged from relatively pristine mountainous sites to a polluted site near a non-ferrous metal smelter with a wide range of Hg concentrations (6-6485 µg kg-1). Samples included organic and mineral soils and peats with a carbon content that ranged from 0.2 to 47.7%. Soil samples were stored in polyethylene bags or bottles and held in laboratory rooms where temperature was not kept to a constant value. Mercury concentrations in four subsets of samples were originally measured in 2000, 2005, 2006 and 2007, and re-analyzed in 2017, i.e. after 17, 12, 11 and 10 years of storage. Statistical analyses of either separated or lumped data yielded no significant differences between the original and current Hg concentrations. Based on these analyses, we show that archived soil and peat samples can be used to evaluate historical soil mercury contamination.

  11. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
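    A toy sketch of the neuro-genetic idea on synthetic data (this is not the authors' implementation): an MLP is fitted as a surrogate for phase sensitivity as a function of the conditioning parameters, and a simple genetic algorithm then searches the parameter space for the combination the surrogate predicts to be most sensitive.

```python
# MLP surrogate + genetic algorithm on synthetic data; all parameter ranges and the
# "sensitivity" surface are made up for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data: columns are sample length, bias field, DC level, frequency
# (all rescaled to [0, 1]); the target is an invented smooth sensitivity surface.
X = rng.uniform(size=(500, 4))
y = np.sin(np.pi * X[:, 0]) * np.exp(-((X[:, 1] - 0.6) ** 2) * 8) + 0.3 * X[:, 2] * X[:, 3]

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X, y)

def genetic_search(model, pop_size=60, n_gen=40, mut_sd=0.05):
    pop = rng.uniform(size=(pop_size, 4))
    for _ in range(n_gen):
        fitness = model.predict(pop)
        order = np.argsort(fitness)[::-1]
        parents = pop[order[: pop_size // 2]]                 # selection: keep best half
        cut = rng.integers(1, 4, size=pop_size // 2)
        children = np.where(np.arange(4) < cut[:, None],      # one-point crossover
                            parents, parents[::-1])
        children = children + rng.normal(0, mut_sd, children.shape)   # mutation
        pop = np.clip(np.vstack([parents, children]), 0, 1)
    best = pop[np.argmax(model.predict(pop))]
    return best, model.predict(best[None])[0]

params, predicted = genetic_search(surrogate)
print("predicted-optimal parameters:", params.round(3), "sensitivity:", round(predicted, 3))
```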

  12. Design of a groundwater sampling network for Minnesota

    International Nuclear Information System (INIS)

    Kanivetsky, R.

    1977-01-01

    This folio was compiled to facilitate the use of groundwater as a sampling medium to aid in exploration for hitherto undiscovered deposits of uranium in the subsurface rocks of Minnesota. The report consists of the following sheets of the hydrogeologic map of Minnesota: (1) map of bedrock hydrogeology, (2) generalized cross sections of the hydrogeologic map of Minnesota, showing both Quaternary deposits and bedrock, (3) map of waterwells that penetrate Precambrian rocks in Minnesota. A list of these wells, showing locations, names of owners, type of Precambrian aquifers penetrated, lithologic material of the aquifers, and well depths is provided in the appendix to this report. Structural settings, locations, and composition of the bedrock aquifers, movement of groundwater, and preliminary suggestions for a sampling program are discussed below under the heading Bedrock Hydrogeology of Minnesota. The map sheet showing Quaternary hydrogeology is not included in this report because the chemistry of groundwater in these deposits is not directly related to bedrock mineralization

  13. Guideline for Sampling and Analysis of Tar and Particles in Biomass Producer Gases. Version 3.3

    Energy Technology Data Exchange (ETDEWEB)

    Neeft, J.P.A.; Knoef, H.A.M.; Zielke, U.; Sjoestroem, K.; Hasler, P.; Simell, P.A.; Dorrington, M.A.; Thomas, L.; Abatzoglou, N.; Deutch, S.; Greil, C.; Buffinga, G.J.; Brage, C.; Suomalainen, M.

    2002-07-01

    This Guideline provides a set of procedures for the measurement of organic contaminants and particles in producer gases from biomass gasifiers. The procedures are designed to cover different gasifier types (updraft or downdraft fixed bed or fluidised bed gasifiers), operating conditions (0-900 °C and 0.6-60 bar) and concentration ranges (1 mg/mₙ³ to 300 g/mₙ³). The Guideline describes a modular sampling train, and a set of procedures, which include: planning and preparation of the sampling, sampling and post-sampling, analysis, calculations, error analysis and reporting. The modular sampling train consists of 4 modules. Module 1 is a preconditioning module for isokinetic sampling and gas cooling. Module 2 is a particle collection module including a heated filter. Module 3 is a tar collection module with a gas quench (optionally by circulating a liquid), impinger bottles and a backup adsorber. Module 4 is a volume-sampling module consisting of a pump, a rotameter, a gas flow meter and pressure and temperature indicators. The equipment and materials that are required for procuring this modular sampling train are given in the Guideline. The sampling procedures consist of a description for isokinetic sampling, a leakage test prior to sampling, the actual sampling and its duration, how the equipment is cleaned after the sampling, and how the samples are prepared and stored. Analysis of the samples is performed via three procedures. Prior to these procedures, the sample is prepared by Soxhlet extraction of the tars on the particle filter and by collection of all tars in one bulk solution. The first procedure describes the weighing of the particle filter to obtain the concentration of particles in the biomass producer gas. The bulk tar solution is used for two purposes: for determination of gravimetric tar and for analysis of individual compounds. The second procedure describes how to determine the gravimetric tar mass from the bulk solution. The

  14. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.

  15. Analysis of monazite samples

    International Nuclear Information System (INIS)

    Kartiwa Sumadi; Yayah Rohayati

    1996-01-01

    The 'monazit' analytical program has been set up for routine Rare Earth Element analysis in monazite and xenotime mineral samples. The total relative error of the analysis is very low, less than 2.50%, and the reproducibility of the counting statistics and the stability of the instrument were excellent. The precision and accuracy of the analytical program are very good, with maximum relative percentages of 5.22% and 1.61%, respectively. The mineral compositions of the 30 monazite samples have also been calculated from their chemical constituents, and the results were compared to grain-counting microscopic analysis

  16. A persisting secondhand smoke hazard in urban public places: results from fine particulate (PM2.5) air sampling.

    Science.gov (United States)

    Wilson, Nick; Edwards, Richard; Parry, Rhys

    2011-03-04

    To assess the need for additional smokefree settings, by measuring secondhand smoke (SHS) in a range of public places in an urban setting. Measurements were made in Wellington City during the 6-year period after the implementation of legislation that made indoor areas of restaurants and bars/pubs smokefree in December 2004, and up to 20 years after the 1990 legislation making most indoor workplaces smokefree. Fine particulate levels (PM2.5) were measured with a portable real-time airborne particle monitor. We collated data from our previously published work involving random sampling, purposeful sampling and convenience sampling of a wide range of settings (in 2006) and from additional sampling of selected indoor and outdoor areas (in 2007-2008 and 2010). The "outdoor" smoking areas of hospitality venues had the highest particulate levels, with a mean value of 72 mcg/m3 (range of maximum values 51-284 mcg/m3) (n=20 sampling periods). These levels are likely to create health hazards for some workers and patrons (i.e., when considered in relation to the WHO air quality guidelines). National survey data also indicate that these venues are the ones where SHS exposure is most frequently reported by non-smokers. Areas inside bars that were adjacent to "outdoor" smoking areas also had high levels, with a mean of 54 mcg/m3 (range of maximum values: 18-239 mcg/m3, for n=13 measurements). In all other settings mean levels were lower (means: 2-22 mcg/m3). These other settings included inside traditional style pubs/sports bars (n=10), bars (n=18), restaurants (n=9), cafes (n=5), inside public buildings (n=15), inside transportation settings (n=15), and various outdoor street/park settings (n=22). During the data collection in all settings made smokefree by law, there was only one occasion of a person observed smoking. The results suggest that compliance in pubs/bars and restaurants has remained extremely high in this city in the nearly six years since implementation of the

  17. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  18. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, No. OI-173011, No. TR-37002 and No. III-43007]
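    A standard textbook calculation of this kind of sample size (not necessarily the exact procedure used in the study) iterates n = (t·s/E)², since the t critical value depends on n; the pilot standard deviation and margin of error below are illustrative placeholders.

```python
# Minimum sample size so the sample mean estimates the population mean within a chosen
# margin of error at 95% confidence; pilot SD and margin are illustrative values.
import numpy as np
from scipy.stats import t

def required_sample_size(pilot_sd, margin, confidence=0.95):
    """Iterate n = (t * s / E)^2 until stable (the t quantile depends on n - 1 df)."""
    n = 2
    for _ in range(100):
        t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        n_new = max(2, int(np.ceil((t_crit * pilot_sd / margin) ** 2)))
        if n_new == n:
            break
        n = n_new
    return n

print(required_sample_size(pilot_sd=1.8, margin=0.9))
```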

  19. Neutron emission from impacted solid LiD samples

    International Nuclear Information System (INIS)

    Kaushik, T.C.; Shyam, A.; Kulkarni, L.V.; Srinivasan, M.

    1993-01-01

    Nylon projectiles with 0.1 g to 0.3 g mass, accelerated to velocities of 0.2-1 km/s using a 60 cm long electromagnetic accelerator (railgun), have been impacted upon solid lithium deuteride (LiD) samples; possible neutron emission is detected with BF3 proportional counters. The output from the BF3 set-up is monitored in several ways to characterize the possible neutron emission from the target. This includes a simple technique of counting the single channel analyser (SCA) output through a dead-time unit to identify bursts of < 100 μs duration. Counting is started after a delay of ∼ 1 ms to avoid the initial interference from the capacitor bank discharge. The signal is also recorded in a storage oscilloscope from the start of projectile acceleration along with a time marker just before the impact. From a number of shots taken with and without the samples, significant evidence of neutron emission from the LiD samples appears to emerge. The experiments suggest that approximately 100 neutrons might be generated during every such impact in a duration of < 4 ms. (author). 7 refs., 3 figs

  20. Automated Classification and Analysis of Non-metallic Inclusion Data Sets

    Science.gov (United States)

    Abdulsalam, Mohammad; Zhang, Tongsheng; Tan, Jia; Webler, Bryan A.

    2018-05-01

    The aim of this study is to utilize principal component analysis (PCA), clustering methods, and correlation analysis to condense and examine large, multivariate data sets produced from automated analysis of non-metallic inclusions. Non-metallic inclusions play a major role in defining the properties of steel and their examination has been greatly aided by automated analysis in scanning electron microscopes equipped with energy dispersive X-ray spectroscopy. The methods were applied to analyze inclusions on two sets of samples: two laboratory-scale samples and four industrial samples from near-finished 4140 alloy steel components with varying machinability. The laboratory samples had well-defined inclusion chemistries, composed of MgO-Al2O3-CaO, spinel (MgO-Al2O3), and calcium aluminate inclusions. The industrial samples contained MnS inclusions as well as (Ca,Mn)S + calcium aluminate oxide inclusions. PCA could be used to reduce inclusion chemistry variables to a 2D plot, which revealed inclusion chemistry groupings in the samples. Clustering methods were used to automatically classify inclusion chemistry measurements into groups, i.e., no user-defined rules were required.
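    A minimal sketch of the workflow described above on mock data: inclusion chemistry measurements are standardized, reduced to two principal components, and grouped automatically with k-means. The compositions are invented stand-ins for SEM/EDS measurements, and k-means is used here only as one representative clustering method.

```python
# PCA + k-means on mock inclusion chemistry data (columns: MgO, Al2O3, CaO, MnS fractions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
spinel = rng.normal([0.30, 0.65, 0.02, 0.03], 0.02, size=(100, 4))
ca_aluminate = rng.normal([0.05, 0.55, 0.38, 0.02], 0.02, size=(100, 4))
mns = rng.normal([0.02, 0.03, 0.05, 0.90], 0.02, size=(100, 4))
X = np.vstack([spinel, ca_aluminate, mns])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} inclusions, "
          f"mean composition {X[labels == k].mean(axis=0).round(2)}")
```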

  1. Pieces of Other Worlds - Enhance YSS Education and Public Outreach Events with Extraterrestrial Samples

    Science.gov (United States)

    Allen, C.

    2010-12-01

    During the Year of the Solar System, spacecraft will encounter two comets, orbit the asteroid Vesta, continue to explore Mars with rovers, and launch robotic explorers to the Moon and Mars. We have pieces of all these worlds in our laboratories. Extensive information about these unique materials, as well as actual lunar samples and meteorites, is available for display and education. The Johnson Space Center (JSC) curates NASA's extraterrestrial samples to support research, education, and public outreach. At the current time, JSC curates five types of extraterrestrial samples: Moon rocks and soils collected by the Apollo astronauts; meteorites collected on US expeditions to Antarctica (including rocks from the Moon, Mars, and many asteroids including Vesta); “cosmic dust” (asteroid and comet particles) collected by high-altitude aircraft; solar wind atoms collected by the Genesis spacecraft; and comet and interstellar dust particles collected by the Stardust spacecraft. These rocks, soils, dust particles, and atoms continue to be studied intensively by scientists around the world. Descriptions of the samples, research results, thousands of photographs, and information on how to request research samples are on the JSC Curation website: http://curator.jsc.nasa.gov/ NASA is eager for scientists and the public to have access to these exciting samples through our various loan procedures. NASA provides a limited number of Moon rock samples for either short-term or long-term displays at museums, planetariums, expositions, and professional events that are open to the public. The JSC Public Affairs Office handles requests for such display samples. Requestors should apply in writing to Mr. Louis Parker, JSC Exhibits Manager. He will advise successful applicants regarding provisions for receipt, display, and return of the samples. All loans will be preceded by a signed loan agreement executed between NASA and the requestor's organization. Email address: louis.a.parker@nasa.gov Sets

  2. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distribution of the points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample the irregular data sets in a near optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.

  3. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.

  4. Systemic consultation and goal setting

    OpenAIRE

    Carr, Alan

    1993-01-01

    Over two decades of empirical research conducted within a positivist framework has shown that goal setting is a particularly useful method for influencing task performance in occupational and industrial contexts. The conditions under which goal setting is maximally effective are now clearly established. These include situations where there is a high level of acceptance and commitment, where goals are specific and challenging, where the task is relatively simple rather than ...

  5. Simplifying sample pretreatment: application of dried blood spot (DBS) method to blood samples, including postmortem, for UHPLC-MS/MS analysis of drugs of abuse.

    Science.gov (United States)

    Odoardi, Sara; Anzillotti, Luca; Strano-Rossi, Sabina

    2014-10-01

    The complexity of biological matrices, such as blood, requires the development of suitably selective and reliable sample pretreatment procedures prior to their instrumental analysis. A method has been developed for the analysis of drugs of abuse and their metabolites from different chemical classes (opiates, methadone, fentanyl and analogues, cocaine, amphetamines and amphetamine-like substances, ketamine, LSD) in human blood using dried blood spot (DBS) and subsequent UHPLC-MS/MS analysis. DBS extraction required only 100μL of sample, added with the internal standards and then three droplets (30μL each) of this solution were spotted on the card, let dry for 1h, punched and extracted with methanol with 0.1% of formic acid. The supernatant was evaporated and the residue was then reconstituted in 100μL of water with 0.1% of formic acid and injected in the UHPLC-MS/MS system. The method was validated considering the following parameters: LOD and LOQ, linearity, precision, accuracy, matrix effect and dilution integrity. LODs were 0.05-1ng/mL and LOQs were 0.2-2ng/mL. The method showed satisfactory linearity for all substances, with determination coefficients always higher than 0.99. Intra and inter day precision, accuracy, matrix effect and dilution integrity were acceptable for all the studied substances. The addition of internal standards before DBS extraction and the deposition of a fixed volume of blood on the filter cards ensured the accurate quantification of the analytes. The validated method was then applied to authentic postmortem blood samples. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Clinical code set engineering for reusing EHR data for research: A review.

    Science.gov (United States)

    Williams, Richard; Kontopantelis, Evangelos; Buchan, Iain; Peek, Niels

    2017-06-01

    The construction of reliable, reusable clinical code sets is essential when re-using Electronic Health Record (EHR) data for research. Yet code set definitions are rarely transparent and their sharing is almost non-existent. There is a lack of methodological standards for the management (construction, sharing, revision and reuse) of clinical code sets which needs to be addressed to ensure the reliability and credibility of studies which use code sets. To review methodological literature on the management of sets of clinical codes used in research on clinical databases and to provide a list of best practice recommendations for future studies and software tools. We performed an exhaustive search for methodological papers about clinical code set engineering for re-using EHR data in research. This was supplemented with papers identified by snowball sampling. In addition, a list of e-phenotyping systems was constructed by merging references from several systematic reviews on this topic, and the processes adopted by those systems for code set management was reviewed. Thirty methodological papers were reviewed. Common approaches included: creating an initial list of synonyms for the condition of interest (n=20); making use of the hierarchical nature of coding terminologies during searching (n=23); reviewing sets with clinician input (n=20); and reusing and updating an existing code set (n=20). Several open source software tools (n=3) were discovered. There is a need for software tools that enable users to easily and quickly create, revise, extend, review and share code sets and we provide a list of recommendations for their design and implementation. Research re-using EHR data could be improved through the further development, more widespread use and routine reporting of the methods by which clinical codes were selected. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  7. Agenda Setting and Mass Communication Theory.

    Science.gov (United States)

    Shaw, Eugene F.

    The agenda-setting concept in mass communication asserts that the news media determine what people will include or exclude in their cognition of public events. Findings in uses and gratification research provide the foundation for this concept: an initial focus on people's needs, particularly the need for information. The agenda-setting concept…

  8. Automated Blood Sample Preparation Unit (ABSPU) for Portable Microfluidic Flow Cytometry.

    Science.gov (United States)

    Chaturvedi, Akhil; Gorthi, Sai Siva

    2017-02-01

    Portable microfluidic diagnostic devices, including flow cytometers, are being developed for point-of-care settings, especially in conjunction with inexpensive imaging devices such as mobile phone cameras. However, two pervasive drawbacks of these have been the lack of automated sample preparation processes and cells settling out of sample suspensions, leading to inaccurate results. We report an automated blood sample preparation unit (ABSPU) to prevent blood samples from settling in a reservoir during loading of samples in flow cytometers. This apparatus automates the preanalytical steps of dilution and staining of blood cells prior to microfluidic loading. It employs an assembly with a miniature vibration motor to drive turbulence in a sample reservoir. To validate performance of this system, we present experimental evidence demonstrating prevention of blood cell settling, cell integrity, and staining of cells prior to flow cytometric analysis. This setup is further integrated with a microfluidic imaging flow cytometer to investigate cell count variability. With no need for prior sample preparation, a drop of whole blood can be directly introduced to the setup without premixing with buffers manually. Our results show that integration of this assembly with microfluidic analysis provides a competent automation tool for low-cost point-of-care blood-based diagnostics.

  9. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    Science.gov (United States)

    2014-06-17

    [Figure residue removed; the panels showed the Wigner distribution, the L-Wigner distribution and the corresponding auto-correlation functions.] Although bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with...

  10. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
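
    As a rough, hypothetical illustration of the quantities an UpSet-style matrix summarizes (this is not code from the paper), the following Python sketch counts exclusive intersection sizes, i.e. elements belonging to exactly a given combination of sets, for a small made-up collection of sets:

        from itertools import combinations

        def exclusive_intersections(sets):
            # For every non-empty combination of sets, count the elements that belong
            # to exactly those sets and to no other set in the collection.
            names = list(sets)
            sizes = {}
            for r in range(1, len(names) + 1):
                for combo in combinations(names, r):
                    inside = set.intersection(*(sets[n] for n in combo))
                    outside = set().union(*(sets[n] for n in names if n not in combo))
                    sizes[combo] = len(inside - outside)
            return {c: s for c, s in sizes.items() if s > 0}

        genres = {"Action": {"A", "B", "C"}, "Comedy": {"B", "C", "D"}, "Drama": {"C", "D", "E"}}
        print(exclusive_intersections(genres))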

  11. Quantum mechanics over sets

    Science.gov (United States)

    Ellerman, David

    2014-03-01

    In models of QM over finite fields (e.g., Schumacher's "modal quantum theory" MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets resulting in quantum mechanics over sets or QM/sets. This gives a full probability calculus (unlike MQT with only zero-one modalities) that leads to a fulsome theory of QM/sets including "logical" models of the double-slit experiment, Bell's Theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations in contrast to the one function evaluation required in the quantum algorithm. This is quantum speedup but with all the calculations over Z2 just like classical computing. This shows definitively that the source of quantum speedup is not in the greater power of computing over the complex numbers, and confirms the idea that the source is in superposition.

  12. Constructing DNA Barcode Sets Based on Particle Swarm Optimization.

    Science.gov (United States)

    Wang, Bin; Zheng, Xuedong; Zhou, Shihua; Zhou, Changjun; Wei, Xiaopeng; Zhang, Qiang; Wei, Ziqi

    2018-01-01

    Following the completion of the human genome project, a large amount of high-throughput bio-data was generated. To analyze these data, massively parallel sequencing, namely next-generation sequencing, was rapidly developed. DNA barcodes are used to identify the ownership between sequences and samples when they are attached at the beginning or end of sequencing reads. Constructing DNA barcode sets provides the candidate DNA barcodes for this application. To increase the accuracy of DNA barcode sets, a particle swarm optimization (PSO) algorithm has been modified and used to construct the DNA barcode sets in this paper. Compared with the extant results, some lower bounds of DNA barcode sets are improved. The results show that the proposed algorithm is effective in constructing DNA barcode sets.

  13. Toward a Principled Sampling Theory for Quasi-Orders.

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
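
    For orientation only, the following Python sketch generates a quasi-order the naive way, as a random reflexive relation closed under transitivity with Warshall's algorithm. Unlike the bias-corrected inductive procedure described above, this simple construction does not sample uniformly from the set of all quasi-orders on the item set:

        import numpy as np

        def random_quasi_order(n_items, p=0.2, rng=None):
            # Random reflexive relation, then transitive closure (Warshall's algorithm).
            # NOTE: this naive construction is biased; it only illustrates the object
            # being sampled, not the paper's corrected inductive sampling procedure.
            rng = np.random.default_rng() if rng is None else rng
            relation = rng.random((n_items, n_items)) < p
            np.fill_diagonal(relation, True)              # reflexivity
            for k in range(n_items):                      # transitive closure
                relation = relation | (relation[:, [k]] & relation[[k], :])
            return relation

        print(random_quasi_order(5, rng=np.random.default_rng(0)).astype(int))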

  15. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hasting sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
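
    As a minimal sketch of the simplest of the three prescriptions (random-walk Metropolis-Hastings on a toy Gaussian likelihood; the step size and chain length below are arbitrary choices, not values from the paper):

        import numpy as np

        def metropolis_hastings(log_post, theta0, n_steps=10_000, step=0.5, rng=None):
            # Random-walk Metropolis-Hastings with an isotropic Gaussian proposal.
            rng = np.random.default_rng() if rng is None else rng
            theta = np.asarray(theta0, dtype=float)
            log_p = log_post(theta)
            chain = np.empty((n_steps, theta.size))
            for i in range(n_steps):
                proposal = theta + step * rng.standard_normal(theta.size)
                log_p_new = log_post(proposal)
                if np.log(rng.random()) < log_p_new - log_p:   # accept with prob min(1, ratio)
                    theta, log_p = proposal, log_p_new
                chain[i] = theta
            return chain

        # toy 2-D Gaussian likelihood with a flat prior: log posterior = -0.5 * |theta|^2
        samples = metropolis_hastings(lambda t: -0.5 * np.sum(t ** 2), theta0=[0.0, 0.0])
        print(samples.mean(axis=0), samples.std(axis=0))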

  16. On the Analysis of Case-Control Studies in Cluster-correlated Data Settings.

    Science.gov (United States)

    Haneuse, Sebastien; Rivera-Rodriguez, Claudia

    2018-01-01

    In resource-limited settings, long-term evaluation of national antiretroviral treatment (ART) programs often relies on aggregated data, the analysis of which may be subject to ecological bias. As researchers and policy makers consider evaluating individual-level outcomes such as treatment adherence or mortality, the well-known case-control design is appealing in that it provides efficiency gains over random sampling. In the context that motivates this article, valid estimation and inference requires acknowledging any clustering, although, to our knowledge, no statistical methods have been published for the analysis of case-control data for which the underlying population exhibits clustering. Furthermore, in the specific context of an ongoing collaboration in Malawi, rather than performing case-control sampling across all clinics, case-control sampling within clinics has been suggested as a more practical strategy. To our knowledge, although similar outcome-dependent sampling schemes have been described in the literature, a case-control design specific to correlated data settings is new. In this article, we describe this design, discuss balanced versus unbalanced sampling techniques, and provide a general approach to analyzing case-control studies in cluster-correlated settings based on inverse probability-weighted generalized estimating equations. Inference is based on a robust sandwich estimator with correlation parameters estimated to ensure appropriate accounting of the outcome-dependent sampling scheme. We conduct comprehensive simulations, based in part on real data on a sample of N = 78,155 program registrants in Malawi between 2005 and 2007, to evaluate small-sample operating characteristics and potential trade-offs associated with standard case-control sampling or when case-control sampling is performed within clusters.

  17. The new LLNL AMS sample changer

    International Nuclear Information System (INIS)

    Roberts, M.L.; Norman, P.J.; Garibaldi, J.L.; Hornady, R.S.

    1993-01-01

    The Center for Accelerator Mass Spectrometry at LLNL has installed a new 64-position AMS sample changer on our spectrometer. This new sample changer has the capability of being controlled manually by an operator or automatically by the AMS data acquisition computer. Automatic control of the sample changer by the data acquisition system is a necessary step towards unattended AMS operation in our laboratory. The sample changer uses a fiber optic shaft encoder for rough rotational indexing of the sample wheel and a series of sequenced pneumatic cylinders for final mechanical indexing of the wheel and insertion and retraction of samples. Transit time from sample to sample varies from 4 s to 19 s, depending on distance moved. Final sample location can be set to within 50 microns on the x and y axes and within 100 microns on the z axis. Changing sample wheels on the new sample changer is also easier and faster than was possible on our previous sample changer and does not require the use of any tools.

  18. The Manifestation of Stopping Sets and Absorbing Sets as Deviations on the Computation Trees of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Eric Psota

    2010-01-01

    The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.

  19. Set theoretical aspects of real analysis

    CERN Document Server

    Kharazishvili, Alexander B

    2014-01-01

    This book addresses a number of questions in real analysis and classical measure theory that are of a set-theoretic flavor. Accessible to graduate students, the beginning of the book presents introductory topics on real analysis and Lebesgue measure theory. These topics highlight the boundary between fundamental concepts of measurability and non-measurability for point sets and functions. The remainder of the book deals with more specialized material on set-theoretical real analysis. Problems are included at the end of each chapter.

  20. Unconfined versus confined speleogenetic settings: variations of solution porosity.

    Directory of Open Access Journals (Sweden)

    Klimchouk Alexander

    2006-01-01

    Speleogenesis in confined settings generates cave morphologies that differ much from those formed in unconfined settings. Caves developed in unconfined settings are characterised by broadly dendritic patterns of channels due to highly competing development. In contrast, caves originated under confined conditions tend to form two- or three-dimensional mazes with densely packed conduits. This paper illustrates variations of solution (channel) porosity resulting from speleogenesis in unconfined and confined settings by the analysis of morphometric parameters of typical cave patterns. Two samples of typical cave systems formed in the respective settings are compared. The sample that represents unconfined speleogenesis consists of solely limestone caves, whereas gypsum caves of this type tend to be less dendritic and more linear. The sample that represents confined speleogenesis consists of both limestone and gypsum maze caves. The comparison shows considerable differences in average values of some parameters between the settings. Passage network density (the ratio of the cave length to the area of the cave field, km/km²) is one order of magnitude greater in confined settings than in unconfined (average 167.3 km/km² versus 16.6 km/km²). Similarly, an order of magnitude difference is observed in cave porosity (a fraction of the volume of a cave block occupied by mapped cavities; 5.0 % versus 0.4 %). This illustrates that storage in maturely karstified confined aquifers is generally much greater than in unconfined ones. The average areal coverage (a fraction of the area of the cave field occupied by passages in a plan view) is about 5 times greater in confined settings than in unconfined (29.7 % versus 6.4 %). This indicates that conduit permeability in confined aquifers is appreciably easier to target with drilling than the widely spaced conduits in unconfined aquifers.

  1. Concentration of ions in selected bottled water samples sold in Malaysia

    Science.gov (United States)

    Aris, Ahmad Zaharin; Kam, Ryan Chuan Yang; Lim, Ai Phing; Praveena, Sarva Mangala

    2013-03-01

    Many consumers around the world, including Malaysians, have turned to bottled water as their main source of drinking water. The aim of this study is to determine the physical and chemical properties of bottled water samples sold in Selangor, Malaysia. A total of 20 bottled water brands consisting of `natural mineral (NM)' and `packaged drinking (PD)' types were randomly collected and analyzed for their physical-chemical characteristics: hydrogen ion concentration (pH), electrical conductivity (EC) and total dissolved solids (TDS), selected major ions: calcium (Ca), potassium (K), magnesium (Mg) and sodium (Na), and minor trace constituents: copper (Cu) and zinc (Zn) to ascertain their suitability for human consumption. The results obtained were compared with guideline values recommended by World Health Organization (WHO) and Malaysian Ministry of Health (MMOH), respectively. It was found that all bottled water samples were in accordance with the guidelines set by WHO and MMOH except for one sample (D3) which was below the pH limit of 6.5. Both NM and PD bottled water were dominated by Na + K > Ca > Mg. Low values for EC and TDS in the bottled water samples showed that water was deficient in essential elements, likely an indication that these were removed by water treatment. Minerals like major ions were present in very low concentrations which could pose a risk to individuals who consume this water on a regular basis. Generally, the overall quality of the supplied bottled water was in accordance to standards and guidelines set by WHO and MMOH and safe for consumption.

  2. A Model for Predicting Firm Value through Managerial Ownership and the Investment Opportunity Set

    Directory of Open Access Journals (Sweden)

    Herry Laksito

    2017-03-01

    This study empirically examined the effect of managerial ownership on firm value with the investment opportunity set as a mediator. In this model, corporate governance, measured by managerial shareholding, is related to firm value through the mediation of the investment opportunity set. The purpose of this study was to analyze the effect of corporate governance on firm value, with the investment opportunity set as mediator, in manufacturing companies listed on the Indonesia Stock Exchange. The population comprised all manufacturing companies listed on the Indonesia Stock Exchange and reporting financial statements in the Indonesian capital market directory during the period 2005-2007. The sample was determined by purposive sampling; 37 firms met the selection criteria. The statistical method used was path analysis. The results showed that managerial stock ownership (corporate governance) did not affect firm value, with a negative direction. Managerial stock ownership (corporate governance) affected the investment opportunity set (IOS). IOS did not affect firm value, and the investment opportunity set could not significantly mediate the effect of managerial ownership (corporate governance) on firm value.

  3. The efficiency of systematic sampling in stereology-reconsidered

    DEFF Research Database (Denmark)

    Gundersen, Hans Jørgen Gottlieb; Jensen, Eva B. Vedel; Kieu, K

    1999-01-01

    In the present paper, we summarize and further develop recent research in the estimation of the variance of stereological estimators based on systematic sampling. In particular, it is emphasized that the relevant estimation procedure depends on the sampling density. The validity of the variance estimation is examined in a collection of data sets, obtained by systematic sampling. Practical recommendations are also provided in a separate section.

  4. Measurement of regional cerebral blood flow using one-point venous blood sampling and causality model. Evaluation by comparing with conventional continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Mimura, Hiroaki; Sone, Teruki; Takahashi, Yoshitake

    2008-01-01

    Optimal setting of the input function is essential for the measurement of regional cerebral blood flow (rCBF) based on the microsphere model using N-isopropyl-4-[123I]iodoamphetamine (123I-IMP), and usually the arterial 123I-IMP concentration (integral value) in the initial 5 min is used for this purpose. We have developed a new convenient method in which the 123I-IMP concentration in an arterial blood sample is estimated from that in a venous blood sample. Brain perfusion single photon emission computed tomography (SPECT) with 123I-IMP was performed in 110 cases of central nervous system disorders. The causality was analyzed between the various parameters of SPECT data and the ratio of octanol-extracted arterial radioactivity concentration during the first 5 min (Caoct) to octanol-extracted venous radioactivity concentration at 27 min after intravenous injection of 123I-IMP (Cvoct). A high correlation was observed between the measured and estimated values of Caoct/Cvoct (r=0.856) when the following five parameters were included in the regression formula: radioactivity concentration in venous blood sampled at 27 min (Cv), Cvoct, Cvoct/Cv, and total brain radioactivity counts that were measured by a four-head gamma camera 5 min and 28 min after 123I-IMP injection. Furthermore, the rCBF values obtained using the input parameters estimated by this method were also highly correlated with the rCBF values measured using the continuous arterial blood sampling method (r=0.912). These results suggest that this method would serve as a new, convenient and less invasive method of rCBF measurement in the clinical setting. (author)

  5. Extended Set Constraints and Tree Grammar Abstraction of Programs

    DEFF Research Database (Denmark)

    Rosendahl, Mads; Gallagher, John Patrick

    2011-01-01

    Set constraints are relations between sets of ground terms or trees. This paper presents two main contributions: firstly we consider an extension of the systems of set constraints to include a tuple constructor, and secondly we construct a simplified solution procedure for set constraints. We...

  6. Domains of Risk in the Developmental Continuity of Fire Setting

    OpenAIRE

    McCarty, Carolyn A.; McMahon, Robert J.

    2005-01-01

    Juvenile fire setting is a serious, dangerous, and costly behavior. The majority of research examining youth fire setting has been cross-sectional. We sought to examine early risk attributes that could differentiate fire setters from non–fire setters, in addition to examining their association with the developmental continuity of fire-setting behavior into late childhood. Using a sample of 361 youth drawn from 4 different U.S. communities, this study examined the association between a broad a...

  7. Sample-based reporting of official national control of veterinary drug residues

    DEFF Research Database (Denmark)

    Andersen, Jens Hinge; Jensen, Louise Grønhøj Hørbye; Madsen, Helle L.

    assessment as well as risk management. The European Food Safety Authority has been assigned the task of setting up a system for data collection based on individual analytical results. A pilot project has been launched with participants from eleven Member States for parallel reporting of monitoring results from 2015 in aggregated form as well as individual analytical results using a standardised data model. The challenges that face the pilot participants include provisions for categorised sample information, specific method performance data, result evaluation and follow-up actions. Experience gained...

  8. Genesis Contingency Planning and Mishap Recovery: The Sample Curation View

    Science.gov (United States)

    Stansbery, E. K.; Allton, J. H.; Allen, C. C.; McNamara, K. M.; Calaway, M.; Rodriques, M. C.

    2007-01-01

    Planning for sample preservation and curation was part of mission design from the beginning. One of the scientific objectives for Genesis included collecting samples of three regimes of the solar wind in addition to collecting bulk solar wind during the mission. Collectors were fabricated in different thicknesses for each regime of the solar wind and attached to separate frames exposed to the solar wind during specific periods of solar activity associated with each regime. The original plan to determine the solar regime sampled for specific collectors was to identify to which frame the collector was attached. However, the collectors were dislodged during the hard landing making identification by frame attachment impossible. Because regimes were also identified by thickness of the collector, the regime sampled is identified by measuring fragment thickness. A variety of collector materials and thin films applied to substrates were selected and qualified for flight. This diversity provided elemental measurement in more than one material, mitigating effects of diffusion rates and/or radiation damage. It also mitigated against different material and substrate strengths resulting in differing effects of the hard landing. For example, silicon crystal substrates broke into smaller fragments than sapphire-based substrates and diamond surfaces were more resilient to flying debris damage than gold. The primary responsibility of the curation team for recovery was process documentation. Contingency planning for the recovery phase expanded this responsibility to include not only equipment to document, but also gather, contain and identify samples from the landing area and the recovered spacecraft. The team developed contingency plans for various scenarios as part of mission planning that included topographic maps to aid in site recovery and identification of different modes of transport and purge capability depending on damage. A clean tent, set-up at Utah Test & Training Range

  9. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA)

    Directory of Open Access Journals (Sweden)

    Mauricio Teixeira Leite de Vasconcellos

    2015-05-01

    The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated by the product of the reciprocals of the inclusion probabilities in each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata by sex and age.
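
    To make the weighting step concrete, here is a small hedged Python sketch with made-up inclusion probabilities (the actual ERICA probabilities are not given in this summary): the design weight is the product of the reciprocals of the stage-wise inclusion probabilities, followed by a simple ratio calibration to a projected enrolment total.

        # hypothetical inclusion probabilities for the three stages:
        # school (selected PPS), shift-grade combination within school, class within combination
        p_school, p_combination, p_class = 0.05, 3 / 8, 1 / 4
        design_weight = 1.0 / (p_school * p_combination * p_class)

        # simple ratio calibration to a projected number of enrolled adolescents
        # in a stratum-by-sex-by-age cell (both totals below are invented)
        projected_enrolment = 120_000
        sum_of_design_weights = 95_500
        calibrated_weight = design_weight * projected_enrolment / sum_of_design_weights
        print(round(design_weight, 1), round(calibrated_weight, 1))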

  10. Solvent Hold Tank Sample Results for MCU-16-596-597-598: April 2016 Monthly Sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL). Advanced Characterization and Processing; Jones, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL). Research Support

    2016-07-12

    Savannah River National Laboratory (SRNL) received one set of Solvent Hold Tank (SHT) samples (MCU-16-596-597-598), pulled on 04/30/2016 for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-16-596-597-598 indicated the Isopar™L concentration is above its nominal level (102%). The modifier (CS-7SB) is 14% below its nominal concentration, while the TiDG and MaxCalix concentrations are at and above their nominal concentrations, respectively. This analysis confirms the solvent may require the addition of modifier. Based on the current monthly sample, the levels of TiDG, Isopar™L, MaxCalix, and modifier are sufficient for continuing operation but are expected to decrease with time. Periodic characterization and trimming additions to the solvent are recommended.

  11. Activity Settings and Daily Routines in Preschool Classrooms: Diverse Experiences in Early Learning Settings for Low-Income Children

    Science.gov (United States)

    Fuligni, Allison Sidle; Howes, Carollee; Huang, Yiching; Hong, Sandra Soliday; Lara-Cinisomo, Sandraluz

    2012-01-01

    This paper examines activity settings and daily classroom routines experienced by 3- and 4-year-old low-income children in public center-based preschool programs, private center-based programs, and family child care homes. Two daily routine profiles were identified using a time-sampling coding procedure: a High Free-Choice pattern in which…

  12. Efficiency calibration and measurement of self-absorption correction of environmental gamma spectroscopy of soils samples using Marinelli beaker

    International Nuclear Information System (INIS)

    Abdi, M. R.; Mostajaboddavati, M.; Hassanzadeh, S.; Faghihian, H.; Rezaee, Kh.; Kamali, M.

    2006-01-01

    A nonlinear function, in combination with the mixed-activity calibration method, is applied for fitting the experimental peak efficiency of HPGe spectrometers in the 59-2614 keV energy range. The preparation of Marinelli beaker standards of mixed gamma emitters and an RG-set at secular equilibrium with its daughter radionuclides was studied. Standards were prepared by mixing known amounts of 133Ba, 241Am, 152Eu, 207Bi, 24Na, Al2O3 powder and soil. The validity of these standards was checked by comparison with the certified standard reference materials RG-set and IAEA-Soil-6. Self-absorption was measured for the activity calculation of the gamma-ray lines of the 238U daughter series, the 232Th series, 137Cs and 40K in soil samples. Self-absorption in the sample will depend on a number of factors including sample composition, density, sample size and gamma-ray energy. Seven Marinelli beaker standards were prepared in different degrees of compaction with bulk density (ρ) of 1.000 to 1.600 g cm⁻³. The detection efficiency versus density was obtained and the equation of self-absorption correction factors calculated for soil samples.

  13. Introduction to set theory and topology

    CERN Document Server

    Kuratowski, Kazimierz; Stark, M

    1972-01-01

    Introduction to Set Theory and Topology describes the fundamental concepts of set theory and topology as well as its applicability to analysis, geometry, and other branches of mathematics, including algebra and probability theory. Concepts such as inverse limit, lattice, ideal, filter, commutative diagram, quotient-spaces, completely regular spaces, quasicomponents, and cartesian products of topological spaces are considered. This volume consists of 21 chapters organized into two sections and begins with an introduction to set theory, with emphasis on the propositional calculus and its applica

  14. Support vector machine incremental learning triggered by wrongly predicted samples

    Science.gov (United States)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample which violates the KKT conditions will become a new support vector (SV) and may migrate old samples between the SV set and the non-support vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not exactly clear at that moment which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which will not greatly increase its accuracy but will decrease the training speed. Therefore, how to choose the new SVs from the old sets during the incremental stages, and when to process incremental steps, will greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and to use wrongly predicted samples to trigger the incremental processing simultaneously. Experimental results show that the proposed algorithm can achieve good performance with high efficiency, high speed and good accuracy.
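
    A minimal, hypothetical sketch of the triggering idea (retrain only when the incoming sample is wrongly predicted or violates a margin-style KKT condition); it refits a scikit-learn SVC from scratch as a stand-in for a true incremental solver and assumes labels in {-1, +1}:

        import numpy as np
        from sklearn.svm import SVC

        def update_if_triggered(model, X, y, x_new, y_new, margin=1.0):
            # Keep the current model if the new sample satisfies y * f(x) >= margin
            # (a KKT-style condition); otherwise add it to the training set and refit.
            score = model.decision_function(x_new.reshape(1, -1))[0]
            if y_new * score >= margin:
                return model, X, y
            X = np.vstack([X, x_new])
            y = np.append(y, y_new)
            return SVC(kernel="linear", C=1.0).fit(X, y), X, y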

  15. Using Group Projects to Assess the Learning of Sampling Distributions

    Science.gov (United States)

    Neidigh, Robert O.; Dunkelberger, Jake

    2012-01-01

    In an introductory business statistics course, student groups used sample data to compare a set of sample means to the theoretical sampling distribution. Each group was given a production measurement with a population mean and standard deviation. The groups were also provided an excel spreadsheet with 40 sample measurements per week for 52 weeks…
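
    The comparison the groups carried out can be mimicked with a few lines of Python (the production mean, standard deviation and sample size below are invented, not taken from the course):

        import numpy as np

        rng = np.random.default_rng(1)
        mu, sigma, n, weeks = 50.0, 4.0, 40, 52          # hypothetical population parameters
        weekly_means = rng.normal(mu, sigma, size=(weeks, n)).mean(axis=1)

        print("mean of the 52 sample means:", round(weekly_means.mean(), 3))
        print("SD of the sample means:     ", round(weekly_means.std(ddof=1), 3))
        print("theoretical standard error: ", round(sigma / np.sqrt(n), 3))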

  16. Soil sample collection and analysis for the Fugitive Dust Characterization Study

    Science.gov (United States)

    Ashbaugh, Lowell L.; Carvacho, Omar F.; Brown, Michael S.; Chow, Judith C.; Watson, John G.; Magliano, Karen C.

    A unique set of soil samples was collected as part of the Fugitive Dust Characterization Study. The study was carried out to establish whether or not source profiles could be constructed using novel analytical methods that could distinguish soil dust sources from each other. The soil sources sampled included fields planted in cotton, almond, tomato, grape, and safflower, dairy and feedlot facilities, paved and unpaved roads (both urban and rural), an agricultural staging area, disturbed land with salt buildup, and construction areas where the topsoil had been removed. The samples were collected using a systematic procedure designed to reduce sampling bias, and were stored frozen to preserve possible organic signatures. For this paper the samples were characterized by particle size (percent sand, silt, and clay), dry silt content (used in EPA-recommended fugitive dust emission factors), carbon and nitrogen content, and potential to emit both PM10 and PM2.5. These are not the "novel analytical methods" referred to above; rather, this was the basic characterization of the samples used in comparing analytical methods by other scientists contracted to the California Air Resources Board. The purpose of this paper is to document the methods used to collect the samples, the collection locations, the analysis of soil type and potential to emit PM10, and the sample variability, both within field and between fields of the same crop type.

  17. Representative process sampling - in practice

    DEFF Research Database (Denmark)

    Esbensen, Kim; Friis-Pedersen, Hans Henrik; Julius, Lars Petersen

    2007-01-01

    Didactic data sets representing a range of real-world processes are used to illustrate "how to do" representative process sampling and process characterisation. The selected process data lead to diverse variogram expressions with different systematics (no range vs. important ranges; trends and/or periodicity; different nugget effects; and process variations ranging from less than one lag to the full variogram lag). Variogram data analysis leads to a fundamental decomposition into 0-D sampling vs. 1-D process variances, based on the three principal variogram parameters: range, sill and nugget effect.
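
    A minimal sketch of the experimental variogram computation that underlies such an analysis (a generic semivariance estimator for a 1-D process series, not the authors' code):

        import numpy as np

        def experimental_variogram(series, max_lag):
            # Semivariance gamma(h) = 0.5 * mean((x[t+h] - x[t])^2) for lags h = 1..max_lag.
            x = np.asarray(series, dtype=float)
            lags = np.arange(1, max_lag + 1)
            gamma = np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2) for h in lags])
            return lags, gamma

        lags, gamma = experimental_variogram(np.cumsum(np.random.default_rng(0).standard_normal(500)), 40)
        print(gamma[:5])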

  18. The ocean sampling day consortium

    DEFF Research Database (Denmark)

    Kopf, Anna; Bicak, Mesude; Kottmann, Renzo

    2015-01-01

    Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our...

  19. Curating NASA's Past, Present, and Future Astromaterial Sample Collections

    Science.gov (United States)

    Zeigler, R. A.; Allton, J. H.; Evans, C. A.; Fries, M. D.; McCubbin, F. M.; Nakamura-Messenger, K.; Righter, K.; Zolensky, M.; Stansbery, E. K.

    2016-01-01

    's Astromaterials Research Office, which houses a world-class suite of analytical instrumentation and scientists. We leverage these labs and personnel to better curate the samples. Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation". Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, and curation of organically- and biologically-sensitive samples.

  20. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
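
    As a hedged illustration of the core computation (not taken from any of the reviewed packages), the sketch below builds a simple realized-relationship (kinship) matrix from centered genotypes and tests one marker by generalized least squares, assuming the variance components have already been estimated, e.g. by REML:

        import numpy as np

        def kinship(Z):
            # Simple realized-relationship matrix from a centered n x m genotype matrix.
            return Z @ Z.T / Z.shape[1]

        def gls_marker_test(y, marker, K, sigma_g2, sigma_e2):
            # Model: y = intercept + marker * beta + u + e, with Var(y) = sigma_g2*K + sigma_e2*I.
            n = len(y)
            V_inv = np.linalg.inv(sigma_g2 * K + sigma_e2 * np.eye(n))
            X = np.column_stack([np.ones(n), marker])
            XtVi = X.T @ V_inv
            cov = np.linalg.inv(XtVi @ X)
            beta = cov @ (XtVi @ y)
            return beta[1], np.sqrt(cov[1, 1])            # marker effect and its standard error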

  1. Effectiveness of a structured motivational intervention including smoking cessation advice and spirometry information in the primary care setting: the ESPITAP study

    Directory of Open Access Journals (Sweden)

    Martin-Lujan Francisco

    2011-11-01

    Background: There is current controversy about the efficacy of smoking cessation interventions that are based on information obtained by spirometry. The objective of this study is to evaluate the effectiveness in the primary care setting of a structured motivational intervention to achieve smoking cessation, compared with usual clinical practice. Methods: Design: Multicentre randomized clinical trial with an intervention and a control group. Setting: 12 primary care centres in the province of Tarragona (Spain). Subjects of study: 600 current smokers aged between 35 and 70 years with a cumulative habit of more than 10 packs of cigarettes per year, attended in primary care for any reason and who did not meet any of the exclusion criteria for the study, randomly assigned to structured intervention or standard clinical attention. Intervention: Usual advice to quit smoking by a general practitioner as well as a 20-minute personalized visit to provide detailed information about spirometry results, during which FEV1, FVC, FEF 25-75% and PEF measurements were discussed and interpreted in terms of theoretical values. Additional information included the lung age index (defined as the average age of a non-smoker with the same FEV1 as the study participant), comparing this with the chronological age to illustrate the pulmonary deterioration that results from smoking. Measurements: Spirometry during the initial visit. Structured interview questionnaire administered at the primary care centre at the initial visit and at 12-month follow-up. Telephone follow-up interview at 6 months. At 12-month follow-up, expired CO was measured in patients who claimed to have quit smoking. Main variables: Smoking cessation at 12 months. Analysis: Data will be analyzed on the basis of "intention to treat" and the unit of analysis will be the individual smoker. Expected results: Among active smokers treated in primary care we anticipate significantly higher smoking cessation in the...

  2. Solvent hold tank sample results for MCU-16-1363-1365. November 2016 monthly sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-03-22

    Savannah River National Laboratory (SRNL) received one set of three Solvent Hold Tank (SHT) samples (MCU-16-1363-1364-1365), pulled on 11/15/2016 for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-16-1363-1364-1365 indicated the Isopar™L concentration is at its nominal level (100%). The extractant (MaxCalix) and the modifier (CS-7SB) are 8% and 2% below their nominal concentrations. The suppressor (TiDG) is 7% below its nominal concentration. This analysis confirms the trim and Isopar™ additions to the solvent in November. This analysis also indicates the solvent did not require further additions. Based on the current monthly sample, the levels of TiDG, Isopar™L, MaxCalix, and modifier are sufficient for continuing operation but are expected to decrease with time. Periodic characterization and trimming additions to the solvent are recommended.

  3. Determination of Slake Durability Index (Sdi) Values on Different Shape of Laminated Marl Samples

    Science.gov (United States)

    Ankara, Hüseyin; Çiçek, Fatma; Talha Deniz, İsmail; Uçak, Emre; Yerel Kandemir, Süheyla

    2016-10-01

    The slake durability index (SDI) test is widely used to determine the disintegration characteristics of weak and clay-bearing rocks in geo-engineering problems. However, because sample pieces of different shapes, such as irregular fragments, undergo mechanical breakage during the slaking process, the SDI test has some limitations that affect the index values. In addition, the shape and surface roughness of laminated marl samples have a severe influence on the SDI. In this study, a new sample preparation method called the Pasha Method was used to prepare spherical specimens from the laminated marl collected from Seyitomer collar (SLI). Moreover, the SDI tests were performed on specimens of equal size and weight: three sets with different shapes were used. The three sets were prepared as test samples with a spherical shape, an irregular shape parallel to the layers, and an irregular shape perpendicular to the layers. Index values were determined for the three different sets subjected to the SDI test for 4 cycles. The index values at the end of the fourth cycle were found to be 98.43, 98.39 and 97.20 %, respectively. As seen, the index values of the spherical sample set were found to be higher than those of the irregular sample sets.

  4. Nurse managers' experiences in continuous quality improvement in resource-poor healthcare settings.

    Science.gov (United States)

    Kakyo, Tracy Alexis; Xiao, Lily Dongxia

    2017-06-01

    Ensuring safe and quality care for patients in hospitals is an important part of a nurse manager's role. Continuous quality improvement has been identified as one approach that leads to the delivery of quality care services to patients and is widely used by nurse managers to improve patient care. Nurse managers' experiences in initiating continuous quality improvement activities in resource-poor healthcare settings remain largely unknown. Research evidence is highly demanded in these settings to address disease burden and evidence-based practice. This interpretive qualitative study was conducted to gain an understanding of nurse managers' Continuous Quality Improvement experiences in rural hospitals in Uganda. Nurse managers in rural healthcare settings used their role to prioritize quality improvement activities, monitor the Continuous Quality Improvement process, and utilize in-service education to support continuous quality improvement. The nurse managers in our sample encountered a number of barriers during the implementation of Continuous Quality Improvement, including: limited patient participation, lack of materials, and limited human resources. Efforts to address the challenges faced through good governance and leadership development require more attention. © 2017 John Wiley & Sons Australia, Ltd.

  5. Wyoming CV Pilot Traveler Information Message Sample

    Data.gov (United States)

    Department of Transportation — This dataset contains a sample of the sanitized Traveler Information Messages (TIM) being generated by the Wyoming Connected Vehicle (CV) Pilot. The full set of TIMs...

  6. SNP calling, genotype calling, and sample allele frequency estimation from new-generation sequencing data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    We present a statistical framework for estimation and application of sample allele frequency spectra from New-Generation Sequencing (NGS) data. In this method, we first estimate the allele frequency spectrum using maximum likelihood. In contrast to previous methods, the likelihood function is calculated ... and can be extended to various other cases, including cases with deviations from Hardy-Weinberg equilibrium. We evaluate the statistical properties of the methods using simulations and by application to a real data set.
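
    A toy sketch of the first step for a single site (maximum-likelihood allele frequency from per-individual genotype likelihoods under Hardy-Weinberg proportions; a crude grid search stands in for the paper's optimization):

        import numpy as np

        def ml_allele_frequency(genotype_likelihoods, grid=np.linspace(0.0, 1.0, 1001)):
            # genotype_likelihoods: n x 3 array, columns = P(data_i | 0, 1, 2 copies of the allele).
            best_f, best_ll = 0.0, -np.inf
            for f in grid:
                hwe = np.array([(1 - f) ** 2, 2 * f * (1 - f), f ** 2])
                ll = np.sum(np.log(genotype_likelihoods @ hwe + 1e-300))
                if ll > best_ll:
                    best_f, best_ll = f, ll
            return best_f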

  7. Risk Factors for Pressure Ulcers Including Suspected Deep Tissue Injury in Nursing Home Facility Residents: Analysis of National Minimum Data Set 3.0.

    Science.gov (United States)

    Ahn, Hyochol; Cowan, Linda; Garvan, Cynthia; Lyon, Debra; Stechmiller, Joyce

    2016-04-01

    To provide information on risk factors associated with pressure ulcers (PrUs), including suspected deep tissue injury (sDTI), in nursing home residents in the United States. This continuing education activity is intended for physicians and nurses with an interest in skin and wound care. After participating in this educational activity, the participant should be better able to:1. Examine the literature related to risk factors for the development of PrUs.2. Compare risk factors associated with the prevalence of PrUs and sDTI from the revised Minimum Data Set 3.0 2012 using a modified Defloor's conceptual model of PrUs as a theoretical framework. This study aims to characterize and compare risk factors associated with pressure ulcers (PrUs), including suspected deep tissue injury (sDTI), in nursing home (NH) residents in the United States. Secondary analysis of the 2012 Minimum Data Set (MDS 3.0). Medicare- or Medicaid-certified NHs in the United States. Nursing home residents (n = 2,936,146) 18 years or older with complete PrU data, who received comprehensive assessments from January to December 2012. Pressure ulcer by stage was the outcome variable. Explanatory variables (age, gender, race and ethnicity, body mass index, skin integrity, system failure, disease, infection, mobility, and cognition) from the MDS 3.0 were aligned with the 4 elements of Defloor's conceptual model: compressive forces, shearing forces, tissue tolerance for pressure, and tissue tolerance for oxygen. Of 2,936,146 NH residents who had complete data for PrU, 89.9% had no PrU; 8.4% had a Stage 2, 3, or 4 or unstagable PrU; and 1.7% had an sDTI. The MDS variables corresponding to the 4 elements of Defloor's model were significantly predictive of both PrU and sDTI. Black residents had the highest risk of any-stage PrU, and Hispanic residents had the highest risk of sDTI. Skin integrity, system failure, infection, and disease risk factors had larger effect sizes for sDTI than for other PrU stages

  8. Subsurface Noble Gas Sampling Manual

    Energy Technology Data Exchange (ETDEWEB)

    Carrigan, C. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-18

    The intent of this document is to provide information about best available approaches for performing subsurface soil gas sampling during an On Site Inspection or OSI. This information is based on field sampling experiments, computer simulations and data from the NA-22 Noble Gas Signature Experiment Test Bed at the Nevada Nuclear Security Site (NNSS). The approaches should optimize the gas concentration from the subsurface cavity or chimney regime while simultaneously minimizing the potential for atmospheric radioxenon and near-surface Argon-37 contamination. Where possible, we quantitatively assess differences in sampling practices for the same sets of environmental conditions. We recognize that all sampling scenarios cannot be addressed. However, if this document helps to inform the intuition of the reader about addressing the challenges resulting from the inevitable deviations from the scenario assumed here, it will have achieved its goal.

  9. Fair-sampling assumption is not necessary for testing local realism

    International Nuclear Information System (INIS)

    Berry, Dominic W.; Jeong, Hyunseok; Stobinska, Magdalena; Ralph, Timothy C.

    2010-01-01

    Almost all Bell inequality experiments to date have used postselection and therefore relied on the fair sampling assumption for their interpretation. The standard form of the fair sampling assumption is that the loss is independent of the measurement settings, so the ensemble of detected systems provides a fair statistical sample of the total ensemble. This is often assumed to be needed to interpret Bell inequality experiments as ruling out hidden-variable theories. Here we show that it is not necessary; the loss can depend on measurement settings, provided the detection efficiency factorizes as a function of the measurement settings and any hidden variable. This condition implies that Tsirelson's bound must be satisfied for entangled states. On the other hand, we show that it is possible for Tsirelson's bound to be violated while the Clauser-Horne-Shimony-Holt (CHSH)-Bell inequality still holds for unentangled states, and present an experimentally feasible example.

  10. Analyzing ROC curves using the effective set-size model

    Science.gov (United States)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical
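
    A small simulation, written for this summary rather than taken from the paper, of how ratings arise under the model: each image yields the maximum of M* location responses, with one location shifted by d' on signal-present images; the empirical AUC is then read off by pairwise comparison.

        import numpy as np

        def simulate_ratings(m_star, d_prime, n_images, rng):
            # Rating = max over m_star location responses (signal added to one location when present).
            absent = rng.standard_normal((n_images, m_star)).max(axis=1)
            present = rng.standard_normal((n_images, m_star))
            present[:, 0] += d_prime
            return absent, present.max(axis=1)

        rng = np.random.default_rng(0)
        absent, present = simulate_ratings(m_star=4, d_prime=2.0, n_images=2000, rng=rng)
        auc = (present[:, None] > absent[None, :]).mean()     # empirical AUC
        print(round(auc, 3))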

  11. SAMPL4 & DOCK3.7: lessons for automated docking procedures

    Science.gov (United States)

    Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.

    2014-03-01

    The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction of the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) Solvation energies of ligands were five-fold worse than any other method used in SAMPL4, including methods that were similarly fast, (2) HIV Integrase is a challenging target; automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), but affinity prediction, as expected, was very poor, (3) Molecular docking grid sizes can be very important; serious errors were discovered with default settings that have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.

  12. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    Science.gov (United States)

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub- Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate- distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
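
    The block-level step described above reduces to finding the stationary point of a fitted quadratic. The sketch below assumes the quadratic coefficients have already been estimated from training data (the names and values are illustrative, not taken from the paper) and simply solves d(PSNR)/db = 0, clipping to an admissible bit-depth range.

```python
# Hedged sketch: block-level rate-distortion optimization where the block
# reconstruction PSNR is modelled as a quadratic in the quantization
# bit-depth b, PSNR(b) ~= a*b**2 + c*b + d. Coefficients a, c, d are assumed
# to come from a separate fitting step; they are not the paper's values.

def optimal_bit_depth(a, c, b_min=1, b_max=12):
    """Return the bit-depth that maximizes the fitted quadratic PSNR model.

    Setting d(PSNR)/db = 2*a*b + c = 0 gives the stationary point
    b = -c / (2*a); the result is clipped to the admissible integer range.
    """
    if a == 0:
        raise ValueError("quadratic coefficient must be non-zero")
    b_star = -c / (2.0 * a)
    return int(min(max(round(b_star), b_min), b_max))

# Example with made-up coefficients for one video block.
print(optimal_bit_depth(a=-0.15, c=2.4))  # -> 8
```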

  14. Comparing sets of patterns with the Jaccard index

    Directory of Open Access Journals (Sweden)

    Sam Fletcher

    2018-03-01

    Full Text Available The ability to extract knowledge from data has been the driving force of Data Mining since its inception, and of statistical modeling long before even that. Actionable knowledge often takes the form of patterns, where a set of antecedents can be used to infer a consequent. In this paper we offer a solution to the problem of comparing different sets of patterns. Our solution allows comparisons between sets of patterns that were derived from different techniques (such as different classification algorithms, or made from different samples of data (such as temporal data or data perturbed for privacy reasons. We propose using the Jaccard index to measure the similarity between sets of patterns by converting each pattern into a single element within the set. Our measure focuses on providing conceptual simplicity, computational simplicity, interpretability, and wide applicability. The results of this measure are compared to prediction accuracy in the context of a real-world data mining scenario.
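
    Because each pattern is collapsed to a single element, the proposed measure is the ordinary Jaccard index over two sets. A minimal sketch, using a hypothetical rule representation (sorted antecedents plus a consequent), might look like this:

```python
def jaccard_index(patterns_a, patterns_b):
    """Jaccard similarity between two collections of patterns.

    Each pattern (here an antecedent set plus a consequent) is reduced to a
    single hashable element -- a tuple of sorted antecedents and the
    consequent -- so that standard set operations apply.
    """
    set_a = {(tuple(sorted(ante)), cons) for ante, cons in patterns_a}
    set_b = {(tuple(sorted(ante)), cons) for ante, cons in patterns_b}
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 1.0

# Two small rule sets mined by different algorithms (hypothetical data).
rules_1 = [({"age>40", "smoker"}, "risk=high"), ({"age<30"}, "risk=low")]
rules_2 = [({"smoker", "age>40"}, "risk=high"), ({"bmi>30"}, "risk=high")]
print(jaccard_index(rules_1, rules_2))  # 1 shared rule of 3 distinct -> 0.333...
```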

  15. Sampling and Analysis for Assessment of Body Burdens

    International Nuclear Information System (INIS)

    Harley, J.H.

    1964-01-01

    A review of sampling criteria and techniques and of sample processing methods for indirect assessment of body burdens is presented. The text is limited to the more recent developments in the field of bioassay and to the nuclides which cannot be readily determined in the body directly. A selected bibliography is included. The planning of a bioassay programme should emphasize the detection of high or unusual exposures and the concentrated study of these cases when detected. This procedure gives the maximum amount of data for the dosimetry of individuals at risk and also adds to our scientific background for an understanding of internal emitters. Only a minimum of effort should be spent on sampling individuals having had negligible exposure. The chemical separation procedures required for bioassay also fall into two categories. The first is the rapid method, possibly of low accuracy, used for detection. The second is the more accurate method required for study of the individual after detection of the exposure. Excretion, whether exponential or a power function, drops off rapidly. It is necessary to locate the exposure in time before any evaluation can be made, even before deciding if the exposure is significant. One approach is frequent sampling and analysis by a quick screening technique. More commonly, samples are collected at longer intervals and an arbitrary level of re-sampling is set to assist in the detection of real exposures. It is probable that too much bioassay effort has gone into measurements on individuals at low risk and not enough on those at higher risk. The development of bioassay procedures for overcoming this problem has begun, and this paper emphasizes this facet of sampling and sample processing. (author) [fr

  16. Inspecting close maternal relatedness: Towards better mtDNA population samples in forensic databases.

    Science.gov (United States)

    Bodner, Martin; Irwin, Jodi A; Coble, Michael D; Parson, Walther

    2011-03-01

    Reliable data are crucial for all research fields applying mitochondrial DNA (mtDNA) as a genetic marker. Quality control measures have been introduced to ensure the highest standards in sequence data generation, validation and a posteriori inspection. A phylogenetic alignment strategy has been widely accepted as a prerequisite for data comparability and database searches, for forensic applications, for reconstructions of human migrations and for correct interpretation of mtDNA mutations in medical genetics. There is continuing effort to enhance the number of worldwide population samples in order to contribute to a better understanding of human mtDNA variation. This has often led to the analysis of convenience samples collected for other purposes, which might not meet the quality requirement of random sampling for mtDNA data sets. Here, we introduce an additional quality control means that deals with one aspect of this limitation: by combining autosomal short tandem repeat (STR) markers with mtDNA information, it helps to avoid the bias introduced by related individuals included in the same (small) sample. By STR analysis of individuals sharing their mitochondrial haplotype, pedigree construction and subsequent software-assisted calculation of likelihood ratios based on the allele frequencies found in the population, closely maternally related individuals can be identified and excluded. We also discuss scenarios that allow related individuals in the same set. An ideal population sample would be representative of its population: this new approach represents another contribution towards this goal. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Developing Water Sampling Standards

    Science.gov (United States)

    Environmental Science and Technology, 1974

    1974-01-01

    Participants in the D-19 symposium on aquatic sampling and measurement for water pollution assessment were informed that determining the extent of waste water stream pollution is not a cut-and-dried procedure. Topics discussed include field sampling, representative sampling from storm sewers, suggested sampler features and application of improved…

  18. Extension of a dynamic headspace multi-volatile method to milliliter injection volumes with full sample evaporation: Application to green tea.

    Science.gov (United States)

    Ochiai, Nobuo; Sasamoto, Kikuo; Tsunokawa, Jun; Hoffmann, Andreas; Okanoya, Kazunori; MacNamara, Kevin

    2015-11-20

    An extension of multi-volatile method (MVM) technology using the combination of a standard dynamic headspace (DHS) configuration, and a modified DHS configuration incorporating an additional vacuum module, was developed for milliliter injection volume of aqueous sample with full sample evaporation. A prior step involved investigation of water management by weighing of the water residue in the adsorbent trap. The extended MVM for 1 mL aqueous sample consists of five different DHS method parameter sets including choice of the replaceable adsorbent trap. The initial two DHS sampling sets at 25°C with the standard DHS configuration using a carbon-based adsorbent trap target very volatile solutes with high vapor pressure (>10 kPa) and volatile solutes with moderate vapor pressure (1-10 kPa). The subsequent three DHS sampling sets at 80°C with the modified DHS configuration using a Tenax TA trap target solutes with low vapor pressure. Recoveries were high (>88%) for 17 test aroma compounds and moderate (44-71%) for 4 test compounds. The method showed good linearity (r² > 0.9913) and high sensitivity (limit of detection: 0.1-0.5 ng mL⁻¹) even with MS scan mode. The improved sensitivity of the method was demonstrated with analysis of a wide variety of aroma compounds in brewed green tea. Compared to the original 100 μL MVM procedure, this extension to 1 mL MVM allowed detection of nearly twice the number of aroma compounds, including 18 potent aroma compounds from top-note to base-note (e.g. 2,3-butanedione, coumarin, furaneol, guaiacol, cis-3-hexenol, linalool, maltol, methional, 3-methyl butanal, 2,3,5-trimethyl pyrazine, and vanillin). Sensitivity for 23 compounds improved by a factor of 3.4-15 under 1 mL MVM conditions. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Modern technologies such as the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such massive amounts of data that it is getting very difficult to analyze and understand these data, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution. Using a fraction of the computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because there are many factors involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size, called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
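
    The paper does not reproduce its statistical formula here, so the following sketch uses the classic Cochran sample-size expression with a finite-population correction as a stand-in, showing how a "sufficient sample size" could be computed from a confidence level, tolerated error and expected proportion:

```python
import math

def sufficient_sample_size(population_size, z=1.96, margin_of_error=0.05, p=0.5):
    """Classic Cochran sample-size formula with finite-population correction.

    This is an assumed stand-in for the unspecified formula in the paper: it
    derives a sample size from the confidence level (z), the tolerated error
    and the expected proportion p, then corrects for a finite population.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population_size)   # finite-population correction
    return math.ceil(n)

print(sufficient_sample_size(1_000_000))  # -> 385 records from a million-row dataset
```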

  20. On sampling social networking services

    OpenAIRE

    Wang, Baiyang

    2012-01-01

    This article aims at summarizing the existing methods for sampling social networking services and proposing a faster confidence interval for related sampling methods. It also includes comparisons of common network sampling techniques.

  1. A Study on the Reuse of Plastic Concrete Using Extended Set-Retarding Admixtures

    Science.gov (United States)

    Lobo, Colin; Guthrie, William F.; Kacker, Raghu

    1995-01-01

    The disposal of ready mixed concrete truck wash water and returned plastic concrete is a growing concern for the ready mixed concrete industry. Recently, extended set-retarding admixtures, or stabilizers, which slow or stop the hydration of portland cement, have been introduced to the market. Treating truck wash-water or returned plastic concrete with stabilizing admixtures delays its setting and hardening, thereby facilitating the incorporation of these typically wasted materials in subsequent concrete batches. In a statistically designed experiment, the properties of blended concrete containing stabilized plastic concrete were evaluated. The variables in the study included (1) concrete age when stabilized, (2) stabilizer dosage, (3) holding period of the treated (stabilized) concrete prior to blending with fresh ingredients, and (4) amount of treated concrete in the blended batch. The setting time, strength, and drying shrinkage of the blended concretes were evaluated. For the conditions tested, batching 5 % treated concrete with fresh material did not have a significant effect on the setting time, strength, or drying shrinkage of the resulting blended concrete. Batching 50 % treated concrete with fresh materials had a significant effect on the setting characteristics of the blended concrete, which in turn affected the water demand to maintain slump. The data suggest that for a known set of conditions, the stabilizer dosage can be optimized within a relatively narrow range to produce desired setting characteristics. The strength and drying shrinkage of the blended concretes were essentially a function of the water content at different sampling ages and the relationship followed the general trend of control concrete. PMID:29151762

  2. A Study on the Reuse of Plastic Concrete Using Extended Set-Retarding Admixtures.

    Science.gov (United States)

    Lobo, Colin; Guthrie, William F; Kacker, Raghu

    1995-01-01

    The disposal of ready mixed concrete truck wash water and returned plastic concrete is a growing concern for the ready mixed concrete industry. Recently, extended set-retarding admixtures, or stabilizers, which slow or stop the hydration of portland cement, have been introduced to the market. Treating truck wash-water or returned plastic concrete with stabilizing admixtures delays its setting and hardening, thereby facilitating the incorporation of these typically wasted materials in subsequent concrete batches. In a statistically designed experiment, the properties of blended concrete containing stabilized plastic concrete were evaluated. The variables in the study included (1) concrete age when stabilized, (2) stabilizer dosage, (3) holding period of the treated (stabilized) concrete prior to blending with fresh ingredients, and (4) amount of treated concrete in the blended batch. The setting time, strength, and drying shrinkage of the blended concretes were evaluated. For the conditions tested, batching 5 % treated concrete with fresh material did not have a significant effect on the setting time, strength, or drying shrinkage of the resulting blended concrete. Batching 50 % treated concrete with fresh materials had a significant effect on the setting characteristics of the blended concrete, which in turn affected the water demand to maintain slump. The data suggest that for a known set of conditions, the stabilizer dosage can be optimized within a relatively narrow range to produce desired setting characteristics. The strength and drying shrinkage of the blended concretes were essentially a function of the water content at different sampling ages and the relationship followed the general trend of control concrete.

  3. Setting time and thermal expansion of two endodontic cements.

    Science.gov (United States)

    Santos, Alailson D; Araújo, Eudes B; Yukimitu, Keizo; Barbosa, José C; Moraes, João C S

    2008-09-01

    The purpose of this study was to evaluate the setting time and the thermal expansion coefficient of 2 endodontic cements, MTA-Angelus and a novel cement called CER. The setting time was determined in accordance with ANSI/ADA specification no. 57. Three samples of 10 mm diameter and 2 mm thickness were prepared for each cement. The thermal expansion measurements were performed by the strain gauge technique. Four samples of each cement were prepared using silicone rings of 5 mm diameter and 2 mm thickness. The data were analyzed statistically using the Student t test. The setting time obtained for the MTA-Angelus and CER cements was 15 (SD 1) min and 7 (SD 1) min, respectively. The linear coefficient of thermal expansion was 8.86 (SD 0.28) microstrain/°C for MTA-Angelus and 11.76 (SD 1.20) microstrain/°C for CER. The statistical analysis showed a significant difference in the linear coefficient of thermal expansion between the 2 cements. The CER cement has a coefficient of expansion similar to dentin, which could contribute to a decrease in the degree of microleakage.

  4. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, comparison has been made for different sampling designs, using the HIES data of North West Frontier Province (NWFP for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using bootstrap and Jacknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first stage Primary Sampling Units (PSU. The sample PSU’s are selected with probability proportional to size. Secondary Sampling Units (SSU i.e., households are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs, which are: Stratified Simple Random Sampling. Stratified Systematic Sampling. Stratified Ranked Set Sampling. Stratified Two Phase Sampling. Ratio and Regression methods were applied with two study variables, which are: Income (y and Household sizes (x. Jacknife and Bootstrap are used for variance replication. Simple Random Sampling with sample size (462 to 561 gave moderate variances both by Jacknife and Bootstrap. By applying Systematic Sampling, we received moderate variance with sample size (467. In Jacknife with Systematic Sampling, we obtained variance of regression estimator greater than that of ratio estimator for a sample size (467 to 631. At a sample size (952 variance of ratio estimator gets greater than that of regression estimator. The most efficient design comes out to be Ranked set sampling compared with other designs. The Ranked set sampling with jackknife and bootstrap, gives minimum variance even with the smallest sample size (467. Two Phase sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances especially if used with a single study variable.

  5. Combinatorial set theory with a gentle introduction to forcing

    CERN Document Server

    Halbeisen, Lorenz J

    2017-01-01

    This book, now in a thoroughly revised second edition, provides a comprehensive and accessible introduction to modern set theory. Following an overview of basic notions in combinatorics and first-order logic, the author outlines the main topics of classical set theory in the second part, including Ramsey theory and the axiom of choice. The revised edition contains new permutation models and recent results in set theory without the axiom of choice. The third part explains the sophisticated technique of forcing in great detail, now including a separate chapter on Suslin’s problem. The technique is used to show that certain statements are neither provable nor disprovable from the axioms of set theory. In the final part, some topics of classical set theory are revisited and further developed in light of forcing, with new chapters on Sacks Forcing and Shelah’s astonishing construction of a model with finitely many Ramsey ultrafilters. Written for graduate students in axiomatic set theory, Combinatorial Set Th...

  6. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhehui [Los Alamos; Iaroshenko, O. [Los Alamos; Li, S. [Los Alamos; Liu, T. [Fermilab; Parab, N. [Argonne (main); Chen, W. W. [Purdue U.; Chu, P. [Los Alamos; Kenyon, G. [Los Alamos; Lipton, R. [Fermilab; Sun, K.-X. [Nevada U., Las Vegas

    2017-09-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
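
    A software analogue of the sparse readout is easy to sketch: draw a random subset of pixel addresses and transmit only those values together with the mask. The function below is illustrative only; it mimics the idea of random pixel access, not the actual in-pixel circuitry or the authors' reconstruction pipeline.

```python
import numpy as np

def random_pixel_sample(frame, fraction=0.1, rng=None):
    """Illustrative random on-board pixel sampling (ROPS)-style readout.

    A random subset of pixel addresses is read out instead of the full frame;
    the returned mask and sampled values stand in for what a sparse-readout
    camera would transmit for later compressed-sensing reconstruction.
    Parameter names and values are assumptions, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_pixels = frame.size
    n_keep = int(fraction * n_pixels)
    flat_idx = rng.choice(n_pixels, size=n_keep, replace=False)
    mask = np.zeros(n_pixels, dtype=bool)
    mask[flat_idx] = True
    return mask.reshape(frame.shape), frame.reshape(-1)[flat_idx]

frame = np.random.poisson(2.0, size=(256, 256))   # toy sparse X-ray frame
mask, samples = random_pixel_sample(frame, fraction=0.05)
print(mask.sum(), samples.shape)                  # 3276 of 65536 pixels read out
```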

  7. 100 Area Columbia River sediment sampling

    International Nuclear Information System (INIS)

    Weiss, S.G.

    1993-01-01

    Forty-four sediment samples were collected from 28 locations in the Hanford Reach of the Columbia River to assess the presence of metals and man-made radionuclides in the near shore and shoreline settings of the Hanford Site. Three locations were sampled upriver of the Hanford Site plutonium production reactors. Twenty-two locations were sampled near the reactors. Three locations were sampled downstream of the reactors near the Hanford Townsite. Sediment was collected from depths of 0 to 6 in. and between 12 to 24 in. below the surface. Samples containing concentrations of metals exceeding the 95 % upper threshold limit values (DOE-RL 1993b) are considered contaminated. Contamination by arsenic, chromium, copper, lead, and zinc was found. Man-made radionuclides occur in all samples except four collected opposite the Hanford Townsite. Man-made radionuclide concentrations were generally less than 1 pCi/g

  8. 100 Area Columbia River sediment sampling

    Energy Technology Data Exchange (ETDEWEB)

    Weiss, S.G. [Westinghouse Hanford Co., Richland, WA (United States)

    1993-09-08

    Forty-four sediment samples were collected from 28 locations in the Hanford Reach of the Columbia River to assess the presence of metals and man-made radionuclides in the near shore and shoreline settings of the Hanford Site. Three locations were sampled upriver of the Hanford Site plutonium production reactors. Twenty-two locations were sampled near the reactors. Three locations were sampled downstream of the reactors near the Hanford Townsite. Sediment was collected from depths of 0 to 6 in. and between 12 to 24 in. below the surface. Samples containing concentrations of metals exceeding the 95 % upper threshold limit values (DOE-RL 1993b) are considered contaminated. Contamination by arsenic, chromium, copper, lead, and zinc was found. Man-made radionuclides occur in all samples except four collected opposite the Hanford Townsite. Man-made radionuclide concentrations were generally less than 1 pCi/g.

  9. Ancestry prediction in Singapore population samples using the Illumina ForenSeq kit.

    Science.gov (United States)

    Ramani, Anantharaman; Wong, Yongxun; Tan, Si Zhen; Shue, Bing Hong; Syn, Christopher

    2017-11-01

    The ability to predict bio-geographic ancestry can be valuable to generate investigative leads towards solving crimes. Ancestry informative marker (AIM) sets include large numbers of SNPs to predict an ancestral population. Massively parallel sequencing has enabled forensic laboratories to genotype a large number of such markers in a single assay. Illumina's ForenSeq DNA Signature Kit includes the ancestry informative SNPs reported by Kidd et al. In this study, the ancestry prediction capabilities of the ForenSeq kit through sequencing on the MiSeq FGx were evaluated in 1030 unrelated Singapore population samples of Chinese, Malay and Indian origin. A total of 59 ancestry SNPs and phenotypic SNPs with AIM properties were selected. The bio-geographic ancestry of the 1030 samples, as predicted by Illumina's ForenSeq Universal Analysis Software (UAS), was determined. 712 of the genotyped samples were used as a training sample set for the generation of an ancestry prediction model using STRUCTURE and Snipper. The performance of the prediction model was tested by both methods with the remaining 318 samples. Ancestry prediction in UAS was able to correctly classify the Singapore Chinese as part of the East Asian cluster, while Indians clustered with Admixed Americans and Malays clustered in-between these two reference populations. Principal component analyses showed that the 59 SNPs were only able to account for 26% of the variation between the Singapore sub-populations. Their discriminatory potential was also found to be lower (GST = 0.085) than that reported in ALFRED (FST = 0.357). The Snipper algorithm was able to correctly predict bio-geographic ancestry in 91% of Chinese and Indian, and 88% of Malay individuals, while the success rates for the STRUCTURE algorithm were 94% in Chinese, 80% in Malay, and 91% in Indian individuals. Both these algorithms were able to provide admixture proportions when present. Ancestry prediction accuracy (in terms of likelihood ratio

  10. The Alaska Commercial Fisheries Water Quality Sampling Methods and Procedures Manual

    Energy Technology Data Exchange (ETDEWEB)

    Folley, G.; Pearson, L.; Crosby, C. [Alaska Dept. of Environmental Conservation, Soldotna, AK (United States); DeCola, E.; Robertson, T. [Nuka Research and Planning Group, Seldovia, AK (United States)

    2006-07-01

    A comprehensive water quality sampling program was conducted in response to the oil spill that occurred when the M/V Selendang Ayu ship ran aground near a major fishing port at Unalaska Island, Alaska in December 2004. In particular, the sampling program focused on the threat of spilled oil to the local commercial fisheries resources. Spill scientists were unable to confidently model the movement of oil away from the wreck because of limited oceanographic data. In order to determine which fish species were at risk of oil contamination, a real-time assessment of how and where the oil was moving was needed, because the wreck became a continual source of oil release for several weeks after the initial grounding. The newly developed methods and procedures used to detect whole oil during the sampling program will be presented in the Alaska Commercial Fisheries Water Quality Sampling Methods and Procedures Manual which is currently under development. The purpose of the manual is to provide instructions to spill managers while they try to determine where spilled oil has or has not been encountered. The manual will include a meaningful data set that can be analyzed in real time to assess oil movement and concentration. Sections on oil properties and processes will be included along with scientific water quality sampling methods for whole and dissolved phase oil to assess potential contamination of commercial fishery resources and gear in Alaska waters during an oil spill. The manual will present a general discussion of factors that should be considered when designing a sampling program after a spill. In order to implement Alaska's improved seafood safety measures, the spatial scope of spilled oil must be known. A water quality sampling program can provide state and federal fishery managers and food safety inspectors with important information as they identify at-risk fisheries. 11 refs., 7 figs.

  11. Neonatal Outcomes in the Birth Center Setting: A Systematic Review.

    Science.gov (United States)

    Phillippi, Julia C; Danhausen, Kathleen; Alliman, Jill; Phillippi, R David

    2018-01-01

    This systematic review investigates the effect of the birth center setting on neonatal mortality in economically developed countries to aid women and clinicians in decision making. We searched the Google Scholar, CINAHL, and PubMed databases using key terms birth/birthing center or out of hospital with perinatal/neonatal outcomes. Ancestry searches identified additional studies, and an alert was set for new publications. We included primary source studies in English, published after 1980, conducted in a developed country, and researching planned birth in centers with guidelines similar to American Association of Birth Centers standards. After initial review, we conducted a preliminary analysis, assessing which measures of neonatal health, morbidity, and mortality were included across studies. Neonatal mortality was selected as the sole summary measure as other measures were sporadically reported or inconsistently defined. Seventeen studies were included, representing at least 84,500 women admitted to a birth center in labor. There were substantial differences of study design, sampling techniques, and definitions of neonatal outcomes across studies, limiting conclusive statements of the effect of intrapartum care in a birth center. No reviewed study found a statistically increased rate of neonatal mortality in birth centers compared to low-risk women giving birth in hospitals, nor did data suggest a trend toward higher neonatal mortality in birth centers. As in all birth settings, nulliparous women, women aged greater than 35 years, and women with pregnancies of more than 42 weeks' gestation may have an increased risk of neonatal mortality. There are substantial flaws in the literature concerning the effect of birth center care on neonatal outcomes. More research is needed on subgroups at risk of poor outcomes in the birth center environment. To expedite research, consistent use of national and international definitions of perinatal and neonatal mortality within

  12. Robust H2 performance for sampled-data systems

    DEFF Research Database (Denmark)

    Rank, Mike Lind

    1997-01-01

    Robust H2 performance conditions under structured uncertainty, analogous to well known methods for H∞ performance, have recently emerged in both discrete and continuous-time. This paper considers the extension into uncertain sampled-data systems, taking into account inter-sample behavior. Convex conditions for robust H2 performance are derived for different uncertainty sets.

  13. Genetic Sample Inventory - NRDA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected in the North-Central Gulf of Mexico from 2010-2015. The collection includes samples from...

  14. Martian regolith geochemistry and sampling techniques

    Science.gov (United States)

    Clark, B. C.

    Laboratory study of samples of the intermediate and fine-grained regolith, including duricrust peds, is a fundamental prerequisite for understanding the types of physical and chemical weathering processes on Mars. The extraordinary importance of such samples is their relevance to understanding past changes in climate, availability (and possible physical state) of water, eolian forces, the thermal and chemical influences of volcanic and impact processes, and the inventory and fates of Martian volatiles. Fortunately, this regolith material appears to be ubiquitous over the Martian surface, and should be available at many different landing sites. Viking data has been interpreted to indicate a smectite-rich regolith material, implying extensive weathering involving aqueous activity and geochemical alteration. An all-igneous source of the Martian fines has also been proposed. The X-ray fluorescence measurement data set can now be fully explained in terms of a simple two-component model. The first component is silicate, having strong geochemical similarities with Shergottites, but not other SNC meteorites. The second component is salt. Variations in these components could produce silicate and salt-rich beds, the latter being of high potential importance for microenvironments in which liquid water (brines) could exist. It therefore would be desirable to scan the surface of the regolith for such prospects.

  15. Martian regolith geochemistry and sampling techniques

    Science.gov (United States)

    Clark, B. C.

    1988-01-01

    Laboratory study of samples of the intermediate and fine-grained regolith, including duricrust peds, is a fundamental prerequisite for understanding the types of physical and chemical weathering processes on Mars. The extraordinary importance of such samples is their relevance to understanding past changes in climate, availability (and possible physical state) of water, eolian forces, the thermal and chemical influences of volcanic and impact processes, and the inventory and fates of Martian volatiles. Fortunately, this regolith material appears to be ubiquitous over the Martian surface, and should be available at many different landing sites. Viking data has been interpreted to indicate a smectite-rich regolith material, implying extensive weathering involving aqueous activity and geochemical alteration. An all-igneous source of the Martian fines has also been proposed. The X-ray fluorescence measurement data set can now be fully explained in terms of a simple two-component model. The first component is silicate, having strong geochemical similarities with Shergottites, but not other SNC meteorites. The second component is salt. Variations in these components could produce silicate and salt-rich beds, the latter being of high potential importance for microenvironments in which liquid water (brines) could exist. It therefore would be desirable to scan the surface of the regolith for such prospects.

  16. Definitive Characterization of CA 19-9 in Resectable Pancreatic Cancer Using a Reference Set of Serum and Plasma Specimens.

    Science.gov (United States)

    Haab, Brian B; Huang, Ying; Balasenthil, Seetharaman; Partyka, Katie; Tang, Huiyuan; Anderson, Michelle; Allen, Peter; Sasson, Aaron; Zeh, Herbert; Kaul, Karen; Kletter, Doron; Ge, Shaokui; Bern, Marshall; Kwon, Richard; Blasutig, Ivan; Srivastava, Sudhir; Frazier, Marsha L; Sen, Subrata; Hollingsworth, Michael A; Rinaudo, Jo Ann; Killary, Ann M; Brand, Randall E

    2015-01-01

    The validation of candidate biomarkers often is hampered by the lack of a reliable means of assessing and comparing performance. We present here a reference set of serum and plasma samples to facilitate the validation of biomarkers for resectable pancreatic cancer. The reference set includes a large cohort of stage I-II pancreatic cancer patients, recruited from 5 different institutions, and relevant control groups. We characterized the performance of the current best serological biomarker for pancreatic cancer, CA 19-9, using plasma samples from the reference set to provide a benchmark for future biomarker studies and to further our knowledge of CA 19-9 in early-stage pancreatic cancer and the control groups. CA 19-9 distinguished pancreatic cancers from the healthy and chronic pancreatitis groups with an average sensitivity and specificity of 70-74%, similar to previous studies using all stages of pancreatic cancer. Chronic pancreatitis patients did not show CA 19-9 elevations, but patients with benign biliary obstruction had elevations nearly as high as the cancer patients. We gained additional information about the biomarker by comparing two distinct assays. The two CA 19-9 assays agreed well in overall performance but diverged in measurements of individual samples, potentially due to subtle differences in antibody specificity as revealed by glycan array analysis. Thus, the reference set promises to be a valuable resource for biomarker validation and comparison, and the CA 19-9 data presented here will be useful for benchmarking and for exploring relationships to CA 19-9.

  17. Definitive Characterization of CA 19-9 in Resectable Pancreatic Cancer Using a Reference Set of Serum and Plasma Specimens.

    Directory of Open Access Journals (Sweden)

    Brian B Haab

    Full Text Available The validation of candidate biomarkers often is hampered by the lack of a reliable means of assessing and comparing performance. We present here a reference set of serum and plasma samples to facilitate the validation of biomarkers for resectable pancreatic cancer. The reference set includes a large cohort of stage I-II pancreatic cancer patients, recruited from 5 different institutions, and relevant control groups. We characterized the performance of the current best serological biomarker for pancreatic cancer, CA 19-9, using plasma samples from the reference set to provide a benchmark for future biomarker studies and to further our knowledge of CA 19-9 in early-stage pancreatic cancer and the control groups. CA 19-9 distinguished pancreatic cancers from the healthy and chronic pancreatitis groups with an average sensitivity and specificity of 70-74%, similar to previous studies using all stages of pancreatic cancer. Chronic pancreatitis patients did not show CA 19-9 elevations, but patients with benign biliary obstruction had elevations nearly as high as the cancer patients. We gained additional information about the biomarker by comparing two distinct assays. The two CA 19-9 assays agreed well in overall performance but diverged in measurements of individual samples, potentially due to subtle differences in antibody specificity as revealed by glycan array analysis. Thus, the reference set promises to be a valuable resource for biomarker validation and comparison, and the CA 19-9 data presented here will be useful for benchmarking and for exploring relationships to CA 19-9.

  18. Foreign Language Optical Character Recognition, Phase II: Arabic and Persian Training and Test Data Sets

    National Research Council Canada - National Science Library

    Davidson, Robert

    1997-01-01

    Each data set is divided into a training set, which is made available to developers, and a carefully matched equal-sized set of closely analogous samples, which is reserved for testing of the developers' products...

  19. Treating panic symptoms within everyday clinical settings: the feasibility of a group cognitive behavioural intervention

    DEFF Research Database (Denmark)

    Austin, S.F.; Sumbundu, A.D.; Lykke, J.

    2008-01-01

    Panic disorder is a common and debilitating disorder that has a prevalence rate of 3-5% in the general population. Cognitive-behavioural interventions have been shown to be an efficacious treatment for panic, although a limited number of studies have examined the effectiveness of such interventions implemented in everyday clinical settings. The aim of the following pilot study was to examine the feasibility of a brief group cognitive-behavioural intervention carried out in a clinical setting. Salient issues in determining feasibility include: representativeness of patient group treated, amount of significant clinical change displayed and resources required to carry out the intervention. A small sample of GP-referred patients displaying panic symptoms completed a 2-week intensive cognitive-behavioural intervention. Results collected post-intervention revealed significant clinical reductions in panic...

  20. Addressing Underrepresentation in Sex Work Research: Reflections on Designing a Purposeful Sampling Strategy.

    Science.gov (United States)

    Bungay, Vicky; Oliffe, John; Atchison, Chris

    2016-06-01

    Men, transgender people, and those working in off-street locales have historically been underrepresented in sex work health research. Failure to include all sections of sex worker populations precludes comprehensive understandings about a range of population health issues, including potential variations in the manifestation of such issues within and between population subgroups, which in turn can impede the development of effective services and interventions. In this article, we describe our attempts to define, determine, and recruit a purposeful sample for a qualitative study examining the interrelationships between sex workers' health and the working conditions in the Vancouver off-street sex industry. Detailed is our application of ethnographic mapping approaches to generate information about population diversity and work settings within distinct geographical boundaries. Bearing in mind the challenges and the overwhelming discrimination sex workers experience, we scope recommendations for safe and effective purposeful sampling inclusive of sex workers' heterogeneity. © The Author(s) 2015.

  1. Urine specimen validity test for drug abuse testing in workplace and court settings.

    Science.gov (United States)

    Lin, Shin-Yu; Lee, Hei-Hwa; Lee, Jong-Feng; Chen, Bai-Hsiun

    2018-01-01

    In recent decades, urine drug testing in the workplace has become common in many countries in the world. There have been several studies concerning the use of the urine specimen validity test (SVT) for drug abuse testing administered in the workplace. However, very little data exists concerning the urine SVT on drug abuse tests from court specimens, including dilute, substituted, adulterated, and invalid tests. We investigated 21,696 submitted urine drug test samples for SVT from workplace and court settings in southern Taiwan over 5 years. All immunoassay screen-positive urine specimen drug tests were confirmed by gas chromatography/mass spectrometry. We found that the mean 5-year prevalence of tampering (dilute, substituted, or invalid tests) in urine specimens from the workplace and court settings were 1.09% and 3.81%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the workplace were 89.2%, 6.8%, and 4.1%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the court were 94.8%, 1.4%, and 3.8%, respectively. No adulterated cases were found among the workplace or court samples. The most common drug identified from the workplace specimens was amphetamine, followed by opiates. The most common drug identified from the court specimens was ketamine, followed by amphetamine. We suggest that all urine specimens taken for drug testing from both the workplace and court settings need to be tested for validity. Copyright © 2017. Published by Elsevier B.V.

  2. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
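
    The core of the learning dynamics is moment matching: parameters move along the gap between empirical and model expectations. The toy sketch below fits a small pairwise maximum entropy model by exact enumeration instead of Gibbs sampling, and omits the paper's "rectified" parameter-space metric; it is meant only to make the gradient structure concrete.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(data, lr=0.1, n_steps=2000):
    """Log-likelihood ascent for a small pairwise maximum entropy model.

    The gradient is the difference between empirical and model moments. Model
    moments are computed exactly by enumerating all binary states, which is
    only feasible for a handful of variables; the paper's setting would use
    Gibbs sampling plus its rectified dynamics instead of this plain ascent.
    """
    n_samples, n = data.shape
    emp_h = data.mean(axis=0)                    # empirical means <s_i>
    emp_j = (data.T @ data) / n_samples          # empirical pairwise moments <s_i s_j>
    np.fill_diagonal(emp_j, 0.0)                 # diagonal is absorbed by the fields
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_steps):
        energies = states @ h + np.einsum("si,ij,sj->s", states, J, states)
        p = np.exp(energies - energies.max())
        p /= p.sum()                             # model distribution over all states
        mod_h = p @ states
        mod_j = states.T @ (states * p[:, None])
        np.fill_diagonal(mod_j, 0.0)
        h += lr * (emp_h - mod_h)                # moment-matching updates
        J += lr * (emp_j - mod_j)
    return h, J

rng = np.random.default_rng(0)
toy_spikes = (rng.random((500, 4)) < 0.3).astype(float)   # fake binary recordings
fields, couplings = fit_pairwise_maxent(toy_spikes)
```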

  3. Development of Data Acquisition Set-up for Steady-state Experiments

    Science.gov (United States)

    Srivastava, Amit K.; Gupta, Arnab D.; Sunil, S.; Khan, Ziauddin

    2017-04-01

    For short-duration experiments, digitized data are generally transferred for processing and storage after the experiment, whereas in a steady-state experiment the data are acquired, processed, displayed and stored continuously in a pipelined manner. This requires acquiring data through special techniques for storage and viewing data on the go to display the current trends of various physical parameters. A small data acquisition set-up was developed for continuously acquiring signals of various physical parameters at different sampling rates for long-duration experiments. This includes the hardware set-up for signal digitization, a Field Programmable Gate Array (FPGA) based timing system for clock synchronization and event/trigger distribution, time slicing of data streams for storage of data chunks to enable viewing of data during acquisition, and channel profile display through down-sampling. In order to store a data stream of indefinite or long duration, the stream is divided into data slices/chunks of user-defined time duration. Data chunks avoid the problem of server data being inaccessible until the channel data file is closed at the end of the long-duration experiment. A graphical user interface has been developed in the LabVIEW application development environment for configuring the data acquisition hardware and storing data chunks on the local machine as well as on a remote data server through Python for further data access. The data plotting and analysis utilities have been developed with Python software, which provides tools for further data processing. This paper describes the development and implementation of data acquisition for a steady-state experiment.
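
    The time-slicing idea can be illustrated with a few lines of Python: rather than keeping one file open for the whole run, the acquisition loop closes a chunk file at a fixed interval so that data become readable (and shippable to a server) while the experiment continues. All function and file names below are illustrative, not the authors' API.

```python
import json
import time
from pathlib import Path

def acquire_in_chunks(read_block, out_dir, chunk_seconds=60, total_seconds=300):
    """Illustrative time-sliced storage for a long acquisition run.

    `read_block` stands in for whatever driver call returns the latest
    digitized samples (assumed to block until data are available). Each chunk
    file is closed as soon as its interval elapses, so it can be viewed or
    pushed to a remote server immediately.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    start = time.time()
    chunk_index = 0
    while time.time() - start < total_seconds:
        chunk = []
        chunk_start = time.time()
        while time.time() - chunk_start < chunk_seconds:
            chunk.append(read_block())
        path = out_dir / f"channel01_chunk{chunk_index:05d}.json"
        path.write_text(json.dumps(chunk))        # chunk is immediately readable
        chunk_index += 1

# Example with a dummy driver call standing in for the real digitizer read:
# acquire_in_chunks(lambda: [0.0] * 1024, "run_0001", chunk_seconds=1, total_seconds=3)
```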

  4. Bounded Memory, Inertia, Sampling and Weighting Model for Market Entry Games

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lee

    2011-03-01

    Full Text Available This paper describes the “Bounded Memory, Inertia, Sampling and Weighting” (BI-SAW) model, which won the Market Entry Prediction Competition in 2010 (http://sites.google.com/site/gpredcomp/). The BI-SAW model refines the I-SAW Model (Erev et al. [1]) by adding the assumption of limited memory span. In particular, we assume that when players draw a small sample to weight against the average payoff of all past experience, they can only recall 6 trials of past experience. On the other hand, we keep all other key features of the I-SAW model: (1) Reliance on a small sample of past experiences, (2) Strong inertia and recency effects, and (3) Surprise triggers change. We estimate this model using the first set of experimental results run by the competition organizers, and use it to predict results of a second set of similar experiments later run by the organizers. We find significant improvement in out-of-sample predictability (against the I-SAW model) in terms of smaller mean normalized MSD, and this result is robust to resampling the predicted game set and reversing the role of the sets of experimental results. Our model’s performance is the best among all the participants.
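
    The bounded-memory sampling step can be sketched as follows: the agent recalls only the last few outcomes, draws a small sample from that window, and blends the sample mean with the grand mean of all experience. The memory span of 6 follows the abstract; the remaining parameter values in this sketch are placeholders, not the estimated ones.

```python
import random

def bisaw_value(payoff_history, memory_span=6, sample_size=3, weight=0.6):
    """Hedged sketch of the sampling-and-weighting step of an I-SAW-style model.

    The agent recalls only the last `memory_span` payoffs of an alternative,
    draws a small sample (with replacement) from that window, and blends the
    sample mean with the grand mean of all past experience. `sample_size` and
    `weight` are illustrative, not the BI-SAW paper's estimated parameters.
    """
    if not payoff_history:
        return 0.0
    window = payoff_history[-memory_span:]
    sample = random.choices(window, k=min(sample_size, len(window)))
    sample_mean = sum(sample) / len(sample)
    grand_mean = sum(payoff_history) / len(payoff_history)
    return weight * sample_mean + (1 - weight) * grand_mean

history = [4, -1, 0, 2, 5, -2, 3, 1]     # payoffs from past market-entry decisions
print(bisaw_value(history))
```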

  5. The Effect of Financing Policy, Dividends and Firm Profitability on the Investment Opportunity Set (IOS)

    Directory of Open Access Journals (Sweden)

    Akhmad Adi Saputro

    2015-12-01

    Full Text Available The objective of this research is to examine the association of financing policy, dividend policy and firm profitability with the investment opportunity set. The sample of this study consists of 28 firms listed on the Jakarta Stock Exchange over the period 1999-2004. Common factor analysis is conducted to construct composite measures, which are then ranked to classify the growth of the sample firms. Logit regression analysis is used to examine the impact of financing policy, dividend policy, and profitability on the investment opportunity set of high- and low-growth firms. The results indicate that the impact of financing policy, proxied by book value of debt to equity and market value of debt to equity, on the investment opportunity set is significantly negative. The impact of dividend policy, proxied by dividend yield, on the investment opportunity set is significantly negative, but the dividend payout ratio is not significant. The impact of profitability, proxied by return on assets, on the investment opportunity set is significantly positive.

  6. Soft sets combined with interval valued intuitionistic fuzzy sets of type-2 and rough sets

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2015-03-01

    Full Text Available Fuzzy set theory, rough set theory and soft set theory are all mathematical tools dealing with uncertainties. The concept of type-2 fuzzy sets was introduced by Zadeh in 1975, and was extended to interval valued intuitionistic fuzzy sets of type-2 by the authors. This paper is devoted to the discussion of the combinations of interval valued intuitionistic fuzzy sets of type-2, soft sets and rough sets. Three different types of new hybrid models, namely interval valued intuitionistic fuzzy soft sets of type-2, soft rough interval valued intuitionistic fuzzy sets of type-2 and soft interval valued intuitionistic fuzzy rough sets of type-2, are proposed and their properties are derived.

  7. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. The items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling site in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shape and size by four kinds of sampling methods. Gray correlation analysis was adopted to make the comprehensive evaluation by comparing it with the standard method. The experimental results showed that the convenience of uniform sampling method was 1 (100%), odds ratio of occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and weighted incidence degree exceeded the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, which can be put into use in drugs analysis.

  8. State-dependent importance sampling for a Jackson tandem network

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The focus is on estimating the probability of overflow in the downstream queue of a Jacksonian two-node tandem queue – it is known that in this setting ‘traditional’ state-independent importance-sampling distributions

  9. State-dependent importance sampling for a Jackson tandem network

    NARCIS (Netherlands)

    D.I. Miretskiy; W.R.W. Scheinhardt (Werner); M.R.H. Mandjes (Michel)

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The focus is on estimating the probability of overflow in the downstream queue of a Jacksonian two-node tandem queue – it is known that in this setting ‘traditional’ state-independent importance-sampling

  10. Advanced pressure tube sampling tools

    International Nuclear Information System (INIS)

    Wittich, K.C.; King, J.M.

    2002-01-01

    Deuterium concentration is an important parameter that must be assessed to evaluate the Fitness for service of CANDU pressure tubes. In-reactor pressure tube sampling allows accurate deuterium concentration assessment to be made without the expenses associated with fuel channel removal. This technology, which AECL has developed over the past fifteen years, has become the standard method for deuterium concentration assessment. AECL is developing a multi-head tool that would reduce in-reactor handling overhead by allowing one tool to sequentially sample at all four axial pressure tube locations before removal from the reactor. Four sets of independent cutting heads, like those on the existing sampling tools, facilitate this incorporating proven technology demonstrated in over 1400 in-reactor samples taken to date. The multi-head tool is delivered by AECL's Advanced Delivery Machine or other similar delivery machines. Further, AECL has developed an automated sample handling system that receives and processes the tool once out of the reactor. This system retrieves samples from the tool, dries, weighs and places them in labelled vials which are then directed into shielded shipping flasks. The multi-head wet sampling tool and the automated sample handling system are based on proven technology and offer continued savings and dose reduction to utilities in a competitive electricity market. (author)

  11. How to handle speciose clades? Mass taxon-sampling as a strategy towards illuminating the natural history of Campanula (Campanuloideae).

    Directory of Open Access Journals (Sweden)

    Guilhem Mansion

    Full Text Available BACKGROUND: Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. METHODOLOGY/PRINCIPAL FINDINGS: Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates were then compared to those obtained from smaller data sets resulting from "classification-guided" (D088) and "phylogeny-guided" (D101) sampling. Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. CONCLUSIONS/SIGNIFICANCE: A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. As these will be instrumental for developing more specific evolutionary hypotheses and guiding future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed

  12. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
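
    As a concrete point of reference, the estimators most often cited in this literature recover the mean from the minimum, median and maximum, and the standard deviation from the range; the paper proposes refinements of exactly this kind of formula, so the expressions below should be read as a commonly used stand-in rather than the authors' final estimators.

```python
import math

def estimate_mean_sd(minimum, median, maximum, n):
    """Commonly used estimators of mean and SD from the median and range.

    These are the classic Hozo-style approximations frequently cited in
    meta-analysis; they are given here as an illustration of the problem the
    paper addresses, not as the paper's own improved formulas.
    """
    mean = (minimum + 2.0 * median + maximum) / 4.0
    if n > 70:
        sd = (maximum - minimum) / 6.0          # large-sample range rule
    else:
        sd = math.sqrt(((minimum - 2 * median + maximum) ** 2 / 4
                        + (maximum - minimum) ** 2) / 12)
    return mean, sd

# A trial reporting min=10, median=24, max=58 for n=40 patients.
print(estimate_mean_sd(minimum=10, median=24, maximum=58, n=40))
```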

  13. Algorithms for Learning Preferences for Sets of Objects

    Science.gov (United States)

    Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric

    2010-01-01

    A method is being developed that provides for an artificial-intelligence system to learn a user's preferences for sets of objects and to thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among large sets of images acquired by instruments aboard spacecraft, of image subsets considered to be scientifically valuable enough to justify use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples of sets of objects considered in the development of the method include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often-difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: this method supports a concept of setbased preferences, which include not only preferences for individual items but also preferences regarding types and degrees of diversity of items in a set. Consideration of diversity in this method involves recognition that members of a set may interact with each other in the sense that when considered together, they may be regarded as being complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth. The system applies statistical
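
    A toy scorer in the spirit of the description above combines a depth term (how desirable each chosen feature value is) with a diversity term (how close the observed spread of values is to the desired spread), weighted per feature. Everything in the sketch is an illustrative stand-in; DD-PREF's actual quality functions and diversity measure are richer than this.

```python
def score_set(items, quality_fns, weights, target_diversity, alpha=0.5):
    """Toy scorer for set-based preferences over items with named features.

    `quality_fns` map each feature value to a desirability in [0, 1] (depth),
    `target_diversity` gives the desired spread of values per feature, and
    alpha trades depth against diversity. All names and formulas here are
    hypothetical simplifications of the DD-PREF idea.
    """
    features = list(quality_fns)
    total_weight = sum(weights.values())
    # Depth: weighted average desirability of the chosen feature values.
    depth = sum(
        weights[f] * sum(quality_fns[f](item[f]) for item in items) / len(items)
        for f in features
    ) / total_weight
    # Diversity: closeness of the observed spread of values to the target.
    diversity = sum(
        weights[f] * (1.0 - abs(len({item[f] for item in items}) / len(items)
                                - target_diversity[f]))
        for f in features
    ) / total_weight
    return alpha * depth + (1.0 - alpha) * diversity

songs = [{"genre": "jazz", "tempo": 120}, {"genre": "rock", "tempo": 128}]
score = score_set(
    songs,
    quality_fns={"genre": lambda g: 1.0 if g == "jazz" else 0.5,
                 "tempo": lambda t: 1.0 if 100 <= t <= 140 else 0.2},
    weights={"genre": 1.0, "tempo": 1.0},
    target_diversity={"genre": 1.0, "tempo": 0.5},
)
print(round(score, 3))
```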

  14. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and the unification of classification, regression, and so forth. Though bearing such merits, existing ELM algorithms cannot efficiently handle the issue of missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms achieve more accurate and stable learning than other state-of-the-art ones, especially in the case of higher missing ratios.
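
    For readers unfamiliar with ELM, the minimal sketch below shows the baseline algorithm the abstract builds on (a random hidden layer followed by closed-form least-squares output weights). The sample-based handling of missing values proposed in the paper is not reproduced, and the data and hidden-layer size are synthetic, illustrative choices.

```python
# Minimal sketch of a standard extreme learning machine (ELM) regressor with
# NumPy; only the baseline algorithm is shown, not the paper's missing-data
# extension.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Random hidden layer + least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights
    b = rng.normal(size=n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                   # closed-form output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

if __name__ == "__main__":
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
    model = elm_fit(X, y)
    print("train RMSE:", np.sqrt(np.mean((elm_predict(X, model) - y) ** 2)))
```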

  15. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
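
    The card analogy above can be made concrete in a few lines of code; the deck size, sample size, and random seed below are arbitrary and only meant to contrast the two schemes.

```python
# Sketch of the two sampling schemes described above, using the card analogy:
# independent random sampling draws each card without regard to the others,
# while systematic uniform random sampling takes a random start within the
# first interval and then every k-th card thereafter.
import random

def independent_random_sample(deck, n):
    return random.sample(deck, n)

def systematic_random_sample(deck, n):
    k = len(deck) // n                    # sampling interval
    start = random.randrange(k)           # random start within the first interval
    return deck[start::k][:n]

if __name__ == "__main__":
    random.seed(1)
    sections = list(range(1, 53))         # e.g. 52 serial sections (or cards)
    print(independent_random_sample(sections, 4))
    print(systematic_random_sample(sections, 4))
```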

  16. Fit for purpose? Introducing a rational priority setting approach into a community care setting.

    Science.gov (United States)

    Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale

    2016-06-20

    Purpose - Program budgeting and marginal analysis (PBMA) is a priority setting approach that assists decision makers with allocating resources. Previous PBMA work establishes its efficacy and indicates that contextual factors complicate priority setting, which can hamper PBMA effectiveness. The purpose of this paper is to gain qualitative insight into PBMA effectiveness. Design/methodology/approach - A Canadian case study of PBMA implementation. Data consist of decision-maker interviews pre (n=20), post year-1 (n=12) and post year-2 (n=9) of PBMA to examine perceptions of baseline priority setting practice vis-à-vis desired practice, and perceptions of PBMA usability and acceptability. Findings - Fit emerged as a key theme in determining PBMA effectiveness. Fit herein refers to being of suitable quality and form to meet the intended purposes and needs of the end-users, and includes desirability, acceptability, and usability dimensions. Results confirm decision-maker desire for rational approaches like PBMA. However, most participants indicated that the timing of the exercise and the form in which PBMA was applied were not well-suited for this case study. Participant acceptance of and buy-in to PBMA changed during the study: a leadership change, limited organizational commitment, and concerns with organizational capacity were key barriers to PBMA adoption and thereby effectiveness. Practical implications - These findings suggest that a potential way-forward includes adding a contextual readiness/capacity assessment stage to PBMA, recognizing organizational complexity, and considering incremental adoption of PBMA's approach. Originality/value - These insights help us to better understand and work with priority setting conditions to advance evidence-informed decision making.

  17. Geostatistical analysis of groundwater chemistry in Japan. Evaluation of the base case groundwater data set

    Energy Technology Data Exchange (ETDEWEB)

    Salter, P.F.; Apted, M.J. [Monitor Scientific LLC, Denver, CO (United States); Sasamoto, Hiroshi; Yui, Mikazu

    1999-05-01

    Groundwater chemistry is one of the important aspects of the geological environment for the performance assessment of a high-level radioactive waste disposal system. This report describes the results of a geostatistical analysis of groundwater chemistry in Japan. Over 15,000 separate analyses of deep Japanese groundwaters have been collected for the purpose of evaluating the range of geochemical conditions for geological radioactive waste repositories in Japan. These data are significant to issues such as radioelement solubility limits, sorption, corrosion of overpack, behavior of compacted clay buffers, and many other factors involved in safety assessment. It is important, therefore, that a small but representative set of groundwater types be identified so that defensible models and data for generic repository performance assessment can be established. Principal component analysis (PCA) is used to categorize representative deep groundwater types from this extensive data set. PCA is a multivariate statistical analysis technique, similar to factor analysis or eigenvector analysis, designed to provide the best possible resolution of the variability within multivariate data sets. PCA allows the graphical inspection of the most important similarities (clustering) and differences among samples, based on simultaneous consideration of all variables in the dataset, in a low-dimensionality plot. It also allows the analyst to determine the reasons behind any pattern that is observed. In this study, PCA has been aided by hierarchical cluster analysis (HCA), in which statistical indices of similarity among multiple samples are used to distinguish distinct clusters of samples. HCA allows the natural, a priori grouping of data into clusters showing similar attributes and is graphically represented in a dendrogram. Pirouette is the multivariate statistical software package used to conduct the PCA and HCA for the Japanese groundwater dataset. An audit of the initial 15,000 sample dataset on the basis of
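
    A generic sketch of the PCA-plus-HCA workflow described above is shown below, using scikit-learn and SciPy in place of the Pirouette package; the two synthetic "water types" and their chemistry values are placeholders, not Japanese groundwater data.

```python
# Generic PCA + hierarchical cluster analysis (HCA) sketch on synthetic
# groundwater-like chemistry; not the Pirouette workflow or the real data set.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# Hypothetical analyses: columns could be pH, Na, Cl, HCO3, SO4 (synthetic).
samples = np.vstack([
    rng.normal([7.0, 10, 15, 200, 20], 2.0, size=(30, 5)),     # "fresh" water type
    rng.normal([8.5, 300, 450, 100, 80], 10.0, size=(30, 5)),  # "saline" water type
])

scaled = StandardScaler().fit_transform(samples)
scores = PCA(n_components=2).fit_transform(scaled)             # low-dimensional view
clusters = fcluster(linkage(scaled, method="ward"), t=2, criterion="maxclust")

print("PC1 range:", scores[:, 0].min().round(2), "to", scores[:, 0].max().round(2))
print("cluster sizes:", np.bincount(clusters)[1:])
```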

  18. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2, without significantly degrading the dose quality
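
    The sketch below illustrates the general idea of the sampling step described in the Methods: boundary voxels are always kept, and interior voxels are clustered on their influence-matrix rows, with a few voxels drawn from each cluster. The 10% rate follows the abstract's optimal value, but the array shapes, the k-means clustering choice, and the per-cluster draw are assumptions made for illustration.

```python
# Rough sketch of clustering-based voxel sampling: keep all boundary voxels and
# a preset fraction of interior voxels drawn from clusters of similar
# influence-matrix rows. Data shapes are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def sample_voxels(influence, is_boundary, rate=0.10, n_clusters=20):
    interior_idx = np.where(~is_boundary)[0]
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        influence[interior_idx]
    )
    keep = list(np.where(is_boundary)[0])          # always keep boundary voxels
    for c in range(n_clusters):
        members = interior_idx[labels == c]
        n_keep = max(1, int(rate * len(members)))
        keep.extend(rng.choice(members, size=n_keep, replace=False))
    return np.array(sorted(keep))

if __name__ == "__main__":
    influence = rng.random((2000, 60))             # voxels x beamlets (synthetic)
    is_boundary = rng.random(2000) < 0.15          # flag ~15% as boundary voxels
    kept = sample_voxels(influence, is_boundary)
    print(f"kept {kept.size} of {influence.shape[0]} voxels")
```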

  19. Automated Sample Preparation for Radiogenic and Non-Traditional Metal Isotopes: Removing an Analytical Barrier for High Sample Throughput

    Science.gov (United States)

    Field, M. Paul; Romaniello, Stephen; Gordon, Gwyneth W.; Anbar, Ariel D.; Herrmann, Achim; Martinez-Boti, Miguel A.; Anagnostou, Eleni; Foster, Gavin L.

    2014-05-01

    MC-ICP-MS has dramatically improved the analytical throughput for high-precision radiogenic and non-traditional isotope ratio measurements, compared to TIMS. The generation of large data sets, however, remains hampered by the tedious manual drip chromatography required for sample purification. A new, automated chromatography system reduces the laboratory bottleneck and expands the utility of high-precision isotope analyses in applications where large data sets are required: geochemistry, forensic anthropology, nuclear forensics, medical research and food authentication. We have developed protocols to automate ion exchange purification for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U) using the new prepFAST-MC™ (ESI, Nebraska, Omaha). The system is not only inert (all-fluoropolymer flow paths), but is also very flexible and can easily accommodate different resins, samples, and reagent types. When programmed, precise and accurate user-defined volumes and flow rates are implemented to automatically load samples, wash the column, condition the column and elute fractions. Unattended, the automated, low-pressure ion exchange chromatography system can process up to 60 samples overnight. Excellent reproducibility, reliability, and recovery, with low blanks and carry-over, have been demonstrated for samples in a variety of different matrices, giving accurate and precise isotopic ratios within analytical error for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U). This illustrates the potential of the new prepFAST-MC™ (ESI, Nebraska, Omaha) as a powerful tool in radiogenic and non-traditional isotope research.

  20. Experimental validation of control strategies for a microgrid test facility including a storage system and renewable generation sets

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Silvestro, Federico

    2012-01-01

    The paper is aimed at describing and validating some control strategies in the SYSLAB experimental test facility characterized by the presence of a low voltage network with a 15 kW-190 kWh Vanadium Redox Flow battery system and a 11 kW wind turbine. The generation set is connected to the local...... network and is fully controllable by the SCADA system. The control strategies, implemented on a local pc interfaced to the SCADA, are realized in Matlab-Simulink. The main purpose is to control the charge/discharge action of the storage system in order to present at the point of common coupling...... the desired power or energy profiles....
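
    As a very rough illustration of the kind of charge/discharge control described above (not the SYSLAB Matlab-Simulink implementation), the sketch below dispatches a battery so that the power exchanged at the point of common coupling tracks a setpoint despite fluctuating wind power. The storage and turbine ratings follow the abstract (15 kW / 190 kWh battery, 11 kW turbine); the wind profile, time step, setpoint, and control law are invented.

```python
# Simplified storage-dispatch sketch: charge or discharge the battery so the
# power at the point of common coupling (PCC) follows a setpoint. Ratings from
# the abstract; everything else is an illustrative assumption.
P_BATT_MAX_KW = 15.0
E_BATT_MAX_KWH = 190.0
DT_H = 0.25                      # 15-minute control steps (assumption)

def control_step(p_wind_kw, p_setpoint_kw, soc_kwh):
    """Return battery power (+discharge / -charge) and updated state of charge."""
    p_batt = p_setpoint_kw - p_wind_kw                       # power the battery must cover
    p_batt = max(-P_BATT_MAX_KW, min(P_BATT_MAX_KW, p_batt))
    # Respect the energy limits of the storage system.
    max_discharge = soc_kwh / DT_H
    max_charge = (E_BATT_MAX_KWH - soc_kwh) / DT_H
    p_batt = max(-max_charge, min(max_discharge, p_batt))
    return p_batt, soc_kwh - p_batt * DT_H

if __name__ == "__main__":
    soc = 95.0                                                # start half charged
    wind_profile = [4.0, 9.5, 11.0, 2.5, 0.0, 6.0]            # kW, illustrative (11 kW turbine)
    for p_wind in wind_profile:
        p_batt, soc = control_step(p_wind, p_setpoint_kw=8.0, soc_kwh=soc)
        print(f"wind={p_wind:5.1f} kW  battery={p_batt:6.1f} kW  "
              f"PCC={p_wind + p_batt:5.1f} kW  SOC={soc:6.1f} kWh")
```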

  1. Effect of sampling schedule on pharmacokinetic parameter estimates of promethazine in astronauts

    Science.gov (United States)

    Boyd, Jason L.; Wang, Zuwei; Putcha, Lakshmi

    2005-08-01

    Six astronauts on the Shuttle Transport System (STS) participated in an investigation on the pharmacokinetics of promethazine (PMZ), a medication used for the treatment of space motion sickness (SMS) during flight. Each crewmember completed the protocol once during flight and repeated it thirty days after returning to Earth. Saliva samples were collected at scheduled times for 72 h after PMZ administration; more frequent samples were collected on the ground than during flight owing to schedule constraints in flight. PMZ concentrations in saliva were determined by a liquid chromatographic/mass spectrometric (LC-MS) assay, and pharmacokinetic parameters (PKPs) were calculated using the actual flight and ground-based data sets and using a ground sampling schedule time-matched to that during flight. Volume of distribution (Vc) and clearance (Cls) decreased during flight compared to the time-matched ground data set; however, Cls and Vc estimates were higher for all subjects when partial ground data sets were used for analysis. Area under the curve (AUC) normalized with administered dose was similar in flight and partial ground data; however, AUC was significantly lower using time-matched sampling compared with the full data set on ground. Half-life (t1/2) was longest during flight, shorter with the matched sampling schedule on ground, and shortest when the complete ground data set was used. Maximum concentration (Cmax) and time to Cmax (tmax), parameters of drug absorption, showed a similar trend: Cmax was lowest and tmax longest during flight, intermediate with time-matched ground data, and Cmax highest and tmax shortest with the full ground data.
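
    For readers unfamiliar with the pharmacokinetic parameters named above, the sketch below shows one conventional way to obtain AUC (trapezoidal rule), Cmax, tmax, and a terminal half-life from a concentration-time profile. The saliva concentrations are invented, and dropping every other time point only loosely mimics the sparser in-flight schedule.

```python
# Worked sketch of computing basic PK parameters from an (invented)
# concentration-time profile; not the study's data or analysis software.
import math

def pk_parameters(times_h, conc):
    auc = sum((t2 - t1) * (c1 + c2) / 2 for (t1, c1), (t2, c2)
              in zip(zip(times_h, conc), zip(times_h[1:], conc[1:])))   # trapezoidal AUC
    cmax = max(conc)
    tmax = times_h[conc.index(cmax)]
    # Terminal half-life from a log-linear fit to the last three points.
    xs, ys = times_h[-3:], [math.log(c) for c in conc[-3:]]
    n = len(xs)
    slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
            (n * sum(x * x for x in xs) - sum(xs) ** 2)
    return auc, cmax, tmax, math.log(2) / -slope

if __name__ == "__main__":
    t = [0.5, 1, 2, 4, 8, 24, 48, 72]                  # h after dosing (illustrative)
    c = [5.0, 12.0, 18.0, 15.0, 9.0, 3.0, 1.1, 0.4]    # ng/mL (illustrative)
    full = pk_parameters(t, c)
    sparse = pk_parameters(t[::2], c[::2])             # fewer samples, as in flight
    print("full   AUC=%.1f Cmax=%.1f tmax=%.1f t1/2=%.1f h" % full)
    print("sparse AUC=%.1f Cmax=%.1f tmax=%.1f t1/2=%.1f h" % sparse)
```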

  2. [Therapeutic effects of venlafaxine extended release for patients with depressive and anxiety disorders in the German outpatient setting - results of 2 observational studies including 8500 patients].

    Science.gov (United States)

    Anghelescu, I-G; Dierkes, W; Volz, H-P; Loeschmann, P-A; Schmitt, A B

    2009-11-01

    The therapeutic effects of venlafaxine extended release have been investigated in two prospective observational studies including 8506 patients in the outpatient setting of office-based general practitioners and specialists. The efficacy has been documented by the Clinical Global Impression (CGI) scale and by the Hamilton depression (HAMD-21) scale. The tolerability has been assessed by the documentation of adverse events. About two-thirds of the patients were treated for depression and about one-third mainly for an anxiety disorder. The patients of specialists received higher dosages and were more severely affected. The response rate on the CGI scale was 87.4% for the patients of general practitioners and 74.2% for the patients of specialists. The results of the HAMD-21 scale, which was used by specialists, showed a response rate of 71.8% and a remission rate of 56.3%. These positive effects could be demonstrated even for the more severely and chronically affected patients. The incidence of adverse events was low in both studies and comparable to the tolerability profile of randomized studies. Importantly, the good tolerability profile was similar even for patients with concomitant cardiovascular disease. In conclusion, these results confirm the efficacy and good tolerability of venlafaxine extended release in the outpatient setting in Germany. Georg Thieme Verlag KG Stuttgart, New York.

  3. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, each with an associated cost, to transform the time series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
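
    The sketch below conveys the transformation-cost idea in a strongly simplified form: an irregularly sampled series is cut into fixed time windows, and the "cost" between consecutive windows is a weighted sum of added/deleted points and amplitude changes. The published TACTS operations and cost definition are more elaborate; the weights, matching rule, and data here are illustrative assumptions.

```python
# Heavily simplified transformation-cost sketch; not the published TACTS
# cost definition, only the general idea of turning an irregular series into a
# regularly sampled series of segment-to-segment costs.
def segment_cost(seg_a, seg_b, cost_add_delete=1.0, cost_amplitude=0.5):
    amp_cost = cost_amplitude * sum(abs(a - b) for a, b in zip(seg_a, seg_b))
    count_cost = cost_add_delete * abs(len(seg_a) - len(seg_b))
    return amp_cost + count_cost

def transformation_cost_series(values, times, window):
    """Cut an irregular series into fixed-length time windows and return the
    regularly sampled series of costs between consecutive windows."""
    t_end = max(times)
    segments, t0 = [], min(times)
    while t0 < t_end:
        segments.append([v for v, t in zip(values, times) if t0 <= t < t0 + window])
        t0 += window
    return [segment_cost(a, b) for a, b in zip(segments, segments[1:])]

if __name__ == "__main__":
    times = [0.1, 0.4, 1.2, 1.9, 2.3, 3.7, 4.1, 4.8, 5.5]   # irregular sampling
    values = [1.0, 1.2, 0.8, 1.5, 1.4, 0.2, 0.3, 1.1, 0.9]
    print(transformation_cost_series(values, times, window=2.0))
```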

  4. Transformation-cost time-series method for analyzing irregularly sampled data

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  5. Sexual Harassment: A Common Sample for the University and the Workplace.

    Science.gov (United States)

    Beauregard, Terri Kinion

    The sexual harassment experienced by a sample of women (N=154) in a university setting was compared with the sexual harassment experienced by them in a workplace setting. Results appeared to support the following generalizations: (1) there is greater gender harassment, unwanted sexual attention, and sexual coercion in the workplace setting than in…

  6. Noise in NC-AFM measurements with significant tip–sample interaction

    Directory of Open Access Journals (Sweden)

    Jannis Lübbe

    2016-12-01

    Full Text Available The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip–sample interactions. The total noise power spectral density DΔf(fm) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip–sample interaction, by the coupling between the amplitude and tip–sample distance control loops of the NC-AFM system as well as by the characteristics of the phase-locked loop (PLL) detector used for frequency demodulation. Here, we measure DΔf(fm) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip–sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops.

  7. Large Sets in Boolean and Non-Boolean Groups and Topology

    Directory of Open Access Journals (Sweden)

    Ol’ga V. Sipacheva

    2017-10-01

    Full Text Available Various notions of large sets in groups, including the classical notions of thick, syndetic, and piecewise syndetic sets and the new notion of vast sets in groups, are studied with emphasis on the interplay between such sets in Boolean groups. Natural topologies closely related to vast sets are considered; as a byproduct, interesting relations between vast sets and ultrafilters are revealed.
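
    For reference, the classical largeness notions named above are usually defined as follows for a group G; these are the standard textbook definitions, not quotations from the paper, and the paper's new notion of "vast" sets is not reproduced here.

```latex
\begin{itemize}
  \item $S \subseteq G$ is \emph{syndetic} if there is a finite $F \subseteq G$
        with $G = F^{-1}S$, i.e.\ finitely many translates of $S$ cover $G$.
  \item $S$ is \emph{thick} if for every finite $F \subseteq G$ there is
        $g \in G$ with $Fg \subseteq S$.
  \item $S$ is \emph{piecewise syndetic} if there is a finite $F \subseteq G$
        such that $F^{-1}S$ is thick.
\end{itemize}
```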

  8. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    Science.gov (United States)

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy
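
    The sketch below illustrates the "random boosting" idea in miniature: each boosting iteration fits a weak learner to the current residuals using only a random subset of markers and adds its shrunken prediction to the committee. The least-squares weak learner, subset size, shrinkage, and synthetic genotypes are illustrative assumptions, not the settings used in the study.

```python
# Toy random-boosting sketch: sequential residual fitting with shrinkage, each
# round restricted to a random subset of markers. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

def random_boosting(X, y, n_iter=200, n_markers=50, shrinkage=0.1):
    pred = np.full_like(y, y.mean())
    models = []
    for _ in range(n_iter):
        cols = rng.choice(X.shape[1], size=n_markers, replace=False)  # random marker subset
        resid = y - pred
        beta, *_ = np.linalg.lstsq(X[:, cols], resid, rcond=None)     # weak least-squares learner
        pred = pred + shrinkage * X[:, cols] @ beta
        models.append((cols, beta))
    return y.mean(), models

def predict(X, intercept, models, shrinkage=0.1):
    # shrinkage must match the value used during training
    pred = np.full(X.shape[0], intercept)
    for cols, beta in models:
        pred += shrinkage * X[:, cols] @ beta
    return pred

if __name__ == "__main__":
    X = rng.choice([0.0, 1.0, 2.0], size=(500, 2000))           # synthetic SNP genotypes
    y = X[:, :10] @ rng.normal(size=10) + rng.normal(size=500)  # phenotype with 10 causal SNPs
    intercept, models = random_boosting(X, y)
    corr = np.corrcoef(predict(X, intercept, models), y)[0, 1]
    print(f"training correlation: {corr:.2f}")
```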

  9. Development of a polymerase chain reaction applicable to rapid and sensitive detection of Clonorchis sinensis eggs in human stool samples

    Science.gov (United States)

    Cho, Pyo Yun; Na, Byoung-Kuk; Mi Choi, Kyung; Kim, Jin Su; Cho, Shin-Hyeong; Lee, Won-Ja; Lim, Sung-Bin; Cha, Seok Ho; Park, Yun-Kyu; Pak, Jhang Ho; Lee, Hyeong-Woo; Hong, Sung-Jong; Kim, Tong-Soo

    2013-01-01

    Microscopic examination of eggs of parasitic helminths in stool samples has been the most widely used classical diagnostic method for infections, but tiny and low numbers of eggs in stool samples often hamper diagnosis of helminthic infections with classical microscopic examination. Moreover, it is also difficult to differentiate parasite eggs by the classical method, if they have similar morphological characteristics. In this study, we developed a rapid and sensitive polymerase chain reaction (PCR)-based molecular diagnostic method for detection of Clonorchis sinensis eggs in stool samples. Nine primers were designed based on the long-terminal repeat (LTR) of C. sinensis retrotransposon1 (CsRn1) gene, and seven PCR primer sets were paired. Polymerase chain reaction with each primer pair produced specific amplicons for C. sinensis, but not for other trematodes including Metagonimus yokogawai and Paragonimus westermani. Particularly, three primer sets were able to detect 10 C. sinensis eggs and were applicable to amplify specific amplicons from DNA samples purified from stool of C. sinensis-infected patients. This PCR method could be useful for diagnosis of C. sinensis infections in human stool samples with a high level of specificity and sensitivity. PMID:23916334

  10. Multiple sample, radioactive particle counting apparatus

    International Nuclear Information System (INIS)

    Reddy, R.R.V.; Kelso, D.M.

    1978-01-01

    An apparatus is described for determining the respective radioactive particle sample count being emitted from radioactive particle containing samples. It includes means for modulating the information on the radioactive particles being emitted from the samples, coded detecting means for sequentially detecting different respective coded combinations of the radioactive particles emitted from more than one but less than all of the samples, and processing the modulated information to derive the sample count for each sample. It includes a single light emitting crystal next to a number of samples, an encoder belt sequentially movable between the crystal and the samples. The encoder belt has a coded array of apertures to provide corresponding modulated light pulses from the crystal, and a photomultiplier tube to convert the modulated light pulses to decodable electrical signals for deriving the respective sample count

  11. Rapid prediction of multi-dimensional NMR data sets

    International Nuclear Information System (INIS)

    Gradmann, Sabine; Ader, Christian; Heinrich, Ines; Nand, Deepak; Dittmann, Marc; Cukkemane, Abhishek; Dijk, Marc van; Bonvin, Alexandre M. J. J.; Engelhard, Martin; Baldus, Marc

    2012-01-01

    We present a computational environment for Fast Analysis of multidimensional NMR DAta Sets (FANDAS) that allows assembling multidimensional data sets from a variety of input parameters and facilitates comparing and modifying such “in silico” data sets during the various stages of the NMR data analysis. The input parameters can vary from (partial) NMR assignments directly obtained from experiments to values retrieved from in silico prediction programs. The resulting predicted data sets enable a rapid evaluation of sample labeling in light of spectral resolution and structural content, using standard NMR software such as Sparky. In addition, direct comparison to experimental data sets can be used to validate NMR assignments, distinguish different molecular components, refine structural models or other parameters derived from NMR data. The method is demonstrated in the context of solid-state NMR data obtained for the cyclic nucleotide binding domain of a bacterial cyclic nucleotide-gated channel and on membrane-embedded sensory rhodopsin II. FANDAS is freely available as web portal under WeNMR (http://www.wenmr.eu/services/FANDAS).

  12. Rapid prediction of multi-dimensional NMR data sets

    Energy Technology Data Exchange (ETDEWEB)

    Gradmann, Sabine; Ader, Christian [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands); Heinrich, Ines [Max Planck Institute for Molecular Physiology, Department of Physical Biochemistry (Germany); Nand, Deepak [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands); Dittmann, Marc [Max Planck Institute for Molecular Physiology, Department of Physical Biochemistry (Germany); Cukkemane, Abhishek; Dijk, Marc van; Bonvin, Alexandre M. J. J. [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands); Engelhard, Martin [Max Planck Institute for Molecular Physiology, Department of Physical Biochemistry (Germany); Baldus, Marc, E-mail: m.baldus@uu.nl [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands)

    2012-12-15

    We present a computational environment for Fast Analysis of multidimensional NMR DAta Sets (FANDAS) that allows assembling multidimensional data sets from a variety of input parameters and facilitates comparing and modifying such 'in silico' data sets during the various stages of the NMR data analysis. The input parameters can vary from (partial) NMR assignments directly obtained from experiments to values retrieved from in silico prediction programs. The resulting predicted data sets enable a rapid evaluation of sample labeling in light of spectral resolution and structural content, using standard NMR software such as Sparky. In addition, direct comparison to experimental data sets can be used to validate NMR assignments, distinguish different molecular components, refine structural models or other parameters derived from NMR data. The method is demonstrated in the context of solid-state NMR data obtained for the cyclic nucleotide binding domain of a bacterial cyclic nucleotide-gated channel and on membrane-embedded sensory rhodopsin II. FANDAS is freely available as web portal under WeNMR (http://www.wenmr.eu/services/FANDAS).

  13. The youth sports club as a health-promoting setting: An integrative review of research

    Science.gov (United States)

    Quennerstedt, Mikael; Eriksson, Charli

    2013-01-01

    Aims: The aims of this review is to compile and identify key issues in international research about youth sports clubs as health-promoting settings, and then discuss the results of the review in terms of a framework for the youth sports club as a health-promoting setting. Methods: The framework guiding this review of research is the health-promoting settings approach introduced by the World Health Organization (WHO). The method used is the integrated review. Inclusion criteria were, first, that the studies concerned sports clubs for young people, not professional clubs; second, that it be a question of voluntary participation in some sort of ongoing organized athletics outside of the regular school curricula; third, that the studies consider issues about youth sports clubs in terms of health-promoting settings as described by WHO. The final sample for the review consists of 44 publications. Results: The review shows that youth sports clubs have plentiful opportunities to be or become health-promoting settings; however this is not something that happens automatically. To do so, the club needs to include an emphasis on certain important elements in its strategies and daily practices. The youth sports club needs to be a supportive and healthy environment with activities designed for and adapted to the specific age-group or stage of development of the youth. Conclusions: To become a health-promoting setting, a youth sports club needs to take a comprehensive approach to its activities, aims, and purposes. PMID:23349167

  14. Setting Goals for Achievement in Physical Education Settings

    Science.gov (United States)

    Baghurst, Timothy; Tapps, Tyler; Kensinger, Weston

    2015-01-01

    Goal setting has been shown to improve student performance, motivation, and task completion in academic settings. Although goal setting is utilized by many education professionals to help students set realistic and proper goals, physical educators may not be using goal setting effectively. Without incorporating all three types of goals and…

  15. Tube closure device, especially for sample irradiation

    International Nuclear Information System (INIS)

    Klahn, F.C.; Nolan, J.H.; Wills, C.

    1979-01-01

    Device for closing the outlet of a bore and temporarily locking components in this bore. Specifically, it concerns a device for closing a tube containing a set of samples for monitoring irradiation in a nuclear reactor. [fr]

  16. Representativeness of two sampling procedures for an internet intervention targeting cancer-related distress: a comparison of convenience and registry samples

    OpenAIRE

    Owen, Jason E.; Bantum, Erin O'Carroll; Criswell, Kevin; Bazzo, Julie; Gorlick, Amanda; Stanton, Annette L.

    2013-01-01

    Internet interventions often rely on convenience sampling, yet convenience samples may differ in important ways from systematic recruitment approaches. The purpose of this study was to evaluate potential demographic, medical, and psychosocial differences between Internet-recruited and registry-recruited cancer survivors in an Internet-based intervention. Participants were recruited from a cancer registry (n = 80) and via broad Internet outreach efforts (n = 160). Participants completed a set ...

  17. Algorithms for detecting and analysing autocatalytic sets.

    Science.gov (United States)

    Hordijk, Wim; Smith, Joshua I; Steel, Mike

    2015-01-01

    Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, and the application of a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. (i) The modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which, in some cases, also leads to a significant improvement in overall running time, (ii) our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average, (iii) "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. (i) Improvements in the overall running time when searching for autocatalytic sets can potentially be obtained by using a modified version of the algorithm, (ii) the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of
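
    As background, the basic detection algorithm that the extensions above build on is commonly described as an iterative pruning of reactions whose reactants or catalysts cannot be generated from the food set; a compact sketch on a toy chemistry is given below. The inhibition handling and minimal-set sampling discussed in the record are not reproduced, and the reaction encoding is an assumption made for illustration.

```python
# Compact sketch of the basic pruning algorithm for finding a maximal
# reflexively autocatalytic and food-generated (RAF) set on a toy chemistry.
# Reactions are (reactants, products, catalysts) tuples.
def closure(food, reactions):
    """All molecules producible from the food set with the given reactions."""
    mols, changed = set(food), True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= mols and not set(products) <= mols:
                mols |= set(products)
                changed = True
    return mols

def max_raf(food, reactions):
    """Repeatedly drop reactions whose reactants or catalysts are unsupported."""
    current = list(reactions)
    while True:
        mols = closure(food, current)
        kept = [r for r in current
                if set(r[0]) <= mols and any(c in mols for c in r[2])]
        if len(kept) == len(current):
            return kept
        current = kept

if __name__ == "__main__":
    food = {"a", "b"}
    reactions = [
        (("a", "b"), ("ab",), ("ab",)),   # a + b -> ab, catalysed by its own product
        (("a", "a"), ("aa",), ("zz",)),   # a + a -> aa, catalyst never producible
    ]
    print(max_raf(food, reactions))       # only the first reaction survives
```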

  18. Solvent hold tank sample results for MCU-16-1317-1318-1319. September 2016 monthly sample

    Energy Technology Data Exchange (ETDEWEB)

    Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-01-01

    Savannah River National Laboratory (SRNL) received one set of Solvent Hold Tank (SHT) samples (MCU-16-1317-1318-1319), pulled on 09/12/2016 for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-16-1317-1318-1319 indicated the Isopar™L concentration is above its nominal level (102%). The extractant (MaxCalix) and the modifier (CS-7SB) are 5% and 10 % below their nominal concentrations. The suppressor (TiDG) is 77% below its nominal concentration. A summary of the concentration of the relevant solvent components is shown below. This analysis confirms the Isopar™ addition to the solvent in August. This analysis also indicates the solvent may require the addition of TiDG, and possibly of modifier to restore them to nominal levels.

  19. Gene set of nuclear-encoded mitochondrial regulators is enriched for common inherited variation in obesity.

    Directory of Open Access Journals (Sweden)

    Nadja Knoll

    Full Text Available There are hints of an altered mitochondrial function in obesity. Nuclear-encoded genes are relevant for mitochondrial function (3 gene sets of known relevant pathways: (1) 16 nuclear regulators of mitochondrial genes, (2) 91 genes for oxidative phosphorylation, and (3) 966 nuclear-encoded mitochondrial genes). Gene set enrichment analysis (GSEA) showed no association with type 2 diabetes mellitus in these gene sets. Here we performed a GSEA for the same gene sets for obesity. Genome-wide association study (GWAS) data from a case-control approach on 453 extremely obese children and adolescents and 435 lean adult controls were used for GSEA. For independent confirmation, we analyzed 705 obesity GWAS trios (extremely obese child and both biological parents) and a population-based GWAS sample (KORA F4, n = 1,743). A meta-analysis was performed on all three samples. In each sample, the distribution of significance levels between the respective gene set and those of all genes was compared using the leading-edge-fraction-comparison test (cut-offs between the 50th and 95th percentile of the set of all gene-wise corrected p-values) as implemented in the MAGENTA software. In the case-control sample, significant enrichment of associations with obesity was observed above the 50th percentile for the set of the 16 nuclear regulators of mitochondrial genes (p_GSEA,50 = 0.0103). This finding was not confirmed in the trios (p_GSEA,50 = 0.5991), but it was in KORA (p_GSEA,50 = 0.0398). The meta-analysis again indicated a trend for enrichment (p_MAGENTA,50 = 0.1052, p_MAGENTA,75 = 0.0251). The GSEA revealed that weak association signals for obesity might be enriched in the gene set of 16 nuclear regulators of mitochondrial genes.
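
    The sketch below gives a rough, generic version of a percentile-based enrichment test in the spirit of the leading-edge-fraction comparison described above: count how many gene-set p-values fall in the most significant tail of all gene-wise p-values and compare with randomly drawn gene sets of the same size. The tail fraction, permutation null, and simulated p-values are illustrative assumptions, not the MAGENTA implementation or the study's data.

```python
# Generic percentile-cutoff enrichment test with an empirical permutation
# null; p-values are simulated and the 25% tail is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)

def enrichment_p(all_p, set_idx, tail_fraction=0.25, n_perm=10_000):
    cutoff = np.quantile(all_p, tail_fraction)           # boundary of the most significant tail
    observed = np.sum(all_p[set_idx] < cutoff)
    null = np.array([
        np.sum(all_p[rng.choice(all_p.size, size=len(set_idx), replace=False)] < cutoff)
        for _ in range(n_perm)
    ])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

if __name__ == "__main__":
    all_p = rng.uniform(size=5000)                        # simulated gene-wise corrected p-values
    gene_set = rng.choice(5000, size=16, replace=False)   # a 16-gene set, as in the record above
    all_p[gene_set[:8]] *= 0.1                            # make half of the set mildly enriched
    print(f"enrichment p-value: {enrichment_p(all_p, gene_set):.4f}")
```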

  20. Figures of merit and constraints from testing general relativity using the latest cosmological data sets including refined COSMOS 3D weak lensing

    International Nuclear Information System (INIS)

    Dossett, Jason N.; Moldenhauer, Jacob; Ishak, Mustapha

    2011-01-01

    We use cosmological constraints from current data sets and a figure of merit approach in order to probe any deviations from general relativity at cosmological scales. The figure of merit approach is used to study and compare the constraining power of various combinations of data sets on the modified gravity (MG) parameters. We use the recently refined HST-COSMOS weak-lensing tomography data, the ISW-galaxy cross correlations from 2MASS and SDSS luminous red galaxy surveys, the matter power spectrum from SDSS-DR7 (MPK), the WMAP7 temperature and polarization spectra, the baryon acoustic oscillations from Two-Degree Field and SDSS-DR7, and the Union2 compilation of type Ia supernovae, in addition to other bounds from Hubble parameter measurements and big bang nucleosynthesis. We use three parametrizations of MG parameters that enter the perturbed field equations. In order to allow for variations of the parameters with the redshift and scale, the first two parametrizations use recently suggested functional forms while the third is based on binning methods. Using the first parametrization, we find that the CMB+ISW+WL combination provides the strongest constraints on the MG parameters followed by CMB+WL or CMB+MPK+ISW. Using the second parametrization or the binning methods, we find that the combination CMB+MPK+ISW consistently provides some of the strongest constraints. This shows that the constraints are parametrization dependent. We find that adding up current data sets does not improve consistently the uncertainties on MG parameters due to tensions between the best-fit MG parameters preferred by different data sets. Furthermore, some functional forms imposed by the parametrizations can lead to an exacerbation of these tensions. Next, unlike some studies that used the CFHTLS lensing data, we do not find any deviation from general relativity using the refined HST-COSMOS data, confirming previous claims in those studies that their result may have been due to some

  1. International spinal cord injury cardiovascular function basic data set.

    Science.gov (United States)

    Krassioukov, A; Alexander, M S; Karlsson, A-K; Donovan, W; Mathias, C J; Biering-Sørensen, F

    2010-08-01

    To create an International Spinal Cord Injury (SCI) Cardiovascular Function Basic Data Set within the framework of the International SCI Data Sets. An international working group. The draft of the data set was developed by a working group comprising members appointed by the American Spinal Injury Association (ASIA), the International Spinal Cord Society (ISCoS) and a representative of the executive committee of the International SCI Standards and Data Sets. The final version of the data set was developed after review by members of the executive committee of the International SCI Standards and Data Sets, the ISCoS scientific committee, ASIA board, relevant and interested international organizations and societies, individual persons with specific interest and the ISCoS Council. To make the data set uniform, each variable and each response category within each variable have been specifically defined in a way that is designed to promote the collection and reporting of comparable minimal data. The variables included in the International SCI Cardiovascular Function Basic Data Set include the following items: date of data collection, cardiovascular history before the spinal cord lesion, events related to cardiovascular function after the spinal cord lesion, cardiovascular function after the spinal cord lesion, medications affecting cardiovascular function on the day of examination; and objective measures of cardiovascular functions, including time of examination, position of examination, pulse and blood pressure. The complete instructions for data collection and the data sheet itself are freely available on the websites of both ISCoS (http://www.iscos.org.uk) and ASIA (http://www.asia-spinalinjury.org).

  2. Rational Variability in Children's Causal Inferences: The Sampling Hypothesis

    Science.gov (United States)

    Denison, Stephanie; Bonawitz, Elizabeth; Gopnik, Alison; Griffiths, Thomas L.

    2013-01-01

    We present a proposal--"The Sampling Hypothesis"--suggesting that the variability in young children's responses may be part of a rational strategy for inductive inference. In particular, we argue that young learners may be randomly sampling from the set of possible hypotheses that explain the observed data, producing different hypotheses with…

  3. Sampling Operations on Big Data

    Science.gov (United States)

    2015-11-29

    ...categories. These include edge sampling methods, where edges are selected by predetermined criteria; snowball sampling methods, where algorithms start... ...process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and...

  4. Work setting, community attachment, and satisfaction among rural and remote nurses.

    Science.gov (United States)

    Kulig, Judith C; Stewart, Norma; Penz, Kelly; Forbes, Dorothy; Morgan, Debra; Emerson, Paige

    2009-01-01

    To describe community satisfaction and attachment among rural and remote registered nurses (RNs) in Canada. Cross-sectional survey of rural and remote RNs in Canada as part of a multimethod study. The sample consisted of a stratified random sample of RNs living in rural areas in the west of the country and the total population of RNs who worked in three northern regional areas and those in outpost settings. A subset of 3,331 rural and remote RNs who mainly worked in acute care, long-term care, community health, home care, and primary care comprised the sample. The home community satisfaction scale measured community satisfaction, whereas single-item questions measured work community satisfaction and overall job satisfaction. Community variables were compared across practice areas using analysis of variance, whereas a thematic analysis was conducted of the open-ended questions. Home care and community health RNs were significantly more satisfied with their work community than RNs from other practice areas. RNs who grew up in rural communities were more satisfied with their current home community. Four themes emerged from the open-ended responses that describe community satisfaction and community attachment. Recruitment and retention strategies need to include mechanisms that focus on community satisfaction, which will enhance job satisfaction.

  5. Enlarge the training set based on inter-class relationship for face recognition from one image per person.

    Science.gov (United States)

    Li, Qin; Wang, Hua Jing; You, Jane; Li, Zhao Ming; Li, Jin Xue

    2013-01-01

    In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set contains only one image per person. This situation is referred to as the one sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot deal with the one sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well in the one sample problem. After that, this paper presents four reasons that make the one sample problem itself difficult: the small sample size problem; the lack of representative samples; the underestimated intra-class variation; and the overestimated inter-class variation. Based on the analysis, this paper proposes to enlarge the training set based on the inter-class relationship. This paper also extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method.

  6. Multisource data set integration and characterization of uranium mineralization for the Montrose Quadrangle, Colorado

    International Nuclear Information System (INIS)

    Bolivar, S.L.; Balog, S.H.; Campbell, K.; Fugelso, L.E.; Weaver, T.A.; Wecksung, G.W.

    1981-04-01

    Several data-classification schemes were developed by the Los Alamos National Laboratory to detect potential uranium mineralization in the Montrose 1° x 2° quadrangle, Colorado. A first step was to develop and refine the techniques necessary to digitize, integrate, and register various large geological, geochemical, and geophysical data sets, including Landsat 2 imagery, for the Montrose quadrangle, Colorado, using a grid resolution of 1 km. All data sets for the Montrose quadrangle were registered to the Universal Transverse Mercator projection. The data sets include hydrogeochemical and stream sediment analyses for 23 elements, uranium-to-thorium ratios, airborne geophysical survey data, the locations of 90 uranium occurrences, a geologic map and Landsat 2 (bands 4 through 7) imagery. Geochemical samples were collected from 3965 locations in the 19,200 km² quadrangle; aerial data were collected on flight lines flown with 3 to 5 km spacings. These data sets were smoothed by universal kriging and interpolated to a 179 x 119 rectangular grid. A mylar transparency of the geologic map was prepared and digitized. Locations for the known uranium occurrences were also digitized. The Landsat 2 imagery was digitally manipulated and rubber-sheet transformed to quadrangle boundaries and bands 4 through 7 were resampled to both a 1-km and 100-m resolution. All possible combinations of three, for all data sets, were examined for general geologic correlations by utilizing a color microfilm output. Subsets of data were further examined for selected test areas. Two classification schemes for uranium mineralization, based on selected test areas in both the Cochetopa and Marshall Pass uranium districts, are presented. Areas favorable for uranium mineralization, based on these schemes, were identified and are discussed

  7. Set-up for pulse radiolysis of aggressive substances

    International Nuclear Information System (INIS)

    Kozlowska-Milner, E.; Broszkiewicz, R.; Stanikowski, J.

    1975-01-01

    A set-up for the pulse radiolysis of aggressive substances with a relatively low consumption of the liquid, tested for anhydrous HNO3, has been described. The samples have been irradiated with single pulses of 10 MeV electrons at the linear accelerator type LAE 13-9. The absorption spectra of the irradiated samples (within a range of 300-800 nm) were provided by a xenon lamp. The variations of the voltage from the photomultiplier, coupled with an oscilloscope, were registered with the aid of a Polaroid camera. (T.G.)

  8. Using Google Glass in Nonsurgical Medical Settings: Systematic Review.

    Science.gov (United States)

    Dougherty, Bryn; Badawy, Sherif M

    2017-10-19

    ...in student training (n=9), disaster relief (n=4), diagnostics (n=2), nursing (n=1), autopsy and postmortem examination (n=1), wound care (n=1), behavioral sciences (n=1), and various medical subspecialties, including cardiology (n=3), radiology (n=3), neurology (n=1), anesthesiology (n=1), pulmonology (n=1), toxicology (n=1), and dermatology (n=1). Most of the studies were conducted in the United States (40/51, 78%), did not report specific age information for participants (38/51, 75%), had sample size ... Google Glass, and outcomes assessment. The use of Google Glass in nonsurgical medical settings varied. More promising results regarding the feasibility, usability, and acceptability of using Google Glass were seen in patient-centered studies and student training settings. Further research evaluating the efficacy and cost-effectiveness of Google Glass as an intervention to improve important clinical outcomes is warranted. ©Bryn Dougherty, Sherif M Badawy. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 19.10.2017.

  9. Particulate organic nitrates: Sampling and night/day variation

    DEFF Research Database (Denmark)

    Nielsen, T.; Platz, J.; Granby, K.

    1998-01-01

    Atmospheric day and night concentrations of particulate organic nitrates (PON) and several other air pollutants were measured in the summer 1995 over an open-land area in Denmark. The sampling of PON was evaluated comparing 24 h samples with two sets of 12 h samples. These results indicate...... that the observed low contribution of PON to NO, is real and not the result of an extensive loss during the sampling. Empirical relationships between the vapour pressure and chemical formula of organic compounds were established in order to evaluate the gas/particle distribution of organic nitrates. A positive...

  10. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J. [Astronomy Department, University of California, Berkeley, CA 94720-7450 (United States); Brink, Henrik [Dark Cosmology Centre, Juliane Maries Vej 30, 2100 Copenhagen O (Denmark); Long, James P.; Rice, John, E-mail: jwrichar@stat.berkeley.edu [Statistics Department, University of California, Berkeley, CA 94720-7450 (United States)

    2012-01-10

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL, where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up, is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
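
    A minimal sketch of the pool-based active-learning loop advocated above is given below: a classifier trained on a small, biased training set repeatedly queries the unlabelled objects it is least certain about and retrains once they are "labelled". The two-dimensional synthetic data, random forest, and query budget are illustrative assumptions, not the variable-star features or classifier used in the study.

```python
# Minimal sketch of uncertainty-based active learning under sample selection
# bias; data, model, and query budget are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Same labelling rule in both domains, but shifted feature distributions."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)
    return X, y

X_train, y_train = make_data(100, shift=0.0)    # "bright, nearby" training survey
X_pool, y_pool = make_data(2000, shift=1.5)     # "deeper" survey we want to classify

for it in range(5):
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(f"iteration {it}: accuracy on pool = {clf.score(X_pool, y_pool):.3f}")
    proba = clf.predict_proba(X_pool)[:, 1]
    query = np.argsort(np.abs(proba - 0.5))[:20]             # most uncertain objects
    # In practice these would go to a human for labelling; here we reveal y_pool.
    X_train = np.vstack([X_train, X_pool[query]])
    y_train = np.concatenate([y_train, y_pool[query]])
    keep = np.ones(len(X_pool), dtype=bool)
    keep[query] = False
    X_pool, y_pool = X_pool[keep], y_pool[keep]
```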

  11. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    International Nuclear Information System (INIS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J.; Brink, Henrik; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  12. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    Science.gov (United States)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  13. Astrobiology Sample Analysis Program (ASAP) for Advanced Life Detection Instrumentation Development and Calibration

    Science.gov (United States)

    Glavin, Daniel; Brinkerhoff, Will; Dworkin, Jason; Eigenbrode, Jennifer; Franz, Heather; Mahaffy, Paul; Stern, Jen; Blake, David; Sandford, Scott; Fries, Marc

    2008-01-01

    Scientific ground-truth measurements for near-term Mars missions, such as the 2009 Mars Science Laboratory (MSL) mission, are essential for validating current in situ flight instrumentation and for the development of advanced instrumentation technologies for life-detection missions over the next decade. The NASA Astrobiology Institute (NAI) has recently funded a consortium of researchers called the Astrobiology Sample Analysis Program (ASAP) to analyze an identical set of homogenized martian analog materials in a "round-robin" style using both state-of-the-art laboratory techniques as well as in-situ flight instrumentation including the SAM gas chromatograph mass spectrometer and CHEMIN X-ray diffraction/fluorescence instruments on MSL and the Urey and MOMA organic analyzer instruments under development for the 2013 ExoMars missions. The analog samples studied included an Atacama Desert soil from Chile, the Murchison meteorite, a gypsum sample from the 2007 AMASE Mars analog site, jarosite from Panoche Valley, CA, a hydrothermal sample from Rio Tinto, Spain, and a "blind" sample collected during the 2007 MSL slow-motion field test in New Mexico. Each sample was distributed to the team for analysis to: (1) determine the nature and inventory of organic compounds, (2) measure the bulk carbon and nitrogen isotopic composition, (3) investigate elemental abundances, mineralogy and matrix, and (4) search for biological activity. The experimental results obtained from the ASAP Mars analog research consortium will be used to build a framework for understanding the biogeochemistry of martian analogs, help calibrate current spaceflight instrumentation, and enhance the scientific return from upcoming missions.

  14. Population variability of phthalate metabolites and bisphenol A concentrations in spot urine samples versus 24- or 48-h collections.

    Science.gov (United States)

    Christensen, Krista L Yorita; Lorber, Matthew; Koch, Holger M; Kolossa-Gehring, Marike; Morgan, Marsha K

    2012-11-01

    Human exposure to phthalates and bisphenol A (BPA) can be assessed through urinary biomonitoring, but methods to infer daily intakes assume that spot sample concentrations are comparable to daily average concentrations. We evaluate this assumption using human biomonitoring data from Germany and the United States (US). The German data comprised three regional studies with spot samples and one with full-day samples analyzed for phthalate metabolites. The US data included: a study on DEHP metabolites and BPA involving eight persons supplying all urine voids (from which 24-h samples were constructed) for seven consecutive days; NHANES spot sample data on DEHP metabolites and BPA; and a regional study of children with 48-h samples analyzed for BPA. In the German data, measures of central tendency differed, but spot and 24-h samples showed generally comparable variance including 95th percentiles and maxima equidistant from central tendency measures. In contrast, the US adult data from the eight-person study showed similar central tendencies for phthalate metabolites and BPA, but generally greater variability for the spot samples, including higher 95th percentiles and maxima. When comparing children's BPA concentrations in NHANES spot and 48-h samples, distributions showed similar central tendency and variability. Overall, spot urinary concentrations of DEHP metabolites and BPA have variability roughly comparable with corresponding 24-h average concentrations obtained from a comparable population, suggesting that spot samples can be used to characterize population distributions of intakes. However, the analysis also suggests that caution should be exercised when interpreting the high end of spot sample data sets.

  15. The challenges of detecting and responding to a Lassa fever outbreak in an Ebola-affected setting.

    Science.gov (United States)

    Hamblion, E L; Raftery, P; Wendland, A; Dweh, E; Williams, G S; George, R N C; Soro, L; Katawera, V; Clement, P; Gasasira, A N; Musa, E; Nagbe, T K

    2018-01-01

    Lassa fever (LF), a priority emerging pathogen likely to cause major epidemics, is endemic in much of West Africa and is difficult to distinguish from other viral hemorrhagic fevers, including Ebola virus disease (EVD). Definitive diagnosis requires laboratory confirmation, which is not widely available in affected settings. The public health action to contain an LF outbreak and the challenges encountered in an EVD-affected setting are reported herein. In February 2016, a rapid response team was deployed in Liberia in response to a cluster of LF cases. Active case finding, case investigation, contact tracing, laboratory testing, environmental investigation, risk communication, and community awareness raising were undertaken. From January to June 2016, 53 suspected LF cases were reported through the Integrated Disease Surveillance and Response system (IDSR). Fourteen cases (26%) were confirmed for LF, 14 (26%) did not have a sample tested, and 25 (47%) were classified as not a case following laboratory analysis. The case fatality rate in the confirmed cases was 29%. One case of international exportation was reported from Sweden. Difficulties were identified in timely specimen collection, packaging, and transportation (in confirmed cases, the time from sample collection to result ranged from 2 to 64 days), along with a lack of response interventions for early cases. The delay in response to this outbreak could have been related to a number of challenges in this EVD-affected setting: a need to strengthen the IDSR system, develop preparedness plans, train rapid response teams, and build laboratory capacity. Prioritizing these actions will aid in the timely response to future outbreaks. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Small sets of interacting proteins suggest functional linkage mechanisms via Bayesian analogical reasoning.

    Science.gov (United States)

    Airoldi, Edoardo M; Heller, Katherine A; Silva, Ricardo

    2011-07-01

    Proteins and protein complexes coordinate their activity to execute cellular functions. In a number of experimental settings, including synthetic genetic arrays, genetic perturbations and RNAi screens, scientists identify a small set of protein interactions of interest. A working hypothesis is often that these interactions are the observable phenotypes of some functional process, which is not directly observable. Confirmatory analysis requires finding other pairs of proteins whose interaction may be additional phenotypical evidence about the same functional process. Extant methods for finding additional protein interactions rely heavily on the information in the newly identified set of interactions. For instance, these methods leverage the attributes of the individual proteins directly, in a supervised setting, in order to find relevant protein pairs. A small set of protein interactions provides a small sample to train parameters of prediction methods, thus leading to low confidence. We develop RBSets, a computational approach to ranking protein interactions rooted in analogical reasoning; that is, the ability to learn and generalize relations between objects. Our approach is tailored to situations where the training set of protein interactions is small, and leverages the attributes of the individual proteins indirectly, in a Bayesian ranking setting that is perhaps closest to propensity scoring in mathematical psychology. We find that RBSets leads to good performance in identifying additional interactions starting from a small evidence set of interacting proteins, for which an underlying biological logic in terms of functional processes and signaling pathways can be established with some confidence. Our approach is scalable and can be applied to large databases with minimal computational overhead. Our results suggest that analogical reasoning within a Bayesian ranking problem is a promising new approach for real-time biological discovery. Java code is available at

  17. Developing an assessment of fire-setting to guide treatment in secure settings: the St Andrew's Fire and Arson Risk Instrument (SAFARI).

    Science.gov (United States)

    Long, Clive G; Banyard, Ellen; Fulton, Barbara; Hollin, Clive R

    2014-09-01

    Arson and fire-setting are highly prevalent among patients in secure psychiatric settings, but there is an absence of valid and reliable assessment instruments and no evidence of a significant approach to intervention. To develop a semi-structured interview assessment specifically for fire-setting to augment structured assessments of risk and need. The extant literature was used to frame interview questions relating to the antecedents, behaviour and consequences necessary to formulate a functional analysis. Questions also covered readiness to change, fire-setting self-efficacy, the probability of future fire-setting, barriers to change, and understanding of fire-setting behaviour. The assessment concludes with indications for assessment and a treatment action plan. The inventory was piloted with a sample of women in secure care and was assessed for comprehensibility, reliability and validity. Staff rated the St Andrew's Fire and Arson Risk Instrument (SAFARI) as acceptable to patients and easy to administer. SAFARI was found to be comprehensible by over 95% of the general population, to have good acceptance, high internal reliability, substantial test-retest reliability and validity. SAFARI helps to provide a clear explanation of fire-setting in terms of the complex interplay of antecedents and consequences and facilitates the design of an individually tailored treatment programme in sympathy with a cognitive-behavioural approach. Further studies are needed to verify the reliability and validity of SAFARI with male populations and across settings.

  18. Determination of water-extractable nonstructural carbohydrates, including inulin, in grass samples with high-performance anion exchange chromatography and pulsed amperometric detection.

    Science.gov (United States)

    Raessler, Michael; Wissuwa, Bianka; Breul, Alexander; Unger, Wolfgang; Grimm, Torsten

    2008-09-10

    The exact and reliable determination of carbohydrates in plant samples of different origin is of great importance with respect to plant physiology. Additionally, the identification and quantification of carbohydrates are necessary for the evaluation of the impact of these compounds on the biogeochemistry of carbon. To attain this goal, it is necessary to analyze a great number of samples with both high sensitivity and selectivity within a limited time frame. This paper presents a rugged and easy method that allows the isocratic chromatographic determination of 12 carbohydrates and sugar alcohols from one sample within 30 min. The method was successfully applied to a variety of plant materials with particular emphasis on perennial ryegrass samples of the species Lolium perenne. The method was easily extended to the analysis of the polysaccharide inulin after its acidic hydrolysis into the corresponding monomers without the need for substantial change of chromatographic conditions or even the use of enzymes. It therefore offers a fundamental advantage for the analysis of the complex mixture of nonstructural carbohydrates often found in plant samples.

  19. Mining environmental high-throughput sequence data sets to identify divergent amplicon clusters for phylogenetic reconstruction and morphotype visualization.

    Science.gov (United States)

    Gimmler, Anna; Stoeck, Thorsten

    2015-08-01

    Environmental high-throughput sequencing (envHTS) is a very powerful tool, which in protistan ecology is predominantly used for the exploration of diversity and its geographic and local patterns. We here used a pyrosequenced V4-SSU rDNA data set from a solar saltern pond as a test case to exploit such massive protistan amplicon data sets beyond this descriptive purpose. Therefore, we combined a Swarm-based blastn network, including 11 579 ciliate V4 amplicons, to identify divergent amplicon clusters with targeted polymerase chain reaction (PCR) primer design for full-length small subunit ribosomal DNA retrieval and probe design for fluorescence in situ hybridization (FISH). This powerful strategy makes it possible to exploit envHTS data sets to (i) reveal the phylogenetic position of the taxon behind divergent amplicons; (ii) improve the phylogenetic resolution and evolutionary history of specific taxon groups; (iii) solidly assess an amplicon's (species') degree of similarity to its closest described relative; (iv) visualize the morphotype behind a divergent amplicon cluster; (v) rapidly FISH-screen many environmental samples for the geographic/habitat distribution and abundance of the respective organism; and (vi) monitor the success of enrichment strategies in live samples for cultivation and isolation of the respective organisms. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.

  20. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    Science.gov (United States)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines to apply the procedure in situations not explicitly considered here to promote its adoption in data analysis.
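
    For reference, the DER_SNR-style estimate mentioned above can be written in a few lines; the sketch assumes an evenly sampled one-dimensional data set with independent, normally distributed noise.

```python
# Noise estimate in the spirit of the DER_SNR algorithm: the standard deviation
# of the stochastic contribution is estimated from a weighted sum of neighbouring
# data points, assuming independent Gaussian noise and a well-sampled signal.
import numpy as np

def der_snr_noise(flux):
    """Return the estimated 1-sigma noise of a 1-D, evenly sampled data set."""
    flux = np.asarray(flux, dtype=float)
    if flux.size < 5:
        raise ValueError("need at least 5 samples")
    # 2*f[i] - f[i-2] - f[i+2] removes a locally linear signal; the prefactor
    # converts the median absolute value of that combination to a sigma.
    diff = 2.0 * flux[2:-2] - flux[:-4] - flux[4:]
    return 1.482602 / np.sqrt(6.0) * np.median(np.abs(diff))
```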

  1. Exact sampling of graphs with prescribed degree correlations

    Science.gov (United States)

    Bassler, Kevin E.; Del Genio, Charo I.; Erdős, Péter L.; Miklós, István; Toroczkai, Zoltán

    2015-08-01

    Many real-world networks exhibit correlations between the node degrees. For instance, in social networks nodes tend to connect to nodes of similar degree and conversely, in biological and technological networks, high-degree nodes tend to be linked with low-degree nodes. Degree correlations also affect the dynamics of processes supported by a network structure, such as the spread of opinions or epidemics. The proper modelling of these systems, i.e., without uncontrolled biases, requires the sampling of networks with a specified set of constraints. We present a solution to the sampling problem when the constraints imposed are the degree correlations. In particular, we develop an exact method to construct and sample graphs with a specified joint-degree matrix, which is a matrix providing the number of edges between all the sets of nodes of a given degree, for all degrees, thus completely specifying all pairwise degree correlations, and additionally, the degree sequence itself. Our algorithm always produces independent samples without backtracking. The complexity of the graph construction algorithm is O(NM), where N is the number of nodes and M is the number of edges.
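
    As a hedged illustration of specifying all pairwise degree correlations through a joint-degree matrix, the snippet below uses the joint-degree graph generator shipped with networkx (a related construction, not necessarily the exact algorithm of this paper); the example matrix itself is purely illustrative.

```python
# Sketch: build a graph matching a target joint-degree matrix with networkx.
# joint_degrees[k][l] gives the number of edges between degree-k and degree-l nodes.
import networkx as nx

joint_degrees = {1: {4: 1},
                 2: {2: 2, 3: 2, 4: 2},
                 3: {2: 2, 4: 1},
                 4: {1: 1, 2: 2, 3: 1}}

assert nx.is_valid_joint_degree(joint_degrees)     # realizable as a simple graph?
G = nx.joint_degree_graph(joint_degrees, seed=0)   # graph with the target JDM
print(nx.degree_histogram(G))                      # implied degree sequence
```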

  2. Interviewing Objects: Including Educational Technologies as Qualitative Research Participants

    Science.gov (United States)

    Adams, Catherine A.; Thompson, Terrie Lynn

    2011-01-01

    This article argues the importance of including significant technologies-in-use as key qualitative research participants when studying today's digitally enhanced learning environments. We gather a set of eight heuristics to assist qualitative researchers in "interviewing" technologies-in-use (or other relevant objects), drawing on concrete…

  3. Instruments to Identify Commercially Sexually Exploited Children: Feasibility of Use in an Emergency Department Setting.

    Science.gov (United States)

    Armstrong, Stephanie

    2017-12-01

    This review examines the screening instruments that are in existence today to identify commercially sexually exploited children. The instruments are compared and evaluated for their feasibility of use in an emergency department setting. Four electronic databases were searched to identify screening instruments that assessed solely for commercial sexual exploitation. Search terms included "commercially sexually exploited children," "CSEC," "domestic minor sex trafficking," "DMST," "juvenile sex trafficking," and "JST." Those terms were then searched in combination with each of the following: "tools," "instruments," "screening," "policies," "procedures," "data collection," "evidence," and "validity." Six screening instruments were found to meet the inclusion criteria. Variation among instruments included number of questions, ease of administration, information sources, scoring methods, and training information provided. Two instruments were determined to be highly feasible for use in the emergency department setting, those being the Asian Health Services and Banteay Srei's CSEC Screening Protocol and Greenbaum et al's CSEC/child sex trafficking 6-item screening tool. A current dearth of screening instruments was confirmed. It is recommended that additional screening instruments be created to include developmentally appropriate instruments for preadolescent children. Numerous positive features were identified within the instruments in this review and are suggested for use in future screening instruments, including succinctness, a simple format, easy administration, training materials, sample questions, multiple information sources, designation of questions requiring mandatory reporting, a straightforward scoring system, and an algorithm format.

  4. Analysis of hepatitis C viral dynamics using Latin hypercube sampling

    Science.gov (United States)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2012-12-01

    We consider a mathematical model comprising four coupled ordinary differential equations (ODEs) to study hepatitis C viral dynamics. The model includes the efficacies of a combination therapy of interferon and ribavirin. There are two main objectives of this paper. The first one is to approximate the percentage of cases in which there is viral clearance in the absence of treatment, as well as the percentage of response to treatment for various efficacy levels. The other is to better understand and identify the parameters that play a key role in the decline of viral load and can be estimated in a clinical setting. A condition for the stability of the uninfected and the infected steady states is presented. A large number of sample points for the model parameters (which are physiologically feasible) are generated using Latin hypercube sampling. An analysis of the simulated values indicates that approximately 29.85% of cases result in clearance of the virus during the early phase of the infection. Results from the χ2 and Spearman's tests done on the samples indicate a distinctly different distribution for certain parameters for the cases exhibiting viral clearance under the combination therapy.
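
    A minimal sketch of the Latin hypercube step is given below; the parameter names and ranges are placeholders, not the physiological values used in the paper.

```python
# Illustrative Latin hypercube sample over plausible parameter ranges
# (names and bounds are placeholders, not the paper's values).
import numpy as np
from scipy.stats import qmc

param_names = ["beta", "delta", "p", "c"]           # hypothetical ODE parameters
lower = np.array([1e-8, 0.01, 0.1, 1.0])
upper = np.array([1e-6, 1.00, 10.0, 30.0])

sampler = qmc.LatinHypercube(d=len(param_names), seed=42)
unit_samples = sampler.random(n=10_000)             # stratified points in [0, 1)^d
samples = qmc.scale(unit_samples, lower, upper)     # map to the parameter ranges

# Each row of `samples` would then be fed to the ODE model; outcomes (e.g. viral
# clearance yes/no) can afterwards be tested against parameters with chi-squared
# or Spearman rank statistics.
```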

  5. Nuclear data sets for reactor design calculations - approved 1975

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    This standard identifies and describes the specifications for developing, preparing, and documenting nuclear data sets to be used in reactor design calculations. The specifications include (a) criteria for acceptance of evaluated nuclear data sets, (b) criteria for processing evaluated data and preparation of processed continuous data and averaged data sets, and (c) identification of specific evaluated, processed continuous, and averaged data sets which meet these criteria for specific reactor types

  6. Gender and physician specialization and practice settings in Ecuador: a qualitative study

    Directory of Open Access Journals (Sweden)

    Rita Bedoya-Vaca

    2016-11-01

    Background: The increasing proportion of women in the medical profession is a worldwide phenomenon often called the “feminization of medicine.” However, it is understudied in low and middle-income countries, particularly in Latin America. Methods: Using a qualitative, descriptive design, we explored the influence of gender and other factors on physician career decision-making and experiences, including medical specialty and public vs. private practice, in Quito, Ecuador, through in-depth, semi-structured interviews (n = 31) in 2014. Theoretical sampling was used to obtain approximately equal numbers of women and men and a range of medical specialties and practice settings; data saturation was used to determine sample size. Transcripts were analyzed using content coding procedures to mark quotations related to major topics and sub-themes included in the interview guide and inductive (grounded theory) approaches to identify new themes and sub-themes. Results: Gendered norms regarding women’s primary role in childrearing, along with social class or economic resources, strongly influenced physicians’ choice of medical specialty and practice settings. Women physicians, especially surgeons, have had to “pay the price” socially, often remaining single and/or childless, or ending up divorced; in addition, both women and men face limited opportunities for medical residency training in Ecuador, thus specialty is determined by economic resources and “opportunity.” Women physicians often experience discrimination from patients, nurses, and, sometimes, other physicians, which has limited their mobility and ability to operate independently and in the private sector. The public sector, where patients cannot “choose” their doctors, offers women more opportunities for professional success and advancement, and the regular hours enable organizing work and family responsibilities. However, the public sector has generally much less

  7. Gender and physician specialization and practice settings in Ecuador: a qualitative study.

    Science.gov (United States)

    Bedoya-Vaca, Rita; Derose, Kathryn P; Romero-Sandoval, Natalia

    2016-11-17

    The increasing proportion of women in the medical profession is a worldwide phenomenon often called the "feminization of medicine." However, it is understudied in low and middle-income countries, particularly in Latin America. Using a qualitative, descriptive design, we explored the influence of gender and other factors on physician career decision-making and experiences, including medical specialty and public vs. private practice, in Quito, Ecuador, through in-depth, semi-structured interviews (n = 31) in 2014. Theoretical sampling was used to obtain approximately equal numbers of women and men and a range of medical specialties and practice settings; data saturation was used to determine sample size. Transcripts were analyzed using content coding procedures to mark quotations related to major topics and sub-themes included in the interview guide and inductive (grounded theory) approaches to identify new themes and sub-themes. Gendered norms regarding women's primary role in childrearing, along with social class or economic resources, strongly influenced physicians' choice of medical specialty and practice settings. Women physicians, especially surgeons, have had to "pay the price" socially, often remaining single and/or childless, or ending up divorced; in addition, both women and men face limited opportunities for medical residency training in Ecuador, thus specialty is determined by economic resources and "opportunity." Women physicians often experience discrimination from patients, nurses, and, sometimes, other physicians, which has limited their mobility and ability to operate independently and in the private sector. The public sector, where patients cannot "choose" their doctors, offers women more opportunities for professional success and advancement, and the regular hours enable organizing work and family responsibilities. However, the public sector has generally much less flexibility than the private sector, making it more difficult to balance work

  8. A nested sampling particle filter for nonlinear data assimilation

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-04-15

    We present an efficient nonlinear data assimilation filter that combines particle filtering with the nested sampling algorithm. Particle filters (PF) utilize a set of weighted particles as a discrete representation of probability distribution functions (PDF). These particles are propagated through the system dynamics and their weights are sequentially updated based on the likelihood of the observed data. Nested sampling (NS) is an efficient sampling algorithm that iteratively builds a discrete representation of the posterior distributions by focusing a set of particles to high-likelihood regions. This would allow the representation of the posterior PDF with a smaller number of particles and reduce the effects of the curse of dimensionality. The proposed nested sampling particle filter (NSPF) iteratively builds the posterior distribution by applying a constrained sampling from the prior distribution to obtain particles in high-likelihood regions of the search space, resulting in a reduction of the number of particles required for an efficient behaviour of particle filters. Numerical experiments with the 3-dimensional Lorenz63 and the 40-dimensional Lorenz96 models show that NSPF outperforms PF in accuracy with a relatively smaller number of particles. © 2013 Royal Meteorological Society.
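
    For orientation, a bare-bones bootstrap particle-filter cycle is sketched below; the NSPF described above replaces the plain resampling step with nested-sampling-style constrained sampling towards high-likelihood regions, which is not reproduced here.

```python
# A bare-bones bootstrap (SIR) particle-filter step for reference; generic
# illustration only, not the nested sampling particle filter of the paper.
import numpy as np

def pf_step(particles, weights, propagate, log_likelihood, obs, rng):
    """One predict/update/resample cycle for a weighted particle ensemble."""
    particles = propagate(particles, rng)                 # push through dynamics
    logw = np.log(weights) + log_likelihood(obs, particles)
    logw -= logw.max()                                    # numerical stabilisation
    weights = np.exp(logw)
    weights /= weights.sum()
    # Multinomial resampling when the effective sample size collapses.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```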

  9. Priority setting in clinical nursing practice: literature review.

    Science.gov (United States)

    Hendry, Charles; Walker, Anne

    2004-08-01

    Time is a valuable resource. When nurses experience demands on their services which exceed their available time, then 'rationing' must occur. In clinical practice such rationing requires practitioners to set priorities for care. The aim of this paper is to establish what is currently known about priority setting in nursing, including how nurses set priorities and what factors influence this. CINAHL, Medline, ASSIA, and PsychLit databases for the years 1982-2002 were searched, using the terms (clinical decision-making or problem-solving or planning) and (setting priorities or prioriti*). The publications found were used in a selective, descriptive review. Priority setting is an important skill in nursing, and a skill deficit can have serious consequences for patients. Recent studies have suggested that it is a difficult skill for newly qualified nurses to acquire and may not be given sufficient attention in nurse education. Priority setting can be defined as the ordering of nursing problems using notions of urgency and/or importance, in order to establish a preferential order for nursing actions. A number of factors that may impact on priority setting have been identified in the literature. These include: the expertise of the nurse; the patient's condition; the availability of resources; ward organization; philosophies and models of care; the nurse-patient relationship; and the cognitive strategy used by the nurse to set priorities. However, very little empirical work has been conducted in this area. Further study of priority setting in a range of clinical practice settings is necessary. This could inform both practice and education, promote better use of limited resources and maximize patient outcomes.

  10. The Effect of the Investment Opportunity Set on Dividend Policy (Pengaruh Investment Opportunity Set terhadap Kebijakan Dividen)

    Directory of Open Access Journals (Sweden)

    Indriarti Sumarni

    2016-04-01

    This study aims to examine the effect of the investment opportunity set on dividend policy. The population comprises manufacturing companies listed on the Indonesian Stock Exchange (BEI) in the period 2010-2011. A sample of 58 companies was selected by purposive sampling, and a quantitative method was used. The data are secondary data in the form of financial reports, the Indonesian Capital Market Directory (ICMD) and other references that support this research. Data analysis used the classical assumption tests: normality, multicollinearity, autocorrelation and heteroskedasticity. Hypotheses were tested with multiple regression analysis using SPSS 17. The results show that LnMVEBVE has a significant negative influence on DPR, LnPER has a significant positive effect on DPR, LnCAPXA has a significant negative effect on DPR, and LnDEPV has no effect on DPR. Keywords: investment opportunity set, dividend policy, MVEBVE, PER, CAPXA, DEPV and DPR
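
    A generic sketch of the multiple-regression step described above is given below; the data file and column names are hypothetical stand-ins for the study's variables, and the study itself used SPSS rather than Python.

```python
# Generic multiple-regression sketch of the kind of model described above
# (file name and columns are hypothetical placeholders for the study's
# LnMVEBVE, LnPER, LnCAPXA, LnDEPV regressors and the DPR dependent variable).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("iosdata.csv")                      # hypothetical data file
X = sm.add_constant(df[["LnMVEBVE", "LnPER", "LnCAPXA", "LnDEPV"]])
model = sm.OLS(df["DPR"], X).fit()
print(model.summary())                               # coefficients, t-stats, R^2
```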

  11. Absorption corrections for x-ray fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Bazan, F.; Bonner, N.A.

    1975-01-01

    The discovery of a very simple and useful relationship between the absorption coefficient of a particular element and the ratio of incoherent to coherent scattering by the sample containing the element is discussed. By measuring the absorption coefficients for a few elements in a few samples, absorption coefficients for many elements in an entire set of similar samples can be obtained. (auth)

  12. Absorption corrections for x-ray fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Bazan, F.; Bonner, N.A.

    1976-01-01

    The discovery of a very simple and useful relationship between the absorption coefficient of a particular element and the ratio of incoherent to coherent scattering by the sample containing the element is discussed. By measuring the absorption coefficients for a few elements in a few samples, absorption coefficients for many elements in an entire set of similar samples can be obtained
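
    A minimal sketch of how such an empirical scatter-ratio calibration could be applied is given below; the linear functional form and all numbers are assumptions for illustration only, not values from these reports.

```python
# Illustrative calibration in the spirit of the scatter-ratio approach: measure
# the incoherent/coherent (Compton/Rayleigh) scatter ratio and the absorption
# coefficient for a few samples, fit an empirical relation, then predict the
# absorption correction for the remaining samples of the set.
import numpy as np

ratio_cal = np.array([1.8, 2.3, 2.9, 3.5])      # measured incoherent/coherent ratios
mu_cal = np.array([9.1, 7.4, 6.0, 5.1])         # measured absorption coefficients

coeffs = np.polyfit(ratio_cal, mu_cal, 1)        # empirical linear calibration
mu_predicted = np.polyval(coeffs, 2.6)           # correction for an unmeasured sample
print(mu_predicted)
```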

  13. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    International Nuclear Information System (INIS)

    Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-01-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs given a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of the possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.

  14. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs given a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of the possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.
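
    The following toy sketch illustrates the general idea of surrogate-based adaptive sampling, using a Gaussian-process surrogate that places the next run where its prediction is most uncertain; it is only a generic illustration, not the RAVEN/RISMC implementation.

```python
# Toy surrogate-based adaptive sampling: fit a Gaussian-process surrogate to the
# runs already available and place the next sample where the surrogate is most
# uncertain (generic "smart" sampling illustration).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_code(x):                       # stand-in for a simulator run
    return np.sin(3 * x) + 0.1 * x

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(5, 1))           # initial sparse design
y = expensive_code(X).ravel()
candidates = np.linspace(0, 5, 500).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]      # most uncertain location
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_code(x_next)[0])
```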

  15. Support, shape and number of replicate samples for tree foliage analysis

    NARCIS (Netherlands)

    Luyssaert, Sebastiaan; Mertens, Jan; Raitio, Hannu

    Many fundamental features of a sampling program are determined by the heterogeneity of the object under study and the settings for the error (α), the power (β), the effect size (ES), the number of replicate samples, and sample support, which is a feature that is often overlooked. The number of
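
    As an illustration of how the error (α), the power (1 - β) and the effect size jointly fix the number of replicate samples, a minimal two-group power calculation is sketched below; the numbers are examples only, not values from the study.

```python
# How alpha, power (1 - beta) and effect size jointly determine the number of
# replicate samples, illustrated for a two-group comparison (values are only
# examples, not those of the study).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.8,  # standardized ES
                                          alpha=0.05,       # type I error
                                          power=0.8)        # 1 - beta
print(round(n_per_group))   # replicate foliage samples needed per treatment
```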

  16. Exposure to potentially toxic hydrocarbons and halocarbons released from the dialyzer and tubing set during hemodialysis.

    Science.gov (United States)

    Lee, Hyun Ji Julie; Meinardi, Simone; Pahl, Madeleine V; Vaziri, Nostratola D; Blake, Donald R

    2012-10-01

    Although much is known about the effect of chronic kidney failure and dialysis on the composition of solutes in plasma, little is known about their impact on the composition of gaseous compounds in exhaled breath. This study was designed to explore the effect of uremia and the hemodialysis (HD) procedure on the composition of exhaled breath. Breath samples were collected from 10 dialysis patients immediately before, during, and after a dialysis session. To determine the potential introduction of gaseous compounds from dialysis components, gasses emitted from dialyzers, tubing set, dialysate, and water supplies were collected. Prospective cohort study. 10 HD patients and 10 age-matched healthy individuals. Predictors include the dialyzers, tubing set, dialysate, and water supplies before, during, and after dialysis. Changes in the composition of exhaled breath. A 5-column/detector gas chromatography system was used to measure hydrocarbon, halocarbon, oxygenate, and alkyl nitrate compounds. Concentrations of 14 hydrocarbons and halocarbons in patients' breath rapidly increased after the onset of the HD treatment. All 14 compounds and 5 others not found in patients' breath were emitted from the dialyzers and tubing sets. Contrary to earlier reports, exhaled breath ethane concentrations in our dialysis patients were virtually unchanged during the HD treatment. Single-center study with a small sample size may limit the generalizability of the findings. The study documented the release of several potentially toxic hydrocarbons and halocarbons to patients from the dialyzer and tubing sets during the HD procedure. Because long-term exposure to these compounds may contribute to the morbidity and mortality in dialysis population, this issue should be considered in the manufacturing of the new generation of dialyzers and dialysis tubing sets. Copyright © 2012 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  17. Setting up of high-performance laser-induced breakdown

    Indian Academy of Sciences (India)

    Laser and Plasma Technology Division, Bhabha Atomic Research Centre, ... analysis include environmental samples, biological samples, radioactive waste mate- ... applicability to different types of samples (solid, liquid and gas) make it ...

  18. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  19. Patient identification in blood sampling.

    Science.gov (United States)

    Davidson, Anne; Bolton-Maggs, Paula

    The majority of adverse reports relating to blood transfusions result from human error, including misidentification of patients and incorrect labelling of samples. This article outlines best practice in blood sampling for transfusion (but is recommended for all pathology samples) and the role of patient empowerment in improving safety.

  20. Szegö's theorem on Parreau-Widom sets

    DEFF Research Database (Denmark)

    Christiansen, Jacob Stordal

    2012-01-01

    In this paper, we generalize Szego's theorem for orthogonal polynomials on the real line to infinite gap sets of Parreau–Widom type. This notion includes Cantor sets of positive measure. The Szego condition involves the equilibrium measure which in turn is absolutely continuous. Our approach builds...

  1. The Impact of Problem Sets on Student Learning

    Science.gov (United States)

    Kim, Myeong Hwan; Cho, Moon-Heum; Leonard, Karen Moustafa

    2012-01-01

    The authors examined the role of problem sets on student learning in university microeconomics. A total of 126 students participated in the study in consecutive years. An independent samples t test showed that students who were not given answer keys outperformed students who were given answer keys. Multiple regression analysis showed that, along with…

  2. Clinical productivity of primary care nurse practitioners in ambulatory settings.

    Science.gov (United States)

    Xue, Ying; Tuttle, Jane

    Nurse practitioners are increasingly being integrated into primary care delivery to help meet the growing demand for primary care. It is therefore important to understand nurse practitioners' productivity in primary care practice. We examined nurse practitioners' clinical productivity in regard to number of patients seen per week, whether they had a patient panel, and patient panel size. We further investigated practice characteristics associated with their clinical productivity. We conducted cross-sectional analysis of the 2012 National Sample Survey of Nurse Practitioners. The sample included full-time primary care nurse practitioners in ambulatory settings. Multivariable survey regression analyses were performed to examine the relationship between practice characteristics and nurse practitioners' clinical productivity. Primary care nurse practitioners in ambulatory settings saw an average of 80 patients per week (95% confidence interval [CI]: 79-82), and 64% of them had their own patient panel. The average patient panel size was 567 (95% CI: 522-612). Nurse practitioners who had their own patient panel spent a similar percent of time on patient care and documentation as those who did not. However, those with a patient panel were more likely to provide a range of clinical services to most patients. Nurse practitioners' clinical productivity was associated with several modifiable practice characteristics such as practice autonomy and billing and payment policies. The estimated number of patients seen in a typical week by nurse practitioners is comparable to that by primary care physicians reported in the literature. However, they had a significantly smaller patient panel. Nurse practitioners' clinical productivity can be further improved. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. What's in a ray set: moving towards a unified ray set format

    Science.gov (United States)

    Muschaweck, Julius

    2011-10-01

    For the purpose of optical simulation, a plethora of formats exist to describe the properties of a light source. Except for the EULUMDAT and IES formats, which describe sources in terms of aperture area and far field intensity, all these formats are vendor specific, and no generally accepted standard exists. Most illumination simulation software vendors use their own format for ray sets, which describe sources in terms of many rays. Some of them keep their format definition proprietary. Thus, software packages typically can read or write only their own specific format, although the actual data content is not so different. Typically, they describe origin and direction of each ray in 3D vectors, and use one additional number for the magnitude, where magnitude may denote radiant flux, luminous flux (equivalently tristimulus Y), or tristimulus X and Z. Sometimes each ray also carries its wavelength, while other formats allow specifying an overall spectrum for the whole source. In addition, in at least one format, polarization properties are also included for each ray. This situation makes it inefficient and potentially error prone for light source manufacturers to provide ray data sets for their sources in many different formats. Furthermore, near field goniometer vendors again use their proprietary formats to store the source description in terms of luminance data, and offer their proprietary software to generate ray sets from this data base. Again, this plethora of formats makes ray set production inefficient and potentially error prone. In this paper, we propose to describe ray data sets in terms of phase space, as a step towards a standardized ray set format. It is well known that luminance and radiance can be defined as flux density in phase space: luminance is flux divided by etendue. Therefore, single rays can be thought of as center points of phase space cells, where each cell possesses its volume (i.e. etendue), its flux, and therefore its luminance. In
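
    To make the per-ray content discussed above concrete, a minimal illustrative ray record is sketched below; the field names and units are assumptions for illustration, not a proposed standard.

```python
# Illustrative per-ray record for a phase-space ray set: each ray is the centre
# of a phase-space cell and carries its own etendue and flux, from which the
# cell's luminance (flux / etendue) follows. Field names are illustrative only.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Ray:
    origin: Tuple[float, float, float]      # start point (e.g. mm)
    direction: Tuple[float, float, float]   # unit direction vector
    flux: float                             # luminous or radiant flux of the cell
    etendue: float                          # phase-space volume of the cell
    wavelength: Optional[float] = None      # nm, if spectrally resolved
    stokes: Optional[Tuple[float, float, float, float]] = None  # polarization

    @property
    def luminance(self) -> float:
        return self.flux / self.etendue
```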

  4. D90: The Strongest Contributor to Setting Time in Mineral Trioxide Aggregate and Portland Cement.

    Science.gov (United States)

    Ha, William N; Bentz, Dale P; Kahler, Bill; Walsh, Laurence J

    2015-07-01

    The setting times of commercial mineral trioxide aggregate (MTA) and Portland cements vary. It was hypothesized that much of this variation was caused by differences in particle size distribution. Two gram samples from 11 MTA-type cements were analyzed by laser diffraction to determine their particle size distributions characterized by their percentile equivalent diameters (the 10th percentile, the median, and the 90th percentile [d90], respectively). Setting time data were received from manufacturers who performed indentation setting time tests as specified by the standards relevant to dentistry, ISO 6786 (9 respondents) or ISO 9917.1 (1 respondent), or not divulged to the authors (1 respondent). In a parallel experiment, 6 samples of different size graded Portland cements were produced using the same cement clinker. The measurement of setting time for Portland cement pastes was performed using American Society for Testing and Materials C 191. Cumulative heat release was measured using isothermal calorimetry to assess the reactions occurring during the setting of these pastes. In all experiments, linear correlations were assessed between setting times, heat release, and the 3 particle size parameters. Particle size varied considerably among MTA cements. For MTA cements, d90 was the particle size characteristic showing the highest positive linear correlation with setting time (r = 0.538). For Portland cement, d90 gave an even higher linear correlation for the initial setting time (r = 0.804) and the final setting time (r = 0.873) and exhibited a strong negative linear correlation for cumulative heat release (r = 0.901). Smaller particle sizes result in faster setting times, with d90 (the largest particles) being most closely correlated with the setting times of the samples. Copyright © 2015 American Association of Endodontists. All rights reserved.

  5. Environmental surveillance master sampling schedule

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, L E

    1992-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP) and Ground-Water Monitoring Project. Samples for radiological analyses include Air-Particulate Filter, gases and vapor; Water/Columbia River, Onsite Pond, Spring, Irrigation, and Drinking; Foodstuffs/Animal Products including Whole Milk, Poultry and Eggs, and Beef; Foodstuffs/Produce including Leafy Vegetables, Vegetables, and Fruit; Foodstuffs/Farm Products including Wine, Wheat and Alfalfa; Wildlife; Soil; Vegetation; and Sediment. Direct Radiation Measurements include Terrestrial Locations, Columbia River Shoreline Locations, and Onsite Roadway, Railway and Aerial, Radiation Surveys.

  6. An intercomparison exercise on radionuclides in sediment samples

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, S.P.

    1996-07-01

    An intercomparison exercise on artificial and natural radionuclides in two sediment samples, one from the Baltic Sea and one from the Kattegat, has been carried out under the EKO-1 project of the Nordic Safety Research Programme (NKS) for the period 1996-97. The measurement techniques have included direct gamma-ray spectrometry with Ge and NaI detectors, and radiochemical procedures followed by beta counting and alpha spectrometry. The participants comprised 21 laboratories. Results were submitted for Cs-137, Cs-134, Co-60, Sb-125, Pu-239,240, Pu-238, Am-241, Sr-90, Ra-226, Th-232 and K-40, Pb-210, Po-210 and U-235. The analytical performance of the participants was evaluated for those radionuclides where six or more data sets were received. Statistical tests were made to see if individual results agreed with overall average radionuclide concentrations in the two sediment materials within target standard deviations. The results of these tests show that for Cs-137, Cs-134, Ra-226, Th-232 and K-40 the analytical performance criteria were not met for 20-40% of the data sets. For plutonium isotopes the tests show that the performance criteria were not met for 13% of the data sets. Tests of overall analytical performance show that 61% of the data sets do not meet the combined performance criteria. This shows that there is room for considerable improvement of analytical quality for most of the laboratories that have participated in this intercomparison. The intercomparison exercise has furthermore demonstrated several elementary problems in analytical work. (au) 57 tabs., 16 ills., 1 ref.

  7. An intercomparison exercise on radionuclides in sediment samples

    International Nuclear Information System (INIS)

    Nielsen, S.P.

    1996-07-01

    An intercomparison exercise on artificial and natural radionuclides in two sediment samples, one from the Baltic Sea and one from the Kattegat, has been carried out under the EKO-1 project of the Nordic Safety Research Programme (NKS) for the period 1996-97. The measurement techniques have included direct gamma-ray spectrometry with Ge and NaI detectors, and radiochemical procedures followed by beta counting and alpha spectrometry. The participants comprised 21 laboratories. Results were submitted for Cs-137, Cs-134, Co-60, Sb-125, Pu-239,240, Pu-238, Am-241, Sr-90, Ra-226, Th-232 and K-40, Pb-210, Po-210 and U-235. The analytical performance of the participants was evaluated for those radionuclides where six or more data sets were received. Statistical tests were made to see if individual results agreed with overall average radionuclide concentrations in the two sediment materials within target standard deviations. The results of these tests show that for Cs-137, Cs-134, Ra-226, Th-232 and K-40 the analytical performance criteria were not met for 20-40% of the data sets. For plutonium isotopes the tests show that the performance criteria were not met for 13% of the data sets. Tests of overall analytical performance show that 61% of the data sets do not meet the combined performance criteria. This shows that there is room for considerable improvement of analytical quality for most of the laboratories that have participated in this intercomparison. The intercomparison exercise has furthermore demonstrated several elementary problems in analytical work. (au) 57 tabs., 16 ills., 1 ref.

  8. Seeking Signs of Life on Mars: The Importance of Sedimentary Suites as Part of Mars Sample Return

    Science.gov (United States)

    iMOST Team; Mangold, N.; McLennan, S. M.; Czaja, A. D.; Ori, G. G.; Tosca, N. J.; Altieri, F.; Amelin, Y.; Ammannito, E.; Anand, M.; Beaty, D. W.; Benning, L. G.; Bishop, J. L.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Campbell, K. A.; Carrier, B. L.; Debaille, V.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Farmer, J. D.; Fernandez-Remolar, D. C.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Hallis, L. J.; Harrington, A. D.; Hausrath, E. M.; Herd, C. D. K.; Horgan, B.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mackelprang, R.; Mayhew, L. E.; McCubbin, F. M.; McCoy, J. T.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Schmitz, N.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Usui, T.; Van Kranendonk, M. J.; Wadhwa, M.; Weiss, B. P.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.

    2018-04-01

    Sedimentary, and especially lacustrine, depositional environments are high-priority geological/astrobiological settings for Mars Sample Return. We review the detailed investigations, measurements, and sample types required to evaluate such settings.

  9. The effectiveness of multi-component goal setting interventions for changing physical activity behaviour: a systematic review and meta-analysis.

    Science.gov (United States)

    McEwan, Desmond; Harden, Samantha M; Zumbo, Bruno D; Sylvester, Benjamin D; Kaulius, Megan; Ruissen, Geralyn R; Dowd, A Justine; Beauchamp, Mark R

    2016-01-01

    Drawing from goal setting theory (Latham & Locke, 1991; Locke & Latham, 2002; Locke et al., 1981), the purpose of this study was to conduct a systematic review and meta-analysis of multi-component goal setting interventions for changing physical activity (PA) behaviour. A literature search returned 41,038 potential articles. Included studies consisted of controlled experimental trials wherein participants in the intervention conditions set PA goals and their PA behaviour was compared to participants in a control group who did not set goals. A meta-analysis was ultimately carried out across 45 articles (comprising 52 interventions, 126 effect sizes, n = 5912) that met eligibility criteria using a random-effects model. Overall, a medium, positive effect (Cohen's d(SE) = .552(.06), 95% CI = .43-.67, Z = 9.03, p < .001) of multi-component goal setting interventions in relation to PA behaviour was found. Moderator analyses across 20 variables revealed several noteworthy results with regard to features of the study, sample characteristics, PA goal content, and additional goal-related behaviour change techniques. In conclusion, multi-component goal setting interventions represent an effective method of fostering PA across a diverse range of populations and settings. Implications for effective goal setting interventions are discussed.
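
    For readers unfamiliar with the pooling step, a generic DerSimonian-Laird random-effects computation is sketched below; it illustrates the type of random-effects model referred to above and uses no data from the review.

```python
# Generic DerSimonian-Laird random-effects pooling of per-study effect sizes,
# to illustrate the kind of random-effects model referred to above (not the
# authors' exact computation or data).
import numpy as np

def random_effects_pool(d, v):
    """d: study effect sizes; v: their within-study variances."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)                    # heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / C)               # between-study variance
    w_star = 1.0 / (v + tau2)
    d_pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_pooled, se, d_pooled - 1.96 * se, d_pooled + 1.96 * se
```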

  10. International spinal cord injury skin and thermoregulation function basic data set.

    Science.gov (United States)

    Karlsson, A K; Krassioukov, A; Alexander, M S; Donovan, W; Biering-Sørensen, F

    2012-07-01

    To create an international spinal cord injury (SCI) skin and thermoregulation basic data set within the framework of the International SCI Data Sets. An international working group. The draft of the data set was developed by a working group comprising members appointed by the American Spinal Injury Association (ASIA), the International Spinal Cord Society (ISCoS) and a representative of the Executive Committee of the International SCI Standards and Data Sets. The final version of the data set was developed after review and comments by members of the Executive Committee of the International SCI Standards and Data Sets, the ISCoS Scientific Committee, ASIA Board, relevant and interested international organizations and societies, individual persons with specific interest and the ISCoS Council. To make the data set uniform, each variable and each response category within each variable have been specifically defined to promote the collection and reporting of comparable minimal data. Variables included in the present data set are: date of data collection, thermoregulation history after SCI, including hyperthermia or hypothermia (noninfectious or infectious), as well as the history of hyperhidrosis or hypohidrosis above or below level of lesion. Body temperature and the time of measurement are included. Details regarding the presence of any pressure ulcer and stage, location and size of the ulcer(s), date of appearance of the ulcer(s) and whether surgical treatment has been performed are included. The history of any pressure ulcer during the last 12 months is also noted.

  11. GEMAS: Colours of dry and moist agricultural soil samples of Europe

    Science.gov (United States)

    Klug, Martin; Fabian, Karl; Reimann, Clemens

    2016-04-01

    High-resolution HDR colour images of all Ap samples from the GEMAS survey were acquired using a GeoTek Linescan camera. Three measurements of dry and wet samples with increasing exposure time and increasing illumination settings produced a set of colour images at 50 μm resolution. Automated image processing was used to calibrate the six images per sample with respect to the synchronously measured X-Rite colorchecker chart. The calibrated images were then fitted to Munsell soil colours that were measured in the same way. The results provide overview maps of dry and moist European soil colours. Because colour is closely linked to iron mineralogy, carbonate, silicate and organic carbon content, the results can be correlated to magnetic, mineralogical, and geochemical properties. In combination with the full GEMAS chemical and physical measurements, this yields a valuable data set for calibration and interpretation of visible satellite colour data with respect to chemical composition and geological background, soil moisture, and soil degradation. This data set will help to develop new methods for world-wide characterization and monitoring of agricultural soils, which is essential for quantifying geologic and human impact on the critical zone environment. It furthermore enables the scientific community and governmental authorities to monitor consequences of climatic change, to plan and administrate economic and ecological land use, and to use the data set for forensic applications.
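
    The abstract describes calibrating each image against a synchronously measured X-Rite colorchecker chart but does not detail the GEMAS processing chain. As a hedged illustration of one common way such a calibration step can be done, the sketch below fits an affine colour correction from measured patch RGBs to reference values by least squares; all patch values and array shapes are placeholders, not GEMAS data.

        # Generic colour-calibration sketch: fit an affine 3x4 correction from
        # measured ColorChecker patch RGBs to reference values, then apply it
        # to an image. Patch values below are synthetic placeholders.
        import numpy as np

        def fit_colour_correction(measured, reference):
            """Least-squares affine map so that [R G B 1] @ M ~ reference."""
            measured = np.asarray(measured, float)        # (n_patches, 3)
            reference = np.asarray(reference, float)      # (n_patches, 3)
            X = np.hstack([measured, np.ones((len(measured), 1))])
            M, *_ = np.linalg.lstsq(X, reference, rcond=None)   # (4, 3)
            return M

        def apply_correction(image, M):
            """Apply the fitted correction to an (H, W, 3) float image."""
            h, w, _ = image.shape
            flat = np.hstack([image.reshape(-1, 3),
                              np.ones((h * w, 1))]) @ M
            return np.clip(flat, 0, 255).reshape(h, w, 3)

        rng = np.random.default_rng(0)
        measured = rng.uniform(0, 255, size=(24, 3))      # camera RGB of 24 patches
        reference = 0.9 * measured + 10                   # synthetic "true" patch colours
        M = fit_colour_correction(measured, reference)
        calibrated = apply_correction(rng.uniform(0, 255, (4, 4, 3)), M)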

  12. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation where source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
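
    The RS idea is easiest to see for the simplest case of two sources and one marker: draw the source end-member signatures from their distributions, solve the mixing equation for each draw, and summarize the resulting distribution of source fractions. The sketch below uses hypothetical signature values and is not the paper's implementation.

        # Random sampling (RS) sketch for a two-source, one-marker mixing model:
        # delta_mix = f1 * delta_1 + (1 - f1) * delta_2, solved for f1 per draw,
        # with the source signatures drawn from distributions rather than fixed.
        import numpy as np

        rng = np.random.default_rng(0)
        delta_mix = -26.0                                  # measured mixture value (hypothetical)
        delta_1 = rng.normal(-28.0, 1.0, 100_000)          # source 1 signature distribution
        delta_2 = rng.normal(-21.0, 1.5, 100_000)          # source 2 signature distribution

        f1 = (delta_mix - delta_2) / (delta_1 - delta_2)   # fraction from source 1
        f1 = f1[(f1 >= 0) & (f1 <= 1)]                     # keep physically meaningful draws

        print(f"source 1 fraction: mean {f1.mean():.2f}, median {np.median(f1):.2f}, "
              f"95% interval {np.percentile(f1, 2.5):.2f}-{np.percentile(f1, 97.5):.2f}")

    Because f1 is a nonlinear (ratio) function of the source signatures, its mean and median under this sampling generally differ from the value obtained with fixed signatures, which is the bias discussed above.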

  13. Data in support of the detection of genetically modified organisms (GMOs) in food and feed samples.

    Science.gov (United States)

    Alasaad, Noor; Alzubi, Hussein; Kader, Ahmad Abdul

    2016-06-01

    Food and feed samples were randomly collected from different sources, including local and imported materials from the Syrian local market. These included maize, barley, soybean, fresh food samples and raw material. GMO detection was conducted by PCR and nested PCR-based techniques using specific primers for the foreign DNA elements most commonly used in genetic transformation procedures, i.e., the 35S promoter, T-nos, epsps, cryIA(b) and nptII genes. The results revealed, for the first time in Syria, the presence of GM foods and feeds carrying the glyphosate-resistance trait, with the P35S promoter and NOS terminator detected in imported soybean samples at high frequency (5 of the 6 imported soybean samples), while tests on the local samples were negative. Tests also revealed GMOs in two imported maize samples, detecting the presence of the 35S promoter and nos terminator. Nested PCR results using two sets of primers confirmed our data. The methods applied in this data brief are based on DNA analysis by polymerase chain reaction (PCR). This technique is specific, practical, reproducible and sensitive enough to detect GMO content as low as 0.1% in food and/or feedstuffs. Furthermore, all of the techniques mentioned are economical and can be applied in Syria and other developing countries. For all these reasons, the DNA-based analysis methods were chosen and preferred over protein-based analysis.

  14. Sampling and analysis plan (SAP) for WESF drains and TK-100 sump

    International Nuclear Information System (INIS)

    Simmons, F.M.

    1998-01-01

    The intent of this project is to determine whether the Waste Encapsulation and Storage Facility (WESF) floor drain piping and the TK-100 sump are free from contamination. TK-100 is currently used as a catch tank to transfer low level liquid waste from WESF to Tank Farms via B Plant. This system is being modified as part of the WESF decoupling since B Plant is being deactivated. As a result of the 1,1,1-trichloroethane (TCA) discovery in TK-100, the associated WESF floor drains and the pit sump need to be sampled. Breakdown constituents have been reviewed and found to be non-hazardous. There are 29 floor drains that tie into a common header leading into the tank. To prevent high exposure during sampling of the drains, TK-100 will be removed into the B Plant canyon and a new tank will be placed in the pit before any floor drain samples are taken. The sump will be sampled prior to TK-100 removal. A sample of the sludge and any liquid in the sump will be taken and analyzed for TCA and polychlorinated biphenyl (PCB). After the sump has been sampled, the vault floor will be flushed. The flush will be transferred from the sump into TK-100. TK-100 will be moved into B Plant. The vault will then be cleaned of debris and visually inspected. If there is no visual indication of TCA or PCB staining, the vault will be painted and a new tank installed. If there is an indication of TCA or PCB from laboratory analysis or staining, further negotiations will be required to determine a path forward. A total of 8 sets of three 40ml samples will be required for all of the floor drains and sump. The sump set will include one 125ml solid sample. The only analysis required will be for TCA in liquids. PCBs will be checked in sump solids only. The Sampling and Analysis Plan (SAP) is written to provide direction for the sampling and analytical activities of the 29 WESF floor drains and the TK-100 sump. The intent of this plan is to define the responsibilities of the various organizations

  15. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots.

    Science.gov (United States)

    Gilbert, Hunter B; Webster, Robert J

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C.
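
    The abstract reports closed-loop temperature regulation during Joule heating but does not describe the controller itself. As a purely illustrative sketch, the code below simulates a generic proportional controller on a first-order lumped thermal model; the model, gains and temperatures are assumptions for demonstration, not parameters from the paper.

        # Illustrative closed-loop Joule-heating simulation: a proportional
        # controller drives electrical power into a first-order thermal model
        #   C * dT/dt = P - k * (T - T_amb)
        # All parameters are assumptions for demonstration only.
        import numpy as np

        T_amb, C, k = 25.0, 2.0, 0.05          # ambient (°C), heat capacity (J/°C), loss (W/°C)
        T_set, P_max, Kp = 500.0, 50.0, 2.0    # target temperature, power limit, proportional gain

        T, dt = T_amb, 0.05                    # start at ambient; 50 ms time step
        for _ in range(12_000):                # ~10 minutes of simulated time
            P = float(np.clip(Kp * (T_set - T), 0.0, P_max))  # saturated proportional control
            T += dt * (P - k * (T - T_amb)) / C
        print(f"final temperature: {T:.1f} °C")

    In a real shape-setting rig the temperature would be measured (for example with a thermocouple) and the regulation and timing matter for avoiding the unintended aging mentioned above; none of that detail is reproduced here.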

  16. Thriving rough sets 10th anniversary: honoring Professor Zdzisław Pawlak's life and legacy & 35 years of rough sets

    CERN Document Server

    Skowron, Andrzej; Yao, Yiyu; Ślęzak, Dominik; Polkowski, Lech

    2017-01-01

    This special book is dedicated to the memory of Professor Zdzisław Pawlak, the father of rough set theory, in order to commemorate both the 10th anniversary of his passing and 35 years of rough set theory. The book consists of 20 chapters distributed into four sections, which focus in turn on a historical review of Professor Zdzisław Pawlak and rough set theory; a review of the theory of rough sets; the state of the art of rough set theory; and major developments in rough set based data mining approaches. Apart from Professor Pawlak’s contributions to rough set theory, other areas he was interested in are also included. Moreover, recent theoretical studies and advances in applications are also presented. The book will offer a useful guide for researchers in Knowledge Engineering and Data Mining by suggesting new approaches to solving the problems they encounter.

  17. International spinal cord injury pulmonary function basic data set.

    Science.gov (United States)

    Biering-Sørensen, F; Krassioukov, A; Alexander, M S; Donovan, W; Karlsson, A-K; Mueller, G; Perkash, I; Sheel, A William; Wecht, J; Schilero, G J

    2012-06-01

    To develop the International Spinal Cord Injury (SCI) Pulmonary Function Basic Data Set within the framework of the International SCI Data Sets in order to facilitate consistent collection and reporting of basic bronchopulmonary findings in the SCI population. International. The SCI Pulmonary Function Data Set was developed by an international working group. The initial data set document was revised on the basis of suggestions from members of the Executive Committee of the International SCI Standards and Data Sets, the International Spinal Cord Society (ISCoS) Executive and Scientific Committees, American Spinal Injury Association (ASIA) Board, other interested organizations and societies and individual reviewers. In addition, the data set was posted for 2 months on the ISCoS and ASIA websites for comments. The final International SCI Pulmonary Function Data Set contains questions on the pulmonary conditions diagnosed before the spinal cord lesion, if available, to be obtained only once; smoking history; and pulmonary complications and conditions after the spinal cord lesion, which may be collected at any time. These data include information on pneumonia, asthma, chronic obstructive pulmonary disease and sleep apnea. Current utilization of ventilator assistance, including mechanical ventilation, diaphragmatic pacing, phrenic nerve stimulation and bi-level positive airway pressure, can be reported, as well as results from pulmonary function testing: forced vital capacity, forced expiratory volume in one second and peak expiratory flow. The complete instructions for data collection and the data sheet itself are freely available on the website of ISCoS (http://www.iscos.org.uk).

  18. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    Science.gov (United States)

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background: Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods: We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results: The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions: When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. PMID:24360650
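
    The simulation idea can be sketched as chain-referral (RDS-like) sampling on a toy network whose largest component dominates, comparing a trait prevalence in the sample with the whole-network value. The graph, trait and recruitment rules below are hypothetical stand-ins for the study's data, and the sketch assumes the networkx package is available.

        # RDS-like chain-referral sampling on a toy network with a dominant
        # giant component; the trait is assigned at random and is unrelated
        # to the actual Vilnius data set described above.
        import random
        import networkx as nx

        random.seed(1)
        G = nx.erdos_renyi_graph(300, 0.02, seed=1)        # toy network with a giant component
        trait = {n: random.random() < 0.10 for n in G}     # ~10% prevalence, independent of ties

        def rds_sample(G, seeds, coupons=3, target=150):
            """Breadth-first referral: each recruit hands out up to `coupons` coupons."""
            sampled, queue = set(seeds), list(seeds)
            while queue and len(sampled) < target:
                recruiter = queue.pop(0)
                peers = [p for p in G.neighbors(recruiter) if p not in sampled]
                for peer in random.sample(peers, min(coupons, len(peers))):
                    sampled.add(peer)
                    queue.append(peer)
            return sampled

        giant = max(nx.connected_components(G), key=len)
        seeds = random.sample(sorted(giant), 5)            # all seeds from the large component
        sample = rds_sample(G, seeds)

        prev_sample = sum(trait[n] for n in sample) / len(sample)
        prev_all = sum(trait.values()) / len(trait)
        print(f"sample prevalence {prev_sample:.3f} vs whole-network {prev_all:.3f}")

    Because the seeds sit in the giant component, the referral chains essentially re-create that component, which is the mechanism behind the "strudel effect" described above.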

  19. Factor structure of the Essen Climate Evaluation Schema measure of social climate in a UK medium-security setting.

    Science.gov (United States)

    Milsom, Sophia A; Freestone, Mark; Duller, Rachel; Bouman, Marisa; Taylor, Celia

    2014-04-01

    Social climate has an influence on a number of treatment-related factors, including service users' behaviour, staff morale and treatment outcomes. Reliable assessment of social climate is, therefore, beneficial within forensic mental health settings. The Essen Climate Evaluation Schema (EssenCES) has been validated in forensic mental health services in the UK and Germany. Preliminary normative data have been produced for UK high-security national health services and German medium-security and high-security services. We aim to validate the use of the EssenCES scale (English version) and provide preliminary normative data in UK medium-security hospital settings. The EssenCES scale was completed in a medium-security mental health service as part of a service-wide audit. A total of 89 patients and 112 staff completed the EssenCES. The three-factor structure of the EssenCES and its internal construct validity were maintained within the sample. Scores from this medium-security hospital sample were significantly higher than those from earlier high-security hospital data, with three exceptions: 'patient cohesion' according to the patients, and 'therapeutic hold' according to staff and patients. Our data support the use of the EssenCES scale as a valid measure for assessing social climate within medium-security hospital settings. Significant differences between the means of the high-security and medium-security service samples imply that the degree of security is a relevant factor affecting ward climate, and that in monitoring the quality of secure services it is likely to be important to apply different scores to reflect standards. Copyright © 2013 John Wiley & Sons, Ltd.

  20. How iSamples (Internet of Samples in the Earth Sciences) Improves Sample and Data Stewardship in the Next Generation of Geoscientists

    Science.gov (United States)

    Hallett, B. W.; Dere, A. L. D.; Lehnert, K.; Carter, M.

    2016-12-01

    Vast numbers of physical samples are routinely collected by geoscientists to probe key scientific questions related to global climate change, biogeochemical cycles, magmatic processes, mantle dynamics, etc. Despite their value as irreplaceable records of nature, the majority of these samples remain undiscoverable by the broader scientific community because they lack a digital presence or are not documented well enough to facilitate their discovery and reuse for future scientific and educational use. The NSF EarthCube iSamples Research Coordination Network seeks to develop a unified approach across all Earth Science disciplines for the registration, description, identification, and citation of physical specimens in order to take advantage of the new opportunities that cyberinfrastructure offers. Even as consensus around best practices begins to emerge, such as the use of the International Geo Sample Number (IGSN), more work is needed to communicate these practices to investigators to encourage widespread adoption. Recognizing the importance of students and early career scientists in particular to transforming data and sample management practices, the iSamples Education and Training Working Group is developing training modules for sample collection, documentation, and management workflows. These training materials are made available to educators/research supervisors online at http://earthcube.org/group/isamples and can be modularized for supervisors to create a customized research workflow. This study details the design and development of several sample management tutorials, created by early career scientists and documented in collaboration with undergraduate research students in field and lab settings. Modules under development focus on rock outcrops, rock cores, soil cores, and coral samples, with an emphasis on sample management throughout the collection, analysis and archiving process. We invite others to share their sample management/registration workflows and to