WorldWideScience

Sample records for outcome-dependent sampling designs

  1. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected under this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normal. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study.
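The weighted partial likelihood idea in this abstract can be illustrated numerically. The sketch below is not the authors' estimator or design criterion: it simulates a one-covariate Cox model, applies a hypothetical ODS scheme (all failures retained, an assumed 50% random subsample of censored subjects upweighted by 2), and maximizes a Breslow-type weighted log partial likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 600
x = rng.normal(size=n)
beta_true = 0.5
t = rng.exponential(1.0 / np.exp(beta_true * x))   # failure times, hazard exp(beta*x)
c = rng.exponential(2.0, size=n)                   # censoring times
time = np.minimum(t, c)
event = t <= c

# Hypothetical ODS scheme: keep every failure, keep censored subjects with
# probability 0.5, and weight the retained censored subjects by 1/0.5 = 2.
keep = event | (rng.random(n) < 0.5)
xs, ts, es = x[keep], time[keep], event[keep].astype(float)
ws = np.where(es == 1, 1.0, 2.0)

def neg_wpl(beta):
    """Weighted negative log partial likelihood (Breslow-type risk sets)."""
    order = np.argsort(ts)
    xo, eo, wo = xs[order], es[order], ws[order]
    risk = wo * np.exp(beta * xo)
    denom = np.cumsum(risk[::-1])[::-1]   # weighted risk-set total at each time
    return -np.sum(wo * eo * (beta * xo - np.log(denom)))

beta_hat = minimize_scalar(neg_wpl, bounds=(-3.0, 3.0), method="bounded").x
```

With a sample of this size the weighted estimate should land near the true value 0.5, whereas ignoring the weights would bias the risk sets toward failures.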

  3. Graphical models for inference under outcome-dependent sampling

    DEFF Research Database (Denmark)

    Didelez, V; Kreiner, S; Keiding, N

    2010-01-01

We consider situations where data have been collected such that the sampling depends on the outcome of interest and possibly further covariates, as for instance in case-control studies. Graphical models represent assumptions about the conditional independencies among the variables. By including a node for the sampling indicator, assumptions about sampling processes can be made explicit. We demonstrate how to read off such graphs whether consistent estimation of the association between exposure and outcome is possible. Moreover, we give sufficient graphical conditions for testing and estimating...

  4. Statistical inference for the additive hazards model under outcome-dependent sampling.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

Cost-effective study designs and proper inference procedures for the resulting data are of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive its asymptotic properties. We also provide guidance for using the proposed method by evaluating its efficiency relative to the simple random sampling design and by deriving the optimal allocation of the subsamples. Simulation studies show that the proposed ODS design is more powerful than other existing designs and that the proposed estimator is more efficient than other estimators. We apply our method to the Cancer Incidence and Mortality of Uranium Miners Study, a cancer study conducted at NIEHS, to study the risk of cancer associated with radon exposure.

  5. Joint analysis of binary and quantitative traits with data sharing and outcome-dependent sampling.

    Science.gov (United States)

    Zheng, Gang; Wu, Colin O; Kwak, Minjung; Jiang, Wenhua; Joo, Jungnam; Lima, Joao A C

    2012-04-01

We study the joint association between a genetic marker and both binary (case-control) and quantitative (continuous) traits, where the quantitative trait values are available only for the cases owing to data sharing and outcome-dependent sampling. Data sharing has become common in genetic association studies, and outcome-dependent sampling is a consequence of data sharing, under which a phenotype of interest is not measured for some subgroup. The trend test (or Pearson's test) and the F-test are often used to analyze the binary and quantitative traits, respectively. Because of the outcome-dependent sampling, the usual F-test can only be applied to the subgroup with observed quantitative traits. We propose a modified F-test that also incorporates the genotype frequencies of the subgroup whose traits are not observed. Further, we propose a joint analysis that combines this modified F-test and Pearson's test through Fisher's combination of their P-values. Because the two analyses are correlated, we propose fitting the asymptotic null distribution of the joint statistic with a Gamma (scaled chi-squared) distribution. The proposed modified F-test and the joint analysis can also be applied to test single-trait association (either binary or quantitative). Through simulations, we identify the situations under which the proposed tests are more powerful than the existing ones. An application to a real dataset on rheumatoid arthritis is presented. © 2012 Wiley Periodicals, Inc.
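The combination step described in this abstract can be sketched with a Monte Carlo illustration. This is a generic moment-matched Gamma calibration of Fisher's statistic for two correlated tests, with an assumed correlation of 0.4, not the paper's specific derivation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
B = 20000
rho = 0.4   # assumed correlation between the two test statistics under the null
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=B)
p = 2 * stats.norm.sf(np.abs(z))           # two-sided p-values for each test
fisher = -2 * np.log(p).sum(axis=1)        # Fisher's combination statistic

# Under independence, fisher ~ chi-squared(4). With correlated tests, fit a
# Gamma (scaled chi-squared) by matching moments: mean = k*theta, var = k*theta^2.
m, v = fisher.mean(), fisher.var()
theta = v / m
k = m / theta
p_combined = stats.gamma.sf(fisher, a=k, scale=theta)
```

Because the null draws are fed back through the moment-matched Gamma, the combined p-values should be approximately uniform, which is the calibration property the abstract relies on.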

  6. [Saarland Growth Study: sampling design].

    Science.gov (United States)

    Danker-Hopfe, H; Zabransky, S

    2000-01-01

The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in the paediatric outpatient clinic. The construction of such references requires the collection of extensive data. There are different kinds of reference data: cross-sectional references, based on data from a large representative cross-sectional sample of the population; longitudinal references, based on follow-up surveys of usually smaller samples of individuals from birth to maturity; and mixed-longitudinal references, a combination of longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and of the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, no specific reference data exist for children and adolescents from this part of the country, and the growth charts used in paediatric practice may no longer be appropriate. The Saarland Growth Study therefore served two purposes: a) to create up-to-date regional reference data and b) to create a database for future studies on secular trends in the growth of children and adolescents from Saarland. The present contribution focuses on general remarks on the sampling design of (cross-sectional) growth surveys and its implications for the design of the present study.

  7. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

Potential Mars Sample Return (MSR) missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization (SHEC) subsystem has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other sample return missions. Requirements and design options for the SHEC subsystem were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design uses sample tool bit change-out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operation on a MER-class rover.

  8. Mobile Variable Depth Sampling System Design Study

    International Nuclear Information System (INIS)

    BOGER, R.M.

    2000-01-01

A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on the Phase 1, Case 3S6 waste feed sequence. Technical information that supports the design study is also presented.

  10. Sample design effects in landscape genetics

    Science.gov (United States)

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), alleles per locus (5 and 10), individuals sampled (10-300), and generations after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that the random, linear, and systematic sampling regimes performed well given high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes could not correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales, and suggests careful consideration of sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  11. Thermal probe design for Europa sample acquisition

    Science.gov (United States)

    Horne, Mera F.

    2018-01-01

The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to a depth of 10 cm and to collect five ice samples of approximately 7 cm3 each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and the energy required to melt 7 cm3 of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe, 1.0 cm in diameter with the same 16 cm length, could be used; it would require approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantages of simplicity of design and operation and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.

  12. Designing an enhanced groundwater sample collection system

    International Nuclear Information System (INIS)

    Schalla, R.

    1994-10-01

As part of an ongoing technical support mission to achieve excellence and efficiency in environmental restoration activities at the Laboratory for Energy and Health-Related Research (LEHR), Pacific Northwest Laboratory (PNL) provided guidance on the design and construction of monitoring wells and identified the most suitable type of groundwater sampling pump and accessories for monitoring wells. The goal was to use a monitoring well design that would allow for hydrologic testing and reduce turbidity to minimize the impact of sampling. The sampling results of the newly designed monitoring wells were clearly superior to those of the previously installed monitoring wells. The new wells exhibited reduced turbidity, in addition to improved access for instrumentation and hydrologic testing. The variable frequency submersible pump was selected as the best choice for obtaining groundwater samples. The literature references are listed at the end of this report. Despite some initial difficulties, the variable frequency submersible pump and its accessories were effective in reducing sampling time and labor costs, and their ease of use was preferred over the previously used bladder pumps. The surface seal system, called the Dedicator, proved to be a useful accessory that prevents surface contamination while providing easy access for water-level measurements and for connecting the pump. Cost savings resulted from the use of the pre-production pumps (beta units) donated by the manufacturer for the demonstration. However, larger savings resulted from shortened field time due to the ease of using the submersible pumps and the surface seal access system. Proper deployment of the monitoring wells also resulted in cost savings and ensured representative samples.

  13. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

Our objective was to explore classification rules, based on data mining methodologies, for defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and from population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider's location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by type of provider location and number of beds explained 2% and 0.2% of the variance, respectively. This study demonstrated that data mining methods can be used to design efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
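The two-step procedure described in this abstract (cluster, then extract rules) can be sketched generically. The data below are synthetic stand-ins for the paper's provider variables; the variable names and distributions are assumptions, not the study's data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# Hypothetical provider profiles: inpatients per specialist and number of beds.
X = np.column_stack([
    rng.gamma(2.0, 30.0, 500),   # inpatients per specialist
    rng.gamma(2.0, 15.0, 500),   # number of beds
])

# Step 1: k-means groups providers with similar characteristics.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Step 2: a shallow decision tree fit to the cluster labels yields
# human-readable threshold rules on the original variables; its leaves
# serve as the strata.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
strata = tree.apply(X)   # leaf index = stratum assignment
```

The tree is the piece that makes the stratification operational: unlike raw k-means centroids, its splits ("inpatients per specialist <= t") can be applied directly to new providers.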

  14. Outcome dependency alters the neural substrates of impression formation

    Science.gov (United States)

    Ames, Daniel L.; Fiske, Susan T.

    2015-01-01

    How do people maintain consistent impressions of other people when other people are often inconsistent? The present research addresses this question by combining recent neuroscientific insights with ecologically meaningful behavioral methods. Participants formed impressions of real people whom they met in a personally involving situation. fMRI and supporting behavioral data revealed that outcome dependency (i.e., depending on another person for a desired outcome) alters previously identified neural dynamics of impression formation. Consistent with past research, a functional localizer identified a region of dorsomedial PFC previously linked to social impression formation. In the main task, this ROI revealed the predicted patterns of activity across outcome dependency conditions: greater BOLD response when information confirmed (vs. violated) social expectations if participants were outcome-independent and the reverse pattern if participants were outcome-dependent. We suggest that, although social perceivers often discount expectancy-disconfirming information as noise, being dependent on another person for a desired outcome focuses impression-formation processing on the most diagnostic information, rather than on the most tractable information. PMID:23850465

  15. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...
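The weighting idea in this abstract can be illustrated with a toy Hansen-Hurwitz-style estimator. The numbers, and the assumption that selection probability is proportional to in-degree, are illustrative only; the paper's point is precisely that the true probabilities must be approximated:

```python
import numpy as np

# Hypothetical snowball sample: a trait value for each subject and the
# subject's in-degree (number of acquaintances who could have nominated them).
value = np.array([10.0, 12.0, 8.0, 20.0, 15.0])
indeg = np.array([1, 2, 1, 5, 4])

# Assume selection probability proportional to in-degree, so weight by 1/in-degree.
w = 1.0 / indeg
naive = value.mean()                         # ignores the selection bias
weighted = np.sum(w * value) / np.sum(w)     # down-weights easily reached subjects
```

Here the high-degree subjects also have high trait values, so the weighted estimate falls below the naive mean, which is the direction of correction the abstract describes.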

  16. 30 CFR 71.208 - Bimonthly sampling; designated work positions.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Bimonthly sampling; designated work positions... UNDERGROUND COAL MINES Sampling Procedures § 71.208 Bimonthly sampling; designated work positions. (a) Each... standard when quartz is present), respirable dust sampling of designated work positions shall begin on the...

  17. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

In this paper, three ratio estimators for the double sampling design are proposed, with the intention of finding an alternative to the conventional ratio estimator in double sampling discussed by Cochran (1997), Okafor (2002), Raj (1972), and Raj and Chandhok (1999).

  18. Analysing designed experiments in distance sampling

    Science.gov (United States)

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...

  19. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  20. Lagoa Real design. Description and evaluation of sampling system

    International Nuclear Information System (INIS)

    Hashizume, B.K.

    1982-10-01

This report describes the sample preparation system for drill cores from the Lagoa Real project, aimed at obtaining a representative fraction of each drill-core half. The combined sampling-plus-analysis error and the analytical accuracy were determined by delayed neutron analysis. (author)

  1. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  2. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies the specified design criteria. The new design method will graphically show how the discrete...

  3. Sampling design for use by the soil decontamination project

    International Nuclear Information System (INIS)

    Rutherford, D.W.; Stevens, J.R.

    1981-01-01

This report proposes a general approach to the problem and discusses sampling of soil to map the contaminated area and to provide samples for characterization of soil components and contamination. Basic concepts in sample design are reviewed with reference to environmental transuranic studies. Common designs are reviewed and evaluated for use with the specific objectives that might be required by the soil decontamination project. Examples of a hierarchical design pilot study and a combined hierarchical and grid study are proposed for the Rocky Flats 903 pad area.

  4. System design description for sampling fuel in K basins

    International Nuclear Information System (INIS)

    Baker, R.B.

    1996-01-01

This System Design Description provides: (1) statements of the Spent Nuclear Fuel Project's (SNFP) needs requiring sampling of fuel in the K East and K West Basins, (2) the sampling equipment functions and requirements, (3) a general work plan and the design logic being followed to develop the equipment, and (4) a summary description of the design of the sampling equipment. The report summarizes the integrated application of both the subject equipment and the canister sludge sampler in near-term characterization campaigns at the K Basins.

  5. Design and development of multiple sample counting setup

    International Nuclear Information System (INIS)

    Rath, D.P.; Murali, S.; Babu, D.A.R.

    2010-01-01

The routine analysis of active samples, for ambient air activity and floor contamination from the radiochemical laboratory, accounts for a major share of the operational activity for which the Health Physicist is responsible. The requirement for daily air sample analysis with immediate and delayed counting from various laboratories, in addition to smear swipe checks of the laboratories, led to the need for a system that could carry out multiple sample analyses in a time-programmed manner from a single sample loading. A multiple alpha/beta counting system was designed and fabricated. It has arrangements for loading 10 samples in ordered slots, which are then counted in a time-programmed manner, with results displayed and records maintained on a PC. The paper describes the design and development of the multiple sample counting setup presently in use at the facility, which has reduced the man-hours consumed in counting and recording the results.

  6. Probability sampling design in ethnobotanical surveys of medicinal plants

    Directory of Open Access Journals (Sweden)

    Mariano Martinez Espinosa

    2012-07-01

Non-probability sampling designs can be used in ethnobotanical surveys of medicinal plants. However, such methods do not allow statistical inferences to be made from the data generated. The aim of this paper is to present a probability sampling design that is applicable in ethnobotanical studies of medicinal plants. The sampling design employed in the research titled "Ethnobotanical knowledge of medicinal plants used by traditional communities of Nossa Senhora Aparecida do Chumbo district (NSACD), Poconé, Mato Grosso, Brazil" was used as a case study. Probability sampling methods (simple random and stratified sampling) were used in this study. In order to determine the sample size, the following data were considered: population size (N) of 1179 families; confidence coefficient of 95%; sampling error (d) of 0.05; and a proportion (p) of 0.5. The application of this sampling method resulted in a sample size (n) of at least 290 families in the district. The present study concludes that probability sampling methods necessarily have to be employed in ethnobotanical studies of medicinal plants, particularly where statistical inferences have to be made from the data obtained. This can be achieved by applying different existing probability sampling methods, or better still, a combination of such methods.
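The sample size reported in this abstract follows from the standard formula for estimating a proportion with a finite population correction. A minimal sketch, using the abstract's stated inputs (N = 1179, p = 0.5, d = 0.05, 95% confidence, i.e. z = 1.96):

```python
import math

def sample_size(N, p=0.5, d=0.05, z=1.96):
    """Sample size for estimating a proportion, with finite population correction."""
    n0 = z**2 * p * (1 - p) / d**2          # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))

n = sample_size(1179)   # reproduces the abstract's figure of 290 families
```

With N = 1179 this yields 290, matching the abstract; as N grows large the correction vanishes and the result approaches the familiar n0 of roughly 385.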

  7. Design of a gravity corer for near shore sediment sampling

    Digital Repository Service at National Institute of Oceanography (India)

    Bhat, S.T.; Sonawane, A.V.; Nayak, B.U.

    For the purpose of geotechnical investigation a gravity corer has been designed and fabricated to obtain undisturbed sediment core samples from near shore waters. The corer was successfully operated at 75 stations up to water depth 30 m. Simplicity...

  8. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg.

  9. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
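The sample size inflation this abstract describes can be sketched with the textbook design-effect adjustment. This is a generic illustration with assumed inputs (an SRS-based LQAS size of 192, clusters of 10, ICC of 0.1), not the authors' nonparametric procedure, which additionally handles finite numbers of clusters:

```python
import math

def inflate_for_clustering(n_srs, m, icc):
    """Inflate a simple-random-sampling LQAS sample size by the design effect
    DEFF = 1 + (m - 1) * ICC, for clusters of size m, and report the number
    of clusters needed."""
    deff = 1 + (m - 1) * icc
    n = math.ceil(n_srs * deff)
    clusters = math.ceil(n / m)
    return n, clusters

n, k = inflate_for_clustering(n_srs=192, m=10, icc=0.1)
```

With these assumed inputs the design effect is 1.9, so 192 subjects under SRS becomes 365 subjects spread over 37 clusters; the clustering nearly doubles the required sample.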

  10. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
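The X%/Y% clearance statement quoted in this abstract has a simple purely probabilistic counterpart: if n randomly placed samples are all negative, the number of samples needed for an X%/Y% statement satisfies n >= log(1 - X) / log(Y). This sketch shows that baseline calculation only; the report's CJR approach additionally folds judgmental samples in via Bayesian methodology:

```python
import math

def clearance_n(conf, frac_clean):
    """Number of all-negative, randomly placed samples needed to state, with
    `conf` confidence, that at least `frac_clean` of the area is free of
    detectable contamination (probabilistic-only version)."""
    return math.ceil(math.log(1 - conf) / math.log(frac_clean))

n_95_99 = clearance_n(0.95, 0.99)   # a 95%/99% clearance statement
```

A 95%/99% statement needs 299 all-negative random samples, which illustrates why combining judgmental and probabilistic samples to reduce sample numbers is attractive.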

  11. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...
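The binary classification at the heart of LQAS reduces to a binomial decision rule. The sketch below uses the common 19/13 vaccination-coverage design as an illustration; the thresholds are textbook values, not taken from this record:

```python
from math import comb

def lqas_errors(n: int, d: int, p_low: float, p_high: float):
    """Classify an area as 'acceptable' when at least d of n sampled units
    have the indicator. Returns (alpha, beta):
    alpha = P(classify poor | true coverage p_high),
    beta  = P(classify acceptable | true coverage p_low)."""
    def binom_cdf(k, n, p):  # P(X <= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
    alpha = binom_cdf(d - 1, n, p_high)    # good area misclassified as poor
    beta = 1 - binom_cdf(d - 1, n, p_low)  # poor area misclassified as acceptable
    return alpha, beta

# Classic design: n = 19, decision rule d = 13, poor below 50%, good above 80%
a, b = lqas_errors(19, 13, p_low=0.50, p_high=0.80)
print(round(a, 3), round(b, 3))  # both risks below 10%
```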

  12. Sample design for the residential energy consumption survey

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    The purpose of this report is to provide detailed information about the multistage area-probability sample design used for the Residential Energy Consumption Survey (RECS). It is intended as a technical report, for use by statisticians, to better understand the theory and procedures followed in the creation of the RECS sample frame. For a more cursory overview of the RECS sample design, refer to the appendix entitled "How the Survey was Conducted," which is included in the statistical reports produced for each RECS survey year.
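The first stage of a multistage area-probability design of this kind is typically a probability-proportional-to-size (PPS) draw of primary sampling units. A minimal sketch of systematic PPS selection with illustrative size measures (the actual RECS frame construction is considerably more involved):

```python
import random

def pps_systematic(sizes, n):
    """Select n primary sampling units with probability proportional to
    size, via systematic sampling along the cumulated size measure."""
    total = sum(sizes)
    step = total / n
    start = random.uniform(0, step)
    picks, cum, j = [], 0.0, 0
    for target in (start + k * step for k in range(n)):
        # advance to the unit whose cumulated-size interval contains target
        while cum + sizes[j] <= target:
            cum += sizes[j]
            j += 1
        picks.append(j)
    return picks

random.seed(1)
print(pps_systematic([120, 80, 300, 40, 160, 100], 3))  # indices of chosen PSUs
```

A unit whose size exceeds the skip interval can be selected more than once; real frames usually split such units beforehand.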

  13. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...

  14. Design compliance matrix waste sample container filling system for nested, fixed-depth sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This design compliance matrix document provides specific design related functional characteristics, constraints, and requirements for the container filling system that is part of the nested, fixed-depth sampling system. This document addresses performance, external interfaces, ALARA, Authorization Basis, environmental and design code requirements for the container filling system. The container filling system will interface with the waste stream from the fluidic pumping channels of the nested, fixed-depth sampling system and will fill containers with waste that meet the Resource Conservation and Recovery Act (RCRA) criteria for waste that contains volatile and semi-volatile organic materials. The specifications for the nested, fixed-depth sampling system are described in a Level 2 Specification document (HNF-3483, Rev. 1). The basis for this design compliance matrix document is the Tank Waste Remediation System (TWRS) desk instructions for design Compliance matrix documents (PI-CP-008-00, Rev. 0)

  15. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  16. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  17. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
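One way the clustering parameterization enters such designs is through a design effect driven by the intraclass correlation (ICC). A small sketch under that single parameterization (cluster size and ICC are hypothetical; the methods compared in this paper parameterize clustering in different ways, which is precisely the issue it examines):

```python
import math

def cluster_lqas_size(n_srs: int, cluster_size: int, icc: float) -> int:
    """Inflate a simple-random-sample LQAS size by the design effect
    deff = 1 + (m - 1) * ICC to keep comparable precision under
    cluster sampling (one common clustering parameterization)."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_srs * deff)

print(cluster_lqas_size(19, cluster_size=5, icc=0.1))  # deff = 1.4 -> 27
print(cluster_lqas_size(19, cluster_size=1, icc=0.3))  # no clustering -> 19
```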

  18. Sample design considerations of indoor air exposure surveys

    International Nuclear Information System (INIS)

    Cox, B.G.; Mage, D.T.; Immerman, F.W.

    1988-01-01

    Concern about the potential for indoor air pollution has prompted recent surveys of radon and NO2 concentrations in homes and personal exposure studies of volatile organics, carbon monoxide and pesticides, to name a few. The statistical problems in designing sample surveys that measure the physical environment are diverse and more complicated than those encountered in traditional surveys of human attitudes and attributes. This paper addresses issues encountered when designing indoor air quality (IAQ) studies. General statistical concepts related to target population definition, frame creation, and sample selection for area household surveys and telephone surveys are presented. The implications of different measurement approaches are discussed, and response rate considerations are described

  19. Latent spatial models and sampling design for landscape genetics

    Science.gov (United States)

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
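The model structure described here, a latent spatial random effect pushed through a multinomial observation model, can be sketched generatively. The following toy simulation (illustrative sizes, an exponential covariance, and a softmax link; a simplification of the authors' model) shows how the pieces fit together:

```python
import numpy as np

rng = np.random.default_rng(5)

# Latent spatial fields with exponential covariance drive multinomial
# allele probabilities at each sampling site via softmax.
n_sites, n_alleles, range_par = 40, 3, 0.3
coords = rng.uniform(0, 1, size=(n_sites, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-d / range_par)                         # exponential covariance
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n_sites)) # jitter for stability
eta = np.stack([L @ rng.normal(size=n_sites) for _ in range(n_alleles)], axis=1)
probs = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)  # softmax link
counts = np.array([rng.multinomial(20, p) for p in probs])    # 20 genes per site
print(counts.shape)  # (40, 3): sites x alleles
```

Fitting the real model inverts this process; the dimension-reduction methods mentioned in the abstract replace the dense covariance above with a low-rank representation.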

  20. Mechanical design and simulation of an automatized sample exchanger

    International Nuclear Information System (INIS)

    Lopez, Yon; Gora, Jimmy; Bedregal, Patricia; Hernandez, Yuri; Baltuano, Oscar; Gago, Javier

    2013-01-01

    The design of a turntable-type sample exchanger for irradiation, with a capacity of up to 20 capsules, was performed. Its function is the automatic sending of samples contained in polyethylene capsules, for irradiation in the grid position of the reactor core, using a pneumatic system and further analysis by neutron activation. This study presents the structural design analysis and the calculations involved in selecting motors and actuators. This development will improve efficiency in the analysis, reducing manual handling by workers and also the radiation exposure time. (authors).

  1. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    2017-09-03

    Bi-variable and multi-variable binary logistic regression models with complex sampling design were fitted. Data were entered into STATA-12 and analyzed using SPSS-21.
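Design-weighted point estimates for a binary logistic model of this kind can be sketched with weighted iteratively reweighted least squares; valid design-based standard errors additionally require linearization or replication, which this toy omits. All data and weights below are simulated:

```python
import numpy as np

def weighted_logit(X, y, w, iters=25):
    """Survey-weighted logistic regression point estimates via
    iteratively reweighted least squares (pseudo-maximum likelihood)."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = w * p * (1 - p)                     # design weight * variance term
        H = X.T @ (W[:, None] * X)              # weighted Hessian
        g = X.T @ (w * (y - p))                 # weighted score
        beta += np.linalg.solve(H, g)           # Newton step
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (rng.random(500) < 1 / (1 + np.exp(-(0.5 + 1.0 * x)))).astype(float)
w = rng.uniform(0.5, 2.0, size=500)             # hypothetical design weights
print(weighted_logit(x[:, None], y, w).round(2))  # near the true (0.5, 1.0)
```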

  2. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations, in terms of their number and spatial distribution, is named sampling design and has traditionally been addressed with a view to model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed through optimal network segmentation and the modularity index, using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
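The modularity metric underlying both the segmentation-oriented and the new sampling-oriented indices scores a partition by intra-module edges against expected degree. A minimal sketch of the classical unweighted form (not the paper's WDN-specific variants, which add pipe weights and task-specific terms):

```python
def modularity(edges, membership):
    """Newman-Girvan modularity Q for an undirected, unweighted network
    partition: Q = sum_c (e_c / m - (d_c / 2m)^2), where e_c is the
    number of intra-module edges and d_c the total degree in module c."""
    m = len(edges)
    intra, degree = {}, {}
    for u, v in edges:
        degree[membership[u]] = degree.get(membership[u], 0) + 1
        degree[membership[v]] = degree.get(membership[v], 0) + 1
        if membership[u] == membership[v]:
            intra[membership[u]] = intra.get(membership[u], 0) + 1
    modules = set(membership.values())
    return sum(intra.get(c, 0) / m - (degree.get(c, 0) / (2 * m)) ** 2
               for c in modules)

# Two triangles joined by a single bridge edge, cut at the bridge:
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, part), 3))  # 0.357: a good two-module cut
```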

  3. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  4. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.
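The inverse probability weighting alternative reviewed here is straightforward for a secondary, non-time-to-event quantity: weight each sampled subject by the reciprocal of its known inclusion probability. A simulated case-cohort illustration (all sampling fractions hypothetical):

```python
import numpy as np

def ipw_mean(z, is_case, subcohort_frac):
    """Horvitz-Thompson estimate of a cohort mean of covariate z from a
    case-cohort sample: cases are included with probability 1, noncases
    with probability `subcohort_frac`."""
    w = np.where(is_case, 1.0, 1.0 / subcohort_frac)
    return np.sum(w * z) / np.sum(w)

rng = np.random.default_rng(42)
n = 20000
z = rng.normal(1.0, 1.0, n)                         # secondary covariate
case = rng.random(n) < np.where(z > 1.5, 0.15, 0.02)  # outcome depends on z
keep = case | (rng.random(n) < 0.10)                # all cases + 10% subcohort
est = ipw_mean(z[keep], case[keep], subcohort_frac=0.10)
print(round(est, 2))  # close to the full-cohort mean of z (about 1.0)
```

The unweighted subsample mean would be biased upward here, because high-z cases are oversampled; the weights undo that selection.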

  5. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    Science.gov (United States)

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  6. ACS sampling system: design, implementation, and performance evaluation

    Science.gov (United States)

    Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca

    2004-09-01

    By means of the ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower and user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues we present the performance of the sampling system evaluated on two different platforms: on a VME based system using VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low cost PC compatible hardware environment with a free and open operating system.
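The batching idea, caching samples locally and forwarding them in packets at a lower user-defined frequency, can be sketched independently of ACS/CORBA. A minimal illustration (the class and callback names are hypothetical, not the ACS API):

```python
from collections import deque

class SampleBuffer:
    """Cache high-rate property samples locally and emit them in packets
    at a lower, user-defined rate to keep network load under control."""
    def __init__(self, flush_every: int, send):
        self.flush_every = flush_every   # samples per packet
        self.send = send                 # e.g. a notification-channel publish
        self.buf = deque()

    def sample(self, timestamp, value):
        self.buf.append((timestamp, value))
        if len(self.buf) >= self.flush_every:
            self.send(list(self.buf))    # one packet instead of 100 messages
            self.buf.clear()

packets = []
buf = SampleBuffer(flush_every=100, send=packets.append)
for i in range(250):                     # 250 samples at the fast rate
    buf.sample(i * 0.001, i)
print(len(packets), len(buf.buf))        # 2 packets sent, 50 still cached
```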

  7. The Study on Mental Health at Work: Design and sampling.

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-08-01

    The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.
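The two-stage draw (municipalities, then addresses) can be sketched as follows; equal allocation across selected municipalities is a simplification of the register-based S-MGA procedure, and all frame data below are synthetic:

```python
import random

def two_stage_sample(frame, n_psu, n_addresses):
    """Two-stage cluster sample: draw n_psu municipalities at random,
    then spread n_addresses addresses over the selected municipalities
    (equal allocation, a simplification of the S-MGA design)."""
    psus = random.sample(sorted(frame), n_psu)           # stage 1: clusters
    per_psu = n_addresses // n_psu
    return {m: random.sample(frame[m], min(per_psu, len(frame[m])))
            for m in psus}                               # stage 2: addresses

random.seed(7)
frame = {f"town{i}": [f"addr{i}-{j}" for j in range(200)] for i in range(40)}
drawn = two_stage_sample(frame, n_psu=5, n_addresses=100)
print(len(drawn), sum(len(a) for a in drawn.values()))   # 5 PSUs, 100 addresses
```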

  8. The Study on Mental Health at Work: Design and sampling

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-01-01

    Aims: The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. Methods: S-MGA is a representative study of German employees aged 31–60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. Results: In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. Conclusions: There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment. PMID:28673202

  9. A phoswich detector design for improved spatial sampling in PET

    Science.gov (United States)

    Thiessen, Jonathan D.; Koschan, Merry A.; Melcher, Charles L.; Meng, Fang; Schellenberg, Graham; Goertzen, Andrew L.

    2018-02-01

    Block detector designs, utilizing a pixelated scintillator array coupled to a photosensor array in a light-sharing design, are commonly used for positron emission tomography (PET) imaging applications. In practice, the spatial sampling of these designs is limited by the crystal pitch, which must be large enough for individual crystals to be resolved in the detector flood image. Replacing the conventional 2D scintillator array with an array of phoswich elements, each consisting of an optically coupled side-by-side scintillator pair, may improve spatial sampling in one direction of the array without requiring resolving smaller crystal elements. To test the feasibility of this design, a 4 × 4 phoswich array was constructed, with each phoswich element consisting of two optically coupled, 3.17 × 1.58 × 10 mm3 LSO crystals co-doped with cerium and calcium. The amount of calcium doping was varied to create a 'fast' LSO crystal with decay time of 32.9 ns and a 'slow' LSO crystal with decay time of 41.2 ns. Using a Hamamatsu R8900U-00-C12 position-sensitive photomultiplier tube (PS-PMT) and a CAEN V1720 250 MS/s waveform digitizer, we were able to show effective discrimination of the fast and slow LSO crystals in the phoswich array. Although a side-by-side phoswich array is feasible, reflections at the crystal boundary due to a mismatch between the refractive index of the optical adhesive (n = 1.5) and LSO (n = 1.82) caused it to behave optically as an 8 × 4 array rather than a 4 × 4 array. Direct coupling of each phoswich element to individual photodetector elements may be necessary with the current phoswich array design. Alternatively, in order to implement this phoswich design with a conventional light sharing PET block detector, a high refractive index optical adhesive is necessary to closely match the refractive index of LSO.
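Discriminating the 32.9 ns and 41.2 ns crystals rests on their pulses differing in the tail. A noise-free sketch of charge-ratio pulse-shape discrimination at the digitizer's 4 ns sampling period (the integration split point is an illustrative choice, not a value from the paper):

```python
import numpy as np

def tail_to_total(pulse, t, split):
    """Charge-ratio pulse shape discrimination: fraction of the pulse
    integral arriving after `split` (a slower decay leaves a larger tail)."""
    return pulse[t >= split].sum() / pulse.sum()

# Idealized single-exponential LSO pulses sampled at 4 ns (250 MS/s),
# with the decay constants reported for the fast and slow crystals.
t = np.arange(0, 400, 4.0)
fast = np.exp(-t / 32.9)
slow = np.exp(-t / 41.2)
r_fast = tail_to_total(fast, t, split=60.0)
r_slow = tail_to_total(slow, t, split=60.0)
print(r_fast < r_slow)  # True: the slow crystal keeps more charge in the tail
```

Real pulses add photostatistics and electronic noise, so the two ratio distributions overlap and the discrimination threshold must be set from measured histograms.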

  10. Sampling and energy evaluation challenges in ligand binding protein design.

    Science.gov (United States)

    Dou, Jiayi; Doyle, Lindsey; Jr Greisen, Per; Schena, Alberto; Park, Hahnbeom; Johnsson, Kai; Stoddard, Barry L; Baker, David

    2017-12-01

    The steroid hormone 17α-hydroxyprogesterone (17-OHP) is a biomarker for congenital adrenal hyperplasia and hence there is considerable interest in development of sensors for this compound. We used computational protein design to generate protein models with binding sites for 17-OHP containing an extended, nonpolar, shape-complementary binding pocket for the four-ring core of the compound, and hydrogen bonding residues at the base of the pocket to interact with carbonyl and hydroxyl groups at the more polar end of the ligand. Eight of 16 designed proteins experimentally tested bind 17-OHP with micromolar affinity. A co-crystal structure of one of the designs revealed that 17-OHP is rotated 180° around a pseudo-two-fold axis in the compound and displays multiple binding modes within the pocket, while still interacting with all of the designed residues in the engineered site. Subsequent rounds of mutagenesis and binding selection improved the ligand affinity to nanomolar range, while appearing to constrain the ligand to a single bound conformation that maintains the same "flipped" orientation relative to the original design. We trace the discrepancy in the design calculations to two sources: first, a failure to model subtle backbone changes which alter the distribution of sidechain rotameric states and second, an underestimation of the energetic cost of desolvating the carbonyl and hydroxyl groups of the ligand. The difference between design model and crystal structure thus arises from both sampling limitations and energy function inaccuracies that are exacerbated by the near two-fold symmetry of the molecule. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  11. A novel sampling design to explore gene-longevity associations

    DEFF Research Database (Denmark)

    De Rango, Francesco; Dato, Serena; Bellizzi, Dina

    2008-01-01

    To investigate the genetic contribution to familial similarity in longevity, we set up a novel experimental design where cousin-pairs born from siblings who were concordant or discordant for the longevity trait were analyzed. To check this design, two chromosomal regions already known to encompass ... from concordant and discordant siblings. In addition, we analyzed haplotype transmission from centenarians to offspring, and a statistically significant Transmission Ratio Distortion (TRD) was observed for both chromosomal regions in the discordant families (P=0.007 for 6p21.3 and P=0.015 for 11p15.5). In concordant families, a marginally significant TRD was observed at 6p21.3 only (P=0.06). Although no significant difference emerged between the two groups of cousin-pairs, our study gave new insights on the hindrances to recruiting a suitable sample to obtain significant IBD data on longevity ...

  12. Evaluation of design flood estimates with respect to sample size

    Science.gov (United States)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2 parameter distribution is recommended, and for more than 50 years of data, a 3 parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
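For records longer than 50 years, the 3-parameter route described here can be sketched directly: fit a generalized extreme value distribution to the annual maxima and read off the T-year quantile. A sketch with synthetic data (note that scipy's `genextreme` uses the shape sign convention c = -ξ):

```python
import numpy as np
from scipy.stats import genextreme

def design_flood(annual_maxima, return_period):
    """T-year design flood: fit a 3-parameter GEV to annual maxima by
    maximum likelihood and return the 1 - 1/T quantile (return level)."""
    c, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1 - 1 / return_period, c, loc=loc, scale=scale)

rng = np.random.default_rng(3)
# Synthetic 60-year record of annual maximum discharges (arbitrary units)
record = genextreme.rvs(-0.1, loc=100, scale=30, size=60, random_state=rng)
q100 = design_flood(record, return_period=100)
print(q100 > np.median(record))  # the 100-year flood far exceeds the median
```

The paper's point is precisely that such estimates are unstable for short records; cross-validating the fitted quantiles over subsamples exposes that instability.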

  13. Visual Sample Plan (VSP) Software: Designs and Data Analyses for Sampling Contaminated Buildings

    International Nuclear Information System (INIS)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Nuffer, Lisa L.; Hassig, Nancy L.

    2005-01-01

    A new module of the Visual Sample Plan (VSP) software has been developed to provide sampling designs and data analyses for potentially contaminated buildings. An important application is assessing levels of contamination in buildings after a terrorist attack. This new module, funded by DHS through the Combating Terrorism Technology Support Office, Technical Support Working Group, was developed to provide a tailored, user-friendly and visually-orientated buildings module within the existing VSP software toolkit, the latest version of which can be downloaded from http://dqo.pnl.gov/vsp. In case of, or when planning against, a chemical, biological, or radionuclide release within a building, the VSP module can be used to quickly and easily develop and visualize technically defensible sampling schemes for walls, floors, ceilings, and other surfaces to statistically determine if contamination is present, its magnitude and extent throughout the building and if decontamination has been effective. This paper demonstrates the features of this new VSP buildings module, which include: the ability to import building floor plans or to easily draw, manipulate, and view rooms in several ways; being able to insert doors, windows and annotations into a room; 3-D graphic room views with surfaces labeled and floor plans that show building zones that have separate air handling units. The paper will also discuss the statistical design and data analysis options available in the buildings module. Design objectives supported include comparing an average to a threshold when the data distribution is normal or unknown, and comparing measurements to a threshold to detect hotspots or to ensure most of the area is uncontaminated when the data distribution is normal or unknown
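The "compare an average to a threshold" objective for normally distributed data corresponds to a one-sided one-sample t-test. A sketch with simulated surface measurements (threshold and data are illustrative; VSP also supports nonparametric alternatives for unknown distributions):

```python
import numpy as np
from scipy import stats

def mean_below_threshold(measurements, threshold, alpha=0.05):
    """One-sided one-sample t-test for the 'compare an average to a
    threshold' objective: conclude the surface is clean only if we can
    reject mean >= threshold at level alpha (normality assumed)."""
    t, p_two = stats.ttest_1samp(measurements, threshold)
    p_one = p_two / 2 if t < 0 else 1 - p_two / 2  # one-sided p-value
    return bool(p_one < alpha)

rng = np.random.default_rng(0)
clean = rng.normal(2.0, 1.0, size=30)   # well below a threshold of 5
dirty = rng.normal(5.2, 1.0, size=30)   # hovering at/above the threshold
print(mean_below_threshold(clean, 5.0), mean_below_threshold(dirty, 5.0))
```

Failing to reject for the second surface does not prove contamination; it only means the data cannot support a clearance decision at the chosen confidence level.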

  14. Mars Rover Sample Return aerocapture configuration design and packaging constraints

    Science.gov (United States)

    Lawson, Shelby J.

    1989-01-01

    This paper discusses the aerodynamics requirements, volume and mass constraints that lead to a biconic aeroshell vehicle design that protects the Mars Rover Sample Return (MRSR) mission elements from launch to Mars landing. The aerodynamic requirements for Mars aerocapture and entry, together with packaging constraints for the MRSR elements, result in a symmetric biconic aeroshell that develops an L/D of 1.0 at 27.0 deg angle of attack. A significant problem in the study is obtaining a cg that provides adequate aerodynamic stability and performance within the mission imposed constraints. Packaging methods that relieve the cg problems include forward placement of aeroshell propellant tanks and incorporating aeroshell structure as lander structure. The MRSR missions developed during the pre-phase A study are discussed with dimensional and mass data included. Further study is needed for some missions to minimize MRSR element volume so that launch mass constraints can be met.

  15. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  16. The variance quadtree algorithm: use for spatial sampling design

    NARCIS (Netherlands)

    Minasny, B.; McBratney, A.B.; Walvoort, D.J.J.

    2007-01-01

    Spatial sampling schemes are mainly developed to determine sampling locations that can cover the variation of environmental properties in the area of interest. Here we proposed the variance quadtree algorithm for sampling in an area with prior information represented as ancillary or secondary

  17. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  18. Design unbiased estimation in line intersect sampling using segmented transects

    Science.gov (United States)

    David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine

    2005-01-01

    In many applications of line intersect sampling, transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...

  19. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available Case studies cover optimized field sampling schemes representing the overall distribution of a particular mineral and deriving optimal exploration target zones. Continuum removal is applied for vegetation [13, 27, 46]. The convex hull transform is a method of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...
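
    The rubber-band analogy in this record can be made concrete with a short sketch. This is an illustrative implementation, not the authors' code: the function names and the monotone-chain construction of the upper hull are my own choices, and the spectrum is assumed to be sampled at increasing wavelengths.

```python
import numpy as np

def upper_hull(x, y):
    """Return indices of the upper convex hull: the 'rubber band' stretched over the spectrum."""
    hull = []
    for i in range(len(x)):
        # pop interior points that fall on or below the chord from hull[-2] to the new point
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            cross = (x[i2] - x[i1]) * (y[i] - y[i1]) - (y[i2] - y[i1]) * (x[i] - x[i1])
            if cross >= 0:  # hull[-1] is not above the chord, so it is not on the upper hull
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def continuum_removed(wavelength, reflectance):
    """Divide the spectrum by its convex-hull continuum; hull points map to exactly 1.0."""
    idx = upper_hull(wavelength, reflectance)
    continuum = np.interp(wavelength, wavelength[idx], reflectance[idx])
    return reflectance / continuum
```

    Continuum-removed values are always at or below 1.0, with absorption features appearing as dips below the hull.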

  20. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. The proposed design method was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum, which implied that the waveform tried to excite all frequencies, including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate...

  1. Implications of Clinical Trial Design on Sample Size Requirements

    OpenAIRE

    Leon, Andrew C.

    2008-01-01

    The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two par...

  2. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  3. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members

  4. Implications of clinical trial design on sample size requirements.

    Science.gov (United States)

    Leon, Andrew C

    2008-07-01

    The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.

  5. NEON terrestrial field observations: designing continental scale, standardized sampling

    Science.gov (United States)

    R. H. Kao; C.M. Gibson; R. E. Gallery; C. L. Meier; D. T. Barnett; K. M. Docherty; K. K. Blevins; P. D. Travers; E. Azuaje; Y. P. Springer; K. M. Thibault; V. J. McKenzie; M. Keller; L. F. Alves; E. L. S. Hinckley; J. Parnell; D. Schimel

    2012-01-01

    Rapid changes in climate and land use and the resulting shifts in species distributions and ecosystem functions have motivated the development of the National Ecological Observatory Network (NEON). Integrating across spatial scales from ground sampling to remote sensing, NEON will provide data for users to address ecological responses to changes in climate, land use,...

  6. Chemical and Metallurgy Research (CMR) Sample Tracking System Design Document

    International Nuclear Information System (INIS)

    Bargelski, C. J.; Berrett, D. E.

    1998-01-01

    The purpose of this document is to describe the system architecture of the Chemical and Metallurgy Research (CMR) Sample Tracking System at Los Alamos National Laboratory. Throughout the document, observations are made concerning the objectives, constraints and limitations, technical approaches, and technical deliverables.

  7. Effects-Driven Participatory Design: Learning from Sampling Interruptions

    DEFF Research Database (Denmark)

    Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten

    2017-01-01

    a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians’ intra...

  8. Mobile platform sampling for designing environmental sensor networks.

    Science.gov (United States)

    Budi, Setia; de Souza, Paulo; Timms, Greg; Susanto, Ferry; Malhotra, Vishv; Turner, Paul

    2018-02-09

    This paper proposes a method to design the deployment of sensor nodes in a new region where historical data is not available. A number of mobile platforms are simulated to build initial knowledge of the region. Further, an evolutionary algorithm is employed to find the optimum placement of a given number of sensor nodes that best represents the region of interest.
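
    The record names an evolutionary algorithm but gives no details, so the following is a minimal stand-in sketch: the fitness criterion (mean squared distance from points of interest to their nearest sensor) and all function names are my own assumptions, not the paper's method.

```python
import random

def fitness(sensors, points):
    """Lower is better: mean squared distance from each point of interest to its nearest sensor."""
    return sum(min((px - sx) ** 2 + (py - sy) ** 2 for sx, sy in sensors)
               for px, py in points) / len(points)

def evolve(points, n_sensors=3, pop_size=20, generations=200, seed=1):
    """Tiny elitist evolutionary search over candidate sensor placements."""
    rng = random.Random(seed)
    pop = [rng.sample(points, n_sensors) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, points))
        survivors = pop[:pop_size // 2]  # keep the better half unchanged
        children = []
        for parent in survivors:
            child = list(parent)
            child[rng.randrange(n_sensors)] = rng.choice(points)  # mutate one site
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: fitness(s, points))
```

    Because survivors are carried over unchanged, the best placement never degrades between generations.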

  9. Design of a groundwater sampling network for Minnesota

    International Nuclear Information System (INIS)

    Kanivetsky, R.

    1977-01-01

    This folio was compiled to facilitate the use of groundwater as a sampling medium to aid in exploration for hitherto undiscovered deposits of uranium in the subsurface rocks of Minnesota. The report consists of the following sheets of the hydrogeologic map of Minnesota: (1) map of bedrock hydrogeology, (2) generalized cross sections of the hydrogeologic map of Minnesota, showing both Quaternary deposits and bedrock, (3) map of waterwells that penetrate Precambrian rocks in Minnesota. A list of these wells, showing locations, names of owners, type of Precambrian aquifers penetrated, lithologic material of the aquifers, and well depths is provided in the appendix to this report. Structural settings, locations, and composition of the bedrock aquifers, movement of groundwater, and preliminary suggestions for a sampling program are discussed below under the heading Bedrock Hydrogeology of Minnesota. The map sheet showing Quaternary hydrogeology is not included in this report because the chemistry of groundwater in these deposits is not directly related to bedrock mineralization

  10. Measuring Radionuclides in the environment: radiological quantities and sampling designs

    International Nuclear Information System (INIS)

    Voigt, G.

    1998-10-01

    One aim of the workshop was to support and provide an ICRU report committee (International Union of Radiation Units) with actual information on techniques, data and knowledge of modern radioecology when radionuclides are to be measured in the environment. It has been increasingly recognised that some studies in radioecology, especially those involving both field sampling and laboratory measurements, have not paid adequate attention to the problem of obtaining representative, unbiased samples. This can greatly affect the quality of scientific interpretation, and the ability to manage the environment. Further, as the discipline of radioecology has developed, it has seen a growth in the numbers of quantities and units used, some of which are ill-defined and which are non-standardised. (orig.)

  11. Design of the CERN MEDICIS Collection and Sample Extraction System

    CERN Document Server

    Brown, Alexander

    MEDICIS is a new facility at CERN ISOLDE that aims to produce radio-isotopes for medical research. Possible designs for the collection and transport system for radio-isotopes were investigated. A system using readily available equipment was devised with the aim of keeping costs to a minimum whilst maintaining the highest safety standards. FLUKA, a Monte Carlo radiation transport code, was used to simulate the radiation from the isotopes to be collected. By simulating the collection of all isotopes of interest to CERN’s MEDICIS facility, 44Sc was found to give the largest dose. The simulations helped guide the amount of shielding used in the final design. Swiss regulations stipulating the allowed activity levels of individual isotopes were also considered within the body of the work.

  12. Monitoring well design and sampling techniques at NAPL sites

    International Nuclear Information System (INIS)

    Collins, M.; Rohrman, W.R.; Drake, K.D.

    1992-01-01

    The existence of Non-Aqueous Phase Liquids (NAPLs) at many Superfund and RCRA hazardous waste sites has become a recognized problem in recent years. The large number of sites exhibiting this problem results from the fact that many of the most frequently used industrial solvents and petroleum products can exist as NAPLs. Hazardous waste constituents occurring as NAPLs possess a common characteristic that causes great concern during groundwater contamination evaluation: while solubility in water is generally very low, it is sufficient to cause groundwater to exceed Maximum Contamination Levels (MCLs). Thus, even a small quantity of NAPL within a groundwater regime can act as a point source with the ability to contaminate vast quantities of groundwater over time. This property makes it imperative that groundwater investigations focus heavily on characterizing the nature, extent, and migration pathways of NAPLs at sites where it exists. Two types of NAPLs may exist in a groundwater system. Water-immiscible liquid constituents having a specific gravity greater than one are termed Dense Non-Aqueous Phase Liquids, while those with a specific gravity less than one are considered Light Non-Aqueous Phase Liquids. For a groundwater investigation to properly characterize the two types of NAPLs, careful consideration must be given to the placement and sampling of groundwater monitoring wells. Unfortunately, technical reviewers at EPA Region VII and the Corps of Engineers find that many groundwater investigations fall short in characterizing NAPLs because several basic considerations were overlooked. Included among these are monitoring well location and screen placement with respect to the water table and significant confining units, and the ability of the well sampling method to obtain samples of NAPL. Depending on the specific gravity of the NAPL that occurs at a site, various considerations can substantially enhance adequate characterization of NAPL contaminants

  13. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect, and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances, and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100%. Given that most previous...
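
    The method-of-moments estimator this record refers to is Matheron's classical semivariogram. A sketch under my own assumptions (isotropy, arbitrary user-chosen lag bins; not the study's code):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Matheron's method-of-moments semivariogram: mean of 0.5*(z_i - z_j)^2 per lag bin."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)  # count each pair once
    d, sq = d[iu], sq[iu]
    gamma, counts = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        counts.append(int(m.sum()))
        gamma.append(sq[m].mean() if m.any() else np.nan)
    return np.array(gamma), np.array(counts)
```

    Because the estimator averages squared differences, single large outliers inflate it strongly, which is why the study also considers robust estimators and residual maximum likelihood.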

  14. Reliability of impingement sampling designs: An example from the Indian Point station

    International Nuclear Information System (INIS)

    Mattson, M.T.; Waxman, J.B.; Watson, D.A.

    1988-01-01

    A 4-year data base (1976-1979) of daily fish impingement counts at the Indian Point electric power station on the Hudson River was used to compare the precision and reliability of three random-sampling designs: (1) simple random, (2) seasonally stratified, and (3) empirically stratified. The precision of daily impingement estimates improved logarithmically for each design as more days in the year were sampled. Simple random sampling was the least, and empirically stratified sampling was the most precise design, and the difference in precision between the two stratified designs was small. Computer-simulated sampling was used to estimate the reliability of the two stratified-random-sampling designs. A seasonally stratified sampling design was selected as the most appropriate reduced-sampling program for Indian Point station because: (1) reasonably precise and reliable impingement estimates were obtained using this design for all species combined and for eight common Hudson River fish by sampling only 30% of the days in a year (110 d); and (2) seasonal strata may be more precise and reliable than empirical strata if future changes in annual impingement patterns occur. The seasonally stratified design applied to the 1976-1983 Indian Point impingement data showed that selection of sampling dates based on daily species-specific impingement variability gave results that were more precise, but not more consistently reliable, than sampling allocations based on the variability of all fish species combined. 14 refs., 1 fig., 6 tabs

  15. Design development of robotic system for on line sampling in fuel reprocessing

    International Nuclear Information System (INIS)

    Balasubramanian, G.R.; Venugopal, P.R.; Padmashali, G.K.

    1990-01-01

    This presentation describes the design and development work being carried out for an automated sampling system for fast reactor fuel reprocessing plants. The plant proposes to use an integrated sampling system. The sample is taken across regular process streams from any intermediate hold-up pot. A robot system is planned to take the sample from the sample pot, transfer it to the sample bottle, cap the bottle, and transfer the bottle to a pneumatic conveying station. The system covers a large number of sample pots. Alternate automated systems are also examined (1). (author). 4 refs., 2 figs

  16. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes
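
    The efficiency gain that stratification delivers for a seasonal process like impingement can be illustrated with a small Monte Carlo experiment. The sinusoidal daily series, the 10-day strata, and all parameter values below are my own assumptions for illustration, not the study's data:

```python
import math
import random

def mc_compare(seed=0, n_days=360, n_sample=36, reps=2000):
    """Monte Carlo MSE of simple random vs. time-stratified sampling of a seasonal daily series."""
    rng = random.Random(seed)
    # synthetic daily impingement counts with a strong seasonal cycle (assumed for illustration)
    days = [100 + 80 * math.sin(2 * math.pi * d / n_days) + rng.gauss(0, 10)
            for d in range(n_days)]
    true_total = sum(days)

    def srs():  # simple random sample of n_sample days, expanded to a yearly total
        picks = rng.sample(range(n_days), n_sample)
        return n_days / n_sample * sum(days[i] for i in picks)

    def stratified():  # one randomly chosen day per 10-day stratum (same total sample size)
        return sum(10 * days[rng.randrange(start, start + 10)]
                   for start in range(0, n_days, 10))

    def mse(estimator):
        return sum((estimator() - true_total) ** 2 for _ in range(reps)) / reps

    return mse(srs), mse(stratified)
```

    With the same number of sampled days, the stratified estimator's MSE is far smaller because the seasonal variation is absorbed by the strata, mirroring the ranking the abstract reports.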

  17. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been examined using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs, which are: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and the bootstrap are used for replication-based variance estimation. Simple random sampling with sample sizes (462 to 561) gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we obtained moderate variance with sample size (467). In the jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes (467 to 631); at sample size (952), the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. Multi-stage sampling as applied by HIES gave large variances, especially if used with a single study variable.
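
    The ratio and regression estimators compared in this record have standard textbook forms; a sketch with my own variable names (the sample arrays y and x, and the known population mean of the auxiliary variable):

```python
import numpy as np

def ratio_estimate(y, x, x_pop_mean):
    """Ratio estimator of the population mean of y, given the known population mean of x."""
    return y.mean() / x.mean() * x_pop_mean

def regression_estimate(y, x, x_pop_mean):
    """Linear regression estimator of the population mean of y."""
    b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)  # least-squares slope of y on x
    return y.mean() + b * (x_pop_mean - x.mean())
```

    When y is exactly proportional to x, both estimators recover the true mean; their relative efficiency otherwise depends on whether the y-x relationship passes through the origin, which is consistent with the sample-size-dependent ranking the abstract reports.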

  18. Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.

    Science.gov (United States)

    Michaud, J-P; Schoenly, Kenneth G; Moreau, G

    2012-01-01

    Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.

  19. Design review report for rotary mode core sample truck (RMCST) modifications for flammable gas tanks, preliminary design

    International Nuclear Information System (INIS)

    Corbett, J.E.

    1996-02-01

    This report documents the completion of a preliminary design review for the Rotary Mode Core Sample Truck (RMCST) modifications for flammable gas tanks. The RMCST modifications are intended to support core sampling operations in waste tanks requiring flammable gas controls. The objective of this review was to validate basic design assumptions and concepts to support a path forward leading to a final design. The conclusion reached by the review committee was that the design was acceptable and efforts should continue toward a final design review

  20. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
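
    Systematic random sampling (SYS), the most precise design in these simulations, is also the simplest to implement: a random start followed by a fixed skip interval. A minimal sketch, assuming for simplicity that the frame size is a multiple of the sample size:

```python
import random

def systematic_sample(frame, n, rng):
    """Systematic random sample: a random start, then every k-th unit (assumes len(frame) % n == 0)."""
    k = len(frame) // n          # skip interval
    start = rng.randrange(k)     # random start within the first interval
    return [frame[start + i * k] for i in range(n)]
```

    The fixed spacing guarantees coverage across the whole fishing day, which is why SYS tends to beat simple random sampling when effort varies smoothly over time.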

  1. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  2. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    International Nuclear Information System (INIS)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-01-01

    Wireless-based custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in field measurement campaign, in which sodium aerosol dispersion experiments have been conducted as a part of environmental impact studies related to sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units and each contains custom built sampling head and the wireless control networking designed with Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate in a preset time and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed in wide area and uneven terrain where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  3. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)

    2015-07-15

    Wireless-based custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in field measurement campaign, in which sodium aerosol dispersion experiments have been conducted as a part of environmental impact studies related to sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units and each contains custom built sampling head and the wireless control networking designed with Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate in a preset time and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed in wide area and uneven terrain where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  4. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions. One is the use of special sampling methods for special distributions; the other is the use of general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open-source codes are included in the general sampling method for the maximum convenience of users. The sampling results show that correct sampling from distributions common in particle transport can be achieved with these tools, and that the user's convenience is assured. (authors)
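
    A general sampling method for an arbitrary discrete distribution, as mentioned in this record, is commonly implemented by inverting the cumulative distribution. The JMCT code itself is not shown in the abstract, so the following is an illustrative sketch with my own function name:

```python
import bisect
import random

def make_discrete_sampler(weights, rng=None):
    """Inverse-CDF sampler for an arbitrary discrete distribution over indices 0..len(weights)-1."""
    rng = rng or random.Random()
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point drift in the final bin
    # draw u ~ U(0,1) and binary-search for the first CDF entry >= u
    return lambda: bisect.bisect_left(cdf, rng.random())
```

    Each draw costs O(log n) via binary search; for repeated sampling from a fixed table, the alias method would reduce this to O(1) per draw at the cost of more setup.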

  5. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  6. Quality-control design for surface-water sampling in the National Water-Quality Network

    Science.gov (United States)

    Riskin, Melissa L.; Reutter, David C.; Martin, Jeffrey D.; Mueller, David K.

    2018-04-10

    The data-quality objectives for samples collected at surface-water sites in the National Water-Quality Network include estimating the extent to which contamination, matrix effects, and measurement variability affect interpretation of environmental conditions. Quality-control samples provide insight into how well the samples collected at surface-water sites represent the true environmental conditions. Quality-control samples used in this program include field blanks, replicates, and field matrix spikes. This report describes the design for collection of these quality-control samples and the data management needed to properly identify these samples in the U.S. Geological Survey’s national database.

  7. Sampling designs matching species biology produce accurate and affordable abundance indices.

    Science.gov (United States)

    Harris, Grant; Farley, Sean; Russell, Gareth J; Butler, Matthew J; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions

  8. Sampling designs matching species biology produce accurate and affordable abundance indices

    Directory of Open Access Journals (Sweden)

    Grant Harris

    2013-12-01

    Full Text Available Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture

  9. Sampling designs matching species biology produce accurate and affordable abundance indices

    Science.gov (United States)

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which

  10. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  11. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    Science.gov (United States)

    Hassan, Afifa Afifi

    1981-01-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  12. Design/Operations review of core sampling trucks and associated equipment

    International Nuclear Information System (INIS)

    Shrivastava, H.P.

    1996-01-01

    A systematic review of the design and operations of the core sampling trucks was commissioned by Characterization Equipment Engineering of the Westinghouse Hanford Company in October 1995. The review team reviewed the design documents, specifications, operating procedure, training manuals and safety analysis reports. The review process, findings and corrective actions are summarized in this supporting document

  13. MUP, CEC-DES, STRADE. Codes for uncertainty propagation, experimental design and stratified random sampling techniques

    International Nuclear Information System (INIS)

    Amendola, A.; Astolfi, M.; Lisanti, B.

    1983-01-01

    The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin hypercube sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
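A minimal sketch of the Latin hypercube sampling technique underlying STRADE (illustrative Python, not the original code): each dimension is divided into n equal strata, one point is drawn uniformly inside each stratum, and the strata are independently permuted per dimension:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on the unit hypercube: each dimension is
    divided into n_samples equal strata and each stratum is hit once."""
    rng = rng or random.Random()
    columns = []
    for _ in range(n_dims):
        # One uniform draw inside each stratum, then shuffle the strata.
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)
        columns.append(column)
    # Transpose so that each row is one sample point.
    return list(zip(*columns))

points = latin_hypercube(10, 2, random.Random(0))
```

Compared with plain random sampling, this guarantees marginal coverage of every stratum in every dimension at the same sample size.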

  14. Spatiotemporally Representative and Cost-Efficient Sampling Design for Validation Activities in Wanglang Experimental Site

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2017-11-01

    Full Text Available Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We proposed a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy was constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs were limited to a sampling-feasible region established based on accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram from the sampled ESUs and that from the entire study area was used to assess the sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and the random sampling (OA = 63.1%) strategies. The introduction of the feasible region constraint significantly reduces in-situ labour-intensive characterization necessities at the expense of about 9% loss in the spatiotemporal representativeness of the sampling. Our study will support the validation activities in Wanglang experimental site, providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet) which need a spatially distributed and temporally fixed sampling design.
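The OA criterion can be sketched as the summed bin-wise minimum of two normalized histograms (illustrative code, not the authors' implementation; the bin count and NDVI data below are hypothetical):

```python
import random

def overlap_area(sample_values, reference_values, bins=20, lo=0.0, hi=1.0):
    """Overlapping Area (OA) between two normalized frequency histograms
    over [lo, hi]; OA = 1.0 means the histograms coincide exactly."""
    width = (hi - lo) / bins
    def hist(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        return [c / len(values) for c in counts]
    return sum(min(a, b)
               for a, b in zip(hist(sample_values), hist(reference_values)))

# Hypothetical NDVI values for the whole area and for the sampled ESUs.
rng = random.Random(1)
ndvi_area = [rng.uniform(0.2, 0.8) for _ in range(1000)]
ndvi_esus = [rng.uniform(0.25, 0.75) for _ in range(60)]
oa = overlap_area(ndvi_esus, ndvi_area)
```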

  15. Design for mosquito abundance, diversity, and phenology sampling within the National Ecological Observatory Network

    Science.gov (United States)

    Hoekman, D.; Springer, Yuri P.; Barker, C.M.; Barrera, R.; Blackmore, M.S.; Bradshaw, W.E.; Foley, D. H.; Ginsberg, Howard; Hayden, M. H.; Holzapfel, C. M.; Juliano, S. A.; Kramer, L. D.; LaDeau, S. L.; Livdahl, T. P.; Moore, C. G.; Nasci, R.S.; Reisen, W.K.; Savage, H. M.

    2016-01-01

    The National Ecological Observatory Network (NEON) intends to monitor mosquito populations across its broad geographical range of sites because of their prevalence in food webs, sensitivity to abiotic factors and relevance for human health. We describe the design of mosquito population sampling in the context of NEON’s long-term, continental-scale monitoring program, emphasizing the sampling design schedule, priorities and collection methods. Freely available NEON data and associated field and laboratory samples will increase our understanding of how mosquito abundance, demography, diversity and phenology are responding to land use and climate change.

  16. Baseline Design Compliance Matrix for the Rotary Mode Core Sampling System

    International Nuclear Information System (INIS)

    LECHELT, J.A.

    2000-01-01

    The purpose of the design compliance matrix (DCM) is to provide a single-source document of all design requirements associated with the fifteen subsystems that make up the rotary mode core sampling (RMCS) system. It is intended to be the baseline requirement document for the RMCS system and to be used in governing all future design and design verification activities associated with it. This document is the DCM for the RMCS system used on Hanford single-shell radioactive waste storage tanks. This includes the Exhauster System, Rotary Mode Core Sample Trucks, Universal Sampling System, Diesel Generator System, Distribution Trailer, X-Ray Cart System, Breathing Air Compressor, Nitrogen Supply Trailer, Casks and Cask Truck, Service Trailer, Core Sampling Riser Equipment, Core Sampling Support Trucks, Foot Clamp, Ramps and Platforms and Purged Camera System. Excluded items are tools such as light plants and light stands. Other items such as the breather inlet filter are covered by a different design baseline. In this case, the inlet breather filter is covered by the Tank Farms Design Compliance Matrix

  17. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on iterative learning control (ILC) with a focus on design and implementation. We approach ILC design through frequency domain analysis and address ILC implementation through sampled-data methods. This is the first ILC book built on frequency domain and sampled-data methodologies. The frequency domain design methods offer ILC users insight into convergence performance, which is of practical benefit. The book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled-data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled-data ILC methods also ensure a balance between the performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested on an ILC-controlled robotic system. The experimental results show that the machines can work in much h...

  18. Bionic Design for Mars Sampling Scoop Inspired by Himalayan Marmot Claw

    Directory of Open Access Journals (Sweden)

    Long Xue

    2016-01-01

    Full Text Available Cave animals are often adapted to digging and life underground, with claw toes similar in structure and function to a sampling scoop. In this paper, the clawed toes of the Himalayan marmot were selected as a biological prototype for bionic research. Based on geometric parameter optimization of the clawed toes, a bionic sampling scoop for use on Mars was designed. Using a 3D laser scanner, the point cloud data of the second front claw toe were acquired. Parametric equations and contour curves for the claw were then built with cubic polynomial fitting. We obtained 18 characteristic curve equations for the internal and external contours of the claw. A bionic sampling scoop was designed according to the structural parameters of Curiosity’s sampling shovel and the contours of the Himalayan marmot’s claw. Verification tests showed that when the penetration angle was 45° and the sampling speed was 0.33 r/min, the bionic sampling scoop’s resistance torque was 49.6% less than that of the prototype sampling scoop. When the penetration angle was 60° and the sampling speed was 0.22 r/min, the resistance torque of the bionic sampling scoop was 28.8% lower than that of the prototype sampling scoop.

  19. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
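Two ingredients of such a design can be sketched briefly: estimating the proportion of true nulls from interim p-values (a Storey-type estimator, used here as a stand-in for the paper's interim estimates) and the Benjamini-Hochberg step-up rule for FDR control. The p-values below are hypothetical:

```python
def estimate_pi0(p_values, lam=0.5):
    """Storey-type estimate of the proportion of true null hypotheses."""
    tail = sum(1 for p in p_values if p > lam)
    return min(1.0, tail / ((1.0 - lam) * len(p_values)))

def benjamini_hochberg(p_values, q=0.05):
    """Return the set of indices rejected at FDR level q (BH step-up)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k = rank
    return set(order[:k])

# Hypothetical interim p-values from a high-dimensional screen.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.5, 0.7, 0.9]
pi0_hat = estimate_pi0(pvals)
rejected = benjamini_hochberg(pvals)
```

In the paper's setting, interim estimates like pi0_hat feed the second-stage sample size calculation, while the final analysis applies an FDR-controlling rule of this kind.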

  20. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Science.gov (United States)

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical

  1. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Directory of Open Access Journals (Sweden)

    David C Pavlacky

    Full Text Available Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous

  2. Planning Considerations for a Mars Sample Receiving Facility: Summary and Interpretation of Three Design Studies

    Science.gov (United States)

    Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  3. Adaptive designs for the one-sample log-rank test.

    Science.gov (United States)

    Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert

    2017-09-22

    Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows the survival curve of the patients under treatment to be compared to a prespecified reference survival curve. The reference curve usually represents the expected survival under standard of care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed where provision is made for data-dependent sample size reassessment. The focus is to apply the inverse normal method. This is done in two different directions. The first strategy exploits the independent increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared to a single-stage fixed sample design. © 2017, The International Biometric Society.
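The fixed-sample one-sample log-rank statistic compares the observed number of deaths O with the number E expected under the reference curve, E = Σ Λ₀(tᵢ) summed over follow-up times. A minimal sketch with hypothetical data and an assumed exponential reference hazard (not the paper's adaptive procedure):

```python
import math

def one_sample_logrank(times, events, cum_hazard):
    """One-sample log-rank statistic: observed deaths O versus the number
    E expected under a reference cumulative hazard, Z = (O - E)/sqrt(E)."""
    observed = sum(events)
    expected = sum(cum_hazard(t) for t in times)
    return (observed - expected) / math.sqrt(expected)

# Hypothetical data: follow-up times (months), event indicators, and an
# assumed exponential reference hazard of 0.1 events per month.
z = one_sample_logrank(times=[5, 8, 12, 20, 24],
                       events=[1, 0, 1, 0, 0],
                       cum_hazard=lambda t: 0.1 * t)
```

A negative Z indicates fewer deaths than expected under the reference curve; the adaptive designs in the paper build stage-wise versions of this statistic via the inverse normal method.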

  4. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither magnitude nor direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is further shown to provide unbiased point and variance estimates as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimates are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
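The STRS estimator of total passage can be sketched as a per-stratum expansion with a finite population correction (illustrative code; the stratum sizes and counts below are hypothetical, e.g. hourly strata of one-minute sampling units):

```python
def strs_estimate(strata):
    """Stratified random sampling estimate of total fish passage.
    strata: list of (N_h, counts), where N_h is the number of sampling
    units in stratum h and counts are the per-unit sampled counts."""
    total, variance = 0.0, 0.0
    for n_units, counts in strata:
        n = len(counts)
        mean = sum(counts) / n
        s2 = sum((c - mean) ** 2 for c in counts) / (n - 1)
        total += n_units * mean
        # Finite population correction within each stratum.
        variance += n_units ** 2 * (1 - n / n_units) * s2 / n
    return total, variance

# Hypothetical design: two 1-h strata of 60 one-minute units, 12 sampled each.
strata = [(60, [3, 5, 4, 6, 2, 4, 5, 3, 4, 6, 5, 3]),
          (60, [10, 12, 11, 9, 13, 10, 12, 11, 10, 9, 12, 11])]
total, variance = strs_estimate(strata)
```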

  5. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

    Gunter Spöck

    2015-05-01

    Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.

  6. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    Science.gov (United States)

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The process of obtaining and analyzing water samples from the environment includes a number of steps that can affect the reported result. The equipment used to collect and filter samples, the bottles used for specific subsamples, any added preservatives, sample storage in the field, and shipment to the laboratory have the potential to affect how accurately samples represent the environment from which they were collected. During the early 1990s, the U.S. Geological Survey implemented policies to include the routine collection of quality-control samples in order to evaluate these effects and to ensure that water-quality data were adequately representing environmental conditions. Since that time, the U.S. Geological Survey Office of Water Quality has provided training in how to design effective field quality-control sampling programs and how to evaluate the resultant quality-control data. This report documents that training material and provides a reference for methods used to analyze quality-control data.

  7. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    Science.gov (United States)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we originally propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
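The two lot-size cases can be sketched as acceptance probabilities P(D ≤ c): a hypergeometric model for small lots sampled without replacement, and a Poisson approximation for large lots (illustrative code; the parameter values below are hypothetical, not from the TRASP case studies):

```python
import math

def accept_prob_hypergeom(N, D, n, c):
    """P(accept a lot of N items containing D nonconforming, sampling n
    without replacement and allowing at most c nonconforming items)."""
    return sum(math.comb(D, k) * math.comb(N - D, n - k)
               for k in range(c + 1)) / math.comb(N, n)

def accept_prob_poisson(n, p, c):
    """Large-lot approximation: nonconforming count ~ Poisson(n * p)."""
    lam = n * p
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(c + 1))

p_small = accept_prob_hypergeom(N=100, D=5, n=10, c=1)
p_large = accept_prob_poisson(n=125, p=0.01, c=3)
```

Sweeping the nonconformity rate through such a function traces the operating characteristic curve from which a sample size n and acceptance number c are chosen against the AQL.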

  8. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  9. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used.

  10. Gas and liquid sampling for closed canisters in K-West basins - functional design criteria

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1994-01-01

    The purpose of this document is to provide functions and requirements for the design and fabrication of equipment for sampling closed canisters in the K-West basin. The samples will be used to help determine the state of the fuel elements in closed canisters. The characterization information obtained will support evaluation and development of processes required for safe storage and disposition of Spent Nuclear Fuel (SNF) materials.

  11. Design of an automatic sample changer for the measurement of neutron flux by gamma spectrometry

    International Nuclear Information System (INIS)

    Gago, Javier; Bruna, Ruben; Baltuano, Oscar; Montoya, Eduardo; Descreaux, Killian

    2014-01-01

    This paper presents the calculations, component selection, and design for the construction of an automatic system to measure neutron flux in an operating nuclear reactor by gamma spectrometry, using samples irradiated in the RP-10 reactor core. The system will automatically interchange and measure 100 samples according to a programmed sequence, reducing operator time and yielding more accurate measurements. (authors)

  12. Sampling design for long-term regional trends in marine rocky intertidal communities

    Science.gov (United States)

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  13. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Science.gov (United States)

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings.
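    The precision differences among the three designs follow from the design effect, DEFF = 1 + (m − 1)ρ, where m is the cluster size and ρ the intra-cluster correlation. The sketch below uses a hypothetical 20% prevalence and ρ = 0.1, not values from the study, to show how cluster size and total sample size trade off.

```python
from math import sqrt
from statistics import NormalDist

def ci_halfwidth(p, clusters, m, icc, conf=0.95):
    """Approximate CI half-width for a proportion under cluster sampling,
    inflating the simple-random-sampling variance by DEFF = 1 + (m-1)*icc."""
    n = clusters * m
    deff = 1 + (m - 1) * icc
    se = sqrt(deff * p * (1 - p) / n)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return z * se

# Hypothetical 20% prevalence, intra-cluster correlation 0.1:
for label, c, m in [("30x30", 30, 30), ("33x6", 33, 6), ("67x3", 67, 3)]:
    print(label, round(ci_halfwidth(0.20, c, m, 0.1), 4))
```

With these assumed inputs the 30 × 30 design gives the narrowest interval despite its larger design effect, simply because its total sample is much larger, which mirrors the pattern reported above.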

  14. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Directory of Open Access Journals (Sweden)

    Deconinck Hedwig

    2008-05-01

    Full Text Available Abstract The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings.

  15. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV − I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  16. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Full Text Available Supply Chain Management, which is concerned with material and information flows between facilities and the final customers, has been considered the most popular operations strategy for improving organizational competitiveness nowadays. With the advanced development of computer technology, it is getting easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both the producer's and consumer's quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal failure, and postsale failure costs also have an effect on the optimal sampling plan.
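    A toy version of such an economic model can be written directly: for each candidate plan (n, c), sum the inspection cost of the sample, the expected post-sale failure cost of defects escaping in accepted lots, and the cost of screening rejected lots, then pick the cheapest plan. The cost structure and all figures below are illustrative assumptions, not the paper's model.

```python
from math import comb

def accept_prob(n, c, p):
    """Binomial probability of accepting: at most c defectives in a sample of n."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

def expected_cost(n, c, N, p, insp=1.0, internal=5.0, postsale=50.0):
    """Expected total quality cost of plan (n, c) for a lot of N at defect rate p."""
    pa = accept_prob(n, c, p)
    sample_cost = n * insp + n * p * internal              # inspect sample, rework defects found
    accept_cost = pa * (N - n) * p * postsale              # defects escaping to the customer
    reject_cost = (1 - pa) * (N - n) * (insp + p * internal)  # screen the remainder
    return sample_cost + accept_cost + reject_cost

def best_plan(N, p, n_grid=range(10, 201, 10), c_grid=range(0, 6)):
    """Grid search for the minimum-cost plan."""
    return min(((n, c) for n in n_grid for c in c_grid),
               key=lambda nc: expected_cost(nc[0], nc[1], N, p))
```

As the abstract notes, the optimum shifts with the producer's quality: raising the assumed defect rate p or the post-sale failure cost pushes the search toward larger samples and tighter acceptance numbers.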

  17. Robotic Irradiated Sample Handling Concept Design in Reactor TRIGA PUSPATI using Simulation Software

    International Nuclear Information System (INIS)

    Mohd Khairulezwan Abdul Manan; Mohd Sabri Minhat; Ridzuan Abdul Mutalib; Zareen Khan Abdul Jalil Khan; Nurfarhana Ayuni Joha

    2015-01-01

    This paper introduces the concept design of a Robotic Irradiated Sample Handling Machine using a graphical simulation application, designed as a general, flexible and open platform for robotics work. Webots has proven to be a useful tool in many fields of robotics, such as manipulator programming, mobile robot control (wheeled, sub-aquatic and walking robots), distance computation, sensor simulation, collision detection, motion planning and so on. Webots is used as the common interface for all the applications. Some practical cases and applications for this concept design are illustrated in the paper to present the possibilities of this simulation software. (author)

  18. OSIRIS-REx Touch-and-Go (TAG) Mission Design for Asteroid Sample Collection

    Science.gov (United States)

    May, Alexander; Sutter, Brian; Linn, Timothy; Bierhaus, Beau; Berry, Kevin; Mink, Ron

    2014-01-01

    The Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx) mission is a NASA New Frontiers mission launching in September 2016 to rendezvous with the near-Earth asteroid Bennu in October 2018. After several months of proximity operations to characterize the asteroid, OSIRIS-REx flies a Touch-And-Go (TAG) trajectory to the asteroid's surface to collect at least 60 g of pristine regolith sample for Earth return. This paper provides mission and flight system overviews, with more details on the TAG mission design and key events that occur to safely and successfully collect the sample. An overview of the navigation performed relative to a chosen sample site, along with the maneuvers to reach the desired site is described. Safety monitoring during descent is performed with onboard sensors providing an option to abort, troubleshoot, and try again if necessary. Sample collection occurs using a collection device at the end of an articulating robotic arm during a brief five second contact period, while a constant force spring mechanism in the arm assists to rebound the spacecraft away from the surface. Finally, the sample is measured quantitatively utilizing the law of conservation of angular momentum, along with qualitative data from imagery of the sampling device. Upon sample mass verification, the arm places the sample into the Stardust-heritage Sample Return Capsule (SRC) for return to Earth in September 2023.

  19. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Full Text Available ABSTRACT Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha-1) double spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth from each sampling point. Data were analyzed under descriptive statistical and geostatistical methods. Using statistical parameters, the adequate number of samples for analyzing the attributes under study was established, which ranged from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted to the spherical model. Establishment of the number of samples and spatial variability for the physical properties of soils may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.
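    One common way to fix the number of samples from descriptive statistics is the normal-approximation formula n = (z · CV / E)², where CV is the coefficient of variation and E the acceptable error, both as a percentage of the mean. The formula choice is an assumption for illustration; the abstract does not specify which estimator the authors used.

```python
from math import ceil
from statistics import NormalDist

def n_samples(cv_percent, error_percent, conf=0.95):
    """Samples needed so the estimated mean falls within +/-error_percent
    of the true mean, by the normal approximation n = (z * CV / E)^2."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return ceil((z * cv_percent / error_percent) ** 2)

print(n_samples(12, 10))  # CV 12%, 10% error -> 6 samples
```

Attributes with low variability (such as particle density here) need only a handful of points, while highly variable attributes require more, matching the 1-to-11 range reported.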

  20. Architectural Design Space Exploration of an FPGA-based Compressed Sampling Engine

    DEFF Research Database (Denmark)

    El-Sayed, Mohammad; Koch, Peter; Le Moullec, Yannick

    2015-01-01

    We present the architectural design space exploration of a compressed sampling engine for use in a wireless heart-rate monitoring system. We show how parallelism affects execution time at the register transfer level. Furthermore, two example solutions (modified semi-parallel and full...

  1. An Alternative View of Some FIA Sample Design and Analysis Issues

    Science.gov (United States)

    Paul C. Van Deusen

    2005-01-01

    Sample design and analysis decisions are the result of compromises and inputs from many sources. The end result would likely change if different individuals or groups were involved in the planning process. Discussed here are some alternatives to the procedures that are currently being used for the annual inventory. The purpose is to indicate that alternatives exist and...

  2. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Science.gov (United States)

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...

  3. Low-sensitivity H∞ filter design for linear delta operator systems with sampling time jitter

    Science.gov (United States)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via a delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. Then, the problem of designing a low-sensitivity filter can be reduced to a convex optimisation problem. An important consideration in the design of such filters is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
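    The numerical advantage of the delta operator at fast sampling can be seen with a one-pole example: as the period T shrinks, the shift-operator pole crowds toward 1 (so stable and unstable dynamics become numerically indistinguishable), while the delta-domain parameter converges to the continuous-time pole. This is a minimal illustration of the motivation, not the filter design above.

```python
from math import exp

def shift_pole(a, T):
    """Pole of the sampled system x[k+1] = z * x[k] for continuous pole a."""
    return exp(a * T)

def delta_pole(a, T):
    """Same dynamics in delta form, delta = (q - 1)/T: parameter (z - 1)/T."""
    return (exp(a * T) - 1) / T

a = -1.0  # continuous-time pole
for T in (0.1, 0.001, 1e-6):
    print(T, shift_pole(a, T), delta_pole(a, T))
```

At T = 1e-6 the shift pole is 0.999999, dangerously close to the unit circle, while the delta parameter is within 1e-6 of the continuous value -1, which is why delta-domain models condition better at high sampling rates.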

  4. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Science.gov (United States)

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  5. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
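    For the common case of comparing two group means, the five components above collapse into the normal-approximation formula n = 2((z_{1-α/2} + z_{1-β}) / d)² per group, where d is Cohen's standardized effect size. A small sketch (the two-sample t-test scenario is chosen for illustration; the article discusses the components in general):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means with standardized effect size d (Cohen's d)."""
    z = NormalDist().inv_cdf
    za = z(1 - alpha / 2)  # two-sided significance criterion
    zb = z(power)          # desired power
    return ceil(2 * ((za + zb) / effect_size) ** 2)

print(n_per_group(0.5))  # medium effect -> 63 per group
```

The isomorphism the author describes is visible directly in the formula: halving the effect size roughly quadruples the required sample, and raising the desired power or lowering alpha inflates it as well.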

  6. Design of modified annulus air sampling system for the detection of leakage in waste transfer line

    International Nuclear Information System (INIS)

    Deokar, U.V; Khot, A.R.; Mathew, P.; Ganesh, G.; Tripathi, R.M.; Srivastava, Srishti

    2018-01-01

    Various liquid waste streams are generated during the operation of a reprocessing plant. The high-level (HL), intermediate-level (IL), and low-level (LL) liquid wastes generated are transferred from the reprocessing plant to the Waste Management Facility. These waste streams are transferred through pipe-in-pipe lines along a shielded concrete trench. For detection of radioactive leakage from the primary waste transfer line into the secondary line, the air in the annulus between the two pipes is sampled. The currently installed pressurized annulus air sampling system has no provision for online leakage detection; hence, there is a risk of personnel exposure and airborne activity in the working area. To overcome these design flaws, a modified free-air-flow online annulus air sampling system with additional safety features was designed

  7. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher than in extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)
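    The central composite design used in the optimization step can be generated mechanically in coded units: 2^k factorial corners at ±1, axial points at ±α, and replicated centre runs. The two-factor, rotatable (α = √2) layout below is a generic sketch, not the authors' exact design matrix.

```python
from itertools import product

def ccd_two_factor(alpha=2 ** 0.5, n_center=5):
    """Coded design points for a two-factor rotatable central composite
    design: 2^2 factorial corners, 4 axial points, replicated centre runs."""
    corners = list(product((-1.0, 1.0), repeat=2))            # factorial part
    axial = [(alpha, 0.0), (-alpha, 0.0), (0.0, alpha), (0.0, -alpha)]
    center = [(0.0, 0.0)] * n_center                          # pure-error estimate
    return corners + axial + center

pts = ccd_two_factor()
print(len(pts))  # 13 runs
```

With α = √2 every non-centre point lies at the same distance from the origin, which is what makes the design rotatable: the prediction variance of the fitted quadratic depends only on distance from the centre.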

  8. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher than in extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  9. Shielding design of highly activated sample storage at reactor TRIGA PUSPATI

    International Nuclear Information System (INIS)

    Naim Syauqi Hamzah; Julia Abdul Karim; Mohamad Hairie Rabir; Muhd Husamuddin Abdul Khalil; Mohd Amin Sharifuldin Salleh

    2010-01-01

    Radiation protection has always been one of the most important considerations in the management of Reaktor TRIGA PUSPATI (RTP). Demand for sample activation has been increasing from a variety of applicants in different research fields. A radiological hazard may arise if a sample evaluation is misjudged or miscalculated. At present, there is no appropriate storage for highly activated samples. A dedicated storage box for irradiated samples should therefore be provided in order to segregate highly activated samples, which produce high dose rates, from typical activated samples, which produce lower dose rates (1 - 2 mR/hr). In this study, the thicknesses of common shielding materials such as lead and concrete required to reduce a highly activated radiotracer sample (potassium bromide) with an initial exposure rate of 5 R/hr to background level (0.05 mR/hr) were determined. Analyses were done using several methods, including the conventional shielding equation, half-value layer calculation, and the MicroShield computer code. A design for a new irradiated-sample storage box for RTP capable of containing high-level gamma radioactivity is then proposed. (author)
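    Ignoring buildup factors, the half-value layer calculation mentioned above reduces to t = HVL · log₂(D₀/D). The sketch below uses the dose figures from the abstract together with a purely hypothetical HVL of 1.0 cm; the true HVL depends on the shielding material and the photon energy of the activated sample.

```python
from math import log2

def shield_thickness(dose_in, dose_target, hvl_cm):
    """Shield thickness attenuating dose_in to dose_target, ignoring
    buildup: each half-value layer halves the dose, so
    t = HVL * log2(dose_in / dose_target)."""
    return hvl_cm * log2(dose_in / dose_target)

# 5 R/hr down to background (0.05 mR/hr) is an attenuation factor of 100,000
# (both doses expressed in mR/hr); hypothetical HVL = 1.0 cm:
t = shield_thickness(5000.0, 0.05, 1.0)
print(round(t, 1))  # 16.6
```

An attenuation factor of 10^5 always costs about 16.6 half-value layers regardless of material; the material's HVL then scales that count into centimetres of lead or concrete.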

  10. Monitoring oil persistence on beaches : SCAT versus stratified random sampling designs

    International Nuclear Information System (INIS)

    Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.

    2003-01-01

    In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not yield estimates of contaminated beach area or stranded oil volume. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually-evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters affect statistical outcomes. Power analysis based on the study results indicates that optimum power is achieved when unnecessary stratification is avoided. It was emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling.
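    The stratified estimators involved are standard: the estimated total is Σ N_h·ȳ_h over strata h, with variance Σ N_h(N_h − n_h)s_h²/n_h. A minimal sketch with made-up strata (not the Prince William Sound data):

```python
from statistics import mean, variance

def stratified_total(strata):
    """Estimated population total and its variance from a stratified
    random sample. strata: list of (N_h, [sampled values]) pairs,
    where N_h is the number of units in stratum h."""
    total = 0.0
    var = 0.0
    for N_h, sample in strata:
        n_h = len(sample)
        total += N_h * mean(sample)
        # finite-population-corrected variance contribution of stratum h
        var += N_h * (N_h - n_h) * variance(sample) / n_h
    return total, var

# Hypothetical example: 100 heavily-oiled segments, 50 lightly-oiled ones
t, v = stratified_total([(100, [0, 1, 0, 2]), (50, [5, 4, 6])])
print(t, v)
```

Because each stratum contributes its own variance term, over-stratifying (many strata with tiny n_h) inflates the estimate's variance, which is the intuition behind the finding that unnecessary stratification reduces power.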

  11. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    Science.gov (United States)

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. 

  12. AN EVALUATION OF PRIMARY DATA-COLLECTION MODES IN AN ADDRESS-BASED SAMPLING DESIGN.

    Science.gov (United States)

    Amaya, Ashley; Leclere, Felicia; Carris, Kari; Liao, Youlian

    2015-01-01

    As address-based sampling becomes increasingly popular for multimode surveys, researchers continue to refine data-collection best practices. While much work has been conducted to improve efficiency within a given mode, additional research is needed on how multimode designs can be optimized across modes. Previous research has not evaluated the consequences of mode sequencing on multimode mail and phone surveys, nor has significant research been conducted to evaluate mode sequencing on a variety of indicators beyond response rates. We conducted an experiment within the Racial and Ethnic Approaches to Community Health across the U.S. Risk Factor Survey (REACH U.S.) to evaluate two multimode case-flow designs: (1) phone followed by mail (phone-first) and (2) mail followed by phone (mail-first). We compared response rates, cost, timeliness, and data quality to identify differences across case-flow design. Because surveys often differ on the rarity of the target population, we also examined whether changes in the eligibility rate altered the choice of optimal case flow. Our results suggested that, on most metrics, the mail-first design was superior to the phone-first design. Compared with phone-first, mail-first achieved a higher yield rate at a lower cost with equivalent data quality. While the phone-first design initially achieved more interviews compared to the mail-first design, over time the mail-first design surpassed it and obtained the greatest number of interviews.

  13. Assessment of long-term gas sampling design at two commercial manure-belt layer barns.

    Science.gov (United States)

    Chai, Li-Long; Ni, Ji-Qin; Chen, Yan; Diehl, Claude A; Heber, Albert J; Lim, Teng T

    2010-06-01

    Understanding temporal and spatial variations of aerial pollutant concentrations is important for designing air quality monitoring systems. In long-term, continuous air quality monitoring of large livestock and poultry barns, these systems usually use location-shared analyzers and sensors and can only sample air at a limited number of locations. To assess the validity of the gas sampling design at a commercial layer farm, a new methodology was developed to map pollutant gas concentrations using portable sensors under steady-state or quasi-steady-state barn conditions. Three assessment tests were conducted from December 2008 to February 2009 in two manure-belt layer barns. Each barn was 140.2 m long and 19.5 m wide and housed 250,000 birds. Each test included four measurements of ammonia and carbon dioxide concentrations at 20 locations covering all operating fans, including the six fans used in the long-term sampling that represented three zones along the length of the barns, to generate data for complete-barn monitoring. To simulate the long-term monitoring, gas concentrations from the six long-term sampling locations were extracted from the 20 assessment locations. Statistical analyses were performed to compare the variances (F-test) and sample means (t-test) between the 6- and 20-sample data. The study clearly demonstrated ammonia and carbon dioxide concentration gradients, characterized by increasing concentrations from the west to the east ends of the barns following the under-cage manure-belt travel direction. Mean concentrations increased from 7.1 to 47.7 parts per million (ppm) for ammonia and from 2303 to 3454 ppm for carbon dioxide from the west to the east of the barns. Variations of mean gas concentrations between the south and north sides of the barns were much less apparent: 21.2 versus 20.9 ppm for ammonia and 2979 versus 2951 ppm for carbon dioxide, respectively.
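The comparison of the 6-location subset against the full 20-location survey can be sketched with an F-test statistic on variances and a Welch t-test on means. The concentration values and subset indices below are hypothetical, chosen only to mimic the reported west-to-east gradient:

```python
import numpy as np
from scipy import stats

# Hypothetical ammonia concentrations (ppm) at the 20 assessment locations;
# values only illustrate the west-to-east gradient reported in the study.
all20 = np.array([7.1, 9.5, 12.0, 14.8, 17.3, 19.0, 21.2, 23.5, 25.9, 28.4,
                  30.1, 32.6, 35.0, 37.2, 39.8, 41.5, 43.0, 45.2, 46.8, 47.7])
sub6 = all20[[0, 4, 8, 11, 15, 19]]   # illustrative long-term sampling subset

f_stat = sub6.var(ddof=1) / all20.var(ddof=1)          # F-test statistic
t_stat, p_val = stats.ttest_ind(sub6, all20, equal_var=False)  # Welch t-test
```

A large `p_val` here, as in the study, indicates no evidence that the 6-location mean differs from the complete-barn mean.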

  14. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shahan, David W. [HRL Labs., LLC, Malibu, CA (United States); Seepersad, Carolyn Conner [Univ. of Texas, Austin, TX (United States)

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
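The classifier-guided loop can be sketched as follows. Here a Gaussian naive Bayes classifier stands in for the Bayesian network classifier described in the abstract, and the discrete objective function, bounds, and batch size are illustrative assumptions:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def objective(x):
    # Toy discrete, discontinuous objective (illustrative only)
    return x.sum() + 5 * (x[0] == x[1])

# Initial training points: random designs in the discrete space {0,...,4}^3
X = rng.integers(0, 5, size=(30, 3))
y = np.array([objective(x) for x in X])
labels = (y <= np.median(y)).astype(int)   # class 1 = "preferred" designs

clf = GaussianNB().fit(X, labels)          # stand-in for the BN classifier
candidates = rng.integers(0, 5, size=(200, 3))
p_good = clf.predict_proba(candidates)[:, 1]
next_batch = candidates[np.argsort(-p_good)[:10]]   # evaluate these next
```

In the full method, the evaluated batch is appended to the training set and the classifier is refit each generation, steering sampling toward regions with a high probability of preferred performance.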

  15. Within-otolith variability in chemical fingerprints: implications for sampling designs and possible environmental interpretation.

    Science.gov (United States)

    Di Franco, Antonio; Bulleri, Fabio; Pennetta, Antonio; De Benedetto, Giuseppe; Clarke, K Robert; Guidetti, Paolo

    2014-01-01

    Largely used as a natural biological tag in studies of dispersal/connectivity of fish, otolith elemental fingerprinting is usually analyzed by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). LA-ICP-MS produces an elemental fingerprint at a discrete time-point in the life of a fish and can generate data on within-otolith variability of that fingerprint. The presence of within-otolith variability has been previously acknowledged but not incorporated into experimental designs on the presumed, but untested, grounds of both its negligibility compared to among-otolith variability and of spatial autocorrelation among multiple ablations within an otolith. Here, using a hierarchical sampling design of spatial variation at multiple scales in otolith chemical fingerprints for two Mediterranean coastal fishes, we explore: 1) whether multiple ablations within an otolith can be used as independent replicates for significance tests among otoliths, and 2) the implications of incorporating within-otolith variability when assessing spatial variability in otolith chemistry at a hierarchy of spatial scales (different fish, from different sites, at different locations on the Apulian Adriatic coast). We find that multiple ablations along the same daily rings do not necessarily exhibit spatial dependency within the otolith and can be used to estimate residual variability in a hierarchical sampling design. Inclusion of within-otolith measurements reveals that individuals at the same site can show significant variability in elemental uptake. Within-otolith variability examined across the spatial hierarchy identifies differences between the two fish species investigated, and this finding leads to discussion of the potential for within-otolith variability to be used as a marker for fish exposure to stressful conditions. We also demonstrate that a 'cost'-optimal allocation of sampling effort should typically include some level of within-otolith replication.

  16. Design of sample analysis device for iodine adsorption efficiency test in NPPs

    International Nuclear Information System (INIS)

    Ji Jinnan

    2015-01-01

    In nuclear power plants, the iodine adsorption efficiency test checks the adsorption efficiency of the iodine adsorber. The efficiency is calculated from analysis of the test sample, which determines whether the performance of the adsorber meets the requirements for equipment operation and emissions. Considering the test process and actual demand, this paper presents the design of a dedicated device for analysing this kind of test sample. Application shows that the device is convenient to operate, highly reliable, and accurate in its calculations, improving experimental efficiency and reducing experimental risk. (author)

  17. Sample requirements and design of an inter-laboratory trial for radiocarbon laboratories

    International Nuclear Information System (INIS)

    Bryant, Charlotte; Carmi, Israel; Cook, Gordon; Gulliksen, Steinar; Harkness, Doug; Heinemeier, Jan; McGee, Edward; Naysmith, Philip; Possnert, Goran; Scott, Marian; Plicht, Hans van der; Strydonck, Mark van

    2000-01-01

    An on-going inter-comparison programme which is focused on assessing and establishing consensus protocols to be applied in the identification, selection and sub-sampling of materials for subsequent 14C analysis is described. The outcome of the programme will provide a detailed quantification of the uncertainties associated with 14C measurements including the issues of accuracy and precision. Such projects have become recognised as a fundamental aspect of continuing laboratory quality assurance schemes, providing a mechanism for the harmonisation of measurements and for demonstrating the traceability of results. The design of this study and its rationale are described. In summary, a suite of core samples has been defined which will be made available to both AMS and radiometric laboratories. These core materials are representative of routinely dated material and their ages span the full range of the applied 14C time-scale. Two of the samples are of wood from the German and Irish dendrochronologies, thus providing a direct connection to the master dendrochronological calibration curve. Further samples link this new inter-comparison to past studies. Sample size and precision have been identified as being of paramount importance in defining dating confidence, and so several core samples have been identified for more in-depth study of these practical issues. In addition to the core samples, optional samples have been identified and prepared specifically for either AMS and/or radiometric laboratories. For AMS laboratories, these include bone, textile, leather and parchment samples. Participation in the study requires a commitment to a minimum of 10 core analyses, with results to be returned within a year.

  18. Field Investigation Plan for 1301-N and 1325-N Facilities Sampling to Support Remedial Design

    International Nuclear Information System (INIS)

    Weiss, S. G.

    1998-01-01

    This field investigation plan (FIP) provides for the sampling and analysis activities supporting the remedial design planning for the planned removal action for the 1301-N and 1325-N Liquid Waste Disposal Facilities (LWDFs), which are treatment, storage, and disposal (TSD) units (cribs/trenches). The planned removal action involves excavation, transportation, and disposal of contaminated material at the Environmental Restoration Disposal Facility (ERDF). An engineering study (BHI 1997) was performed to develop and evaluate various options that are predominantly influenced by the volume of high- and low-activity contaminated soil requiring removal. The study recommended that additional sampling be performed to supplement historical data for use in the remedial design.

  19. Single-subject withdrawal designs in delayed matching-to-sample procedures

    OpenAIRE

    Eilifsen, Christoffer; Arntzen, Erik

    2011-01-01

    In most studies of delayed matching-to-sample (DMTS) and stimulus equivalence, the delay has remained fixed throughout a single experimental condition. We wanted to expand on the DMTS and stimulus equivalence literature by examining the effects of using titrating delays with different starting points during the establishment of conditional discriminations prerequisite for stimulus equivalence. In Experiment 1, a variation of a single-subject withdrawal design was used. Ten adults were exposed...

  20. Design, placement, and sampling of groundwater monitoring wells for the management of hazardous waste disposal facilities

    International Nuclear Information System (INIS)

    Tsai, S.Y.

    1988-01-01

    Groundwater monitoring is an important technical requirement in managing hazardous waste disposal facilities. The purpose of monitoring is to assess whether and how a disposal facility is affecting the underlying groundwater system. This paper focuses on the regulatory and technical aspects of the design, placement, and sampling of groundwater monitoring wells for hazardous waste disposal facilities. Such facilities include surface impoundments, landfills, waste piles, and land treatment facilities. 8 refs., 4 figs

  1. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Validation data are often used to evaluate the performance of a trained neural network and in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy, often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but it may also be formed via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively, both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was lower, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
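The reported z statistics come from comparing two accuracy proportions; a standard two-proportion z test can be sketched as below. The accuracies and validation-set sizes are illustrative assumptions, not the paper's values:

```python
from math import sqrt

def accuracy_z(acc1, acc2, n1, n2):
    """Two-proportion z statistic for comparing overall accuracies
    estimated from two validation sets (illustrative values below)."""
    p = (acc1 * n1 + acc2 * n2) / (n1 + n2)      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (acc1 - acc2) / se

z = accuracy_z(0.90, 0.84, 500, 500)   # e.g. random vs stratified validation
```

Values of |z| above about 1.96 indicate a difference significant at the 5% level (two-tailed).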

  2. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang; Hansen, Charles

    2013-01-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied to simulation and complicated seismic data sets. © 2013 IEEE.
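The core idea, turning user-probed sample values into a density-based transfer function via KDE, can be sketched as follows. The two-attribute data, cluster location, and opacity threshold are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical user-probed values for a feature in a 2-attribute volume
samples = rng.normal(loc=[0.4, 0.7], scale=0.05, size=(50, 2))

kde = gaussian_kde(samples.T)        # density estimate over attribute space

def transfer_function(values, threshold=0.1):
    """Map multivariate data values to opacity via normalised KDE density."""
    density = kde(values.T)
    density = density / density.max()
    return np.where(density > threshold, density, 0.0)

grid = rng.uniform(0, 1, size=(1000, 2))
opacity = transfer_function(grid)
```

Voxels whose attribute values fall near the probed samples receive high opacity, so the selected feature is emphasised in the rendering.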

  4. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2018-02-19

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized for a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a single-loop Monte Carlo method that uses the Laplace approximation of the return value of the inner loop. The first demonstration example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
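The classical double-loop Monte Carlo estimator that the paper improves upon can be sketched on a toy linear-Gaussian model, where the inner loop estimates the evidence p(y) from fresh prior samples. The model, noise level, and sample counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
SIGMA = 0.1   # observation noise; prior is theta ~ N(0, 1) (toy model)

def loglik(y, theta):
    return (-0.5 * ((y - theta) / SIGMA) ** 2
            - np.log(SIGMA * np.sqrt(2 * np.pi)))

# Double-loop Monte Carlo estimate of the expected information gain:
# EIG = E_y[ log p(y|theta) - log p(y) ], p(y) estimated by the inner loop.
N, M = 500, 500
thetas = rng.normal(0.0, 1.0, N)
ys = thetas + rng.normal(0.0, SIGMA, N)

eig = 0.0
for y, th in zip(ys, thetas):
    inner = rng.normal(0.0, 1.0, M)                  # fresh prior samples
    evidence = np.mean(np.exp(loglik(y, inner)))     # underflows if M too small
    eig += loglik(y, th) - np.log(evidence)
eig /= N
# Analytic value for this linear-Gaussian model: 0.5*log(1 + 1/SIGMA**2) ~ 2.31
```

The Laplace-based importance sampling of the paper replaces the prior samples in the inner loop with samples concentrated near the posterior mode, which is what removes the underflow problem and shrinks the required M.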

  5. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    Science.gov (United States)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.

  6. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    Science.gov (United States)

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text with all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The design principles were that the short text should include the 21 initials and 39 finals, thereby covering all the phonemes in Mandarin, and that the text should be meaningful. A short text was produced. It had 155 Chinese words and included 21 initials and 38 finals (the final ê was not included because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity of the sample and population (r = 0.742, P < 0.05), while the constituent ratios of the tones presented in this short text were statistically not similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all phonemes in Mandarin was produced. The constituent ratios of the initials and finals presented in this short text are similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.

  7. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently; in bird surveys, doing so could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.
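The trade-off between stationary and relocated nets can be sketched with a crude geometric decline model. Treating the reported day-1-to-day-2 drop as a constant per-day decline is an assumption made here for illustration, as is the day-1 capture rate:

```python
def total_captures(day1_rate, decline, days, move_nets):
    """Expected captures over consecutive netting days. With stationary
    nets, captures fall by `decline` each day (a crude geometric reading
    of the reported day-to-day drops; rates are illustrative)."""
    total, rate = 0.0, float(day1_rate)
    for _ in range(days):
        total += rate
        if not move_nets:        # shyness builds only at a fixed location
            rate *= 1.0 - decline
    return total

bats_fixed = total_captures(100, 0.47, 4, move_nets=False)
bats_moved = total_captures(100, 0.47, 4, move_nets=True)
```

Comparing `bats_moved` against `total_captures(100, 0.47, 3, move_nets=False)`-style scenarios reproduces the paper's point: relocation only pays off when it does not cost netting days.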

  8. Design of a sample acquisition system for the Mars exobiological penetrator

    Science.gov (United States)

    Thomson, Ron; Gwynne, Owen

    1988-01-01

    The Mars Exobiological Penetrator will be embedded into several locations on the Martian surface. It contains various scientific instruments, such as an Alpha-Particle Instrument (API), Differential Scanning Calorimeter (DSC), Evolved Gas Analyzer (EGA) and accelerometers. A sample is required for analysis in the API and DSC. To avoid impact-contaminated material, this sample must be taken from soil more than 2 cm away from the penetrator shell. This study examines the design of a dedicated sampling system including deployment, suspension, fore/after body coupling, sample gathering and placement. To prevent subsurface material from entering the penetrator sampling compartment during impact, a plug is placed in the exit hole of the wall. A U-lever device is used to hold this plug in the penetrator wall. The U-lever rotates upon initial motion of the core-grinder mechanism (CGM), releasing the plug. Research points to a combination of coring and grinding as a plausible solution to the problem of dry drilling. The CGM, driven by two compressed springs, will be deployed along a tracking system. A slowly varying load (i.e., springs) is favored over a fixed-displacement motion because of its adaptability to different material hardness. However, to accommodate sampling in a low density soil, two dashpots set a maximum transverse velocity. In addition, minimal power use is achieved by unidirectional motion of the CGM. The sample will be transported to the scientific instruments by means of a sample placement tray that is driven by a compressed spring to avoid unnecessary power usage. This paper also explores possible modifications for size, weight, and time as well as possible future studies.

  9. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images six-fold. The results reported in this paper significantly shortened acquisition time and improved the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  10. A liquid scintillation counter specifically designed for samples deposited on a flat matrix

    International Nuclear Information System (INIS)

    Potter, C.G.; Warner, G.T.

    1986-01-01

    A prototype liquid scintillation counter has been designed to count samples deposited as a 6×16 array on a flat matrix. Applications include the counting of labelled cells processed by a cell harvester from 96-well microtitration plates onto glass fibre filters, and of DNA samples directly deposited onto nitrocellulose or nylon transfer membranes (e.g. 'Genescreen', NEN) for genetic studies by dot-blot hybridisation. The whole filter is placed in a bag with 4-12 ml of scintillant, sufficient to count all 96 samples. Nearest-neighbour intersample cross-talk ranged from 0.004% for 3H to 0.015% for 32P. Background was 1.4 counts/min for glass fibre and 0.7 counts/min for 'Genescreen' in the 3H channel; for 14C the respective figures were 5.3 and 4.3 counts/min. Counting efficiency for 3H-labelled cells on glass fibre was 54% (E2/B = 2053) and 26% for tritiated thymidine spotted on 'Genescreen' (E2/B = 980). Similar 14C samples gave figures of 97% (E2/B = 1775) and 81% (E2/B = 1526), respectively. Electron emission counting from samples containing 125I and 51Cr was also possible. (U.K.)

  11. Design of a Clean Room for Quality Control of an Environmental Sampling in KINAC

    International Nuclear Information System (INIS)

    Yoon, Jongho; Ahn, Gil Hoon; Seo, Hana; Han, Kitek; Park, Il Jin

    2014-01-01

    The objective of environmental sampling and analysis for safeguards is to characterize the nuclear materials handled and the activities conducted at specific locations. KINAC is responsible for the conclusions drawn from the analytical results provided by the analytical laboratories. To assure the continuity of the quality of the analytical results provided by the laboratories, KINAC will implement a quality control (QC) programme. One element of the QC programme is the preparation of QC samples. Establishing a clean room is necessary for handling QC samples because contamination must be stringently controlled. KINAC designed a clean facility with a cleanliness of ISO Class 6, the Clean Room for Estimation and Assay of trace Nuclear materials (CREAN), to meet the conflicting requirements of a clean room and of nuclear material handling under Korean law. The facility is expected to acquire a radiation safety license under these conditions this year, with continued improvements to follow. Construction of the CREAN facility will be completed by the middle of 2015. The clean room is essential to the QC programme and will support not only the quality control system for the national environmental sampling programme but also the application of environmental sample analysis techniques to nuclear forensics.

  12. Dealing with trade-offs in destructive sampling designs for occupancy surveys.

    Directory of Open Access Journals (Sweden)

    Stefano Canessa

    Occupancy surveys should be designed to minimise false absences. This is commonly achieved by increasing replication or increasing the efficiency of surveys. In the case of destructive sampling designs, in which searches of individual microhabitats represent the repeat surveys, minimising false absences leads to an inherent trade-off. Surveyors can sample more low-quality microhabitats, bearing the resultant financial costs and producing wider-spread impacts, or they can target high-quality microhabitats where the focal species is more likely to be found and risk more severe impacts on local habitat quality. We show how this trade-off can be solved with a decision-theoretic approach, using the Millewa Skink Hemiergis millewae from southern Australia as a case study. Hemiergis millewae is an endangered reptile that is best detected using destructive sampling of grass hummocks. Within sites that were known to be occupied by H. millewae, logistic regression modelling revealed that lizards were more frequently detected in large hummocks. If this model is an accurate representation of the detection process, searching large hummocks is more efficient and requires less replication, but this strategy also entails destruction of the best microhabitats for the species. We developed an optimisation tool to calculate the minimum combination of the number and size of hummocks to search to achieve a given cumulative probability of detecting the species at a site, incorporating weights to reflect the sensitivity of the results to a surveyor's priorities. The optimisation showed that placing high weight on minimising volume necessitates impractical replication, whereas placing high weight on minimising replication requires searching very large hummocks, which are less common and may be vital for H. millewae. While destructive sampling methods are sometimes necessary, surveyors must be conscious of the ecological impacts of these methods.
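The replication side of the trade-off follows from the cumulative detection probability 1-(1-p)^n, and the required number of searches can be sketched as below. The per-search detection probabilities are illustrative assumptions, not the study's fitted values:

```python
from math import ceil, log

def min_searches(p_detect, target=0.95):
    """Minimum number of independent microhabitat searches needed for the
    cumulative detection probability 1-(1-p)^n to reach `target`
    (per-search probabilities below are illustrative, not the study's)."""
    return ceil(log(1.0 - target) / log(1.0 - p_detect))

small_hummocks = min_searches(0.10)   # low per-search detectability
large_hummocks = min_searches(0.40)   # higher detectability, higher impact
```

The contrast (29 small searches versus 6 large ones under these assumed probabilities) is exactly the trade-off the optimisation tool weighs against the ecological cost of destroying large hummocks.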

  13. Sampling design considerations for demographic studies: a case of colonial seabirds

    Science.gov (United States)

    Kendall, William L.; Converse, Sarah J.; Doherty, Paul F.; Naughton, Maura B.; Anders, Angela; Hines, James E.; Flint, Elizabeth

    2009-01-01

    For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field.

  14. Sample Processor for Life on Icy Worlds (SPLIce): Design and Test Results

    Science.gov (United States)

    Chinn, Tori N.; Lee, Anthony K.; Boone, Travis D.; Tan, Ming X.; Chin, Matthew M.; McCutcheon, Griffin C.; Horne, Mera F.; Padgen, Michael R.; Blaich, Justin T.; Forgione, Joshua B.

    2017-01-01

We report the design, development, and testing of the Sample Processor for Life on Icy Worlds (SPLIce) system, a microfluidic sample processor to enable autonomous detection of signatures of life and measurements of habitability parameters in Ocean Worlds. This monolithic fluid processing-and-handling system (Figure 1; mass 0.5 kg) retrieves a 50-µL-volume sample and prepares it to supply a suite of detection instruments, each with unique preparation needs. SPLIce has potential applications in orbiter missions that sample ocean plumes, such as those found at Saturn's icy moon Enceladus, or landed missions on the surface of icy satellites, such as Jupiter's moon Europa. Answering the question "Are we alone in the universe?" is captivating and exceptionally challenging. Even general criteria that define life very broadly include a significant role for water [1,2]. Searches for extinct or extant life therefore prioritize locations of abundant water, whether in ancient (Mars) or present (Europa and Enceladus) times. Only two previous planetary missions had onboard fluid processing: the Viking Biology Experiments [3] and Phoenix's Wet Chemistry Laboratory (WCL) [4]. SPLIce differs crucially from those systems in its capability to process and distribute µL-volume samples and its integrated autonomous control of a wide range of fluidic functions, including: 1) retrieval of fluid samples from an evacuated sample chamber; 2) onboard multi-year storage of dehydrated reagents; 3) integrated pressure, pH, and conductivity measurement; 4) filtration and retention of insoluble particles for microscopy; 5) dilution or vacuum-driven concentration of samples to accommodate instrument working ranges; 6) removal of gas bubbles from sample aliquots; 7) unidirectional flow (check valves); 8) active flow-path selection (solenoid-actuated valves); 9) metered pumping in 100 nL volume increments. The SPLIce manifold, made of three thermally fused layers of precision-machined cyclo

  15. Within-otolith variability in chemical fingerprints: implications for sampling designs and possible environmental interpretation.

    Directory of Open Access Journals (Sweden)

    Antonio Di Franco

Largely used as a natural biological tag in studies of dispersal/connectivity of fish, otolith elemental fingerprinting is usually analyzed by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). LA-ICP-MS produces an elemental fingerprint at a discrete time-point in the life of a fish and can generate data on within-otolith variability of that fingerprint. The presence of within-otolith variability has been previously acknowledged but not incorporated into experimental designs, on the presumed, but untested, grounds of both its negligibility compared to among-otolith variability and of spatial autocorrelation among multiple ablations within an otolith. Here, using a hierarchical sampling design of spatial variation at multiple scales in otolith chemical fingerprints for two Mediterranean coastal fishes, we explore: (1) whether multiple ablations within an otolith can be used as independent replicates for significance tests among otoliths, and (2) the implications of incorporating within-otolith variability when assessing spatial variability in otolith chemistry at a hierarchy of spatial scales (different fish, from different sites, at different locations on the Apulian Adriatic coast). We find that multiple ablations along the same daily rings do not necessarily exhibit spatial dependency within the otolith and can be used to estimate residual variability in a hierarchical sampling design. Inclusion of within-otolith measurements reveals that individuals at the same site can show significant variability in elemental uptake. Within-otolith variability examined across the spatial hierarchy identifies differences between the two fish species investigated, and this finding leads to discussion of the potential for within-otolith variability to be used as a marker for fish exposure to stressful conditions. We also demonstrate that a 'cost'-optimal allocation of sampling effort should typically include some level of within
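As a rough illustration of how within- and among-otolith variance components separate in a balanced hierarchical design like the one described above, here is a minimal sketch; the variance scales and sample sizes are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balanced two-level design: otoliths with repeated ablations inside
# each. The variance scales and sample sizes are illustrative assumptions.
n_otoliths, n_ablations = 30, 5
sigma_among, sigma_within = 1.0, 0.5

otolith_means = rng.normal(10.0, sigma_among, size=n_otoliths)
data = otolith_means[:, None] + rng.normal(0.0, sigma_within,
                                           size=(n_otoliths, n_ablations))

# Method-of-moments (one-way random effects) variance decomposition
ms_within = data.var(axis=1, ddof=1).mean()              # residual mean square
ms_among = n_ablations * data.mean(axis=1).var(ddof=1)   # among-otolith mean square
var_among = (ms_among - ms_within) / n_ablations

print(f"within-otolith variance estimate: {ms_within:.2f} (simulation truth 0.25)")
print(f"among-otolith variance estimate:  {var_among:.2f} (simulation truth 1.00)")
```

Treating each ablation as a residual-level replicate, as the abstract argues is sometimes legitimate, is what makes the within-otolith mean square estimable at all.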

  16. Design and construction of a prototype vaporization calorimeter for the assay of radioisotopic samples

    International Nuclear Information System (INIS)

    Tormey, T.V.

    1979-10-01

A prototype vaporization calorimeter has been designed and constructed for use in the assay of low-power-output radioisotopic samples. The prototype calorimeter design was based on that of a previous experimental instrument used by H.P. Stephens to establish the feasibility of the vaporization calorimetry technique for this type of power measurement. The calorimeter is composed of a mechanical calorimeter assembly together with a data acquisition and control system. Detailed drawings of the calorimeter assembly are included and additional drawings are referenced. The data acquisition system is based on an HP 9825A programmable calculator. A description of the hardware is provided together with a listing of all system software programs. The operating procedure is outlined, including initial setup and operation of all related equipment. Preliminary system performance was evaluated by making a series of four measurements on two nominal 1.5 W samples and on a nominal 0.75 W sample. Data for these measurements indicate that the absolute accuracy (one standard deviation) is approximately 0.0035 W in this power range, resulting in an estimated relative one-standard-deviation accuracy of 0.24% at 1.5 W and 0.48% at 0.75 W.

  17. DESIGN AND CALIBRATION OF A VIBRATING SAMPLE MAGNETOMETER: CHARACTERIZATION OF MAGNETIC MATERIALS

    Directory of Open Access Journals (Sweden)

    Freddy P. Guachun

    2018-01-01

This paper presents the process followed in the implementation of a vibrating sample magnetometer (VSM), constructed with materials commonly found in an electromagnetism laboratory. It describes the design, construction, calibration, and use of the instrument in the characterization of some magnetic materials. A VSM measures the magnetic moment of a sample as it is vibrated perpendicular to a uniform magnetic field; magnetization and magnetic susceptibility can be determined from these readings. The instrument stands out for its simplicity, versatility, and low cost, yet it is very sensitive and capable of eliminating or minimizing many sources of error found in other measurement methods, allowing very accurate and reliable results to be obtained. Its operation is based on the Lenz-Faraday law of magnetic induction and consists in measuring the voltage induced in detection coils by the variation of the magnetic flux that crosses them. The calibration of the VSM was performed by means of a standard sample (magnetite) and verified by means of a test sample (nickel).

  18. The Design of Sample Driver System for Gamma Irradiator Facility at Thermal Column of Kartini Reactor

    International Nuclear Information System (INIS)

    Suyamto; Tasih Mulyono; Setyo Atmojo

    2007-01-01

The design and construction of a sample driver system for the gamma irradiator facility at the thermal column of the Kartini reactor (post operation) has been carried out. The design is based on the space available in the thermal column and on the sample rotation speed, which must be as low as possible so that the irradiation process is more homogeneous. The electrical and mechanical calculations were done after selecting the electric motor and transmission system to be applied. Assuming a maximum sample weight of 50 kg, the electric motor was specified by its rating: a single-phase induction motor, run-capacitor type, 0.5 HP, 220 V, 3.61 A, CCW and CW, rotation speed 1430 rpm. To achieve the low load rotation speed, the motor speed was reduced in two stages, using a conical reduction gear with a reduction ratio of 3.9 and a thread reduction gear with a reduction ratio of 60. From the calculation it is found that the motor power is 118.06 W and the sample rotation speed is 6.11 rpm at the no-load motor speed of 1430 rpm. Tests with loads up to 75 kg showed that the device operates well in both directions, with average motor and load speeds of 1486 rpm and 6.3 rpm, respectively, giving slips of 0.268% and 0.314% for the no-load and full-load conditions. The difference in motor input current between the no-load and full-load conditions is relatively small (0.14 A). The safety factor of the motor is 316%, which corresponds to a load of 158 kg. (author)
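The stated sample speed follows directly from the two reduction stages; a quick check of the arithmetic, using the values given in the record:

```python
# Values from the record: 1430 rpm motor, two reduction stages (3.9 and 60)
motor_rpm = 1430.0
conical_ratio = 3.9
thread_ratio = 60.0

total_reduction = conical_ratio * thread_ratio   # combined reduction ratio
sample_rpm = motor_rpm / total_reduction
print(f"total reduction {total_reduction:.0f}:1 -> sample speed {sample_rpm:.2f} rpm")
```

1430 / (3.9 × 60) = 1430 / 234 ≈ 6.11 rpm, matching the abstract's calculated sample rotation speed.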

  19. A two-phase sampling design for increasing detections of rare species in occupancy surveys

    Science.gov (United States)

    Pacifici, Krishna; Dorazio, Robert M.; Dorazio, Michael J.

    2012-01-01

1. Occupancy estimation is a commonly used tool in ecological studies owing to the ease with which data can be collected and the large spatial extent that can be covered. One major obstacle to using an occupancy-based approach is the complications associated with designing and implementing an efficient survey. These logistical challenges become magnified when working with rare species, when effort can be wasted in areas with few or no individuals. 2. Here, we develop a two-phase sampling approach that mitigates these problems by using a design that places more effort in areas with a higher predicted probability of occurrence. We compare our new sampling design to traditional single-season occupancy estimation under a range of conditions and population characteristics. We develop an intuitive measure of predictive error to compare the two approaches and use simulations to assess the relative accuracy of each approach. 3. Our two-phase approach exhibited lower predictive error rates compared to the traditional single-season approach in highly spatially correlated environments. The difference was greatest when detection probability was high (0.75) regardless of the habitat or sample size. When the true occupancy rate was below 0.4 (0.05-0.4), we found that allocating 25% of the sample to the first phase resulted in the lowest error rates. 4. In the majority of scenarios, the two-phase approach showed lower error rates than the traditional single-season approach, suggesting our new approach is fairly robust to a broad range of conditions and design factors and merits use under a wide variety of settings. 5. Synthesis and applications. Conservation and management of rare species are a challenging task facing natural resource managers. It is critical for studies involving rare species to efficiently allocate effort and resources as they are usually of a finite nature. We believe our approach provides a framework for optimal allocation of effort while
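A minimal simulation sketch of the two-phase idea: spend a fraction of the survey budget at random, then target the remaining effort at sites with high predicted occupancy. The landscape, occupancy model, detection probability, and budget are all made-up illustrations (and, for simplicity, the sketch ranks sites by true occupancy probability rather than a model fitted to phase-1 data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites, visits, p_detect, budget = 200, 3, 0.75, 80

def simulate_once():
    # hypothetical habitat covariate driving occupancy (rare species: low mean psi)
    covariate = rng.uniform(-2, 2, n_sites)
    psi = 1 / (1 + np.exp(-(-1.5 + 1.2 * covariate)))
    occupied = rng.random(n_sites) < psi

    def detections(idx):
        det = rng.random((len(idx), visits)) < p_detect
        return (occupied[idx][:, None] & det).any(axis=1).sum()

    # single-phase: simple random sample of sites
    single = detections(rng.choice(n_sites, budget, replace=False))
    # two-phase: 25% of the budget at random, the rest on highest predicted psi
    phase1 = rng.choice(n_sites, budget // 4, replace=False)
    rest = np.setdiff1d(np.arange(n_sites), phase1)
    phase2 = rest[np.argsort(psi[rest])[::-1][:budget - budget // 4]]
    return single, detections(phase1) + detections(phase2)

results = np.array([simulate_once() for _ in range(200)])
single_mean, two_mean = results.mean(axis=0)
print(f"mean sites with detections - single-phase: {single_mean:.1f}, "
      f"two-phase: {two_mean:.1f}")
```

Under these assumptions the targeted second phase yields more detections per unit effort, which is the intuition behind the design.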

  20. Design of cross-sensitive temperature and strain sensor based on sampled fiber grating

    Directory of Open Access Journals (Sweden)

    Zhang Xiaohang

    2017-02-01

In this paper, a cross-sensitive temperature and strain sensor based on a sampled fiber grating is designed. Its temperature measurement range is -50 to 200 °C, and the strain measurement range is 0 to 2,000 με. The characteristics of the sensor are obtained using a simulation method. Utilizing SPSS software, we established the dual-parameter matrix equations for the measurement of temperature and strain and calibrated the four sensing coefficients of the matrix equations.
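The dual-parameter measurement amounts to inverting a 2x2 matrix equation relating the two observed wavelength shifts to temperature and strain. A sketch with purely hypothetical sensing coefficients (the paper's four calibrated values are not reproduced here):

```python
import numpy as np

# Hypothetical sensing coefficients: wavelength shift (pm) per degree C and
# per microstrain for the two channels of the sampled grating. The paper's
# calibrated values are not reproduced here.
K = np.array([[10.0, 1.2],
              [ 6.5, 0.9]])

true_dT, true_deps = 80.0, 500.0              # within the stated measurement ranges
shifts = K @ np.array([true_dT, true_deps])   # simulated observed shifts (pm)

# Solving the dual-parameter matrix equation recovers both quantities at once,
# which is how cross-sensitivity is separated.
dT, deps = np.linalg.solve(K, shifts)
print(f"recovered dT = {dT:.1f} C, strain = {deps:.1f} microstrain")
```

The approach only works when the coefficient matrix is well-conditioned, i.e., the two channels respond to temperature and strain in sufficiently different ratios.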

  1. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points.
The selection of sample point locations has been done using
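A sketch of the quantile-based stratification step described above, with synthetic covariates standing in for the DEM-derived land-surface parameters; the distributions, class counts (3 × 3 × 4 here, rather than the study's 30 classes), and per-stratum selection sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raster cells with two land-surface parameters and a geology code;
# all distributions below are assumptions for illustration.
n_cells = 5000
twi = rng.gamma(3.0, 2.0, n_cells)              # topographic wetness index
radiation = rng.normal(1200.0, 150.0, n_cells)  # potential solar radiation
geology = rng.integers(0, 4, n_cells)           # four main geological units

def quantile_class(x, k):
    """Assign each value to one of k equal-frequency (quantile) classes."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    return np.searchsorted(edges, x)

# Combine 3 x 3 quantile classes with the 4 geological units into strata
strata = (quantile_class(twi, 3) * 3 + quantile_class(radiation, 3)) * 4 + geology

# First stage: pick up to six sampling units per stratum at random
first_stage = {s: rng.choice(np.flatnonzero(strata == s),
                             size=min(6, int((strata == s).sum())),
                             replace=False)
               for s in np.unique(strata)}
print(f"{len(first_stage)} strata, "
      f"{sum(len(v) for v in first_stage.values())} first-stage selections")
```

The second stage (random point locations inside each selected unit) and the exclusion buffers would be applied on top of this selection.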

  2. A computational study of a fast sampling valve designed to sample soot precursors inside a forming diesel spray plume

    International Nuclear Information System (INIS)

    Dumitrescu, Cosmin; Puzinauskas, Paulius V.; Agrawal, Ajay K.; Liu, Hao; Daly, Daniel T.

    2009-01-01

    Accurate chemical reaction mechanisms are critically needed to fully optimize combustion strategies for modern internal-combustion engines. These mechanisms are needed to predict emission formation and the chemical heat release characteristics for traditional direct-injection diesel as well as recently-developed and proposed variant combustion strategies. Experimental data acquired under conditions representative of such combustion strategies are required to validate these reaction mechanisms. This paper explores the feasibility of developing a fast sampling valve which extracts reactants at known locations in the spray reaction structure to provide these data. CHEMKIN software is used to establish the reaction timescales which dictate the required fast sampling capabilities. The sampling process is analyzed using separate FLUENT and CHEMKIN calculations. The non-reacting FLUENT CFD calculations give a quantitative estimate of the sample quantity as well as the fluid mixing and thermal history. A CHEMKIN reactor network has been created that reflects these mixing and thermal time scales and allows a theoretical evaluation of the quenching process

  3. Design of Field Experiments for Adaptive Sampling of the Ocean with Autonomous Vehicles

    Science.gov (United States)

    Zheng, H.; Ooi, B. H.; Cho, W.; Dao, M. H.; Tkalich, P.; Patrikalakis, N. M.

    2010-05-01

    Due to the highly non-linear and dynamical nature of oceanic phenomena, the predictive capability of various ocean models depends on the availability of operational data. A practical method to improve the accuracy of the ocean forecast is to use a data assimilation methodology to combine in-situ measured and remotely acquired data with numerical forecast models of the physical environment. Autonomous surface and underwater vehicles with various sensors are economic and efficient tools for exploring and sampling the ocean for data assimilation; however there is an energy limitation to such vehicles, and thus effective resource allocation for adaptive sampling is required to optimize the efficiency of exploration. In this paper, we use physical oceanography forecasts of the coastal zone of Singapore for the design of a set of field experiments to acquire useful data for model calibration and data assimilation. The design process of our experiments relied on the oceanography forecast including the current speed, its gradient, and vorticity in a given region of interest for which permits for field experiments could be obtained and for time intervals that correspond to strong tidal currents. Based on these maps, resources available to our experimental team, including Autonomous Surface Craft (ASC) are allocated so as to capture the oceanic features that result from jets and vortices behind bluff bodies (e.g., islands) in the tidal current. Results are summarized from this resource allocation process and field experiments conducted in January 2009.

  4. A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling

    Science.gov (United States)

    Gentry, Diana; Guarro, Marcello; Demachkie, Isabella Siham; Stumfall, Isabel; Dahlgren, Robert P.

    2017-01-01

Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank.
Collected samples can be returned to the lab for

  5. Predictive Sampling of Rare Conformational Events in Aqueous Solution: Designing a Generalized Orthogonal Space Tempering Method.

    Science.gov (United States)

    Lu, Chao; Li, Xubin; Wu, Dongsheng; Zheng, Lianqing; Yang, Wei

    2016-01-12

    analysis suggests that because essential conformational events are mainly driven by the compensating fluctuations of essential solute-solvent and solute-solute interactions, commonly employed "predictive" sampling methods are unlikely to be effective on this seemingly "simple" system. The gOST development presented in this paper illustrates how to employ the OSS scheme for physics-based sampling method designs.

  6. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    Science.gov (United States)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
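A toy version of the Spatial Simulated Annealing step, using mean nearest-gauge distance as a crude stand-in objective for the space-time averaged KED variance; the gauge count, move size, and cooling schedule are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Prediction grid and a proxy criterion: mean distance to the nearest gauge
# (a stand-in for the space-time averaged kriging variance minimised in the study).
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                            np.linspace(0, 1, 25)), axis=-1).reshape(-1, 2)

def criterion(gauges):
    d = np.linalg.norm(grid[:, None, :] - gauges[None, :, :], axis=-1)
    return d.min(axis=1).mean()

current = rng.random((10, 2))        # 10 candidate rain-gauge locations
cur_val = criterion(current)
best_val = cur_val
temp = 0.01                          # initial temperature (assumed)
for _ in range(2000):                # Spatial Simulated Annealing loop
    cand = current.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0, 0.05, 2), 0, 1)  # shift one gauge
    c = criterion(cand)
    if c < cur_val or rng.random() < np.exp((cur_val - c) / temp):
        current, cur_val = cand, c   # accept (occasionally uphill)
        best_val = min(best_val, cur_val)
    temp *= 0.999                    # geometric cooling
print(f"optimised criterion: {best_val:.3f}")
```

In the study the objective is far more expensive (repeated KED with a recalibrated non-stationary variance model), but the accept/perturb/cool structure of SSA is the same.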

  7. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA)

    Directory of Open Access Journals (Sweden)

    Mauricio Teixeira Leite de Vasconcellos

    2015-05-01

The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with the other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school, three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated as the product of the reciprocals of the inclusion probabilities at each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata, by sex and age.
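A minimal sketch of the weighting logic described above, with hypothetical inclusion probabilities and enrolment projections (the real ERICA values are not reproduced here):

```python
# Hypothetical three-stage inclusion probabilities for one sampled class:
# school (PPS stage), shift-grade combination within school, class within combination.
p_school = 0.05      # probability the school entered the sample
p_combo = 3 / 12     # 3 shift-grade combinations chosen out of, say, 12
p_class = 1 / 4      # 1 class chosen out of, say, 4 in that combination

# Design weight: product of the reciprocals of the inclusion probabilities
design_weight = (1 / p_school) * (1 / p_combo) * (1 / p_class)
print(f"design weight: {design_weight:.0f} students represented per respondent")

# Calibration to a (hypothetical) enrolment projection for the stratum; with a
# single weight class this reduces to a ratio adjustment.
projected_total = 250_000
respondents = 740
ratio = projected_total / (design_weight * respondents)
calibrated = design_weight * ratio
print(f"calibrated weight: {calibrated:.1f}")
```

In the actual survey the calibration is done separately by sex and age within each geographical stratum, so the ratio adjustment differs across cells.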

  8. Coastal California's Fog as a Unique Habitable Niche: Design for Autonomous Sampling and Preliminary Aerobiological Characterization

    Science.gov (United States)

    Gentry, Diana; Cynthia Ouandji; Arismendi, Dillon; Guarro, Marcello; Demachkie, Isabella; Crosbie, Ewan; Dadashazar, Hossein; MacDonald, Alex B.; Wang, Zhen; Sorooshian, Armin

    2017-01-01

Just as on the land or in the ocean, atmospheric regions may be more or less hospitable to life. The aerobiosphere, or collection of living things in Earth's atmosphere, is poorly understood due to the small number and ad hoc nature of samples studied. However, we know viable airborne microbes play important roles, such as providing cloud condensation nuclei. Knowing the distribution of such microorganisms and how their activity can alter water, carbon, and other geochemical cycles is key to developing criteria for planetary habitability, particularly for potential habitats with wet atmospheres but little stable surface water. Coastal California has regular, dense fog known to play a major transport role in the local ecosystem. In addition to the significant local (1 km) geographical variation in typical fog, previous studies have found that changes in height above surface of as little as a few meters can yield significant differences in typical concentrations, populations and residence times. No single current sampling platform (ground-based impactors, towers, balloons, aircraft) is capable of accessing all of these regions of interest. A novel passive fog and cloud water sampler, consisting of a lightweight passive impactor suspended from autonomous aerial vehicles (UAVs), is being developed to allow 4D point sampling within a single fog bank, allowing closer study of small-scale (100 m) system dynamics. Fog and cloud droplet water samples from low-altitude aircraft flights in nearby coastal waters were collected and assayed to estimate the required sample volumes, flight times, and sensitivity thresholds of the system under design. 125 cloud water samples were collected from 16 flights of the Center for Interdisciplinary Remotely Piloted Aircraft Studies (CIRPAS) instrumented Twin Otter, equipped with a sampling tube collector, occurring between 18 July and 12 August 2016 below 1 km altitude off the central coast. The collector was flushed first with 70% ethanol

  9. Importance of sampling design and analysis in animal population studies: a comment on Sergio et al

    Science.gov (United States)

    Kery, M.; Royle, J. Andrew; Schmid, Hans

    2008-01-01

1. The use of predators as indicators and umbrellas in conservation has been criticized. In the Trentino region, Sergio et al. (2006; hereafter SEA) counted almost twice as many bird species in quadrats located in raptor territories as in controls. However, SEA detected astonishingly few species. We used contemporary Swiss Breeding Bird Survey data from an adjacent region and a novel statistical model that corrects for overlooked species to estimate the expected number of bird species per quadrat in that region. 2. There are two anomalies in SEA which render their results ambiguous. First, SEA detected on average only 6.8 species, whereas a value of 32 might be expected. Hence, they probably overlooked almost 80% of all species. Secondly, the precision of their mean species counts was greater in two-thirds of cases than in the unlikely case that all quadrats harboured exactly the same number of equally detectable species. This suggests that they detected consistently only a biased, unrepresentative subset of species. 3. Conceptually, expected species counts are the product of true species number and species detectability p. Plenty of factors may affect p, including date, hour, observer, previous knowledge of a site and mobbing behaviour of passerines in the presence of predators. Such differences in p between raptor and control quadrats could have easily created the observed effects. Without a method that corrects for such biases, or without quantitative evidence that species detectability was indeed similar between raptor and control quadrats, the meaning of SEA's counts is hard to evaluate. Therefore, the evidence presented by SEA in favour of raptors as indicator species for enhanced levels of biodiversity remains inconclusive. 4. Synthesis and application. Ecologists should pay greater attention to sampling design and analysis in animal population estimation. Species richness estimation means sampling a community. Samples should be representative for the
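The comment's central arithmetic can be checked directly from the numbers it gives (expected count = true richness × detectability):

```python
# Expected species count = true species number x per-species detectability p.
# Numbers below are taken from the comment above.
expected_richness = 32.0   # model-based expectation per quadrat
observed_mean = 6.8        # SEA's mean species count

implied_p = observed_mean / expected_richness
overlooked = 1.0 - implied_p
print(f"implied detectability p ~ {implied_p:.3f}")
print(f"fraction overlooked ~ {overlooked:.0%}")
```

An implied per-species detectability of about 0.21 is what yields the "almost 80% of all species overlooked" figure quoted in the comment.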

  10. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs.
Finally, we extend our example to specifically
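A toy, one-parameter version of the preposterior idea behind PreDIA: simulate plausible data values, weight a prior ensemble against each simulated datum (bootstrap-filter style), and average the resulting posterior prediction variances over the yet-unknown data. The linear model y = theta·x and all numbers are illustrative assumptions, not the paper's subsurface example:

```python
import numpy as np

rng = np.random.default_rng(11)

# Prior ensemble of the uncertain parameter theta; predict y = theta * x_pred.
ens = rng.normal(1.0, 1.0, 2000)
x_pred, noise_sd = 2.0, 0.5          # prediction location and measurement noise

def expected_posterior_var(x_obs, n_data=200):
    pred = ens * x_pred              # prediction for each ensemble member
    sim = ens * x_obs                # simulated (noise-free) observation
    # plausible data values drawn from the prior-predictive distribution
    data = rng.choice(sim, n_data) + rng.normal(0, noise_sd, n_data)
    post_vars = []
    for d in data:                   # bootstrap-filter weighting per datum
        w = np.exp(-0.5 * ((d - sim) / noise_sd) ** 2)
        w /= w.sum()
        m = (w * pred).sum()
        post_vars.append((w * (pred - m) ** 2).sum())
    return float(np.mean(post_vars))  # marginalised over the unknown data

results = {x: expected_posterior_var(x) for x in (0.5, 1.5)}
for x, v in results.items():
    print(f"measure at x = {x}: expected predictive variance {v:.2f}")
```

Measuring where the response is more sensitive to theta (larger x) yields a lower expected posterior variance, so the design comparison can be made before any data are collected.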

  11. Sampling design and procedures for fixed surface-water sites in the Georgia-Florida coastal plain study unit, 1993

    Science.gov (United States)

    Hatzell, H.H.; Oaksford, E.T.; Asbury, C.E.

    1995-01-01

    The implementation of design guidelines for the National Water-Quality Assessment (NAWQA) Program has resulted in the development of new sampling procedures and the modification of existing procedures commonly used in the Water Resources Division of the U.S. Geological Survey. The Georgia-Florida Coastal Plain (GAFL) study unit began the intensive data collection phase of the program in October 1992. This report documents the implementation of the NAWQA guidelines by describing the sampling design and procedures for collecting surface-water samples in the GAFL study unit in 1993. This documentation is provided for agencies that use water-quality data and for future study units that will be entering the intensive phase of data collection. The sampling design is intended to account for large- and small-scale spatial variations, and temporal variations in water quality for the study area. Nine fixed sites were selected in drainage basins of different sizes and different land-use characteristics located in different land-resource provinces. Each of the nine fixed sites was sampled regularly for a combination of six constituent groups composed of physical and chemical constituents: field measurements, major ions and metals, nutrients, organic carbon, pesticides, and suspended sediments. Some sites were also sampled during high-flow conditions and storm events. Discussion of the sampling procedure is divided into three phases: sample collection, sample splitting, and sample processing. A cone splitter was used to split water samples for the analysis of the sampling constituent groups except organic carbon from approximately nine liters of stream water collected at four fixed sites that were sampled intensively. An example of the sample splitting schemes designed to provide the sample volumes required for each sample constituent group is described in detail. 
Information about onsite sample processing has been organized into a flowchart that describes a pathway for each of

  12. Finding Biomarker Signatures in Pooled Sample Designs: A Simulation Framework for Methodological Comparisons

    Directory of Open Access Journals (Sweden)

    Anna Telaar

    2010-01-01

Full Text Available Detection of discriminating patterns in gene expression data can be accomplished by using various methods of statistical learning. It has been proposed that sample pooling in this context would have negative effects; however, pooling cannot always be avoided. We propose a simulation framework to explicitly investigate the parameters of patterns, experimental design, noise, and choice of method in order to find out which effects on classification performance are to be expected. We use a two-group classification task and simulated gene expression data with independent differentially expressed genes as well as bivariate linear patterns and the combination of both. Our results show a clear increase of prediction error with pool size. For pooled training sets, powered partial least squares discriminant analysis outperforms discriminant analysis, random forests, and support vector machines with linear or radial kernel for two of three simulated scenarios. The proposed simulation approach can be implemented to systematically investigate a number of additional scenarios of practical interest.

  13. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
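A minimal sketch (not the authors' code) of how such a clustered LQAS design can be simulated. It estimates the probability that a 67x3 survey exceeds a decision rule on the case count; the beta-binomial model for intracluster correlation, the ICC of 0.10, and the decision rule of 15 cases are illustrative assumptions, not values from the paper.

```python
import random

def simulate_lqas(n_clusters, cluster_size, prevalence, icc, decision_rule,
                  n_sims=2000, seed=1):
    """Probability that a clustered LQAS survey counts more than
    `decision_rule` cases. Intracluster correlation is induced with a
    beta-binomial model: each cluster draws its own prevalence."""
    rng = random.Random(seed)
    s = (1.0 - icc) / icc              # a + b, so that rho = 1 / (a + b + 1) = icc
    a, b = prevalence * s, (1.0 - prevalence) * s
    above = 0
    for _ in range(n_sims):
        cases = 0
        for _ in range(n_clusters):
            p_cluster = rng.betavariate(a, b)       # cluster-specific prevalence
            cases += sum(rng.random() < p_cluster for _ in range(cluster_size))
        above += cases > decision_rule
    return above / n_sims
```

Running the function at a high and a low true prevalence gives the two operating-characteristic points (sensitivity and false-alarm rate) that define the classification error studied in the paper.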

  14. Sampling effects on the identification of roadkill hotspots: Implications for survey design.

    Science.gov (United States)

    Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António

    2015-10-01

Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of road-kills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted for between 6.8% (bats) and 29.7% (small birds) of road segments, concentrating up to 60% of roadkills (lizards, lagomorphs, carnivores). Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. Spatial accuracy decay with increasing time interval between surveys was higher for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
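The hotspot rule described above (flag segments whose count exceeds a Poisson-based upper limit of the per-segment mean) can be sketched as follows. This is an illustrative reading of the criterion using a 95th-percentile Poisson cutoff; the function names and the example counts are invented for the sketch.

```python
import math

def poisson_upper_quantile(mu, q=0.95):
    """Smallest k with P(X <= k) >= q for X ~ Poisson(mu),
    accumulating the pmf term by term."""
    term = math.exp(-mu)   # P(X = 0)
    cdf, k = term, 0
    while cdf < q:
        k += 1
        term *= mu / k     # P(X = k) from P(X = k - 1)
        cdf += term
    return k

def find_hotspots(counts, q=0.95):
    """Indices of road segments whose roadkill count exceeds the
    Poisson upper quantile of the mean count per segment."""
    mu = sum(counts) / len(counts)
    cut = poisson_upper_quantile(mu, q)
    return [i for i, c in enumerate(counts) if c > cut]

segments = [1, 2, 0, 3, 12, 1, 2, 15, 0, 2]   # hypothetical counts per 500-m segment
hot = find_hotspots(segments)
```

With these toy counts the mean is 3.8, the 95% cutoff is 7, and only the two high-count segments are flagged.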

  15. Design and Use of a Full Flow Sampling System (FFS) for the Quantification of Methane Emissions.

    Science.gov (United States)

    Johnson, Derek R; Covington, April N; Clark, Nigel N

    2016-06-12

The use of natural gas continues to grow with increased discovery and production of unconventional shale resources. At the same time, the natural gas industry faces continued scrutiny for methane emissions from across the supply chain, due to methane's relatively high global warming potential (25-84x that of carbon dioxide, according to the Energy Information Administration). Currently, a variety of techniques with varying uncertainties exists to measure or estimate methane emissions from components or facilities. To date, only one commercial system is available for quantification of component-level emissions, and recent reports have highlighted its weaknesses. In order to improve accuracy and increase measurement flexibility, we have designed, developed, and implemented a novel full flow sampling system (FFS) for quantification of methane emissions and greenhouse gases based on transportation emissions measurement principles. The FFS is a modular system that consists of an explosion-proof blower(s), mass airflow sensor(s) (MAF), thermocouple, sample probe, constant volume sampling pump, laser-based greenhouse gas sensor, data acquisition device, and analysis software. Dependent upon the blower and hose configuration employed, the current FFS is able to achieve a flow rate ranging from 40 to 1,500 standard cubic feet per minute (SCFM). Utilization of laser-based sensors mitigates interference from higher hydrocarbons (C2+). Co-measurement of water vapor allows for humidity correction. The system is portable, with multiple configurations for a variety of applications ranging from being carried by a person to being mounted in a hand-drawn cart, on-road vehicle bed, or the bed of utility terrain vehicles (UTVs). The FFS is able to quantify methane emission rates with a relative uncertainty of ± 4.4%. The FFS has proven real-world operation for the quantification of methane emissions occurring in conventional and remote facilities.

  16. Design and Use of a Full Flow Sampling System (FFS) for the Quantification of Methane Emissions

    Science.gov (United States)

    Johnson, Derek R.; Covington, April N.; Clark, Nigel N.

    2016-01-01

The use of natural gas continues to grow with increased discovery and production of unconventional shale resources. At the same time, the natural gas industry faces continued scrutiny for methane emissions from across the supply chain, due to methane's relatively high global warming potential (25-84x that of carbon dioxide, according to the Energy Information Administration). Currently, a variety of techniques with varying uncertainties exists to measure or estimate methane emissions from components or facilities. To date, only one commercial system is available for quantification of component-level emissions, and recent reports have highlighted its weaknesses. In order to improve accuracy and increase measurement flexibility, we have designed, developed, and implemented a novel full flow sampling system (FFS) for quantification of methane emissions and greenhouse gases based on transportation emissions measurement principles. The FFS is a modular system that consists of an explosion-proof blower(s), mass airflow sensor(s) (MAF), thermocouple, sample probe, constant volume sampling pump, laser-based greenhouse gas sensor, data acquisition device, and analysis software. Dependent upon the blower and hose configuration employed, the current FFS is able to achieve a flow rate ranging from 40 to 1,500 standard cubic feet per minute (SCFM). Utilization of laser-based sensors mitigates interference from higher hydrocarbons (C2+). Co-measurement of water vapor allows for humidity correction. The system is portable, with multiple configurations for a variety of applications ranging from being carried by a person to being mounted in a hand-drawn cart, on-road vehicle bed, or the bed of utility terrain vehicles (UTVs). The FFS is able to quantify methane emission rates with a relative uncertainty of ± 4.4%. The FFS has proven real-world operation for the quantification of methane emissions occurring in conventional and remote facilities. PMID:27341646
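The basic full-flow calculation implied by the description above (total captured flow times the background-corrected methane mole fraction) can be sketched as follows. This is not the instrument's actual data-reduction code; the standard-condition molar volume (24.055 L/mol at 20 °C, 1 atm), the 2 ppm background, and all example numbers are assumptions for illustration.

```python
SCF_TO_LITERS = 28.3168   # one standard cubic foot in liters
MOLAR_VOLUME_L = 24.055   # L/mol of ideal gas at the assumed standard conditions
M_CH4 = 16.04             # g/mol of methane

def methane_rate_g_per_min(flow_scfm, ch4_ppm, background_ppm=2.0):
    """Methane mass emission rate captured by a full-flow sample:
    molar flow of captured air times the excess CH4 mole fraction."""
    mole_fraction = (ch4_ppm - background_ppm) * 1e-6
    mol_per_min = flow_scfm * SCF_TO_LITERS / MOLAR_VOLUME_L
    return mol_per_min * mole_fraction * M_CH4

rate = methane_rate_g_per_min(1000.0, 502.0)        # 1,000 SCFM at 502 ppm CH4
low, high = rate * (1 - 0.044), rate * (1 + 0.044)  # +/- 4.4% relative uncertainty
```

The interval (low, high) applies the ± 4.4% relative uncertainty quoted for the FFS to the computed point estimate.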

  17. The design of high-temperature thermal conductivity measurements apparatus for thin sample size

    Directory of Open Access Journals (Sweden)

    Hadi Syamsul

    2017-01-01

Full Text Available This study presents the design, construction, and validation of a thermal conductivity apparatus using steady-state heat-transfer techniques, with the capability of testing a material at high temperatures. The design is an improvement on the ASTM D5470 standard, in which meter-bars of equal cross-sectional area are used to extrapolate surface temperatures and measure heat transfer across a sample. The apparatus has two meter-bars, each fitted with three thermocouples, and uses a 1,000-watt heater with cooling water to reach steady-state conditions. A pressure of 3.4 MPa was applied over the 113.09 mm2 meter-bar cross-section, with thermal grease to minimize interfacial thermal contact resistance. To determine performance, the apparatus was validated by comparing its results with thermal conductivities obtained with a LINSEIS THB 500. The tests showed thermal conductivities for stainless steel and bronze of 15.28 Wm-1K-1 and 38.01 Wm-1K-1, differing from the THB 500 by −2.55% and 2.49%, respectively. Furthermore, the apparatus can measure thermal conductivity up to a temperature of 400°C, where the result for stainless steel was 19.21 Wm-1K-1, a difference of 7.93%.
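The meter-bar data reduction described above can be sketched as follows: a linear fit of each bar's three thermocouple readings gives the heat flux via Fourier's law and, extrapolated to the bar/sample interfaces, the temperature drop across the sample. The thermocouple positions and temperatures below are synthetic numbers constructed for illustration, not measurements from the apparatus.

```python
def linear_fit(x, y):
    """Least-squares slope and intercept, used to extrapolate interface temperature."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def sample_conductivity(z_hot, t_hot, z_cold, t_cold, z_if_hot, z_if_cold, k_bar):
    """ASTM D5470-style reduction: the temperature gradient in each meter-bar
    gives the heat flux (q'' = -k_bar * dT/dz); extrapolating both fits to the
    interfaces gives the temperature drop across the sample."""
    g_hot, b_hot = linear_fit(z_hot, t_hot)
    g_cold, b_cold = linear_fit(z_cold, t_cold)
    flux = -k_bar * (g_hot + g_cold) / 2.0      # W/m^2, averaged over the two bars
    t_if_hot = b_hot + g_hot * z_if_hot         # extrapolated interface temperatures
    t_if_cold = b_cold + g_cold * z_if_cold
    thickness = z_if_cold - z_if_hot
    return flux * thickness / (t_if_hot - t_if_cold)

# Synthetic steady state: stainless meter-bars (k = 15.28 W/mK), 5-mm sample.
k = sample_conductivity([0.01, 0.02, 0.03], [390.0, 380.0, 370.0],
                        [0.05, 0.06, 0.07], [352.99, 342.99, 332.99],
                        0.04, 0.045, 15.28)
```

With these fabricated temperatures the recovered sample conductivity is close to the bronze value (38.01 W/mK) reported in the abstract, which is how the numbers were chosen.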

  18. Changes in tar yields and cigarette design in samples of Chinese cigarettes, 2009 and 2012.

    Science.gov (United States)

    Schneller, Liane M; Zwierzchowski, Benjamin A; Caruso, Rosalie V; Li, Qiang; Yuan, Jiang; Fong, Geoffrey T; O'Connor, Richard J

    2015-11-01

China is home to the greatest number of smokers as well as the greatest number of smoking-related deaths. An active and growing market of cigarettes marketed as 'light' or 'low tar' may keep health-concerned smokers from quitting, wrongly believing that such brands are less harmful. This study sought to observe changes in cigarette design characteristics and reported tar, nicotine and carbon monoxide (TNCO) levels in a sample of cigarette brands obtained in seven Chinese cities from 2009 to 2012. Cigarettes were purchased and shipped to Roswell Park Cancer Institute, where 91 pairs of packs were selected for physical cigarette design characteristic testing and recording of TNCO values. Data analysis was conducted using SPSS; data were initially characterised using descriptive statistics, correlations and generalised estimating equations to observe changes in brand varieties over time. Reported TNCO values on packs saw mean tar, nicotine and CO levels decrease from 2009 to 2012 by 7.9%, 4.5% and 6.0%, respectively. Ventilation was the only cigarette design feature that significantly changed over time (p<0.001), with an increase of 31.7%. Significant predictors of tar and CO yield overall were ventilation and per-cigarette tobacco weight, while for nicotine, tobacco moisture was also an independent predictor of yield. The use of ventilation to decrease TNCO emissions misleads smokers into believing that they are smoking a 'light/low' tar cigarette that is healthier, and is potentially forestalling the quitting behaviours that would begin to reduce the health burden of tobacco in China, and so should be prohibited.

  19. Design and Demonstration of a Material-Plasma Exposure Target Station for Neutron Irradiated Samples

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Juergen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aaron, A. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bell, Gary L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burgess, Thomas W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ellis, Ronald James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Giuliano, D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Howard, R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kiggans, James O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lessard, Timothy L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ohriner, Evan Keith [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Perkins, Dale E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Varma, Venugopal Koikal [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-10-20

Steady-state heat fluxes of 5–20 MW/m^2 and ion fluxes up to 10^24 m^-2 s^-1. Since PFCs will have to withstand neutron irradiation displacement damage up to 50 dpa, the target station design must accommodate radioactive specimens (materials to be irradiated in HFIR or at SNS) to enable investigations of the impact of neutron damage on materials. Therefore, the system will have to be able to install and extract irradiated specimens using equipment and methods to avoid sample modification, control contamination, and minimize worker dose. Included in the design considerations will be an assessment of all the steps between neutron irradiation and post-exposure materials examination/characterization, as well as an evaluation of the facility hazard categorization. In particular, the factors associated with the acquisition of radioactive specimens and their preparation, transportation, experimental configuration at the plasma-specimen interface, post-plasma-exposure sample handling, and specimen preparation will be evaluated. Neutronics calculations to determine the dose rates of the samples were carried out for a large number of potential plasma-facing materials.

  20. Design and Demonstration of a Material-Plasma Exposure Target Station for Neutron Irradiated Samples

    International Nuclear Information System (INIS)

    Rapp, Juergen; Aaron, A. M.; Bell, Gary L.; Burgess, Thomas W.; Ellis, Ronald James; Giuliano, D.; Howard, R.; Kiggans, James O.; Lessard, Timothy L.; Ohriner, Evan Keith; Perkins, Dale E.; Varma, Venugopal Koikal

    2015-01-01

5-20 MW/m^2 and ion fluxes up to 10^24 m^-2 s^-1. Since PFCs will have to withstand neutron irradiation displacement damage up to 50 dpa, the target station design must accommodate radioactive specimens (materials to be irradiated in HFIR or at SNS) to enable investigations of the impact of neutron damage on materials. Therefore, the system will have to be able to install and extract irradiated specimens using equipment and methods to avoid sample modification, control contamination, and minimize worker dose. Included in the design considerations will be an assessment of all the steps between neutron irradiation and post-exposure materials examination/characterization, as well as an evaluation of the facility hazard categorization. In particular, the factors associated with the acquisition of radioactive specimens and their preparation, transportation, experimental configuration at the plasma-specimen interface, post-plasma-exposure sample handling, and specimen preparation will be evaluated. Neutronics calculations to determine the dose rates of the samples were carried out for a large number of potential plasma-facing materials.

  1. Two specialized delayed-neutron detector designs for assays of fissionable elements in water and sediment samples

    International Nuclear Information System (INIS)

    Balestrini, S.J.; Balagna, J.P.; Menlove, H.O.

    1976-01-01

Two specialized neutron-sensitive detectors are described which are employed for rapid assays of fissionable elements by sensing delayed neutrons emitted by samples after they have been irradiated in a nuclear reactor. The more sensitive of the two detectors, designed to assay for uranium in water samples, is 40% efficient; the other, designed for sediment sample assays, is 27% efficient. These detectors are also designed to operate under water as inexpensive shielding against neutron leakage from the reactor and neutrons from cosmic rays. (Auth.)

  2. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
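The systematic random sampling rule described above (one random start, then equidistant sites) can be sketched in a few lines. The one-dimensional extent and the function name are illustrative; the same idea extends to a grid over a section.

```python
import random

def systematic_random_sites(extent, n_sites, seed=None):
    """Place n_sites sampling sites at equal intervals along an extent,
    offset by a single random start drawn uniformly from the first interval."""
    step = extent / n_sites
    start = random.Random(seed).uniform(0.0, step)
    return [start + i * step for i in range(n_sites)]

sites = systematic_random_sites(1000.0, 20, seed=7)   # 20 sites over 1000 um
```

The single random offset is what makes the design unbiased, while the fixed spacing gives the efficiency that the abstract refers to.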

  3. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    Directory of Open Access Journals (Sweden)

    Shanyou Zhu

    2014-01-01

    Full Text Available Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  4. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    Science.gov (United States)

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
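The stratified estimator favored in the comparison above can be sketched as follows: blocks are grouped into strata (e.g., MODIS-derived hotspot vs. non-hotspot), each stratum is sampled, and the stratum means are scaled by stratum size. The block values and the proportional-allocation rule are illustrative assumptions, not the paper's data.

```python
import random

def stratified_estimate(strata, n_total, seed=0):
    """Estimate a population total by stratified random sampling with
    proportional allocation: sample each stratum, scale the stratum
    sample mean by the stratum size, and sum."""
    rng = random.Random(seed)
    pop = sum(len(units) for units in strata)
    total = 0.0
    for units in strata:
        n_h = max(2, round(n_total * len(units) / pop))   # proportional allocation
        sample = rng.sample(units, min(n_h, len(units)))
        total += len(units) * sum(sample) / len(sample)
    return total

# Hypothetical deforestation (ha) per block: a small hotspot stratum with
# high values and a large non-hotspot stratum with low values.
hotspot = [10.0] * 50
other = [1.0] * 450
estimate = stratified_estimate([hotspot, other], n_total=50)
```

Because the strata here are internally homogeneous, the stratified estimate recovers the true total (950 ha) exactly; the precision gain reported in the abstract comes from exactly this within-stratum homogeneity.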

  5. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    Science.gov (United States)

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support detecting temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the required sample size to detect an annual change of 5% with a statistical power of 80% or 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected spot is the worst strategy, requiring many more individual analyses to achieve the above criteria than the other two approaches. A fixed sampling site requires the smallest sample size but may not be representative of the intended study object (e.g., a lake) and is also sensitive to changes at that particular site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results quantitatively demonstrated the consequences of various sampling strategies and can guide users with respect to the sample sizes required under each sampling design for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
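A minimal sketch (not the authors' simulation) of the power calculation underlying such a design: simulate log-concentrations declining 5% per year, fit a log-linear trend, and count how often the slope is detected. The noise level, the normal critical value of 1.96 (in place of an exact t quantile), and the monitoring horizon are illustrative assumptions.

```python
import math
import random

def trend_power(n_per_year, years=10, annual_change=0.05, sd_log=0.3,
                n_sims=400, seed=42):
    """Fraction of simulated monitoring series in which a log-linear
    regression flags the annual trend (|slope| / SE > 1.96)."""
    rng = random.Random(seed)
    xs = [y for y in range(years) for _ in range(n_per_year)]
    n = len(xs)
    mx = sum(xs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    true_slope = math.log(1.0 - annual_change)   # per-year change on the log scale
    hits = 0
    for _ in range(n_sims):
        ys = [true_slope * x + rng.gauss(0.0, sd_log) for x in xs]
        my = sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        resid_var = sum((y - my - slope * (x - mx)) ** 2
                        for x, y in zip(xs, ys)) / (n - 2)
        se = math.sqrt(resid_var / sxx)
        hits += abs(slope) / se > 1.96
    return hits / n_sims
```

Sweeping `n_per_year` upward until the returned power reaches 0.80 (or 0.90) reproduces the kind of required-sample-size curve the study uses to compare strategies.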

6. Multiobjective Sampling Design for Calibration of Water Distribution Network Model Using Genetic Algorithm and Neural Network

    Directory of Open Access Journals (Sweden)

    Kourosh Behzadian

    2008-03-01

Full Text Available In this paper, a novel multiobjective optimization model is presented for selecting optimal locations in the water distribution network (WDN) with the aim of installing pressure loggers. The pressure data collected at optimal locations will be used later on in the calibration of the proposed WDN model. Objective functions consist of maximization of calibrated model prediction accuracy and minimization of the total cost for sampling design. In order to decrease the model run time, an optimization model has been developed using a multiobjective genetic algorithm and an adaptive neural network (MOGA-ANN). Neural networks (NNs) are initially trained after a number of initial GA generations and periodically retrained and updated after generation of a specified number of full model-analyzed solutions. Trained NNs then replace the full fitness evaluation for some chromosomes as the GA progresses. A cache prevents repeated objective function evaluation of duplicate chromosomes within the GA. Optimal solutions are obtained as a Pareto-optimal front with respect to the two objective functions. Results show that using NNs in MOGA to approximate portions of the chromosomes' fitness in each generation leads to considerable savings in model run time, and the approach is promising for reducing run time in optimization models with significant computational effort.
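The caching step mentioned above (avoiding re-running the expensive hydraulic model for duplicate chromosomes) can be sketched as a simple memoizing wrapper. The wrapper, its names, and the stand-in fitness function are illustrative, not the MOGA-ANN implementation.

```python
def cached_fitness(fitness_fn):
    """Wrap an expensive fitness function with a cache keyed on the
    chromosome, so duplicate chromosomes produced by the GA are not
    re-evaluated by the full model."""
    cache = {}
    calls = {"model": 0}   # count of true model evaluations
    def evaluate(chromosome):
        key = tuple(chromosome)
        if key not in cache:
            calls["model"] += 1
            cache[key] = fitness_fn(chromosome)
        return cache[key]
    evaluate.model_calls = calls
    return evaluate

expensive = cached_fitness(lambda ch: sum(ch))   # stand-in for a full model run
results = [expensive(c) for c in ([1, 0, 1], [0, 1, 1], [1, 0, 1])]
```

Here three evaluations trigger only two model runs, because the repeated chromosome is served from the cache; the NN surrogate in the paper plays a similar cost-saving role for novel chromosomes.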

  7. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
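The interim Bayesian comparison at the heart of such a two-stage design can be sketched with conjugate Beta-binomial posteriors and Monte Carlo: draw from each arm's posterior and estimate the probability that the treatment response rate exceeds the control rate. The uniform Beta(1, 1) priors and the interim counts are illustrative, not the paper's design parameters.

```python
import random

def prob_treatment_better(x_t, n_t, x_c, n_c, a=1.0, b=1.0,
                          draws=20000, seed=3):
    """Monte Carlo estimate of P(p_treatment > p_control) under
    independent Beta(a, b) priors updated with interim binomial data."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(a + x_t, b + n_t - x_t)
        > rng.betavariate(a + x_c, b + n_c - x_c)
        for _ in range(draws)
    )
    return wins / draws

p = prob_treatment_better(40, 50, 20, 50)   # interim: 40/50 vs 20/50 responders
```

A reestimation rule would compare this posterior probability (or its predictive analogue at the planned final sample size) against a conclusiveness boundary to decide whether the second-stage sample size can be reduced.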

  8. Application of binomial and multinomial probability statistics to the sampling design process of a global grain tracing and recall system

    Science.gov (United States)

    Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
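The binomial sampling logic described above can be sketched as a detection sample-size rule: if tracers occur at rate p per grain unit, the smallest sample with detection probability at least c follows from 1 - (1 - p)^n >= c. The function name and the 1-in-100 tracer rate are illustrative assumptions.

```python
import math

def tracer_sample_size(tracer_rate, confidence=0.95):
    """Smallest n with P(at least one tracer among n sampled units) >= confidence,
    from the binomial: 1 - (1 - p)^n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - tracer_rate))

n = tracer_sample_size(0.01)   # tracers embedded at 1 per 100 grain units
```

At a 1% tracer rate, 299 sampled units are needed for 95% confidence of recovering at least one tracer; one fewer unit falls just short.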

  9. Multi-saline sample distillation apparatus for hydrogen isotope analyses: design and accuracy. Water-resources investigations

    International Nuclear Information System (INIS)

    Hassan, A.A.

    1981-04-01

A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 degrees C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated.

  10. An instrument design and sample strategy for measuring soil respiration in the coastal temperate rain forest

    Science.gov (United States)

    Nay, S. M.; D'Amore, D. V.

    2009-12-01

The coastal temperate rainforest (CTR) along the northwest coast of North America is a large and complex mosaic of forests and wetlands located on an undulating terrain ranging from sea level to thousands of meters in elevation. This biome stores a dynamic portion of the total carbon stock of North America. The fate of the terrestrial carbon stock is of concern due to the potential for mobilization and export of this store to both the atmosphere as carbon respiration flux and the ocean as dissolved organic and inorganic carbon flux. Soil respiration is the largest export vector in the system and must be accurately measured to gain any comprehensive understanding of how carbon moves through this system. Suitable monitoring tools capable of measuring carbon fluxes at small spatial scales are essential for our understanding of carbon dynamics at larger spatial scales within this complex assemblage of ecosystems. We have adapted instrumentation and developed a sampling strategy for optimizing replication of soil respiration measurements to quantify differences among spatially complex landscape units of the CTR. We start with the design of the instrument, which reduces the technological, ergonomic, and financial barriers that technicians encounter in monitoring the efflux of CO2 from the soil. Our sampling strategy optimizes the physical efforts of the field work and manages the high variation of flux measurements encountered in this difficult environment of rough terrain, dense vegetation and wet climate. Our soil respirometer incorporates an infra-red gas analyzer (LiCor Inc. LI-820) and an 8300 cm3 soil respiration chamber; the device is durable, lightweight, easy to operate and can be built for under $5000 per unit. The modest unit price allows for a multiple unit fleet to be deployed and operated in an intensive field monitoring campaign.
We use a large 346 cm2 collar to accommodate as much micro spatial variation as feasible and to facilitate repeated measures for tracking
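The closed-chamber flux computation implied by this instrument description can be sketched with the standard ideal-gas reduction, F = (dC/dt) * PV / (RTA), using the chamber (8300 cm3) and collar (346 cm2) sizes from the abstract. The concentration slope, temperature, and pressure below are illustrative field values, not reported data.

```python
R = 8.314  # J mol^-1 K^-1, ideal gas constant

def soil_co2_flux(slope_ppm_per_s, volume_m3, area_m2,
                  pressure_pa=101325.0, temp_k=283.15):
    """Soil CO2 efflux (umol m^-2 s^-1) from the rate of concentration rise
    in a closed chamber: F = (dC/dt) * P * V / (R * T * A).
    ppm/s times moles of air in the chamber gives umol/s directly."""
    mol_in_chamber = pressure_pa * volume_m3 / (R * temp_k)
    return slope_ppm_per_s * mol_in_chamber / area_m2

flux = soil_co2_flux(0.5, 8300e-6, 346e-4)   # 0.5 ppm/s rise in the 8300 cm3 chamber
```

A 0.5 ppm/s rise in this chamber over the 346 cm2 collar corresponds to roughly 5 umol CO2 m^-2 s^-1, a plausible magnitude for forest soils.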

  11. Design Review Report for formal review of safety class features of exhauster system for rotary mode core sampling

    International Nuclear Information System (INIS)

    JANICEK, G.P.

    2000-01-01

This report documents the formal design review conducted on portable exhausters used to support rotary mode core sampling of Hanford underground radioactive waste tanks, with focus on safety-class design features and control requirements for flammable gas environment operation and air discharge permitting compliance.

  12. Design Review Report for formal review of safety class features of exhauster system for rotary mode core sampling

    Energy Technology Data Exchange (ETDEWEB)

    JANICEK, G.P.

    2000-06-08

This report documents the formal design review conducted on portable exhausters used to support rotary mode core sampling of Hanford underground radioactive waste tanks, with focus on safety-class design features and control requirements for flammable gas environment operation and air discharge permitting compliance.

  13. The significance of Sampling Design on Inference: An Analysis of Binary Outcome Model of Children’s Schooling Using Indonesian Large Multi-stage Sampling Data

    OpenAIRE

    Ekki Syamsulhakim

    2008-01-01

This paper examines a rather recent trend in applied microeconometrics, namely the effect of sampling design on statistical inference, especially for binary outcome models. Much theoretical research in econometrics has shown the inappropriateness of applying i.i.d.-assumed statistical analysis to non-i.i.d. data. This research has provided proofs showing that applying i.i.d.-assumed analysis to non-i.i.d. observations would result in inflated standard errors, which could make the esti...
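The clustering adjustment at issue here is often summarized by the Kish design effect, DEFF = 1 + (m - 1) * rho, which inflates i.i.d. standard errors for a multi-stage sample with m observations per cluster and intracluster correlation rho. A minimal sketch, with the cluster size and ICC chosen for illustration:

```python
def design_effect(cluster_size, icc):
    """Kish design effect for cluster sampling: DEFF = 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1) * icc

def cluster_adjusted_se(iid_se, cluster_size, icc):
    """Standard error corrected for within-cluster correlation; the naive
    i.i.d. SE on clustered data understates it by a factor of sqrt(DEFF)."""
    return iid_se * design_effect(cluster_size, icc) ** 0.5

deff = design_effect(20, 0.05)              # e.g., 20 households per enumeration area
se = cluster_adjusted_se(0.010, 20, 0.05)   # naive SE of 0.010 on the same data
```

With 20 observations per cluster and an ICC of 0.05, DEFF is 1.95, so the honest standard error is about 40% larger than the i.i.d. one, which is the inference distortion the paper studies for binary outcome models.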

  14. Osiris-Rex and Hayabusa2 Sample Cleanroom Design and Construction Planning at NASA-JSC

    Science.gov (United States)

    Righter, Kevin; Pace, Lisa F.; Messenger, Keiko

    2018-01-01

The OSIRIS-REx asteroid sample return mission launched toward asteroid Bennu on September 8, 2016. The spacecraft will arrive at Bennu in late 2019, orbit and map the asteroid, and perform a touch and go (TAG) sampling maneuver in July 2020. After confirmation of successful sample stowage, the spacecraft will return to Earth, and the sample return capsule (SRC) will land in Utah in September 2023. Samples will be recovered from Utah and then transported and stored in a new sample cleanroom at NASA Johnson Space Center in Houston. All curation-specific examination and documentation activities related to Bennu samples will be conducted in the dedicated OSIRIS-REx sample cleanroom to be built at NASA-JSC.

  15. Accuracy assessment of the National Forest Inventory map of Mexico: sampling designs and the fuzzy characterization of landscapes

    Directory of Open Access Journals (Sweden)

    Stéphane Couturier

    2009-10-01

    Full Text Available There is no record so far in the literature of a comprehensive method to assess the accuracy of regional scale Land Cover/ Land Use (LCLU maps in the sub-tropical belt. The elevated biodiversity and the presence of highly fragmented classes hamper the use of sampling designs commonly employed in previous assessments of mainly temperate zones. A sampling design for assessing the accuracy of the Mexican National Forest Inventory (NFI map at community level is presented. A pilot study was conducted on the Cuitzeo Lake watershed region covering 400 000 ha of the 2000 Landsat-derived map. Various sampling designs were tested in order to find a trade-off between operational costs, a good spatial distribution of the sample and the inclusion of all scarcely distributed classes (‘rare classes’). A two-stage sampling design, in which the selection of Primary Sampling Units (PSU was done under separate schemes for commonly and scarcely distributed classes, showed the best characteristics. A total of 2,023 point-based secondary sampling units were verified against their NFI map label. Issues regarding the assessment strategy and trends of class confusion are discussed.
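
    The two-stage selection with separate schemes for common and rare classes can be sketched as follows; the class names, PSU counts, and sampling fractions are hypothetical, not those of the Cuitzeo study.

```python
import random

random.seed(1)

# Hypothetical map: primary sampling units (PSUs) labeled with their dominant
# land-cover class; the last two classes are scarcely distributed ('rare').
psus = (["forest"] * 400 + ["agriculture"] * 350 + ["urban"] * 200 +
        ["wetland"] * 40 + ["mangrove"] * 10)
rare = {"wetland", "mangrove"}

common_pool = [i for i, c in enumerate(psus) if c not in rare]
rare_pool = [i for i, c in enumerate(psus) if c in rare]

# Separate selection schemes: a simple random sample of common PSUs, plus a
# much higher sampling fraction for rare PSUs so rare classes are represented.
sample = random.sample(common_pool, 60) + random.sample(rare_pool, 25)

sampled_classes = {psus[i] for i in sample}
print(len(sample), sorted(sampled_classes))
```

    A single-scheme random sample of the same size would frequently miss the rarest class entirely, which is the trade-off the abstract describes.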

  16. An investigation of the effects of relevant samples and a comparison of verification versus discovery based lab design

    Science.gov (United States)

    Rieben, James C., Jr.

    This study focuses on the effects of relevance and lab design on student learning within the chemistry laboratory environment. A general chemistry conductivity of solutions experiment and an upper level organic chemistry cellulose regeneration experiment were employed. In the conductivity experiment, the two main variables studied were the effect of relevant (or "real world") samples on student learning and a verification-based lab design versus a discovery-based lab design. With the cellulose regeneration experiment, the effect of a discovery-based lab design vs. a verification-based lab design was the sole focus. Evaluation surveys consisting of six questions were used at three different times to assess student knowledge of experimental concepts. In the general chemistry laboratory portion of this study, four experimental variants were employed to investigate the effect of relevance and lab design on student learning. These variants consisted of a traditional (or verification) lab design, a traditional lab design using "real world" samples, a new lab design employing real world samples/situations using unknown samples, and the new lab design using real world samples/situations that were known to the student. Data used in this analysis were collected during the Fall 08, Winter 09, and Fall 09 terms. For the second part of this study a cellulose regeneration experiment was employed to investigate the effects of lab design. A demonstration creating regenerated cellulose "rayon" was modified and converted to an efficient and low-waste experiment. In the first variant students tested their products and verified a list of physical properties. In the second variant, students filled in a blank physical property chart with their own experimental results for the physical properties. Results from the conductivity experiment show significant student learning of the effects of concentration on conductivity and how to use conductivity to differentiate solution types with the ...

  17. Are quantitative trait-dependent sampling designs cost-effective for analysis of rare and common variants?

    Science.gov (United States)

    Yilmaz, Yildiz E; Bull, Shelley B

    2011-11-29

    Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We take differential sampling of individuals with informative trait values into account by inverse probability weighting using standard survey methods which thus generalizes to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and worthwhile reduction of sequencing costs.
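
    The QT-dependent selection with inverse probability weighting described above can be sketched in a few lines; the cohort, variant frequency, inclusion probabilities, and effect size below are hypothetical, not the GAW17 data.

```python
import random, math

random.seed(3)

# Hypothetical cohort: quantitative trait y = beta*x + noise, where x in {0,1}
# indicates carrying a variant present in ~5% of subjects; true beta = 1.5.
beta = 1.5
cohort = []
for _ in range(4000):
    x = 1 if random.random() < 0.05 else 0
    y = beta * x + random.gauss(0, 1)
    cohort.append((x, y))

# QT-dependent selection: everyone has a nonzero inclusion probability, but
# extreme phenotypes are proportionately more likely to be sequenced.
def incl_prob(y):
    return 0.9 if abs(y) > 1.5 else 0.3

sample = [(x, y, incl_prob(y)) for x, y in cohort
          if random.random() < incl_prob(y)]

# Inverse-probability-weighted least squares for beta, which generalizes the
# estimate back to the source population despite the biased selection.
w = [1 / p for _, _, p in sample]
sw = sum(w)
mx = sum(wi * x for (x, _, _), wi in zip(sample, w)) / sw
my = sum(wi * y for (_, y, _), wi in zip(sample, w)) / sw
num = sum(wi * (x - mx) * (y - my) for (x, y, _), wi in zip(sample, w))
den = sum(wi * (x - mx) ** 2 for (x, _, _), wi in zip(sample, w))
beta_hat = num / den
print(round(beta_hat, 2))  # should be near the true value 1.5
```

    An unweighted analysis of the same biased sample would overstate the effect; the weights undo the oversampling of extremes.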

  18. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
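
    The operating characteristics the authors compare (type I error, expected sample size) can be estimated for a simple pre-specified design by Monte Carlo; the boundary value, sample size, and trial counts below are hypothetical choices for illustration, not the paper's settings.

```python
import random, math

random.seed(11)

# Monte Carlo sketch of a two-look group sequential design: an interim
# analysis at half the maximal sample size stops early for efficacy if the
# interim z-statistic crosses a pre-specified boundary. The value 2.18
# approximates a two-look Pocock-type boundary for overall alpha = 0.05.
N, B = 100, 2.18
trials = 10000
rejections = 0
total_n = 0
for _ in range(trials):
    data = [random.gauss(0, 1) for _ in range(N)]  # null: no treatment effect
    z_half = sum(data[:N // 2]) / math.sqrt(N // 2)
    if abs(z_half) > B:                            # stop early for efficacy
        rejections += 1
        total_n += N // 2
    else:
        z_full = sum(data) / math.sqrt(N)
        if abs(z_full) > B:
            rejections += 1
        total_n += N

type1 = rejections / trials
avg_n = total_n / trials
print(type1, avg_n)  # type I error near 0.05; expected n below the maximum N
```

    The same simulation loop, run under a range of alternatives and adaptation rules, is the standard way to quantify the efficiency comparisons the abstract reports.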

  19. Design aspects of automation system for initial processing of fecal samples

    International Nuclear Information System (INIS)

    Sawant, Pramilla D.; Prabhu, Supreetha P.; Suja, A.; Wankhede, Sonal; Chaudhary, Seema; Rao, D.D.; Pradeepkumar, K.S.; Das, A.P.; Badodkar, B.D.

    2014-01-01

    The procedure for initial handling of fecal samples at the Bioassay Lab., Trombay is as follows: overnight fecal samples are collected from the worker in a kit consisting of a polythene bag placed in a wide-mouth polythene container closed with an inner lid and a screw cap. The occupational worker collects the sample in the polythene bag. On receipt, the polythene container along with the sample is weighed; the polythene bag containing the fecal sample is lifted out of the container using a pair of tongs, placed inside a crucible, and ashed inside a muffle furnace at 450℃. After complete ashing, the crucible containing white ash is taken up for further radiochemical processing. This paper describes the various steps in developing a prototype automated system for initial handling of fecal samples, intended to automate the above procedure. The system, once developed, will eliminate manual intervention up to the ashing stage and reduce the biological hazard involved in handling such samples.

  20. System design specification for rotary mode core sample trucks No. 2, 3, and 4 programmable logic controller

    International Nuclear Information System (INIS)

    Dowell, J.L.; Akers, J.C.

    1995-01-01

    The system this document describes controls several functions of the Core Sample Truck(s) used to obtain nuclear waste samples from various underground storage tanks at Hanford. The system will monitor the sampling process and provide alarms and other feedback to ensure the sampling process is performed within the prescribed operating envelope. The intended audience for this document is anyone associated with rotary or push mode core sampling. This document describes the Alarm and Control logic installed on Rotary Mode Core Sample Trucks (RMCST) #2, 3, and 4. It is intended to define the particular requirements of the RMCST alarm and control operation (not defined elsewhere) sufficiently for detailed design to implement on a Programmable Logic Controller (PLC).

  1. Representativeness-based sampling network design for the State of Alaska

    Science.gov (United States)

    Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove

    2013-01-01

    Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...

  2. Design of a radioactive gas sampling system for NESHAP compliance measurements of 41Ar

    International Nuclear Information System (INIS)

    Newton, G.J.; McDonald, M.J.; Ghanbari, F.; Hoover, M.D.; Barr, E.B.

    1994-01-01

    United States Department of Energy facilities are required to comply with the U.S. Environmental Protection Agency, National Emission Standard for Hazardous Air Pollutants (NESHAP) 40 CFR, part 61, subpart H. Compliance generally requires confirmatory measurements of emitted radionuclides. Although a number of standard procedures exist for extractive sampling of particle-associated radionuclides, sampling approaches for radioactive gases are less defined. Real-time, flow-through sampling of radioactive gases can be done when concentrations are high compared to interferences from background radiation. Cold traps can be used to collect and concentrate condensible effluents in applications where cryogenic conditions can be established and maintained. Commercially available gas-sampling cylinders can be used to capture grab samples of contaminated air under ambient or compressed conditions, if suitable sampling and control hardware are added to the cylinders. The purpose of the current study was to develop an efficient and compact set of sampling and control hardware for use with commercially available gas-sampling cylinders, and to demonstrate its use in NESHAP compliance testing of 41Ar at two experimental research reactors.

  3. Effects of sampling design on age ratios of migrants captured at stopover sites

    Science.gov (United States)

    Jeffrey F. Kelly; Deborah M. Finch

    2000-01-01

    Age classes of migrant songbirds often differ in migration timing. This difference creates the potential for age-ratios recorded at stopover sites to vary with the amount and distribution of sampling effort used. To test for these biases, we sub-sampled migrant capture data from the Middle Rio Grande Valley of New Mexico. We created data sets that reflected the age...

  4. Exploiting H infinity sampled-data control theory for high-precision electromechanical servo control design

    NARCIS (Netherlands)

    Oomen, T.A.E.; Wal, van de M.M.J.; Bosgra, O.H.

    2006-01-01

    Optimal design of digital controllers for industrial electromechanical servo systems using an Hinf-criterion is considered. Present industrial practice is to perform the control design in the continuous time domain and to discretize the controller a posteriori. This procedure involves unnecessary

  5. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC. Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
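
    The kind of per-gene power calculation such a resource builds on can be sketched with a normal approximation to the two-sample t-test; the effect size, significance level, and sample sizes here are hypothetical illustrations, not PowerAtlas output.

```python
import math

# Approximate power of a two-sided per-gene z-test for a given standardized
# effect size and per-group sample size n. A very small alpha is used as a
# crude stand-in for multiple-testing control across thousands of genes.
def power(effect_size, n, z_alpha=3.2905):
    # z_alpha = 3.2905 is the upper alpha/2 quantile for alpha = 0.001
    se = math.sqrt(2 / n)                           # SE of the mean difference
    z = effect_size / se - z_alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # Phi(z)

for n in (5, 10, 20, 40):
    print(n, round(power(1.0, n), 3))  # power rises with replicate count
```

    Inverting this relationship (searching over n for a target power) is the usual way to turn pilot variability estimates into a required sample size.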

  6. Design and characterization of poly(dimethylsiloxane)-based valves for interfacing continuous-flow sampling to microchip electrophoresis.

    Science.gov (United States)

    Li, Michelle W; Huynh, Bryan H; Hulvey, Matthew K; Lunte, Susan M; Martin, R Scott

    2006-02-15

    This work describes the fabrication and evaluation of a poly(dimethylsiloxane) (PDMS)-based device that enables the discrete injection of a sample plug from a continuous-flow stream into a microchannel for subsequent analysis by electrophoresis. Devices were fabricated by aligning valving and flow channel layers followed by plasma sealing the combined layers onto a glass plate that contained fittings for the introduction of liquid sample and nitrogen gas. The design incorporates a reduced-volume pneumatic valve that actuates (on the order of hundreds of milliseconds) to allow analyte from a continuously flowing sampling channel to be injected into a separation channel for electrophoresis. The injector design was optimized to include a pushback channel to flush away stagnant sample associated with the injector dead volume. The effect of the valve actuation time, the pushback voltage, and the sampling stream flow rate on the performance of the device was characterized. Using the optimized design and an injection frequency of 0.64 Hz showed that the injection process is reproducible (RSD of 1.77%, n = 15). Concentration change experiments using fluorescein as the analyte showed that the device could achieve a lag time as small as 14 s. Finally, to demonstrate the potential uses of this device, the microchip was coupled to a microdialysis probe to monitor a concentration change and sample a fluorescein dye mixture.

  7. Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology

    Science.gov (United States)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal.

  8. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow to stop early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Design Sensitivity Method for Sampling-Based RBDO with Fixed COV

    Science.gov (United States)

    2015-04-29

    Contours of the input model at the initial design d0 and the RBDO optimum design dopt are shown. As the limit state functions are not linear and some input ...

  10. Report: Independent Environmental Sampling Shows Some Properties Designated by EPA as Available for Use Had Some Contamination

    Science.gov (United States)

    Report #15-P-0221, July 21, 2015. Some OIG sampling results showed contamination was still present at sites designated by the EPA as ready for reuse. This was unexpected and could signal a need to implement changes to ensure human health protection.

  11. Design and building of a homemade sample changer for automation of the irradiation in neutron activation analysis technique

    International Nuclear Information System (INIS)

    Gago, Javier; Hernandez, Yuri; Baltuano, Oscar; Bedregal, Patricia; Lopez, Yon; Urquizo, Rafael

    2014-01-01

    Because the RP-10 research reactor operates during weekends, it was necessary to design and build a sample changer for irradiation as part of the automation of the neutron activation analysis technique. The device consists of an aluminum turntable disk which can accommodate 19 polyethylene capsules containing samples to be sent, via the pneumatic transfer system, from the laboratory to the irradiation position. The system is operated by a control switchboard that sends and returns capsules at a variable preset time and by two different routes, allowing the determination of short-, medium- and long-lived radionuclides. Another mechanism, called an 'exchange valve', was designed for changing travel paths (pipelines), allowing the irradiated samples to be stored for a longer time in the reactor hall. The system design has allowed complete automation of this technique, enabling the irradiation of samples without the presence of an analyst. The design, construction and operation of the device are described and presented in this article. (authors).

  12. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Science.gov (United States)

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...
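
    The double-sampling (two-phase) regression estimator this abstract refers to can be sketched as follows; the plot counts, LIDAR metric, and biomass relationship below are hypothetical.

```python
import random

random.seed(5)

# Hypothetical two-phase forest survey: a cheap LIDAR metric x (e.g. mean
# canopy height) is observed on a large first-phase sample of plots, while
# field-measured biomass y is observed only on a small second-phase subsample.
x_all = [random.uniform(5, 30) for _ in range(500)]   # phase 1: LIDAR only
field = random.sample(range(500), 40)                 # phase 2: field plots
xy = [(x_all[i], 2.0 + 4.0 * x_all[i] + random.gauss(0, 5)) for i in field]

# Fit y ~ x on the field subsample
n = len(xy)
mx = sum(x for x, _ in xy) / n
my = sum(y for _, y in xy) / n
b = (sum((x - mx) * (y - my) for x, y in xy)
     / sum((x - mx) ** 2 for x, _ in xy))

# Regression estimator: adjust the field mean using the more precise
# phase-1 mean of the auxiliary LIDAR variable.
x_bar1 = sum(x_all) / len(x_all)
y_reg = my + b * (x_bar1 - mx)
print(round(y_reg, 1))  # estimated mean biomass
```

    When x and y are highly correlated, this estimator has much lower variance than the field-plot mean alone, which is the efficiency gain the abstract describes.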

  13. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Full Text Available Abstract Background Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21, 0.19, and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  14. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21, 0.19, and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
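
    The Metropolis-Hastings search over primer assignments can be illustrated with a toy version of the idea; the target names, candidate primers, cost function, and temperature below are hypothetical simplifications, not the MCMC-ODPR implementation.

```python
import random, math

random.seed(2)

# Toy Metropolis-Hastings search in the spirit of MCMC-ODPR: each target
# region lists candidate primers, and we seek an assignment covering all
# targets while reusing as few distinct primers as possible (lower cost).
candidates = {
    "t1": ["p1", "p2"], "t2": ["p2", "p3"], "t3": ["p2", "p4"],
    "t4": ["p4", "p5"], "t5": ["p5", "p2"], "t6": ["p6", "p2"],
}
targets = list(candidates)

def cost(state):
    return len(set(state.values()))  # distinct primers as a synthesis-cost proxy

state = {t: candidates[t][0] for t in targets}
best = dict(state)
T = 0.5  # fixed temperature for this sketch
for _ in range(2000):
    t = random.choice(targets)
    proposal = dict(state)
    proposal[t] = random.choice(candidates[t])
    delta = cost(proposal) - cost(state)
    # Accept improvements always; accept worse states with small probability
    if delta <= 0 or random.random() < math.exp(-delta / T):
        state = proposal
        if cost(state) < cost(best):
            best = dict(state)

print(cost(best), sorted(set(best.values())))
```

    Occasionally accepting cost-increasing moves lets the chain escape local minima, which is why MCMC is suited to this combinatorial reuse problem.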

  15. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life), therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment have been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0
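
    The half-life estimate at the heart of this study can be sketched with a log-linear fit to the clearance phase; the parasite counts below are hypothetical, and this simple fit omits the lag-phase handling of the WWARN PCE tool.

```python
import math

# During the clearance phase, log parasite density declines roughly linearly,
# so the clearance rate constant is the slope of log(count) vs time and the
# half-life is ln(2) divided by that rate.
times = [0, 6, 12, 18, 24, 30, 36]                  # hours after treatment
counts = [120000, 30000, 8000, 2100, 520, 130, 35]  # parasites per microlitre

logs = [math.log(c) for c in counts]
n = len(times)
mt, ml = sum(times) / n, sum(logs) / n
slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
         / sum((t - mt) ** 2 for t in times))       # least-squares slope
half_life = math.log(2) / -slope
print(round(half_life, 2))  # parasite clearance half-life, in hours
```

    Comparing fits from the full six-hourly series against fits from sparser schedules (e.g. 0, 6, 12, 24 hours) is exactly the design question the study addresses.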

  16. Data-driven soft sensor design with multiple-rate sampled data

    DEFF Research Database (Denmark)

    Lin, Bao; Recke, Bodil; Knudsen, Jørgen K.H.

    2007-01-01

    Multi-rate systems are common in industrial processes where quality measurements have a slower sampling rate than other process variables. Since inter-sample information is desirable for effective quality control, different approaches have been reported to estimate the quality between samples, including numerical interpolation, polynomial transformation, data lifting and weighted partial least squares (WPLS). Two modifications to the original data lifting approach are proposed in this paper: reformulating the extraction of a fast model as an optimization problem and ensuring the desired model properties through Tikhonov regularization. A comparative investigation of the four approaches is performed in this paper. Their applicability, accuracy and robustness to process noise are evaluated on a single-input single-output (SISO) system. The regularized data lifting and WPLS approaches...
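
    The role of Tikhonov regularization in a soft sensor can be sketched with a small ridge regression; the process variables, coefficients, and noise level below are hypothetical, and this is not the paper's data lifting formulation.

```python
import random

random.seed(9)

# Hedged sketch of a regularized soft sensor: fast-rate process variables
# x1, x2 predict a slow-rate quality measurement y; Tikhonov (ridge)
# regularization stabilizes the fitted model against measurement noise.
data = []
for _ in range(30):                        # 30 slow-rate quality samples
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    y = 0.8 * x1 - 0.5 * x2 + random.gauss(0, 0.1)
    data.append((x1, x2, y))

lam = 0.1  # regularization weight
# Solve the regularized normal equations (X'X + lam*I) b = X'y for 2 features
s11 = sum(x1 * x1 for x1, _, _ in data) + lam
s22 = sum(x2 * x2 for _, x2, _ in data) + lam
s12 = sum(x1 * x2 for x1, x2, _ in data)
r1 = sum(x1 * y for x1, _, y in data)
r2 = sum(x2 * y for _, x2, y in data)
det = s11 * s22 - s12 * s12
b1 = (r1 * s22 - r2 * s12) / det
b2 = (r2 * s11 - r1 * s12) / det
print(round(b1, 2), round(b2, 2))  # close to the true 0.8 and -0.5
```

    The fitted model can then predict quality at the fast rate between slow-rate laboratory measurements, which is the purpose of a soft sensor.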

  17. Conceptual design and sampling procedures of the biological programme of NuukBasic

    DEFF Research Database (Denmark)

    Aastrup, Peter; Nymand, Josephine; Raundrup, Katrine

    Arthropods are sampled by means of yellow pitfall traps as well as in window traps. Microarthropods are sampled in soil cores and extracted in an extractor by gradually heating up the soil. The avifauna is monitored with special emphasis on passerine birds. Only few ... Vegetation Index (NDVI). The flux of CO2 is measured in natural conditions as well as in manipulations simulating increased temperature, increased cloud cover, shorter growing season, and longer growing season. The effect of increased UV-B radiation on plant stress is studied by measuring chlorophyll fluorescence in three series of plots.

  18. Algorithm/Architecture Co-design of the Generalized Sampling Theorem Based De-Interlacer.

    NARCIS (Netherlands)

    Beric, A.; Haan, de G.; Sethuraman, R.; Meerbergen, van J.

    2005-01-01

    De-interlacing is a major determinant of image quality in a modern display processing chain. The de-interlacing method based on the generalized sampling theorem (GST)applied to motion estimation and motion compensation provides the best de-interlacing results. With HDTV interlaced input material

  19. Sample requirements and design of an inter-laboratory trial for radiocarbon laboratories

    NARCIS (Netherlands)

    Bryant, C; Carmi, [No Value; Cook, G; Gulliksen, S; Harkness, D; Heinemeier, J; McGee, E; Naysmith, P; Possnert, G; van der Plicht, H; van Strydonck, M; Carmi, Israel

    2000-01-01

    An on-going inter-comparison programme which is focused on assessing and establishing consensus protocols to be applied in the identification, selection and sub-sampling of materials for subsequent C-14 analysis is described. The outcome of the programme will provide a detailed quantification of the

  20. Modified sampling design for age-0 fish electrofishing at beach habitats

    Czech Academy of Sciences Publication Activity Database

    Janáč, Michal; Jurajda, Pavel

    2010-01-01

    Roč. 30, č. 5 (2010), s. 1210-1220 ISSN 0275-5947 R&D Projects: GA MŠk LC522 Institutional research plan: CEZ:AV0Z60930519 Keywords : young-of-the-year * electrofishing * sampling Subject RIV: EH - Ecology, Behaviour Impact factor: 1.203, year: 2010

  1. Sample design and gamma-ray counting strategy of neutron activation system for triton burnup measurements in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jungmin [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Cheon, Mun Seong [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); Chung, Kyoung-Jae, E-mail: jkjlsh1@snu.ac.kr [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Hwang, Y.S. [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of)

    2016-11-01

    Highlights: • Sample design for triton burnup ratio measurement is carried out. • Samples for 14.1 MeV neutron measurements are selected for KSTAR. • Si and Cu are the most suitable materials for d-t neutron measurements. • Appropriate γ-ray counting strategies for each selected sample are established. - Abstract: For the purpose of triton burnup measurements in Korea Superconducting Tokamak Advanced Research (KSTAR) deuterium plasmas, appropriate neutron activation system (NAS) samples for 14.1 MeV d-t neutron measurements have been designed and a gamma-ray counting strategy has been established. Neutronics calculations are performed with the MCNP5 neutron transport code for the KSTAR neutral beam heated deuterium plasma discharges. Based on those calculations and the assumed d-t neutron yield, the activities induced by d-t neutrons are estimated with the inventory code FISPACT-2007 for candidate sample materials: Si, Cu, Al, Fe, Nb, Co, Ti, and Ni. It is found that Si, Cu, Al, and Fe are suitable for the KSTAR NAS in terms of the minimum detectable activity (MDA), calculated from the standard deviation of blank measurements. Considering background gamma-rays radiated from surrounding structures activated by thermalized fusion neutrons, an appropriate gamma-ray counting strategy for each selected sample is established.
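
    The MDA criterion above is commonly computed with Currie's formula, L_D = 2.71 + 4.65·σ_B, where σ_B is the standard deviation of the blank measurements; the paper's exact convention is not given here, so the following sketch is an assumption:

```python
import math

def currie_mda(blank_counts):
    """Currie-style detection limit in counts:
    L_D = 2.71 + 4.65 * sigma_B, where sigma_B is the sample
    standard deviation of repeated blank measurements."""
    n = len(blank_counts)
    mean = sum(blank_counts) / n
    sigma = math.sqrt(sum((c - mean) ** 2 for c in blank_counts) / (n - 1))
    return 2.71 + 4.65 * sigma
```

    Dividing this count threshold by detector efficiency and counting time then converts it to an activity.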

  2. Data-driven soft sensor design with multiple-rate sampled data: a comparative study

    DEFF Research Database (Denmark)

    Lin, Bao; Recke, Bodil; Schmidt, Torben M.

    2009-01-01

    to design quality soft sensors for cement kiln processes using data collected from a simulator and a plant log system. Preliminary results reveal that the WPLS approach is able to provide accurate one-step-ahead prediction. The regularized data lifting technique predicts the product quality of cement kiln...

  3. Addressing Underrepresentation in Sex Work Research: Reflections on Designing a Purposeful Sampling Strategy.

    Science.gov (United States)

    Bungay, Vicky; Oliffe, John; Atchison, Chris

    2016-06-01

    Men, transgender people, and those working in off-street locales have historically been underrepresented in sex work health research. Failure to include all sections of sex worker populations precludes comprehensive understandings about a range of population health issues, including potential variations in the manifestation of such issues within and between population subgroups, which in turn can impede the development of effective services and interventions. In this article, we describe our attempts to define, determine, and recruit a purposeful sample for a qualitative study examining the interrelationships between sex workers' health and the working conditions in the Vancouver off-street sex industry. Detailed is our application of ethnographic mapping approaches to generate information about population diversity and work settings within distinct geographical boundaries. Bearing in mind the challenges and the overwhelming discrimination sex workers experience, we scope recommendations for safe and effective purposeful sampling inclusive of sex workers' heterogeneity. © The Author(s) 2015.

  4. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    Science.gov (United States)

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
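
    The Hodges-Lehmann step-trend estimator referred to above is the median of all pairwise differences between observations from the two sampling periods; a minimal sketch (ignoring the seasonal blocking the paper adds):

```python
import statistics

def hodges_lehmann_step(before, after):
    """Hodges-Lehmann estimate of a step change: the median of all
    pairwise differences (after - before) between the two periods."""
    diffs = [a - b for a in after for b in before]
    return statistics.median(diffs)
```

    Because it is a median of differences rather than a difference of means, the estimate is robust to the outliers typical of water-quality records.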

  5. Control sample design using a geodemographic discriminator: An application of Super Profiles

    Science.gov (United States)

    Brown, Peter J. B.; McCulloch, Peter G.; Williams, Evelyn M. I.; Ashurst, Darren C.

    The development and application of an innovative sampling framework for use in a British study of the early detection of gastric cancer are described. The Super Profiles geodemographic discriminator is used in the identification of geographically distinct control and contrast areas from which samples of cancer registry case records may be drawn for comparison with the records of patients participating in the gastric cancer intervention project. Preliminary results of the application of the framework are presented and confirm its effectiveness in satisfactorily reflecting known patterns of variation in cancer occurrence by age, gender and social class. The method works well for cancers with a known and clear social gradient, such as lung and breast cancer, moderately well for gastric cancer and somewhat less well for oesophageal cancer, where the social class gradient is less clear.

  6. Sampling plan design and analysis for a low level radioactive waste disposal program

    International Nuclear Information System (INIS)

    Hassig, N.L.; Wanless, J.W.

    1989-01-01

    Low-level wastes that are candidates for BRC (below regulatory concern) disposal must be subjected to an extensive monitoring program to ensure the wastes meet (potential) bulk property and contamination concentration BRC criteria for disposal. This paper addresses the statistical implications of using various methods to verify BRC criteria. While surface and volumetric monitoring each have their advantages and disadvantages, a dual, sequential monitoring process is the preferred choice from a statistical reliability perspective. With dual monitoring, measurements of the contamination are verifiable and sufficient to allow a complete characterization of the wastes. As these characterizations become more reliable and stable, something less than 100% sampling may be possible for release of wastes for BRC disposal. This paper provides a survey of the issues involved in the selection of a monitoring and sampling program for the disposal of BRC wastes.

  7. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution.

    Science.gov (United States)

    Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme

    2013-07-01

    The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, of their sequences. The latter is an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics) and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology overcomes both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
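
    As a toy illustration of sampling under a GC-content constraint (this is a per-position stand-in, not IncaRNAtion's weighted sampler over structure-compatible sequences), one can draw each nucleotide from a GC/AT mixture with the target GC probability:

```python
import random

def sample_with_gc(length, gc_target, seed=0):
    """Generate a random sequence whose per-position probability of
    drawing G or C equals gc_target."""
    rng = random.Random(seed)
    return "".join(
        rng.choice("GC") if rng.random() < gc_target else rng.choice("AT")
        for _ in range(length)
    )

seq = sample_with_gc(2000, 0.6)
gc_frac = sum(base in "GC" for base in seq) / len(seq)
```

    The realized GC fraction concentrates around the target for long sequences; the real algorithm additionally weights candidates by their compatibility with the target secondary structure.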

  8. Baseline Design Compliance Matrix for the Type 4 In Situ Vapor Samplers and Supernate and Sludge and Soft Saltcake Grab Sampling

    International Nuclear Information System (INIS)

    BOGER, R.M.

    2000-01-01

    The DOE has identified a need to sample vapor space, exhaust ducts, supernate, sludge, and soft saltcake in waste tanks that store radioactive waste. This document provides the Design Compliance Matrix (DCM) for the Type 4 In-Situ Vapor Sampling (ISVS) system and the Grab Sampling System that are used for completing this type of sampling function. The DCM identifies the design requirements and the source of the requirements for the Type 4 ISVS system and the Grab Sampling system. The DCM is a single-source compilation of design requirements for sampling and sampling support equipment, and supports the configuration management of these systems.

  9. Exploring the utility of quantitative network design in evaluating Arctic sea ice thickness sampling strategies

    OpenAIRE

    Kaminski, T.; Kauker, F.; Eicken, H.; Karcher, M.

    2015-01-01

    We present a quantitative network design (QND) study of the Arctic sea ice-ocean system using a software tool that can evaluate hypothetical observational networks in a variational data assimilation system. For a demonstration, we evaluate two idealised flight transects derived from NASA's Operation IceBridge airborne ice surveys in terms of their potential to improve ten-day to five-month sea-ice forecasts. As target regions for the forecasts we select the Chukchi Sea, a...

  10. Design and performance of a multi-channel, multi-sampling, PSD-enabling integrated circuit

    International Nuclear Information System (INIS)

    Engel, G.L.; Hall, M.J.; Proctor, J.M.; Elson, J.M.; Sobotka, L.G.; Shane, R.; Charity, R.J.

    2009-01-01

    This paper presents the design and test results of an eight-channel prototype integrated circuit chip intended to greatly simplify the pulse-processing electronics needed for large arrays of scintillation detectors. Because the chip design employs (user-controlled) multi-region charge integration, particle identification is incorporated into the basic design. Each channel on the chip also contains a time-to-voltage converter which provides relative time information. The pulse-height integrals and the relative time are all stored on capacitors and are either reset, after a user controlled time, or sequentially read out if acquisition of the event is desired. Each of the three pulse-height sub-channels consists of a gated integrator with eight programmable charging rates and an externally programmable gate generator that defines the start (with four time ranges) and width (with four time ranges) of the gate relative to an external discriminator signal. The chip supports three triggering modes, two time ranges, two power modes, and produces four sparsified analog pulse trains (three for the integrators and another for the time) with synchronized addresses for off-chip digitization with a pipelined ADC. The eight-channel prototype chip occupies an area of 2.8 mmx5.7 mm, dissipates 60 mW (low-power mode), and was fabricated in the AMI 0.5-μm process (C5N).

  11. Design and performance of a multi-channel, multi-sampling, PSD-enabling integrated circuit

    Energy Technology Data Exchange (ETDEWEB)

    Engel, G.L., E-mail: gengel@siue.ed [Department of Electrical and Computer Engineering, VLSI Design Research Laboratory, Southern Illinois University Edwardsville, Engineering Building, Room 3043 Edwardsville, IL 62026 1081 (United States); Hall, M.J.; Proctor, J.M. [Department of Electrical and Computer Engineering, VLSI Design Research Laboratory, Southern Illinois University Edwardsville, Engineering Building, Room 3043 Edwardsville, IL 62026 1081 (United States); Elson, J.M.; Sobotka, L.G.; Shane, R.; Charity, R.J. [Departments of Chemistry and Physics, Washington University, Saint Louis, MO 63130 (United States)

    2009-12-21

    This paper presents the design and test results of an eight-channel prototype integrated circuit chip intended to greatly simplify the pulse-processing electronics needed for large arrays of scintillation detectors. Because the chip design employs (user-controlled) multi-region charge integration, particle identification is incorporated into the basic design. Each channel on the chip also contains a time-to-voltage converter which provides relative time information. The pulse-height integrals and the relative time are all stored on capacitors and are either reset, after a user controlled time, or sequentially read out if acquisition of the event is desired. Each of the three pulse-height sub-channels consists of a gated integrator with eight programmable charging rates and an externally programmable gate generator that defines the start (with four time ranges) and width (with four time ranges) of the gate relative to an external discriminator signal. The chip supports three triggering modes, two time ranges, two power modes, and produces four sparsified analog pulse trains (three for the integrators and another for the time) with synchronized addresses for off-chip digitization with a pipelined ADC. The eight-channel prototype chip occupies an area of 2.8 mmx5.7 mm, dissipates 60 mW (low-power mode), and was fabricated in the AMI 0.5-μm process (C5N).

  12. Designing and Developing an Augmented Reality Application: A Sample Of Chemistry Education

    Directory of Open Access Journals (Sweden)

    Zeynep Taçgın

    2016-09-01

    Augmented reality has been accepted as an effective educational method, and this review draws on the philosophical background of cognitive science. That is, several channels (aural, visual, interactivity, etc.) have been used to present information in order to support individual learning styles. In this study, a Natural User Interface (NUI) and Human Computer Interaction-based augmented reality application has been developed for chemistry education. The purpose of this study is to design and develop a student-centered augmented reality environment to teach the periodic table and the atomic structure of elements and molecules. A head-mounted display has been used for the augmented reality system, and user control is executed with hand motions (grab, drag, drop, select and rotate). Hand-motion control is used to improve the spatial abilities of students in order to maximize the transferred knowledge. Using the most common natural controlling tools (fingers and hands) to interact with virtual objects, instead of AR markers or other tools, provides a more interactive, holistic, social and effective learning environment that authentically reflects the world around the learners. In this way, learners have an active role and are not just passive receptors. Correspondingly, the developed NUI-based system has been constructed as design-based research and developed using instructional design methods and principles to achieve a more effective and productive learning material. The developed material consists of some fundamental components to create more intuitive and conducive tools to support real-world collaboration.

  13. Design and relevant sample calculations for a neutral particle energy diagnostic based on time of flight

    Energy Technology Data Exchange (ETDEWEB)

    Cecconello, M

    1999-05-01

    Extrap T2 will be equipped with a neutral particle energy diagnostic based on the time-of-flight technique. In this report, the expected neutral fluxes for Extrap T2 are estimated and discussed in order to determine the feasibility and the limits of such a diagnostic. These estimates are based on a 1D model of the plasma. The input parameters of the model are the density and temperature radial profiles of electrons and ions and the density of neutrals at the edge and in the centre of the plasma. The atomic processes included in the model are charge-exchange and electron-impact ionization. The results indicate that the plasma attenuation length varies from a/5 to a, a being the minor radius. Differential neutral fluxes, as well as the estimated power losses due to CX processes (2% of the input power), are in agreement with experimental results obtained in similar devices. The expected impurity influxes vary from 10^14 to 10^11 cm^-2 s^-1. The neutral particle detection and acquisition systems are discussed. The maximum detectable energy varies from 1 to 3 keV depending on the flight distance d. The time resolution is 0.5 ms. Output signals from the waveform recorder are foreseen in the range 0-200 mV. An 8-bit waveform recorder with 2 MHz sampling frequency and 100K samples of memory capacity is the minimum requirement for the acquisition system. 20 refs, 19 figs.
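
    The energy limits quoted above follow from the non-relativistic time-of-flight relation t = d / sqrt(2E/m); a sketch for neutral hydrogen (constants approximate, function names hypothetical):

```python
import math

EV = 1.602176634e-19   # joules per electronvolt
M_H = 1.6735575e-27    # hydrogen atom mass, kg

def flight_time(energy_ev, distance_m, mass_kg=M_H):
    """Time of flight of a neutral atom with the given kinetic
    energy over a flight distance d: t = d / sqrt(2E/m)."""
    v = math.sqrt(2.0 * energy_ev * EV / mass_kg)
    return distance_m / v
```

    For example, a 1 keV hydrogen atom covers a 1 m flight distance in roughly 2.3 μs, so higher energies compress the arrival-time differences the recorder must resolve.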

  14. Spreading Design of Radioactivity in Sea Water, Algae and Fish Samples in the Coastal Muria Peninsula Area

    International Nuclear Information System (INIS)

    Sutjipto; Muryono; Sumining

    2000-01-01

    The spreading of radioactivity in sea water, brown algae (Phaeophyceae) and kerapu fish (Epinephelus) samples in the coastal Muria peninsula area has been studied. This research was designed to determine not only the spread of each type of radioactivity but also its relation to the content of Pu-239 and Cs-137. Sample collection, preparation and analysis were based on standard procedures for environmental radioactivity analysis. The instruments used for the radioactivity analysis were an alpha counter with a ZnS detector, a low-level beta counter modified by P3TM-BATAN with a GM detector, and a gamma spectrometer with a Ge(Li) detector. The alpha radioactivity of sea water, algae and fish fluctuated at natural background levels. The radionuclide Pu-239 was not detected in the samples, because its concentration/radioactivity was below the minimum detectable concentration of Pu-239, which was 1.10 Bq/g for algae and fish and 0.07 Bq/mL for sea water. The highest alpha radioactivity was found in kerapu fish (1.56 x 10^-3 Bq/g), the highest beta radioactivity in sea water (1.75 x 10^2 mBq/L), and the highest gamma radioactivity of K-40 in brown algae (3.72 x 10^-2 Bq/g) and of Tl-208 in the fish mentioned above (1.35 x 10^-2 Bq/g). No peaks at the gamma energies of Cs-137 were detected with the gamma counter, so the samples contained no Cs-137. In summary, in the coastal Muria peninsula area the highest alpha radioactivity occurred in kerapu fish, beta radioactivity in sea water, and gamma radioactivity in brown algae and kerapu fish. (author)

  15. Sampling for quality assurance of grading decisions in diabetic retinopathy screening: designing the system to detect errors.

    Science.gov (United States)

    Slattery, Jim

    2005-01-01

    To evaluate various designs for a quality assurance system to detect and control human errors in a national screening programme for diabetic retinopathy. A computer simulation was performed of some possible ways of sampling the referral decisions made during grading and of different criteria for initiating more intensive QA investigations. The effectiveness of QA systems was assessed by the ability to detect a grader making occasional errors in referral. Substantial QA sample sizes are needed to ensure against inappropriate failure to refer. Detection of a grader who failed to refer one in ten cases can be achieved with a probability of 0.58 using an annual sample size of 300 and 0.77 using a sample size of 500. An unmasked verification of a sample of non-referrals by a specialist is the most effective method of internal QA for the diabetic retinopathy screening programme. Preferential sampling of those with some degree of disease may improve the efficiency of the system.
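
    If each sampled decision independently hides a missed referral with some probability, the detection power of a QA sample is binomial; a minimal sketch with hypothetical parameters (the paper's exact simulation and error model are not reproduced here):

```python
from math import comb

def detection_probability(n, miss_rate, referable_prev, threshold=1):
    """P(a QA sample of n graded cases contains >= threshold missed
    referrals), where each sampled case independently hides a missed
    referral with probability referable_prev * miss_rate."""
    p = referable_prev * miss_rate
    return 1.0 - sum(
        comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(threshold)
    )
```

    With threshold 1 this reduces to 1 - (1 - p)^n, which is why power grows with sample size but saturates slowly when errors are rare.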

  16. Effects of physical activity calorie expenditure (PACE) labeling: study design and baseline sample characteristics.

    Science.gov (United States)

    Viera, Anthony J; Tuttle, Laura; Olsson, Emily; Gras-Najjar, Julie; Gizlice, Ziya; Hales, Derek; Linnan, Laura; Lin, Feng-Chang; Noar, Seth M; Ammerman, Alice

    2017-09-12

    Obesity and physical inactivity are responsible for more than 365,000 deaths per year and contribute substantially to rising healthcare costs in the US, making clear the need for effective public health interventions. Calorie labeling on menus has been implemented to guide consumer ordering behaviors, but its effects on calories purchased have been minimal. In this project, we tested the effect of physical activity calorie expenditure (PACE) food labels on actual point-of-decision food purchasing behavior as well as physical activity. Using a two-group interrupted time series cohort study design in three worksite cafeterias, one cafeteria was assigned to the intervention condition, and the other two served as controls. Calories from food purchased in the cafeteria were assessed from photographs of meals (accompanied by notes made on-site) using a standardized calorie database and portion-size estimation protocol. Primary outcomes will be average calories purchased and minutes of moderate to vigorous physical activity (MVPA) by individuals in the cohorts. We will compare pre-post changes in study outcomes between study groups using piecewise generalized linear mixed model regressions (segmented regressions) with a single change point in our interrupted time-series study. The results of this project will provide evidence of the effectiveness of worksite cafeteria menu labeling, which could potentially inform policy intervention approaches. Labels that convey information in a more readily understandable manner may be more effective at motivating behavior change. Strengths of this study include its cohort design and its robust data capture methods using food photographs and accelerometry.
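
    A single-change-point segmented regression of the kind described above can be written as baseline level/slope terms plus post-intervention level and slope shifts; a minimal OLS sketch (variable names hypothetical, without the mixed-effects structure the study uses):

```python
import numpy as np

def segmented_fit(t, y, change):
    """OLS interrupted time-series fit with one change point:
    y ~ b0 + b1*t + b2*post + b3*(t - change)*post."""
    post = (t >= change).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - change) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [baseline level, pre-slope, level change, slope change]

# Noiseless toy series with a level jump of 3 and slope change of 0.2 at t=10.
t = np.arange(20, dtype=float)
y = 1.0 + 0.5 * t + np.where(t >= 10, 3.0 + 0.2 * (t - 10), 0.0)
beta = segmented_fit(t, y, change=10)
```

    The third and fourth coefficients are the quantities of interest in an interrupted time series: the immediate level change and the change in trend after the intervention.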

  17. Design and Development of a Robot-Based Automation System for Cryogenic Crystal Sample Mounting at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shu, D.; Preissner, C.; Nocher, D.; Han, Y.; Barraza, J.; Lee, P.; Lee, W.-K.; Cai, Z.; Ginell, S.; Alkire, R.; Lazarski, K.; Schuessler, R.; Joachimiak, A.

    2004-01-01

    X-ray crystallography is the primary method to determine the 3D structures of complex macromolecules at high resolution. In the years to come, the Advanced Photon Source (APS) and similar 3rd-generation synchrotron sources elsewhere will become the most powerful tools for studying atomic structures of biological molecules. One of the major bottlenecks in the x-ray data collection process is the constant need to change and realign the crystal sample, a very time- and manpower-consuming task. An automated sample mounting system will help to solve this bottleneck problem. We have developed a novel robot-based automation system for cryogenic crystal sample mounting at the APS. The design of the robot-based automation system, as well as its on-line test results at the Argonne Structural Biology Center (SBC) 19-BM experimental station, is presented in this paper.

  18. Relative Efficiencies of a Three-Stage Versus a Two-Stage Sample Design For a New NLS Cohort Study. 22U-884-38.

    Science.gov (United States)

    Folsom, R. E.; Weber, J. H.

    Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…

  19. Design of real-time monitoring and control system of 222Rn/220Rn sampling for radon chamber

    International Nuclear Information System (INIS)

    Wu Rongyan; Zhao Xiuliang; Zhang Meiqin; Yu Hong

    2008-01-01

    This paper describes the design of a 222Rn/220Rn sampling monitoring and control system based on an Intel 51-series single-chip microcomputer. The hardware design involves the selection and use of sensor chips, an A/D conversion chip, a USB interface chip, a keyboard chip, a digital display chip, photoelectric coupling isolation chips and drive circuit chips for the direct-current pump. The software comprises a personal computer (PC) part and a single-chip microcomputer (SCM) part. Data acquisition and conversion and flux control of the DC pump are realized using Visual Basic and assembly language. The program flow charts are given. Furthermore, we improved the stability of the DC pump by means of PID control algorithms. (authors)
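
    The PID control mentioned for stabilizing the DC pump has a standard discrete form; a textbook sketch (not the paper's firmware, and parameter values are illustrative):

```python
class PID:
    """Minimal discrete PID controller:
    output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt          # accumulate error
        deriv = (err - self.prev_err) / self.dt  # finite-difference slope
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

    In the pump application, the controller output would drive the pump voltage so that the measured flux tracks the commanded flux despite load variation.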

  20. Development of blood extraction system designed by female mosquito's blood sampling mechanism for bio-MEMS

    Science.gov (United States)

    Tsuchiya, Kazuyoshi; Nakanishi, Naoyuki; Nakamachi, Eiji

    2005-02-01

    A compact and wearable wristwatch-type bio-MEMS, a health monitoring system (HMS) to detect the blood sugar level of diabetic patients, was newly developed. The HMS consists of (1) an indentation unit with a microneedle that generates the skin penetration force using a shape memory alloy (SMA) actuator, (2) a pumping unit using a bimorph PZT piezoelectric actuator to extract the blood, and (3) a gold (Au) electrode biosensor with immobilized GOx, attached to the gate electrode of a MOSFET to detect the amount of glucose in the extracted blood. GOx was immobilized on a self-assembled spacer combined with an Au electrode by the cross-link method using BSA as an additional bonding material. The device can extract a few microliters of blood through a painless microneedle using the negative pressure produced in the blood chamber by deflection of the bimorph PZT piezoelectric actuator, much as the female mosquito extracts human blood by flexing and relaxing its muscles. The liquid sampling performance of the pumping unit through a microneedle (3.8 mm length, 100 μm internal diameter) driven by the bimorph PZT piezoelectric microactuator was measured. The blood extraction microdevice could extract human blood at a speed of 2 μl/min, a volume sufficient to measure the glucose level compared with the amount required by commercial glucose level monitors. The electrode embedded in the blood extraction device chamber could detect electrons generated by the hydrolysis of hydrogen peroxide produced by the reaction between GOx and glucose in a few microliters of extracted blood, using the constant electric current measurement system of the MOSFET-type hybrid biosensor. The output voltage for glucose diluted in the chamber increased linearly with increasing glucose concentration.

  1. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
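
    The kind of comparison reported above can be reproduced in miniature by Monte Carlo: repeatedly draw quadrat samples from a clustered population and compute the CV of the density estimate (a toy simple-random-sampling stand-in, not the adaptive or two-stage designs themselves):

```python
import random

def simulate_srs_cv(counts, n_quadrats, n_reps=2000, seed=1):
    """Monte Carlo CV of the mean-density estimator under simple
    random sampling without replacement of quadrat counts."""
    rng = random.Random(seed)
    ests = []
    for _ in range(n_reps):
        sample = rng.sample(counts, n_quadrats)
        ests.append(sum(sample) / n_quadrats)
    mean = sum(ests) / n_reps
    var = sum((e - mean) ** 2 for e in ests) / n_reps
    return (var ** 0.5) / mean

# Clustered toy population: most quadrats empty, a few dense patches.
population = [0] * 180 + [5] * 15 + [20] * 5
```

    Running this for populations with different densities and degrees of clustering shows how precision is driven largely by those two factors and by sample size, as the abstract reports.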

  2. Miller Early Childhood Sustained Home-visiting (MECSH) trial: design, method and sample description

    Directory of Open Access Journals (Sweden)

    Anderson Teresa

    2008-12-01

    Abstract Background Home visiting programs comprising intensive and sustained visits by professionals (usually nurses) over the first two years of life show promise in promoting child health and family functioning, and ameliorating disadvantage. Australian evidence of the effectiveness of sustained nurse home visiting in early childhood is limited. This paper describes the method and cohort characteristics of the first Australian study of sustained home visiting commencing antenatally and continuing to child-age two years for at-risk mothers in a disadvantaged community (the Miller Early Childhood Sustained Home-visiting trial). Methods and design Mothers reporting risks for poorer parenting outcomes residing in an area of socioeconomic disadvantage were recruited between February 2003 and March 2005. Mothers randomised to the intervention group received a standardised program of nurse home visiting. Interviews and observations covering child, maternal, family and environmental issues were undertaken with mothers antenatally and at 1, 12 and 24 months postpartum. Standardised tests of child development and maternal-child interaction were undertaken at 18 and 30 months postpartum. Information from hospital and community health records was also obtained. Discussion A total of 338 women were identified and invited to participate, and 208 were recruited to the study. Rates of active follow-up were 86% at 12 months, 74% at 24 months and 63% at 30 months postpartum. Participation in particular data points ranged from 66% at 1 month to 51% at 24 months postpartum. Rates of active follow-up and data point participation were not significantly different for the intervention or comparison group at any data point. Mothers who presented for antenatal care prior to 20 weeks pregnant, those with household income from full-time employment and those who reported being abused themselves as a child were more likely to be retained in the study. The Miller Early

  3. Design and evaluation of a new Peltier-cooled laser ablation cell with on-sample temperature control.

    Science.gov (United States)

    Konz, Ioana; Fernández, Beatriz; Fernández, M Luisa; Pereiro, Rosario; Sanz-Medel, Alfredo

    2014-01-27

A new custom-built Peltier-cooled laser ablation cell is described. The proposed cryogenic cell combines a small internal volume (20 cm³) with a unique and reliable on-sample temperature control. The use of a flexible temperature sensor, directly located on the sample surface, ensures a rigorous sample temperature control throughout the entire analysis time and allows instant response to any possible fluctuation. In this way sample integrity and, therefore, reproducibility can be guaranteed during the ablation. The refrigeration of the proposed cryogenic cell combines an internal refrigeration system, controlled by a sensitive thermocouple, with an external refrigeration system. Cooling of the sample is directly carried out by 8 small (1 cm×1 cm) Peltier elements placed in a circular arrangement in the base of the cell. These Peltier elements are located below a copper plate where the sample is placed. Due to the small size of the cooling electronics and their circular arrangement, it was possible to maintain a peephole under the sample for illumination, allowing a much better visualization of the sample, a factor especially important when working with structurally complex tissue sections. The analytical performance of the cryogenic cell was studied using a glass reference material (SRM NIST 612) at room temperature and at -20°C. The proposed cell design shows a reasonable signal washout (signal decay within less than 10 s to background level), high sensitivity and good signal stability (in the range 6.6-11.7%). Furthermore, high precision (0.4-2.6%) and accuracy (0.3-3.9%) in the isotope ratio measurements were also observed operating the cell both at room temperature and at -20°C. Finally, experimental results obtained for the cell application to qualitative elemental imaging of structurally complex tissue samples (e.g. eye sections from a native frozen porcine eye and fresh flower leaves) demonstrate that working in cryogenic conditions is critical in such

  4. Off-road sampling reveals a different grassland bird community than roadside sampling: implications for survey design and estimates to guide conservation

    Directory of Open Access Journals (Sweden)

    Troy I. Wellicome

    2014-06-01

    concern. Our results highlight the need to develop appropriate corrections for bias in estimates derived from roadside sampling, and the need to design surveys that sample bird communities across a more representative cross-section of the landscape, both near and far from roads.

  5. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. The response surface methodology was employed to study the optimization of sample preparation using supercritical carbon dioxide for wedelolactone from E. alba (L.) Hassk. The optimized sample preparation involves the investigation of quantitative effects of sample preparation parameters viz. operating pressure, temperature, modifier concentration and time on yield of wedelolactone using Box-Behnken design. The wedelolactone content was determined using validated HPLC methodology. The experimental data were fitted to second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
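The Box-Behnken workflow described above (coded factor levels, a second-order polynomial fitted by multiple regression, then an optimum read off the fitted surface) can be sketched on synthetic data. The factor names, coefficients, and noise level below are illustrative assumptions, not the study's values.

```python
import numpy as np

# Illustrative sketch: fit a second-order response surface
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to coded factor levels (-1, 0, +1), then locate its stationary point.
# Coefficients are hypothetical, not the wedelolactone data.
rng = np.random.default_rng(0)
x1, x2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
x1, x2 = x1.ravel().astype(float), x2.ravel().astype(float)
true_y = 15 + 2*x1 + 3*x2 - 1.5*x1**2 - 2*x2**2 + 0.5*x1*x2
y = true_y + rng.normal(0, 0.1, true_y.size)        # measurement noise

# multiple regression on the quadratic model terms
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# stationary point of the fitted quadratic: solve gradient = 0
H = np.array([[2*beta[3], beta[5]], [beta[5], 2*beta[4]]])
x_opt = np.linalg.solve(H, -beta[1:3])
```

Solving the regression equation and inspecting the surface in this way is what yields "optimum extraction conditions" such as those reported in the abstract.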

  6. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    Science.gov (United States)

    Geuna, S

    2000-11-20

Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterizes the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.

  7. Economic Statistical Design of Variable Sampling Interval $\overline{X}$ Control Chart Based on Surrogate Variable Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lee Tae-Hoon

    2016-12-01

Full Text Available In many cases, an $\overline{X}$ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be more efficiently controlled using surrogate variables. In this paper, we present a model for the economic statistical design of a VSI (Variable Sampling Interval) $\overline{X}$ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on a performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to the Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the VSS (Variable Sample Size) $\overline{X}$ control chart, EWMA, CUSUM charts and so on.
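The defining feature of a VSI chart is that the time to the next sample depends on where the current sample mean falls relative to warning and control limits. A minimal sketch of that rule follows; the limit constants and interval lengths are illustrative placeholders, not the economically optimized values from the paper.

```python
def next_interval(xbar, mu0=0.0, sigma=1.0, n=4,
                  w=1.0, k=3.0, h_short=0.25, h_long=2.0):
    """VSI rule sketch: return the time (e.g. hours) to the next sample.

    Hypothetical constants: w and k are warning/control limits in units
    of sigma/sqrt(n); h_short/h_long are the two sampling intervals.
    """
    z = abs(xbar - mu0) / (sigma / n ** 0.5)
    if z > k:
        return 0.0                    # out-of-control signal: act now
    return h_short if z > w else h_long
```

A mean near the centerline earns the long interval; a mean in the warning region triggers the short one, which is what lets a VSI chart detect shifts faster than a fixed-interval chart at the same average sampling cost.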

  8. Thermal Protection for Mars Sample Return Earth Entry Vehicle: A Grand Challenge for Design Methodology and Reliability Verification

    Science.gov (United States)

    Venkatapathy, Ethiraj; Gage, Peter; Wright, Michael J.

    2017-01-01

Mars Sample Return is our Grand Challenge for the coming decade. TPS (Thermal Protection System) nominal performance is not the key challenge. The main difficulty for designers is the need to verify unprecedented reliability for the entry system: current guidelines for prevention of backward contamination require that the probability of spores larger than 1 micron diameter escaping into the Earth environment be lower than 1 in 1,000,000 for the entire system, and the allocation to TPS would be more stringent than that. For reference, the reliability allocation for Orion TPS is closer to 1 in 1,000, and the demonstrated reliability for previous human Earth return systems was closer to 1 in 100. Improving reliability by more than 3 orders of magnitude is a grand challenge indeed. The TPS community must embrace the possibility of new architectures that are focused on reliability above thermal performance and mass efficiency. MSR (Mars Sample Return) EEV (Earth Entry Vehicle) will be hit with MMOD (Micrometeoroid and Orbital Debris) prior to reentry. A chute-less aero-shell design which allows for a self-righting shape was baselined in prior MSR studies, with the assumption that a passive system will maximize EEV robustness. Hence the aero-shell along with the TPS has to take ground impact and not break apart. System verification will require testing to establish ablative performance and thermal failure, but also testing of damage from MMOD and structural performance at ground impact. Mission requirements will demand analysis, testing and verification that are focused on establishing reliability of the design. In this proposed talk, we will focus on the grand challenge of MSR EEV TPS and the need for innovative approaches to address challenges in modeling, testing, manufacturing and verification.

  9. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  10. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  11. Delineamento experimental e tamanho de amostra para alface cultivada em hidroponia Experimental design and sample size for hydroponic lettuce crop

    Directory of Open Access Journals (Sweden)

    Valéria Schimitz Marodim

    2000-10-01

Full Text Available This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa) crop under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Hydroponic Crops of the Horticulture Department of the Federal University of Santa Maria, and the evaluated trait was plant weight. Under hydroponic conditions on a concrete bench with six ducts, the most suitable experimental design for lettuce is randomised blocks when the experimental unit is a strip transversal to the bench ducts, and completely randomised when the bench is the experimental unit. For plant weight, the sample size should be 40 plants for a semi-amplitude of the confidence interval, expressed as a percentage of the mean (d), equal to 5%, and 7 plants for d equal to 20%.
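The sample-size criterion used here (choose n so that the confidence-interval half-width is a given percentage d of the mean) can be sketched as follows. The coefficient of variation is a hypothetical input, so the computed numbers are illustrative rather than the study's estimates, which used the observed plant-weight data.

```python
import math
from statistics import NormalDist

def sample_size(cv_percent, d_percent, conf=0.95):
    """Plants needed so the CI half-width is d% of the mean.

    Normal approximation; cv_percent is the coefficient of variation of
    plant weight (a hypothetical input, not the study's estimate).
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil((z * cv_percent / d_percent) ** 2)

# tighter precision demands a much larger sample: n grows as (1/d)^2
n_tight = sample_size(16, 5)    # d = 5% of the mean
n_loose = sample_size(16, 20)   # d = 20% of the mean
```

The quadratic growth in 1/d is why halving the allowed half-width quadruples the number of plants required.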

  12. Fabrication of an electrochemical sensor based on computationally designed molecularly imprinted polymer for the determination of mesalamine in real samples

    Energy Technology Data Exchange (ETDEWEB)

    Torkashvand, M. [Department of Analytical Chemistry, Razi University, Kermanshah (Iran, Islamic Republic of); Gholivand, M.B., E-mail: mbgholivand@yahoo.com [Department of Analytical Chemistry, Razi University, Kermanshah (Iran, Islamic Republic of); Taherkhani, F. [Department of Physical Chemistry, Razi University, Kermanshah (Iran, Islamic Republic of)

    2015-10-01

A novel electrochemical sensor based on a mesalamine molecularly imprinted polymer (MIP) film on a glassy carbon electrode was fabricated. Density functional theory (DFT) in gas and solution phases was used to study the intermolecular interactions in the pre-polymerization mixture and to find suitable functional monomers for MIP preparation. On the basis of the computational results, o-phenylenediamine (OP), gallic acid (GA) and p-aminobenzoic acid (ABA) were selected as functional monomers. The MIP film was cast on the glassy carbon electrode by electropolymerization of a solution containing the ternary monomers, followed by deposition of Ag dendrites (AgDs) with nanobranches. The surface of the modified electrode (AgDs/MIP/GCE) was characterized by scanning electron microscopy (SEM) and electrochemical impedance spectroscopy (EIS). Under the optimal experimental conditions, the peak current was proportional to the concentration of mesalamine ranging from 0.05 to 100 μM, with a detection limit of 0.015 μM. The proposed sensor was applied successfully for mesalamine determination in real samples. - Highlights: • The determination of MES using AgDs/MIP/GCE is reported for the first time. • The computer-assisted design of terpolymer MIPs was used to screen monomers. • Theoretical results of the DFT approach were in agreement with experimental results. • The sensor displayed a high selectivity for the template in the presence of interferents. • The developed sensor has been applied to determine mesalamine in real samples.

  13. Fabrication of an electrochemical sensor based on computationally designed molecularly imprinted polymer for the determination of mesalamine in real samples

    International Nuclear Information System (INIS)

    Torkashvand, M.; Gholivand, M.B.; Taherkhani, F.

    2015-01-01

A novel electrochemical sensor based on a mesalamine molecularly imprinted polymer (MIP) film on a glassy carbon electrode was fabricated. Density functional theory (DFT) in gas and solution phases was used to study the intermolecular interactions in the pre-polymerization mixture and to find suitable functional monomers for MIP preparation. On the basis of the computational results, o-phenylenediamine (OP), gallic acid (GA) and p-aminobenzoic acid (ABA) were selected as functional monomers. The MIP film was cast on the glassy carbon electrode by electropolymerization of a solution containing the ternary monomers, followed by deposition of Ag dendrites (AgDs) with nanobranches. The surface of the modified electrode (AgDs/MIP/GCE) was characterized by scanning electron microscopy (SEM) and electrochemical impedance spectroscopy (EIS). Under the optimal experimental conditions, the peak current was proportional to the concentration of mesalamine ranging from 0.05 to 100 μM, with a detection limit of 0.015 μM. The proposed sensor was applied successfully for mesalamine determination in real samples. - Highlights: • The determination of MES using AgDs/MIP/GCE is reported for the first time. • The computer-assisted design of terpolymer MIPs was used to screen monomers. • Theoretical results of the DFT approach were in agreement with experimental results. • The sensor displayed a high selectivity for the template in the presence of interferents. • The developed sensor has been applied to determine mesalamine in real samples.

  14. Biased representation of disturbance rates in the roadside sampling frame in boreal forests: implications for monitoring design

    Directory of Open Access Journals (Sweden)

    Steven L. Van Wilgenburg

    2015-12-01

Full Text Available The North American Breeding Bird Survey (BBS) is the principal source of data to inform researchers about the status of and trend for boreal forest birds. Unfortunately, little BBS coverage is available in the boreal forest, where increasing concern over the status of species breeding there has increased interest in northward expansion of the BBS. However, high disturbance rates in the boreal forest may complicate roadside monitoring. If the roadside sampling frame does not capture variation in disturbance rates because of either road placement or the use of roads for resource extraction, biased trend estimates might result. In this study, we examined roadside bias in the proportional representation of habitat disturbance via spatial data on forest "loss," forest fires, and anthropogenic disturbance. In each of 455 BBS routes, the area disturbed within multiple buffers away from the road was calculated and compared against the area disturbed in degree blocks and BBS strata. We found a nonlinear relationship between bias and distance from the road, suggesting forest loss and forest fires were underrepresented below 75 and 100 m, respectively. In contrast, anthropogenic disturbance was overrepresented at distances below 500 m and underrepresented thereafter. After accounting for distance from road, BBS routes were reasonably representative of the degree blocks they were within, with only a few strata showing biased representation. In general, anthropogenic disturbance is overrepresented in southern strata, and forest fires are underrepresented in almost all strata. Similar biases exist when comparing the entire road network and the subset sampled by BBS routes against the amount of disturbance within BBS strata; however, the magnitude of biases differed.
Based on our results, we recommend that spatial stratification and rotating panel designs be used to spread limited BBS and off-road sampling effort in an unbiased fashion and that new BBS routes

  15. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    Science.gov (United States)

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a change that requires a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
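A U-statistic score of this kind can be sketched as a Finkelstein-Schoenfeld-style pairwise comparison: an intercurrent SCE takes precedence, and the continuous change is compared only when both subjects have a measurement. The data layout and scoring convention below are illustrative assumptions, not the paper's exact statistic.

```python
from itertools import product

def pairwise_score(treated, control):
    """Sketch of a pairwise U-statistic score.

    Each subject is a tuple (had_sce, delta): a significant clinical
    event flag and the change in the continuous endpoint (None if the
    measurement was precluded). Returns the net count of treated "wins".
    """
    score = 0
    for (sce_t, d_t), (sce_c, d_c) in product(treated, control):
        if sce_t != sce_c:                 # SCE takes precedence
            score += 1 if sce_c else -1    # +1 when the control had the event
        elif d_t is not None and d_c is not None:
            score += (d_t > d_c) - (d_t < d_c)
        # ties and pairs with missing measurements contribute 0
    return score

s = pairwise_score(
    treated=[(False, 2.0), (True, None)],
    control=[(False, 1.0), (False, 0.5), (True, None)],
)
```

Because the comparison is ordinal, measurements of different precision can enter the same score as long as each pair is compared on a common footing.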

  16. Evolution in the design of a low sheath-flow interface for CE-MS and application to biological samples.

    Science.gov (United States)

    González-Ruiz, Víctor; Codesido, Santiago; Rudaz, Serge; Schappler, Julie

    2018-03-01

Although several interfaces for CE-MS hyphenation are commercially available, the development of new versatile, simple and yet efficient and sensitive alternatives remains an important field of research. In a previous work, a simple low sheath-flow interface was developed from inexpensive parts. This interface features a design easy to build, maintain, and adapt to particular needs. The present work introduces an improved design of the previous interface. By reducing the diameter of the separation capillary and the emitter, a smaller Taylor cone is spontaneously formed, minimizing the zone dispersion while the analytes go through the interface and leading to less peak broadening associated with the ESI process. Numerical modeling allowed studying the mixing and diffusion processes taking place in the Taylor cone. The analytical performance of this new interface was tested with pharmaceutically relevant molecules and endogenous metabolites. The interface was eventually applied to the analysis of neural cell culture samples, allowing the identification of a panel of neurotransmission-related molecules. An excellent migration time repeatability was obtained (intra-day RSD 10 with an injected volume of 6.7 nL of biological extract. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Rationale, design, methodology and sample characteristics for the Vietnam pre-conceptual micronutrient supplementation trial (PRECONCEPT): a randomized controlled study

    Directory of Open Access Journals (Sweden)

    Nguyen Phuong H

    2012-10-01

Full Text Available Abstract Background Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned, given that many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The PRECONCEPT trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of the study design, methodology and sample characteristics from baseline survey data, and key lessons learned. Methods/design We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either: (1) 2800 μg folic acid; (2) 60 mg iron and 2800 μg folic acid; or (3) MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and

  18. Design of a New Concentration Series for the Orthogonal Sample Design Approach and Estimation of the Number of Reactions in Chemical Systems.

    Science.gov (United States)

    Shi, Jiajia; Liu, Yuhai; Guo, Ran; Li, Xiaopei; He, Anqi; Gao, Yunlong; Wei, Yongju; Liu, Cuige; Zhao, Ying; Xu, Yizhuang; Noda, Isao; Wu, Jinguang

    2015-11-01

A new concentration series is proposed for the construction of a two-dimensional (2D) synchronous spectrum for orthogonal sample design analysis to probe intermolecular interaction between solutes dissolved in the same solutions. The obtained 2D synchronous spectrum possesses the following two properties: (1) cross peaks in the 2D synchronous spectra can be used to reflect intermolecular interaction reliably, since interference portions that have nothing to do with intermolecular interaction are completely removed, and (2) the 2D synchronous spectrum produced can effectively avoid accidental collinearity. Hence, the correct number of nonzero eigenvalues can be obtained so that the number of chemical reactions can be estimated. In a real chemical system, noise present in one-dimensional spectra may also produce nonzero eigenvalues. To get the correct number of chemical reactions, we classified nonzero eigenvalues into significant nonzero eigenvalues and insignificant nonzero eigenvalues. Significant nonzero eigenvalues can be identified by inspecting the pattern of the corresponding eigenvector with the help of the Durbin-Watson statistic. As a result, the correct number of chemical reactions can be obtained from significant nonzero eigenvalues. This approach provides a solid basis to obtain insight into subtle spectral variations caused by intermolecular interaction.

  19. Design and sampling plan optimization for RT-qPCR experiments in plants: a case study in blueberry

    Directory of Open Access Journals (Sweden)

    Jose V Die

    2016-03-01

Full Text Available The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges, which center on minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain the most optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner to produce more consistent and reproducible data.
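The budget-constrained search for an optimal sampling plan can be sketched as below: for a nested design (sampling replicates, RT replicates per sample, qPCR replicates per RT), minimize the variance of the overall mean subject to a total cost limit. The variance components, unit costs, and budget are hypothetical placeholders for the estimates a study like this would obtain.

```python
from itertools import product

# Hypothetical per-replicate variance components and unit costs for the
# three nested steps of the qPCR workflow (illustrative values only).
var = {"sampling": 0.20, "rt": 0.10, "qpcr": 0.02}
cost = {"sampling": 10.0, "rt": 4.0, "qpcr": 1.0}
budget = 120.0

def mean_variance(ns, nr, nq):
    """Variance of the overall mean for the nested design."""
    return (var["sampling"] / ns
            + var["rt"] / (ns * nr)
            + var["qpcr"] / (ns * nr * nq))

def total_cost(ns, nr, nq):
    return ns * (cost["sampling"] + nr * (cost["rt"] + nq * cost["qpcr"]))

# brute-force search over small replicate counts within the budget
best = min(
    (p for p in product(range(1, 13), repeat=3) if total_cost(*p) <= budget),
    key=lambda p: mean_variance(*p),
)
```

Because the top-level (sampling) variance dominates here, the search tends to spend the budget on more biological replicates rather than on qPCR replicates, mirroring the paper's conclusion that extra qPCR replicates provide the least benefit.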

  20. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    Science.gov (United States)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for the data acquisition campaign, information needs toward the prediction goal should be satisfied in an efficient and task-specific manner. For finding the best one among a set of design candidates, an objective function is commonly evaluated, which measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility associated with the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of possible subjective prior assumptions, that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) expected conditional variance and (ii) expected relative entropy of a given prediction goal. The
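The preposterior idea behind measure (i), the expected conditional variance, can be sketched with a toy model: simulate future data sets from the prior, weight the Monte-Carlo ensemble GLUE-style by each simulated data set's likelihood, and average the resulting conditional variance of the prediction. The one-parameter model, noise level, and likelihood form below are illustrative assumptions, not the CLUE implementation.

```python
import numpy as np

# Toy preposterior analysis: how much would measuring parameter k
# (with noise) reduce the variance of a prediction that depends on k?
rng = np.random.default_rng(2)
n = 5000
k = rng.normal(0.0, 1.0, n)            # prior ensemble of the parameter
pred = 2.0 * k + 1.0                   # prediction goal depends on k
noise_sd = 0.5                         # hypothetical measurement error

def expected_conditional_variance(obs_of_k, reps=200):
    """Average over simulated future data of Var[pred | data]."""
    evar = 0.0
    for _ in range(reps):
        truth = rng.choice(k)                       # draw a "true" world
        y = obs_of_k(truth) + rng.normal(0, noise_sd)
        # GLUE-style likelihood weights for the prior ensemble
        w = np.exp(-0.5 * ((obs_of_k(k) - y) / noise_sd) ** 2)
        w /= w.sum()
        m = np.sum(w * pred)
        evar += np.sum(w * (pred - m) ** 2)         # conditional variance
    return evar / reps

prior_var = pred.var()
post_var = expected_conditional_variance(lambda x: x)   # design: measure k
```

Comparing `post_var` across candidate designs, before any data are collected, is the marginalization over yet-unknown data values that turns GLUE into an optimal design tool.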

  1. Practical experience applied to the design of injection and sample manifolds to perform in-place surveillance tests according to ANSI/ASME N-510

    Energy Technology Data Exchange (ETDEWEB)

    Banks, E.M.; Wikoff, W.O.; Shaffer, L.L. [NUCON International, Inc., Columbus, OH (United States)

    1997-08-01

At the current level of maturity and experience in the nuclear industry, regarding testing of air treatment systems, it is now possible to design and qualify injection and sample manifolds for most applications. While the qualification of sample manifolds is still in its infancy, injection manifolds have reached a mature stage that helps to eliminate the "hit or miss" type of design. During the design phase, manifolds can be adjusted to compensate for poor airflow distribution, laminar flow conditions, and to take advantage of any system attributes. Experience has shown that knowing the system attributes before the design phase begins is an essential element of a successful manifold design. The use of a spreadsheet-type program commonly found on most personal computers can afford greater flexibility and a reduction in time spent in the design phase. The experience gained from several generations of manifold design has culminated in a set of general design guidelines. Use of these guidelines, along with a good understanding of the type of testing (theoretical and practical), can result in a good manifold design requiring little or no field modification. The requirements for manifolds came about because of the use of multiple banks of components and unconventional housing inlet configurations. Multiple banks of adsorbers and pre- and post-HEPAs required that each bank be tested to ensure that each one does not exceed a specific allowable leakage criterion. 5 refs., 5 figs., 1 tab.

  2. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    OpenAIRE

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic ...

  3. An Optimization-Based Reconfigurable Design for a 6-Bit 11-MHz Parallel Pipeline ADC with Double-Sampling S&H

    Directory of Open Access Journals (Sweden)

    Wilmar Carvajal

    2012-01-01

    This paper presents a 6-bit, 11-MS/s time-interleaved pipeline A/D converter design. The specification process, from block level to elementary circuits, is gradually covered to draw a design methodology. Both power consumption and mismatch between the parallel chain elements are reduced by techniques such as double and bottom-plate sampling, fully differential circuits, RSD digital correction, and geometric programming (GP) optimization of the design of the elementary analog circuits (OTAs and comparators). Prelayout simulations of the complete ADC are presented to characterize the designed converter, which consumes 12 mW while sampling a 500 kHz input signal. Moreover, the block inside the ADC with the most stringent requirements in power, speed, and precision was sent to fabrication in a CMOS 0.35 μm AMS technology, and some postlayout results are shown.

  4. Stratified random sampling plans designed to assist in the determination of radon and radon daughter concentrations in underground uranium mine atmosphere

    International Nuclear Information System (INIS)

    Makepeace, C.E.

    1981-01-01

    Sampling strategies for the monitoring of deleterious agents present in uranium mine air in underground and surface mining areas are described. These methods are designed to prevent overexposure of the lining of the respiratory system of uranium miners to ionizing radiation from radon and radon daughters, and whole-body overexposure to external gamma radiation. A detailed description is provided of stratified random sampling monitoring methodology for obtaining baseline data to be used as a reference for subsequent compliance assessment.
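A stratified random sampling plan of this kind allocates a fixed monitoring budget across mine areas. The strata, sizes, and variabilities below are hypothetical illustrations (not from the report); the sketch uses Neyman allocation, the standard rule that minimizes the variance of the stratified mean for a fixed total sample size.

```python
# Hypothetical mine working areas (assumed values, for illustration only):
# N_h = number of candidate sampling locations, S_h = std of radon-daughter
# working levels in stratum h.
strata = {"stopes": (120, 0.30), "haulage ways": (200, 0.10), "shafts": (80, 0.20)}
n_total = 40   # total monitoring budget (samples)

# Neyman allocation: n_h proportional to N_h * S_h, which minimizes the
# variance of the stratified mean estimator for a fixed total sample size.
weights = {h: N * S for h, (N, S) in strata.items()}
total = sum(weights.values())
alloc = {h: round(n_total * w / total) for h, w in weights.items()}
print(alloc)   # {'stopes': 20, 'haulage ways': 11, 'shafts': 9}
```

More variable strata (stopes here) receive proportionally more samples than their share of locations alone would suggest, which is exactly why stratified plans outperform simple random sampling for compliance baselines.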

  5. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  6. Design project of the dosimetry control system in the independent CO2 loop for cooling the samples irradiated in the RA reactor vertical experimental channels, Vol. V

    International Nuclear Information System (INIS)

    1964-01-01

    Design project of the dosimetry control system in the independent CO2 loop for cooling the samples irradiated in the RA reactor vertical experimental channels includes the following: calculations of CO2 gas activity, design of the dosimetry control system, review of the changes that should be made in the RA reactor building for installing the independent CO2 loop, specification of the materials with cost estimation, and engineering drawings of the system.

  7. Objective sampling design in a highly heterogeneous landscape - characterizing environmental determinants of malaria vector distribution in French Guiana, in the Amazonian region.

    Science.gov (United States)

    Roux, Emmanuel; Gaborit, Pascal; Romaña, Christine A; Girod, Romain; Dessay, Nadine; Dusfour, Isabelle

    2013-12-01

    Sampling design is a key issue when establishing species inventories and characterizing habitats within highly heterogeneous landscapes. Sampling efforts in such environments may be constrained, and many field studies rely only on subjective and/or qualitative approaches to design the collection strategy. The region of Cacao, in French Guiana, provides an excellent study site to understand the presence and abundance of Anopheles mosquitoes, their species dynamics and the transmission risk of malaria across various environments. We propose an objective methodology to define a stratified sampling design. Following a thorough environmental characterization, a factorial analysis of mixed groups allows the data to be reduced and non-collinear principal components to be identified while balancing the influences of the different environmental factors. These components define new variables that can then be used in a robust k-means clustering procedure. We thus identified five clusters that corresponded to our sampling strata and selected sampling sites in each stratum. We validated our method by comparing the species overlap of entomological collections from the selected sites with the environmental similarities of the same sites. The Morisita index was significantly correlated (Pearson linear correlation) with environmental similarity based on (i) the balanced environmental variable groups considered jointly (p = 0.001) and (ii) land cover/use, validating the proposed sampling approach. Land cover/use maps (based on high spatial resolution satellite images) were shown to be particularly useful when studying the presence, density and diversity of Anopheles mosquitoes at local scales and in very heterogeneous landscapes.
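The stratification step, clustering candidate sites on non-collinear components and then drawing sites per stratum, can be sketched as below. The site descriptors are synthetic stand-ins for the factor-analysis scores, and the plain Lloyd's k-means here is a minimal illustration of the clustering idea (a production run would use a library implementation with multiple restarts).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical descriptors: scores of 200 candidate sites on three
# non-collinear principal components (stand-ins for the factor analysis).
sites = rng.normal(size=(200, 3))

def kmeans(X, k=5, iters=50):
    """Plain Lloyd's algorithm, initialized from random data points."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

strata = kmeans(sites)          # five clusters serve as sampling strata
chosen = []
for j in range(5):
    members = np.where(strata == j)[0]
    if members.size:            # guard against a (rare) empty stratum
        chosen.append(int(rng.choice(members)))
print(len(chosen), "sites selected, one per stratum")
```

Drawing at least one site per cluster guarantees that every environmental stratum is represented in the entomological collections.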

  8. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

    Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to utilize these resources appropriately. Indeed, users of one popular dataset were generally found not to have modeled the analyses to take account of the complex sample (Johnson & Elliott, 1998), even when publishing in highly regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of errors of inference and mis-estimation of parameters resulting from failure to analyze these data sets appropriately.
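The bias from ignoring design weights is easy to demonstrate on a toy two-stratum sample. All numbers below are invented for illustration: one stratum is heavily oversampled, so the unweighted mean leans toward it, while weighting each observation by N_h / n_h recovers the population mean.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-stratum complex sample (assumed): stratum A holds 10% of the
# population but is heavily oversampled.
N = {"A": 1_000, "B": 9_000}          # population stratum sizes
n = {"A": 400, "B": 900}              # sample sizes (A oversampled)
mu = {"A": 10.0, "B": 20.0}           # true stratum means

values, weights = [], []
for h in N:
    values.append(rng.normal(mu[h], 1.0, n[h]))
    weights.append(np.full(n[h], N[h] / n[h]))   # design weight = N_h / n_h
x, w = np.concatenate(values), np.concatenate(weights)

true_mean = sum(N[h] * mu[h] for h in N) / sum(N.values())   # 19.0
unweighted = x.mean()                    # biased toward oversampled stratum A
weighted = np.average(x, weights=w)      # approximately unbiased
print(round(unweighted, 2), round(weighted, 2), true_mean)
```

On top of weighting, variance estimation must also account for the design effect (e.g. via Taylor linearization or replicate weights), since weighted point estimates alone still understate standard errors in clustered designs.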

  9. Box-Behnken design in modeling of solid-phase tea waste extraction for the removal of uranium from water samples

    International Nuclear Information System (INIS)

    Khajeh, Mostafa; Jahanbin, Elham; Ghaffari-Moghaddam, Mansour; Moghaddam, Zahra Safaei; Bohlooli, Mousa

    2015-01-01

    In this study, a solid-phase tea waste procedure was used for the separation, preconcentration and determination of uranium from water samples by UV-Vis spectrophotometry. A Box-Behnken experimental design was employed to investigate the influence of six variables, including pH, mass of adsorbent, eluent volume, amount of 1-(2-pyridylazo)-2-naphthol (PAN), and sample and eluent flow rates, on the extraction of the analyte. A high determination coefficient (R²) of 0.972 and adjusted R² of 0.943 showed the satisfactory fit of the polynomial regression model. The method was used for the extraction of uranium from real water samples.
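A Box-Behnken design places runs at the midpoints of the edges of the factor cube plus replicated center points, which supports a second-order polynomial fit without extreme corner runs. The sketch below shows the pairwise construction for the classic three-factor case; note that the catalogued designs for larger factor counts (such as the six factors in this study) use incomplete-block constructions with factor triples, so this simple pairwise generator matches the standard tables only for small k.

```python
from itertools import combinations

def box_behnken(k, center_pts=3):
    """Pairwise Box-Behnken construction in coded units (-1, 0, +1):
    every +/-1 combination for each factor pair with the remaining
    factors held at 0, plus replicated center points. Matches the
    catalogued designs for small k (e.g. k = 3, 4, 5)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(center_pts)]
    return runs

design = box_behnken(3)
print(len(design))   # 4*C(3,2) + 3 = 15 runs, the classic 3-factor BBD
```

Each coded row is then mapped to physical levels (e.g. pH, adsorbent mass) before running the extraction experiments and fitting the quadratic response surface.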

  10. Analysis of Clinical Cohort Data Using Nested Case-control and Case-cohort Sampling Designs. A Powerful and Economical Tool.

    Science.gov (United States)

    Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M

    2015-01-01

    Sampling from a large cohort in order to derive a subsample sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort with the power of a nested case-control and a case-cohort design. We compare power formulae for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error of the estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The power of a nested case-control design and of a case-cohort design is directly connected to the power of a cohort study through the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients when using a sampling design instead of analyzing the full cohort.
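Schoenfeld's formula gives the number of events d needed to detect a hazard ratio HR with a binary covariate: d = (z_{1−α/2} + z_{1−β})² / (p(1−p)·(ln HR)²), where p is the exposure prevalence. The sketch below also applies the standard approximation that a nested case-control design with m controls per case has relative efficiency m/(m+1); the specific numbers (HR = 1.5, m = 4) are illustrative, not from the paper's data example.

```python
from statistics import NormalDist
from math import log

def schoenfeld_events(hr, p_exposed=0.5, alpha=0.05, power=0.8):
    """Events needed for a Cox model to detect hazard ratio `hr` when a
    proportion `p_exposed` of subjects carries the (binary) risk factor."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z ** 2 / (p_exposed * (1 - p_exposed) * log(hr) ** 2)

d_full = schoenfeld_events(hr=1.5)
# Standard approximation: a nested case-control design with m controls
# per case has relative efficiency m/(m+1), inflating required events.
m = 4
d_ncc = d_full * (m + 1) / m
print(round(d_full), round(d_ncc))   # 191 239
```

With a low event rate this shows the trade-off directly: the nested design needs about 25% more events (with m = 4) but collects covariate information on only cases plus 4m controls rather than the whole cohort.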

  11. An investigation of the speeding-related crash designation through crash narrative reviews sampled via logistic regression.

    Science.gov (United States)

    Fitzpatrick, Cole D; Rakasi, Saritha; Knodler, Michael A

    2017-01-01

    Speed is one of the most important factors in traffic safety as higher speeds are linked to increased crash risk and higher injury severities. Nearly a third of fatal crashes in the United States are designated as "speeding-related", which is defined as either "the driver behavior of exceeding the posted speed limit or driving too fast for conditions." While many studies have utilized the speeding-related designation in safety analyses, no studies have examined the underlying accuracy of this designation. Herein, we investigate the speeding-related crash designation through the development of a series of logistic regression models that were derived from the established speeding-related crash typologies and validated using a blind review, by multiple researchers, of 604 crash narratives. The developed logistic regression model accurately identified crashes which were not originally designated as speeding-related but had crash narratives that suggested speeding as a causative factor. Only 53.4% of crashes designated as speeding-related contained narratives which described speeding as a causative factor. Further investigation of these crashes revealed that the driver contributing code (DCC) of "driving too fast for conditions" was being used in three separate situations. Additionally, this DCC was also incorrectly used when "exceeding the posted speed limit" would likely have been a more appropriate designation. Finally, it was determined that the responding officer only utilized one DCC in 82% of crashes not designated as speeding-related but contained a narrative indicating speed as a contributing causal factor. The use of logistic regression models based upon speeding-related crash typologies offers a promising method by which all possible speeding-related crashes could be identified. Published by Elsevier Ltd.

  12. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Science.gov (United States)

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
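Down-sampling for unbalanced prevalence simply draws the majority class (absences) down to the minority class (presences) before model fitting. The data below are synthetic stand-ins for rare-species presence/absence records; the helper name and ~5% prevalence are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy presence/absence records for a rare tree species (assumed ~5%
# prevalence); presence rows are shifted so a model could separate them.
y = (rng.random(2000) < 0.05).astype(int)
X = rng.normal(size=(2000, 4)) + y[:, None]

def downsample_majority(X, y):
    """Balance prevalence by sampling absences down to the number of
    presences before fitting (e.g. a Random Forest per stratum)."""
    pres = np.where(y == 1)[0]
    absent = np.where(y == 0)[0]
    keep = np.concatenate([pres, rng.choice(absent, pres.size, replace=False)])
    rng.shuffle(keep)
    return X[keep], y[keep]

Xb, yb = downsample_majority(X, y)
print(yb.size, float(yb.mean()))   # balanced sample, prevalence 0.5
```

When sampling intensity differs by stratum, the same idea applies within each stratum so that the model does not simply learn the sampling design; predicted probabilities from the balanced fit must then be recalibrated to the true prevalence.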

  13. Design and development of a highly sensitive, field portable plasma source instrument for on-line liquid stream monitoring and real-time sample analysis

    International Nuclear Information System (INIS)

    Duan, Yixiang; Su, Yongxuan; Jin, Zhe; Abeln, Stephen P.

    2000-01-01

    The development of a highly sensitive, field portable, low-power instrument for on-site, real-time liquid waste stream monitoring is described in this article. Factors such as system sensitivity and portability, plasma source, sample introduction, desolvation system, power supply, and instrument configuration were carefully considered in the design of the portable instrument. A newly designed, miniature, modified microwave plasma source was selected as the emission source for spectroscopic measurement, and an integrated small spectrometer with a charge-coupled device detector was installed for signal processing and detection. An innovative beam collection system with optical fibers was designed and used for emission signal collection. The microwave plasma can be sustained with various gases at relatively low power, and it possesses high detection capabilities for both metal and nonmetal pollutants, making it desirable for on-site, real-time liquid waste stream monitoring. An effective in situ sampling system was coupled with a high-efficiency desolvation device for directly sampling liquid samples into the plasma. A portable computer control system is used for data processing. The new, integrated instrument can easily be used for on-site, real-time monitoring in the field. The system possesses a series of advantages, including high sensitivity for metal and nonmetal elements, in situ sampling, compact structure, low cost, and ease of operation and handling. These advantages overcome significant limitations of previous monitoring techniques and will contribute greatly to environmental restoration and monitoring.

  14. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial.
Sample size calculations illustrate that small changes in the specification of
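The WPC/BPC reasoning above translates into a design effect for the two-period cross-sectional CRXO design, DE = 1 + (m−1)·WPC − m·BPC, applied to the individually randomised sample size (with BPC = 0 this reduces to the familiar parallel cluster-trial design effect 1 + (m−1)·WPC). The numeric inputs below are illustrative assumptions, not the ANZICS-APD estimates.

```python
from statistics import NormalDist
from math import ceil

def crxo_total_n(delta, sd, m, wpc, bpc, alpha=0.05, power=0.8):
    """Total subjects for a two-period, two-intervention, cross-sectional
    CRXO design with m subjects per cluster-period, via the design effect
    DE = 1 + (m-1)*WPC - m*BPC applied to the individually randomised
    parallel-group sample size for a continuous outcome."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n_arm = 2 * (z * sd / delta) ** 2            # per arm, individual RCT
    de = 1 + (m - 1) * wpc - m * bpc
    return 2 * ceil(n_arm * de)

# Illustrative values: 50 subjects per cluster-period, WPC = 0.05,
# BPC = 0.02, standardized effect size 0.2.
print(crxo_total_n(delta=0.2, sd=1.0, m=50, wpc=0.05, bpc=0.02))
```

Raising BPC toward WPC shrinks the design effect, which is exactly the precision recovered by the crossover comparison within clusters.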

  15. Study design and percent recoveries of anthropogenic organic compounds with and without the addition of ascorbic acid to preserve water samples containing free chlorine, 2004-06

    Science.gov (United States)

    Valder, Joshua F.; Delzer, Gregory C.; Price, Curtis V.; Sandstrom, Mark W.

    2008-01-01

    The National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey (USGS) began implementing Source Water-Quality Assessments (SWQAs) in 2002 that focus on characterizing the quality of source water and finished water of aquifers and major rivers used by some of the larger community water systems in the United States. As used for SWQA studies, source water is the raw (ambient) water collected at the supply well prior to water treatment (for ground water) or the raw (ambient) water collected from the river near the intake (for surface water). Finished water is the water that is treated, which typically involves, in part, the addition of chlorine or other disinfection chemicals to remove pathogens, and is ready to be delivered to consumers. Finished water is collected before the water enters the distribution system. This report describes the study design and percent recoveries of anthropogenic organic compounds (AOCs) with and without the addition of ascorbic acid to preserve water samples containing free chlorine. The percent recoveries were determined by using analytical results from a laboratory study conducted in 2004 by the USGS's National Water Quality Laboratory (NWQL) and from data collected during 2004-06 for a field study currently (2008) being conducted by the USGS's NAWQA Program. The laboratory study was designed to determine if preserving samples with ascorbic acid (quenching samples) adversely affects analytical performance under controlled conditions. During the laboratory study, eight samples of reagent water were spiked for each of five analytical schedules evaluated. Percent recoveries from these samples were then compared in two ways: (1) four quenched reagent spiked samples analyzed on day 0 were compared with four quenched reagent spiked samples analyzed on day 7 or 14, and (2) the combined eight quenched reagent spiked samples analyzed on day 0, 7, or 14 were compared with eight laboratory reagent spikes (LRSs). Percent

  16. State of the Art of Language Learning Design Using Mobile Technology: Sample Apps and Some Critical Reflection

    Science.gov (United States)

    Bárcena, Elena; Read, Timothy; Underwood, Joshua; Obari, Hiroyuki; Cojocnean, Diana; Koyama, Toshiko; Pareja-Lora, Antonio; Calle, Cristina; Pomposo, Lourdes; Talaván, Noa; Ávila-Cabrera, José; Ibañez, Ana; Vermeulen, Anna; Jordano, María; Arús-Hita, Jorge; Rodríguez, Pilar; Castrillo, María Dolores; Kétyi, Andras; Selwood, Jaime; Gaved, Mark; Kukulska-Hulme, Agnes

    2015-01-01

    In this paper, experiences from different research groups illustrate the state-of-the-art of Mobile Assisted Language Learning (henceforth, MALL) in formal and non-formal education. These research samples represent recent and on-going progress made in the field of MALL at an international level and offer encouragement for practitioners who are…

  17. A sample design for globally consistent biomass estimation using lidar data from the Geoscience Laser Altimeter System (GLAS)

    Science.gov (United States)

    Sean P. Healey; Paul L. Patterson; Sassan S. Saatchi; Michael A. Lefsky; Andrew J. Lister; Elizabeth A. Freeman

    2012-01-01

    Lidar height data collected by the Geosciences Laser Altimeter System (GLAS) from 2002 to 2008 has the potential to form the basis of a globally consistent sample-based inventory of forest biomass. GLAS lidar return data were collected globally in spatially discrete full waveform "shots," which have been shown to be strongly correlated with aboveground forest...

  18. Design

    DEFF Research Database (Denmark)

    Volf, Mette

    This publication is unique in its demystification and operationalization of the complex and elusive nature of the design process. The publication portrays the designer's daily work and the creative process of which the designer is a part. Apart from displaying the designer's work methods and design parameters, the publication shows examples from renowned Danish design firms. Through these examples the reader gets an insight into the designer's reality.

  19. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar of pancreatic cancer treatment. However, toxicities are frequently observed, and dose adjustment based on therapeutic drug monitoring might help decrease their occurrence. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The 3 optimal sampling times were determined using KineticPro, and the population PK analysis was performed in Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. The correlation between PK parameters and the occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated with PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using the Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable for estimating population and individual PK parameters in patients, which can be used to individualize the dose for an optimized benefit-to-risk ratio.
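The reported typical values can be used to simulate concentration-time profiles for a parent→metabolite system. The structural model below (complete conversion to dFdU, linear elimination, a 30-minute infusion of an assumed 1800 mg dose, Euler integration) is a simplifying sketch for illustration, not the published two-compartment model.

```python
# Sketch of parent->metabolite kinetics using the reported estimates
# (gemcitabine: V1 = 45 L, CL1 = 4.03 L/min; dFdU: V2 = 36 L,
# CL2 = 0.226 L/min). Dose, infusion length, full conversion, and
# linear elimination are illustrative assumptions.
V1, CL1, V2, CL2 = 45.0, 4.03, 36.0, 0.226
k10, k20 = CL1 / V1, CL2 / V2           # first-order elimination rates, 1/min
dose, t_inf = 1800.0, 30.0              # mg over a 30-minute infusion

dt = 0.1
A1 = A2 = 0.0
gem_conc = []
for step in range(int(480 / dt)):       # simulate 8 hours
    t = step * dt
    rate = dose / t_inf if t < t_inf else 0.0
    A1 += dt * (rate - k10 * A1)        # gemcitabine amount (mg), Euler step
    A2 += dt * (k10 * A1 - k20 * A2)    # dFdU amount (mg)
    gem_conc.append(A1 / V1)            # gemcitabine concentration (mg/L)
peak_gem = max(gem_conc)
print(round(peak_gem, 1))               # peak near the end of infusion
```

With CL1/V1 ≈ 0.09 min⁻¹, gemcitabine's half-life is only a few minutes, which is why optimally placed sparse sampling times matter so much for estimating its PK.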

  20. A SAMPLE STUDY ON THE IMPORTANCE AND THE EVALUATION OF THREE DIMENSIONAL EXPRESSION TECHNIQUES IN THE EDUCATION OF PLANTING DESIGN

    Directory of Open Access Journals (Sweden)

    Banu Çiçek Kurdoğlu

    2008-04-01

    Drafts developed with graphical expression techniques, together with models that begin in abstract form and gradually become concrete, are used to exhibit the intended images in the design process, which is also a process of mental development. Among the biggest difficulties beginner architecture students face is the failure to comment on the products they design during the architectural design process and on their spatial relationships, and to express them in two- or three-dimensional models. The expression and modelling techniques used in this process are therefore very important. In this study, a lesson programme enriched with two- and three-dimensional model expression techniques was developed and applied for planting design education, which is of vital significance in landscape architecture departments. The advantages and disadvantages of the programme were evaluated and some suggestions are offered. Consequently, the importance of and need for three-dimensional expression techniques were re-emphasized, and the efficiency of the modelling technique used in the study was assessed under today's conditions in Turkey.

  1. Validity and power of association testing in family-based sampling designs: evidence for and against the common wisdom.

    Science.gov (United States)

    Knight, Stacey; Camp, Nicola J

    2011-04-01

    Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and that independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees, and familial or independent controls. The family structures considered are sibships, nuclear families, and moderate-sized and extended pedigrees. Three methods were considered with the χ² test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness), as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternative models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used, and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls and conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees, and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.

  2. Two‐phase designs for joint quantitative‐trait‐dependent and genotype‐dependent sampling in post‐GWAS regional sequencing

    Science.gov (United States)

    Espin‐Garcia, Osvaldo; Craiu, Radu V.

    2017-01-01

    ABSTRACT We evaluate two‐phase designs to follow‐up findings from genome‐wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation‐maximization‐based inference under a semiparametric maximum likelihood formulation tailored for post‐GWAS inference. A GWAS‐SNP (where SNP is single nucleotide polymorphism) serves as a surrogate covariate in inferring association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify efficiency and power of joint QT‐SNP‐dependent sampling and analysis under alternative sample allocations by simulations. Joint allocation balanced on SNP genotype and extreme‐QT strata yields significant power improvements compared to marginal QT‐ or SNP‐based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure. PMID:29239496

  3. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
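The hybrid idea, Bayesian averaging of classical power over a prior on the effect, then choosing n to maximize expected profit, can be sketched numerically. All economic inputs below (revenue, per-patient cost, prior on the standardized effect) are invented for illustration and are not from the paper.

```python
import numpy as np
from math import erf, sqrt

def phi(x):                              # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(4)

# Illustrative economics (all assumed): $500M revenue if the trial
# succeeds, $50k per enrolled patient, outcome sd = 1, and a company
# prior on the true standardized effect of N(0.3, 0.2^2).
prior_effect = rng.normal(0.3, 0.2, 20_000)
revenue, cost_per_patient = 500e6, 50e3
z_crit = 1.959964                        # two-sided alpha = 0.05

def expected_profit(n_per_arm):
    se = sqrt(2.0 / n_per_arm)           # SE of the treatment difference
    # Bayesian expected power: classical power averaged over the prior.
    p_success = sum(phi(d / se - z_crit) for d in prior_effect) / prior_effect.size
    return p_success * revenue - 2 * n_per_arm * cost_per_patient

candidates = range(50, 1001, 50)
best_n = max(candidates, key=expected_profit)
print(best_n, round(expected_profit(best_n) / 1e6), "($M)")
```

The same machinery extends to the paper's 'satisficing' variant by replacing the objective with the probability that profit exceeds a management target, and to portfolios by optimizing the n's jointly under a budget constraint.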

  4. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the geothermal field of Chipilapa, El Salvador, geostatistical parameters were derived from the variogram calculated from the field data. The maximum correlation distance of the radon samples along the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, the spacing of the field samples was optimized (minimum cost) by means of geostatistical techniques, without losing detection of the anomaly. (Author)
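The variogram-based spacing argument can be illustrated on synthetic data. The field below is an invented 1-D stand-in with an exponential covariance whose 120 m correlation length echoes the ~121 m reported above; the empirical semivariogram then shows where the correlation range (and hence the affordable sample spacing) lies.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 1-D "radon" field (assumed exponential covariance with a
# 120 m correlation length, echoing the ~121 m reported for Chipilapa).
n, length, corr = 300, 1000.0, 120.0
x = np.sort(rng.uniform(0.0, length, n))
d = np.abs(x[:, None] - x[None, :])
cov = np.exp(-d / corr)
z = np.linalg.cholesky(cov + 1e-8 * np.eye(n)) @ rng.normal(size=n)

# Empirical semivariogram: gamma(h) = 0.5 * E[(z(x) - z(x+h))^2],
# averaged over sample pairs whose separation falls in each lag bin.
edges = np.arange(0.0, 400.0, 50.0)
gamma = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (d > lo) & (d <= hi)
    gamma.append(0.5 * np.mean((z[:, None] - z[None, :])[mask] ** 2))
print([round(g, 2) for g in gamma])
```

The semivariogram rises with lag and levels off near the sill around the correlation range; sample spacing close to that range keeps the grid as coarse (cheap) as possible while still resolving the anomaly.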

  5. Considerations on sample holder design and custom-made non-polarizable electrodes for Spectral Induced Polarization measurements on unsaturated soils

    Science.gov (United States)

    Kaouane, C.; Chouteau, M. C.; Fauchard, C.; Cote, P.

    2014-12-01

    Spectral Induced Polarization (SIP) is a geophysical method sensitive to water content, saturation and grain-size distribution. It could be used as an alternative to nuclear probes to assess the compaction of soils in road works. To evaluate the potential of SIP as a practical tool, we designed an experiment for complex conductivity measurements on unsaturated soil samples. The literature presents a large variety of sample holders and designs, each depending on the context. Although a precise description of the sample holder is sometimes given, exact replication is not always possible. Furthermore, the potential measurements are often made with custom-made Ag/AgCl electrodes, and very few indications are given on their reliability over time and temperature. Our objective is to perform complex conductivity measurements on soil samples compacted in a PVC cylindrical mould (10 cm long, 5 cm diameter) according to geotechnical standards. To obtain a homogeneous current density, the electrical current is transmitted through the sample via chambers filled with agar gel. Agar gel is a good non-polarizable conductor within the frequency range of interest (1 mHz-20 kHz), but its electrical properties are little known. We measured an increase of the agar gel electrical conductivity over time and modelled the influence of this variation on the measurement; the influence is minimized if the electrodes are located on the sample. Because of the dimensions at stake and the need for a simple design, the potential electrodes are located outside the sample, so the gel contributes to the measurements. Since the gel is fairly conductive, we expect to overestimate the sample conductivity. The potential electrodes are non-polarizable Ag/AgCl electrodes. To avoid any leakage, the KCl solution in the electrodes is replaced by saturated KCl-agar gel. These electrodes are low cost and show a low, stable self-potential (<1 mV). In addition, the electrode fabrication technique is easily reproduced, and storage and maintenance are simple.
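As a back-of-envelope check on the overestimation the authors expect, a series-resistor sketch (all values purely illustrative, not from the paper) shows how a conductive gel section inflates the apparent conductivity when the geometric length spans both sample and gel:

```python
# Illustrative series-resistor model: a sample of length L_sample and a
# gel section of length L_gel lie in series between the potential
# electrodes; using the full length as the geometric factor mixes both.

def apparent_conductivity(sigma_sample, sigma_gel, L_sample, L_gel, area):
    """Apparent conductivity from R_total = R_sample + R_gel (series)."""
    r_sample = L_sample / (sigma_sample * area)
    r_gel = L_gel / (sigma_gel * area)
    L_total = L_sample + L_gel
    return L_total / ((r_sample + r_gel) * area)

# Assumed values: 0.01 S/m soil, 0.1 S/m agar gel, 10 cm sample,
# 2 cm of gel, 20 cm^2 cross-section.
sigma_app = apparent_conductivity(0.01, 0.1, 0.10, 0.02, 2.0e-3)
```

With the gel more conductive than the soil, the apparent value (about 0.0118 S/m here) exceeds the true 0.01 S/m sample conductivity, consistent with the expected overestimation.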

  6. Identification and verification of ultrafine particle affinity zones in urban neighbourhoods: sample design and data pre-processing.

    LENUS (Irish Health Repository)

    Harris, Paul

    2009-01-01

    A methodology is presented and validated through which long-term fixed site air quality measurements are used to characterise and remove temporal signals in sample-based measurements which have good spatial coverage but poor temporal resolution. The work has been carried out specifically to provide a spatial dataset of atmospheric ultrafine particle (UFP < 100 nm) data for ongoing epidemiologic cohort analysis but the method is readily transferable to wider epidemiologic investigations and research into the health effects of other pollutant species.

  7. Characterization of airborne particulate matter in Santiago, Chile. Part 1: design, sampling and analysis for an experimental campaign

    International Nuclear Information System (INIS)

    Toro E, P.

    1995-01-01

    This work describes the siting and sampling procedures used to collect airborne particulate matter in Santiago, Chile, and to determine its chemical composition and daily behaviour. The airborne particulate matter was collected onto polycarbonate membranes, one fine-pore and the other coarse-pore, using PM10 samplers. The material was analyzed using neutron activation analysis, proton-induced X-ray emission, X-ray fluorescence, voltammetry, atomic absorption spectrometry, ion chromatography and isotope dilution. (author). 1 tab

  8. Short torch design for direct liquid sample introduction using conventional and micro-nebulizers for plasma spectrometry

    Science.gov (United States)

    Montaser, Akbar [Potomac, MD; Westphal, Craig S [Landenberg, PA; Kahen, Kaveh [Montgomery Village, MD; Rutkowski, William F [Arlington, VA

    2008-01-08

    An apparatus and method for providing direct liquid sample introduction using a nebulizer are provided. The apparatus and method include a short torch having an inner tube and an outer tube, and an elongated adapter having a cavity for receiving the nebulizer and positioning a nozzle tip of the nebulizer a predetermined distance from a tip of the outer tube of the short torch. The predetermined distance is preferably about 2-5 mm.

  9. Design and construction of a heat stage for investigations of samples by atomic force microscopy above ambient temperatures

    DEFF Research Database (Denmark)

    Bækmark, Thomas Rosleff; Bjørnholm, Thomas; Mouritsen, Ole G.

    1997-01-01

    The construction, from simple and cheap commercially available parts, of a miniature heat stage for the direct heating of samples studied with a commercially available optical-lever-detection atomic force microscope is reported. We demonstrate that by using this heat stage, atomic resolution can be obtained on highly oriented pyrolytic graphite at 52 °C. The heat stage is of potential use for the investigation of biological material at physiological temperatures. ©1997 American Institute of Physics.

  10. Preconcentration and determination of ceftazidime in real samples using dispersive liquid-liquid microextraction and high-performance liquid chromatography with the aid of experimental design.

    Science.gov (United States)

    Razmi, Rasoul; Shahpari, Behrouz; Pourbasheer, Eslam; Boustanifar, Mohammad Hasan; Azari, Zhila; Ebadi, Amin

    2016-11-01

    A rapid and simple method for the extraction and preconcentration of ceftazidime in aqueous samples has been developed using dispersive liquid-liquid microextraction followed by high-performance liquid chromatography analysis. The extraction parameters, such as the volume of extraction solvent and disperser solvent, salt effect, sample volume, centrifuge rate, centrifuge time, extraction time, and temperature in the dispersive liquid-liquid microextraction process, were studied and optimized with experimental design methods. First, the Taguchi design was used for preliminary screening of the parameters; a fractional factorial design was then used to optimize the significant factors. At the optimum conditions, the calibration curves for ceftazidime showed good linearity over the range of 0.001-10 μg/mL with correlation coefficients higher than 0.98, and the limits of detection were 0.13 and 0.17 ng/mL for water and urine samples, respectively. The proposed method was successfully employed to determine ceftazidime in water and urine samples, and good agreement between the experimental data and predicted values was achieved. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
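Detection limits of this kind are typically derived from the calibration line. A hedged sketch (hypothetical calibration points, not the study's data) of the common ICH-style estimates LOD = 3.3·s/slope and LOQ = 10·s/slope:

```python
import numpy as np

# Hypothetical calibration data (concentration in ug/mL vs peak area).
conc = np.array([0.001, 0.01, 0.1, 1.0, 5.0, 10.0])
area = np.array([0.12, 1.05, 10.3, 101.0, 498.0, 1003.0])

# Least-squares line and residual standard deviation (n - 2 dof).
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
resid_sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))

# ICH-style limits from the residual SD and the calibration slope.
lod = 3.3 * resid_sd / slope
loq = 10.0 * resid_sd / slope

r = np.corrcoef(conc, area)[0, 1]  # linearity check
```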

  11. Design of a Polynomial Fuzzy Observer Controller With Sampled-Output Measurements for Nonlinear Systems Considering Unmeasurable Premise Variables

    OpenAIRE

    Liu, Chuang; Lam, H. K.

    2015-01-01

    In this paper, we propose a polynomial fuzzy observer controller for nonlinear systems, where the design is achieved through the stability analysis of polynomial-fuzzy-model-based (PFMB) observer-control system. The polynomial fuzzy observer estimates the system states using estimated premise variables. The estimated states are then employed by the polynomial fuzzy controller for the feedback control of nonlinear systems represented by the polynomial fuzzy model. The system stability of the P...

  12. Web-Face-to-Face Mixed-Mode Design in a Longitudinal Survey: Effects on Participation Rates, Sample Composition, and Costs

    Directory of Open Access Journals (Sweden)

    Bianchi Annamaria

    2017-06-01

    Sequential mixed-mode designs are increasingly considered as an alternative to interviewer-administered data collection, allowing researchers to take advantage of the benefits of each mode. We assess the effects of the introduction of a sequential web-face-to-face mixed-mode design over three waves of a longitudinal survey in which members were previously interviewed face-to-face. Findings are reported from a large-scale randomised experiment carried out on the UK Household Longitudinal Study. No differences are found between the mixed-mode design and face-to-face design in terms of cumulative response rates and only minimal differences in terms of sample composition. On the other hand, potential cost savings are evident.

  13. Lagoa Real design. Influence study of sampling support on reserve evaluation from AN-13 (Cachoeira mine), Lagoa Real, Bahia, Brazil

    International Nuclear Information System (INIS)

    Bastian, E.B.; Nogueira, R.A.C.

    1984-09-01

    This work analyses the effects of changing the sampling support at the Cachoeira Mine. The differences observed in U3O8 tonnage range from -0.62% to +0.01%, while the ore tonnage per reserve category changes from +0.87% to +1.59%, and the U3O8 grade from -1.52% to +1.87%. These results show the small influence of the support change on reserves when the problem is analysed globally. (author)

  14. Molecular dynamics equation designed for realizing arbitrary density: Application to sampling method utilizing the Tsallis generalized distribution

    International Nuclear Information System (INIS)

    Fukuda, Ikuo; Nakamura, Haruki

    2010-01-01

    Several molecular dynamics techniques applying the Tsallis generalized distribution are presented. We have developed a deterministic dynamics to generate an arbitrary smooth density function ρ. It creates a measure-preserving flow with respect to the measure ρdω and realizes the density ρ under the assumption of ergodicity. It can thus be used to investigate physical systems that obey such a distribution density. Using this technique, the Tsallis distribution density based on a full energy function form, along with the Tsallis index q ≥ 1, can be created. Because the effective support of the Tsallis distribution in phase space is broad compared with that of the conventional Boltzmann-Gibbs (BG) distribution, and because the corresponding energy-surface deformation does not change energy minimum points, the dynamics enhances physical state sampling, in particular for a rugged energy surface spanned by a complicated system. Another feature of the Tsallis distribution is that it provides a greater degree of nonlinearity than the BG distribution in the deterministic dynamics equation, which is very useful for effectively attaining the ergodicity of the dynamical system constructed according to the scheme. Combining these methods with the reconstruction technique of the BG distribution, we can obtain information consistent with the BG ensemble and create the corresponding free-energy surface. We demonstrate several sampling results obtained from systems typical of benchmark tests in MD and from biomolecular systems.
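The broader effective support mentioned above comes from the power-law tail of the q-exponential. A minimal sketch, with assumed β and q (not parameters from the paper), comparing the Tsallis weight against the Boltzmann-Gibbs weight:

```python
import math

def boltzmann_weight(E, beta):
    """Boltzmann-Gibbs weight exp(-beta*E)."""
    return math.exp(-beta * E)

def tsallis_weight(E, beta, q):
    """q-exponential weight [1 + (q-1)*beta*E]^(-1/(q-1)); q -> 1
    recovers the Boltzmann-Gibbs weight."""
    if abs(q - 1.0) < 1e-12:
        return boltzmann_weight(E, beta)
    base = 1.0 + (q - 1.0) * beta * E
    return base ** (-1.0 / (q - 1.0)) if base > 0 else 0.0

beta, q = 1.0, 1.2  # assumed values for illustration

# At high energy the Tsallis weight decays only as a power law, so
# high-energy (barrier-crossing) states keep far more weight than
# under the exponential BG distribution.
ratio_high_E = tsallis_weight(10.0, beta, q) / boltzmann_weight(10.0, beta)
```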

  15. Application of Taguchi L16 design method for comparative study of ability of 3A zeolite in removal of Rhodamine B and Malachite green from environmental water samples

    Science.gov (United States)

    Rahmani, Mashaallah; Kaykhaii, Massoud; Sasani, Mojtaba

    2018-01-01

    This study investigated the efficiency of 3A zeolite as a novel adsorbent for the removal of Rhodamine B and Malachite green dyes from water samples. To increase the removal efficiency, the parameters affecting the adsorption process were investigated and optimized using the Taguchi design-of-experiments approach. The percentage contribution of each parameter to the removal of Rhodamine B and Malachite green was determined using ANOVA, which showed that the most effective parameters in the removal of RhB and MG by 3A zeolite are the initial dye concentration and pH, respectively. Under optimized conditions, the value predicted by the Taguchi design method and the value obtained experimentally showed good agreement (more than 94.86%). The good adsorption efficiency obtained indicates that 3A zeolite is capable of removing significant amounts of Rhodamine B and Malachite green from environmental water samples.
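The "percentage contribution" reported from a Taguchi ANOVA is each factor's sum of squares expressed as a share of the total sum of squares. A toy sketch (invented removal data, two factors at two levels, not the study's L16 array):

```python
import numpy as np

# Toy runs: (dye_concentration_level, pH_level, removal %).
runs = [(0, 0, 72.0), (0, 1, 80.0), (1, 0, 55.0), (1, 1, 62.0)]
y = np.array([r[2] for r in runs])
grand = y.mean()
ss_total = np.sum((y - grand) ** 2)

def factor_ss(levels):
    """Between-level sum of squares for one factor."""
    ss = 0.0
    for lev in set(levels):
        vals = y[[i for i, v in enumerate(levels) if v == lev]]
        ss += len(vals) * (vals.mean() - grand) ** 2
    return ss

ss_conc = factor_ss([r[0] for r in runs])
ss_ph = factor_ss([r[1] for r in runs])

# Percentage contribution of each factor to the total variation.
contrib_conc = 100 * ss_conc / ss_total
contrib_ph = 100 * ss_ph / ss_total
```

With these invented numbers the initial-concentration factor dominates (about 84% vs 16%), mirroring the kind of ranking the abstract describes.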

  16. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
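Although the article's tables come from random-effects linear model theory, the core idea can be sketched by simulation: estimate the power of the two one-sided tests (TOST) applied to per-subject log(test/reference) contrasts, in which the subject random effect cancels. A normal-approximation sketch under assumed parameters (not the EQUIGEN values):

```python
import math
import random
import statistics

def tost_bioequivalent(diffs, lo=math.log(0.8), hi=math.log(1.25)):
    """Two one-sided tests at alpha=0.05 (normal approximation):
    bioequivalence is concluded when the 90% CI for the mean
    log-ratio lies inside [log 0.8, log 1.25]."""
    n = len(diffs)
    m = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    z = 1.645
    return (m - z * se > lo) and (m + z * se < hi)

def simulated_power(n, delta, sigma_w, nsim=2000, seed=1):
    """Monte Carlo power: fraction of simulated trials declaring BE."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(nsim):
        diffs = [rng.gauss(delta, sigma_w) for _ in range(n)]
        hits += tost_bioequivalent(diffs)
    return hits / nsim

# Assumed: true ratio = 1 (delta = 0), within-subject SD 0.25 on log scale.
power_24 = simulated_power(24, 0.0, 0.25)
power_12 = simulated_power(12, 0.0, 0.25)
```

Tabulated sample sizes come from exactly this trade-off: more subjects shrink the CI and raise the probability it falls inside the equivalence limits.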

  17. Out-of-pocket costs, primary care frequent attendance and sample selection: Estimates from a longitudinal cohort design.

    Science.gov (United States)

    Pymont, Carly; McNamee, Paul; Butterworth, Peter

    2018-03-20

    This paper examines the effect of out-of-pocket costs on subsequent frequent attendance in primary care using data from the Personality and Total Health (PATH) Through Life Project, a representative community cohort study from Canberra, Australia. The analysis sample comprised 1197 respondents with two or more GP consultations, and uses survey data linked to administrative health service use (Medicare) data providing the number of consultations and out-of-pocket costs. Respondents in the highest decile of GP use in a year were defined as frequent attenders (FAs). Logistic regression models that did not account for potential selection effects showed that out-of-pocket costs incurred during respondents' prior two consultations were significantly associated with subsequent FA status: respondents who incurred higher costs ($15-$35, or >$35) were less likely to become FAs than those who incurred no or low costs. Copyright © 2018. Published by Elsevier B.V.

  18. Data Summary Report for 116-N-1 and 116-N-3 Facility Soil Sampling to Support Remedial Design

    International Nuclear Information System (INIS)

    Ludowise, J. D.

    1999-01-01

    The 116-N-1 (1301-N) and 116-N-3 (1325-N) liquid waste disposal facilities (LWDFs) are to be remediated beginning in July 2000. Each LWDF consists of a crib and a trench. Under the proposed remedial action (DOE-RL 1998b), pipelines and above ground structures would be removed. Clean overburden material would be excavated and stockpiled. Contaminated soils would be excavated, treated (if required to meet Resource Conservation and Recovery Act of 1976 [RCRA] land disposal restrictions), and finally disposed at the Environmental Restoration Disposal Facility (ERDF). The sites would then be backfilled, graded, and revegetated. The purpose of this report is to summarize results of the sampling effort and discuss how they apply to the conceptual model of the sites and the planned remedial action under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 and closure action under RCRA

  19. Design of a scanning probe microscope with advanced sample treatment capabilities: An atomic force microscope combined with a miniaturized inductively coupled plasma source

    International Nuclear Information System (INIS)

    Hund, Markus; Herold, Hans

    2007-01-01

    We describe the design and performance of an atomic force microscope (AFM) combined with a miniaturized inductively coupled plasma source working at a radio frequency of 27.12 MHz. State-of-the-art scanning probe microscopes (SPMs) have limited in situ sample treatment capabilities. Aggressive treatments such as plasma etching or harsh treatments such as etching in aggressive liquids typically require the removal of the sample from the microscope. Consequently, time consuming procedures are required if the same sample spot has to be imaged after successive processing steps. We have developed a first prototype of a SPM which features a quasi in situ sample treatment using a modified commercial atomic force microscope. A sample holder is positioned in a special reactor chamber; the AFM tip can be retracted by several millimeters so that the chamber can be closed for a treatment procedure. Most importantly, after the treatment, the tip is moved back to the sample with a lateral drift per process step in the 20 nm regime. The performance of the prototype is characterized by consecutive plasma etching of a nanostructured polymer film

  20. Development of a standard data base for FBR core nuclear design (XIII). Analysis of small sample reactivity experiments at ZPPR-9

    International Nuclear Information System (INIS)

    Sato, Wakaei; Fukushima, Manabu; Ishikawa, Makoto

    2000-09-01

    A comprehensive study to evaluate and accumulate the abundant results of fast reactor physics is now in progress at the O-arai Engineering Center, in order to improve the analytical methods and prediction accuracy of nuclear design for large fast breeder cores such as future commercial FBRs. The present report summarizes the analytical results of sample reactivity experiments at the ZPPR-9 core, which had not yet been evaluated with the latest analytical method. The intention of the work is to extend and further generalize the standard data base for FBR core nuclear design. The analytical results of the sample reactivity experiments (samples: PU-30, U-6, DU-6, SS-1 and B-1) at the ZPPR-9 core in the JUPITER series, obtained with the latest nuclear data library JENDL-3.2 and the analytical method established by the JUPITER analysis, can be summarized as follows. The region-averaged final C/E values generally agreed with unity within 5% in the inner core region. However, the C/E values of every sample showed a radial space dependency, increasing from the center to the core edge; the discrepancy for B-1 was the largest, at 10%. Next, the influence of the present ZPPR-9 sample reactivity results on the cross-section adjustment was evaluated, with a unified cross-section set ADJ98 based on the recent JUPITER analysis as the reference case. In conclusion, the present analytical results have sufficient physical consistency with other JUPITER data and qualify as part of the standard data base for FBR nuclear design. (author)

  1. Box-Behnken design in modeling of solid-phase tea waste extraction for the removal of uranium from water samples

    Energy Technology Data Exchange (ETDEWEB)

    Khajeh, Mostafa; Jahanbin, Elham; Ghaffari-Moghaddam, Mansour; Moghaddam, Zahra Safaei [Zabol Univ. (Iran, Islamic Republic of). Dept. of Chemistry; Bohlooli, Mousa [Zabol Univ. (Iran, Islamic Republic of). Dept. of Biology

    2015-07-01

    In this study, a solid-phase tea waste procedure was used for the separation, preconcentration and determination of uranium from water samples by UV-Vis spectrophotometry. A Box-Behnken experimental design was employed to investigate the influence of six variables, including pH, mass of adsorbent, eluent volume, amount of 1-(2-pyridylazo)-2-naphthol (PAN), and sample and eluent flow rates, on the extraction of the analyte. A high determination coefficient (R² = 0.972) and adjusted R² of 0.943 showed the satisfactory fit of the polynomial regression model. The method was used for the extraction of uranium from real water samples.
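The R² and adjusted R² quoted for such a response-surface fit come from ordinary least squares on a second-order polynomial model. A self-contained sketch with simulated data (all values illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = rng.uniform(-1, 1, size=(n, 2))  # two coded factors
# Simulated response with a known quadratic surface plus noise.
y = 5 + 2 * X[:, 0] - X[:, 1] + 1.5 * X[:, 0] ** 2 + rng.normal(0, 0.2, n)

# Second-order design matrix: intercept, linear, quadratic, interaction.
D = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
pred = D @ coef

ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
p = D.shape[1]
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)  # penalizes extra terms
```

Adjusted R² is always below R² for a multi-term model, which is why the pair (0.972, 0.943) in the abstract is a plausibility check on the fit.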

  2. Comparing attitudes about legal sanctions and teratogenic effects for cocaine, alcohol, tobacco and caffeine: A randomized, independent samples design

    Directory of Open Access Journals (Sweden)

    Alanis Kelly L

    2006-02-01

    Background: Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance that has long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for the licit substances alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal-sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and the risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results: Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences (p < .0001); a post-hoc Scheffé test showed that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral and negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure

  3. [Sampling and measurement methods of the protocol design of the China Nine-Province Survey for blindness, visual impairment and cataract surgery].

    Science.gov (United States)

    Zhao, Jia-liang; Wang, Yu; Gao, Xue-cheng; Ellwein, Leon B; Liu, Hu

    2011-09-01

    The aim was to design the protocol of the China nine-province survey for blindness, visual impairment and cataract surgery, to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery. Protocol design began after the task for the national survey was accepted from the Department of Medicine, Ministry of Health, China, in November 2005. The protocols of the Beijing Shunyi Eye Study in 1996 and the Guangdong Doumen County Eye Study in 1997, both supported by the World Health Organization, were taken as the basis for the protocol design. Relevant experts were invited to discuss and review the draft protocol, and an international advisory committee was established to examine and approve it. Finally, the survey protocol was checked and approved by the Department of Medicine, Ministry of Health, China and the Prevention Program of Blindness and Deafness, WHO. The survey protocol was designed according to the characteristics and scale of the survey. The contents of the protocol included determination of the target population and survey sites, calculation of the sample size, design of the random sampling, composition and organization of the survey teams, determination of the examinees, the flowchart of the field work, survey items and methods, diagnostic criteria for blindness and for moderate and severe visual impairment, quality-control measures, and data-management methods. The designed protocol became the standard and practical protocol for the survey to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery.
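The sample-size calculation such protocols mention is, in outline, the standard prevalence formula inflated by a design effect for cluster sampling and by the expected response rate. A sketch with assumed inputs (not the survey's actual parameters):

```python
import math

def survey_sample_size(p, d, deff=2.0, z=1.96, response_rate=0.9):
    """n = deff * z^2 * p * (1 - p) / d^2, then inflated for the
    expected response rate and rounded up."""
    n = deff * z ** 2 * p * (1 - p) / d ** 2
    return math.ceil(n / response_rate)

# Assumed: expected blindness prevalence 2%, absolute precision 0.5%,
# design effect 2 for cluster sampling, 90% expected response.
n_needed = survey_sample_size(0.02, 0.005)
```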

  4. Design

    DEFF Research Database (Denmark)

    Volf, Mette

    Design - proces & metode (iBog®) is unique in its focus on demystifying and operationalising the fleeting and complex nature of the design process. The publication goes behind the designer's daily work and gives insight into the creative process of which the designer is a part. Beyond a broad view of the designer's working methods and design parameters, Design - proces & metode offers a series of examples from recognised design companies that make it possible to get very close to the designer's reality.

  5. Inclusion of mobile phone numbers into an ongoing population health survey in New South Wales, Australia: design, methods, call outcomes, costs and sample representativeness.

    Science.gov (United States)

    Barr, Margo L; van Ritten, Jason J; Steel, David G; Thackway, Sarah V

    2012-11-22

    In Australia, telephone surveys have been the method of choice for ongoing jurisdictional population health surveys. Although it was estimated in 2011 that nearly 20% of the Australian population were mobile-only phone users, the inclusion of mobile phone numbers into these existing landline population health surveys had not occurred. This paper describes the methods used for the inclusion of mobile phone numbers into an existing ongoing landline random digit dialling (RDD) health survey in an Australian state, the New South Wales Population Health Survey (NSWPHS). This paper also compares the call outcomes, costs and representativeness of the resultant sample to those of the previous landline sample. After examining several mobile phone pilot studies conducted in Australia and possible sample designs (screening dual-frame and overlapping dual-frame), mobile phone numbers were included in the NSWPHS using an overlapping dual-frame design. Data collection was consistent, where possible, with the previous years' landline RDD phone surveys and between frames. Survey operational data for the frames were compared and combined. Demographic information from the interview data for mobile-only phone users, both, and total was compared to the landline frame using χ2 tests. Demographic information for each frame, landline and mobile-only (equivalent to a screening dual-frame design), and for the frames combined (with appropriate overlap adjustment) was compared to the NSW demographic profile from the 2011 census using χ2 tests. In the first quarter of 2012, 3395 interviews were completed, with 2171 respondents (63.9%) from the landline frame (17.6% landline only) and 1224 (36.1%) from the mobile frame (25.8% mobile only). Overall combined response, contact and cooperation rates were 33.1%, 65.1% and 72.2%, respectively. As expected from previous research, the demographic profile of the mobile-only phone respondents differed most (more who were young, male, Aboriginal
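The "overlap adjustment" in an overlapping dual-frame design prevents double-counting people reachable by both landline and mobile. A Hartley-style composite is one standard way to do this; the sketch below uses made-up domain totals (not NSWPHS figures):

```python
# Hartley-style overlapping dual-frame estimator: the landline-only and
# mobile-only domains are each covered by one frame, while the overlap
# domain is estimated by both frames and composited with weight theta.

def dual_frame_total(y_landline_only, y_overlap_from_landline,
                     y_mobile_only, y_overlap_from_mobile, theta=0.5):
    """Composite total with mixing weight 0 <= theta <= 1."""
    return (y_landline_only + y_mobile_only
            + theta * y_overlap_from_landline
            + (1 - theta) * y_overlap_from_mobile)

# Illustrative domain totals; both frames estimate the same overlap,
# so when the two overlap estimates agree the result is theta-invariant.
est = dual_frame_total(1200.0, 3000.0, 800.0, 3000.0, theta=0.5)
```

In practice the two overlap estimates differ, and theta is chosen (e.g. proportional to effective sample sizes) to minimize the variance of the composite.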

  6. Design

    Science.gov (United States)

    Buchanan, Richard; Cross, Nigel; Durling, David; Nelson, Harold; Owen, Charles; Valtonen, Anna; Boling, Elizabeth; Gibbons, Andrew; Visscher-Voerman, Irene

    2013-01-01

    Scholars representing the field of design were asked to identify what they considered to be the most exciting and imaginative work currently being done in their field, as well as how that work might change our understanding. The scholars included Richard Buchanan, Nigel Cross, David Durling, Harold Nelson, Charles Owen, and Anna Valtonen. Scholars…

  7. The effects of dominance, regular inbreeding and sampling design on Q(ST), an estimator of population differentiation for quantitative traits.

    Science.gov (United States)

    Goudet, Jérôme; Büchi, Lucie

    2006-02-01

    To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its determinism is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) under homogenizing selection. We compare sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene actions.
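For a neutral additive trait, Q(ST) is computed from the between-population and within-population additive variance components as σ²B / (σ²B + 2σ²W). A minimal sketch of the comparison, with illustrative variance components and an assumed marker-based F(ST):

```python
def q_st(var_between, var_within_additive):
    """Q_ST for an additive trait: sigma_B^2 / (sigma_B^2 + 2*sigma_W^2)."""
    return var_between / (var_between + 2.0 * var_within_additive)

# Illustrative components: between-population variance 0.4,
# within-population additive variance 0.3.
qst = q_st(0.4, 0.3)      # 0.4 / (0.4 + 0.6) = 0.4
fst_markers = 0.15        # assumed marker-based estimate

# Q_ST clearly above the marker F_ST suggests directional selection
# for different local optima; Q_ST below it suggests homogenizing
# selection; equality is the neutral expectation.
directional = qst > fst_markers
```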

  8. Design and construction of an adiabatic calorimeter for samples of less than 1 cm3 in the temperature range T = 15 K to T = 350 K

    International Nuclear Information System (INIS)

    Lang, Brian E.; Boerio-Goates, Juliana; Woodfield, Brian F.

    2006-01-01

    A small-scale adiabatic calorimeter has been constructed as part of a larger project to study the thermodynamics of nanomaterials and to facilitate heat capacity measurements on samples of insufficient quantity to run on our current large-scale adiabatic apparatus. This calorimeter is designed to measure the heat capacity of samples whose volume is less than 0.8 cm3 over a temperature range of T = 13 K to T = 350 K. Heat capacity results on copper, sapphire, and benzoic acid show the accuracy of the measurements to be better than ±0.4% for temperatures higher than T = 50 K. The reproducibility of these measurements is generally better than ±0.25%

  9. Establishing and evaluating bar-code technology in blood sampling system: a model based on human-centered design method.

    Science.gov (United States)

    Chou, Shin-Shang; Yan, Hsiu-Fang; Huang, Hsiu-Ya; Tseng, Kuan-Jui; Kuo, Shu-Chen

    2012-01-01

    This study used a human-centered design method to develop bar-code technology for the blood sampling process. Using multilevel analysis to gather information, the bar-code technology was constructed to identify patients, simplify the work process, and prevent medical errors. A Technology Acceptance Model questionnaire was developed to assess the effectiveness of the system, and data on patient-identification and sample errors were collected daily. The average score of the 8-item perceived ease of use scale was 25.21 (3.72), of the 9-item perceived usefulness scale 28.53 (5.00), and of the 14-item task-technology fit scale 52.24 (7.09). The rates of patient-identification errors and of samples with cancelled orders dropped to zero; however, new errors emerged after the new system was deployed, concerning the position of barcode stickers on the sample tubes. Overall, more than half of the nurses (62.5%) were willing to use the new system.

  10. Design

    DEFF Research Database (Denmark)

    Jensen, Ole B.; Pettiway, Keon

    2017-01-01

    In this chapter, Ole B. Jensen takes a situational approach to mobilities to examine how ordinary life activities are structured by technology and design. Using "staging mobilities" as a theoretical approach, Jensen considers mobilities as overlapping actions, interactions and decisions by designers. He begins with a brief description of how movement is studied within the social sciences after the "mobilities turn" versus the idea of physical movement in transport geography and engineering. He then explains how "mobilities design" was derived from connections between traffic and architecture. Jensen concludes by providing ideas for future research investigating mobilities in situ as a kind of "staging," which he notes is influenced by the "material turn" in the social sciences.

  11. The choice of ultrasound assisted extraction coupled with spectrophotometric for rapid determination of gallic acid in water samples: Central composite design for optimization of process variables.

    Science.gov (United States)

    Pooralhossini, Jaleh; Ghaedi, Mehrorang; Zanjanchi, Mohammad Ali; Asfaram, Arash

    2017-01-01

    A sensitive procedure, ultrasound-assisted dispersive nano solid-phase microextraction coupled with spectrophotometry (DNSPME-UV-Vis), was designed for the preconcentration and subsequent determination of gallic acid (GA) in water samples. The composition, morphology, purity and structure of the new sorbent were characterized by field-emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and energy-dispersive X-ray spectroscopy (EDX). The conventional parameters, viz. pH, amount of sorbent, sonication time and volume of elution solvent, were optimized by response surface methodology (RSM) with a central composite design; the best operating conditions were pH 2.0, 1.5 mg sorbent, 4.0 min sonication and 150 μL ethanol. Under these conditions the method has a linear response over the wide concentration range of 15-6000 ng/mL with a correlation coefficient of 0.9996. Good figures of merit, an LOD (S/N = 3) of 2.923 ng/mL, an LOQ (S/N = 10) of 9.744 ng/mL, and relative recoveries between 95.54 and 100.02%, show the applicability and efficiency of this method for real-sample analysis, with RSDs below 6.0%. Finally, the method was used to monitor the analyte in various real samples, including tap, river and mineral waters. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Social network recruitment for Yo Puedo: an innovative sexual health intervention in an underserved urban neighborhood—sample and design implications.

    Science.gov (United States)

    Minnis, Alexandra M; vanDommelen-Gonzalez, Evan; Luecke, Ellen; Cheng, Helen; Dow, William; Bautista-Arredondo, Sergio; Padian, Nancy S

    2015-02-01

    Most existing evidence-based sexual health interventions focus on individual-level behavior, even though there is substantial evidence that highlights the influential role of social environments in shaping adolescents' behaviors and reproductive health outcomes. We developed Yo Puedo, a combined conditional cash transfer and life skills intervention for youth to promote educational attainment, job training, and reproductive health wellness that we then evaluated for feasibility among 162 youth aged 16-21 years in a predominantly Latino community in San Francisco, CA. The intervention targeted youth's social networks and involved recruitment and randomization of small social network clusters. In this paper we describe the design of the feasibility study and report participants' baseline characteristics. Furthermore, we examined the sample and design implications of recruiting social network clusters as the unit of randomization. Baseline data provide evidence that we successfully enrolled high risk youth using a social network recruitment approach in community and school-based settings. Nearly all participants (95%) were high risk for adverse educational and reproductive health outcomes based on multiple measures of low socioeconomic status (81%) and/or reported high risk behaviors (e.g., gang affiliation, past pregnancy, recent unprotected sex, frequent substance use; 62%). We achieved variability in the study sample through heterogeneity in recruitment of the index participants, whereas the individuals within the small social networks of close friends demonstrated substantial homogeneity across sociodemographic and risk profile characteristics. Social networks recruitment was feasible and yielded a sample of high risk youth willing to enroll in a randomized study to evaluate a novel sexual health intervention.

  13. Determination of melamine in soil samples using surfactant-enhanced hollow fiber liquid phase microextraction followed by HPLC–UV using experimental design

    Directory of Open Access Journals (Sweden)

    Ali Sarafraz Yazdi

    2015-11-01

    Full Text Available Surfactant-enhanced hollow fiber liquid phase microextraction (SE-HF-LPME) was applied for the extraction of melamine in conjunction with high performance liquid chromatography with UV detection (HPLC–UV). Sodium dodecyl sulfate (SDS) was first added to the sample solution at pH 1.9 to form a hydrophobic ion-pair with protonated melamine. The protonated melamine–dodecyl sulfate ion-pair (Mel–DS) was then extracted from the aqueous phase into the organic phase immobilized in the pores and lumen of the hollow fiber. After extraction, the analyte-enriched 1-octanol was withdrawn into the syringe and injected into the HPLC. First, a one-variable-at-a-time approach was used to select the type of extraction solvent. Then, in a screening step, the other variables that may affect the extraction efficiency of the analyte were studied using a fractional factorial design. In the next step, a central composite design was applied to optimize the significant factors having positive effects on extraction efficiency. The optimum operational conditions were: sample volume, 5 mL; surfactant concentration, 1.5 mM; pH 1.9; stirring rate, 1500 rpm; and extraction time, 60 min. Under these conditions, the method was analytically evaluated. The detection limit, relative standard deviation and linear range were 0.005 μg mL⁻¹, 4.0% (3 μg mL⁻¹, n = 5) and 0.01–8 μg mL⁻¹, respectively. The performance of the procedure in the extraction of melamine from soil samples was good, with relative recoveries at different spiking levels of 95–109%.
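
    The screening step above relies on a two-level fractional factorial design, which halves the number of runs by aliasing the extra factor with the highest-order interaction of the others. A minimal sketch (hypothetical, not the authors' code) of a 2^(4-1) half-fraction with generator D = ABC:

```python
from itertools import product
from math import prod

def half_fraction(k):
    """2^(k-1) half-fraction in coded units: the k-th factor equals the
    product of the first k-1 levels (generator I = AB...K), halving runs."""
    return [list(base) + [prod(base)]
            for base in product([-1, 1], repeat=k - 1)]

runs = half_fraction(4)
print(len(runs))  # 8 runs instead of the 16 of a full 2^4 factorial
# Every pair of columns is orthogonal: their dot product over runs is zero.
cols = list(zip(*runs))
print(all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
          for i in range(4) for j in range(i + 1, 4)))  # True
```

    Orthogonality is what lets the screening step estimate each factor's main effect independently despite the reduced run count.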

  14. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem. More Intricacies: Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
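
    Several of the listed topics (SRS estimation, the Horvitz-Thompson estimator) center on design-unbiased estimation of a population total. A minimal sketch with made-up numbers, assuming simple random sampling without replacement so that every unit has inclusion probability n/N:

```python
def horvitz_thompson_total(sample_values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total: each sampled
    value is weighted by the inverse of its inclusion probability."""
    return sum(y / pi for y, pi in zip(sample_values, inclusion_probs))

# Illustrative population and one SRSWOR sample of n = 4 from N = 8,
# so every unit has inclusion probability n/N = 0.5.
population = [3, 7, 2, 9, 4, 6, 1, 8]   # true total = 40
sample = [3, 9, 4, 8]
estimate = horvitz_thompson_total(sample, [0.5] * 4)
print(estimate)  # 48.0 for this sample; unbiased on average over all samples
```

    Averaging the estimator over all possible samples of size 4 recovers the true total of 40, which is the design-unbiasedness property the book develops.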

  15. Spatial distribution, sampling precision and survey design optimisation with non-normal variables: The case of anchovy (Engraulis encrasicolus) recruitment in Spanish Mediterranean waters

    Science.gov (United States)

    Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan

    2016-02-01

    In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as species spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas, MPAs). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provides a priori unbiased estimates of the spatial structure, global abundance and precision for autocorrelated data. However, its application to non-Gaussian data introduces difficulties into the analysis and can reduce robustness or unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust, and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.
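
    The geostatistical workflow described starts from the empirical (Matheron) semivariogram, which quantifies how dissimilarity between observations grows with separation distance. A minimal, illustrative sketch (the toy transect data are assumptions, not survey data):

```python
from math import dist  # Euclidean distance (Python 3.8+)

def empirical_variogram(points, values, bin_width, n_bins):
    """Matheron's empirical semivariogram: half the mean squared
    difference between pairs of values, binned by separation distance."""
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            b = int(dist(points[i], points[j]) // bin_width)
            if b < n_bins:
                sums[b] += (values[i] - values[j]) ** 2
                counts[b] += 1
    return [s / (2 * c) if c else None for s, c in zip(sums, counts)]

# Toy transect: a smooth spatial trend gives semivariance rising with lag.
pts = [(float(i), 0.0) for i in range(10)]
vals = [0.1 * i for i in range(10)]
gamma = empirical_variogram(pts, vals, bin_width=2.0, n_bins=3)
print([round(g, 3) for g in gamma])  # → [0.005, 0.032, 0.1]
```

    In practice a variogram model fitted to these binned values drives kriging maps and the global-abundance precision estimates discussed in the abstract.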

  16. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposition of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time while preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.
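
    One common way to formalize such a global compromise between sensitivity and repeatability is a Derringer-type desirability function, in which each response is mapped onto [0, 1] and the geometric mean is maximized. A hedged sketch with made-up response values (the abstract does not state which target function the authors used):

```python
def desirability(y, low, high, maximize=True):
    """Derringer-type individual desirability: maps a response linearly
    onto [0, 1] between its worst and best values, clipped at the ends."""
    d = (y - low) / (high - low) if maximize else (high - y) / (high - low)
    return min(1.0, max(0.0, d))

def overall(ds):
    """Geometric mean of individual desirabilities (global compromise)."""
    p = 1.0
    for d in ds:
        p *= d
    return p ** (1.0 / len(ds))

# Illustrative responses for one candidate condition (invented numbers):
# total peak area (maximize) and RSD of replicates (minimize).
d_signal = desirability(8.2e6, 1.0e6, 9.0e6, maximize=True)   # 0.9
d_rsd = desirability(6.0, 3.0, 15.0, maximize=False)          # 0.75
print(round(overall([d_signal, d_rsd]), 3))                   # → 0.822
```

    Conditions are then ranked by the overall desirability, so a setting that is excellent on one response but poor on the other scores low.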

  17. A binary logistic regression model with complex sampling design of unmet need for family planning among all women aged (15-49) in Ethiopia.

    Science.gov (United States)

    Workie, Demeke Lakew; Zike, Dereje Tesfaye; Fenta, Haile Mekonnen; Mekonnen, Mulusew Admasu

    2017-09-01

    Unintended pregnancy related to unmet need is a worldwide problem that affects societies. The main objective of this study was to identify the prevalence and determinants of unmet need for family planning among women aged 15-49 in Ethiopia. Data came from round 4 of the Performance Monitoring and Accountability 2020/Ethiopia survey, conducted in April 2016 on 7494 women selected with two-stage stratified sampling. Bivariable and multivariable binary logistic regression models with a complex sampling design were fitted. The prevalence of unmet need for family planning was 16.2% in Ethiopia. Women between the ages of 15 and 24 years were 2.266 times more likely to have unmet need for family planning compared to those above 35 years. Women who were currently married were about 8 times more likely to have unmet need for family planning compared to never-married women. Women who had no under-five child were 0.125 times as likely to have unmet need for family planning compared to those with more than two children under 5. The key determinants of unmet need for family planning in Ethiopia were residence, age, marital status, education, household members, birth events and number of under-5 children. The Government of Ethiopia should therefore take immediate steps to address the causes of high unmet need for family planning among women.
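
    A binary logistic regression expresses each determinant's effect as an odds ratio, exp(coefficient); with complex survey data the log-likelihood is additionally weighted by the design weights. A minimal from-scratch sketch on toy data (the exposure, outcome and counts are invented for illustration, and the optional weights stand in for survey weights):

```python
from math import exp

def fit_logistic(x, y, weights=None, lr=1.0, iters=8000):
    """Logistic regression with one binary predictor fitted by (weighted)
    gradient ascent on the log-likelihood; returns (intercept, slope)."""
    if weights is None:
        weights = [1.0] * len(x)
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi, wi in zip(x, y, weights):
            p = 1.0 / (1.0 + exp(-(b0 + b1 * xi)))
            g0 += wi * (yi - p)
            g1 += wi * (yi - p) * xi
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1

# Toy data: exposure x (e.g. currently married = 1) vs unmet need y.
x = [1] * 40 + [0] * 40
y = [1] * 20 + [0] * 20 + [1] * 5 + [0] * 35
b0, b1 = fit_logistic(x, y)
print(round(exp(b1), 2))  # ≈ 7.0, the cross-tab odds ratio (20*35)/(20*5)
```

    With a single binary predictor the fitted odds ratio reproduces the cross-tabulation odds ratio exactly, which is a useful sanity check before adding further covariates and design weights.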

  18. Influence of Rack Design and Disease Prevalence on Detection of Rodent Pathogens in Exhaust Debris Samples from Individually Ventilated Caging Systems.

    Science.gov (United States)

    Bauer, Beth A; Besch-Williford, Cynthia; Livingston, Robert S; Crim, Marcus J; Riley, Lela K; Myles, Matthew H

    2016-11-01

    Sampling of bedding debris within the exhaust systems of ventilated racks may be a mechanism for detecting murine pathogens in colony animals. This study examined the effectiveness of detecting pathogens by PCR analysis of exhaust debris samples collected from ventilated racks of 2 different designs: one with unfiltered air flow from within the cage to the air-exhaust pathway, and the other with a filter between the cage and the air-exhaust pathway. For 12 wk, racks were populated with either 1 or 5 cages of mice (3 mice per cage) infected with one of the following pathogens: mouse norovirus (MNV), mouse parvovirus (MPV), mouse hepatitis virus (MHV), Helicobacter spp., Pasteurella pneumotropica, pinworms, Entamoeba muris, Tritrichomonas muris, and fur mites. Pathogen shedding by infected mice was monitored throughout the study. In the filter-containing rack, PCR testing of exhaust plenums yielded negative results for all pathogens at all time points of the study. In the rack with open air flow, pathogens detected by PCR analysis of exhaust debris included MHV, Helicobacter spp., P. pneumotropica, pinworms, enteric protozoa, and fur mites; these pathogens were detected in racks housing either 1 or 5 cages of infected mice. Neither MPV nor MNV was detected in exhaust debris, even though prolonged viral shedding was confirmed. These results demonstrate that testing exhaust debris from racks with unfiltered air flow detected MHV, enteric bacteria and parasites, and fur mites. However, this method failed to reliably detect MNV or MPV infection of colony animals.

  19. THE FMOS-COSMOS SURVEY OF STAR-FORMING GALAXIES AT z ∼ 1.6. III. SURVEY DESIGN, PERFORMANCE, AND SAMPLE CHARACTERISTICS

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, J. D.; Sugiyama, N. [Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Kashiwa, 277-8583 (Japan); Kashino, D. [Division of Particle and Astrophysical Science, Graduate School of Science, Nagoya University, Nagoya, 464-8602 (Japan); Sanders, D.; Zahid, J.; Kewley, L. J.; Chu, J.; Hasinger, G. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI, 96822 (United States); Kartaltepe, J. S. [National Optical Astronomy Observatory, 950 N. Cherry Ave., Tucson, AZ, 85719 (United States); Arimoto, N. [Subaru Telescope, 650 North A’ohoku Place, Hilo, Hawaii, 96720 (United States); Renzini, A. [Instituto Nazionale de Astrofisica, Osservatorio Astronomico di Padova, vicolo dell’Osservatorio 5, I-35122, Padova, Italy, EU (Italy); Rodighiero, G.; Baronchelli, I. [Dipartimento di Fisica e Astronomia, Universita di Padova, vicolo Osservatorio, 3, I-35122, Padova (Italy); Daddi, E.; Juneau, S. [Laboratoire AIM, CEA/DSM-CNRS-Universite Paris Diderot, Irfu/Service d’Astrophysique, CEA Saclay (France); Nagao, T. [Graduate School of Science and Engineering, Ehime University, 2-5 Bunkyo-cho, Matsuyama 790-8577 (Japan); Lilly, S. J.; Carollo, C. M. [Institute of Astronomy, ETH Zürich, CH-8093, Zürich (Switzerland); Capak, P. [Spitzer Science Center, California Institute of Technology, Pasadena, CA 91125 (United States); Ilbert, O., E-mail: john.silverman@ipmu.jp [Aix Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille) UMR 7326, F-13388, Marseille (France); and others

    2015-09-15

    We present a spectroscopic survey of galaxies in the COSMOS field using the Fiber Multi-object Spectrograph (FMOS), a near-infrared instrument on the Subaru Telescope. Our survey is specifically designed to detect the Hα emission line that falls within the H-band (1.6–1.8 μm) spectroscopic window from star-forming galaxies with 1.4 < z < 1.7 and M{sub stellar} ≳ 10{sup 10} M{sub ⊙}. With the high multiplex capability of FMOS, it is now feasible to construct samples of over 1000 galaxies having spectroscopic redshifts at epochs that were previously challenging. The high-resolution mode (R ∼ 2600) effectively separates Hα and [N ii]λ6585, thus enabling studies of the gas-phase metallicity and photoionization state of the interstellar medium. The primary aim of our program is to establish how star formation depends on stellar mass and environment, both recognized as drivers of galaxy evolution at lower redshifts. In addition to the main galaxy sample, our target selection places priority on those detected in the far-infrared by Herschel/PACS to assess the level of obscured star formation and investigate, in detail, outliers from the star formation rate (SFR)—stellar mass relation. Galaxies with Hα detections are followed up with FMOS observations at shorter wavelengths using the J-long (1.11–1.35 μm) grating to detect Hβ and [O iii]λ5008 which provides an assessment of the extinction required to measure SFRs not hampered by dust, and an indication of embedded active galactic nuclei. With 460 redshifts measured from 1153 spectra, we assess the performance of the instrument with respect to achieving our goals, discuss inherent biases in the sample, and detail the emission-line properties. Our higher-level data products, including catalogs and spectra, are available to the community.

  20. Design and Proof-of-Concept Use of a Circular PMMA Platform with 16-Well Sample Capacity for Microwave-Accelerated Bioassays.

    Science.gov (United States)

    Mohammed, Muzaffer; Aslan, Kadir

    2013-01-01

    We demonstrate the design and the proof-of-concept use of a new, circular poly(methyl methacrylate)-based bioassay platform (PMMA platform), which affords rapid processing of 16 samples at once. The circular PMMA platform (5 cm in diameter) was coated with a silver nanoparticle film to accelerate the bioassay steps by microwave heating. A model colorimetric bioassay for biotinylated albumin (using streptavidin-labeled horseradish peroxidase) was performed on the PMMA platform coated with and without silver nanoparticles (a control experiment), both at room temperature and using microwave heating. The simulated temperature profile of the PMMA platform during microwave heating was shown to be comparable to the real-time temperature profile during actual microwave heating of the constructed PMMA platform in a commercial microwave oven. The model colorimetric bioassay for biotinylated albumin was successfully completed in ~2 min (total assay time) using microwave heating, as compared to 90 min at room temperature (total assay time), a ~45-fold decrease in assay time. Our PMMA platform design afforded a significant reduction in non-specific interactions and low background signal as compared to non-silvered PMMA surfaces when employed in a microwave-accelerated bioassay carried out in a conventional microwave cavity.

  1. Dopamine efflux in the nucleus accumbens during within-session extinction, outcome-dependent, and habit-based instrumental responding for food reward.

    Science.gov (United States)

    Ahn, Soyon; Phillips, Anthony G

    2007-04-01

    Dopamine (DA) activity in the nucleus accumbens (NAc) is related to the general motivational effects of rewarding stimuli. Dickinson and colleagues have shown that initial acquisition of instrumental responding reflects action-outcome relationships based on instrumental incentive learning, which establishes the value of an outcome. Given that the sensitivity of responding to outcome devaluation is not affected by NAc lesions, it is unlikely that incentive learning during the action-outcome phase is mediated by DA activity in the NAc. DA efflux in the NAc after limited and extended training was compared on the assumption that comparable changes would be observed during both action-outcome- and habit-based phases of instrumental responding for food. This study also tested the hypothesis that an increase in NAc DA activity is correlated with instrumental responding during extinction maintained by a conditioned stimulus paired with food. Rats were trained to lever press for food (random-interval 30 s schedule). On the 5th and 16th day of training, microdialysis samples were collected from the NAc or mediodorsal striatum (a control site for generalized activity) during instrumental responding in extinction and then for food reward, and analyzed for DA content using high-performance liquid chromatography. An increase in DA efflux in the NAc accompanied responding for food pellets on both days 5 and 16, with the magnitude of increase significantly enhanced on day 16. DA efflux was also significantly elevated during responding in extinction only on day 16. These results support a role for NAc DA activity in Pavlovian, but not instrumental, incentive learning.

  2. Air Emissions Sampling from Vacuum Thermal Desorption for Mixed Wastes Designated with a Combustion Treatment Code for the Energy Solutions LLC Mixed Waste Facility

    International Nuclear Information System (INIS)

    Christensen, M.E.; Willoughby, O.H.

    2009-01-01

    EnergySolutions LLC is permitted by the State of Utah to treat organically-contaminated Mixed Waste by a vacuum thermal desorption (VTD) treatment process at its Clive, Utah treatment, storage, and disposal facility. The VTD process separates organics from organically-contaminated waste by heating the material in an inert atmosphere, and captures them as concentrated liquid by condensation. The majority of the radioactive materials present in the feed to the VTD are retained with the treated solids; the recovered aqueous and organic condensates are not radioactive. This is generally true when the radioactivity is present in solid form such as inorganic salts, metals or metallic oxides. The exception is when volatile radioactive materials are present such as radon gas, tritium, or carbon-14 organic chemicals. Volatile radioactive materials are a small fraction of the feed material. On August 28, 2006, EnergySolutions submitted a request to the USEPA for a variance to the Land Disposal Restrictions (LDR) standards for wastes designated with the combustion treatment code (CMBST). The final rule granting a site specific treatment variance was effective June 13, 2008. This variance is an alternative treatment standard to treatment by CMBST required for these wastes under USEPA's rules. The State of Utah provides oversight of the VTD processing operations. A demonstration test for treating CMBST-coded wastes was performed on April 29, 2008 through May 1, 2008. Three separate process cycles were conducted during this test. Both solid/liquid samples and emission samples were collected each day during the demonstration test. To adequately challenge the unit, feed material was spiked with trichloroethylene, o-cresol, dibenzofuran, and coal tar. Emission testing was conducted by EnergySolutions' emissions test contractor and sampling for radioactivity within the off-gas was completed by EnergySolutions' Health Physics department. This report discusses the emission testing

  3. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of ¹³⁷Cs and other fallout radionuclides, such as excess ²¹⁰Pb and ⁷Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of ¹³⁷Cs in erosion studies has been widely developed, while the application of fallout ²¹⁰Pb and ⁷Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of ¹³⁷Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using ¹³⁷Cs. However, fallout ²¹⁰Pb and ⁷Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories.
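
    The contrast in required depth resolution follows directly from an exponential depth profile C(z) = C₀·exp(-z/h₀): the shallower the relaxation depth h₀, the larger the share of the total inventory held in the top centimetre. A small illustration (the relaxation depths are assumed round numbers, not values from the text):

```python
from math import exp

def inventory_fraction(depth_cm, relaxation_cm):
    """Fraction of the total inventory lying above a given depth, assuming
    activity decreases exponentially with depth: C(z) = C0 * exp(-z/h0)."""
    return 1.0 - exp(-depth_cm / relaxation_cm)

# Assumed relaxation depths: 7Be is held very near the surface,
# while 137Cs penetrates deeper into the profile.
for nuclide, h0 in [("7Be", 0.5), ("137Cs", 3.0)]:
    print(nuclide, round(inventory_fraction(1.0, h0), 2))
# 7Be: ~86% of the inventory sits in the top 1 cm; 137Cs: only ~28%
```

    This is why 1 cm increments can be adequate for caesium while beryllium demands much finer slicing near the surface.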

  4. Design and building of a homemade sample changer for automation of the irradiation in neutron activation analysis technique; Diseno y construccion de un prototipo de intercambiador para la automatizacion de la tecnica de analisis por activacion neutronica

    Energy Technology Data Exchange (ETDEWEB)

    Gago, Javier; Hernandez, Yuri; Baltuano, Oscar; Bedregal, Patricia [Direccion de Investigacion y Desarrollo, Instituto Peruano de Energia Nuclear, Lima (Peru); Lopez, Yon [Universidad Nacional de Ingenieria, Lima (Peru); Urquizo, Rafael [Universidad Tecnologica del Peru, Lima (Peru)

    2014-07-01

    Because the RP-10 research reactor operates during weekends, it was necessary to design and build a sample changer for irradiation as part of the automation of the neutron activation analysis technique. The device consists of an aluminum turntable disk which can accommodate 19 polyethylene capsules containing samples to be sent, via the pneumatic transfer system, from the laboratory to the irradiation position. The system is operated from a control switchboard that sends and returns capsules after a preset, variable time and by two different paths, allowing the determination of short-, medium- and long-lived radionuclides. Another mechanism, called an 'exchange valve', was designed for changing travel paths (pipelines), allowing the irradiated samples to be stored for a longer time in the reactor hall. The system design has allowed complete automation of this technique, enabling the irradiation of samples without the presence of an analyst. The design, construction and operation of the device are described in this article. (authors).

  5. Spectroelectrochemical Sensing Based on Multimode Selectivity simultaneously Achievable in a Single Device. 11. Design and Evaluation of a Small Portable Sensor for the Determination of Ferrocyanide in Hanford Waste Samples

    International Nuclear Information System (INIS)

    Stegemiller, Michael L.; Heineman, William R.; Seliskar, Carl J.; Ridgway, Thomas H.; Bryan, Samuel A.; Hubler, Timothy L.; Sell, Richard L.

    2003-01-01


  6. Orthogonal Design Study on Factors Affecting the Determination of Common Odors in Water Samples by Headspace Solid-Phase Microextraction Coupled to GC/MS

    Directory of Open Access Journals (Sweden)

    Shifu Peng

    2013-01-01

    Full Text Available Geosmin and 2-MIB are responsible for the majority of earthy and musty events related to drinking water. These two odorants have extremely low odor threshold concentrations, at the ng L⁻¹ level in water, so a simple and sensitive method for the analysis of such trace levels was developed by headspace solid-phase microextraction coupled to gas chromatography/mass spectrometry. In this study, an orthogonal experiment design, L32 (4⁹), was applied to arrange and optimize the experimental conditions. The optimum conditions were the following: temperatures of extraction and desorption, 65°C and 260°C, respectively; times of extraction and desorption, 30 min and 5 min, respectively; ionic strength, 25% (w/v); rotation speed, 600 rpm; solution pH, 5.0. Under the optimized conditions, limits of detection (S/N = 3) were 0.04 and 0.13 ng L⁻¹ for geosmin and 2-MIB, respectively. Calculated calibration curves gave high levels of linearity, with a correlation coefficient value of 0.9999. Finally, the proposed method was applied to water samples which had previously been analyzed and confirmed to be free of the target analytes, as well as to environmental water samples. The RSDs were 2.75%~3.80% and 4.35%~7.6% for geosmin and 2-MIB, respectively, and the recoveries were 91%~107% and 91%~104% for geosmin and 2-MIB, respectively.
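
    An orthogonal array such as the L32 (4⁹) used here guarantees that, for every pair of factors, each combination of levels occurs equally often, so main effects can be estimated independently. The L32 array is too large to reproduce, so this sketch checks the property on the standard L9 (3⁴) array instead:

```python
from collections import Counter
from itertools import combinations

# Standard Taguchi L9(3^4) orthogonal array, levels coded 0/1/2
# (shown instead of the study's much larger L32(4^9) for brevity).
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def is_orthogonal(array):
    """True if, in every pair of columns, each ordered level pair
    occurs equally often (here 3 levels, so 9 pairs per column pair)."""
    cols = list(zip(*array))
    for c1, c2 in combinations(cols, 2):
        counts = Counter(zip(c1, c2))
        if len(counts) != 9 or len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L9))  # True
```

    Nine runs thus balance four three-level factors; a full factorial would need 3⁴ = 81 runs.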

  7. Use of constrained mixture design for optimization of method for determination of zinc and manganese in tea leaves employing slurry sampling

    Energy Technology Data Exchange (ETDEWEB)

    Almeida Bezerra, Marcos, E-mail: mbezerra47@yahoo.com.br [Universidade Estadual do Sudoeste da Bahia, Laboratorio de Quimica Analitica, 45200-190, Jequie, Bahia (Brazil); Teixeira Castro, Jacira [Universidade Federal do Reconcavo da Bahia, Centro de Ciencias Exatas e Tecnologicas, 44380-000, Cruz das Almas, Bahia (Brazil); Coelho Macedo, Reinaldo; Goncalves da Silva, Douglas [Universidade Estadual do Sudoeste da Bahia, Laboratorio de Quimica Analitica, 45200-190, Jequie, Bahia (Brazil)

    2010-06-18

    A slurry suspension sampling technique has been developed for manganese and zinc determination in tea leaves using flame atomic absorption spectrometry (FAAS). The proportions of the liquid phase of the slurries, composed of HCl, HNO₃ and Triton X-100 solutions, were optimized by applying a constrained mixture design. The optimized conditions were 200 mg of sample ground in a tungsten carbide ball mill (particle size < 100 μm), dilution in a liquid phase composed of 2.0 mol L⁻¹ nitric acid, 2.0 mol L⁻¹ hydrochloric acid and 2.5% Triton X-100 solutions (in the proportions of 50%, 12% and 38%, respectively), a sonication time of 10 min and a final slurry volume of 50.0 mL. This method allowed the determination of manganese and zinc by FAAS, with detection limits of 0.46 and 0.66 μg g⁻¹, respectively. The precisions, expressed as relative standard deviation (RSD), are 6.9 and 5.5% (n = 10) for manganese and zinc concentrations of 20 and 40 μg g⁻¹, respectively. The accuracy of the method was confirmed by analysis of certified apple leaves (NIST 1515) and spinach leaves (NIST 1570a). The proposed method was applied to the determination of manganese and zinc in tea leaves used for the preparation of infusions. The obtained concentrations varied between 42 and 118 μg g⁻¹ for manganese and between 18.6 and 90 μg g⁻¹ for zinc. The results were compared with those obtained by an acid digestion procedure followed by FAAS determination; there was no significant difference between the results of the two methods based on a paired t-test (at the 95% confidence level).
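
    A constrained mixture design differs from an ordinary factorial in that the component proportions must sum to 100%, so candidate blends are taken from a simplex lattice rather than a hypercube. A minimal sketch (the 25% step size is an illustrative choice, not the design actually used):

```python
def _compositions(k, m):
    """All ways to split m units among k components (integer counts)."""
    if k == 1:
        return [[m]]
    return [[first] + rest
            for first in range(m + 1)
            for rest in _compositions(k - 1, m - first)]

def simplex_lattice(k, m):
    """{k, m} simplex-lattice design: every blend of k components whose
    proportions are multiples of 1/m and sum exactly to 1."""
    return [[c / m for c in pt] for pt in _compositions(k, m)]

# Three liquid-phase components as in the abstract (HNO3, HCl, Triton
# X-100), enumerating candidate blends in 25% steps:
design = simplex_lattice(3, 4)
print(len(design))  # 15 candidate mixtures
print(all(abs(sum(pt) - 1.0) < 1e-9 for pt in design))  # True
```

    Additional constraints (e.g. lower bounds on each component) are then applied by filtering or re-scaling this candidate set before fitting the mixture model.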

  8. Ultrasonic assisted dispersive solid-phase microextraction of Eriochrome Cyanine R from water sample on ultrasonically synthesized lead (II) dioxide nanoparticles loaded on activated carbon: Experimental design methodology.

    Science.gov (United States)

    Bahrani, Sonia; Ghaedi, Mehrorang; Mansoorkhani, Mohammad Javad Khoshnood; Asfaram, Arash; Bazrafshan, Ali Akbar; Purkait, Mihir Kumar

    2017-01-01

    The present research focuses on designing an ultrasound-assisted dispersive solid-phase microextraction (UA-DSPME) procedure for the preconcentration and determination of Eriochrome Cyanine R (ECR) in aqueous solutions using lead (II) dioxide nanoparticles loaded on activated carbon (PbO-NPs-AC). This material was fully characterized by XRD and SEM. The influences of pH, amount of sorbent, type and volume of eluent, and sonication time on the response were investigated and optimized by central composite design (CCD) combined with response surface methodology using STATISTICA. Among different solvents, dimethyl sulfoxide (DMSO) was selected as an efficient eluent; its combination with the present nanoparticles and the application of ultrasound waves enhanced mass transfer. The predicted maximum extraction (100%) under the optimum conditions of the process variables, viz. pH 4.5, 200 μL eluent, 2.5 mg adsorbent and 5 min sonication, was close to the experimental value (99.50%). Under optimum conditions, experimental features such as a wide linear range (5-2000 ng mL⁻¹ ECR), a low detection limit (0.43 ng mL⁻¹, S/N = 3:1) and good repeatability and reproducibility (relative standard deviation < 5.5%, n = 12) indicate the successful applicability of the present method for real sample analysis. Investigation of accuracy by spiking known concentrations of ECR over 200-600 ng mL⁻¹ gave mean recoveries from 94.85% to 101.42% under optimal conditions. The procedure was also applied to the preconcentration and subsequent determination of ECR in tap and waste waters. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Preconcentration of valsartan by dispersive liquid-liquid microextraction based on solidification of floating organic drop and its determination in urine sample: Central composite design.

    Science.gov (United States)

    Pebdani, Arezou Amiri; Shabani, Ali Mohammad Haji; Dadfarnia, Shayesteh; Talebianpoor, Mohammad Sharif; Khodadoust, Saeid

    2016-05-01

    In this work, a fast, easy, and efficient dispersive liquid-liquid microextraction method based on solidification of a floating organic drop, followed by high-performance liquid chromatography with UV detection, was developed for the separation/preconcentration and determination of the drug valsartan. Experimental design was applied to optimize the variables affecting the extraction efficiency of valsartan from urine samples (volumes of the extracting and dispersing solvents, ionic strength, and pH). The optimized values were 250.0 μL ethanol, 65.0 μL 1-dodecanol, 4.0% w/v NaCl, pH 3.8, 1.0 min extraction time, and 4.0 min centrifugation at 4000 rpm. A linear response (r(2) = 0.997) was obtained over the range 0.013-10.0 μg mL(-1), with a limit of detection of 4.0 ng mL(-1) and relative standard deviations of less than 5.0% (n = 6). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Road Safety Data, Collection, Transfer and Analysis DaCoTa. Workpackage 6, Driver Behaviour Monitoring through Naturalistic Driving: Deliverable 6.2: Part B: Sampling techniques and naturalistic driving study design.

    NARCIS (Netherlands)

    Commandeur, J.J.F.

    2015-01-01

    In this document we provide an overview of sampling and estimation methods that can be used to obtain population values of risk exposure data and safety performance indicators based on naturalistic driving study designs. More specifically, we discuss how to determine the optimal sample size required

  11. Bloodstream infections, antibiotic resistance and the practice of blood culture sampling in Germany: study design of a Thuringia-wide prospective population-based study (AlertsNet).

    Science.gov (United States)

    Karch, André; Schmitz, Roland P; Rißner, Florian; Castell, Stefanie; Töpel, Sandra; Jakob, Matthias; Brunkhorst, Frank M; Mikolajczyk, Rafael T

    2015-12-15

    Bloodstream infections are a major cause of death worldwide; blood culture (BC) sampling remains the most important tool for their diagnosis. Current data suggest that BC rates in German hospitals are considerably lower than recommended; this points to shortfalls in the application of microbiological analyses. Since early and appropriate BC diagnostics are associated with reduced case fatality rates and a shorter duration of antimicrobial therapy, a multicomponent study for the improvement of BC diagnostics was developed. An electronic BC registry established for the German Federal state of Thuringia is the structural basis of this study. The registry includes individual patient data (microbiological results and clinical data) and institutional information for all clinically relevant positive BCs at the participating centres. First, classic result quality indicators for bloodstream infections (eg, sepsis rates) will be studied using Poisson regression models (adjusted for institutional characteristics) in order to derive relative ranks for feedback to clinical institutions. Second, a target value will be established for the process indicator BC rate. On the basis of this target value, recommendations will be made for a given combination of institutional characteristics as a reference for future use in quality control. An interventional study aiming at the improvement of BC rates will be conducted thereafter. On the basis of the results of a survey in the participating institutions, a targeted educational intervention will be developed. The success of the educational intervention will be measured by changes in the process indicator and the result indicators over time using a pre-post design. Ethics approval was obtained from the Ethics committee of the University Hospital Jena and from the Ethics committee of the State Chamber of Physicians of Thuringia. Findings of AlertsNet will be disseminated through public media releases and publications in peer

  12. Linking morphodynamic response with sediment mass balance on the Colorado River in Marble Canyon: issues of scale, geomorphic setting, and sampling design

    Science.gov (United States)

    Grams, Paul E.; Topping, David J.; Schmidt, John C.; Hazel, Joseph E.; Kaplinski, Matt

    2013-01-01

    Measurements of morphologic change are often used to infer sediment mass balance. Such measurements may, however, result in gross errors when morphologic changes over short reaches are extrapolated to predict changes in sediment mass balance for long river segments. This issue is investigated by examination of morphologic change and sediment influx and efflux for a 100 km segment of the Colorado River in Grand Canyon, Arizona. For each of four monitoring intervals within a 7 year study period, the direction of sand-storage response within short morphologic monitoring reaches was consistent with the flux-based sand mass balance. Both budgeting methods indicate that sand storage was stable or increased during the 7 year period. Extrapolation of the morphologic measurements outside the monitoring reaches does not, however, provide a reasonable estimate of the magnitude of sand-storage change for the 100 km study area. Extrapolation results in large errors, because there is large local variation in site behavior driven by interactions between the flow and local bed topography. During the same flow regime and reach-average sediment supply, some locations accumulate sand while others evacuate sand. The interaction of local hydraulics with local channel geometry exerts more control on local morphodynamic response than sand supply over an encompassing river segment. Changes in the upstream supply of sand modify bed responses but typically do not completely offset the effect of local hydraulics. Thus, accurate sediment budgets for long river segments inferred from reach-scale morphologic measurements must incorporate the effect of local hydraulics in a sampling design or avoid extrapolation altogether.

  13. Evaluation of the iPLEX(®) Sample ID Plus Panel designed for the Sequenom MassARRAY(®) system. A SNP typing assay developed for human identification and sample tracking based on the SNPforID panel

    DEFF Research Database (Denmark)

    Johansen, P; Andersen, J D; Børsting, Claus

    2013-01-01

    Sequenom launched the first commercial SNP typing kit for human identification, named the iPLEX(®) Sample ID Plus Panel. The kit amplifies 47 of the 52 SNPs in the SNPforID panel, amelogenin and two Y-chromosome SNPs in one multiplex PCR. The SNPs were analyzed by single base extension (SBE) and Matrix Assisted Laser Desorption/Ionization-Time of Flight Mass Spectrometry (MALDI-TOF MS). In this study, we evaluated the accuracy and sensitivity of the iPLEX(®) Sample ID Plus Panel by comparing its typing results with those obtained with our ISO 17025 accredited ... based on the peak height and the signal to noise data exported from the TYPER 4.0 software. With the forensic analysis parameters, all inconsistencies were eliminated in reactions with ≥10 ng DNA. However, the average call rate decreased to 69.9%. The iPLEX(®) Sample ID Plus Panel was also tested on 10 degraded samples ...

  14. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variation, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  15. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
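    The Horvitz–Thompson estimator mentioned above weights each sampled value by the inverse of its inclusion probability, which makes the estimated total design-unbiased whenever those probabilities are known and positive. A minimal sketch with toy numbers (not taken from the paper):

```python
def horvitz_thompson_total(sample_values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total:
    sum of y_i / pi_i over the sampled units."""
    return sum(y / pi for y, pi in zip(sample_values, inclusion_probs))

# Toy check: simple random sampling of n = 2 from N = 4
# gives every unit inclusion probability pi_i = 2/4 = 0.5.
estimate = horvitz_thompson_total([10.0, 30.0], [0.5, 0.5])
print(estimate)  # 80.0
```

    In graph sampling, the work lies in deriving the inclusion probabilities pi_i induced by a T-stage snowball or other network design; the estimator itself is unchanged.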

  16. Observing the continental-scale carbon balance: assessment of sampling complementarity and redundancy in a terrestrial assimilation system by means of quantitative network design

    OpenAIRE

    Kaminski, T.; Rayner, P. J.; Vossbeck, M.; Scholze, M.; Koffi, E.

    2012-01-01

    This paper investigates the relationship between the heterogeneity of the terrestrial carbon cycle and the optimal design of observing networks to constrain it. We combine the methods of quantitative network design and carbon-cycle data assimilation to a hierarchy of increasingly heterogeneous descriptions of the European terrestrial biosphere as indicated by increasing diversity of plant functional types. We employ three types of observat...

  17. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
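    For context, the "basic form" referred to above can be written down exactly for a Bernoulli bandit, where each arm's posterior over its success probability is a Beta distribution; ensemble sampling approximates this posterior-sampling step when exact posteriors are intractable. A sketch of the exact-posterior special case (arm probabilities and horizon are illustrative):

```python
import random

def thompson_bernoulli(true_probs, horizon, seed=0):
    """Thompson sampling for a Bernoulli bandit with Beta(1,1) priors.
    Returns how many times each arm was pulled."""
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k   # posterior successes + 1
    beta = [1] * k    # posterior failures + 1
    pulls = [0] * k
    for _ in range(horizon):
        # Sample one success probability per arm from its posterior,
        # then play the arm whose sampled value is largest.
        draws = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=draws.__getitem__)
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bernoulli([0.2, 0.5, 0.8], horizon=2000)
```

    Over 2000 rounds the best arm (0.8) accumulates the vast majority of pulls; ensemble sampling would replace the `betavariate` draw with a model sampled from a maintained ensemble.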

  18. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
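    As a small illustration of the kind of calculation the book covers: the normal-approximation sample size per arm for comparing two proportions is n = (z_{1-alpha/2} + z_{power})^2 [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2. A sketch (the proportions below are illustrative, not taken from the book):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided z-test comparing two
    independent proportions (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_power = NormalDist().inv_cdf(power)          # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

n = n_per_group(0.30, 0.15)
print(n)  # 118 per group
```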

  19. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  20. An empirical analysis of the precision of estimating the numbers of neurons and glia in human neocortex using a fractionator-design with sub-sampling

    DEFF Research Database (Denmark)

    Lyck, L.; Santamaria, I.D.; Pakkenberg, B.

    2009-01-01

    Improving histomorphometric analysis of the human neocortex by combining stereological cell counting with immunohistochemical visualisation of specific neuronal and glial cell populations is a methodological challenge. To enable standardized immunohistochemical staining, the amount of brain tissue ... at each level of sampling was determined empirically. The methodology was tested in three brains, analysing the contribution of the multi-step sampling procedure to the precision of the estimated total numbers of immunohistochemically defined NeuN-expressing (NeuN(+)) neurons and CD45(+) microglia ...

  1. Ecotoxicology statistical sampling

    International Nuclear Information System (INIS)

    Saona, G.

    2012-01-01

    This presentation introduces general concepts in ecotoxicological sampling design, such as the distribution of organic or inorganic contaminants, microbiological contamination, and the determination of sampling positions for ecotoxicological bioassays within an ecosystem.

  2. Principal component analysis applied to Fourier transform infrared spectroscopy for the design of calibration sets for glycerol prediction models in wine and for the detection and classification of outlier samples.

    Science.gov (United States)

    Nieuwoudt, Helene H; Prior, Bernard A; Pretorius, Isak S; Manley, Marena; Bauer, Florian F

    2004-06-16

    Principal component analysis (PCA) was used to identify the main sources of variation in the Fourier transform infrared (FT-IR) spectra of 329 wines of various styles. The FT-IR spectra were gathered using a specialized WineScan instrument. The main sources of variation included the reducing sugar and alcohol content of the samples, as well as the stage of fermentation and the maturation period of the wines. The implications of the variation between the different wine styles for the design of calibration models with accurate predictive abilities were investigated using glycerol calibration in wine as a model system. PCA enabled the identification and interpretation of samples that were poorly predicted by the calibration models, as well as the detection of individual samples in the sample set that had atypical spectra (i.e., outlier samples). The Soft Independent Modeling of Class Analogy (SIMCA) approach was used to establish a model for the classification of the outlier samples. A glycerol calibration for wine was developed (reducing sugar content 8% v/v) with satisfactory predictive ability (SEP = 0.40 g/L). The RPD value (ratio of the standard deviation of the data to the standard error of prediction) was 5.6, indicating that the calibration is suitable for quantification purposes. A calibration for glycerol in special late harvest and noble late harvest wines (RS 31-147 g/L, alcohol > 11.6% v/v) with a prediction error SECV = 0.65 g/L, was also established. This study yielded an analytical strategy that combined the careful design of calibration sets with measures that facilitated the early detection and interpretation of poorly predicted samples and outlier samples in a sample set. The strategy provided a powerful means of quality control, which is necessary for the generation of accurate prediction data and therefore for the successful implementation of FT-IR in the routine analytical laboratory.
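    The role PCA plays above, projecting each spectrum onto the directions of largest variance so that atypical (outlier) samples stand out, can be sketched with a pure-Python power iteration. This is only an illustration of the score computation, not the WineScan/SIMCA pipeline used in the study:

```python
def first_principal_component(data, iters=100):
    """First PC of row-wise samples (e.g. spectra) via power iteration
    on the scatter matrix X^T X of the mean-centred data."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    x = [[row[j] - means[j] for j in range(p)] for row in data]
    v = [1.0] * p
    for _ in range(iters):
        # One multiplication by X^T X, then renormalise.
        scores = [sum(xi[j] * v[j] for j in range(p)) for xi in x]
        w = [sum(x[i][j] * scores[i] for i in range(n)) for j in range(p)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(xi[j] * v[j] for j in range(p)) for xi in x]
    return v, scores

# Four toy "spectra" lying on a line in two dimensions:
v, scores = first_principal_component([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
```

    Samples whose scores (or residuals after projection) are extreme relative to the rest of the set are the candidates flagged for inspection as outliers.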

  3. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    Science.gov (United States)

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  4. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  5. Modification of gDNA extraction from soil for PCR designed for the routine examination of soil samples contaminated with Toxocara spp. eggs.

    Science.gov (United States)

    Borecka, A; Gawor, J

    2008-06-01

    A modification of gDNA extraction was developed for the polymerase chain reaction (PCR) technique, intended for the detection and differentiation of Toxocara spp. eggs in soil or sediments. Sand samples from sandpits confirmed as being contaminated with Toxocara spp. eggs by the flotation technique were analysed by PCR. The use of proteinase K made it possible to obtain genomic DNA from the sample without needing to isolate eggs using flotation or to inactivate PCR inhibitors present in the sand. Specific primers in the PCR reaction allowed discrimination between T. canis and T. cati eggs. The modification simplified the procedure, thanks to eliminating the step of gDNA isolation from eggs, which is both laborious and difficult.

  6. An empirical analysis of the precision of estimating the numbers of neurons and glia in human neocortex using a fractionator-design with sub-sampling

    DEFF Research Database (Denmark)

    Lyck, L.; Santamaria, I.D.; Pakkenberg, B.

    2009-01-01

    Improving histomorphometric analysis of the human neocortex by combining stereological cell counting with immunohistochemical visualisation of specific neuronal and glial cell populations is a methodological challenge. To enable standardized immunohistochemical staining, the amount of brain tissue to be stained and analysed by cell counting was efficiently reduced using a fractionator protocol involving several steps of sub-sampling. Since no mathematical or statistical tools exist to predict the variance originating from repeated sampling in complex structures like the human neocortex, the variance ... The results showed that it was possible, but not straightforward, to combine immunohistochemistry and the optical fractionator for estimation of specific subpopulations of brain cells in human neocortex. (C) 2009 Elsevier B.V. All rights reserved. Publication date: 2009/9/15.

  7. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling

    OpenAIRE

    Olives, Casey; Pagano, Marcello

    2013-01-01

    Background: Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined.

  8. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that —on the whole— our sampling technique compares favourably with other methods, especially in the case of exploratory research.

  9. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello

    2013-02-01

    Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF's State of the World's Children in 1968-1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968-1989 and 2008) with minimal reductions in sensitivity and negative predictive value. LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance.
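    The classical LQAS decision rule that B-LQAS builds on can be stated concretely: sample n subjects and "accept" the lot if at least d are covered, with the Producer and Consumer risks given by binomial tails. A sketch (the n = 19, d = 13 design with 80%/50% thresholds is a commonly cited textbook choice, not a design from this paper):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def lqas_risks(n, d, p_high, p_low):
    """Risks of the rule 'accept the lot if at least d of n sampled
    subjects are covered': alpha is the chance of rejecting a truly
    high-coverage lot (Producer risk), beta the chance of accepting a
    truly low-coverage lot (Consumer risk)."""
    alpha = binom_cdf(d - 1, n, p_high)       # good lot yields < d successes
    beta = 1.0 - binom_cdf(d - 1, n, p_low)   # bad lot yields >= d successes
    return alpha, beta

alpha, beta = lqas_risks(19, 13, 0.80, 0.50)
```

    For this design both risks come out below 10%, which is the sense in which LQAS "ensures acceptable Producer and Consumer risks"; B-LQAS modifies the choice of n and d using prior information about the coverage distribution.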

  10. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, and preparation is seldom as simple as dissolving the sample in a given solvent. Dissolution alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect the extraction: a large number of components will not dissolve and are therefore eliminated by a simple filtration step. In most cases, however, sample preparation is not as simple as dissolving the component of interest. At times enrichment is necessary, that is, the component of interest is present in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated, enriched sample can be injected into the HPLC. 88 refs

  11. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    Directory of Open Access Journals (Sweden)

    Farrar Jeremy

    2011-02-01

    Background: In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods: We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results: In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions: Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of

  12. Sample size requirements for separating out the effects of combination treatments: randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis.

    Science.gov (United States)

    Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy

    2011-02-02

    In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the
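    The trial's assumption, a 30% reduction in the hazard of death detected with 80% power, maps onto a required number of deaths via Schoenfeld's approximation for the log-rank test; patient totals such as the 750 quoted then follow from the anticipated event probability over follow-up. A sketch of the events formula only, not the authors' full calculation:

```python
from math import ceil, log
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld's approximation for a two-sided log-rank test with
    1:1 allocation: D = 4 * (z_{1-alpha/2} + z_{power})^2 / (ln HR)^2."""
    z = NormalDist().inv_cdf
    return ceil(4.0 * (z(1 - alpha / 2) + z(power)) ** 2 / log(hazard_ratio) ** 2)

events = required_events(0.70)  # 30% hazard reduction -> 247 deaths
```

    Dividing the required events by the expected death probability per enrolled patient gives a patient count of the order quoted in the abstract; the factorial-design comparison then asks how many more patients are needed to resolve each drug's individual contribution.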

  13. Environmental sampling

    International Nuclear Information System (INIS)

    Puckett, J.M.

    1998-01-01

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation

  14. Quantification of physical activity using the QAPACE Questionnaire: a two stage cluster sample design survey of children and adolescents attending urban school.

    Science.gov (United States)

    Barbosa, Nicolas; Sanchez, Carlos E; Patino, Efrain; Lozano, Benigno; Thalabard, Jean C; LE Bozec, Serge; Rieu, Michel

    2016-05-01

    Quantification of physical activity as energy expenditure is important from youth onwards for the prevention of chronic non-communicable diseases in adulthood. The aim was to quantify physical activity, expressed as daily energy expenditure (DEE), in school children and adolescents aged 8-16 years, by age, gender and socioeconomic level (SEL) in Bogotá. This is a two-stage cluster sample survey. From a universe of 4700 schools and 760,000 students from the three socioeconomic levels in Bogotá (low, medium and high), a random sample of 20 schools and 1840 students (904 boys and 936 girls) was drawn. Anticipating dropout of participants and inconsistency in the questionnaire responses, the sample size was increased: 6 individuals of each gender were selected for each of the nine age groups, resulting in a total sample of 2160 individuals. Selected students filled in the QAPACE questionnaire under supervision. The data were analyzed by comparing means with a multivariate general linear model. The fixed factors were gender (boys and girls), age (8 to 16 years) and the three-level SEL (low, medium and high); height, weight and leisure time (expressed in hours/day) were assessed as independent variables, and daily energy expenditure, DEE (kJ.kg-1.day-1), as the dependent variable: during leisure time (DEE-LT), during school time (DEE-ST), during vacation time (DEE-VT), and the total mean DEE per year (DEEm-TY). Results: In boys, differences in DEE-LT and all DEE variables were significant with SEL, but the age-SEL interaction was significant only for DEE-VT; in girls, all variables were significant with SEL. Post hoc multiple comparisons using Fisher's Least Significant Difference (LSD) test were significant with age for all variables. For both genders and all SELs, girls had the higher values except in the high SEL (5-6); boys had higher values in DEE-LT, DEE-ST and DEE-VT, except for DEEm-TY in SEL (5-6). In SEL (5-6) all DEEs for both genders were highest. For SEL
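    Two-stage cluster designs like this school survey are less precise than a simple random sample of the same size; the Kish design effect, deff = 1 + (m - 1) * ICC, quantifies the loss for equal cluster sizes m. A sketch (the intraclass correlation below is an assumed illustrative value, not one reported by the study):

```python
def design_effect(m, icc):
    """Kish design effect for equal cluster sizes m and
    intraclass correlation icc: deff = 1 + (m - 1) * icc."""
    return 1 + (m - 1) * icc

def effective_sample_size(n_total, m, icc):
    """Number of independent observations the clustered sample is worth."""
    return n_total / design_effect(m, icc)

# 2160 students in 20 schools -> about 108 per school; icc assumed 0.02
deff = design_effect(108, 0.02)
n_eff = effective_sample_size(2160, 108, 0.02)
```

    Even a modest intraclass correlation inflates variances substantially at this cluster size, which is why cluster surveys report design-adjusted standard errors rather than treating the 2160 students as independent.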

  15. Chapter 11: Sample Design Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Khawaja, M. Sami [The Cadmus Group, Portland, OR (United States); Rushton, Josh [The Cadmus Group, Portland, OR (United States); Keeling, Josh [The Cadmus Group, Portland, OR (United States)

    2017-09-01

    Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.

  16. The French national survey on food consumption of children under 3 years of age - Nutri-Bébé 2013: design, methodology, population sampling and feeding practices.

    Science.gov (United States)

    Chouraqui, Jean-Pierre; Tavoularis, Gabriel; Emery, Yves; Francou, Aurée; Hébel, Pascale; Bocquet, Magali; Hankard, Régis; Turck, Dominique

    2018-02-01

    To update the data on food consumption and practices in children under 3 years of age in metropolitan France. The Nutri-Bébé 2013 cross-sectional study selected a random sample according to the quota sampling method. After giving informed consent, parents recorded their child's food consumption on three non-consecutive days framed by two face-to-face interviews, using various portion-size measurement aids for the quantitative information. One thousand one hundred and eighty-four children were enrolled. Mothers' mean age was 30·8 (sd 5·4) years; 38 % were primiparous; 89 % lived with a partner; 60 % had an occupation. Of the infants younger than 4 months, 31 % were breast-fed. One thousand and thirty-five children consumed infant formula, followed by growing-up milk in 63 % of them; solid foods were introduced at a mean age of 5·4 (sd 2·13) months. From 8 months onwards, 25 % of children consumed the same foods as their parents on a more or less regular basis; 29 % ate in front of a screen, with a daily average screen time of 43·0 (sd 40·4) min. This robust survey highlights the low prevalence and duration of breast-feeding in France and shows a modest improvement since the previous survey of 2005 in the observance of recommendations concerning other feeding practices. The frequent consumption of adult foods and the screen time are of concern.

  17. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly as arising in the mathematical geosciences. Although the book often refers to original contributions, the authors have made them accessible to (graduate) students and scientists not only from mathematics but also from the geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic and geophysical branches as well as in other scientific fields such as neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  18. Validation of protein intake assessed from weighed dietary records against protein estimated from 24 h urine samples in children, adolescents and young adults participating in the Dortmund Nutritional and Longitudinally Designed (DONALD) Study

    DEFF Research Database (Denmark)

    Bokhof, Beate; Günther, Anke L B; Berg-Beckhoff, Gabriele

    2010-01-01

    from a simultaneously collected 24 h urine sample. DESIGN: Cross-sectional analyses including 439 participants of the Dortmund Nutritional and Longitudinally Designed (DONALD) Study from four age groups (3-4, 7-8, 11-13 and 18-23 years). Mean differences, Pearson correlation coefficients (r), cross.......5 (95 % CI -18.7, -8.3) g/d at age 18-23 years. Correlation coefficients were r = 0.7 for the total study sample and ranged from r = 0.5 to 0.6 in the different age groups. Both methods classified 85 % into the same/adjacent quartile for the whole study group (83-86 % for the different age groups) and 2...

  19. Amostras complexas em inquéritos populacionais: planejamento e implicações na análise estatística dos dados Complex Sampling Design in Population Surveys: Planning and effects on statistical data analysis

    Directory of Open Access Journals (Sweden)

    Célia Landmann Szwarcwald

    2008-05-01

    health status of the population and satisfaction with healthcare from the user's point of view. Most national health surveys do not use simple random sampling, either because of budget restrictions or because of time constraints associated with data collection. In general, a combination of several probabilistic sampling methods is used to select a representative sample of the population; this is called a complex sampling design. Among the several sampling techniques, the most frequently used are simple random sampling, stratified sampling and cluster sampling. As a result of this process, the next concern is the statistical analysis of data from complex samples. This paper deals with issues related to the analysis of data obtained from surveys using complex sampling designs. It discusses the problems that arise when the statistical analysis does not incorporate the sampling design. When the design is neglected, traditional statistical analysis, based on the assumption of simple random sampling, might produce improper results not only for the mean estimates but also for the standard errors, thus compromising hypothesis testing and survey conclusions. The World Health Survey (WHS), carried out in Brazil in 2003, is used to exemplify complex sampling methods.
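    As a rough illustration of why ignoring the design matters, the sketch below compares a naive SRS-based standard error with a design-based standard error for a weighted mean under cluster sampling, and reports the implied design effect. The data are simulated and hypothetical, not from the survey described:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: 20 clusters (PSUs) of 30 respondents each;
# outcomes are correlated within clusters, and weights vary by unit.
clusters = np.repeat(np.arange(20), 30)
cluster_effect = rng.normal(0, 1, 20)[clusters]
y = 50 + cluster_effect + rng.normal(0, 3, clusters.size)
w = rng.uniform(0.5, 2.0, clusters.size)  # unequal sampling weights

# Naive SRS-based standard error (ignores weights and clustering)
se_srs = y.std(ddof=1) / np.sqrt(y.size)

# Design-based SE of the weighted mean: aggregate weighted totals per
# cluster and use between-cluster variation (linearization, PSUs
# treated as sampled with replacement).
wy = np.bincount(clusters, weights=w * y)
wsum = np.bincount(clusters, weights=w)
mean_w = wy.sum() / wsum.sum()
z = wy - mean_w * wsum            # linearized cluster residuals
k = 20                            # number of clusters
se_design = np.sqrt(k / (k - 1) * (z ** 2).sum()) / wsum.sum()

deff = (se_design / se_srs) ** 2  # design effect
print(f"naive SE={se_srs:.3f}  design SE={se_design:.3f}  deff={deff:.2f}")
```

    With within-cluster correlation present, the design effect comes out well above 1, i.e. the naive SRS formula understates the true standard error.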

  20. Positronium in the AEgIS experiment: study on its emission from nanochanneled samples and design of a new apparatus for Rydberg excitations

    CERN Document Server

    Di Noto, Lea

    This experimental thesis has been done in the framework of AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy), an experiment installed at CERN, whose primary goal is the measurement of the Earth's gravitational acceleration on anti-hydrogen. The antiatoms will be produced by the charge exchange reaction, where a cloud of Ps in Rydberg states interacts with cooled trapped antiprotons. Since the charge exchange cross section depends on Ps velocity and quantum number, the velocity distribution of Ps emitted by a positron-positronium converter as well as its excitation in Rydberg states have to be studied and optimized. In this thesis Ps cooling and emission into vacuum from nanochannelled silicon targets was studied by performing Time of Flight measurements with a dedicated apparatus conceived to receive the slow positron beam as produced at the Trento laboratory or at the NEPOMUC facility at Munich. Measurements were done by varying the positron implantation energy, the sample temperature and ...

  1. Design of a thermal neutron field by 252Cf source for measurement of 10B concentrations in the blood samples for BNCT

    International Nuclear Information System (INIS)

    Naito, H.; Sakurai, Y.; Maruhashi, A.

    2006-01-01

    10B concentrations in blood samples for BNCT have been estimated from the amount of prompt gamma rays emitted by 10B in thermal neutron fields produced by a special guide tube attached to a research reactor. A system using radioisotopes as the source of the thermal neutron field has the advantages of convenience and low cost, because it does not require the operation of a reactor or an accelerator. The validity of 252Cf as a neutron source for a 10B concentration detection system was investigated. The system is composed of a D2O moderator, a Pb reflector/filter, a C reflector, and a LiF filter. A thermal neutron field with low background gamma rays is obtained. A large 252Cf source is required to obtain a sufficient flux. (author)

  2. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that, under this scenario, simple random sampling can be given a Bayesian justification in survey sampling.

  3. A practical guide for the design and implementation of the double-spike technique for precise determination of molybdenum isotope compositions of environmental samples.

    Science.gov (United States)

    Skierszkan, E K; Amini, M; Weis, D

    2015-03-01

    The isotopic double-spike method allows for the determination of stable isotope ratios by multi-collector inductively coupled plasma-mass spectrometry (MC-ICP-MS) with accuracy and precision in the range of ∼0.02 ‰ per amu, but its adoption has been hindered by the perceived difficulties of double-spike calibration and implementation. To facilitate the implementation of the double-spike approach, an explanation of the calibration and validation of a 97Mo-100Mo double-spike protocol is given in more detail than has been presented elsewhere. The long-term external reproducibility is 0.05 ‰ for δ98/95Mo measurements of standards. δ98/95Mo values for seawater and U.S. Geological Survey (USGS) reference materials SDO-1 and BCR-2 measured in this study are 2.13 ± 0.04 ‰ (2 SD, n = 3), 0.79 ± 0.05 ‰ (2 SD, n = 11), and -0.04 ± 0.10 ‰ (2 SD, n = 3) relative to NIST-SRM-3134. The double-spike method corrects for laboratory and instrumental fractionation that is not accounted for by other mass-bias correction methods. Spike/sample molar ratios between 0.4 and 0.8 provide accurate isotope measurements; outside this range, isotope measurements are inaccurate, although corrections are possible when standards and samples are spiked at a similar ratio.
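    The delta notation and the spike/sample ratio check described above can be illustrated with a short sketch. The ratio values below are hypothetical placeholders, not measured data from the study:

```python
def delta_permil(r_sample, r_standard):
    """Delta value in per mil: relative deviation of the sample's
    isotope ratio (e.g. 98Mo/95Mo) from the reference standard."""
    return (r_sample / r_standard - 1) * 1000

# Hypothetical ratios: a sample enriched in the heavy isotope by
# ~2.13 per mil relative to the standard (the seawater value above).
r_std = 1.5000
r_smp = r_std * (1 + 2.13e-3)
print(round(delta_permil(r_smp, r_std), 2))  # → 2.13

def spike_ratio_ok(spike_mol, sample_mol, lo=0.4, hi=0.8):
    """Check the spike/sample molar ratio lies in the window the
    abstract reports as giving accurate measurements."""
    r = spike_mol / sample_mol
    return lo <= r <= hi

print(spike_ratio_ok(0.6, 1.0))  # → True
print(spike_ratio_ok(1.0, 1.0))  # → False (over-spiked)
```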

  4. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2-39.9 feet at an average rate of 0.02-0.05 gpm (77-192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016-0.026 gpm (60-100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the sampler, the sample bottle size, and the sampler's jet characteristics. The fluidic sampler should be supplied with fluid at a motive pressure of 140-150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate.

  5. A new system for the simultaneous measurement of δ13C and δ15N by IRMS and radiocarbon by AMS on gaseous samples: Design features and performances of the gas handling interface

    Energy Technology Data Exchange (ETDEWEB)

    Braione, Eugenia; Maruccio, Lucio; Quarta, Gianluca; D’Elia, Marisa; Calcagnile, Lucio, E-mail: lucio.calcagnile@unisalento.it

    2015-10-15

    We present the general design features and preliminary performances of a new system for simultaneous AMS 14C and IRMS δ13C and δ15N measurements on samples with masses in the μg range. The system consists of an elemental analyzer (EA), a gas splitting unit (GSU), an IRMS system, a gas handling interface (GHI) and a sputtering ion source capable of accepting gaseous samples. A detailed description of the system and of the control software supporting unattended operation is presented, together with the first performance tests carried out by analyzing samples with masses ranging from 8 μgC to 2.4 mgC. The performance of the system was tested in terms of the stability of the ion beam extracted from the ion source and the precision and accuracy of the results, by comparing the measured isotopic ratios with those expected for reference materials.

  6. Sample Acquisition for Materials in Planetary Exploration (SAMPLE), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ORBITEC proposes to analyze, design, and develop a device for autonomous lunar surface/subsurface sampling and processing applications. The Sample Acquisition for...

  7. Triparental families: a new genetic-epidemiological design applied to drug abuse, alcohol use disorders, and criminal behavior in a Swedish national sample.

    Science.gov (United States)

    Kendler, Kenneth S; Ohlsson, Henrik; Sundquist, Jan; Sundquist, Kristina

    2015-06-01

    The authors sought to clarify the sources of parent-offspring resemblance for drug abuse, alcohol use disorders, and criminal behavior, using a novel genetic-epidemiological design. Using national registries, the authors identified rates of drug abuse, alcohol use disorders, and criminal behavior in 41,360 Swedish individuals born between 1960 and 1990 and raised in triparental families comprising a biological mother who reared them, a "not-lived-with" biological father, and a stepfather. When each syndrome was examined individually, hazard rates for drug abuse in offspring of parents with drug abuse were highest for mothers (2.80, 95% CI=2.23-3.38), intermediate for not-lived-with fathers (2.45, 95% CI=2.14-2.79), and lowest for stepfathers (1.99, 95% CI=1.55-2.56). The same pattern was seen for alcohol use disorders (2.23, 95% CI=1.93-2.58; 1.84, 95% CI=1.69-2.00; and 1.27, 95% CI=1.12-1.43) and criminal behavior (1.55, 95% CI=1.44-1.66; 1.46, 95% CI=1.40-1.52; and 1.30, 95% CI=1.23-1.37). When all three syndromes were examined together, specificity of cross-generational transmission was highest for mothers, intermediate for not-lived-with fathers, and lowest for stepfathers. Analyses of intact families and other not-lived-with parents and stepparents showed similar cross-generation transmission for these syndromes in mothers and fathers, supporting the representativeness of results from triparental families. A major strength of the triparental design is its inclusion, within a single family, of parents who provide, to a first approximation, their offspring with genes plus rearing, genes only, and rearing only. For drug abuse, alcohol use disorders, and criminal behavior, the results of this study suggest that parent-offspring transmission involves both genetic and environmental processes, with genetic factors being somewhat more important. These results should be interpreted in the context of the strengths and limitations of national registry data.

  8. Exploring obesogenic environments: the design and development of the migrant obesogenic perception of the environment questionnaire (MOPE-Q) using a sample of Iranian migrants in Australia.

    Science.gov (United States)

    Delavari, Maryam; Sønderlund, Anders Larrabee; Mellor, David; Mohebbi, Mohammadreza; Swinburn, Boyd

    2014-06-06

    Although there are a number of studies examining the effect of migration on obesity, these studies tend to focus on the role of acculturation in this relationship. However, there are indications that the change in environment may also be an important factor. Indeed, there is a considerable lack of psychometric tools designed to assess the association between environment and migrant health behaviour. The current study aimed to assess the literature on the link between environment and health for migrants, and on the basis of this information, design and develop the Migrant Obesogenic Perception of the Environment questionnaire (MOPE-Q). The MOPE-Q is the first comprehensive measure of the impact of environmental factors on migrant health behaviour related to physical activity, food habits and body image concern, as well as weight change. Using a systematic approach, an initial pool of items for the questionnaire was developed and refined on the basis of rigorous content and face validity assessments and factor analysis. Further, reliability tests and test-retest studies were undertaken. Differences between Iranian and Australian environmental factors as they relate to obesogenic behaviour were explored using the developed measure. A total of 36 items were developed for the MOPE-Q. Principal factor analysis identified three similar factor structures of environmental factors related to obesity (categorized in terms of facilitators, barriers and pressures) for each country. The final questionnaire consisted of four distinct subscales pertaining specifically to the Australian environment and five subscales pertaining to the Iranian environment, accounting for 59% and 63%, respectively, of the total variance in obesity rates. The data suggest that the MOPE-Q is a reliable and valid self-report measure for assessing the relationship between environmental factors linked to obesity and obesogenic behaviour for this particular migrant group. The variations in environmental

  9. Design, construction, and application of XRF systems for analysis of trace elements in clinical, biological, and environmental samples. Progress report, August 1, 1975--June 30, 1976

    International Nuclear Information System (INIS)

    Laurer, G.R.; Kneip, T.J.

    1976-01-01

    A separation technique has been developed allowing preparation of fixed volumes of rbc's and plasma. Replicate sampling shows rbc and plasma specimen weights of 43 ± 1.4 mg and 36 ± 1.9 mg, eliminating the large background variation due to weight differences of whole-blood specimens. An improved specimen geometry has significantly increased the signal-to-background ratio for low-Z elements and, with the separation technique, allows quantitative determination, with good precision, of changes in the levels of Cl, K, Ca, Fe, Cu, Zn and Br, and in the intra/extracellular ratios of these elements. Pb is detectable at "normal" levels (20 μg percent), depending on hematocrit, with a one-minute count at 3-sigma confidence. Measurements of primate blood indicate a possible rise in bromine level with pregnancy in baboons. In preparation for use in a proton-accelerator room, a wheel-based rack mount has been constructed which allows transportation of the XRF system for on-site and off-site analyses.

  10. Analysis for reflection peaks of multiple-phase-shift based sampled fiber Bragg gratings and application in high channel-count filter design.

    Science.gov (United States)

    Wen, Kun Hua; Yan, Lian Shan; Pan, Wei; Luo, Bin; Zou, Xi Hua; Ye, Jia; Ma, Ya Nan

    2009-10-10

    An analytical expression for calculating the reflection-peak wavelengths (RPWs) of a uniform sampled fiber Bragg grating (SFBG) with the multiple-phase-shift (MPS) technique is derived through a Fourier transform of the index modulation. The new expression accurately depicts the RPWs, incorporating parameters such as the duty cycle and the DC index change. Its effectiveness is further confirmed by comparing the RPWs estimated from the expression with reflective spectra simulated using the piecewise-uniform method. The reflective spectrum is also optimized by introducing a Gaussian apodization function to suppress the sidelobes without any wavelength shift of the RPWs. A high-channel-count comb filter based on MPS is then proposed by cascading two or more SFBGs with different Bragg periods but the same RPWs. Notably, the RPWs of the new structured SFBG can also be accurately calculated with the expression. Furthermore, the number of spectral channels can be controlled by choosing gratings with specified differences in Bragg period.

  11. Coupling methods for multistage sampling

    OpenAIRE

    Chauvet, Guillaume

    2015-01-01

    Multistage sampling is commonly used for household surveys when there is no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces a complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random sampling without replacement at the first stage, and with an arbitrary sampling design for the further stages. We consider coupling ...
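    A minimal sketch of the two-stage scheme described (simple random sampling without replacement of primary units, then a within-PSU design, here also SRSWOR for concreteness). The frame below is invented for illustration:

```python
import random

def two_stage_sample(frame, n_psu, m_ssu, seed=0):
    """Two-stage sample: SRS without replacement of primary sampling
    units (PSUs), then SRS without replacement of secondary units
    within each selected PSU.

    frame: dict mapping PSU id -> list of secondary units
    """
    rng = random.Random(seed)
    psus = rng.sample(sorted(frame), n_psu)  # stage 1: SRSWOR of PSUs
    return {p: rng.sample(frame[p], min(m_ssu, len(frame[p])))
            for p in psus}                   # stage 2: SRSWOR within PSU

# Hypothetical frame: 5 villages with 4 households each
frame = {v: [f"{v}-hh{i}" for i in range(4)] for v in "ABCDE"}
sample = two_stage_sample(frame, n_psu=2, m_ssu=2)
print(sample)
```

    The dependence the abstract refers to is visible here: whether a household can be selected at all depends on its village being drawn at the first stage.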

  12. Design and Development of Low Cost, Simple, Rapid and Safe, Modified Field Kits for the Visual Detection and Determination of Arsenic in Drinking Water Samples

    Directory of Open Access Journals (Sweden)

    Y. Anjaneyulu

    2005-08-01

    Arsenic is naturally found in surface and ground waters, and the inorganic forms of arsenic are the most toxic. The adverse health effects of arsenic may involve the respiratory, gastrointestinal, cardiovascular, nervous, and haematopoietic systems. Arsenic contamination of drinking water is a global problem, seen most widely in Bangladesh and West Bengal on the Indian subcontinent. As there is great demand for field test kits due to the anticipated reduction of the US EPA arsenic standard from 50 ppb to 10 ppb, a field kit offering a rapid, simple and safe method for precise estimation of arsenic at 10 ppb in drinking water samples was developed. Field methods based on the mercuric-bromide stain consist of three major parts, which are carried out stepwise. The first part of the procedure is to remove the serious interference caused by hydrogen sulphide. In commercially available kits, either the sulphide is oxidized to sulphate and the excess oxidizing reagent removed prior to the hydride generation step, or the hydrogen sulphide is filtered out by passing the gas stream through a filter impregnated with lead acetate during the hydride generation step. The present method employs cupric chloride in combination with ferric chloride or Fenton's reagent for the removal of hydrogen sulphide, which is rapid, simple and more efficient. Other interferences at this step of the analysis are normally not expected for drinking water. In the second step, the generation of arsine gas involves the classical use of zinc metal and hydrochloric acid, which produce the 'nascent' hydrogen that is the actual reducing agent. Hydrochloric acid can be replaced by sulfamic acid, which is a solid and avoids the major disadvantage of having to handle a corrosive liquid in the field. The arsine gas produces a yellowish spot on the reagent paper. Depending on the arsenic content, either, Yellow – H

  13. Specified assurance level sampling procedure

    International Nuclear Information System (INIS)

    Willner, O.

    1980-11-01

    In the nuclear industry, design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance at stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans that can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. It also provides sampling plans with acceptance numbers ranging from 0 to 10, thus making available to the user a wide choice of plans, all designed to comply with a stated assurance level.
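    A minimum-sample-size attribute plan of the kind described can be derived by searching for the smallest n whose probability of acceptance, at the stated quality level, does not exceed one minus the assurance level. This sketch is a generic binomial illustration, not the SAL procedure's actual tables:

```python
import math

def min_sample_size(assurance, quality_level, accept_number=0):
    """Smallest n such that a lot with defect fraction `quality_level`
    is rejected with probability >= `assurance`, when up to
    `accept_number` defectives are allowed in the sample."""
    def p_accept(n, p, c):
        # Binomial probability of observing c or fewer defectives
        return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(c + 1))
    n = accept_number + 1
    while p_accept(n, quality_level, accept_number) > 1 - assurance:
        n += 1
    return n

# 90% assurance that the lot is no worse than 10% defective:
print(min_sample_size(0.90, 0.10, accept_number=0))  # c = 0 plan → 22
print(min_sample_size(0.90, 0.10, accept_number=1))  # c = 1 plan → 38
```

    As the abstract notes, allowing a larger acceptance number gives the user more flexibility at the cost of a larger minimum sample size.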

  14. Rationale and design of the HOME trial: A pragmatic randomized controlled trial of home-based human papillomavirus (HPV) self-sampling for increasing cervical cancer screening uptake and effectiveness in a U.S. healthcare system.

    Science.gov (United States)

    Winer, Rachel L; Tiro, Jasmin A; Miglioretti, Diana L; Thayer, Chris; Beatty, Tara; Lin, John; Gao, Hongyuan; Kimbel, Kilian; Buist, Diana S M

    2018-01-01

    Women who delay or do not attend Papanicolaou (Pap) screening are at increased risk for cervical cancer. Trials in countries with organized screening programs have demonstrated that mailing high-risk (hr) human papillomavirus (HPV) self-sampling kits to under-screened women increases participation, but U.S. data are lacking. HOME is a pragmatic randomized controlled trial set within a U.S. integrated healthcare delivery system to compare two programmatic approaches for increasing cervical cancer screening uptake and effectiveness in under-screened women (≥3.4 years since last Pap) aged 30-64 years: 1) usual care (annual patient reminders and ad hoc outreach by clinics) and 2) usual care plus mailed hrHPV self-screening kits. Over 2.5 years, eligible women were identified through electronic medical record (EMR) data and randomized 1:1 to the intervention or control arm. Women in the intervention arm were mailed kits with pre-paid envelopes to return samples to the central clinical laboratory for hrHPV testing. Results were documented in the EMR to notify women's primary care providers of appropriate follow-up. Primary outcomes are detection and treatment of cervical neoplasia. Secondary outcomes are cervical cancer screening uptake, abnormal screening results, and women's experiences and attitudes towards hrHPV self-sampling and follow-up of hrHPV-positive results (measured through surveys and interviews). The trial was designed to evaluate whether a programmatic strategy incorporating hrHPV self-sampling is effective in promoting adherence to the complete screening process (including follow-up of abnormal screening results and treatment). The objective of this report is to describe the rationale and design of this pragmatic trial. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses.

    Science.gov (United States)

    Franklin, Ashley E; Burns, Paulette; Lee, Christopher S

    2014-10-01

    In 2006, the National League for Nursing published three measures related to novice nurses' beliefs about self-confidence, scenario design, and educational practices associated with simulation. Despite the extensive use of these measures, little is known about their reliability and validity. The psychometric properties of the Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire were studied among a sample of 2200 surveys completed by novice nurses from a liberal arts university in the southern United States. Psychometric tests included item analysis, confirmatory and exploratory factor analyses in randomly-split subsamples, concordant and discordant validity, and internal consistency. All three measures have sufficient reliability and validity to be used in education research. There is room for improvement in content validity with the Student Satisfaction and Self-Confidence in Learning and Simulation Design Scale. This work provides robust evidence to ensure that judgments made about self-confidence after simulation, simulation design and educational practices are valid and reliable. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Development of a one-step RT-PCR assay for detection of pancoronaviruses (α-, β-, γ-, and δ-coronaviruses) using newly designed degenerate primers for porcine and avian fecal samples.

    Science.gov (United States)

    Hu, Hui; Jung, Kwonil; Wang, Qiuhong; Saif, Linda J; Vlasova, Anastasia N

    2018-06-01

    Coronaviruses (CoVs) are critical human and animal pathogens because of their potential to cause severe epidemics of respiratory or enteric diseases. In pigs, the newly emerged porcine deltacoronavirus (PDCoV) and re-emerged porcine epidemic diarrhea virus (PEDV) reported in the US and Asia, as well as the discovery of novel CoVs in wild bats or birds, have necessitated development of improved detection and control measures for these CoVs. Because the previous pancoronavirus (panCoV) RT-PCR established in our laboratory in 2007-2011 did not detect deltacoronaviruses (δ-CoVs) in swine fecal and serum samples, our goal was to develop a new panCoV RT-PCR assay to detect known human and animal CoVs, including δ-CoVs. In this study, we designed a new primer set to amplify a 668 bp region within the RNA-dependent RNA polymerase (RdRP) gene that encodes the most conserved protein domain of α-, β-, γ-, and δ-CoVs. We established a one-step panCoV RT-PCR assay and standardized the assay conditions. The newly established panCoV RT-PCR assay was demonstrated to have high sensitivity and specificity. Using a panel of 60 swine biological samples (feces, intestinal contents, and sera) characterized by PEDV-, PDCoV- and transmissible gastroenteritis virus-specific RT-PCR assays, we demonstrated that the sensitivity and specificity of the newly established panCoV RT-PCR assay were 100%. A further 400 avian fecal (RNA) samples were tested simultaneously for CoV by the new panCoV RT-PCR and a one-step RT-PCR assay with δ-CoV nucleocapsid-specific universal primers. Four of the 400 avian samples were positive for CoV, three of which were positive for δ-CoV by the conventional RT-PCR. PanCoV RT-PCR fragments for 3 of the 4 CoVs were sequenced. Phylogenetic analysis revealed the presence of one γ-CoV and two δ-CoVs in the sequenced samples. The newly designed panCoV RT-PCR assay should be useful for the detection of currently known CoVs in animal biological samples. Copyright © 2018
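    Degenerate primers like those described encode a family of exact sequences via IUPAC ambiguity codes. The sketch below expands such a primer and counts its degeneracy; the 8-mer used is invented for illustration, not one of the study's primers:

```python
from itertools import product

# IUPAC degenerate nucleotide codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
         "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(primer):
    """All exact sequences a degenerate primer can anneal to."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in primer.upper()))]

def degeneracy(primer):
    """Number of distinct sequences encoded by the primer."""
    n = 1
    for b in primer.upper():
        n *= len(IUPAC[b])
    return n

# Hypothetical 8-mer with two degenerate positions (R and Y):
print(degeneracy("GGTRACYT"))  # → 4
print(expand("GGTRACYT"))
```

    Keeping degeneracy low while still covering all four CoV genera is the central design trade-off for a pan-genus primer set.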

  17. Urine sample collection protocols for bioassay samples

    Energy Technology Data Exchange (ETDEWEB)

    MacLellan, J.A.; McFadden, K.M.

    1992-11-01

    In vitro radiobioassay analyses are used to measure the amount of radioactive material excreted by personnel exposed to the potential intake of radioactive material. The analytical results are then used with various metabolic models to estimate the amount of radioactive material in the subject's body and the original intake of radioactive material. Proper application of these metabolic models requires knowledge of the excretion period. It is normal practice to design the bioassay program based on a 24-hour excretion sample. The Hanford bioassay program simulates a total 24-hour urine excretion sample with urine collection periods lasting from one-half hour before retiring to one-half hour after rising on two consecutive days. Urine passed during the specified periods is collected in three 1-L bottles. Because the daily excretion volume given in Publication 23 of the International Commission on Radiological Protection (ICRP 1975, p. 354) for Reference Man is 1.4 L, it was proposed to use only two 1-L bottles as a cost-saving measure. This raised the broader question of what should be the design capacity of a 24-hour urine sample kit.

  19. Analysis of industry-generated data. Part 1: a baseline for the development of a tool to assist the milk industry in designing sampling plans for controlling aflatoxin M1 in milk.

    Science.gov (United States)

    Trevisani, Marcello; Farkas, Zsuzsa; Serraino, Andrea; Zambrini, Angelo Vittorio; Pizzamiglio, Valentina; Giacometti, Federica; Ámbrus, Arpád

    2014-01-01

    The presence of aflatoxin M1 (AFM1) in milk was assessed in Italy in the framework of a monitoring plan implemented by the milk industry over the period 2005-10. Overall, 21,969 samples were taken from tankers collecting milk from 690 dairy farms. The milk samples were representative of consignments of co-mingled milk received from multiple (two to six) farms. Systematic, biweekly sampling of consignments covered each of the 121 districts (70 in the North, 17 in the Central and 34 in the South regions of Italy). AFM1 concentration was measured using an enzyme-linked immunoassay method (validated within the range 5-100 ng/kg), whereas an HPLC method was used for quantification in samples with concentrations higher than 100 ng/kg. Process control charts using data collected in three processing plants illustrate, as an example, the seasonal variation of the contamination. The mean concentration of AFM1 was in the range 11-19 ng/kg. The 90th and 99th percentile values were 19-34 and 41-91 ng/kg, respectively, and values as high as 280 ng/kg were reached in 2008. The number of non-compliant consignments (those with an AFM1 concentration above the statutory limit of 50 ng/kg) varied between 0.3% and 3.1% per year, with peaks in September, after the maize harvest season. The variability between regions was not significant. The results show that controlling aflatoxins in feed at the farm level was inadequate; consequently, screening of raw milk prior to processing was needed. The AFM1 contamination levels observed over this long-term period provide useful data for defining the frequency of sampling.
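    The summary statistics reported (mean, 90th and 99th percentiles, and the fraction of consignments above the 50 ng/kg statutory limit) can be computed as in this sketch; the measurements below are invented, not the study's data:

```python
import statistics

def afm1_summary(values, limit=50.0):
    """Summarize AFM1 measurements (ng/kg) for a batch of consignments."""
    s = sorted(values)
    q = statistics.quantiles(s, n=100, method="inclusive")
    return {
        "mean": statistics.fmean(s),
        "p90": q[89],   # 90th percentile
        "p99": q[98],   # 99th percentile
        "pct_noncompliant": 100 * sum(v > limit for v in s) / len(s),
    }

# Hypothetical measurements for one processing plant:
data = [8, 11, 12, 14, 15, 15, 17, 19, 22, 25, 28, 31, 34, 41, 55, 62]
print(afm1_summary(data))
```

    Tracking these quantities on a control chart over time is what exposes the September peaks described in the abstract.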

  20. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
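    The core problem the book addresses — drawing independent samples from an arbitrary target density under weak assumptions — is classically solved by the accept/reject method. A minimal sketch (not taken from the book; the Beta(2,2) target, uniform proposal, and bound are illustrative assumptions):

    ```python
    import random

    def rejection_sample(target_pdf, bound, n, rng=random.Random(0)):
        """Draw n independent samples from target_pdf on [0, 1] by
        accept/reject against a uniform proposal.

        target_pdf: (possibly unnormalised) density on [0, 1]
        bound: any constant with target_pdf(x) <= bound on [0, 1]
        """
        samples = []
        while len(samples) < n:
            x = rng.random()                 # proposal: Uniform(0, 1)
            u = rng.random()                 # acceptance coin
            if u * bound <= target_pdf(x):   # accept with prob pdf(x)/bound
                samples.append(x)
        return samples

    # Example: Beta(2, 2) density 6x(1-x), maximised at x = 0.5 where it is 1.5.
    beta22 = lambda x: 6.0 * x * (1.0 - x)
    draws = rejection_sample(beta22, bound=1.5, n=10000)
    print(sum(draws) / len(draws))  # should be close to the Beta(2,2) mean, 0.5
    ```

    The only assumptions are a pointwise-evaluable density and a finite bound, which matches the "weakest possible constraints" theme of the book.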

  1. Use of Doehlert and constrained mixture designs in the development of a photo-oxidation procedure using UV radiation/H2O2 for decomposition of landfill leachate samples and determination of metals by flame atomic absorption spectrometry

    Directory of Open Access Journals (Sweden)

    Marcos A. Bezerra

    2015-03-01

Full Text Available This work proposes the use of photo-oxidative degradation with UV radiation/H2O2 as a sample treatment for the determination of Fe, Zn, Mn, Ni and Co in municipal solid waste landfill leachate by flame atomic absorption spectrometry (FAAS). Three variables (pH, irradiation time and buffer concentration) were optimized using a Doehlert design, and the proportions of the mixture components submitted to UV radiation (leachate sample, buffer solution and H2O2 30%, v/v) were optimized using a constrained mixture design. Under the established experimental conditions, the procedure affords limits of detection of 0.075, 0.025, 0.010, 0.075 and 0.041 µg mL⁻¹, and precision, expressed as relative standard deviation (%RSD) at 0.5 µg mL⁻¹, of 3.6, 1.8, 1.3, 3.3 and 1.7% for Fe, Mn, Zn, Ni and Co, respectively. Recovery tests were carried out to evaluate the accuracy of the procedure; recoveries were between 92 and 106% for the studied metals. The procedure was applied to the analysis of landfill leachate collected in Jequié, a city in the southwestern region of the State of Bahia, Brazil. The results were compared with those obtained by acid digestion; there was no significant difference between the two methods based on a paired t-test at the 95% confidence level.

  2. A population-based nested case control study on recurrent pneumonias in children with severe generalized cerebral palsy: ethical considerations of the design and representativeness of the study sample

    Directory of Open Access Journals (Sweden)

    Benninga Marc A

    2005-07-01

Full Text Available Abstract Background In children with severe generalized cerebral palsy, pneumonias are a major health issue. Malnutrition, dysphagia, gastro-oesophageal reflux, impaired respiratory function and constipation are hypothesized risk factors. Still, no data are available on the relative contribution of these possible risk factors in the described population. This paper describes the initiation of a study in 194 children with severe generalized cerebral palsy on the prevalence and impact of these hypothesized risk factors for recurrent pneumonias. Methods/Design A nested case-control design with 18 months of follow-up was chosen. Dysphagia, respiratory function and constipation will be assessed at baseline, and malnutrition and gastro-oesophageal reflux at the end of the follow-up. The study population consists of a representative population sample of children with severe generalized cerebral palsy. Inclusion was done through care centres in a predefined geographical area and not through hospitals. All measurements will be done on-site, which sets high demands on all measurements. If these demands could not be met with "gold standard" methods, other methods were chosen. Although the inclusion period was prolonged, the desired sample size of 300 children was not met. With a consent rate of 33%, nearly 10% of all eligible children in the Netherlands were included (n = 194). The study population is subtly different from the non-participants with regard to severity of dysphagia and prevalence rates of pneumonias and gastro-oesophageal reflux. Discussion Ethical issues complicated the study design. Assessment of malnutrition and gastro-oesophageal reflux at baseline was considered unethical, since these conditions can be easily treated. Therefore, we postponed these diagnostics until the end of the follow-up. In order to include a representative sample, all eligible children in a predefined geographical area had to be contacted. To increase the consent rate, on

  3. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives all elements making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques, such as the implementation of too-narrow cutters travelling too fast through the stream to be sampled.

  4. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
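    The two-sample Monte Carlo design described above can be sketched with the standard library alone. The group means, common dispersion parameter, sample sizes and replication count below are hypothetical stand-ins for the paper's pre-specified settings; the estimator shown is the raw difference in sample means, which is what linear regression with a group indicator recovers:

    ```python
    import random

    def simulate(n0=25, n1=25, mu0=0.4, mu1=0.5, phi=10.0, reps=2000, seed=1):
        """Monte Carlo for a two-sample design with beta-distributed responses.

        Each group is Beta(mu*phi, (1-mu)*phi), so the group means are mu0 and
        mu1 and phi is a common dispersion parameter. The estimator is the
        difference in sample means. Returns (bias, variance) of the estimator
        over `reps` replications.
        """
        rng = random.Random(seed)
        true_diff = mu1 - mu0
        estimates = []
        for _ in range(reps):
            y0 = [rng.betavariate(mu0 * phi, (1 - mu0) * phi) for _ in range(n0)]
            y1 = [rng.betavariate(mu1 * phi, (1 - mu1) * phi) for _ in range(n1)]
            estimates.append(sum(y1) / n1 - sum(y0) / n0)
        mean_est = sum(estimates) / reps
        var_est = sum((e - mean_est) ** 2 for e in estimates) / (reps - 1)
        return mean_est - true_diff, var_est

    bias, var = simulate()
    print(f"bias={bias:.4f}, variance={var:.5f}")
    ```

    With a common dispersion across groups the difference-in-means estimator is unbiased, consistent with the abstract's finding that all models behave well in the equal-dispersion scenario.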

  5. Xeml Lab: a tool that supports the design of experiments at a graphical interface and generates computer-readable metadata files, which capture information about genotypes, growth conditions, environmental perturbations and sampling strategy.

    Science.gov (United States)

    Hannemann, Jan; Poorter, Hendrik; Usadel, Björn; Bläsing, Oliver E; Finck, Alex; Tardieu, Francois; Atkin, Owen K; Pons, Thijs; Stitt, Mark; Gibon, Yves

    2009-09-01

    Data mining depends on the ability to access machine-readable metadata that describe genotypes, environmental conditions, and sampling times and strategy. This article presents Xeml Lab. The Xeml Interactive Designer provides an interactive graphical interface at which complex experiments can be designed, and concomitantly generates machine-readable metadata files. It uses a new eXtensible Mark-up Language (XML)-derived dialect termed XEML. Xeml Lab includes a new ontology for environmental conditions, called Xeml Environment Ontology. However, to provide versatility, it is designed to be generic and also accepts other commonly used ontology formats, including OBO and OWL. A review summarizing important environmental conditions that need to be controlled, monitored and captured as metadata is posted in a Wiki (http://www.codeplex.com/XeO) to promote community discussion. The usefulness of Xeml Lab is illustrated by two meta-analyses of a large set of experiments that were performed with Arabidopsis thaliana during 5 years. The first reveals sources of noise that affect measurements of metabolite levels and enzyme activities. The second shows that Arabidopsis maintains remarkably stable levels of sugars and amino acids across a wide range of photoperiod treatments, and that adjustment of starch turnover and the leaf protein content contribute to this metabolic homeostasis.

  6. Operational air sampling report

    International Nuclear Information System (INIS)

    Lyons, C.L.

    1994-03-01

    Nevada Test Site vertical shaft and tunnel events generate beta/gamma fission products. The REECo air sampling program is designed to measure these radionuclides at various facilities supporting these events. The current testing moratorium and closure of the Decontamination Facility has decreased the scope of the program significantly. Of the 118 air samples collected in the only active tunnel complex, only one showed any airborne fission products. Tritiated water vapor concentrations were very similar to previously reported levels. The 206 air samples collected at the Area-6 decontamination bays and laundry were again well below any Derived Air Concentration calculation standard. Laboratory analyses of these samples were negative for any airborne fission products

  7. A statistical design of experiments for optimizing the MALDI-TOF-MS sample preparation of polymers. An application in the assessment of the thermo-mechanical degradation mechanisms of poly (ethylene terephthalate)

    International Nuclear Information System (INIS)

    Badia, J.D.; Stroemberg, E.; Ribes-Greus, A.; Karlsson, S.

    2011-01-01

The sample preparation procedure for MALDI-TOF MS of polymers is addressed in this study by the application of a statistical Design of Experiments (DoE). Industrial poly(ethylene terephthalate) (PET) was chosen as the model polymer. Different experimental settings (levels) for the matrices, analyte/matrix proportions and concentrations of the cationization agent were considered. The quality parameters used for the analysis were signal-to-noise ratio and resolution. A closer inspection of the statistical results provided not only the best combination of factors for the MALDI sample preparation, but also a better understanding of the influence of the different factors, individually or in combination, on the signal. The application of DoE showed that the best combination of factors and levels was the following: matrix, dithranol; analyte/matrix/cationization agent proportion, 1/15/1 (V/V/V); and cationization agent concentration, 2 g L⁻¹. In a second part, multiple processing by means of successive injection cycles was used to simulate the effects of thermo-mechanical degradation on the oligomeric distribution of PET under mechanical recycling. MALDI-TOF MS showed that thermo-mechanical degradation primarily affected the initially predominant cyclic species. Several degradation mechanisms were proposed, notably intramolecular transesterification and hydrolysis. The ether links of the glycol unit in PET were shown to act as potential reaction sites, driving the main degradation reactions.

  8. Application of carbon nanotubes modified with a Keggin polyoxometalate as a new sorbent for the hollow-fiber micro-solid-phase extraction of trace naproxen in hair samples with fluorescence spectrophotometry using factorial experimental design.

    Science.gov (United States)

    Naddaf, Ezzat; Ebrahimi, Mahmoud; Es'haghi, Zarrin; Bamoharram, Fatemeh Farrash

    2015-07-01

A sensitive technique to determine naproxen in hair samples was developed using hollow-fiber micro-solid-phase extraction combined with fluorescence spectrophotometry. The incorporation of multi-walled carbon nanotubes modified with a Keggin polyoxometalate into a silica matrix prepared by the sol-gel method is reported. In this research, the Keggin carbon nanotube/silica composite was used in the pores and lumen of a hollow fiber as the hollow-fiber micro-solid-phase extraction device. The device was used for the microextraction of the analyte from hair and water samples under the optimized conditions. An orthogonal array experimental design with an OA24 (4⁶) matrix was employed to optimize the conditions. The effects of six factors influencing the extraction efficiency were investigated: pH, salt, volumes of the donor and desorption phases, and extraction and desorption times. The effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance was employed to estimate the main significant factors and their contributions to the extraction. The calibration curve displayed linearity over the range 0.2-10 ng/mL, with detection limits of 0.072 and 0.08 ng/mL for hair and aqueous samples, respectively. The relative recoveries in the hair and aqueous matrices ranged from 95 to 103%. The relative standard deviation for fiber-to-fiber repeatability was 3.9%. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Isotherms and kinetic study of ultrasound-assisted adsorption of malachite green and Pb2+ ions from aqueous samples by copper sulfide nanorods loaded on activated carbon: Experimental design optimization.

    Science.gov (United States)

    Sharifpour, Ebrahim; Khafri, Hossein Zare; Ghaedi, Mehrorang; Asfaram, Arash; Jannesar, Ramin

    2018-01-01

Copper sulfide nanorods loaded on activated carbon (CuS-NRs-AC) were synthesized and used for simultaneous ultrasound-assisted adsorption of malachite green (MG) and Pb²⁺ ions from aqueous solution. Following characterization of CuS-NRs-AC by SEM, EDX, TEM and XRD, the effects of pH (2.0-10), amount of adsorbent (0.003-0.011 g), MG concentration (5-25 mg L⁻¹), Pb²⁺ concentration (3-15 mg L⁻¹) and sonication time (1.5-7.5 min), and their interactions, on the responses were investigated by central composite design (CCD) and response surface methodology. According to the desirability function in Design Expert, optimum removal (99.4 ± 1.0% for MG and 68.3 ± 1.8% for Pb²⁺) was obtained at pH 6.0, 0.009 g CuS-NRs-AC, 6.0 min of sonication, and 15 and 6 mg L⁻¹ for MG and Pb²⁺ ions, respectively. The high determination coefficient (R² > 0.995), Pred-R² (> 0.920) and Adj-R² (> 0.985) all indicate good agreement between the experimental and modelled values. The adsorption kinetics follow the pseudo-second-order model and the adsorption isotherm follows the Langmuir model, with maximum adsorption capacities of 145.98 and 47.892 mg g⁻¹ for MG and Pb²⁺ ions, respectively. Owing to the short contact time, this adsorbent is a good choice for the simultaneous removal of large amounts of both MG and Pb²⁺ ions from wastewater samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Radioactive air sampling methods

    CERN Document Server

    Maiello, Mark L

    2010-01-01

Although the field of radioactive air sampling has matured and evolved over decades, it has lacked a single resource that assimilates technical and background information on its many facets. Edited by experts and with contributions from top practitioners and researchers, Radioactive Air Sampling Methods provides authoritative guidance on measuring airborne radioactivity from industrial, research, and nuclear power operations, as well as naturally occurring radioactivity in the environment. Designed for industrial hygienists, air quality experts, and health physicists, the book delves into the applied research advancing and transforming practice with improvements to measurement equipment, human dose modeling of inhaled radioactivity, and radiation safety regulations. To present a wide picture of the field, it covers the international and national standards that guide the quality of air sampling measurements and equipment. It discusses emergency response issues, including radioactive fallout and the assets used ...

  11. Comparison between dispersive solid-phase and dispersive liquid-liquid microextraction combined with spectrophotometric determination of malachite green in water samples based on ultrasound-assisted and preconcentration under multi-variable experimental design optimization.

    Science.gov (United States)

    Alipanahpour Dil, Ebrahim; Ghaedi, Mehrorang; Asfaram, Arash; Zare, Fahimeh; Mehrabi, Fatemeh; Sadeghfar, Fardin

    2017-11-01

An ultrasound-assisted dispersive solid-phase microextraction (USA-DSPME) and an ultrasound-assisted dispersive liquid-liquid microextraction (USA-DLLME) were developed as ultra-preconcentration techniques for the determination of malachite green (MG) in water samples. A central composite design, interpreted with analysis of variance and a desirability function, guided the search for the best operational conditions and showed that the response depends significantly on the volumes of extraction, eluent and disperser solvent, pH, adsorbent mass and ultrasonication time. Optimum conditions for USA-DSPME were 1 mg CNTs/Zn:ZnO@Ni₂P-NCs, 4 min sonication time and 130 μL of eluent at pH 6.0, while for USA-DLLME they were pH 6.0, 4 min sonication time, and 130 μL, 650 μL and 10 mL of extraction solvent (CHCl₃), disperser solvent (ethanol) and sample, respectively. Under these optimal conditions, the enrichment factors for USA-DSPME and USA-DLLME were 88.89 and 147.30, respectively. The methods have a linear response in the range 20.0-4000.0 ng mL⁻¹ with correlation coefficients (r) between 0.9980 and 0.9995, while their low detection limits (1.386-2.348 ng mL⁻¹) and good relative standard deviations (1.1-2.8%, n = 10) make them well suited to monitoring the analyte in various media. The relative recoveries of the MG dye from water samples spiked at 500 ng mL⁻¹ were between 94.50% and 98.86%. The proposed methods were successfully applied to the analysis of the MG dye in water samples, and satisfactory results were obtained. Copyright © 2017. Published by Elsevier B.V.

  12. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  13. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
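    The permanent-random-number mechanism behind positive coordination is easy to sketch for classical (random-size) Poisson sampling; the fixed-size CP variant studied in the article is more involved. The population size and inclusion probabilities below are illustrative:

    ```python
    import random

    def poisson_sample(prn, pik):
        """Poisson sampling with permanent random numbers (PRNs): unit i is
        selected iff its permanent uniform number is below its inclusion
        probability pik[i]."""
        return {i for i, u in prn.items() if u < pik[i]}

    rng = random.Random(42)
    units = range(1000)
    prn = {i: rng.random() for i in units}   # one PRN per unit, reused forever

    pik_a = {i: 0.10 for i in units}         # first survey, pi = 0.10
    pik_b = {i: 0.12 for i in units}         # second survey, pi = 0.12

    sample_a = poisson_sample(prn, pik_a)
    sample_b = poisson_sample(prn, pik_b)

    # Positive coordination: every unit selected in A (u < 0.10) is also
    # selected in B (u < 0.12), so the overlap is the maximum possible.
    print(sample_a <= sample_b, len(sample_a), len(sample_b))
    ```

    The random sample sizes visible here (roughly 100 and 120) are exactly the drawback that motivates the fixed-size Conditional Poisson modification.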

  14. Comparison of two microextraction methods based on solidification of floating organic droplet for the determination of multiclass analytes in river water samples by liquid chromatography tandem mass spectrometry using Central Composite Design.

    Science.gov (United States)

    Asati, Ankita; Satyanarayana, G N V; Patel, Devendra K

    2017-09-01

Two low-density organic solvent based liquid-liquid microextraction methods, namely vortex-assisted liquid-liquid microextraction based on solidification of a floating organic droplet (VALLME-SFO) and dispersive liquid-liquid microextraction based on solidification of a floating organic droplet (DLLME-SFO), were compared for the determination of multiclass analytes (pesticides, plasticizers, pharmaceuticals and personal care products) in river water samples by liquid chromatography tandem mass spectrometry (LC-MS/MS). The effect of various experimental parameters on the efficiency of the two methods, and their optimum values, were studied with the aid of Central Composite Design (CCD) and Response Surface Methodology (RSM). Under optimal conditions, VALLME-SFO was validated in terms of limit of detection (0.011-0.219 ng mL⁻¹), limit of quantification (0.035-0.723 ng mL⁻¹), dynamic linearity range (0.050-0.500 ng mL⁻¹), coefficient of determination (R² = 0.992-0.999), enrichment factor (40-56) and extraction recovery (80-106%). For DLLME-SFO, the corresponding values under optimal conditions were 0.025-0.377 ng mL⁻¹, 0.083-1.256 ng mL⁻¹, 0.100-1.000 ng mL⁻¹, R² = 0.990-0.999, 35-49 and 69-98%, respectively. Interday and intraday precisions, calculated as percent relative standard deviation (%RSD), were ≤ 15% for both methods. Both methods were successfully applied to the determination of multiclass analytes in river water samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Simultaneous spectrophotometric determination of crystal violet and malachite green in water samples using partial least squares regression and central composite design after preconcentration by dispersive solid-phase extraction.

    Science.gov (United States)

    Razi-Asrami, Mahboobeh; Ghasemi, Jahan B; Amiri, Nayereh; Sadeghi, Seyed Jamal

    2017-04-01

In this paper, a simple, fast, and inexpensive method is introduced for the simultaneous spectrophotometric determination of crystal violet (CV) and malachite green (MG) in aquatic samples using partial least squares (PLS) regression as a multivariate calibration technique after preconcentration on graphene oxide (GO). The method is based on the sorption and desorption of the analytes onto GO and direct determination by ultraviolet-visible spectrophotometry. GO was synthesized according to the Hummers method and characterized by FT-IR, SEM, and XRD. The factors affecting the extraction efficiency, namely pH, extraction time, and the amount of adsorbent, were optimized using central composite design; the optimum values were 6, 15 min, and 12 mg, respectively. The maximum capacities of GO for the adsorption of CV and MG were 63.17 and 77.02 mg g⁻¹, respectively. The preconcentration factors and extraction recoveries were 19.6 and 98% for CV and 20 and 100% for MG, respectively. The LODs were 0.009 and 0.015 μg mL⁻¹, and the linear dynamic ranges 0.03-0.3 and 0.05-0.5 μg mL⁻¹, for CV and MG, respectively. The intra-day and inter-day relative standard deviations were 1.99 and 0.58% for CV and 1.69 and 3.13% for MG at a concentration level of 50 ng mL⁻¹. Finally, the proposed DSPE/PLS method was successfully applied to the simultaneous determination of trace amounts of CV and MG in real water samples.

  16. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice of a population that is accessible and available. Some of the non-probabilit...
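    The distinction between the two families can be made concrete in a few lines. The sampling frame and sample size below are hypothetical:

    ```python
    import random

    # Hypothetical sampling frame of 500 study subjects.
    frame = [f"subject-{i:03d}" for i in range(500)]

    rng = random.Random(2024)

    # Probability sampling: simple random sampling without replacement.
    # Every subject has the same known inclusion probability, n/N = 50/500.
    srs = rng.sample(frame, k=50)

    # Non-probability sampling: a convenience sample of whoever is easiest
    # to reach, caricatured here as the first 50 subjects on the list.
    convenience = frame[:50]

    print(len(srs), len(set(srs)))
    ```

    Only the first selection supports design-based inference, because only there is each unit's inclusion probability known in advance.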

  17. Bottom sample taker

    Energy Technology Data Exchange (ETDEWEB)

    Garbarenko, O V; Slonimskiy, L D

    1982-01-01

    In order to improve the quality of the samples taken during offshore exploration from benthic sediments, the proposed design of the sample taker has a device which makes it possible to regulate the depth of submersion of the core lifter. For this purpose the upper part of the core lifter has an inner delimiting ring, and within the core lifter there is a piston suspended on a cable. The position of the piston in relation to the core lifter is previously assigned depending on the compactness of the benthic sediments and is fixed by tension of the cable which is held by a clamp in the cover of the core taker housing. When lowered to the bottom, the core taker is released, and under the influence of hydrostatic pressure of sea water, it enters the sediments. The magnitude of penetration is limited by the distance between the piston and the stopping ring. The piston also guarantees better preservation of the sample when the instrument is lifted to the surface.

  18. Device for sampling HTGR recycle fuel particles

    International Nuclear Information System (INIS)

    Suchomel, R.R.; Lackey, W.J.

    1977-03-01

Devices for sampling High-Temperature Gas-Cooled Reactor fuel microspheres were evaluated. Analyses of samples obtained with each of two specially designed passive samplers were compared with data generated by more common techniques. A ten-stage two-way sampler was found to produce a representative sample with a constant batch-to-sample ratio.

  19. Development of SYVAC sampling techniques

    International Nuclear Information System (INIS)

    Prust, J.O.; Dalrymple, G.J.

    1985-04-01

This report describes the requirements of a sampling scheme for use with the SYVAC radiological assessment model. The constraints on the number of samples that may be taken are considered. The conclusions from earlier studies using the deterministic generator sampling scheme are summarised. The method of Importance Sampling and a High Dose algorithm, which are designed to preferentially sample the high-dose region of the parameter space, are reviewed in the light of experience gained from earlier studies and the requirements of site assessment and sensitivity analyses. In addition, the use of an alternative numerical integration method for estimating risk is discussed. It is recommended that the method of Importance Sampling be developed and tested for use with SYVAC. An alternative numerical integration method is not recommended for investigation at this stage but should be the subject of future work. (author)
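    The idea behind importance sampling here — preferentially drawing from a rare, high-consequence region and reweighting by the likelihood ratio — can be illustrated on a toy problem. The standard-normal tail probability below merely stands in for a dose exceedance; nothing in this sketch is SYVAC-specific:

    ```python
    import math
    import random

    def tail_prob_plain(n, rng):
        """Plain Monte Carlo estimate of P(X > 4) for X ~ N(0, 1)."""
        return sum(rng.gauss(0, 1) > 4 for _ in range(n)) / n

    def tail_prob_importance(n, rng, shift=4.0):
        """Importance sampling: draw from the shifted proposal N(shift, 1),
        which concentrates samples in the rare region, and reweight each hit
        by the likelihood ratio phi(x) / phi(x - shift) = exp(shift^2/2 - shift*x)."""
        total = 0.0
        for _ in range(n):
            x = rng.gauss(shift, 1)
            if x > 4:
                total += math.exp(shift * shift / 2 - shift * x)
        return total / n

    rng = random.Random(7)
    print(tail_prob_plain(10000, rng))       # usually 0.0: the event is too rare
    est = tail_prob_importance(10000, rng)
    print(est)                               # true value is about 3.17e-5
    ```

    The reweighting keeps the estimator unbiased while the biased proposal drives its variance down by orders of magnitude, which is exactly the appeal of a "High Dose" sampler in a risk integral.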

  20. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  1. Design Methodology - Design Synthesis

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup

    2003-01-01

Design Methodology is part of our practice and our knowledge about designing, and it has been strongly supported by the establishment and work of a design research community. The aim of this article is to broaden the reader's view of designing and Design Methodology. This is done by sketching the development of Design Methodology through time and sketching some important approaches and methods. The development is mainly driven by changing industrial conditions, by the growth of IT support for designing, but also by the growth of insight into designing created by design researchers. Design Methodology shall be seen as our understanding of how to design; it is an early (emerging in the late 1960s) and original articulation of teachable and learnable methodics. The insight is based upon two sources: the nature of the designed artefacts and the nature of human designing. Today...

  2. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance and dampening of zitterbewegung...
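The variance inflation described above can be illustrated with a small Monte Carlo sketch. This is a hypothetical one-dimensional setup (a smooth test integrand on [0, 1], Gaussian placement errors), not one of the paper's three error models:

```python
import math
import random

random.seed(0)

def estimate_mean(f, period=0.05, jitter_sd=0.0):
    """Estimate the mean of f over [0, 1] by systematic sampling with
    grid spacing `period`, optionally perturbing each sample location
    by Gaussian placement noise."""
    x = random.uniform(0.0, period)          # random start phase
    xs = []
    while x < 1.0:
        x_obs = x + random.gauss(0.0, jitter_sd)
        xs.append(min(max(x_obs, 0.0), 1.0)) # clamp to the domain
        x += period
    return sum(f(x) for x in xs) / len(xs)

def variance_of_estimator(f, jitter_sd, reps=5000):
    ests = [estimate_mean(f, jitter_sd=jitter_sd) for _ in range(reps)]
    m = sum(ests) / reps
    return sum((e - m) ** 2 for e in ests) / reps

f = lambda x: math.sin(2 * math.pi * x) ** 2  # smooth test integrand, mean 0.5
v_exact  = variance_of_estimator(f, jitter_sd=0.0)
v_jitter = variance_of_estimator(f, jitter_sd=0.02)
print(f"variance, exact grid:    {v_exact:.2e}")
print(f"variance, jittered grid: {v_jitter:.2e}")
```

For this integrand the exactly periodic grid averages out the oscillation almost perfectly, so even small positional errors inflate the estimator variance by many orders of magnitude.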

  3. Possibilities for automating coal sampling

    Energy Technology Data Exchange (ETDEWEB)

    Helekal, J; Vankova, J

    1987-11-01

    Outlines sampling equipment in use (AVR-, AVP-, AVN- and AVK-series samplers and RDK- and RDH-series separators produced by the Coal Research Institute, Ostrava; extractors, crushers and separators produced by ORGREZ). The Ostrava equipment covers bituminous coal needs while ORGREZ provides equipment for energy coal requirements. This equipment is designed to handle coal up to 200 mm in size at a throughput of up to 1200 t/h. Automation of sampling equipment is foreseen.

  4. Designing Material Materialising Design

    DEFF Research Database (Denmark)

    Nicholas, Paul

    2013-01-01

    Designing Material Materialising Design documents five projects developed at the Centre for Information Technology and Architecture (CITA) at the Royal Danish Academy of Fine Arts, School of Architecture. These projects explore the idea that new designed materials might require new design methods. Focusing on fibre-reinforced composites, this book sustains an exploration into the design and making of elastically tailored architectural structures that rely on the use of computational design to predict sensitive interdependencies between geometry and behaviour. Developing novel concepts...

  5. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2011-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue-to-digital converter.

  6. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2010-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue-to-digital converter.

  7. Robotic system for process sampling

    International Nuclear Information System (INIS)

    Dyches, G.M.

    1985-01-01

    A three-axis cartesian geometry robot for process sampling was developed at the Savannah River Laboratory (SRL) and implemented in one of the site radioisotope separations facilities. Use of the robot reduces personnel radiation exposure and contamination potential by routinely handling sample containers under operator control in a low-level radiation area. This robot represents the initial phase of a longer term development program to use robotics for further sample automation. Preliminary design of a second generation robot with additional capabilities is also described. 8 figs

  8. NID Copper Sample Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kouzes, Richard T.; Zhu, Zihua

    2011-02-01

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making these isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.

  9. NID Copper Sample Analysis

    International Nuclear Information System (INIS)

    Kouzes, Richard T.; Zhu, Zihua

    2011-01-01

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making these isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.

  10. Tritium sampling and measurement

    International Nuclear Information System (INIS)

    Wood, M.J.; McElroy, R.G.; Surette, R.A.; Brown, R.M.

    1993-01-01

    Current methods for sampling and measuring tritium are described. Although the basic techniques have not changed significantly over the last 10 y, there have been several notable improvements in tritium measurement instrumentation. The design and quality of commercial ion-chamber-based and gas-flow-proportional-counter-based tritium monitors for tritium-in-air have improved, an indirect result of fusion-related research in the 1980s. For tritium-in-water analysis, commercial low-level liquid scintillation spectrometers capable of detecting tritium-in-water concentrations as low as 0.65 Bq L-1 for counting times of 500 min are available. The most sensitive method for tritium-in-water analysis is still 3He mass spectrometry. Concentrations as low as 0.35 mBq L-1 can be detected with current equipment. Passive tritium-oxide-in-air samplers are now being used for workplace monitoring and even in some environmental sampling applications. The reliability, convenience, and low cost of passive tritium-oxide-in-air samplers make them attractive options for many monitoring applications. Airflow proportional counters currently under development look promising for measuring tritium-in-air in the presence of high gamma and/or noble gas backgrounds. However, these detectors are currently limited by their poor performance in humidities over 30%. 133 refs

  11. NID Copper Sample Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kouzes, Richard T.; Zhu, Zihua

    2011-09-12

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology, possibly one under development at Nonlinear Ion Dynamics (NID), will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL in January 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are reported here. A second sample of isotopically separated copper was provided by NID to PNNL in August 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are also reported here.

  12. An automatic sample changer for use on the SNS

    International Nuclear Information System (INIS)

    1982-10-01

    A design for an Automatic Room Temperature Sample Changer suitable for any completely contained sample, gas, liquid or solid, has been produced. Samples can be moved in any sequence into the neutron beam. The design was evolved primarily to suit SNS instruments. A prototype was constructed specifically for the LAD spectrometer having ten sample positions. The accuracy of the sample positioning was determined. (author)

  13. A gigahertz sampling transient analyzer

    International Nuclear Information System (INIS)

    Andrieu, F.; Balanca, C.; Bernet, J.M.; Lejeune, G.

    1975-01-01

    The AN 800 equipment was designed for the purpose of digital conversion of fast signals. The recording device had to be located close to the sensor. The equipment had to be highly reliable and had to transmit its output signals before an electromagnetic pulse disturbance. The sampling approach, with its readiness for digitization, was selected as the more convenient option.

  14. Writing for Distance Education. Samples Booklet.

    Science.gov (United States)

    International Extension Coll., Cambridge (England).

    Approaches to the format, design, and layout of printed instructional materials for distance education are illustrated in 36 samples designed to accompany the manual, "Writing for Distance Education." Each sample is presented on a single page with a note pointing out its key features. Features illustrated include use of typescript layout, a comic…

  15. On the Sampling

    OpenAIRE

    Güleda Doğan

    2017-01-01

    This editorial is on statistical sampling, which is one of the two most important reasons for editorial rejection from our journal, Turkish Librarianship. The stages of quantitative research, the stage at which sampling occurs, the importance of sampling for research, deciding on sample size, and sampling methods are summarised briefly.

  16. Information sampling behavior with explicit sampling costs

    Science.gov (United States)

    Juni, Mordechai Z.; Gureckis, Todd M.; Maloney, Laurence T.

    2015-01-01

    The decision to gather information should take into account both the value of information and its accrual costs in time, energy and money. Here we explore how people balance the monetary costs and benefits of gathering additional information in a perceptual-motor estimation task. Participants were rewarded for touching a hidden circular target on a touch-screen display. The target’s center coincided with the mean of a circular Gaussian distribution from which participants could sample repeatedly. Each “cue” — sampled one at a time — was plotted as a dot on the display. Participants had to repeatedly decide, after sampling each cue, whether to stop sampling and attempt to touch the hidden target or continue sampling. Each additional cue increased the participants’ probability of successfully touching the hidden target but reduced their potential reward. Two experimental conditions differed in the initial reward associated with touching the hidden target and the fixed cost per cue. For each condition we computed the optimal number of cues that participants should sample, before taking action, to maximize expected gain. Contrary to recent claims that people gather less information than they objectively should before taking action, we found that participants over-sampled in one experimental condition, and did not significantly under- or over-sample in the other. Additionally, while the ideal observer model ignores the current sample dispersion, we found that participants used it to decide whether to stop sampling and take action or continue sampling, a possible consequence of imperfect learning of the underlying population dispersion across trials. PMID:27429991
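The expected-gain trade-off described above can be sketched numerically. All parameter values below are hypothetical illustrations, not taken from the study: if the response is the mean of n cues drawn from a circular Gaussian, the miss distance of that mean is Rayleigh-distributed, which gives a closed form for the hit probability:

```python
import math

# Hypothetical parameters (not from the study): initial reward R0,
# cost per cue k, cue dispersion sigma, target radius r.
R0, k, sigma, r = 100.0, 2.0, 30.0, 8.0

def p_hit(n):
    """P(touching the target) when the response is the mean of n cues:
    the mean's error is 2-D Gaussian with s.d. sigma/sqrt(n), so the
    miss distance is Rayleigh and P(hit) = 1 - exp(-r^2 n / (2 sigma^2))."""
    return 1.0 - math.exp(-(r * r * n) / (2.0 * sigma * sigma))

def expected_gain(n):
    """Reward shrinks linearly with each cue; hit probability grows."""
    return (R0 - k * n) * p_hit(n)

# The optimal stopping point maximizes expected gain over the sample count.
best_n = max(range(1, 50), key=expected_gain)
print(f"optimal number of cues: {best_n}, expected gain {expected_gain(best_n):.1f}")
```

Under-sampling relative to `best_n` loses hit probability faster than it saves cue costs; over-sampling, as the participants in one condition did, pays for precision the task no longer rewards.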

  17. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
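The two quantities highlighted above, the expected value and standard error of a sampling distribution, can be demonstrated with a minimal simulation (a generic sketch, not the article's classroom activity): even for a skewed population, the standard deviation of simulated sample means tracks sigma/sqrt(n).

```python
import random
import statistics

random.seed(1)
# A deliberately skewed population (exponential: mean ~1, s.d. ~1).
population = [random.expovariate(1.0) for _ in range(100_000)]
sigma = statistics.pstdev(population)

def sampling_distribution(n, draws=5000):
    """Means of `draws` samples of size n, approximating the
    sampling distribution of the sample mean."""
    return [statistics.fmean(random.choices(population, k=n)) for _ in range(draws)]

observed_se = {}
for n in (4, 16, 64):
    means = sampling_distribution(n)
    observed_se[n] = statistics.stdev(means)  # empirical standard error
    print(f"n={n:3d}  observed SE={observed_se[n]:.3f}  sigma/sqrt(n)={sigma / n**0.5:.3f}")
```

Quadrupling the sample size halves the standard error, and (per the central limit theorem) the distribution of the means grows increasingly normal despite the skewed population.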

  18. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice of a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
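The contrast between the probability methods named above can be made concrete with a short sketch (the sampling frame and strata here are hypothetical): a simple random sample gives every unit the same inclusion probability, while a stratified random sample with proportional allocation guarantees each stratum is represented.

```python
import random

random.seed(42)
# Hypothetical sampling frame: 1,000 patients, 20% from a rural clinic.
frame = [{"id": i, "stratum": "rural" if i < 200 else "urban"}
         for i in range(1000)]

# 1) Simple random sample: every patient has equal inclusion probability.
srs = random.sample(frame, k=50)

# 2) Stratified random sample: draw proportionally within each stratum,
#    guaranteeing the rural minority appears in the sample.
def stratified_sample(frame, k):
    strata = {}
    for unit in frame:
        strata.setdefault(unit["stratum"], []).append(unit)
    out = []
    for name, units in strata.items():
        k_h = round(k * len(units) / len(frame))  # proportional allocation
        out.extend(random.sample(units, k_h))
    return out

strat = stratified_sample(frame, 50)
print(sum(u["stratum"] == "rural" for u in strat), "rural units of", len(strat))
# prints: 10 rural units of 50
```

The simple random sample may by chance contain few or no rural patients; the stratified design fixes their count at the population proportion.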

  19. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice of a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  20. Designing Communication Design

    DEFF Research Database (Denmark)

    Løvlie, Anders Sundnes

    2016-01-01

    Innovating in the field of new media genres requires methods for producing designs that can succeed in being disseminated and used outside of design research labs. This article uses the author's experiences with the development of university courses in communication design to address the research question: How can we design courses to give students the competencies they need to work as designers of new media? Based on existing approaches from UX design and other fields, I present a model that has demonstrated its usefulness in the development of commercial products and services. The model...

  1. Radioactivity in environmental samples

    International Nuclear Information System (INIS)

    Fornaro, Laura

    2001-01-01

    The objective of this practical work is to familiarize the student with radioactivity measurements in environmental samples. The chosen samples were a natural potassium salt, a uranium or thorium salt, and a sample of drinking water.

  2. DNA Sampling Hook

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The DNA Sampling Hook is a significant improvement on a method of obtaining a tissue sample from a live fish in situ from an aquatic environment. A tissue sample...

  3. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  4. Network and adaptive sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Combining the two statistical techniques of network sampling and adaptive sampling, this book illustrates the advantages of using them in tandem to effectively capture sparsely located elements in unknown pockets. It shows how network sampling is a reliable guide in capturing inaccessible entities through linked auxiliaries. The text also explores how adaptive sampling is strengthened in information content through subsidiary sampling with devices to mitigate unmanageable expanding sample sizes. Empirical data illustrates the applicability of both methods.

  5. Handling missing data in ranked set sampling

    CERN Document Server

    Bouza-Herrera, Carlos N

    2013-01-01

    The existence of missing observations is a very important aspect to be considered in the application of survey sampling. In human populations, for example, they may be caused by a refusal of some interviewees to give the true value for the variable of interest. Traditionally, simple random sampling is used to select samples, and most statistical models are supported by the use of samples selected by means of this design. In recent decades an alternative design has come into use which, in many cases, shows an improvement in accuracy compared with traditional sampling: ranked set sampling.
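The accuracy gain of ranked set sampling over simple random sampling can be shown with a small simulation (a generic sketch, not the book's own examples). Note one simplification: here the ranking uses the measured values themselves, whereas in practice ranking relies on a cheap auxiliary judgment and only m units per cycle are actually measured.

```python
import random
import statistics

random.seed(7)

def ranked_set_sample(draw, m):
    """One cycle of balanced ranked set sampling: draw m sets of m units,
    rank each set, and keep the i-th order statistic from the i-th set."""
    return [sorted(draw() for _ in range(m))[i] for i in range(m)]

draw = lambda: random.gauss(50.0, 10.0)   # hypothetical study variable
m, cycles, reps = 4, 5, 4000              # each estimator measures m*cycles = 20 units

rss_means = [statistics.fmean(x for _ in range(cycles)
                              for x in ranked_set_sample(draw, m))
             for _ in range(reps)]
srs_means = [statistics.fmean(draw() for _ in range(m * cycles))
             for _ in range(reps)]

print(f"var(SRS mean) = {statistics.variance(srs_means):.2f}")
print(f"var(RSS mean) = {statistics.variance(rss_means):.2f}")  # smaller for RSS
```

With the same number of measured units, the RSS mean has markedly lower variance because the stratification by rank spreads the measured units across the distribution.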

  6. NASA Lunar and Meteorite Sample Disk Program

    Science.gov (United States)

    Foxworth, Suzanne

    2017-01-01

    The Lunar and Meteorite Sample Disk Program is designed for K-12 classroom educators who work in K-12 schools, museums, libraries, or planetariums. Educators have to be certified to borrow the Lunar and Meteorite Sample Disks by attending a NASA Certification Workshop provided by a NASA Authorized Sample Disk Certifier.

  7. Mixed Methods Sampling: A Typology with Examples

    Science.gov (United States)

    Teddlie, Charles; Yu, Fen

    2007-01-01

    This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…

  8. Final Sampling and Analysis Plan for Background Sampling, Fort Sheridan, Illinois

    National Research Council Canada - National Science Library

    1995-01-01

    .... This Background Sampling and Analysis Plan (BSAP) is designed to address this issue through the collection of additional background samples at Fort Sheridan to support the statistical analysis and the Baseline Risk Assessment (BRA...

  9. Environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1995-02-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy (DOE). This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at the Hanford Site and surrounding communities. The responsibility for monitoring onsite drinking water falls outside the scope of the SESP. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control, and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site. Sampling is indicated as annual, semi-annual, quarterly, or monthly in the sampling schedule. Some samples are collected and analyzed as part of ground-water monitoring and characterization programs at Hanford (e.g., Resource Conservation and Recovery Act (RCRA), Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or Operational). The number of samples planned by other programs are identified in the sampling schedule by a number in the analysis column and a project designation in the Cosample column. Well sampling events may be merged to avoid redundancy in cases where sampling is planned by both environmental surveillance and another program.

  10. Environmental surveillance master sampling schedule

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, L.E.

    1995-02-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy (DOE). This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at the Hanford Site and surrounding communities. The responsibility for monitoring onsite drinking water falls outside the scope of the SESP. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control, and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site. Sampling is indicated as annual, semi-annual, quarterly, or monthly in the sampling schedule. Some samples are collected and analyzed as part of ground-water monitoring and characterization programs at Hanford (e.g., Resource Conservation and Recovery Act (RCRA), Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or Operational). The number of samples planned by other programs are identified in the sampling schedule by a number in the analysis column and a project designation in the Cosample column. Well sampling events may be merged to avoid redundancy in cases where sampling is planned by both environmental surveillance and another program.

  11. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    International Nuclear Information System (INIS)

    Shine, E. P.; Poirier, M. R.

    2013-01-01

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  12. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  13. Utilizing the Zero-One Linear Programming Constraints to Draw Multiple Sets of Matched Samples from a Non-Treatment Population as Control Groups for the Quasi-Experimental Design

    Science.gov (United States)

    Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar

    2005-01-01

    The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…

  14. Sampling procedures and tables

    International Nuclear Information System (INIS)

    Franzkowski, R.

    1980-01-01

    Characteristics, defects, defectives - Sampling by attributes and by variables - Sample versus population - Frequency distributions for the number of defectives or the number of defects in the sample - Operating characteristic curve, producer's risk, consumer's risk - Acceptable quality level AQL - Average outgoing quality AOQ - Standard ISO 2859 - Fundamentals of sampling by variables for fraction defective. (RW)
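The operating characteristic curve listed above can be computed directly. The sketch below assumes a hypothetical single-sampling plan (sample size n = 80, acceptance number c = 2) and the binomial model appropriate for large lots:

```python
from math import comb

def prob_accept(n, c, p):
    """Operating characteristic: probability that a sample of n items
    contains at most c defectives when the lot fraction defective is p
    (binomial approximation for sampling from a large lot)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: sample n = 80 items, accept the lot if <= 2 defectives.
n, c = 80, 2
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"fraction defective {p:.2f}: P(accept) = {prob_accept(n, c, p):.3f}")
```

Reading the curve at the AQL gives the producer's risk (1 minus the acceptance probability there), while reading it at the limiting quality gives the consumer's risk, which is how plans such as those tabulated in ISO 2859 are characterized.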

  15. Effective sample labeling

    International Nuclear Information System (INIS)

    Rieger, J.T.; Bryce, R.W.

    1990-01-01

    Ground-water samples collected for hazardous-waste and radiological monitoring have come under strict regulatory and quality assurance requirements as a result of laws such as the Resource Conservation and Recovery Act. To comply with these laws, the labeling system used to identify environmental samples had to be upgraded to ensure proper handling and to protect collection personnel from exposure to sample contaminants and sample preservatives. The sample label now used at the Pacific Northwest Laboratory is a complete sample document. In the event the other paperwork on a labeled sample is lost, the necessary information can be found on the label.

  16. Rationale and design of the iPap trial: a randomized controlled trial of home-based HPV self-sampling for improving participation in cervical screening by never- and under-screened women in Australia

    International Nuclear Information System (INIS)

    Sultana, Farhana; Gertig, Dorota M; English, Dallas R; Simpson, Julie A; Brotherton, Julia ML; Drennan, Kelly; Mullins, Robyn; Heley, Stella; Wrede, C David; Saville, Marion

    2014-01-01

    Organized screening based on Pap tests has substantially reduced deaths from cervical cancer in many countries, including Australia. However, the impact of the program depends upon the degree to which women participate. A new method of screening, testing for human papillomavirus (HPV) DNA to detect the virus that causes cervical cancer, has recently become available. Because women can collect their own samples for this test at home, it has the potential to overcome some of the barriers to Pap tests. The iPap trial will evaluate whether mailing an HPV self-sampling kit increases participation by never- and under-screened women within a cervical screening program. The iPap trial is a parallel, open-label, randomized controlled trial. Participants will be Victorian women aged 30–69 years, for whom there is either no record on the Victorian Cervical Cytology Registry (VCCR) of a Pap test (never-screened) or the last recorded Pap test was between five and fifteen years ago (under-screened). Enrolment information from the Victorian Electoral Commission will be linked to the VCCR to identify the never-screened women. Variables that will be used for record linkage include full name, address and date of birth. Never- and under-screened women will be randomly allocated to either receive an invitation letter with an HPV self-sampling kit or a reminder letter to attend for a Pap test, which is standard practice for women overdue for a test in Victoria. All resources have been focus-group tested. The primary outcome will be the proportion of women who participate, by returning an HPV self-sampling kit for women in the self-sampling arm, and notification of a Pap test result to the Registry for women in the Pap test arm at 3 and 6 months after mailout. The most important secondary outcome is the proportion of test-positive women who undergo further investigations at 6 and 12 months after mailout of results.
The iPap trial will provide strong evidence about whether HPV self-sampling

  17. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research, have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will waste time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
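
The per-group calculation the record describes (alpha, power, and the expected group difference determine n) can be sketched with the standard normal-approximation formula for comparing two means; the function name and example effect size below are illustrative, not taken from the article:

```python
import math
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means,
    using the normal-approximation formula:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta) ** 2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)                            # round up to whole subjects

# Detecting a difference of half a standard deviation (delta/sigma = 0.5)
print(sample_size_two_means(delta=0.5, sigma=1.0))
```

As the record notes, halving the detectable difference quadruples the required sample size, which this formula makes explicit.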

  18. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice or on populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438
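
The probability-sampling methods named in the record (simple random and stratified random sampling) can be sketched in a few lines of stdlib Python; the patient data and helper names below are invented for illustration:

```python
import random
from collections import defaultdict

def simple_random_sample(population, n, seed=None):
    """Probability sampling: every subset of size n is equally likely."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, strata_of, per_stratum, seed=None):
    """Draw a fixed number of units from each stratum (e.g. each sex or age group)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for unit in population:
        strata[strata_of(unit)].append(unit)
    return {name: rng.sample(units, per_stratum) for name, units in strata.items()}

# Hypothetical study population of 100 patients
patients = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(100)]
srs = simple_random_sample(patients, 10, seed=1)
strat = stratified_sample(patients, lambda p: p["sex"], 5, seed=1)
```

A convenience sample, by contrast, would simply take whichever patients are at hand, which is why it must not be reported as "random".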

  19. Enhanced conformational sampling using enveloping distribution sampling.

    Science.gov (United States)

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    Lessening the problem of insufficient conformational sampling in biomolecular simulations remains a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations, with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.

  20. Sampling in practice

    DEFF Research Database (Denmark)

    Esbensen, Kim Harry; Petersen, Lars

    2005-01-01

    A basic knowledge of the Theory of Sampling (TOS) and a set of only eight sampling unit operations is all the practical sampler needs to ensure representativeness of samples extracted from all kinds of lots: production batches, - truckloads, - barrels, sub-division in the laboratory, sampling...... in nature and in the field (environmental sampling, forestry, geology, biology), from raw materials or manufactory processes etc. We here can only give a brief introduction to the Fundamental Sampling Principle (FSP) and these eight Sampling Unit Operations (SUO’s). Always respecting FSP and invoking only...... the necessary SUO’s (dependent on the practical situation) is the only prerequisite needed for eliminating all sampling bias and simultaneously minimizing sampling variance, and this is in addition a sure guarantee for making the final analytical results trustworthy. No reliable conclusions can be made unless...

  1. Bayesian Geostatistical Design

    DEFF Research Database (Denmark)

    Diggle, Peter; Lophaven, Søren Nymand

    2006-01-01

    locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...

  2. The Hospital Microbiome Project: Meeting Report for the 1st Hospital Microbiome Project Workshop on sampling design and building science measurements, Chicago, USA, June 7th-8th 2012.

    Science.gov (United States)

    Smith, Daniel; Alverdy, John; An, Gary; Coleman, Maureen; Garcia-Houchins, Sylvia; Green, Jessica; Keegan, Kevin; Kelley, Scott T; Kirkup, Benjamin C; Kociolek, Larry; Levin, Hal; Landon, Emily; Olsiewski, Paula; Knight, Rob; Siegel, Jeffrey; Weber, Stephen; Gilbert, Jack

    2013-04-15

    This report details the outcome of the 1st Hospital Microbiome Project workshop held on June 7th-8th, 2012 at the University of Chicago, USA. The workshop was arranged to determine the most appropriate sampling strategy and approach to building science measurement to characterize the development of a microbial community within a new hospital pavilion being built at the University of Chicago Medical Center. The workshop made several recommendations and led to the development of a full proposal to the Alfred P. Sloan Foundation as well as to the creation of the Hospital Microbiome Consortium.

  3. Sampling of ore

    International Nuclear Information System (INIS)

    Boehme, R.C.; Nicholas, B.L.

    1987-01-01

    This invention relates to a method of and apparatus for ore sampling. The method includes the steps of periodically removing a sample of the output material of a sorting machine, weighing each sample so that each is of the same weight, measuring a characteristic such as the radioactivity, magnetivity or the like of each sample, subjecting at least an equal portion of each sample to chemical analysis to determine the mineral content of the sample, and comparing the characteristic measurement with the desired mineral content of the chemically analysed portion of the sample to determine the characteristic/mineral ratio of the sample. The apparatus includes an ore sample collector, a deflector for deflecting a sample of ore particles from the output of an ore sorter into the collector, and means for moving the deflector from a first position in which it is clear of the particle path from the sorter to a second position in which it is in the particle path at predetermined time intervals and for predetermined time periods to deflect the sample particles into the collector. The apparatus conveniently includes an ore crusher for comminuting the sample particles, a sample hopper, means for weighing the hopper, a detector in the hopper for measuring a characteristic such as radioactivity, magnetivity or the like of particles in the hopper, a discharge outlet from the hopper, and means for feeding the particles from the collector to the crusher and then to the hopper.

  4. Prompting Designers to Design

    DEFF Research Database (Denmark)

    Ahmed, Saeema

    2006-01-01

    Recent research suggests that engineering designers need assistance to understand what information is relevant for their particular design problem. They require guidance in formulating their queries and also in understanding what information is relevant for them. This paper presents an approach to pr...

  5. Genetic Sample Inventory

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected primarily from the U.S. east coast. The collection includes samples from field programs,...

  6. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

  7. Chorionic villus sampling

    Science.gov (United States)

    Chorionic villus sampling (CVS) is a test some pregnant women have ...

  8. Sampling on Quasicrystals

    OpenAIRE

    Grepstad, Sigrid

    2011-01-01

    We prove that quasicrystals are universal sets of stable sampling in any dimension. Necessary and sufficient density conditions for stable sampling and interpolation sets in one dimension are studied in detail.

  9. Genetic Sample Inventory - NRDA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected in the North-Central Gulf of Mexico from 2010-2015. The collection includes samples from...

  10. Test sample handling apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)

  11. Laboratory Sampling Guide

    Science.gov (United States)

    2012-05-11

    environment, and by ingestion of foodstuffs that have incorporated C-14 by photosynthesis. Like tritium, C-14 is a very low energy beta emitter and is... bacterial growth and to minimize development of solids in the sample. • Properly identify each sample container with name, SSN, and collection start and... sampling in the same cardboard carton. The sample may be kept cool or frozen during collection to control odor and bacterial growth. • Once

  12. High speed network sampling

    OpenAIRE

    Rindalsholt, Ole Arild

    2005-01-01

    Master's thesis in network and system administration. Classical sampling methods play an important role in the current practice of Internet measurement. With today’s high speed networks, routers cannot manage to generate complete Netflow data for every packet. They have to perform restricted sampling. This thesis summarizes some of the most important sampling schemes and their applications before diving into an analysis of the effect of sampling Netflow records.

  13. Mars Sample Handling Functionality

    Science.gov (United States)

    Meyer, M. A.; Mattingly, R. L.

    2018-04-01

    The final leg of a Mars Sample Return campaign would be an entity that we have referred to as Mars Returned Sample Handling (MRSH). This talk will address our current view of the functional requirements on MRSH, focused on the Sample Receiving Facility (SRF).

  14. IAEA Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    Geist, William H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-15

    The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
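
A common attribute-sampling calculation behind plans of this kind — not necessarily the IAEA's exact procedure — sizes the sample so that, if d items in a population of N are defective (or diverted), at least one is found with a stated detection probability. A minimal sketch, with illustrative numbers:

```python
import math

def detection_sample_size(N, d, detection_prob):
    """Approximate number of items to sample from a population of N items so
    that, if d of them are defective, at least one defect is detected with the
    stated probability.  Uses the standard attribute-sampling approximation:
        n = N * (1 - (1 - DP)**(1/d))
    """
    return math.ceil(N * (1 - (1 - detection_prob) ** (1.0 / d)))

# 100 items, 5 assumed defective, 95% detection probability
print(detection_sample_size(100, 5, 0.95))
```

Note how the required sample size grows sharply as the assumed number of defective items d shrinks, which is why the choice of "significant quantity" drives the plan.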

  15. Developing Water Sampling Standards

    Science.gov (United States)

    Environmental Science and Technology, 1974

    1974-01-01

    Participants in the D-19 symposium on aquatic sampling and measurement for water pollution assessment were informed that determining the extent of waste water stream pollution is not a cut-and-dried procedure. Topics discussed include field sampling, representative sampling from storm sewers, suggested sampler features and application of improved…

  16. FUZZY ACCEPTANCE SAMPLING AND CHARACTERISTIC CURVES

    Directory of Open Access Journals (Sweden)

    Ebru Turanoğlu

    2012-02-01

    Full Text Available Acceptance sampling is primarily used for the inspection of incoming or outgoing lots. Acceptance sampling refers to the application of specific sampling plans to a designated lot or sequence of lots. The parameters of acceptance sampling plans are sample sizes and acceptance numbers. In some cases, it may not be possible to define acceptance sampling parameters as crisp values. These parameters can be expressed by linguistic variables. The fuzzy set theory can be successfully used to cope with the vagueness in these linguistic expressions for acceptance sampling. In this paper, the main distributions of acceptance sampling plans are handled with fuzzy parameters and their acceptance probability functions are derived. Then the characteristic curves of acceptance sampling are examined under fuzziness. Illustrative examples are given.
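
For the crisp (non-fuzzy) baseline that the record generalizes, the operating characteristic of a single sampling plan with sample size n and acceptance number c follows the binomial model. A minimal sketch with an illustrative plan:

```python
from math import comb

def prob_accept(n, c, p):
    """Operating characteristic: probability that a sample of n items contains
    at most c defectives when the true lot fraction defective is p
    (binomial model)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# OC curve for the (hypothetical) plan n = 50, c = 2 at a few quality levels
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}  Pa = {prob_accept(50, 2, p):.3f}")
```

The fuzzy extension in the record replaces the crisp n, c, and p with linguistic values, yielding a band of OC curves rather than a single curve.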

  17. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
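
The card-deck analogy above maps directly onto code: systematic sampling picks a random start and then every k-th unit thereafter. A minimal Python sketch (names and the section count are illustrative):

```python
import random

def systematic_sample(population, n, seed=None):
    """Systematic sampling with an independent random start: choose a random
    offset within the first interval, then take every k-th unit (k = N // n).
    Analogous to picking a card at a random offset and then every equally
    spaced card after it."""
    rng = random.Random(seed)
    k = len(population) // n          # sampling interval
    start = rng.randrange(k)          # uniform random start in [0, k)
    return [population[start + i * k] for i in range(n)]

sections = list(range(200))           # e.g. serial sections through a structure
sample = systematic_sample(sections, 10, seed=42)
```

Because most biological structures are not randomly organized, the even spacing usually yields lower estimator variance than an independent random sample of the same size.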

  18. Generalized sampling in Julia

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Nielsen, Morten; Rasmussen, Morten Grud

    2017-01-01

    Generalized sampling is a numerically stable framework for obtaining reconstructions of signals in different bases and frames from their samples. For example, one can use wavelet bases for reconstruction given frequency measurements. In this paper, we will introduce a carefully documented toolbox...... for performing generalized sampling in Julia. Julia is a new language for technical computing with focus on performance, which is ideally suited to handle the large size problems often encountered in generalized sampling. The toolbox provides specialized solutions for the setup of Fourier bases and wavelets....... The performance of the toolbox is compared to existing implementations of generalized sampling in MATLAB....

  19. The Lunar Sample Compendium

    Science.gov (United States)

    Meyer, Charles

    2009-01-01

    The Lunar Sample Compendium is a succinct summary of the data obtained from 40 years of study of Apollo and Luna samples of the Moon. Basic petrographic, chemical and age information is compiled, sample-by-sample, in the form of an advanced catalog in order to provide a basic description of each sample. The LSC can be found online using Google. The initial allocation of lunar samples was done sparingly, because it was realized that scientific techniques would improve over the years and new questions would be formulated. The LSC is important because it enables scientists to select samples within the context of the work that has already been done and facilitates better review of proposed allocations. It also provides back up material for public displays, captures information found only in abstracts, grey literature and curatorial databases and serves as a ready access to the now-vast scientific literature.

  20. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    Full Text Available We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.

  1. Infill sampling criteria to locate extremes

    CSIR Research Space (South Africa)

    Watson, AG

    1995-07-01

    Full Text Available Three problem-dependent meanings for engineering ''extremes'' are motivated, established, and translated into formal geostatistical (model-based) criteria for designing infill sample networks. (I) Locate an area within the domain of interest where a...

  2. Mars Sample Return Architecture Assessment Study

    Science.gov (United States)

    Centuori, S.; Hermosín, P.; Martín, J.; De Zaiacomo, G.; Colin, S.; Godfrey, A.; Myles, J.; Johnson, H.; Sachdev, T.; Ahmed, R.

    2018-04-01

    This paper presents the results of the ESA-funded activity "Mars Sample Return Architecture Assessment Study", carried out by DEIMOS Space, Lockheed Martin UK Ampthill, and MDA Corporation, in which more than 500 mission design options have been studied.

  3. Water sample-collection and distribution system

    Science.gov (United States)

    Brooks, R. R.

    1978-01-01

    Collection and distribution system samples water from six designated stations, filtered if desired, and delivers it to various analytical sensors. System may be controlled by Water Monitoring Data Acquisition System or operated manually.

  4. Nitrate Waste Treatment Sampling and Analysis Plan

    Energy Technology Data Exchange (ETDEWEB)

    Vigil-Holterman, Luciana R. [Los Alamos National Laboratory; Martinez, Patrick Thomas [Los Alamos National Laboratory; Garcia, Terrence Kerwin [Los Alamos National Laboratory

    2017-07-05

    This plan is designed to outline the collection and analysis of nitrate salt-bearing waste samples required by the New Mexico Environment Department - Hazardous Waste Bureau in the Los Alamos National Laboratory (LANL) Hazardous Waste Facility Permit (Permit).

  5. Standardized sampling system for reactor coolants

    International Nuclear Information System (INIS)

    Divine, J.R.; Munson, L.F.; Nelson, J.L.; McDowell, R.L.; Jankowski, M.W.

    1982-09-01

    A three-pronged approach was developed to reach the objectives of acceptable coolant sampling, assessment of occupational exposure from corrosion products, and model development for the transport and buildup of corrosion products. Emphasis is on sampler design

  6. Sample Return Systems for Extreme Environments

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed work seeks to design, develop and test a hard impact penetrator/sampler that can withstand the hard impact and enable the sample to be returned to...

  7. Winged design; Befluegeltes Design

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Tilman

    2013-10-15

    The longest wind rotor blades today measure more than 80 meters. To keep them light and transportable, designers and materials scientists have come up with a number of ideas. Will carbon fiber prevail against glass fiber?

  8. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  9. Dansk Design

    DEFF Research Database (Denmark)

    Dickson, Thomas

    Contents: What is design?; Where does Danish design come from?; Product design; Textile and clothing design; Design of furniture and interiors; Buildings and design; The design of work; Transport design; Public design; Graphic design; New times and a new kind of design...

  10. Leveraging model-based study designs and serial micro-sampling techniques to understand the oral pharmacokinetics of the potent LTB4 inhibitor, CP-105696, for mouse pharmacology studies.

    Science.gov (United States)

    Spilker, Mary E; Chung, Heekyung; Visswanathan, Ravi; Bagrodia, Shubha; Gernhardt, Steven; Fantin, Valeria R; Ellies, Lesley G

    2017-07-01

    1. Leukotriene B4 (LTB4) is a proinflammatory mediator important in the progression of a number of inflammatory diseases. Preclinical models can explore the role of LTB4 in pathophysiology using tool compounds, such as CP-105696, that modulate its activity. To support preclinical pharmacology studies, micro-sampling techniques and mathematical modeling were used to determine the pharmacokinetics of CP-105696 in mice within the context of systemic inflammation induced by a high-fat diet (HFD). 2. Following oral administration of doses > 35 mg/kg, CP-105696 kinetics can be described by a one-compartment model with first order absorption. The compound's half-life is 44-62 h with an apparent volume of distribution of 0.51-0.72 L/kg. Exposures in animals fed an HFD are within 2-fold of those fed a normal chow diet. Daily dosing at 100 mg/kg was not tolerated and resulted in a >20% weight loss in the mice. 3. CP-105696's long half-life has the potential to support a twice weekly dosing schedule. Given that most chronic inflammatory diseases will require long-term therapies, these results are useful in determining the optimal dosing schedules for preclinical studies using CP-105696.
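
A one-compartment model with first-order absorption, as described in the record, corresponds to the Bateman equation. The sketch below uses the reported half-life range (taking ~50 h) but assumed values for ka, F, V, and dose — illustrative only, not the study's fitted parameters:

```python
import math

def conc_oral_1cmt(t, dose, F, ka, ke, V):
    """Bateman equation: plasma concentration at time t after a single oral
    dose, for a one-compartment model with first-order absorption rate ka
    and first-order elimination rate ke."""
    return (F * dose * ka) / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative values: t_1/2 = 50 h gives ke = ln(2)/50; ka and F are assumed.
ke = math.log(2) / 50.0      # elimination rate constant (1/h)
ka = 1.0                     # absorption rate constant (1/h), assumed
for t in (4, 24, 72, 168):
    c = conc_oral_1cmt(t, dose=35.0, F=0.5, ka=ka, ke=ke, V=0.6)
    print(f"t = {t:4d} h  C = {c:6.2f} mg/L")
```

With a half-life of this order, concentrations decline only a few-fold over 3-4 days, which is what makes a twice-weekly dosing schedule plausible.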

  11. Rationale, study design and sample characteristics of a randomized controlled trial of directly administered antiretroviral therapy for HIV-infected prisoners transitioning to the community - a potential conduit to improved HIV treatment outcomes.

    Science.gov (United States)

    Saber-Tehrani, Ali Shabahang; Springer, Sandra A; Qiu, Jingjun; Herme, Maua; Wickersham, Jeffrey; Altice, Frederick L

    2012-03-01

    HIV-infected prisoners experience poor HIV treatment outcomes post-release. Directly administered antiretroviral therapy (DAART) is a CDC-designated, evidence-based adherence intervention for drug users, yet untested among released prisoners. Sentenced HIV-infected prisoners on antiretroviral therapy (ART) and returning to New Haven or Hartford, Connecticut were recruited and randomized 2:1 to a prospective randomized controlled trial (RCT) of 6 months of DAART versus self-administered therapy (SAT); all subjects received case management services. Subjects meeting DSM-IV criteria for opioid dependence were offered immediate medication-assisted treatment. Trained outreach workers provided DAART once daily, seven days per week, including behavioral skills training during the last intervention month. Both study groups were assessed for 6 months after the intervention period. Assessments occurred within 90 days pre-release (baseline), on the day of release, and then monthly for 12 months. Viral load (VL) and CD4 testing was conducted at baseline and quarterly; genotypic resistance testing was conducted at baseline, 6 and 12 months. The primary outcome was pre-defined as viral suppression (VL…); the trial is intended as a potential conduit to improved HIV treatment outcomes after release from prison, a period associated with adverse HIV and other medical consequences. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Professional ASP.NET Design Patterns

    CERN Document Server

    Millett, Scott

    2010-01-01

    Professional ASP.NET Design Patterns will show you how to implement design patterns in real ASP.NET applications by introducing you to the basic OOP skills needed to understand and interpret design patterns. A sample application used throughout the book is an enterprise level ASP.NET website with multi-tiered, SOA design techniques that can be applied to your future ASP.NET projects. Read about each design pattern in detail, including how to interpret the UML design, how to implement it in ASP.NET, its importance for ASP.NET development, and how it's integrated into the final project.

  13. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
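
Of the Monte Carlo selection methods listed, Latin hypercube sampling can be sketched in a few lines of stdlib Python: each axis of the unit hypercube is cut into n equal strata, and every stratum is hit exactly once per axis. The implementation details below are illustrative:

```python
import random

def latin_hypercube(n, dims, seed=None):
    """Latin hypercube sample of n points in [0, 1)^dims: each dimension is
    divided into n equal strata, and a random permutation assigns exactly one
    stratum per point per dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                     # random stratum order
        columns.append([(s + rng.random()) / n  # uniform jitter within stratum
                        for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n)]

pts = latin_hypercube(5, 2, seed=0)
```

Compared with simple random sampling, this stratification guarantees even marginal coverage of each parameter, which is why it is popular in probabilistic systems assessment codes.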

  14. Statistical distribution sampling

    Science.gov (United States)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  15. Sampling Lesbian, Gay, and Bisexual Populations

    Science.gov (United States)

    Meyer, Ilan H.; Wilson, Patrick A.

    2009-01-01

    Sampling has been the single most influential component of conducting research with lesbian, gay, and bisexual (LGB) populations. Poor sampling designs can result in biased results that will mislead other researchers, policymakers, and practitioners. Investigators wishing to study LGB populations must therefore devote significant energy and…

  16. Statistical sampling methods for soils monitoring

    Science.gov (United States)

    Ann M. Abbott

    2010-01-01

    Development of the best sampling design to answer a research question should be an interactive venture between the land manager or researcher and statisticians, and is the result of answering various questions. A series of questions that can be asked to guide the researcher in making decisions that will arrive at an effective sampling plan are described, and a case...

  17. The validation of a computer-adaptive test (CAT) for assessing health-related quality of life in children and adolescents in a clinical sample: study design, methods and first results of the Kids-CAT study.

    Science.gov (United States)

    Barthel, D; Otto, C; Nolte, S; Meyrose, A-K; Fischer, F; Devine, J; Walter, O; Mierke, A; Fischer, K I; Thyen, U; Klein, M; Ankermann, T; Rose, M; Ravens-Sieberer, U

    2017-05-01

    Recently, we developed a computer-adaptive test (CAT) for assessing health-related quality of life (HRQoL) in children and adolescents: the Kids-CAT. It measures five generic HRQoL dimensions. The aims of this article were (1) to present the study design and (2) to investigate its psychometric properties in a clinical setting. The Kids-CAT study is a longitudinal prospective study with eight measurements over one year at two University Medical Centers in Germany. For validating the Kids-CAT, 270 consecutive 7- to 17-year-old patients with asthma (n = 52), diabetes (n = 182) or juvenile arthritis (n = 36) answered well-established HRQoL instruments (Pediatric Quality of Life Inventory™ (PedsQL), KIDSCREEN-27) and scales measuring related constructs (e.g., social support, self-efficacy). Measurement precision, test-retest reliability, convergent and discriminant validity were investigated. The mean standard error of measurement ranged between .38 and .49 for the five dimensions, which equals a reliability between .86 and .76, respectively. The Kids-CAT measured most reliably in the lower HRQoL range. Convergent validity was supported by moderate to high correlations of the Kids-CAT dimensions with corresponding PedsQL dimensions ranging between .52 and .72. A lower correlation was found between the social dimensions of both instruments. Discriminant validity was confirmed by lower correlations with non-corresponding subscales of the PedsQL. The Kids-CAT measures pediatric HRQoL reliably, particularly in lower areas of HRQoL. Its test-retest reliability should be re-investigated in future studies. The validity of the instrument was demonstrated. Overall, results suggest that the Kids-CAT is a promising candidate for detecting psychosocial needs in chronically ill children.

  18. Hanford Sampling Quality Management Plan (HSQMP)

    International Nuclear Information System (INIS)

    Hyatt, J.E.

    1995-06-01

    HSQMP establishes quality requirements in response to DOE Order 5700. 6C and to 10 Code of Federal Regulations 830.120. HSQMP is designed to meet the needs of Richland Operations Office for controlling the quality of services provided by sampling operations. It is issued through the Analytical Services Program of the Waste Programs Division. This document describes the Environmental Sampling and Analysis Program activities considered to represent the best management activities necessary to achieve a sampling program with adequate control

  19. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
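
    A toy simulation (ours, not the authors' Edgeworth-expansion analysis) makes the point concrete: with skewed data and a small n, the actual false-positive rate of a routine t-test can drift away from its nominal level:

```python
import numpy as np
from scipy import stats

# Simulate many null datasets from a skewed distribution and count how
# often a one-sample t-test rejects at the nominal level. Illustrative
# only: the paper quantifies this analytically, not by simulation.
rng = np.random.default_rng(0)
n, n_tests, alpha = 10, 20000, 0.05
# Exponential(1) shifted to mean 0: a strongly skewed null distribution.
data = rng.exponential(1.0, size=(n_tests, n)) - 1.0
t, p = stats.ttest_1samp(data, 0.0, axis=1)
actual_rate = (p < alpha).mean()
print(f"nominal {alpha:.3f}, actual {actual_rate:.3f}")
```

    With symmetric Gaussian data the two rates would agree; the gap observed here is entirely due to the skewness interacting with the small sample size.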

  20. Sampling system and method

    Science.gov (United States)

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  1. Simple street tree sampling

    Science.gov (United States)

    David J. Nowak; Jeffrey T. Walton; James Baldwin; Jerry. Bond

    2015-01-01

    Information on street trees is critical for management of this important resource. Sampling of street tree populations provides an efficient means to obtain street tree population information. Long-term repeat measures of street tree samples supply additional information on street tree changes and can be used to report damages from catastrophic events. Analyses of...

  2. Sample pretreatment in microsystems

    DEFF Research Database (Denmark)

    Perch-Nielsen, Ivan R.

    2003-01-01

    When a sample, e.g. from a patient, is processed using conventional methods, the sample must be transported to the laboratory where it is analyzed, after which the results are sent back. By integrating the separate steps of the analysis in a micro total analysis system (μTAS), results can be obtained faster and better, preferably with all the processes from sample to signal moved to the bedside of the patient. Of course there is still much to learn and study in the process of miniaturization. DNA analysis is one process subject to integration. There are roughly three steps in a DNA analysis: sample preparation → DNA amplification → DNA analysis. The overall goal of the project is integration of as many as possible of these steps. This thesis covers mainly pretreatment in a microchip. Some methods for sample pretreatment have been tested. Most conventional is fluorescence activated cell sort…

  3. Biological sample collector

    Science.gov (United States)

    Murphy, Gloria A [French Camp, CA

    2010-09-07

    A biological sample collector is adapted to collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.

  4. Sampling system for a boiling reactor NPP

    International Nuclear Information System (INIS)

    Zabelin, A.I.; Yakovleva, E.D.; Solov'ev, Yu.A.

    1976-01-01

    Investigations and pilot running of the nuclear power plant with a VK-50 boiling reactor reveal the necessity of normalizing the design system of water sampling and of mandatory replacement of the needle-type throttle device by a helical one. A method for designing a helical throttle device has been worked out. The quantitative characteristics of depositions of corrosion products along the line of reactor water sampling are presented. Recommendations are given on the organization of the sampling system of a nuclear power plant with BWR-type reactors.

  5. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
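
    For readers unfamiliar with the setup, the sketch below (an illustration under our own assumptions, not the paper's mixed random-systematic estimator) shows plain 1-in-k systematic selection together with the SRS-without-replacement variance formula whose bias under systematic designs motivates the paper:

```python
import numpy as np

def systematic_sample(population, n, rng=None):
    """1-in-k systematic sample: one random start, then every k-th unit."""
    rng = np.random.default_rng(rng)
    k = len(population) // n
    start = rng.integers(k)
    return population[start::k][:n]

def srs_variance_of_mean(sample, N):
    """Textbook variance estimator of the sample mean under simple random
    sampling without replacement: (1 - n/N) * s^2 / n. Under a purely
    systematic design this is only unbiased if the population order is
    random -- the assumption the mixed design is built to avoid."""
    n = len(sample)
    s2 = np.var(sample, ddof=1)
    return (1 - n / N) * s2 / n

pop = np.arange(1000, dtype=float)
s = systematic_sample(pop, n=50, rng=1)
v = srs_variance_of_mean(s, N=len(pop))
```

    With an ordered population like `pop` above, the systematic sample is far more evenly spread than a random one, which is exactly why naively applying the SRS variance formula to it misstates the true design variance.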

  6. Representative mass reduction in sampling

    DEFF Research Database (Denmark)

    Petersen, Lars; Esbensen, Harry Kim; Dahl, Casper Kierulf

    2004-01-01

    We here present a comprehensive survey of current mass reduction principles and hardware available in the current market. We conduct a rigorous comparison study of the performance of 17 field and/or laboratory instruments or methods, which are quantitatively characterized (and ranked) for accuracy […] dividers, the Boerner Divider, the "spoon method", alternate/fractional shoveling and grab sampling. Only devices based on riffle splitting principles (static or rotational) pass the ultimate representativity test (with minor, but significant relative differences). Grab sampling, the overwhelmingly most often used mass reduction method, performs appallingly; its use must be discontinued (with the singular exception of completely homogenized fine powders). Only proper mass reduction (i.e. carried out in complete compliance with all appropriate design principles, maintenance and cleaning rules) can…

  7. Waste tank characterization sampling limits

    International Nuclear Information System (INIS)

    Tusler, L.A.

    1994-01-01

    This document is a result of the Plant Implementation Team Investigation into delayed reporting of the exotherm in Tank 241-T-111 waste samples. The corrective actions identified are to have immediate notification of appropriate Tank Farm Operations Shift Management if analyses with potential safety impact exceed established levels. A procedure, WHC-IP-0842 Section 12.18, "TWRS Approved Sampling and Data Analysis by Designated Laboratories" (WHC 1994), has been established to require that all tank waste sampling (including core, auger and supernate) and tank vapor samples be performed using this document. This document establishes levels for specified analyses that require notification of the appropriate shift manager. The following categories provide numerical values for analyses that may indicate that a tank is either outside the operating specification or should be evaluated for inclusion on a Watch List. The information given is intended to translate an operating limit such as heat load, expressed in Btu/hour, to an analysis-related limit, in this case cesium-137 and strontium-90 concentrations. By using the values provided as safety flags, the analytical laboratory personnel can notify a shift manager that a tank is in potential violation of an operating limit or that a tank should be considered for inclusion on a Watch List. The shift manager can then take appropriate interim measures until a final determination is made by engineering personnel.

  8. PFP Wastewater Sampling Facility

    International Nuclear Information System (INIS)

    Hirzel, D.R.

    1995-01-01

    This test report documents the results obtained while conducting operational testing of the sampling equipment in the 225-WC building, the PFP Wastewater Sampling Facility. The Wastewater Sampling Facility houses equipment to sample and monitor the PFP's liquid effluents before discharging the stream to the 200 Area Treated Effluent Disposal Facility (TEDF). The majority of the streams are not radioactive and discharge from the PFP heating, ventilation, and air conditioning (HVAC) systems. The streams that might be contaminated are processed through the Low Level Waste Treatment Facility (LLWTF) before discharging to TEDF. The sampling equipment consists of two flow-proportional composite samplers, an ultrasonic flowmeter, pH and conductivity monitors, a chart recorder, and associated relays and current isolators to interconnect the equipment to allow proper operation. Data signals from the monitors are received in the 234-5Z Shift Office, which contains a chart recorder and alarm annunciator panel. The data signals are also duplicated and sent to the TEDF control room through the Local Control Unit (LCU). Performing the OTP has verified the operability of the PFP wastewater sampling system. This Operability Test Report documents the acceptance of the sampling system for use.

  9. Contributions to sampling statistics

    CERN Document Server

    Conti, Pier; Ranalli, Maria

    2014-01-01

    This book contains a selection of the papers presented at the ITACOSM 2013 Conference, held in Milan in June 2013. ITACOSM is the bi-annual meeting of the Survey Sampling Group S2G of the Italian Statistical Society, intended as an international forum of scientific discussion on the developments of theory and application of survey sampling methodologies and applications in human and natural sciences. The book gathers research papers carefully selected from both invited and contributed sessions of the conference. The whole book appears to be a relevant contribution to various key aspects of sampling methodology and techniques; it deals with some hot topics in sampling theory, such as calibration, quantile-regression and multiple frame surveys, and with innovative methodologies in important topics of both sampling theory and applications. Contributions cut across current sampling methodologies such as interval estimation for complex samples, randomized responses, bootstrap, weighting, modeling, imputati...

  10. Waste classification sampling plan

    International Nuclear Information System (INIS)

    Landsman, S.D.

    1998-01-01

    The purpose of this sampling plan is to explain the method used to collect and analyze data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998.

  11. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    International Nuclear Information System (INIS)

    Nelsen, L.A.

    2009-01-01

    The purpose of this assessment is to compare underwater and above water settler sludge sampling methods to determine if the added cost for underwater sampling for the sole purpose of worker dose reductions is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision for underwater sampling for settler sludge needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation will compare and contrast the present method of above water sampling to the underwater method that is planned by the Sludge Treatment Project (STP) and determine if settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA), and eliminate the need for costly redesigns, testing and personnel retraining.

  13. Present status of NMCC and sample preparation method for bio-samples

    International Nuclear Information System (INIS)

    Futatsugawa, S.; Hatakeyama, S.; Saitou, S.; Sera, K.

    1993-01-01

    In NMCC (Nishina Memorial Cyclotron Center) we are doing research on PET (positron emission computed tomography) in nuclear medicine and PIXE (particle induced X-ray emission) analysis using a compactly designed small cyclotron. The NMCC facilities have been opened to researchers of other institutions since April 1993. The present status of NMCC is described. Bio-samples (medical samples, plants, animals and environmental samples) have mainly been analyzed by PIXE in NMCC. Small amounts of bio-samples for PIXE are decomposed quickly and easily in a sealed PTFE (polytetrafluoroethylene) vessel with a microwave oven. This sample preparation method for bio-samples is also described. (author)

  14. Statistical sampling for holdup measurement

    International Nuclear Information System (INIS)

    Picard, R.R.; Pillay, K.K.S.

    1986-01-01

    Nuclear materials holdup is a serious problem in many operating facilities. Estimating amounts of holdup is important for materials accounting and, sometimes, for process safety. Clearly, measuring holdup in all pieces of equipment is not a viable option in terms of time, money, and radiation exposure to personnel. Furthermore, 100% measurement is not only impractical but unnecessary for developing estimated values. Principles of statistical sampling are valuable in the design of cost-effective holdup monitoring plans and in quantifying uncertainties in holdup estimates. The purpose of this paper is to describe those principles and to illustrate their use.
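
    As a hypothetical illustration of such a plan (all strata, item counts and measurements below are invented, not from the paper), one might stratify equipment by expected holdup, measure a random subset in each stratum, and propagate the sampling uncertainty to the facility total:

```python
import numpy as np

# Stratified estimate of total holdup with finite-population-corrected
# sampling uncertainty. Every number here is a made-up placeholder.
strata = {          # stratum: (number of items, measured holdup in grams)
    "gloveboxes": (40, np.array([12.1, 9.8, 14.3, 11.0, 10.5])),
    "ductwork":   (120, np.array([2.2, 1.7, 3.1, 2.6])),
    "filters":    (25, np.array([6.4, 7.9, 5.5])),
}

total, var = 0.0, 0.0
for N_h, y in strata.values():
    n_h = len(y)
    total += N_h * y.mean()                 # expand stratum mean to stratum total
    # Variance of the estimated stratum total under SRS within the stratum.
    var += N_h**2 * (1 - n_h / N_h) * y.var(ddof=1) / n_h
se = var**0.5
print(f"estimated total holdup: {total:.0f} g (SE {se:.0f} g)")
```

    The point of the stratification is visible in the variance term: strata whose items are similar contribute little uncertainty even when only a few of their items are measured.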

  15. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
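
    For orientation, a crude Monte Carlo baseline for this kind of problem looks as follows (our sketch; Asymptotic Sampling itself additionally scales the input standard deviation and extrapolates back toward the original distribution, which is not shown here):

```python
import numpy as np
from scipy.stats import norm

# Estimate P(g(X) < 0) for a linear limit state with a known exact answer.
rng = np.random.default_rng(0)
beta = 3.0                      # reliability index of the test function
x = rng.standard_normal(1_000_000)
g = beta - x                    # failure when g < 0, i.e. x > beta
pf_mc = (g < 0).mean()
pf_exact = norm.cdf(-beta)      # exact failure probability for this g
print(f"MC estimate {pf_mc:.2e}, exact {pf_exact:.2e}")
```

    The small target probability is exactly why plain Monte Carlo needs on the order of a million samples here, and why variance-reduction schemes such as those compared in the article are of interest.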

  16. Preferential sampling in veterinary parasitological surveillance

    Directory of Open Access Journals (Sweden)

    Lorenzo Cecconi

    2016-04-01

    In parasitological surveillance of livestock, prevalence surveys are conducted on a sample of farms using several sampling designs. For example, opportunistic surveys or informative sampling designs are very common. Preferential sampling refers to any situation in which the spatial process and the sampling locations are not independent. Most examples of preferential sampling in the spatial statistics literature are in environmental statistics with a focus on pollutant monitors, and it has been shown that, if preferential sampling is present and is not accounted for in the statistical modelling and data analysis, statistical inference can be misleading. In this paper, working in the context of veterinary parasitology, we propose and use geostatistical models to predict the continuous and spatially-varying risk of a parasite infection. Specifically, breaking with the common practice in veterinary parasitological surveillance of ignoring preferential sampling even though informative or opportunistic samples are very common, we specify a two-stage hierarchical Bayesian model that adjusts for preferential sampling and apply it to data on Fasciola hepatica infection in sheep farms in the Campania region (Southern Italy) in the years 2013-2014.
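
    A minimal numerical illustration (ours, far simpler than the hierarchical Bayesian model the abstract describes) shows why ignoring preferential sampling misleads: when locations are sampled preferentially where the spatial process is high, the naive sample mean overstates the true field mean:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 1000)
field = np.sin(2 * np.pi * grid)          # a known spatial "risk" surface
true_mean = field.mean()                  # ~0 by symmetry

# Preferential design: inclusion probability increases with the field value.
w = np.exp(2 * field)
idx = rng.choice(len(grid), size=200, replace=False, p=w / w.sum())
naive_mean = field[idx].mean()

# Uniform (non-preferential) design for comparison.
uni = rng.choice(len(grid), size=200, replace=False)
uniform_mean = field[uni].mean()
print(f"true {true_mean:.2f}, preferential {naive_mean:.2f}, "
      f"uniform {uniform_mean:.2f}")
```

    The preferential estimate is biased upward even though every individual measurement is exact; only the choice of where to measure is informative, which is precisely the dependence a preferential-sampling model must absorb.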

  17. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    Science.gov (United States)

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246
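
    The design effect at the center of this abstract can be computed directly by simulation. The toy example below (our construction; it uses a plain random walk as a stand-in for RDS-style chain referral, not the NSM algorithm) contrasts a walk-based sample with SRS on a ring network carrying a clustered trait:

```python
import numpy as np

# DE = Var(estimator under the network design) / Var(estimator under SRS).
rng = np.random.default_rng(0)
N, n, reps = 1000, 50, 2000
trait = (np.arange(N) < N // 2).astype(float)   # clustered: first half = 1

def random_walk_mean(rng):
    """Mean trait along an n-step random walk from a uniform start node."""
    node = rng.integers(N)
    vals = []
    for _ in range(n):
        vals.append(trait[node])
        node = (node + rng.choice([-1, 1])) % N  # step to a ring neighbor
    return np.mean(vals)

walk_means = np.array([random_walk_mean(rng) for _ in range(reps)])
srs_means = np.array([trait[rng.choice(N, n, replace=False)].mean()
                      for _ in range(reps)])
design_effect = walk_means.var() / srs_means.var()
print(f"design effect of the walk-based sample: {design_effect:.1f}")
```

    Because consecutive walk steps land on near-identical nodes, the walk-based estimator has a far larger variance than SRS, which is the inefficiency that memory-based schemes like NSM are designed to remove.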

  18. Issues in environmental survey design

    International Nuclear Information System (INIS)

    Iachan, R.

    1989-01-01

    Several environmental survey design issues are discussed and illustrated with surveys designed by Research Triangle Institute statisticians. Issues related to sampling and nonsampling errors are illustrated for indoor air quality surveys, radon surveys, pesticide surveys, and occupational and personal exposure surveys. Sample design issues include the use of auxiliary information (e.g. for stratification), and sampling in time. We also discuss the reduction and estimation of nonsampling errors, including nonresponse and measurement bias.

  19. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  20. Mini MAX - Medicaid Sample

    Data.gov (United States)

    U.S. Department of Health & Human Services — To facilitate wider use of MAX, CMS contracted with Mathematica to convene a technical expert panel (TEP) and determine the feasibility of creating a sample file for...