WorldWideScience

Sample records for sampling strategy allowed

  1. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result that is representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. In the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years, resulting in a database containing the analytical results of 2570 soil stockpiles. In the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed produced a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
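    The central claim above, that compositing many random increments makes a single analytical result representative of the stockpile mean, can be illustrated with a minimal simulation. The lognormal concentration model, the increment counts, and the seed are assumptions chosen for illustration, not the validated Dutch protocol:

```python
import random
import statistics

random.seed(42)

def draw_increment():
    # Skewed increment concentrations: mostly clean soil with
    # occasional contaminated pockets (hypothetical distribution).
    return random.lognormvariate(1.0, 1.0)

def composite_mean(n_increments):
    """Mean concentration of a composite built from n random increments."""
    return statistics.fmean(draw_increment() for _ in range(n_increments))

# Compare the spread of single-increment results against 50-increment
# composites; a roughly sqrt(50)-fold reduction in spread is expected.
singles = [composite_mean(1) for _ in range(500)]
composites = [composite_mean(50) for _ in range(500)]

spread_single = statistics.stdev(singles)
spread_composite = statistics.stdev(composites)
```

Each composite's mean clusters much more tightly around the true stockpile mean than any single increment does, which is why one well-built composite sample can stand in for the whole pile.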

  2. Soil sampling strategies: Evaluation of different approaches

    Energy Technology Data Exchange (ETDEWEB)

    De Zorzi, Paolo [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy)], E-mail: paolo.dezorzi@apat.it; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy); Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia [Agenzia Regionale per la Prevenzione e Protezione dell' Ambiente del Veneto, ARPA Veneto, U.O. Centro Qualita Dati, Via Spalato, 14-36045 Vicenza (Italy)

    2008-11-15

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies generally led to acceptable bias values (D) of less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between the different sampling strategies.

  3. Soil sampling strategies: Evaluation of different approaches

    International Nuclear Information System (INIS)

    De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia

    2008-01-01

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies generally led to acceptable bias values (D) of less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between the different sampling strategies.

  4. Soil sampling strategies: evaluation of different approaches.

    Science.gov (United States)

    de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia

    2008-11-01

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies generally led to acceptable bias values (D) of less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between the different sampling strategies.

  5. A comparative proteomics method for multiple samples based on a 18O-reference strategy and a quantitation and identification-decoupled strategy.

    Science.gov (United States)

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than previously used comparison methods based on transferring comparisons or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Spent nuclear fuel sampling strategy

    International Nuclear Information System (INIS)

    Bergmann, D.W.

    1995-01-01

    This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support decisions concerning the path forward for SNF disposition efforts in the following areas: (1) SNF isolation activities such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology. It is, however, intended to augment the SNF project DQOs. The SNF sampling strategy is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation

  7. Adaptive sampling strategies with high-throughput molecular dynamics

    Science.gov (United States)

    Clementi, Cecilia

    Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is the design of strategies to adaptively distribute the trajectories over the relevant regions of the system's configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems and on optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.

  8. User-driven sampling strategies in image exploitation

    Science.gov (United States)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  9. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

    Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for several purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and the effect of sample size on those estimates. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but RMSE decreased further when information on branch diameter was used in the design and estimation phases. Conclusions: Use of
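    The probability-proportional-to-size (PPS) idea that performs well above can be sketched with a Hansen-Hurwitz estimator: branches are drawn with probability proportional to an auxiliary size variable (diameter squared here), and each sampled biomass is inverse-probability weighted. The branch list, the 0.4·d² biomass relation, and the seed are invented for illustration, not data from the study:

```python
import random

random.seed(7)

# Hypothetical branches: (diameter cm, biomass kg). Biomass roughly tracks
# diameter squared, which is what makes PPS selection efficient here.
branches = [(d, 0.4 * d**2 + random.uniform(-0.5, 0.5))
            for d in (2, 3, 4, 5, 6, 8, 10)]

sizes = [d**2 for d, _ in branches]
probs = [s / sum(sizes) for s in sizes]

def hansen_hurwitz(n):
    """PPS-with-replacement (Hansen-Hurwitz) estimate of total crown
    biomass from only n sampled branches."""
    picks = random.choices(range(len(branches)), weights=probs, k=n)
    return sum(branches[i][1] / probs[i] for i in picks) / n

true_total = sum(b for _, b in branches)
estimate = hansen_hurwitz(3)
```

Because the selection probabilities are nearly proportional to the biomass values themselves, even a three-branch sample lands close to the true total, illustrating why PPS did well with small branch counts.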

  10. Comparison of active and passive sampling strategies for the monitoring of pesticide contamination in streams

    Science.gov (United States)

    Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina

    2014-05-01

    The monitoring of water bodies for organic contaminants, and the determination of reliable estimates of concentrations, are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast, easy, and requires few logistical and analytical resources for low-frequency sampling campaigns. However, this technique lacks representativeness for streams with high variations in contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution, as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic sampling can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain, at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace-level detection and smoothed, integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field application. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate
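    The TWA concentration mentioned above follows from the standard passive-sampler relation C_TWA = m / (Rs · t), where m is the analyte mass accumulated, Rs the lab-calibrated sampling rate, and t the deployment time. A minimal sketch (the numbers are illustrative, not from the study):

```python
def twa_concentration(mass_ng, sampling_rate_l_per_day, days):
    """Time-weighted average concentration (ng/L) in the dissolved phase.

    mass_ng: analyte mass accumulated in the passive sampler
    sampling_rate_l_per_day: Rs, determined by prior lab calibration
    days: field deployment time
    """
    return mass_ng / (sampling_rate_l_per_day * days)

# 120 ng accumulated over a 14-day deployment with Rs = 0.2 L/day
# gives a TWA concentration of about 42.9 ng/L.
twa = twa_concentration(120, 0.2, 14)
```

This is also why the calibration step is unavoidable: without a reliable Rs from controlled conditions, the accumulated mass cannot be converted into a concentration.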

  11. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  12. Comparison of sampling strategies for tobacco retailer inspections to maximize coverage in vulnerable areas and minimize cost.

    Science.gov (United States)

    Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M

    2017-06-23

    In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified sampling clustered at the ZIP code level; and stratified sampling clustered at the census tract level. We conducted a simulation of repeated sampling and compared the approaches on their levels of precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both the stratified, clustered ZIP- and tract-based approaches were feasible. Both ZIP and tract strategies yielded improvements over simple random sampling, with relative improvements, respectively, in average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Use of statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the Food and Drug Administration (FDA). The design is feasible to implement in North Carolina. Use of

  13. Sample preparation composite and replicate strategy for assay of solid oral drug products.

    Science.gov (United States)

    Harrington, Brent; Nickerson, Beverly; Guo, Michele Xuemei; Barber, Marc; Giamalva, David; Lee, Carlos; Scrivens, Garry

    2014-12-16

    In pharmaceutical analysis, the results of drug product assay testing are used to make decisions regarding the quality, efficacy, and stability of the drug product. In order to make sound risk-based decisions concerning drug product potency, an understanding of the uncertainty of the reportable assay value is required. Utilizing the most restrictive criteria in current regulatory documentation, a maximum variability attributed to method repeatability is defined for a drug product potency assay. A sampling strategy that reduces the repeatability component of the assay variability below this predefined maximum is demonstrated. The sampling strategy consists of determining the number of dosage units (k) to be prepared in a composite sample, of which there may be a number of equivalent replicate (r) sample preparations. The variability, as measured by the standard error (SE), of a potency assay arises from several sources, such as sample preparation and dosage unit variability. A sampling scheme that increases the number of sample preparations (r) and/or the number of dosage units (k) per sample preparation will reduce the assay variability and thus decrease the uncertainty around decisions made concerning the potency of the drug product. A maximum allowable repeatability component of the standard error (SE) for the potency assay is derived using material in current regulatory documents. A table of solutions for the number of dosage units per sample preparation (k) and the number of replicate sample preparations (r) is presented for any ratio of sample preparation and dosage unit variability.
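    The composite-and-replicate scheme above can be sketched with a standard variance-components model: averaging r replicate preparations of k pooled dosage units gives SE = sqrt(σ_prep²/r + σ_unit²/(k·r)). The variance model and the numeric inputs below are illustrative assumptions, not the paper's derivation or its regulatory limit:

```python
import math

def assay_se(sigma_prep, sigma_unit, k, r):
    """Standard error of the reportable value for r replicate composite
    preparations of k dosage units each (variance-components model)."""
    return math.sqrt(sigma_prep**2 / r + sigma_unit**2 / (k * r))

def feasible_schemes(sigma_prep, sigma_unit, se_max, k_max=10, r_max=5):
    """All (k, r) pairs whose repeatability SE meets the allowed maximum."""
    return [(k, r)
            for r in range(1, r_max + 1)
            for k in range(1, k_max + 1)
            if assay_se(sigma_prep, sigma_unit, k, r) <= se_max]

# Illustrative variabilities in % label claim (invented numbers):
schemes = feasible_schemes(sigma_prep=1.0, sigma_unit=3.0, se_max=1.5)
```

Tabulating the feasible (k, r) pairs this way reproduces the spirit of the paper's solution table: more replicates shrink both variance components, while more units per composite only shrink the dosage-unit component.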

  14. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  15. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  16. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. The major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree, and average clustering coefficient, are then studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend towards the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased towards the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, obvious characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
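    The no-retracing idea can be sketched in a few lines: at each step the walker excludes the node it just came from, unless that node is the only neighbour. The toy adjacency dict and the dead-end handling are illustrative assumptions, not the paper's exact algorithm:

```python
import random

random.seed(1)

# Toy undirected network as an adjacency dict (hypothetical example).
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 4], 4: [2, 3, 5], 5: [4],
}

def nr_random_walk(start, steps):
    """No-retracing (NR) random walk: never step straight back to the
    previously visited node, unless it is the only neighbour (dead end)."""
    path = [start]
    prev = None
    for _ in range(steps):
        here = path[-1]
        options = [n for n in graph[here] if n != prev] or graph[here]
        nxt = random.choice(options)
        prev = here
        path.append(nxt)
    return path

walk = nr_random_walk(0, 20)
```

Forbidding immediate backtracking reduces path overlap, which is the mechanism the abstract credits for the NR strategy's better clustering-coefficient estimates.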

  17. New sampling strategy using a Bayesian approach to assess iohexol clearance in kidney transplant recipients.

    Science.gov (United States)

    Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourcʼh, P; Blasco, H

    2012-06-01

    Glomerular filtration rate (GFR) measurement is a major issue for clinicians managing kidney transplant recipients. GFR can be determined by estimating the plasma clearance of iohexol, a nonradiolabeled compound. For practical and convenient application for patients and caregivers, it is important that a minimal number of samples be drawn. The aim of this study was to develop and validate a Bayesian model with fewer samples for reliable prediction of GFR in kidney transplant recipients. Iohexol plasma concentration-time curves from 95 patients were divided into an index set (n = 63) and a validation set (n = 32). Samples (n = 4-6 per patient) were obtained during the elimination phase, that is, between 120 and 270 minutes. Individual reference values of iohexol clearance (CL(iohexol)) were calculated from k (elimination slope) and V (volume of distribution from intercept). Individual CL(iohexol) values were then introduced into the Bröchner-Mortensen equation to obtain the GFR (reference value). A population pharmacokinetic model was developed from the index set and validated using standard methods. For the validation set, we tested various combinations of 1, 2, or 3 sampling times to estimate CL(iohexol). For each combination tested, a maximum a posteriori Bayesian estimate of CL(iohexol) was obtained from the population parameters. Individual estimates of GFR were compared with individual reference values through analysis of bias and precision. A capability analysis allowed us to determine the best sampling strategy for Bayesian estimation. A 1-compartment model best described our data. Covariate analysis showed that uremia, serum creatinine, and age were significantly associated with k, and weight with V. The strategy including samples drawn at 120 and 270 minutes allowed accurate prediction of GFR (mean bias: -3.71%, mean imprecision: 7.77%). With this strategy, about 20% of individual predictions were outside the bounds of acceptance set at ± 10
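    The reference-value calculation described above (clearance from the elimination slope k and intercept-derived volume V, then a Bröchner-Mortensen correction) can be sketched for the two-sample 120/270-minute strategy. The patient numbers are synthetic, and the Bröchner-Mortensen coefficients are one adult form quoted in the literature; treat both as assumptions to verify locally:

```python
import math

def slope_intercept_clearance(dose_mg, t1, c1, t2, c2):
    """One-compartment iohexol clearance (mL/min) from two
    elimination-phase samples; times in minutes, concentrations in mg/L."""
    k = math.log(c1 / c2) / (t2 - t1)   # elimination rate constant (slope)
    c0 = c1 * math.exp(k * t1)          # back-extrapolated intercept at t = 0
    v_litres = dose_mg / c0             # apparent volume of distribution
    return k * v_litres * 1000          # L/min -> mL/min

def brochner_mortensen(cl):
    """Correct slope-intercept clearance for the missed early distribution
    phase (coefficients assumed from adult literature, not this paper)."""
    return 0.990778 * cl - 0.001218 * cl**2

# Synthetic patient: 3235 mg iohexol, samples at 120 and 270 minutes.
cl = slope_intercept_clearance(3235, 120, 80.0, 270, 40.0)
gfr = brochner_mortensen(cl)
```

The correction matters because a pure one-compartment fit of late samples overestimates clearance; the quadratic term pulls high values back toward the measured GFR.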

  18. A Bayesian sampling strategy for hazardous waste site characterization

    International Nuclear Information System (INIS)

    Skalski, J.R.

    1987-12-01

    Prior knowledge based on historical records or physical evidence often suggests the existence of a hazardous waste site. Initial surveys may provide additional or even conflicting evidence of site contamination. This article presents a Bayesian sampling strategy that allocates sampling at a site using this prior knowledge. The sampling strategy minimizes the environmental risk of missing chemical or radionuclide hot spots at a waste site. The environmental risk is shown to be proportional to the size of the undetected hot spot, or inversely proportional to the probability of hot spot detection. 12 refs., 2 figs
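    The inverse relationship between risk and detection probability can be made concrete with the classic square-grid result: for a circular hot spot of radius r placed at random, a grid of spacing G detects it with probability πr²/G² when r ≤ G/2 (so the detection circles around grid nodes do not overlap). This standard geometric sketch is an illustration, not the article's Bayesian allocation itself:

```python
import math

def detection_probability(radius, spacing):
    """Probability that a square sampling grid with node spacing G hits a
    circular hot spot of radius r at a uniformly random location.
    Exact for r <= G/2; tighter grids need the overlap-corrected form."""
    if radius > spacing / 2:
        raise ValueError("formula assumes r <= G/2")
    return math.pi * radius**2 / spacing**2

p_coarse = detection_probability(1.0, 10.0)
p_fine = detection_probability(1.0, 5.0)
```

Since the article's environmental risk is inversely proportional to the detection probability, halving the grid spacing quadruples the detection probability and so quarters the risk, which is the trade-off a Bayesian allocation weighs against sampling cost.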

  19. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial resistance monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how the precision and sensitivity of a sampling scheme vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated, employing 1, 2, 3, 4, or 6 animals per farm, with a total of 12 animals sampled under each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and the 2.5-97.5 percentile interval of resistance prevalence were smallest with the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. This proportion was greatest with 1 sample per farm and decreased as more animals were sampled per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
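    The two-stage resampling described above (farms with replacement, then animals within each selected farm) can be sketched on synthetic data. The farm structure, the 20% resistance rate, and the replication count are assumptions for illustration, not the study's dataset:

```python
import random
import statistics

random.seed(3)

# Hypothetical data: farm -> resistance status of each sampled
# animal's isolate (True = resistant), 30 farms x 4 animals.
farms = {f: [random.random() < 0.2 for _ in range(4)] for f in range(30)}

def bootstrap_prevalence(n_reps=1000):
    """Two-stage bootstrap: resample farms with replacement, then
    animals within each selected farm, and record the prevalence."""
    farm_ids = list(farms)
    estimates = []
    for _ in range(n_reps):
        hits = total = 0
        for f in random.choices(farm_ids, k=len(farm_ids)):
            resampled = random.choices(farms[f], k=len(farms[f]))
            hits += sum(resampled)
            total += len(resampled)
        estimates.append(hits / total)
    return estimates

est = bootstrap_prevalence()
spread = statistics.stdev(est)
```

Repeating this for each candidate allocation (1, 2, 3, 4, or 6 animals per farm, total held fixed) and comparing the spreads is exactly how the precision comparison in the abstract works.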

  20. Effective sampling strategy to detect food and feed contamination

    NARCIS (Netherlands)

    Bouzembrak, Yamine; Fels, van der Ine

    2018-01-01

    Sampling plans for food safety hazards are used to determine whether a food lot is contaminated (with microbiological or chemical hazards) or not. One of the components of a sampling plan is the sampling strategy. The aim of this study was to compare the performance of three

  21. Novel strategies for sample preparation in forensic toxicology.

    Science.gov (United States)

    Samanidou, Victoria; Kovatsi, Leda; Fragou, Domniki; Rentifis, Konstantinos

    2011-09-01

    This paper provides a review of novel strategies for sample preparation in forensic toxicology. The review initially outlines the principle of each technique, followed by sections addressing each class of abused drugs separately. The novel strategies reviewed here focus on the preparation of various biological samples for the subsequent determination of opiates, benzodiazepines, amphetamines, cocaine, hallucinogens, tricyclic antidepressants, antipsychotics, and cannabinoids. In our experience, these analytes are those most frequently responsible for intoxications in Greece. The applications of techniques such as disposable pipette extraction, microextraction by packed sorbent, matrix solid-phase dispersion, solid-phase microextraction, polymer monolith microextraction, and stir bar sorptive extraction, which are rapidly gaining acceptance in the field of toxicology, are reviewed.

  2. Limited-sampling strategies for anti-infective agents: systematic review.

    Science.gov (United States)

    Sprague, Denise A; Ensom, Mary H H

    2009-09-01

    Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or

  3. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    Shangguan Danhua; Bao Jingdong

    2010-01-01

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2d Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
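A minimal sketch of the idea on a double-well potential, under the simplifying assumption that the modified potential U_s is a scaled-down copy of the target U (the paper's actual decomposition and its recycling step are not reproduced here). Because U_s ≤ U everywhere and both vanish at the minima, trial samples drawn in the flattened potential can be thinned with acceptance probability exp(-(U - U_s)) to recover the original Boltzmann distribution exactly:

```python
import math
import random

def U(x):      # double well: barrier height 6 at x = 0, minima at x = +/-1
    return 6.0 * (x * x - 1.0) ** 2

def U_s(x):    # flattened ("modified") potential: barrier 1.5, easy mixing
    return U(x) / 4.0

def metropolis(potential, n, step=0.8, x0=1.0, seed=3):
    """Plain Metropolis chain sampling exp(-potential(x))."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        if math.log(rng.random() + 1e-300) < potential(x) - potential(y):
            x = y
        out.append(x)
    return out

rng = random.Random(4)
chain = metropolis(U_s, 50000)
# Reject trial samples with probability 1 - exp(-(U - U_s)); since U_s <= U,
# the acceptance weight is <= 1 and the kept samples follow exp(-U) unbiasedly.
kept = [x for x in chain if rng.random() < math.exp(-(U(x) - U_s(x)))]
frac_right = sum(1 for x in kept if x > 0) / len(kept)
print(f"kept {len(kept)} samples; fraction in right well: {frac_right:.2f}")
```

A direct Metropolis chain in U itself, started in one well, would rarely cross the barrier of 6 kT; the chain in U_s crosses its 1.5 kT barrier freely, so both wells end up represented near the symmetric 50/50 split.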

  4. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  5. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
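The sparse-recovery step these records describe can be illustrated with a toy ℓ1 solver: plain ISTA (iterative soft thresholding) applied to a synthetic one-dimensional Legendre expansion sampled under the natural uniform distribution. This is only a minimal sketch of ℓ1-minimization; the paper's coherence-optimal MCMC sampler and high-dimensional setting are not implemented:

```python
import math
import random

def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

P, N = 8, 60
true_c = [0.0] * P
true_c[1], true_c[4] = 1.0, -0.5            # sparse "PC" coefficients

rng = random.Random(5)
xs = [rng.uniform(-1, 1) for _ in range(N)]  # natural (uniform) sampling
A = [[legendre(j, x) for j in range(P)] for x in xs]
y = [sum(true_c[j] * row[j] for j in range(P)) for row in A]

# ISTA: gradient step on the least-squares term, then the l1 proximal map
# (soft thresholding); lam is the lasso penalty, step the gradient step size.
c = [0.0] * P
step, lam = 0.005, 0.01
for _ in range(5000):
    r = [sum(A[i][j] * c[j] for j in range(P)) - y[i] for i in range(N)]
    g = [sum(A[i][j] * r[i] for i in range(N)) for j in range(P)]
    c = [ci - step * gi for ci, gi in zip(c, g)]
    c = [math.copysign(max(abs(ci) - step * lam, 0.0), ci) for ci in c]

print("recovered coefficients:", [round(ci, 2) for ci in c])
```

With noiseless data and more samples than basis functions, the two nonzero coefficients are recovered almost exactly and the rest are driven to (near) zero; the interest of the papers' sampling strategies is precisely in keeping this recovery reliable when samples are scarce.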

  6. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model, AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
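The payoff of a sparse optimal sampling strategy can be illustrated numerically. The sketch below uses a hypothetical one-compartment Bateman profile with invented parameters (the paper's two-compartment model with enterohepatic recycling is not reproduced) and compares a trapezoidal AUC over the abstract's seven OSS times, padded with 0 and 24 h endpoints, against a dense grid:

```python
import math

def conc(t, f_dose=75.0, ka=2.0, ke=0.35, vol=50.0):
    """Hypothetical one-compartment oral-absorption profile (Bateman equation);
    a stand-in for the paper's two-compartment model with recycling."""
    return (f_dose * ka / (vol * (ka - ke))) * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_trapezoid(times):
    """Trapezoidal AUC over the given sampling times."""
    return sum((t2 - t1) * (conc(t1) + conc(t2)) / 2.0
               for t1, t2 in zip(times, times[1:]))

dense = [i * 0.05 for i in range(481)]        # 0 to 24 h, fine grid
oss = [0, 0.25, 1, 2, 4, 5, 6, 12, 24]        # OSS times from the abstract (+ endpoints)
auc_dense, auc_oss = auc_trapezoid(dense), auc_trapezoid(oss)
print(f"AUC dense: {auc_dense:.2f}, AUC sparse: {auc_oss:.2f}, "
      f"bias: {100 * (auc_oss - auc_dense) / auc_dense:.1f}%")
```

Even this naive trapezoidal comparison shows why a well-placed handful of samples (clustered around the peak, with a late elimination point) can approximate the full-profile exposure to within a few percent.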

  7. Decision Tree and Survey Development for Support in Agricultural Sampling Strategies during Nuclear and Radiological Emergencies

    International Nuclear Information System (INIS)

    Yi, Amelia Lee Zhi; Dercon, Gerd

    2017-01-01

    In the event of a severe nuclear or radiological accident, the release of radionuclides results in contamination of land surfaces affecting agricultural and food resources. Rapid collection of information and clear decision-making guidance are essential to enable stakeholders to plan immediate countermeasures. Support tools such as decision trees and sampling protocols allow for a swift response by governmental bodies and assist in proper management of the situation. While such tools exist, they focus mainly on protecting public well-being rather than on food safety management strategies. Consideration of the latter is necessary because it has long-term implications, especially for agriculturally dependent Member States; however, this research gap remains to be filled.

  8. A simulation approach to assessing sampling strategies for insect pests: an example with the balsam gall midge.

    Directory of Open Access Journals (Sweden)

    R Drew Carleton

    Full Text Available Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
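The "pre-sampling plus simulation" idea can be sketched as follows, with a synthetic clumped stand of per-tree counts (all parameters invented; this is not the authors' software). A negative binomial census stands in for pre-sampling data, and resampling shows how the error of the estimated mean shrinks toward the n ~ 25-40 range the abstract reports:

```python
import math
import random
import statistics

rng = random.Random(6)

def poisson(lam):
    """Knuth's Poisson sampler (adequate for the small rates used here)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def neg_binomial(mean, k):
    """Clumped counts via a gamma-Poisson mixture (negative binomial, clumping k)."""
    return poisson(rng.gammavariate(k, mean / k))

# "Pre-sampling": one synthetic census of per-tree gall counts for a stand
stand = [neg_binomial(mean=3.0, k=0.5) for _ in range(2000)]
true_mean = statistics.fmean(stand)

def rel_error(n, reps=500):
    """Median relative error of the sample mean at sample size n (resampling)."""
    errs = [abs(statistics.fmean(rng.sample(stand, n)) - true_mean) / true_mean
            for _ in range(reps)]
    return statistics.median(errs)

for n in (10, 25, 40):
    print(f"n={n:3d}  median relative error of mean: {rel_error(n):.2f}")
```

Running such a resampling curve on real pre-sampling data is the decision tool the paper advocates: it reveals the sample size at which the mean stabilizes, without fitting a shared negative binomial or warming up a sequential plan.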

  9. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling-strategies.

    Science.gov (United States)

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the high-sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When requiring test results at the individual level, smaller pools need to be analysed, with the pool size based on expected prevalence or, when unknown, on the average CAA level of a larger group; CAA negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, because the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure indicating worm burden. Pooling strategies allowing this type of large scale testing are feasible with the various CAA strip test formats and do not affect

  10. Measurement of radioactivity in the environment - Soil - Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples

    International Nuclear Information System (INIS)

    2007-01-01

    This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes inform about selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, diagram of the evolution of the sample characteristics from the sampling site to the laboratory, example of sampling plan for a site divided in three sampling areas, example of a sampling record for a single/composite sample and example for a sample record for a soil profile with soil description. A bibliography is provided

  11. Adaptive Angular Sampling for SPECT Imaging

    OpenAIRE

    Li, Nan; Meng, Ling-Jian

    2011-01-01

    This paper presents an analytical approach for performing adaptive angular sampling in single photon emission computed tomography (SPECT) imaging. It allows for a rapid determination of the optimum sampling strategy that minimizes image variance in regions-of-interest (ROIs). The proposed method consists of three key components: (a) a set of close-form equations for evaluating image variance and resolution attainable with a given sampling strategy, (b) a gradient-based algor...

  12. Mars Sample Return - Launch and Detection Strategies for Orbital Rendezvous

    Science.gov (United States)

    Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.

    2011-01-01

    This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/cache rover in 2018, an orbiter with an Earth return vehicle in 2022, and a fetch rover and ascent vehicle in 2024. Strategies are presented to launch the sample into a coplanar orbit with the Orbiter which facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which provide multiple launch opportunities with similar geometries for detection and rendezvous.

  13. Mars Sample Return: Launch and Detection Strategies for Orbital Rendezvous

    Science.gov (United States)

    Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.

    2011-01-01

    This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/caching rover in 2018, an Earth return orbiter in 2022, and a fetch rover with ascent vehicle in 2024. Strategies are presented to launch the sample into a nearly coplanar orbit with the Orbiter which would facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which would provide multiple launch opportunities with similar geometries for detection and rendezvous.

  14. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  15. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments was used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data was used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. Seventeen different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method in reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences in taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated to the 90% confidence interval ratio of the estimated/measured SMCs (R² = 0.49; P < 0.05), which can be used to estimate the sampling frequency needed to accurately estimate SMCs of pollutants.
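The volume-weighted SMC computation, and a one-random-grab-per-event strategy like those the study evaluates, can be sketched with synthetic events (all concentrations and volumes below are invented, not the study's data):

```python
import random
import statistics

rng = random.Random(7)

# Synthetic monitoring record: per event, a runoff volume and a within-event
# series of 1-min concentrations varying log-normally around an event mean.
events = []
for _ in range(27):                              # ~27 events, as suggested for TSS
    emc_scale = rng.lognormvariate(4.0, 0.6)     # event concentration scale, mg/L
    volume = rng.uniform(50, 500)                # event runoff volume, m3
    series = [emc_scale * rng.lognormvariate(0.0, 0.3) for _ in range(60)]
    events.append((volume, statistics.fmean(series), series))

# "Measured" SMC: volume-weighted average of the event mean concentrations
smc = (sum(v * emc for v, emc, _ in events) /
       sum(v for v, _, _ in events))

# Random sampling strategy: estimate each EMC from one random grab per event
smc_1grab = (sum(v * rng.choice(series) for v, _, series in events) /
             sum(v for v, _, _ in events))

print(f"measured SMC: {smc:.0f} mg/L, one-random-grab estimate: {smc_1grab:.0f} mg/L")
```

With enough events, the single-grab estimate lands close to the measured SMC, illustrating the study's finding that extra within-event samples buy little once many events are monitored.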

  16. NAIMA: target amplification strategy allowing quantitative on-chip detection of GMOs.

    Science.gov (United States)

    Morisset, Dany; Dobnik, David; Hamels, Sandrine; Zel, Jana; Gruden, Kristina

    2008-10-01

    We have developed a novel multiplex quantitative DNA-based target amplification method suitable for sensitive, specific and quantitative detection on microarray. This new method, named NASBA Implemented Microarray Analysis (NAIMA), was applied to GMO detection in food and feed, but its application can be extended to all fields of biology requiring simultaneous detection of low copy number DNA targets. In a first step, the use of tailed primers allows the multiplex synthesis of template DNAs in a primer extension reaction. The second step of the procedure consists of transcription-based amplification using universal primers. The cRNA product is then directly ligated to fluorescent dye-labelled 3DNA dendrimers, allowing signal amplification, and hybridized without further purification to an oligonucleotide probe-based microarray for multiplex detection. Two triplex systems have been applied to test maize samples containing several transgenic lines, and NAIMA has been shown to be sensitive down to two target copies and to provide quantitative data on transgenic contents in a range of 0.1-25%. The performance of NAIMA is comparable to that of singleplex quantitative real-time PCR. In addition, NAIMA amplification is faster, since 20 min are sufficient to achieve full amplification.

  17. Mendelian breeding units versus standard sampling strategies: mitochondrial DNA variation in southwest Sardinia

    Directory of Open Access Journals (Sweden)

    Daria Sanna

    2011-01-01

    Full Text Available We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.

  18. Sampling strategies for indoor radon investigations

    International Nuclear Information System (INIS)

    Prichard, H.M.

    1983-01-01

    Recent investigations prompted by concern about the environmental effects of residential energy conservation have produced many accounts of indoor radon concentrations far above background levels. In many instances, time-normalized annual exposures exceeded the 4 WLM per year standard currently used for uranium mining. Further investigations of indoor radon exposures are necessary to judge the extent of the problem and to estimate the practicality of health effects studies. A number of trends can be discerned as more indoor surveys are reported. It is becoming increasingly clear that local geological factors play a major, if not dominant, role in determining the distribution of indoor radon concentrations in a given area. Within a given locale, indoor radon concentrations tend to be log-normally distributed, and sample means differ markedly from one region to another. The appreciation of geological factors and the general log-normality of radon distributions will improve the accuracy of population dose estimates and facilitate the design of preliminary health effects studies. The relative merits of grab samples, short and long term integrated samples, and more complicated dose assessment strategies are discussed in the context of several types of epidemiological investigations. A new passive radon sampler with a 24 hour integration time is described and evaluated as a tool for pilot investigations.
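The log-normality observation drives how such survey data are summarized. A small sketch with a hypothetical geometric mean and geometric standard deviation shows the standard trick of recovering the population (arithmetic) mean from log-scale statistics via exp(mu + sigma^2/2):

```python
import math
import random
import statistics

rng = random.Random(8)

# Indoor radon concentrations tend to be log-normal within a locale;
# the geometric mean and GSD below are invented for illustration.
gm, gsd = 40.0, 2.5                  # geometric mean (Bq/m3) and geometric SD
mu, sigma = math.log(gm), math.log(gsd)
survey = [rng.lognormvariate(mu, sigma) for _ in range(400)]

logs = [math.log(x) for x in survey]
mu_hat, sigma_hat = statistics.fmean(logs), statistics.stdev(logs)

arithmetic_mean = statistics.fmean(survey)
lognormal_mean = math.exp(mu_hat + sigma_hat ** 2 / 2)   # population-mean estimate
print(f"arithmetic mean: {arithmetic_mean:.1f}, "
      f"log-normal estimate of the mean: {lognormal_mean:.1f}, "
      f"geometric mean: {math.exp(mu_hat):.1f}")
```

The geometric mean sits well below the arithmetic mean for skewed data like these, which is why population dose estimates based naively on typical (median-like) values can understate exposure.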

  19. Chapter 2: Sampling strategies in forest hydrology and biogeochemistry

    Science.gov (United States)

    Roger C. Bales; Martha H. Conklin; Branko Kerkez; Steven Glaser; Jan W. Hopmans; Carolyn T. Hunsaker; Matt Meadows; Peter C. Hartsough

    2011-01-01

    Many aspects of forest hydrology have been based on accurate but not necessarily spatially representative measurements, reflecting the measurement capabilities that were traditionally available. Two developments are bringing about fundamental changes in sampling strategies in forest hydrology and biogeochemistry: (a) technical advances in measurement capability, as is...

  20. Sampling strategy to develop a primary core collection of apple ...

    African Journals Online (AJOL)

    PRECIOUS

    2010-01-11

    Jan 11, 2010 ... Physiology and Molecular Biology for Fruit, Tree, Beijing 100193, China. ... analyzed on genetic diversity to ensure their represen- .... strategy, cluster and random sampling. .... on isozyme data―A simulation study, Theor.

  1. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.

  2. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    Science.gov (United States)

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.

  3. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
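Optimising a stratified design from pilot-survey variances is classically done with Neyman allocation. A sketch with invented house categories and dose-rate standard deviations (not the Norwegian study's figures):

```python
def neyman_allocation(strata, total_n):
    """Neyman allocation: sample size per stratum proportional to N_h * S_h,
    which minimizes the variance of the stratified mean for a fixed total_n."""
    weights = {h: size * sd for h, (size, sd) in strata.items()}
    total_w = sum(weights.values())
    return {h: max(1, round(total_n * w / total_w)) for h, w in weights.items()}

# Hypothetical house categories from a pilot survey:
# (number of houses in category, within-category SD of indoor dose rate)
pilot = {
    "wood, slab-on-grade": (12000, 20.0),
    "wood, crawl space":   (8000, 35.0),
    "concrete, basement":  (5000, 60.0),
}
print(neyman_allocation(pilot, total_n=400))
```

The most variable stratum gets the most measurements even though it contains the fewest houses, which is the precision-per-measurement logic behind optimising a stratified survey from pilot data.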

  4. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
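The "random sampling + ratio reweighting" idea can be sketched with a synthetic field whose true transgene rate is correlated with a stand-in for the gene-flow model output (all numbers invented; this is the textbook ratio estimator, not the authors' implementation):

```python
import random
import statistics

rng = random.Random(9)

# Grain-sampling locations: an auxiliary gene-flow prediction x (expected
# cross-pollination, arbitrary units) and a correlated true local rate y.
field = []
for _ in range(1000):
    x = rng.expovariate(1.0)
    y = max(0.0, 0.004 * x + rng.gauss(0.0, 0.001))
    field.append((x, y))

true_rate = statistics.fmean(y for _, y in field)
x_bar_pop = statistics.fmean(x for x, _ in field)   # auxiliary mean, known field-wide

def ratio_estimate(n):
    """Random sampling + ratio reweighting with the auxiliary variable."""
    sample = rng.sample(field, n)
    y_bar = statistics.fmean(y for _, y in sample)
    x_bar = statistics.fmean(x for x, _ in sample)
    return y_bar * x_bar_pop / x_bar     # reweight by the known auxiliary mean

est = ratio_estimate(50)
print(f"true rate: {true_rate:.4f}, ratio estimate (n=50): {est:.4f}")
```

Because the model output explains most of the spatial variation in transgene presence, the ratio correction cancels the luck of where the random sample fell, which is how the auxiliary variable lets a smaller sample reach the same accuracy.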

  5. Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith

    2010-09-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.

  6. Sampling strategies for the analysis of reactive low-molecular weight compounds in air

    NARCIS (Netherlands)

    Henneken, H.

    2006-01-01

    Within this thesis, new sampling and analysis strategies for the determination of airborne workplace contaminants have been developed. Special focus has been directed towards the development of air sampling methods that involve diffusive sampling. In an introductory overview, the current

  7. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized, and particulate surface contamination sampled from small areas on a table is used to illustrate the method. First, the spatial correlation is modelled and its parameters are estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method.
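A minimal sketch of the kriging step mentioned above, for a 1-D transect and an assumed exponential semivariogram (the data and variogram parameters are illustrative, not from the study):

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, gamma):
    """Ordinary kriging prediction at x0 from observations zs at xs,
    given a semivariogram gamma(h).  Solves the kriging system with a
    Lagrange multiplier enforcing weights that sum to one."""
    n = len(xs)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(abs(xs[i] - xs[j]))
    b = np.append([gamma(abs(x - x0)) for x in xs], 1.0)
    w = np.linalg.solve(A, b)
    return float(np.dot(w[:n], zs))

gamma = lambda h: 1.0 - np.exp(-h / 2.0)   # assumed exponential model
xs, zs = [0.0, 1.0, 3.0], [10.0, 12.0, 11.0]
z_hat = ordinary_kriging(xs, zs, 2.0, gamma)   # predict between samples
```

Because gamma(0) = 0 here, the predictor interpolates the data exactly, and the sum-to-one constraint makes it unbiased for any constant field.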

  8. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    Science.gov (United States)

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so that errors in solute load calculations can be taken into account by landscape managers, and sampling strategies can be optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
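As a toy illustration of how a load estimate depends on discrete samples, a period-weighted calculation assigns each sample's instantaneous flux to the interval it represents; the times, concentrations, and flows below are hypothetical:

```python
def period_weighted_load(times_h, conc_mg_L, flow_m3_s):
    """Period-weighted solute load in kg: each sample's instantaneous
    flux (mg/L * m^3/s = g/s) is assumed to hold until the next sample."""
    load_g = 0.0
    for i in range(len(times_h) - 1):
        dt_s = (times_h[i + 1] - times_h[i]) * 3600.0
        load_g += conc_mg_L[i] * flow_m3_s[i] * dt_s
    return load_g / 1000.0

# Hypothetical hourly samples over a 2 h window
load_kg = period_weighted_load([0.0, 1.0, 2.0], [2.0, 2.0, 2.0], [5.0, 5.0, 5.0])
```

With sparse sampling, the error of such a scheme is driven by how much the true flux varies between samples, which is why the study finds frequency matters more for precision than for accuracy.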

  9. Perspectives on land snails - sampling strategies for isotopic analyses

    Science.gov (United States)

    Kwiecien, Ola; Kalinowski, Annika; Kamp, Jessica; Pellmann, Anna

    2017-04-01

Since the seminal works of Goodfriend (1992), several substantial studies have confirmed a relation between the isotopic composition of land snail shells (d18O, d13C) and environmental parameters like precipitation amount, moisture source, temperature and vegetation type. This relation, however, is not straightforward and is site dependent. The choice of sampling strategy (discrete or bulk sampling) and cleaning procedure (several methods can be used, but a comparison of their effects on an individual shell has not yet been achieved) further complicates the shell analysis. The advantage of using snail shells as an environmental archive lies in the snails' limited mobility, and therefore an intrinsic aptitude for recording local and site-specific conditions. Also, snail shells are often found at dated archaeological sites. An obvious drawback is that shell assemblages rarely make up a continuous record, and a single shell is only a snapshot of the environmental setting at a given time. Shells from archaeological sites might represent a dietary component, and cooking would presumably alter the isotopic signature of the aragonite material. Consequently, a proper sampling strategy is of great importance and should be adjusted to the scientific question. Here, we compare and contrast different sampling approaches using modern shells collected in Morocco, Spain and Germany. The bulk shell approach (fine-ground material) yields information on mean environmental parameters within the life span of the analyzed individuals. However, despite homogenization, replicate measurements of bulk shell material returned results with a variability greater than analytical precision (up to 2‰ for d18O, and up to 1‰ for d13C), calling for caution when analyzing only single individuals. Horizontal high-resolution sampling (single drill holes along growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line

  10. Sample preparation strategies for food and biological samples prior to nanoparticle detection and imaging

    DEFF Research Database (Denmark)

    Larsen, Erik Huusfeldt; Löschner, Katrin

    2014-01-01

Transmission electron microscopy (TEM) proved to be necessary for troubleshooting of results obtained from AFFF-LS-ICP-MS. Aqueous and enzymatic extraction strategies were tested for thorough sample preparation, aiming at degrading the sample matrix and liberating the AgNPs from chicken meat into liquid suspension. The resulting AFFF-ICP-MS fractograms, which corresponded to the enzymatic digests, showed a major nano-peak (about 80% recovery of AgNPs spiked to the meat) plus new smaller peaks that eluted close to the void volume of the fractograms. Small but significant shifts in the retention time of AFFF peaks were observed for the meat sample extracts relative to the corresponding neat AgNP suspension, which rendered sizing by way of calibration with AgNPs as sizing standards inaccurate. In order to gain further insight into the sizes of the separated AgNPs, or their possible dissolved state, fractions of the AFFF eluate were collected

  11. Sampling strategy for estimating human exposure pathways to consumer chemicals

    NARCIS (Netherlands)

    Papadopoulou, Eleni; Padilla-Sanchez, Juan A.; Collins, Chris D.; Cousins, Ian T.; Covaci, Adrian; de Wit, Cynthia A.; Leonards, Pim E.G.; Voorspoels, Stefan; Thomsen, Cathrine; Harrad, Stuart; Haug, Line S.

    2016-01-01

    Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway.

  12. Dried blood spot measurement: application in tacrolimus monitoring using limited sampling strategy and abbreviated AUC estimation.

    Science.gov (United States)

    Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo

    2008-02-01

Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, this is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method to tacrolimus level monitoring using the limited sampling strategy and abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12) estimated from venous blood samples and from fingerprick blood samples (r(2) = 0.96, P < 0.001), supporting the use of the DBS method with the limited sampling strategy and abbreviated AUC(0-12) approach for drug monitoring.

  13. Focusing and non-focusing modulation strategies for the improvement of on-line two-dimensional hydrophilic interaction chromatography × reversed phase profiling of complex food samples.

    Science.gov (United States)

    Montero, Lidia; Ibáñez, Elena; Russo, Mariateresa; Rastrelli, Luca; Cifuentes, Alejandro; Herrero, Miguel

    2017-09-08

Comprehensive two-dimensional liquid chromatography (LC × LC) is gaining ever more interest in food analysis, as food-related samples are often too complex to be analyzed through one-dimensional approaches. The combination of hydrophilic interaction chromatography (HILIC) with reversed-phase (RP) separations has already been demonstrated to be a highly orthogonal coupling, which allows increased resolving power to be attained. However, this coupling encompasses different analytical challenges, mainly related to the important solvent-strength mismatch between the two dimensions, besides those common to every LC × LC method. In the present contribution, different strategies are proposed and compared to further increase HILIC × RP method performance for the analysis of complex food samples, using licorice as a model sample. The influence of different parameters in non-focusing modulation methods based on sampling loops, as well as under focusing modulation, through the use of trapping columns in the interface and through active modulation procedures, is studied in order to produce gains in resolving power and sensitivity. Although the use of a dilution strategy with sampling loops, as well as the highest possible first-dimension sampling rate, allowed significant improvements in resolution, focusing modulation also produced significant gains in peak capacity and sensitivity. Overall, the obtained results demonstrate the great applicability and potential that active modulation may have for the analysis of complex food samples, such as licorice, by HILIC × RP. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
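Of the three Monte Carlo selection methods listed, Latin hypercube sampling is easily sketched: each parameter's range is cut into n equal strata, one value is drawn per stratum, and the strata are shuffled independently per parameter. A minimal illustration on the unit square (the seed and dimensions are arbitrary):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n Latin hypercube samples on [0, 1)^dims: each axis is cut into
    n equal strata, one point lands in each stratum, and the stratum
    order is shuffled independently per axis."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        order = list(range(n))
        rng.shuffle(order)
        for i in range(n):
            samples[i][d] = (order[i] + rng.random()) / n
    return samples

pts = latin_hypercube(10, 2)
```

Compared with simple random sampling, every marginal distribution is covered evenly even for small n, which is why probabilistic assessment codes favor this scheme when each simulation run is expensive.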

  15. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. Planning pressure observations in terms of spatial distribution and number is named sampling design, and it has historically been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  16. Recruiting hard-to-reach United States population sub-groups via adaptations of snowball sampling strategy

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith

    2011-01-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089

  17. Sampling and analyte enrichment strategies for ambient mass spectrometry.

    Science.gov (United States)

    Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei

    2018-01-01

Ambient mass spectrometry provides great convenience for fast screening and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine modes of representative ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract Scheme of sampling strategies for ambient mass spectrometry.

  18. Searching out the hydrogen absorption/desorption limiting reaction factors: Strategies allowing to increase kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Zeaiter, Ali, E-mail: ali.zeaiter@femto-st.fr; Chapelle, David; Nardin, Philippe

    2015-10-05

Highlights: • A macro-scale thermodynamic model that simulates the response of a FeTi-X hydride tank is developed and validated experimentally. • A sensitivity study identifies the most influential input variables, which can change the reaction rate very markedly. - Abstract: Hydrogen gas has become one of the most promising energy carriers. A main breakthrough concerns solid-state hydrogen storage, especially based on intermetallic materials. Given the abundance and cost of the raw materials, the AB-type alloy FeTi is an auspicious candidate for storing hydrogen. Its absorption/desorption kinetics, however, is a basic hindrance to common use compared with more usual hydrides. First, discussions based on the literature help us identify the successive steps leading to metal hydriding and allow us to introduce the physical parameters that drive or limit the reaction. This analysis leads us to suggest strategies to increase absorption/desorption kinetics. Attention is then paid to a thermo-fluid-dynamic model describing a macroscopic solid-storage reactor. Thus, we can achieve a simulation that describes the overall reaction inside the hydrogen reactor and, by varying the above-mentioned parameters (thermal conductivity, powder granularity, environment heat exchange…), we attempt to rank the reaction-limiting factors. These simulations are correlated with absorption/desorption experiments in which pressure, temperature and hydrogen flow are recorded.

  19. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    F. Raicich

    2003-01-01

For the first time in the Mediterranean Sea, various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied. The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies yield a 25-40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized, in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverage, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP has little impact, as a consequence of a sampling frequency that is too low. Key words. Oceanography: general (marginal and semi-enclosed seas); numerical modelling


  1. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of snail surveys. Methods A snail sampling strategy using plant abundance as an auxiliary variable (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA) was explored in an experimental study on a 50 m × 50 m plot in a marshland of the Poyang Lake region. First, the push-broom survey data were stratified into 5 layers by the plant abundance data; second, the required number of optimal sampling points for each layer was calculated with the Hammond-McCullagh equation; third, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; finally, the outcomes of spatial random sampling, traditional systematic sampling, spatial stratified sampling, Sandwich spatial sampling and inference, and SOPA were compared. Results The method proposed in this study (SOPA) had the smallest absolute error, 0.2138; traditional systematic sampling produced the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy proposed in this study (SOPA) achieves higher estimation accuracy than the other four methods.

  2. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.
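The discharge-weighting mentioned above can be sketched as a flow-weighted mean over the sampled verticals of a cross-section; the concentrations and flows below are hypothetical, not Mississippi data:

```python
def discharge_weighted_mean(concs, flows):
    """Discharge-weighted mean concentration over a river cross-section:
    each sampled vertical is weighted by the flow it represents."""
    return sum(c * q for c, q in zip(concs, flows)) / sum(flows)

# Hypothetical verticals: suspended-sediment conc. (mg/L), flow (m^3/s)
c_bar = discharge_weighted_mean([120.0, 150.0, 90.0], [200.0, 500.0, 300.0])
```

Multiplying the weighted mean concentration by the total cross-sectional discharge then converts concentrations into the constituent fluxes the study reports.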

  3. Analyzing electric utility NO{sub x} emission allowance trading strategies

    Energy Technology Data Exchange (ETDEWEB)

    Selker, F.

    2005-04-01

This article presented a computer model designed to help power producers negotiate the nitrogen oxide (NO{sub x}) emission allowance (EA) market. Created in 1999, the EA market poses a serious constraint to utilities and has the potential to substantially increase total power production costs and to force plant shutdowns if emissions exceed limits. The market was created in response to the 1990 Clean Air Act Amendments, with the goal of cost-effectively reducing summer ozone levels. Over 450 sources in the Northeast region receive an allocation of NO{sub x} allowances to cover their NO{sub x} emissions during the May to September period. Various uncertainties created by the market were examined, including late-summer heat waves and nuclear outages, both of which could boost emissions at times when offsets are difficult to obtain. Weather, planning and plant outages were also discussed. Supply shortages were considered, along with issues concerning the model's ability to assess options and uncertainties. The feasibility of an emissions allowance inventory acting as a viable buffer was also evaluated. It was noted that the net cost of buying and selling allowances during the NO{sub x} season varied with inventory levels. A hypothetical analysis of a NO{sub x} inventory was presented. It was suggested that purchasing options to buy allowances offered another hedge against NO{sub x} EA shortages and noncompliance. It was concluded that the model allowed users to explore the cost and risk tradeoffs of various combinations. 5 figs.

  4. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  5. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software

    OpenAIRE

    Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by vir...

  6. Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.

    Science.gov (United States)

    Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge

    2017-02-22

    Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
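Step (iii) of the strategy above, probabilistic quotient normalization, can be sketched as follows; this is a minimal version assuming the median spectrum of the dataset as the reference, applied directly to a raw intensity matrix rather than to signal-corrected data:

```python
import numpy as np

def pqn(intensities):
    """Probabilistic quotient normalization: each sample (row) is
    divided by the median of its feature-wise quotients against a
    reference spectrum, here the median spectrum of the dataset."""
    X = np.asarray(intensities, dtype=float)
    reference = np.median(X, axis=0)
    factors = np.median(X / reference, axis=1)   # per-sample dilution
    return X / factors[:, None]

# A twofold-diluted copy of a sample is mapped back onto it
out = pqn([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
```

Using the median quotient rather than the total signal makes the dilution estimate robust to a few features whose concentrations genuinely change, e.g. with kidney failure.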

  7. Complementary sample preparation strategies for analysis of cereal β-glucan oxidation products by UPLC-MS/MS

    Science.gov (United States)

    Boulos, Samy; Nyström, Laura

    2017-11-01

    The oxidation of cereal (1→3,1→4)-β-D-glucan can influence the health promoting and technological properties of this linear, soluble homopolysaccharide by introduction of new functional groups or chain scission. Apart from deliberate oxidative modifications, oxidation of β-glucan can already occur during processing and storage, which is mediated by hydroxyl radicals (HO•) formed by the Fenton reaction. We present four complementary sample preparation strategies to investigate oat and barley β-glucan oxidation products by hydrophilic interaction ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS), employing selective enzymatic digestion, graphitized carbon solid phase extraction (SPE), and functional group labeling techniques. The combination of these methods allows for detection of both lytic (C1, C3/4, C5) and non-lytic (C2, C4/3, C6) oxidation products resulting from HO•-attack at different glucose-carbons. By treating oxidized β-glucan with lichenase and β-glucosidase, only oxidized parts of the polymer remained in oligomeric form, which could be separated by SPE from the vast majority of non-oxidized glucose units. This allowed for the detection of oligomers with mid-chain glucuronic acids (C6) and carbonyls, as well as carbonyls at the non-reducing end from lytic C3/C4 oxidation. Neutral reducing ends were detected by reductive amination with anthranilic acid/amide as labeled glucose and cross-ring cleaved units (arabinose, erythrose) after enzyme treatment and SPE. New acidic chain termini were observed by carbodiimide-mediated amidation of carboxylic acids as anilides of gluconic, arabinonic, and erythronic acids. Hence, a full characterization of all types of oxidation products was possible by combining complementary sample preparation strategies. Differences in fine structure depending on source (oat vs. barley) translates to the ratio of observed oxidized oligomers, with in-depth analysis corroborating a random HO

  8. Sampling and analysis strategies to support waste form qualification

    International Nuclear Information System (INIS)

    Westsik, J.H. Jr.; Pulsipher, B.A.; Eggett, D.L.; Kuhn, W.L.

    1989-04-01

    As part of the waste acceptance process, waste form producers will be required to (1) demonstrate that their glass waste form will meet minimum specifications, (2) show that the process can be controlled to consistently produce an acceptable waste form, and (3) provide documentation that the waste form produced meets specifications. Key to the success of these endeavors is adequate sampling and chemical and radiochemical analyses of the waste streams from the waste tanks through the process to the final glass product. This paper suggests sampling and analysis strategies for meeting specific statistical objectives of (1) detection of compositions outside specification limits, (2) prediction of final glass product composition, and (3) estimation of composition in process vessels for both reporting and guiding succeeding process steps. 2 refs., 1 fig., 3 tabs
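    The first statistical objective above, detecting compositions outside specification limits, is commonly framed as comparing a confidence bound on the estimated mean with the limit. A minimal sketch of that idea, with invented analysis values, specification limit, and critical value (none of these numbers are from the source):

```python
import math

def upper_confidence_bound(measurements, t_crit):
    """One-sided upper confidence bound on the true mean concentration.

    t_crit is the one-sided Student-t critical value for n-1 degrees of
    freedom at the chosen confidence level (looked up externally).
    """
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return mean + t_crit * math.sqrt(var / n)

# Hypothetical analyses (wt%) of one glass component, tested against a
# hypothetical 12.0 wt% upper specification limit.
analyses = [11.2, 11.6, 11.4, 11.9, 11.5]
ucb = upper_confidence_bound(analyses, t_crit=2.132)  # t(0.95, df=4)
within_spec = ucb < 12.0
```

    If the upper bound stays below the limit, the data support (at the chosen confidence level) that the mean composition is within specification; the same construction with a lower bound covers lower limits.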

  9. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g. a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color-blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.
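    The warp-coherence argument can be illustrated with a toy direct-mapped cache model. This is not the actual GPU texture cache (which is set-associative and optimized for 2D locality); the line size, cache size, and access patterns below are illustrative assumptions only:

```python
# Toy direct-mapped cache: coherent (warp-contiguous) sampling vs.
# scattered (large-stride) sampling. All constants are illustrative.
LINE_WORDS = 16    # words per cache line
NUM_LINES = 64     # lines in the cache

def hit_rate(addresses):
    cache = [None] * NUM_LINES          # one tag stored per line slot
    hits = 0
    for addr in addresses:
        line_id = addr // LINE_WORDS
        slot = line_id % NUM_LINES
        if cache[slot] == line_id:
            hits += 1
        else:
            cache[slot] = line_id       # miss: fill the line
    return hits / len(addresses)

# 32 "threads" of a warp each take 128 ray samples, interleaved per step.
# Coherent: at each step the 32 threads touch consecutive addresses.
coherent = [step * 4096 + thread for step in range(128) for thread in range(32)]
# Scattered: at each step the 32 threads are 4096 words apart.
scattered = [thread * 4096 + step for step in range(128) for thread in range(32)]
```

    In this toy model the coherent pattern reuses each fetched line for many neighboring threads, while the scattered pattern evicts every line before it is reused, which is the behavior the Warp Marching strategy is designed to avoid.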

  10. Strategy for thermo-gravimetric analysis of K East fuel samples

    International Nuclear Information System (INIS)

    Lawrence, L.A.

    1997-01-01

    A strategy was developed for the Thermo-Gravimetric Analysis (TGA) testing of K East fuel samples for oxidation rate determinations. Tests will first establish whether there are any differences in dry air oxidation between the K West and K East fuel. These tests will be followed by moist inert gas oxidation rate measurements. The final series of tests will consider pure water vapor, i.e., steam

  11. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

    Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of an age-eligible child. Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than
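    The sampling rule described (one-tenth of households per EA, drawn from census structures in urban EAs and from a 100 m grid in rural EAs) can be sketched as follows; the EA records, sizes, and grid extent are invented for illustration:

```python
import math
import random

random.seed(1)

def plan_sample(enumeration_areas):
    """For each EA, draw sampling points totalling one-tenth of its
    households: urban EAs from census structure coordinates, rural EAs
    from a hypothetical 100 m grid covering a 1 km square."""
    plan = {}
    for ea in enumeration_areas:
        n = math.ceil(ea["households"] / 10)
        if ea["urban"]:
            plan[ea["id"]] = random.sample(ea["structures"], n)
        else:
            grid = [(x, y) for x in range(0, 1000, 100)
                           for y in range(0, 1000, 100)]
            plan[ea["id"]] = random.sample(grid, n)
    return plan

# Hypothetical EAs: one urban (with census structure coordinates),
# one rural (no structure list, so grid points are used instead).
eas = [
    {"id": "EA-01", "urban": True, "households": 87,
     "structures": [(random.random(), random.random()) for _ in range(200)]},
    {"id": "EA-02", "urban": False, "households": 34, "structures": []},
]
plan = plan_sample(eas)
```

    Interviewers would then navigate to each planned point and enrol the closest eligible household, as in the study.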

  12. SAMPLING ADAPTIVE STRATEGY AND SPATIAL ORGANISATION ESTIMATION OF SOIL ANIMAL COMMUNITIES AT VARIOUS HIERARCHICAL LEVELS OF URBANISED TERRITORIES

    Directory of Open Access Journals (Sweden)

    Baljuk J.A.

    2014-12-01

    This paper presents an algorithm for an adaptive strategy of optimal spatial sampling for studying the spatial organisation of soil animal communities under urbanisation. The operating variables were the principal components obtained from the analysis of field data on soil penetration resistance, soil electrical conductivity and forest stand density, collected on a quasi-regular grid. The locations of the experimental polygons were determined by means of the ESAP program. Within the experimental polygons, sampling was performed on a regular grid. The biogeocoenological assessment of the experimental polygons was based on A. L. Belgard's ecomorphic analysis. The spatial configuration of biogeocoenosis types was established from Earth remote sensing data and the analysis of a digital elevation model. The suggested algorithm reveals the spatial organisation of soil animal communities at the level of the sampled point, the biogeocoenosis, and the landscape.

  13. How to handle speciose clades? Mass taxon-sampling as a strategy towards illuminating the natural history of Campanula (Campanuloideae).

    Directory of Open Access Journals (Sweden)

    Guilhem Mansion

    BACKGROUND: Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula, with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon-sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. METHODOLOGY/PRINCIPAL FINDINGS: Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates were then compared to those obtained from smaller data sets resulting from "classification-guided" (D088) and "phylogeny-guided" (D101) sampling. Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. CONCLUSIONS/SIGNIFICANCE: A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. As knowledge of these will be instrumental for developing more specific evolutionary hypotheses and guiding future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed

  14. A computational study of a fast sampling valve designed to sample soot precursors inside a forming diesel spray plume

    International Nuclear Information System (INIS)

    Dumitrescu, Cosmin; Puzinauskas, Paulius V.; Agrawal, Ajay K.; Liu, Hao; Daly, Daniel T.

    2009-01-01

    Accurate chemical reaction mechanisms are critically needed to fully optimize combustion strategies for modern internal-combustion engines. These mechanisms are needed to predict emission formation and the chemical heat release characteristics for traditional direct-injection diesel as well as recently-developed and proposed variant combustion strategies. Experimental data acquired under conditions representative of such combustion strategies are required to validate these reaction mechanisms. This paper explores the feasibility of developing a fast sampling valve which extracts reactants at known locations in the spray reaction structure to provide these data. CHEMKIN software is used to establish the reaction timescales which dictate the required fast sampling capabilities. The sampling process is analyzed using separate FLUENT and CHEMKIN calculations. The non-reacting FLUENT CFD calculations give a quantitative estimate of the sample quantity as well as the fluid mixing and thermal history. A CHEMKIN reactor network has been created that reflects these mixing and thermal time scales and allows a theoretical evaluation of the quenching process

  15. Optimisation (sampling strategies and analytical procedures) for site specific environment monitoring at the areas of uranium production legacy sites in Ukraine - 59045

    International Nuclear Information System (INIS)

    Voitsekhovych, Oleg V.; Lavrova, Tatiana V.; Kostezh, Alexander B.

    2012-01-01

    Many sites around the world are still affected by contamination from past uranium production. The authors' experience shows that a lack of site characterization data and incomplete or unreliable environmental monitoring studies can significantly limit the quality of safety assessment procedures and of the priority-action analyses needed for remediation planning. During recent decades, the analytical laboratories of many of the enterprises now responsible for establishing site-specific environmental monitoring programs have significantly improved their sampling and analytical capacities. However, insufficient experience in optimal site-specific sampling strategy planning, and in the application of the required analytical techniques, such as modern alpha-beta radiometers, gamma and alpha spectrometry, and liquid-scintillation methods for the determination of U-Th series radionuclides in the environment, prevents these laboratories from developing and conducting monitoring programs efficiently as a basis for further safety assessment in decision-making procedures. This paper presents conclusions gained from the experience of establishing monitoring programs in Ukraine and proposes practical steps for optimizing sampling strategy planning and analytical procedures to be applied in areas requiring safety assessment and justification of their potential remediation and safe management. (authors)

  16. Sampling strategies and stopping criteria for stochastic dual dynamic programming: a case study in long-term hydrothermal scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)

    2011-03-15

    The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
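    Of the two alternative sampling strategies studied, Latin hypercube sampling is simple to sketch: each dimension is divided into equal-probability strata, exactly one point is drawn per stratum, and the strata are shuffled independently across dimensions. A minimal NumPy implementation (a generic sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on [0, 1)^d: exactly one point per
    stratum in every dimension, with stratum order shuffled
    independently per dimension."""
    u = rng.random((n_samples, n_dims))   # jitter within each stratum
    strata = np.stack([rng.permutation(n_samples) for _ in range(n_dims)],
                      axis=1)
    return (strata + u) / n_samples

pts = latin_hypercube(16, 2, rng)
```

    Compared with plain Monte Carlo, this stratification spreads the scenario-tree draws more evenly over the uncertainty space, which is the variance-reduction effect exploited in the paper.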

  17. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples

    Directory of Open Access Journals (Sweden)

    Stéphanie Simon

    2015-11-01

    Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A–G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as “category A” bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype-specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  18. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples.

    Science.gov (United States)

    Simon, Stéphanie; Fiebig, Uwe; Liu, Yvonne; Tierney, Rob; Dano, Julie; Worbs, Sylvia; Endermann, Tanja; Nevers, Marie-Claire; Volland, Hervé; Sesardic, Dorothea; Dorner, Martin B

    2015-11-26

    Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A-G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as "category A" bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  19. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Science.gov (United States)

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...

  20. Evaluating sampling strategies for larval cisco (Coregonus artedi)

    Science.gov (United States)

    Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.

    2008-01-01

    To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine if propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.
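    The gear comparison described (concurrent paired tows with two net types) is a paired-design problem; a minimal sketch of the corresponding paired t-statistic, with invented density values (not the study's data) and a fixed critical value in place of a p-value lookup:

```python
import math

# Hypothetical paired larval densities (per 1000 m^3) from concurrent
# neuston-net and conical-net tows at the same eight stations.
neuston = [4.1, 2.7, 5.9, 3.3, 4.8, 2.2, 6.1, 3.9]
conical = [3.8, 3.0, 5.5, 3.6, 4.4, 2.5, 6.4, 3.7]

diffs = [a - b for a, b in zip(neuston, conical)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

# Two-sided critical value t(0.975, df=7) = 2.365.
gear_differs = abs(t_stat) > 2.365
```

    With these illustrative numbers the paired differences are small relative to their spread, so the test does not reject equality, mirroring the study's conclusion that the two gear types gave comparable density estimates.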

  1. Direct and long-term detection of gene doping in conventional blood samples.

    Science.gov (United States)

    Beiter, T; Zimmermann, M; Fragasso, A; Hudemann, J; Niess, A M; Bitzer, M; Lauer, U M; Simon, P

    2011-03-01

    The misuse of somatic gene therapy for the purpose of enhancing athletic performance is perceived as a coming threat to the world of sports and categorized as 'gene doping'. This article describes a direct detection approach for gene doping that gives a clear yes-or-no answer based on the presence or absence of transgenic DNA in peripheral blood samples. By exploiting a priming strategy to specifically amplify intronless DNA sequences, we developed PCR protocols allowing the detection of very small amounts of transgenic DNA in genomic DNA samples to screen for six prime candidate genes. Our detection strategy was verified in a mouse model, giving positive signals from minute amounts (20 μl) of blood samples for up to 56 days following intramuscular adeno-associated virus-mediated gene transfer, one of the most likely candidate vector systems to be misused for gene doping. To make our detection strategy amenable for routine testing, we implemented a robust sample preparation and processing protocol that allows cost-efficient analysis of small human blood volumes (200 μl) with high specificity and reproducibility. The practicability and reliability of our detection strategy was validated by a screening approach including 327 blood samples taken from professional and recreational athletes under field conditions.

  2. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
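    The idea of estimating exposure (AUC) from the four optimal sampling times can be sketched with a deliberately simplified mono-exponential disposition model. This is NOT the paper's two-compartment Bayesian approach; the parameters C0 and k are invented, and only the four reported sampling times are taken from the abstract:

```python
import math

# Simplified mono-exponential disposition; illustrative parameters only.
C0, k = 10.0, 0.7                        # mg/L, 1/h
conc = lambda t: C0 * math.exp(-k * t)

def auc_from_samples(times):
    """Linear trapezoidal AUC with back-extrapolation to t=0 (using C0)
    and tail extrapolation C_last/k beyond the last sample."""
    ts = [0.0] + list(times)
    cs = [C0] + [conc(t) for t in times]
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for t1, t2, c1, c2 in zip(ts, ts[1:], cs, cs[1:]))
    return auc + cs[-1] / k              # extrapolated tail

auc_true = C0 / k                        # exact AUC of this toy model
auc_sparse = auc_from_samples([0.08, 0.61, 2.0, 4.0])  # reported windows
rel_err = abs(auc_sparse - auc_true) / auc_true
```

    Even with plain trapezoids, the four well-placed samples recover the toy model's AUC to within about 10%; the paper's model-based Bayesian estimation does substantially better because it exploits the full population model.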

  3. Combined use of leaf size and economics traits allows direct comparison of hydrophyte and terrestrial herbaceous adaptive strategies.

    Science.gov (United States)

    Pierce, Simon; Brusa, Guido; Sartori, Matteo; Cerabolini, Bruno E L

    2012-04-01

    Hydrophytes generally exhibit highly acquisitive leaf economics. However, a range of growth forms is evident, from small, free-floating and rapidly growing Lemniden to large, broad-leaved Nymphaeiden, denoting variability in adaptive strategies. Traits used to classify adaptive strategies in terrestrial species, such as canopy height, are not applicable to hydrophytes. We hypothesize that hydrophyte leaf size traits and economics exhibit sufficient overlap with terrestrial species to allow a common classification of plant functional types, sensu Grime's CSR theory. Leaf morpho-functional traits were measured for 61 species from 47 water bodies in lowland continental, sub-alpine and alpine bioclimatic zones in southern Europe and compared against the full leaf economics spectrum and leaf size range of terrestrial herbs, and between hydrophyte growth forms. Hydrophytes differed in the ranges and mean values of traits compared with herbs, but principal components analysis (PCA) demonstrated that both groups shared axes of trait variability: PCA1 encompassed size variation (area and mass), and PCA2 ranged from relatively dense, carbon-rich leaves to nitrogen-rich leaves of high specific leaf area (SLA). Most growth forms exhibited trait syndromes directly equivalent to herbs classified as R adapted, although Nymphaeiden ranged between C and SR adaptation. Our findings support the hypothesis that hydrophyte adaptive strategy variation reflects fundamental trade-offs in economics and size that govern all plants, and that hydrophyte adaptive strategies can be directly compared with terrestrial species by combining leaf economics and size traits.
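    The trait analysis described (PCA in which the first axis captures leaf size and the second a density-to-SLA economics gradient) can be sketched with synthetic trait data; the trait values, loadings, and sample size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical leaf traits for 60 species: log area, log dry mass, log SLA,
# constructed so a latent "size" factor drives area and mass while a
# latent "economics" factor drives SLA.
n = 60
size = rng.normal(0, 2.0, n)             # latent size factor
econ = rng.normal(0, 1.0, n)             # latent economics factor
X = np.column_stack([
    5 + 2.0 * size + 0.1 * rng.normal(size=n),               # log area
    2 + 1.8 * size - 0.3 * econ + 0.1 * rng.normal(size=n),  # log mass
    3 + 1.0 * econ + 0.1 * rng.normal(size=n),               # log SLA
])

Xc = (X - X.mean(0)) / X.std(0)          # standardize traits
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance share per component
scores = Xc @ Vt.T                       # species coordinates on the PCs
```

    With size driving two of the three standardized traits, the first component absorbs most of the variance, reproducing the paper's pattern of a dominant size axis (PCA1) and a separate economics axis (PCA2).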

  4. Human papillomavirus self-sampling for screening nonattenders

    DEFF Research Database (Denmark)

    Lam, Janni Uyen Hoa; Rebolj, Matejka; Ejegod, Ditte Møller

    2017-01-01

    In organized cervical screening programs, typically 25% of the invited women do not attend. The Copenhagen Self-sampling Initiative (CSi) aimed to gain experiences on participation among screening nonattenders in the Capital Region of Denmark. Here, we report on the effectiveness of different communication platforms used in the pilot, with suggestions for strategies prior to a full implementation. Moreover, an innovative approach using self-sampling brushes with unique radio frequency identification chips allowed for unprecedented levels of patient identification safety. Self-sampling was well-accepted among nonattenders. Adopting modern technology-based platforms into the current organized screening program would serve as a convenient communication method between health authority and citizens, allowing easy access for the citizen and reducing the workload in administrating self-sampling.

  5. Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies

    Directory of Open Access Journals (Sweden)

    Luby Stephen P

    2010-08-01

    Background: Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow-up. Longitudinal prevalence (LP), the proportion-of-time-ill estimated by repeated prevalence measurements, is an alternative measure to the incidence of recurrent infections. In contrast to incidence, which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods: We developed a set of four empirical simulation models representing low- and high-risk settings with short or long episode durations. The models were used to evaluate different sampling strategies under different assumptions on recall period and recall error. Results: The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g. 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion: Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient, valid
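    The core claim, that longitudinal prevalence measured at intervals approximates continuously measured prevalence, can be sketched with a small simulation. The cohort size, visit schedule, and Beta-distributed per-child prevalences are illustrative assumptions, not the paper's simulation models:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate daily illness status for a cohort: each child has her own
# underlying longitudinal prevalence, so episodes cluster in individuals.
n_children, n_days = 500, 364
child_lp = rng.beta(2, 18, n_children)          # mean ~0.10, heterogeneous
ill = rng.random((n_children, n_days)) < child_lp[:, None]

lp_continuous = ill.mean()                      # daily surveillance
visit_days = np.linspace(0, n_days - 1, 12).astype(int)
lp_intermittent = ill[:, visit_days].mean()     # 12 visits per year
```

    With a few hundred children, twelve visits per child already pin the cohort-level LP down to within a few percentage points of the continuous estimate, which is why intermittent sampling needs only a modestly larger sample size.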

  6. OUTPACE long duration stations: physical variability, context of biogeochemical sampling, and evaluation of sampling strategy

    Directory of Open Access Journals (Sweden)

    A. de Verneil

    2018-04-01

    Research cruises to quantify biogeochemical fluxes in the ocean require taking measurements at stations lasting at least several days. A popular experimental design is the quasi-Lagrangian drifter, often mounted with in situ incubations or sediment traps that follow the flow of water over time. After initial drifter deployment, the ship tracks the drifter for continuing measurements that are supposed to represent the same water environment. An outstanding question is how to best determine whether this is true. During the Oligotrophy to UlTra-oligotrophy PACific Experiment (OUTPACE) cruise, from 18 February to 3 April 2015 in the western tropical South Pacific, three separate stations of long duration (five days) over the upper 500 m were conducted in this quasi-Lagrangian sampling scheme. Here we present physical data to provide context for these three stations and to assess whether the sampling strategy worked, i.e., that a single body of water was sampled. After analyzing tracer variability and local water circulation at each station, we identify water layers and times where the drifter risks encountering another body of water. While almost no realization of this sampling scheme will be truly Lagrangian, due to the presence of vertical shear, the depth-resolved observations during the three stations show most layers sampled sufficiently homogeneous physical environments during OUTPACE. By directly addressing the concerns raised by these quasi-Lagrangian sampling platforms, a protocol of best practices can begin to be formulated so that future research campaigns include the complementary datasets and analyses presented here to verify the appropriate use of the drifter platform.

  7. Improvements to robotics-inspired conformational sampling in rosetta.

    Directory of Open Access Journals (Sweden)

    Amelie Stein

    To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion- and omega-angle sampling, and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.

  8. Sampling strategies for tropical forest nutrient cycling studies: a case study in São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    G. Sparovek

    1997-12-01

    Full Text Available The precise sampling of soil, biological or microclimatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task. We found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity and base saturation). A natural remnant forest in the West of São Paulo State (Brazil) was selected as study area and samples were collected in July, 1989. The total amount of litter and its total nutrient amounts had a high spatially independent variance. Conversely, the variance of litter composition was lower and the spatial dependency was peculiar to each nutrient. The sampling strategy for the estimation of litter amounts and the amount of nutrients in litter should be different from the sampling strategy for nutrient composition. For the estimation of litter amounts and the amount of nutrients in litter (related to quantity) a large number of randomly distributed determinations are needed. Otherwise, for the estimation of litter nutrient composition (related to quality) a smaller number of spatially located samples should be analyzed. The sampling design for soil attributes differed according to the depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependency over longer distances. Short transects with sampling intervals of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area. Composite soil samples would not provide a complete

  9. Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.

    Science.gov (United States)

    Niioka, Takenori

    2011-03-01

    Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotype had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples to improve patient acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09 and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one time point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ - 9917.3 × G1 - 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, based on plasma concentration monitoring at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotype, particularly in the case of coadministration of drugs that affect CYP activity.
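    The regression equations quoted above can be applied directly once the relevant concentrations have been measured. A minimal sketch, assuming concentrations and AUC in the units used by the source; the example values and the genotype labels ("hom_EM", "het_EM", "PM") are hypothetical conveniences, not identifiers from the study:

```python
# Limited sampling strategy (LSS) estimates of AUC for three proton pump
# inhibitors, using the regression equations quoted in the abstract.

def auc_opz(c6h):
    """AUC of omeprazole from the 6 h concentration."""
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    """AUC of lansoprazole from the 6 h concentration."""
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    """AUC of rabeprazole from the 3 h and 6 h concentrations."""
    return 1.39 * c3h + 7.17 * c6h + 344.14

def auc_racemic_lpz(r_c3h, s_c3h, genotype):
    """AUC of racemic lansoprazole from enantiomer concentrations at 3 h.

    genotype: 'hom_EM', 'het_EM', or 'PM', mapped onto the G1/G2
    indicator variables of the source equation.
    """
    g1 = 1 if genotype == "hom_EM" else 0
    g2 = 1 if genotype == "het_EM" else 0
    return 6.5 * r_c3h + 13.7 * s_c3h - 9917.3 * g1 - 14387.2 * g2 + 7103.6

# Example with hypothetical concentrations:
print(auc_rpz(c3h=400.0, c6h=150.0))
```

    Note how the racemic-LPZ formula folds the CYP2C19 genotype into the indicator variables G1 and G2, rather than requiring a separate genotype-specific model.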

  10. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how

  11. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Directory of Open Access Journals (Sweden)

    Abhishek Mitra

    Full Text Available Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively

  12. Collaboration During the NASA ABoVE Airborne SAR Campaign: Sampling Strategies Used by NGEE Arctic and Other Partners in Alaska and Western Canada

    Science.gov (United States)

    Wullschleger, S. D.; Charsley-Groffman, L.; Baltzer, J. L.; Berg, A. A.; Griffith, P. C.; Jafarov, E. E.; Marsh, P.; Miller, C. E.; Schaefer, K. M.; Siqueira, P.; Wilson, C. J.; Kasischke, E. S.

    2017-12-01

    There is considerable interest in using L- and P-band Synthetic Aperture Radar (SAR) data to monitor variations in aboveground woody biomass, soil moisture, and permafrost conditions in high-latitude ecosystems. Such information is useful for quantifying spatial heterogeneity in surface and subsurface properties, and for model development and evaluation. To conduct these studies, it is desirable that field studies share a common sampling strategy so that the data from multiple sites can be combined and used to analyze variations in conditions across different landscape geomorphologies and vegetation types. In 2015, NASA launched the decade-long Arctic-Boreal Vulnerability Experiment (ABoVE) to study the sensitivity and resilience of these ecosystems to disturbance and environmental change. NASA is able to leverage its remote sensing strengths to collect airborne and satellite observations to capture important ecosystem properties and dynamics across large spatial scales. A critical component of this effort includes collection of ground-based data that can be used to analyze, calibrate and validate remote sensing products. ABoVE researchers at a large number of sites located in important Arctic and boreal ecosystems in Alaska and western Canada are following common design protocols and strategies for measuring soil moisture, thaw depth, biomass, and wetland inundation. Here we elaborate on those sampling strategies as used in the 2017 summer SAR campaign and address the sampling design and measurement protocols for supporting the ABoVE aerial activities. Plot size, transect length, and distribution of replicates across the landscape systematically allowed investigators to optimally sample a site for soil moisture, thaw depth, and organic layer thickness. Specific examples and data sets are described for the Department of Energy's Next-Generation Ecosystem Experiments (NGEE Arctic) project field sites near Nome and Barrow, Alaska. Future airborne and satellite

  13. Sample design and gamma-ray counting strategy of neutron activation system for triton burnup measurements in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jungmin [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Cheon, Mun Seong [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); Chung, Kyoung-Jae, E-mail: jkjlsh1@snu.ac.kr [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Hwang, Y.S. [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of)

    2016-11-01

    Highlights: • Sample design for triton burnup ratio measurement is carried out. • Samples for 14.1 MeV neutron measurements are selected for KSTAR. • Si and Cu are the most suitable materials for d-t neutron measurements. • Appropriate γ-ray counting strategies for each selected sample are established. - Abstract: For the purpose of triton burnup measurements in Korea Superconducting Tokamak Advanced Research (KSTAR) deuterium plasmas, appropriate neutron activation system (NAS) samples for 14.1 MeV d-t neutron measurements have been designed and a gamma-ray counting strategy has been established. Neutronics calculations are performed with the MCNP5 neutron transport code for the KSTAR neutral beam heated deuterium plasma discharges. Based on those calculations and the assumed d-t neutron yield, the activities induced by d-t neutrons are estimated with the inventory code FISPACT-2007 for candidate sample materials: Si, Cu, Al, Fe, Nb, Co, Ti, and Ni. It is found that Si, Cu, Al, and Fe are suitable for the KSTAR NAS in terms of the minimum detectable activity (MDA), calculated based on the standard deviation of blank measurements. Considering background gamma-rays radiated from surrounding structures activated by thermalized fusion neutrons, an appropriate gamma-ray counting strategy for each selected sample is established.

  14. Females' sampling strategy to comparatively evaluate prospective mates in the peacock blenny Salaria pavo

    Science.gov (United States)

    Locatello, Lisa; Rasotto, Maria B.

    2017-08-01

    Emerging evidence suggests the occurrence of comparative decision-making processes in mate choice, questioning the traditional idea of female choice based on rules of absolute preference. In such a scenario, females are expected to use a typical best-of-n sampling strategy, being able to recall previously sampled males based on memory of their quality and location. Accordingly, the quality of the preferred mate is expected to be unrelated to both the number and the sequence of female visits. We found support for these predictions in the peacock blenny, Salaria pavo, a fish whose females have the opportunity to evaluate the attractiveness of many males in a short time period and within a restricted spatial range. Indeed, even considering the variability in preference among females, most of them returned to previously sampled males for further evaluation; thus, the preferred male was not the last one in the sequence of visited males. Moreover, there was no relationship between the attractiveness of the preferred male and the number of further visits made to the other males. Our results suggest the occurrence of a best-of-n mate sampling strategy in the peacock blenny.

  15. An instrument design and sample strategy for measuring soil respiration in the coastal temperate rain forest

    Science.gov (United States)

    Nay, S. M.; D'Amore, D. V.

    2009-12-01

    The coastal temperate rainforest (CTR) along the northwest coast of North America is a large and complex mosaic of forests and wetlands located on an undulating terrain ranging from sea level to thousands of meters in elevation. This biome stores a dynamic portion of the total carbon stock of North America. The fate of the terrestrial carbon stock is of concern due to the potential for mobilization and export of this store to both the atmosphere as carbon respiration flux and the ocean as dissolved organic and inorganic carbon flux. Soil respiration is the largest export vector in the system and must be accurately measured to gain any comprehensive understanding of how carbon moves through this system. Suitable monitoring tools capable of measuring carbon fluxes at small spatial scales are essential for our understanding of carbon dynamics at larger spatial scales within this complex assemblage of ecosystems. We have adapted instrumentation and developed a sampling strategy for optimizing replication of soil respiration measurements to quantify differences among spatially complex landscape units of the CTR. We start with the design of the instrument to ease the technological, ergonomic and financial barriers that technicians encounter in monitoring the efflux of CO2 from the soil. Our sampling strategy optimizes the physical efforts of the field work and manages for the high variation of flux measurements encountered in this difficult environment of rough terrain, dense vegetation and wet climate. Our soil respirometer incorporates an infra-red gas analyzer (LiCor Inc. LI-820) and an 8300 cm3 soil respiration chamber; the device is durable, lightweight, easy to operate and can be built for under $5000 per unit. The modest unit price allows for a multiple unit fleet to be deployed and operated in an intensive field monitoring campaign. We use a large 346 cm2 collar to accommodate as much micro spatial variation as feasible and to facilitate repeated measures for tracking

  16. How to Handle Speciose Clades? Mass Taxon-Sampling as a Strategy towards Illuminating the Natural History of Campanula (Campanuloideae)

    Science.gov (United States)

    Mansion, Guilhem; Parolly, Gerald; Crowl, Andrew A.; Mavrodiev, Evgeny; Cellinese, Nico; Oganesian, Marine; Fraunhofer, Katharina; Kamari, Georgia; Phitos, Dimitrios; Haberle, Rosemarie; Akaydin, Galip; Ikinci, Nursel; Raus, Thomas; Borsch, Thomas

    2012-01-01

    Background Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. Methodology/Principal Findings Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates were then compared to those obtained from smaller data sets resulting from “classification-guided” (D088) and “phylogeny-guided” (D101) sampling. Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. Conclusions/Significance A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. Knowing that these will be instrumental for developing more specific evolutionary hypotheses and guiding future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed evolutionary

  17. Sampling strategies to improve passive optical remote sensing of river bathymetry

    Science.gov (United States)

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
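    The progressive-truncation idea described above can be sketched with synthetic data. This is an illustration of the general principle, not the authors' OBRA implementation; the exponential attenuation model, noise level, and cutoff grid below are all invented for the example:

```python
import numpy as np

# Progressive depth truncation for a depth-reflectance calibration.
# Synthetic illustration: x is an image-derived quantity whose log is
# roughly linear in depth until the signal saturates beyond d_max.
rng = np.random.default_rng(0)
depth = rng.uniform(0.1, 4.0, 500)          # field-measured depths (m)
d_max_true = 2.5                            # depth at which signal saturates
x = np.exp(-0.8 * np.minimum(depth, d_max_true)) + rng.normal(0, 0.01, depth.size)

def r_squared(d, y):
    """Coefficient of determination for a linear fit of y on d."""
    slope, intercept = np.polyfit(d, y, 1)
    resid = y - (slope * d + intercept)
    return 1 - resid.var() / y.var()

# Progressively exclude observations deeper than each cutoff and track
# the calibration R^2; the fit degrades once saturated depths are included.
cutoffs = np.arange(1.0, 4.01, 0.25)
scores = [r_squared(depth[depth <= c], np.log(x[depth <= c])) for c in cutoffs]
best = cutoffs[int(np.argmax(scores))]
print(f"inferred d_max ~ {best:.2f} m (true saturation at {d_max_true} m)")
```

    Once the calibration stops improving as deeper observations are admitted, the corresponding cutoff serves as an image-derived estimate of the maximum detectable depth.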

  18. Analytical strategies for uranium determination in natural water and industrial effluents samples

    International Nuclear Information System (INIS)

    Santos, Juracir Silva

    2011-01-01

    The work was developed under project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' - and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and effluents of a uranium mine. The first was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES), performed using factorial and Doehlert designs involving the factors: acid concentration, radio frequency power and nebuliser gas flow rate. Five emission lines were studied simultaneously (namely 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO3, CH3COOH or HCl. The determinations in HNO3 medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for all five emission lines. Calcium caused interference in the emission intensity for some lines, whereas iron did not interfere (at least up to 10 mg L-1) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with a limit of quantification of 30 μg L-1 and precision, expressed as RSD, lower than 2.2% for uranium concentrations of either 500 or 1000 μg L-1. In the second, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase the sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing and minimize reagent consumption, as well as

  19. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    Science.gov (United States)

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work was to reduce the sampling cost required for estimating the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction checks were used as criteria. Jackknife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were thus developed and validated for estimating AUC0-60t of gliclazide. This study indicates that an 80 mg dosage regimen enabled accurate prediction of AUC0-60t by the LSS model, and that 12, 6, 4 and 2 h after administration are the key sampling times. The combinations (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling times for predicting AUC0-60t in practice, according to requirements.
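    The workflow described above, regressing a reference AUC on a few concentration-time points and then validating by jackknife (leave-one-out) APE, can be sketched as follows. The pharmacokinetic profiles are synthetic (a generic one-compartment oral model with random between-subject variability), not the study's gliclazide data, and the chosen time points are for illustration only:

```python
import numpy as np

# Sketch of limited-sampling-strategy (LSS) development by multiple
# regression with jackknife validation, on synthetic PK profiles.
rng = np.random.default_rng(1)
times = np.array([0.5, 1, 2, 4, 6, 8, 12, 24, 36, 48, 60], float)

def profile(ka, ke, v):
    """One-compartment oral model concentration-time profile."""
    dose = 80.0  # mg, matching the study's dosage regimen
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * times) - np.exp(-ka * times))

n = 40
conc = np.array([profile(rng.lognormal(0.0, 0.2),
                         rng.lognormal(-2.5, 0.2),
                         rng.lognormal(2.0, 0.2)) for _ in range(n)])
# Reference AUC over the sampled window, by the trapezoidal rule.
auc = ((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(times)).sum(axis=1)

# LSS model: AUC ~ b0 + b1*C(2h) + b2*C(12h), validated by jackknife.
idx = [int(np.where(times == t)[0][0]) for t in (2, 12)]
X = np.column_stack([np.ones(n), conc[:, idx[0]], conc[:, idx[1]]])
ape = []
for i in range(n):                        # leave one subject out each time
    keep = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[keep], auc[keep], rcond=None)
    pred = X[i] @ beta
    ape.append(abs(pred - auc[i]) / auc[i] * 100)
print(f"median APE: {np.median(ape):.1f}%")
```

    Reporting the distribution of APE across left-out subjects, as the study does, guards against an LSS model that predicts well only for the subjects used to build it.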

  20. Sample preparation composite and replicate strategy case studies for assay of solid oral drug products.

    Science.gov (United States)

    Nickerson, Beverly; Harrington, Brent; Li, Fasheng; Guo, Michele Xuemei

    2017-11-30

    Drug product assay is one of several tests required for new drug products to ensure the quality of the product at release and throughout its life cycle. Drug product assay testing is typically performed by preparing a composite sample of multiple dosage units to obtain an assay value representative of the batch. In some cases replicate composite samples may be prepared, and the reportable assay value is the average of all the replicates. In previously published work by Harrington et al. (2014) [5], a sample preparation composite and replicate strategy for assay was developed to provide a systematic approach that accounts for variability due to the analytical method and dosage form while meeting a standard-error criterion for the potency assay based on compendial and regulatory requirements. In this work, the sample preparation composite and replicate strategy for assay is applied to several case studies to demonstrate the utility of this approach and its application at various stages of pharmaceutical drug product development. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Individual Differences in Strategy Use on Division Problems: Mental versus Written Computation

    Science.gov (United States)

    Hickendorff, Marian; van Putten, Cornelis M.; Verhelst, Norman D.; Heiser, Willem J.

    2010-01-01

    Individual differences in strategy use (choice and accuracy) were analyzed. A sample of 362 Grade 6 students solved complex division problems under 2 different conditions. In the choice condition students were allowed to use either a mental or a written strategy. In the subsequent no-choice condition, they were required to use a written strategy.…

  2. A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste

    Energy Technology Data Exchange (ETDEWEB)

    Bilodeau, M.; Lastra, R.; Bouzoubaa, N. [Natural Resources Canada, Ottawa, ON (Canada); Chapman, M. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2011-07-01

    Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while

  3. A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste

    International Nuclear Information System (INIS)

    Bilodeau, M.; Lastra, R.; Bouzoubaa, N.; Chapman, M.

    2011-01-01

    Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while

  4. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times to investigate the distribution of breast density among 1,340,362 women. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimates the distribution of breast density in Korean women to within a tolerance of 0.01%. Based on the results of our study, a nationwide survey estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
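    The simulation logic can be sketched as follows. The stratum sizes below are invented (though they sum to the abstract's 1,340,362 women), as is the per-stratum proportion of dense breasts; only the three-stratum design and the total sample size of 4,000 come from the abstract:

```python
import numpy as np

# Repeatedly draw a stratified random sample and check how far the
# sample estimate falls from the known population value.
rng = np.random.default_rng(2)
strata = {                       # stratum size, P(dense breast) -- invented
    "metropolitan": (700_000, 0.55),
    "urban":        (450_000, 0.50),
    "rural":        (190_362, 0.45),
}
N = sum(size for size, _ in strata.values())
p_true = sum(size * p for size, p in strata.values()) / N

def stratified_estimate(n_total):
    """One stratified random sample with proportional allocation."""
    est = 0.0
    for size, p in strata.values():
        n_h = round(n_total * size / N)      # stratum sample size
        x = rng.binomial(n_h, p)             # sampled dense cases
        est += (size / N) * (x / n_h)        # population-weighted stratum mean
    return est

errors = [abs(stratified_estimate(4000) - p_true) for _ in range(1000)]
print(f"95th-percentile absolute error: {np.quantile(errors, 0.95):.4f}")
```

    Repeating the draw many times, as the study did, turns the abstract question of "is n = 4,000 enough?" into a concrete distribution of estimation errors that can be compared against the target tolerance.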

  5. Limited sampling strategy for determining metformin area under the plasma concentration-time curve

    DEFF Research Database (Denmark)

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José

    2016-01-01

    AIM: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. METHODS: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all su...
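The idea behind a limited sampling strategy can be illustrated with synthetic data: fit a regression that predicts the full AUC from a single, conveniently timed concentration. The one-compartment model, parameter ranges and the 8 h sampling time below are illustrative assumptions, not the study's metformin model.

```python
import math
import random
from statistics import mean

random.seed(9)

times = [0, 0.5, 1, 2, 3, 4, 6, 8, 12, 24]       # rich sampling grid (h)

def profile(dose=500):
    # One-compartment model with first-order absorption; parameter ranges
    # are assumptions for this sketch, not fitted metformin values.
    ka = random.uniform(0.8, 1.6)                 # absorption rate constant (1/h)
    ke = random.uniform(0.1, 0.3)                 # elimination rate constant (1/h)
    v = random.uniform(60, 120)                   # volume of distribution (L)
    scale = dose / v * ka / (ka - ke)
    return {t: scale * (math.exp(-ke * t) - math.exp(-ka * t)) for t in times}

def auc_trapezoid(c):
    ts = sorted(c)
    return sum((c[a] + c[b]) / 2 * (b - a) for a, b in zip(ts, ts[1:]))

subjects = [profile() for _ in range(200)]
aucs = [auc_trapezoid(c) for c in subjects]

# Limited sampling strategy: ordinary least squares AUC ~ b0 + b1 * C(8 h).
xs = [c[8] for c in subjects]
mx, my = mean(xs), mean(aucs)
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, aucs)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx
pred = [b0 + b1 * x for x in xs]
r2 = 1 - sum((y - p) ** 2 for y, p in zip(aucs, pred)) / sum((y - my) ** 2 for y in aucs)
print("R^2 of the single-timepoint AUC predictor:", round(r2, 3))
```

A real LSS would be validated on an independent cohort and would typically compare several candidate timepoint combinations, but the single-predictor fit already captures most of the AUC variability in this synthetic setting.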

  6. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice, e.g., populations that are accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
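The contrast between probability and non-probability sampling can be made concrete with a small simulation. The clinic population below is synthetic, constructed so that easily recruited patients differ systematically from the rest, which is exactly why a convenience sample biases the estimate while a simple random sample does not.

```python
import random
from statistics import mean

random.seed(1)

# Synthetic clinic population: frequent attenders (easiest to recruit) have
# systematically higher scores, so recruitment ease correlates with outcome.
population = [
    {"visits": v, "score": 50 + 5 * v + random.gauss(0, 5)}
    for v in (random.randint(1, 10) for _ in range(10_000))
]
true_mean = mean(p["score"] for p in population)

# Probability sampling: a simple random sample (every patient equally likely).
srs = random.sample(population, 200)
srs_mean = mean(p["score"] for p in srs)

# Non-probability sampling: a convenience sample of the most frequent attenders.
convenience = sorted(population, key=lambda p: -p["visits"])[:200]
conv_mean = mean(p["score"] for p in convenience)

print("true mean:", round(true_mean, 1))
print("simple random sample mean:", round(srs_mean, 1))
print("convenience sample mean:", round(conv_mean, 1))
```

The random sample lands close to the population mean; the convenience sample overshoots it badly, which is the misrepresentation risk the abstract warns about.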

  7. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice, e.g., populations that are accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  8. [Strategies for biobank networks. Classification of different approaches for locating samples and an outlook on the future within the BBMRI-ERIC].

    Science.gov (United States)

    Lablans, Martin; Kadioglu, Dennis; Mate, Sebastian; Leb, Ines; Prokosch, Hans-Ulrich; Ückert, Frank

    2016-03-01

Medical research projects often require more biological material than can be supplied by a single biobank. For this reason, a multitude of strategies support locating potential research partners with matching material without requiring centralization of sample storage. This paper classifies strategies for biobank networks, in particular for locating suitable samples, and describes an IT infrastructure combining these strategies. Existing strategies can be classified according to three criteria: (a) granularity of sample data: coarse bank-level data (catalogue) vs. fine-granular sample-level data; (b) location of sample data: central (central search service) vs. decentral storage (federated search services); and (c) level of automation: automatic (query-based, federated search service) vs. semi-automatic (inquiry-based, decentral search). All of the mentioned search services require data integration; metadata help to overcome semantic heterogeneity. The "Common Service IT" in BBMRI-ERIC (Biobanking and BioMolecular Resources Research Infrastructure) unites a catalogue, the decentral search and metadata in an integrated platform. As a result, researchers receive versatile tools to search for suitable biomaterial, while biobanks retain a high degree of data sovereignty. Despite their differences, the presented strategies for biobank networks do not rule each other out but can complement and even benefit from each other.

  9. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice, e.g., populations that are accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study. PMID:27688438

  10. Comparison of sampling strategies for object-based classification of urban vegetation from Very High Resolution satellite images

    Science.gov (United States)

    Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas

    2016-09-01

Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they require a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of training examples needed. Two state-of-the-art window-based active learning algorithms are compared to a classical stratified random sampling, and a third strategy combining active learning and stratification is proposed. The efficiency of these strategies is evaluated on two medium-sized French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods, and that it should be used more frequently as a baseline when evaluating new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
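As a minimal illustration of why targeted (active) labelling can beat random labelling, the sketch below uses a one-dimensional toy problem rather than image windows: the learner repeatedly queries the unlabelled point closest to its current decision-boundary estimate. The pool, threshold and query budget are all invented for the example.

```python
import random

random.seed(0)

# Toy 1-D task: points with x > 0.5 are "vegetation". The active learner labels
# the pool point closest to its current boundary estimate; the baseline labels
# random pool points. All quantities here are invented for the illustration.
pool = [random.random() for _ in range(2000)]
label = lambda x: x > 0.5

def boundary(labeled):
    pos = [x for x, y in labeled if y]
    neg = [x for x, y in labeled if not y]
    if not pos or not neg:
        return 0.0
    return (min(pos) + max(neg)) / 2       # midpoint estimate of the threshold

def run(active, budget=30):
    rng = random.Random(7)
    remaining = list(pool)
    labeled = [(x, label(x)) for x in rng.sample(remaining, 2)]  # seed labels
    for x, _ in labeled:
        remaining.remove(x)
    for _ in range(budget):
        b = boundary(labeled)
        x = min(remaining, key=lambda p: abs(p - b)) if active else rng.choice(remaining)
        remaining.remove(x)
        labeled.append((x, label(x)))
    return abs(boundary(labeled) - 0.5)

err_active, err_random = run(True), run(False)
print("boundary error, active querying:", round(err_active, 4))
print("boundary error, random querying:", round(err_random, 4))
```

Querying near the current boundary concentrates labels where they are informative, which is the same intuition behind window-based active learning for classification; the abstract's finding is that a good stratified baseline can sometimes match this.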

  11. A comparison of sample preparation strategies for biological tissues and subsequent trace element analysis using LA-ICP-MS.

    Science.gov (United States)

    Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas

    2017-03-01

Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been made in recent years. Nevertheless, sample preparation is also of major importance, and the optimal sample preparation strategy has still not been defined. While conventional histology employs a number of sample pre-treatment strategies, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. Formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, its potential use for elemental distribution studies is questionable due to the large number of sample preparation steps. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues and in FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkali metals. When analyzing transition metals, FFPE samples can give results comparable to snap-frozen tissues. Graphical abstract: sample preparation strategies for biological tissues are compared with regard to elemental distributions and average trace element concentrations.

  12. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. RESULTS The study population's mean whole-liver PDFF was 10.1±8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. Two- and three-ROI strategies also had ICC >0.995, but only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3.

  13. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. Two- and three-ROI strategies also had ICC >0.995, but only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
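The combinatorial subset evaluation described above is easy to reproduce on synthetic data. The sketch below generates made-up per-segment PDFF values (not the study's measurements), then scores every four-ROI subset against the nine-ROI average; note that C(9, 4) = 126, matching the number of four-ROI strategies reported in the abstract.

```python
from itertools import combinations
from statistics import mean
import random

random.seed(3)

# Synthetic per-segment PDFF values (%) for 50 "patients": a patient-level
# mean with mild segmental variation (made-up numbers, not study data).
patients = []
for _ in range(50):
    base = random.uniform(2, 30)
    patients.append([base + random.gauss(0, 1.0) for _ in range(9)])

nine_roi = [mean(p) for p in patients]          # nine-ROI reference per patient

# Score every 4-ROI subset by its mean absolute difference from the reference.
four_roi_strategies = list(combinations(range(9), 4))
worst = 0.0
for subset in four_roi_strategies:
    est = [mean(p[i] for i in subset) for p in patients]
    mad = mean(abs(a - b) for a, b in zip(est, nine_roi))
    worst = max(worst, mad)

print("number of 4-ROI strategies:", len(four_roi_strategies))
print("worst mean absolute difference vs nine-ROI (%):", round(worst, 3))
```

The study instead used ICCs and Bland-Altman limits of agreement, but the enumeration pattern over ROI subsets is the same.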

  14. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of the different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation in which source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
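For the simplest N+1 = 2 sources, N = 1 marker case, the random sampling idea reduces to repeatedly drawing source signatures from their distributions and solving the mass balance for the source fraction. The end-member means, spreads and mixed value below are invented for illustration.

```python
import random
from statistics import mean, stdev

random.seed(5)

# Two-source mixing with a single marker (the N + 1 = 2 sources, N = 1 case).
# End-member signatures and the measured mixture value are invented.
SRC1_MEAN, SRC1_SD = -27.0, 1.0     # source 1 signature distribution
SRC2_MEAN, SRC2_SD = -23.0, 1.0     # source 2 signature distribution
MIX = -25.0                         # measured signature of the mixed sample

draws = []
for _ in range(100_000):
    s1 = random.gauss(SRC1_MEAN, SRC1_SD)
    s2 = random.gauss(SRC2_MEAN, SRC2_SD)
    f = (MIX - s2) / (s1 - s2)      # mass-balance solution for the source-1 fraction
    if 0.0 <= f <= 1.0:             # discard non-physical draws
        draws.append(f)

print("source-1 fraction: mean", round(mean(draws), 3), "sd", round(stdev(draws), 3))
```

Because the fraction is a nonlinear function of the source signatures, end-member variability can shift the mean estimate as well as widen its uncertainty, which is the bias the RS strategy is designed to expose.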

  15. Analytical sample preparation strategies for the determination of antimalarial drugs in human whole blood, plasma and urine

    DEFF Research Database (Denmark)

    Casas, Monica Escolà; Hansen, Martin; Krogh, Kristine A

    2014-01-01

    the available sample preparation strategies combined with liquid chromatographic (LC) analysis to determine antimalarials in whole blood, plasma and urine published over the last decade. Sample preparation can be done by protein precipitation, solid-phase extraction, liquid-liquid extraction or dilution. After...

  16. Current advances and strategies towards fully automated sample preparation for regulated LC-MS/MS bioanalysis.

    Science.gov (United States)

    Zheng, Naiyu; Jiang, Hao; Zeng, Jianing

    2014-09-01

    Robotic liquid handlers (RLHs) have been widely used in automated sample preparation for liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. Automated sample preparation for regulated bioanalysis offers significantly higher assay efficiency, better data quality and potential bioanalytical cost-savings. For RLHs that are used for regulated bioanalysis, there are additional requirements, including 21 CFR Part 11 compliance, software validation, system qualification, calibration verification and proper maintenance. This article reviews recent advances in automated sample preparation for regulated bioanalysis in the last 5 years. Specifically, it covers the following aspects: regulated bioanalysis requirements, recent advances in automation hardware and software development, sample extraction workflow simplification, strategies towards fully automated sample extraction, and best practices in automated sample preparation for regulated bioanalysis.

  17. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice, e.g., populations that are accessible and available. Some of the non-probabilit...

  18. Sampling strategy for estimating human exposure pathways to consumer chemicals

    Directory of Open Access Journals (Sweden)

    Eleni Papadopoulou

    2016-03-01

Full Text Available Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each pathway. The selected groups of chemicals to be studied are consumer chemicals whose production and use are currently in a state of transition: per- and polyfluorinated alkyl substances (PFASs), traditional and "emerging" brominated flame retardants (BFRs and EBFRs), organophosphate esters (OPEs) and phthalate esters (PEs). Information about human exposure to these contaminants is needed due to existing data gaps on human exposure intakes from multiple exposure pathways and on relationships between internal and external exposure. Indoor environment, food and biological samples were collected from 61 participants and their households in the Oslo area (Norway) on two consecutive days during winter 2013-14. Air, dust, hand wipes, and duplicate diet (food and drink) samples were collected as indicators of external exposure, and blood, urine, blood spots, hair, nails and saliva as indicators of internal exposure. A food diary, food frequency questionnaire (FFQ) and indoor environment questionnaire were also implemented. Approximately 2000 samples were collected in total, and participant views on their experiences of this campaign were collected via questionnaire. While 91% of our participants were positive about future participation in a similar project, some tasks were viewed as problematic. Completing the food diary and collecting duplicate food/drink portions were the tasks most frequently reported as "hard"/"very hard". Nevertheless, a strong positive correlation between the reported total mass of food/drinks in the food record and the total weight of the food/drinks in the collection bottles was observed, an indication of accurate performance

  19. Planning schistosomiasis control: investigation of alternative sampling strategies for Schistosoma mansoni to target mass drug administration of praziquantel in East Africa.

    Science.gov (United States)

    Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon

    2011-09-01

In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools, and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of the sampling strategies in correctly classifying schools according to treatment needs, and their cost-effectiveness in identifying high-prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a higher cost per high-prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
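A minimal LQAS sketch, with invented thresholds rather than the programme's actual decision rules: sample n children per school and classify the school as high prevalence (needing mass drug administration) when the number of positives reaches a cut-off d.

```python
import random

random.seed(11)

# Illustrative LQAS rule: examine n = 15 children per school and classify the
# school as "high prevalence" if d = 4 or more are infected. These cut-offs
# are assumptions for the sketch, not programme values.
n, d = 15, 4
TREAT_THRESHOLD = 0.25            # schools above this true prevalence need MDA

def classify(true_prevalence):
    positives = sum(random.random() < true_prevalence for _ in range(n))
    return positives >= d

# Simulate many schools with known prevalences and count correct decisions.
schools = [random.uniform(0.0, 0.6) for _ in range(5000)]
correct = sum(classify(p) == (p > TREAT_THRESHOLD) for p in schools)
print("fraction of schools correctly classified:", correct / len(schools))
```

Misclassifications cluster around the prevalence threshold, which is why simulation studies like the one above (and, at much larger scale, in the paper) are used to tune n and d against survey cost.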

  20. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

Modern technologies such as the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are generating such massive amounts of data that it is becoming very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care, because many factors are involved in determining the correct sample size. The approach proposed in this paper tries to solve this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the 'sufficient sample size', which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
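The paper's exact statistical formula is not reproduced in the abstract; as a stand-in, the sketch below uses the classic Cochran sample-size formula with a finite population correction, which likewise returns a "sufficient" sample size from a few parameters (confidence level, margin of error, expected proportion).

```python
import math

# Cochran's formula with a finite population correction: an illustrative
# sufficient-sample-size calculation, not the paper's own formula.
def sufficient_sample_size(population, confidence_z=1.96, margin=0.02, p=0.5):
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                   # finite population correction
    return math.ceil(n)

for N in (10_000, 1_000_000, 100_000_000):
    print(N, "->", sufficient_sample_size(N))
```

Note how the required sample grows only marginally once the population is large: the dataset size barely matters beyond a point, which is exactly why sampling scales to huge datasets.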

  1. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey

    Directory of Open Access Journals (Sweden)

    Mirel Lisa B.

    2017-06-01

Full Text Available Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year's NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder-to-reach population.
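The optimal-allocation step can be illustrated with Neyman allocation, which assigns more of a fixed total sample to substrata that are larger or more variable. The strata names and figures below are invented, not MEPS/NHIS values.

```python
# Neyman allocation of a fixed total sample across substrata: each stratum h
# receives n_h proportional to N_h * S_h (size times outcome standard
# deviation). All numbers here are illustrative.
strata = {
    # name: (population size N_h, outcome standard deviation S_h)
    "low-cost users": (60_000, 10.0),
    "medium-cost users": (30_000, 40.0),
    "high-cost users": (10_000, 120.0),
}
n_total = 4_000

denom = sum(N_h * S_h for N_h, S_h in strata.values())
allocation = {name: round(n_total * N_h * S_h / denom)
              for name, (N_h, S_h) in strata.items()}
print(allocation)
```

With these figures the two high-variance strata absorb most of the sample even though the low-cost stratum is by far the largest, which is the kind of reallocation linked paradata makes possible.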

  2. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which eases the pressure generated by large-scale data. The large volume of vibration data from a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent data segments in the time domain. A problem arises, however, in that the fault features may be weaker than before: when the noise is stronger than the vibration signals, noise may be mistaken for the peaks, making the fault features impossible to extract by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on the orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
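The recovery step named above, orthogonal matching pursuit, can be sketched on a synthetic sparse signal (a stand-in for isolated fault-frequency components): far fewer random measurements than Nyquist sampling would require are taken, and the greedy pursuit recovers the sparse vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse vector (stand-in for isolated fault-frequency components) is
# recovered from m << n random measurements via orthogonal matching pursuit.
n, m, k = 256, 64, 4                    # signal length, measurements, sparsity
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.uniform(1.0, 2.0, size=k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x                                       # compressed measurements

def omp(A, y, k):
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))    # best-matching atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)  # re-fit on support
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("max reconstruction error:", float(np.max(np.abs(x_hat - x))))
```

Here 64 measurements recover a length-256 signal exactly because only 4 entries are nonzero; the paper applies the same principle to peak-reduced bearing vibration data.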

  3. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which eases the pressure generated by large-scale data. The large volume of vibration data from a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent data segments in the time domain. A problem arises, however, in that the fault features may be weaker than before: when the noise is stronger than the vibration signals, noise may be mistaken for the peaks, making the fault features impossible to extract by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on the orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  4. Measuring strategies for learning regulation in medical education: scale reliability and dimensionality in a Swedish sample.

    Science.gov (United States)

    Edelbring, Samuel

    2012-08-15

The degree of learners' self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach's alpha was 0.82, 0.72, and 0.65 for the self-regulation, external regulation and lack of regulation scales, respectively. The dimensionality was adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students' regulation strategies enables a broad empirical base for increasing knowledge of regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
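Cronbach's alpha, the reliability coefficient quoted above, is straightforward to compute. The item-response matrix below is invented (it is not ILS data); rows are respondents and columns are items of one scale.

```python
from statistics import variance

# Cronbach's alpha for a small synthetic item-response matrix
# (rows = respondents, columns = items of one scale; made-up values).
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 2, 1],
]

def cronbach_alpha(rows):
    k = len(rows[0])                                            # number of items
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])                # variance of sum scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(responses)
print("Cronbach's alpha:", round(alpha, 3))
```

Alpha approaches 1 when items covary strongly (as in this deliberately consistent toy matrix) and falls toward 0 when they do not, which is how values like 0.82 vs. 0.65 across the three scales are read.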

  5. Catch, effort and sampling strategies in the highly variable sardine fisheries around East Java, Indonesia.

    NARCIS (Netherlands)

    Pet, J.S.; Densen, van W.L.T.; Machiels, M.A.M.; Sukkel, M.; Setyohady, D.; Tumuljadi, A.

    1997-01-01

    Temporal and spatial patterns in the fishery for Sardinella spp. around East Java, Indonesia, were studied in an attempt to develop an efficient catch and effort sampling strategy for this highly variable fishery. The inter-annual and monthly variation in catch, effort and catch per unit of effort

  6. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Science.gov (United States)

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes
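A variable-density subsampling pattern of the kind compared above can be sketched as a probabilistic k-space mask whose sampling density falls off with distance from the centre. The matrix size, fall-off shape and target acceleration below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Probabilistic variable-density mask: sampling probability falls off with
# distance from the k-space centre, so low spatial frequencies (which carry
# most signal energy) are kept densely. All parameters are arbitrary here.
ny, nz, reduction = 128, 128, 4        # phase-encode matrix, target undersampling

ky = np.linspace(-1, 1, ny)[:, None]
kz = np.linspace(-1, 1, nz)[None, :]
r = np.sqrt(ky**2 + kz**2)             # normalised distance from k-space centre

density = (1.0 - np.clip(r, 0.0, 1.0)) ** 2          # quadratic fall-off
density *= (ny * nz / reduction) / density.sum()     # normalise to sample budget
mask = rng.random((ny, nz)) < density                # Bernoulli draw per location

print("achieved acceleration:", round(mask.size / mask.sum(), 2))
print("centre fully sampled:", bool(mask[60:68, 60:68].all()))
```

Subsampled acquisitions built from masks like this are what the temporally constrained reconstruction (TCR) algorithm in the abstract fills back in over time.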

  7. Field screening sampling and analysis strategy and methodology for the 183-H Solar Evaporation Basins: Phase 2, Soils

    International Nuclear Information System (INIS)

    Antipas, A.; Hopkins, A.M.; Wasemiller, M.A.; McCain, R.G.

    1996-01-01

    This document provides a sampling/analytical strategy and methodology for Resource Conservation and Recovery Act (RCRA) closure of the 183-H Solar Evaporation Basins within the boundaries and requirements identified in the initial Phase II Sampling and Analysis Plan for RCRA Closure of the 183-H Solar Evaporation Basins

  8. Modelling of in-stream nitrogen and phosphorus concentrations using different sampling strategies for calibration data

    Science.gov (United States)

    Jomaa, Seifeddine; Jiang, Sanyuan; Yang, Xiaoqiang; Rode, Michael

    2016-04-01

    It is known that a good evaluation and prediction of surface water pollution is mainly limited by the monitoring strategy and the capability of the hydrological water quality model to reproduce the internal processes. To this end, a compromise sampling frequency, which can reflect the dynamic behaviour of leached nutrient fluxes responding to changes in land use, agricultural practices and point sources, and an appropriate process-based water quality model are required. The objective of this study was to test the identification of hydrological water quality model parameters (nitrogen and phosphorus) under two different monitoring strategies: (1) a regular grab-sampling approach and (2) regular grab-sampling with additional monitoring during hydrological events using automatic samplers. First, the semi-distributed hydrological water quality model HYPE (Hydrological Predictions for the Environment) was successfully calibrated (1994-1998) for discharge (NSE = 0.86), nitrate-N (lowest NSE for nitrate-N load = 0.69), particulate phosphorus and soluble phosphorus in the Selke catchment (463 km2, central Germany) using the regular grab-sampling approach (biweekly to monthly for nitrogen and phosphorus concentrations). Second, the model was successfully validated for the period 1999-2010 for discharge, nitrate-N, particulate phosphorus and soluble phosphorus (lowest NSE for soluble phosphorus load = 0.54). Results showed that when additional grab samples taken during events were used (period 2011-2013), the hydrological model could reproduce the nitrate-N and soluble phosphorus concentrations reasonably well, but could not represent the measured particulate phosphorus. This reflects the importance of suspended sediment during hydrological events in increasing the concentrations of particulate phosphorus. The HYPE model could
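
    The NSE values quoted in this record are Nash–Sutcliffe efficiencies. A minimal, generic sketch of the metric (not HYPE-specific; the series below are invented, not the Selke data):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the model predicts no better than the mean of the observations."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

obs = np.array([1.0, 2.0, 3.0, 4.0])
sim = np.array([1.1, 1.9, 3.2, 3.8])
nse = nash_sutcliffe(obs, sim)   # 1 - 0.10 / 5.0 = 0.98
```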

  9. Measuring strategies for learning regulation in medical education: Scale reliability and dimensionality in a Swedish sample

    Directory of Open Access Journals (Sweden)

    Edelbring Samuel

    2012-08-01

    Full Text Available Abstract Background The degree of learners’ self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are being used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). Methods The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Results Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach’s alpha: 0.82, 0.72, and 0.65 for the self-regulation, external regulation and lack of regulation scales, respectively. The dimensionality of the scales was adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. Discussion The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students’ regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
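
    Cronbach's alpha, the reliability coefficient reported above for the three scales, can be computed from an item-score matrix as in the following generic sketch (not the study's analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(scale totals))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return float(k / (k - 1) * (1 - item_vars.sum() / total_var))

# Perfectly correlated items give alpha = 1:
x = np.array([1.0, 2.0, 3.0, 4.0])
alpha = cronbach_alpha(np.column_stack([x, x, x]))   # 1.0
```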

  10. On-capillary sample cleanup method for the electrophoretic determination of carbohydrates in juice samples.

    Science.gov (United States)

    Morales-Cid, Gabriel; Simonet, Bartolomé M; Cárdenas, Soledad; Valcárcel, Miguel

    2007-05-01

    On many occasions, sample treatment is a critical step in electrophoretic analysis. As an alternative to batch procedures, in this work a new strategy is presented with a view to developing an on-capillary sample cleanup method. This strategy is based on the partial filling of the capillary with carboxylated single-walled carbon nanotubes (c-SWNTs). The nanoparticles retain interferences from the matrix, allowing the determination and quantification of carbohydrates (viz. glucose, maltose and fructose). The precision of the method for the analysis of real samples ranged from 5.3 to 6.4%. The proposed method was compared with a method based on batch filtration of the juice sample through diatomaceous earth and further electrophoretic determination, which was also validated in this work. The RSD for the latter method ranged from 5.1 to 6%. The results obtained by both methods were statistically comparable, demonstrating the accuracy of the proposed methods and their effectiveness. Electrophoretic separation of carbohydrates was achieved using 200 mM borate solution as a buffer at pH 9.5 and applying 15 kV. During separation, the capillary temperature was kept constant at 40 °C. For the on-capillary cleanup method, a solution containing 50 mg/L of c-SWNTs prepared in 300 mM borate solution at pH 9.5 was introduced for 60 s into the capillary just before sample introduction. For the electrophoretic analysis of samples cleaned in batch with diatomaceous earth, it is also recommended to introduce into the capillary, just before the sample, a 300 mM borate solution, as it enhances the sensitivity and electrophoretic resolution.

  11. Practical experiences with an extended screening strategy for genetically modified organisms (GMOs) in real-life samples.

    Science.gov (United States)

    Scholtens, Ingrid; Laurensse, Emile; Molenaar, Bonnie; Zaaijer, Stephanie; Gaballo, Heidi; Boleij, Peter; Bak, Arno; Kok, Esther

    2013-09-25

    Nowadays most animal feed products imported into Europe have a GMO (genetically modified organism) label, meaning that they contain European Union (EU)-authorized GMOs. For enforcement of these labeling requirements it is necessary, with the rising number of EU-authorized GMOs, to perform an increasing number of analyses. In addition, it is necessary to test products for the potential presence of EU-unauthorized GMOs. Analysis for EU-authorized and -unauthorized GMOs in animal feed has thus become laborious and expensive. Initial screening steps may reduce the number of GMO identification methods that need to be applied, but with increasing diversity, screening with GMO elements has also become more complex. For the present study, an informative, detailed 24-element screening and subsequent identification strategy was applied to 50 animal feed samples. Almost all feed samples were labeled as containing GMO-derived materials. The main goal of the study was therefore to investigate whether a detailed screening strategy would reduce the number of subsequent identification analyses. An additional goal was to test the samples in this way for the potential presence of EU-unauthorized GMOs. Finally, to test the robustness of the approach, eight of the samples were tested in a concise interlaboratory study. No significant differences were found between the results of the two laboratories.
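
    The screening-before-identification idea, that only GMO events whose full element profile is detected need a subsequent identification test, can be sketched with a simple lookup. The event names and element sets below are invented for illustration, not actual screening profiles.

```python
# Hypothetical screening-element profiles: GMO event -> elements it carries.
profiles = {
    "event_A": {"p35S", "tNOS", "pat"},
    "event_B": {"p35S", "bar"},
    "event_C": {"tNOS", "cry1Ab"},
}

def candidate_events(detected, profiles):
    """Events whose full element profile is contained in the detected
    elements; only these need a subsequent identification method."""
    return sorted(event for event, elements in profiles.items()
                  if elements <= detected)

# Screening detected p35S, tNOS and pat: only event_A remains a candidate.
candidates = candidate_events({"p35S", "tNOS", "pat"}, profiles)
```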

  12. Digital Content Strategies

    OpenAIRE

    Halbheer, Daniel; Stahl, Florian; Koenigsberg, Oded; Lehmann, Donald R

    2013-01-01

    This paper studies content strategies for online publishers of digital information goods. It examines sampling strategies and compares their performance to paid content and free content strategies. A sampling strategy, where some of the content is offered for free and consumers are charged for access to the rest, is known as a "metered model" in the newspaper industry. We analyze optimal decisions concerning the size of the sample and the price of the paid content when sampling serves the dua...

  13. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants such as white clover, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When fewer than 15 samples per cultivar were used, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a sample of 20 plants per cultivar was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
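
    A minimal sketch of the two diversity parameters from a binary AFLP band matrix, treating band frequency as allele frequency. This is a deliberate simplification for dominant markers and toy data, not the study's computation.

```python
import numpy as np

def diversity_indices(band_matrix):
    """Expected heterozygosity (He) and Shannon index (I) from a binary
    (individuals x loci) band matrix, treating band frequency per locus
    as an allele frequency (a simplification for dominant markers)."""
    m = np.asarray(band_matrix, dtype=float)
    p = m.mean(axis=0)                     # band frequency per locus
    he = float(np.mean(2 * p * (1 - p)))   # mean 2p(1-p) over loci
    inner = (p > 0) & (p < 1)              # avoid log(0) at fixed loci
    terms = np.zeros_like(p)
    terms[inner] = -(p[inner] * np.log(p[inner])
                     + (1 - p[inner]) * np.log(1 - p[inner]))
    return he, float(terms.mean())

# Four individuals scored at two loci (toy data): He = 0.5, I = ln 2
he, shannon = diversity_indices([[1, 0], [0, 0], [1, 1], [0, 1]])
```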

  14. Can preapproval jump-start the allowance market

    Energy Technology Data Exchange (ETDEWEB)

    Dudek, D.J.; Goffman, J.

    1992-06-01

    With compliance deadlines approaching in three years, utility, environmental and financial planners and their regulators are in the process of grappling with the requirements imposed, and opportunities created, by the acid rain program established under Title IV of the Clean Air Act Amendments of 1990. The novel element of the program - emissions or allowance trading through a nationwide allowance market - presents great challenges for utilities and their regulators. Perhaps the foremost challenge is establishing the allowance market. If state utility commissions subject utilities' compliance strategies to traditional after-the-fact prudence reviews, as tradition would impel them to do, the attendant regulatory risks are likely to push utilities toward more conservative compliance schemes that underuse allowance trading (as the exchange at the head of this article demonstrates). If that happens, the market will fail to develop, and its full potential for environmental benefit at least cost will go unrealized. This, in turn, is likely to strengthen the case for non-market regulatory mechanisms - a vicious circle. In this paper, the authors suggest a way out of this.

  16. Serum sample containing endogenous antibodies interfering with multiple hormone immunoassays. Laboratory strategies to detect interference

    Directory of Open Access Journals (Sweden)

    Elena García-González

    2016-04-01

    Full Text Available Objectives: Endogenous antibodies (EA) may interfere with immunoassays, causing erroneous results for hormone analyses. As (in most cases) this interference arises from the assay format, and most immunoassays, even from different manufacturers, are constructed in a similar way, it is possible for a single type of EA to interfere with different immunoassays. Here we describe the case of a patient whose serum sample contained EA that interfered with several hormone tests, and discuss the strategies deployed to detect the interference. Subjects and methods: Over a period of four years, a 30-year-old man was subjected to a plethora of laboratory and imaging diagnostic procedures as a consequence of elevated hormone results, mainly of pituitary origin, which did not correlate with the overall clinical picture. Results: Once analytical interference was suspected, the best laboratory approaches to investigate it were sample reanalysis on an alternative platform and sample incubation with antibody-blocking tubes. Construction of an in-house ‘nonsense’ sandwich assay was also a valuable strategy to confirm interference. In contrast, serial sample dilutions were of no value in our case, while polyethylene glycol (PEG) precipitation gave inconclusive results, probably due to the use of inappropriate PEG concentrations for several of the tests assayed. Conclusions: Clinicians and laboratorians must be aware of the drawbacks of immunometric assays, and alert to the possibility of EA interference when results do not fit the clinical pattern. Keywords: Endogenous antibodies, Immunoassay, Interference, Pituitary hormones, Case report
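
    The simplest of the checks mentioned, reanalysis on an alternative platform, amounts to flagging discordant results between instruments. A toy sketch follows; the 30% threshold and the hormone values are invented for illustration, not clinical decision limits.

```python
def flags_interference(result_a, result_b, tolerance=0.30):
    """Flag possible assay interference when the same sample, measured on
    two different immunoassay platforms, disagrees by more than `tolerance`
    (fractional difference relative to the mean of the two results)."""
    mean = (result_a + result_b) / 2
    return abs(result_a - result_b) / mean > tolerance

# Hypothetical hormone results for one sample on two platforms:
discordant = flags_interference(1200.0, 500.0)   # large gap -> investigate
```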

  17. [Identification of Systemic Contaminations with Legionella Spec. in Drinking Water Plumbing Systems: Sampling Strategies and Corresponding Parameters].

    Science.gov (United States)

    Völker, S; Schreiber, C; Müller, H; Zacharias, N; Kistemann, T

    2017-05-01

    After the amendment of the Drinking Water Ordinance in 2011, the requirements for the hygienic-microbiological monitoring of drinking water installations have increased significantly. In the BMBF-funded project "Biofilm Management" (2010-2014), we examined the extent to which established sampling strategies in practice can uncover drinking water plumbing systems systemically colonized with Legionella. Moreover, we investigated additional parameters that might be suitable for detecting systemic contamination. We subjected the drinking water plumbing systems of 8 buildings with known microbial contamination (Legionella) to intensive hygienic-microbiological sampling with high spatial and temporal resolution. A total of 626 drinking hot water samples were analyzed with classical culture-based methods. In addition, comprehensive hygienic observations were conducted in each building, and qualitative interviews with operators and users were applied. Collected tap-specific parameters were quantitatively analyzed by means of sensitivity and accuracy calculations. The systemic presence of Legionella in drinking water plumbing systems has a high spatial and temporal variability. Established sampling strategies were only partially suitable for detecting long-term Legionella contamination in practice. In particular, the sampling of hot water at the calorifier and circulation re-entrance showed little significance in terms of contamination events. To detect the systemic presence of Legionella, the parameters stagnation (qualitatively assessed) and temperature (compliance with the 5K-rule) showed better results. © Georg Thieme Verlag KG Stuttgart · New York.
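
    The tap-specific sensitivity and accuracy calculations mentioned above reduce to confusion-matrix arithmetic; a generic sketch with hypothetical counts (not the study's data):

```python
def sensitivity_accuracy(tp, fp, tn, fn):
    """Sensitivity and overall accuracy of a tap-level indicator (e.g.
    stagnation or the temperature rule) against the reference finding of
    systemic Legionella colonisation."""
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, accuracy

# Hypothetical counts for one parameter across sampled taps:
sens, acc = sensitivity_accuracy(tp=18, fp=6, tn=30, fn=2)   # 0.90, ~0.86
```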

  18. A sampling strategy to establish existing plant configuration baselines

    International Nuclear Information System (INIS)

    Buchanan, L.P.

    1995-01-01

    The Department of Energy's Gaseous Diffusion Plants (DOEGDP) are undergoing a Safety Analysis Update Program. As part of this program, critical existing structures are being reevaluated for Natural Phenomena Hazards (NPH) based on the recommendations of UCRL-15910. The Department of Energy has specified that current plant configurations be used in the performance of these reevaluations. This paper presents the process and results of a walkdown program implemented at DOEGDP to establish the current configuration baseline for these existing critical structures for use in subsequent NPH evaluations. These structures are classified as moderate hazard facilities and were constructed in the early 1950s. The process involved a statistical sampling strategy to determine the validity of critical design information as represented on the original design drawings, such as member sizes, orientation, connection details and anchorage. A floor-load inventory of the dead load of the equipment, both permanently attached and spare, was also performed, as well as a walkthrough inspection of the overall structure to identify any other significant anomalies.

  19. Use of a holder-vacuum tube device to save on-site hands in preparing urine samples for head-space gas-chromatography, and its application to determine the time allowance for sample sealing.

    Science.gov (United States)

    Kawai, Toshio; Sumino, Kimiaki; Ohashi, Fumiko; Ikeda, Masayuki

    2011-01-01

    To facilitate urine sample preparation prior to head-space gas-chromatographic (HS-GC) analysis. Urine samples containing one of five solvents (acetone, methanol, methyl ethyl ketone, methyl isobutyl ketone and toluene) at the levels of biological exposure limits were aspirated into a vacuum tube via a holder, a device commercially available for venous blood collection (the vacuum tube method). The urine sample, 5 ml, was quantitatively transferred to a 20-ml head-space vial prior to HS-GC analysis. The loaded tubes were stored at +4 ℃ in the dark for up to 3 d. The vacuum tube method facilitated on-site procedures of urine sample preparation for HS-GC with no significant loss of solvents in the sample and no need of skilled hands, while on-site sample preparation time was significantly reduced. Furthermore, no loss of solvents was detected during the 3-d storage, irrespective of hydrophilic (acetone) or lipophilic (toluene) solvents. In a pilot application, the high performance of the vacuum tube method in sealing a sample in an air-tight space made it possible to confirm that no solvent is lost when sealing is completed within 5 min after urine voiding, and that the allowance time is as long as 30 min in the case of toluene in urine. The use of the holder-vacuum tube device not only saves hands for transfer of the sample to an air-tight space, but also facilitates sample storage prior to HS-GC analysis.

  20. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  1. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    Science.gov (United States)

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are among the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, such literature lacks articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected examples of calibration strategies, with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA-ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. External calibration strategy for trace element quantification in botanical samples by LA-ICP-MS using filter paper

    International Nuclear Information System (INIS)

    Nunes, Matheus A.G.; Voss, Mônica; Corazza, Gabriela; Flores, Erico M.M.; Dressler, Valderi L.

    2016-01-01

    The use of reference solutions dispersed on filter paper discs is proposed for the first time as an external calibration strategy for matrix matching and determination of As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Sr, V and Zn in plants by laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS). The procedure is based on the use of filter paper discs as support for aqueous reference solutions, which are further evaporated, resulting in solid standards with concentrations up to 250 μg g⁻¹ of each element. The use of filter paper for calibration is proposed as matrix-matched standards due to the similarities of this material with botanical samples with regard to carbon concentration and its distribution through both matrices. These characteristics allowed the use of ¹³C as internal standard (IS) during the analysis by LA-ICP-MS. In this way, parameters such as analyte signal normalization with ¹³C, carrier gas flow rate, laser energy, spot size, and calibration range were monitored. The calibration procedure using solution deposition on filter paper discs resulted in improved precision when ¹³C was used as IS. The method precision was calculated from the analysis of a certified reference material (CRM) of botanical matrix, considering the RSD obtained for 5 line scans, and was lower than 20%. Accuracy of the LA-ICP-MS determinations was evaluated by analysis of four CRM pellets of botanical composition, as well as by comparison with results obtained by ICP-MS using solution nebulization after microwave-assisted digestion. Plant samples of unknown elemental composition were analyzed by the proposed LA method and good agreement was obtained with the results of solution analysis. Limits of detection (LOD) established for LA-ICP-MS were obtained by the ablation of 10 lines on the filter paper disc containing 40 μL of 5% HNO₃ (v v⁻¹) as calibration blank. Values ranged from 0.05 to 0.81 μg g⁻¹. Overall, the use of filter paper as support for dried
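
    The calibration and LOD steps described above, carbon-normalised signals fitted against standard concentrations, with the detection limit taken as 3× the standard deviation of the normalised blank divided by the slope, can be sketched as follows. All numbers are invented for illustration, not the paper's data.

```python
import numpy as np

def calibrate(conc, analyte_counts, c13_counts):
    """Fit a line to 13C-normalised analyte intensity vs standard
    concentration; returns (slope, intercept)."""
    y = np.asarray(analyte_counts, dtype=float) / np.asarray(c13_counts, dtype=float)
    slope, intercept = np.polyfit(np.asarray(conc, dtype=float), y, 1)
    return slope, intercept

def lod_3sigma(blank_counts, c13_blank, slope):
    """LOD = 3 x standard deviation of the normalised blank / slope."""
    norm = np.asarray(blank_counts, dtype=float) / np.asarray(c13_blank, dtype=float)
    return 3 * norm.std(ddof=1) / slope

# Toy standards (0-250 ug/g) and blank line scans:
slope, intercept = calibrate([0, 50, 100, 250], [0, 100, 200, 500], [1, 1, 1, 1])
lod = lod_3sigma([0.1, 0.2, 0.3], [1, 1, 1], slope)   # 3 * 0.1 / 2 = 0.15
```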

  4. Impact of shrinking measurement error budgets on qualification metrology sampling and cost

    Science.gov (United States)

    Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas

    2014-04-01

    When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically based equations that allow the user to estimate the needed sampling from appropriate inputs, enabling informed "risk vs. reward" decisions about sampling, cost, and equipment. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (transmission electron microscopy) sampling practices and the enormous costs associated with the TEM sampling needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.
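
    The "inverse" direction, solving for sampling from a target confidence limit rather than computing confidence from fixed sampling, can be illustrated with the textbook sample-size formula for estimating a mean. This is a generic statistical sketch, not the paper's inverse TMU equations.

```python
import math

def required_samples(sigma, half_width, z=1.96):
    """Measurements needed so the 95% confidence interval on a mean has
    the requested half-width, assuming independent noise with standard
    deviation sigma: n = (z * sigma / half_width)^2, rounded up."""
    return math.ceil((z * sigma / half_width) ** 2)

# e.g. 1 nm single-measurement noise, +/-0.25 nm target on the mean bias:
n = required_samples(sigma=1.0, half_width=0.25)   # 62
```

    Halving the target half-width quadruples the required sampling, which is the "risk vs. reward" trade-off the record describes.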

  5. A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2016-08-01

    Full Text Available Increasing attention is being paid to leaf area index (LAI) retrieval in mountainous areas. Mountainous areas present extreme topographic variability, and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS) in support of LAI validation was presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH) sampling strategy. A case study in Hailuogou, Sichuan province, China was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI), land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent, while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, and this makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than the LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
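
    A toy stand-in for the cost-constrained search: a weighted objective trading distributional representativeness of one auxiliary variable against mean access cost, optimised by greedy swaps. The record's method uses conditioned Latin hypercube sampling with annealing; the data, weights and greedy search here are synthetic simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic candidate locations: an auxiliary variable (NDVI) and an
# access cost per location (e.g. travel time, higher on steep slopes).
ndvi = rng.uniform(0.1, 0.9, 500)
cost = rng.uniform(1.0, 10.0, 500)

def objective(sample_idx, alpha=0.7):
    """Weighted trade-off: histogram mismatch between the sample and the
    full NDVI distribution (representativeness) plus normalised mean cost."""
    h_all, edges = np.histogram(ndvi, bins=10, density=True)
    h_s, _ = np.histogram(ndvi[sample_idx], bins=edges, density=True)
    return (alpha * np.abs(h_all - h_s).mean()
            + (1 - alpha) * cost[sample_idx].mean() / cost.max())

# Greedy swap search over 30-plot designs (a crude stand-in for annealing):
init = list(rng.choice(500, 30, replace=False))
idx = init.copy()
for _ in range(2000):
    i, j = int(rng.integers(30)), int(rng.integers(500))
    if j not in idx:
        trial = idx.copy()
        trial[i] = j
        if objective(trial) < objective(idx):
            idx = trial
```

    Raising `alpha` favours representativeness over cheap plots; lowering it favours accessible plots, which is the regulated compromise the abstract describes.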

  6. 78 FR 23896 - Notice of Funds Availability: Inviting Applications for the Quality Samples Program

    Science.gov (United States)

    2013-04-23

    ... proposals for the 2014 Quality Samples Program (QSP). The intended effect of this notice is to solicit... Strategy (UES) application Internet Web site. The UES allows applicants to submit a single consolidated and... of the FAS marketing programs, financial assistance programs, and market access programs. The...

  7. 40 CFR 35.2025 - Allowance and advance of allowance.

    Science.gov (United States)

    2010-07-01

    ... advance of allowance. (a) Allowance. Step 2+3 and Step 3 grant agreements will include an allowance for facilities planning and design of the project and Step 7 agreements will include an allowance for facility... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Allowance and advance of allowance. 35...

  8. Development of an automatic sampling device for the continuous measurement of atmospheric carbonyl compounds

    International Nuclear Information System (INIS)

    Perraud, V.

    2007-12-01

    Two sampling strategies were studied to develop an automatic instrument for the continuous measurement of atmospheric carbonyl compounds. Because of its specificity towards carbonyl compounds, sampling by transfer from the gaseous phase into a liquid phase, combined with simultaneous chemical derivatization of the trapped compounds, was studied first. However, this method allows neither a quantitative sampling of all the studied carbonyl compounds nor a continuous measurement in the field. To overcome these difficulties, a second strategy was investigated: cryogenic adsorption onto a solid adsorbent followed by thermodesorption and direct analysis by GC/MS. Collection efficiency using different solid adsorbents was found to be greater than 95% for carbonyl compounds with 1 to 7 carbon atoms. This work is a successful first step towards the realization of an automatic sampling device for the continuous measurement of atmospheric carbonyl compounds. (author)

  9. Interactive Control System, Intended Strategy, Implemented Strategy dan Emergent Strategy

    OpenAIRE

    Tubagus Ismail; Darjat Sudrajat

    2012-01-01

    The purpose of this study was to examine the relationship between the management control system (MCS) and strategy formation processes, namely: intended strategy, emergent strategy and implemented strategy. The focus of MCS in this study was the interactive control system. The study was based on Structural Equation Modeling (SEM) as its multivariate analysis instrument. The samples were upper-middle managers of manufacturing companies in Banten Province, DKI Jakarta Province and West Java Province. AM...

  10. Evaluation of 5-FU pharmacokinetics in cancer patients with DPD deficiency using a Bayesian limited sampling strategy

    NARCIS (Netherlands)

    Van Kuilenburg, A.; Hausler, P.; Schalhorn, A.; Tanck, M.; Proost, J.H.; Terborg, C.; Behnke, D.; Schwabe, W.; Jabschinsky, K.; Maring, J.G.

    Aims: Dihydropyrimidine dehydrogenase (DPD) is the initial enzyme in the catabolism of 5-fluorouracil (5FU) and DPD deficiency is an important pharmacogenetic syndrome. The main purpose of this study was to develop a limited sampling strategy to evaluate the pharmacokinetics of 5FU and to detect

  11. Effect of measurement error budgets and hybrid metrology on qualification metrology sampling

    Science.gov (United States)

    Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Osorio, Carmen; Archie, Chas

    2014-10-01

    Until now, metrologists have had no statistics-based method to determine, before the start of an accuracy experiment, the sampling it requires. We show a solution to this problem, called inverse total measurement uncertainty (TMU) analysis, by presenting statistically based equations that allow the user to estimate the needed sampling from appropriate inputs and to make important "risk versus reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions and then to reveal why low sampling can lead to unstable and misleading results. One model is developed that can help experimenters minimize sampling costs. A second cost model reveals the inadequacy of some current sampling practices, and the enormous costs associated with sampling that provides reasonable levels of certainty in the result. We introduce strategies for managing and mitigating these costs and begin the discussion of how fabs are able to manufacture devices using minimal reference sampling when qualifying metrology steps. Finally, the relationship between inverse TMU analysis and hybrid metrology is explored.
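    The core "inverse" idea, solving for the sampling before the experiment starts, can be illustrated with the textbook confidence-interval relation n = (z·sigma/E)^2. This sketch is only a simplified stand-in for the paper's inverse TMU equations, and the numbers in the example are hypothetical.

    ```python
    import math

    def required_samples(sigma, half_width, z=1.96):
        """Samples needed so the mean's 95% confidence-interval half-width
        is at most half_width, given single-measurement precision sigma
        (both in the same units)."""
        return math.ceil((z * sigma / half_width) ** 2)
    ```

    For example, a hypothetical CD-SEM with 0.5 nm single-measurement precision would need `required_samples(0.5, 0.1)` = 97 measurements to pin the mean to within 0.1 nm, but only 16 to reach 0.25 nm, which is exactly the kind of risk-versus-reward trade-off the abstract describes.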

  12. Adaptive designs for the one-sample log-rank test.

    Science.gov (United States)

    Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert

    2017-09-22

    Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows the survival curve of the patients under treatment to be compared with a prespecified reference survival curve. The reference curve usually represents the expected survival under standard of care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed in which provision is made for data-dependent sample size reassessment. The focus is on applying the inverse normal method. This is done in two different directions. The first strategy exploits the independent increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared with a single-stage fixed sample design. © 2017, The International Biometric Society.
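    The statistic at the heart of such designs can be sketched simply: compare the observed number of deaths O with the expected number E obtained by evaluating the reference curve's cumulative hazard at each patient's follow-up time; Z = (O - E)/sqrt(E) is then approximately standard normal. The exponential reference hazard and the follow-up data below are illustrative assumptions, not taken from the paper.

    ```python
    import math

    def one_sample_logrank(times, events, cum_hazard):
        # O = observed deaths; E = expected deaths under the reference curve
        O = sum(events)
        E = sum(cum_hazard(t) for t in times)
        return (O - E) / math.sqrt(E)

    # Illustrative exponential reference: hazard 0.1 per month
    ref = lambda t: 0.1 * t
    # Four patients, follow-up in months, 1 = death observed
    z = one_sample_logrank([5, 10, 15, 20], [1, 0, 1, 0], ref)
    ```

    Here E = 0.1 × (5 + 10 + 15 + 20) = 5 expected deaths against 2 observed, giving a negative Z, i.e. survival under treatment looks better than the reference.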

  13. Limited sampling strategies drawn within 3 hours postdose poorly predict mycophenolic acid area-under-the-curve after enteric-coated mycophenolate sodium.

    NARCIS (Netherlands)

    Winter, B.C. de; Gelder, T. van; Mathôt, R.A.A.; Glander, P.; Tedesco-Silva, H.; Hilbrands, L.B.; Budde, K.; Hest, R.M. van

    2009-01-01

    Previous studies predicted that limited sampling strategies (LSS) for estimation of mycophenolic acid (MPA) area-under-the-curve (AUC(0-12)) after ingestion of enteric-coated mycophenolate sodium (EC-MPS) using a clinically feasible sampling scheme may have poor predictive performance. Failure of

  14. Monolith Chromatography as Sample Preparation Step in Virome Studies of Water Samples.

    Science.gov (United States)

    Gutiérrez-Aguirre, Ion; Kutnjak, Denis; Rački, Nejc; Rupar, Matevž; Ravnikar, Maja

    2018-01-01

    Viruses exist in aquatic media, and many of them use this medium as a transmission route. Next-generation sequencing (NGS) technologies have opened new doors in virus research, also making it possible to reveal a hidden diversity of viral species in aquatic environments. Not surprisingly, many of the newly discovered viruses are found in environmental fresh and marine waters. One of the problems in virome research is the low amount of viral nucleic acids present in the sample in contrast to the background ones (host, eukaryotic, prokaryotic, environmental). Therefore, virus enrichment prior to NGS is necessary in many cases. In water samples, an added problem resides in the low concentration of viruses typically present in aquatic media. Different concentration strategies have been used to overcome such limitations. CIM monoliths are a new generation of chromatographic supports whose particular structural characteristics make them very efficient in the concentration and purification of viruses. In this chapter, we describe the use of CIM monolithic chromatography as a sample preparation step in NGS studies targeting viruses in fresh or marine water. The step-by-step protocol includes a case study in which CIM concentration was used to study the virome of a wastewater sample using NGS.

  15. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between the family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
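    The link between low sensitivity (power) and low PPV quoted above follows from Bayes' rule: PPV = power·prior / (power·prior + alpha·(1 - prior)). A small sketch with alpha = 0.05 and an assumed prior of 0.5 (the prior is our illustrative choice, not a value from the study):

    ```python
    def ppv(power, alpha, prior):
        # Probability that a significant result reflects a true effect
        # (Bayes' rule applied to hypothesis testing)
        return power * prior / (power * prior + alpha * (1 - prior))
    ```

    With the abstract's 2% sensitivity, `ppv(0.02, 0.05, 0.5)` is about 0.29, in the same range as the low PPV the authors report for small samples, whereas a well-powered study (`ppv(0.8, 0.05, 0.5)`) exceeds 0.9.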

  16. Biopolymers for sample collection, protection, and preservation.

    Science.gov (United States)

    Sorokulova, Iryna; Olsen, Eric; Vodyanoy, Vitaly

    2015-07-01

    One of the principal challenges in the collection of biological samples from air, water, and soil matrices is that the target agents are not stable enough to be transferred from the collection point to the laboratory of choice without experiencing significant degradation and loss of viability. At present, there is no method to transport biological samples over considerable distances safely, efficiently, and cost-effectively without the use of ice or refrigeration. Current techniques of protection and preservation of biological materials have serious drawbacks. Many known techniques of preservation cause structural damages, so that biological materials lose their structural integrity and viability. We review applications of a novel bacterial preservation process, which is nontoxic and water soluble and allows for the storage of samples without refrigeration. The method is capable of protecting the biological sample from the effects of environment for extended periods of time and then allows for the easy release of these collected biological materials from the protective medium without structural or DNA damage. Strategies for sample collection, preservation, and shipment of bacterial, viral samples are described. The water-soluble polymer is used to immobilize the biological material by replacing the water molecules within the sample with molecules of the biopolymer. The cured polymer results in a solid protective film that is stable to many organic solvents, but quickly removed by the application of the water-based solution. The process of immobilization does not require the use of any additives, accelerators, or plastifiers and does not involve high temperature or radiation to promote polymerization.

  17. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Science.gov (United States)

    Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing

    2017-01-01

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  18. Standard methods for sampling and sample preparation for gamma spectroscopy

    International Nuclear Information System (INIS)

    Taskaeva, M.; Taskaev, E.; Nikolov, P.

    1993-01-01

    The strategy for sampling and sample preparation is outlined: the necessary number of samples; analysis and treatment of the results received; the quantity of the analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for drawing final conclusions and making decisions on the basis of the results received. This strategy was tested in gamma spectroscopic analysis of radionuclide contamination of the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their preparation with ASTM C 999. After preparation the samples were sealed hermetically and measured. (author)

  19. Effect of Forging Allowance Value on the Power Consumption of Machining Process

    Directory of Open Access Journals (Sweden)

    L. D. Mal'kova

    2015-01-01

    Full Text Available The aim of this paper is to develop and study possible energy-efficiency measures for machining forgings, drawing on an analysis of the impact of the machining allowance and its scatter. The most complex case for taking the effect of cutting depth into account is machining a workpiece whose allowance is determined by blank production. Power consumption was studied for turning a cylindrical surface 144 mm in length and 33 mm in diameter (tolerance +1.5/-0.5 mm) on forgings of the workpiece "screw of steering control" made from steel 60PP. The radial allowance on this cylindrical surface was measured at six points in each of five sections to assess the dispersion of the allowance value. The measurement sample size at the control points was n = 600. Statistical processing showed a normal distribution and sample homogeneity. To analyze the experimental results, a design range of allowances was calculated for this workpiece. The calculated minimum and maximum allowances per side for rough turning were 0.905 mm and 1.905 mm, respectively. It was found that 77% of the measured points lie within the calculated range of allowance values. No points fall below the range, which confirms the absence of rejects; however, some points exceed the upper limit, which entails additional costs for machining the specified surface, including the cost of electricity. Three power-consumption calculations were made on the basis of the factory-recommended cutting conditions: for processing the entire sample of forgings with an average allowance, for machining forgings whose allowances are within the recommended design range, and for processing the entire sample of forgings with the minimum value of allowance. It was found that eliminating allowance values outside the recommended range reduces the power consumption by at least 6%, and the overall power consumption for processing the measured forgings

  20. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    Science.gov (United States)

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimen from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical
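    The volume estimation by point counting mentioned above is commonly based on the Cavalieri estimator, V = t · a(p) · ΣP, where t is the slab thickness, a(p) the area associated with each grid point, and ΣP the total number of points hitting the tissue. A minimal sketch with made-up numbers (the point counts below are illustrative, not from the protocol):

    ```python
    def cavalieri_volume(points_per_slab, slab_thickness, area_per_point):
        # Cavalieri estimator: V = t * a(p) * total points hitting the structure
        return slab_thickness * area_per_point * sum(points_per_slab)

    # Hypothetical organ cut into 5-mm slabs, counting grid with 4 mm^2 per point
    v = cavalieri_volume([18, 25, 31, 27, 19], 5.0, 4.0)  # volume in mm^3
    ```

    With 120 points counted in total, the estimate is 5 × 4 × 120 = 2400 mm³; the estimator is unbiased as long as the first section position is random within the slab interval.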

  1. Sampling strategies to capture single-cell heterogeneity

    OpenAIRE

    Satwik Rajaram; Louise E. Heinrich; John D. Gordan; Jayant Avva; Kathy M. Bonness; Agnieszka K. Witkiewicz; James S. Malter; Chloe E. Atreya; Robert S. Warren; Lani F. Wu; Steven J. Altschuler

    2017-01-01

    Advances in single-cell technologies have highlighted the prevalence and biological significance of cellular heterogeneity. A critical question is how to design experiments that faithfully capture the true range of heterogeneity from samples of cellular populations. Here, we develop a data-driven approach, illustrated in the context of image data, that estimates the sampling depth required for prospective investigations of single-cell heterogeneity from an existing collection of samples. ...

  2. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    Science.gov (United States)

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs, such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE), must today face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded (TCO2) fields needed to constrain geochemical models.
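    The redundancy test described here can be mimicked with a toy example: reconstruct a depth profile from subsampled stations by linear interpolation and check the worst-case error against the ±4 micromol/kg accuracy limit. The smooth synthetic profile and the spacings below are illustrative assumptions, not the WOCE data.

    ```python
    import bisect
    import math

    def tco2(z):
        # Synthetic TCO2 depth profile in micromol/kg (illustrative only)
        return 2000.0 + 300.0 * (1.0 - math.exp(-z / 1000.0))

    def max_interp_error(spacing, zmax=5000, step=10):
        # Worst-case error when the profile is reconstructed by linear
        # interpolation between samples spaced `spacing` metres apart
        zs = list(range(0, zmax + 1, spacing))
        vs = [tco2(z) for z in zs]
        worst = 0.0
        for z in range(0, zmax + 1, step):
            i = min(bisect.bisect_right(zs, z), len(zs) - 1)
            z0, z1 = zs[i - 1], zs[i]
            v = vs[i - 1] + (vs[i] - vs[i - 1]) * (z - z0) / (z1 - z0)
            worst = max(worst, abs(v - tco2(z)))
        return worst
    ```

    For this profile, 500-m spacing exceeds the ±4 micromol/kg budget (error concentrated where curvature is largest, near the surface), while 250-m spacing stays within it; any finer sampling would be redundant in the sense used by the authors.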

  3. Standard operating procedures for collection of soil and sediment samples for the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study

    Science.gov (United States)

    Fisher, Shawn C.; Reilly, Timothy J.; Jones, Daniel K.; Benzel, William M.; Griffin, Dale W.; Loftin, Keith A.; Iwanowicz, Luke R.; Cohl, Jonathan A.

    2015-12-17

    An understanding of the effects on human and ecological health brought by major coastal storms or flooding events is typically limited because of a lack of regionally consistent baseline and trends data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where exposures are probable. In an attempt to close this gap, the U.S. Geological Survey (USGS) has implemented the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study to collect regional sediment-quality data prior to and in response to future coastal storms. The standard operating procedure (SOP) detailed in this document serves as the sample-collection protocol for the SCoRR strategy by providing step-by-step instructions for site preparation, sample collection and processing, and shipping of soil and surficial sediment (for example, bed sediment, marsh sediment, or beach material). The objectives of the SCoRR strategy pilot study are (1) to create a baseline of soil-, sand-, marsh sediment-, and bed-sediment-quality data from sites located in the coastal counties from Maine to Virginia based on their potential risk of being contaminated in the event of a major coastal storm or flooding (defined as Resiliency mode); and (2) respond to major coastal storms and flooding by reoccupying select baseline sites and sampling within days of the event (defined as Response mode). For both modes, samples are collected in a consistent manner to minimize bias and maximize quality control by ensuring that all sampling personnel across the region collect, document, and process soil and sediment samples following the procedures outlined in this SOP. Samples are analyzed using four USGS-developed screening methods—inorganic geochemistry, organic geochemistry, pathogens, and biological assays—which are also outlined in this SOP. Because the SCoRR strategy employs a multi-metric approach for sample analyses, this

  4. A nested-PCR strategy for molecular diagnosis of mollicutes in uncultured biological samples from cows with vulvovaginitis.

    Science.gov (United States)

    Voltarelli, Daniele Cristina; de Alcântara, Brígida Kussumoto; Lunardi, Michele; Alfieri, Alice Fernandes; de Arruda Leme, Raquel; Alfieri, Amauri Alcindo

    2018-01-01

    Bacteria classified in Mycoplasma (M. bovis and M. bovigenitalium) and Ureaplasma (U. diversum) genera are associated with granular vulvovaginitis that affect heifers and cows at reproductive age. The traditional means for detection and speciation of mollicutes from clinical samples have been culture and serology. However, challenges experienced with these laboratory methods have hampered assessment of their impact in pathogenesis and epidemiology in cattle worldwide. The aim of this study was to develop a PCR strategy to detect and primarily discriminate between the main species of mollicutes associated with reproductive disorders of cattle in uncultured clinical samples. In order to amplify the 16S-23S rRNA internal transcribed spacer region of the genome, a consensual and species-specific nested-PCR assay was developed to identify and discriminate between main species of mollicutes. In addition, 31 vaginal swab samples from dairy and beef affected cows were investigated. This nested-PCR strategy was successfully employed in the diagnosis of single and mixed mollicute infections of diseased cows from cattle herds from Brazil. The developed system enabled the rapid and unambiguous identification of the main mollicute species known to be associated with this cattle reproductive disorder through differential amplification of partial fragments of the ITS region of mollicute genomes. The development of rapid and sensitive tools for mollicute detection and discrimination without the need for previous cultures or sequencing of PCR products is a high priority for accurate diagnosis in animal health. Therefore, the PCR strategy described herein may be helpful for diagnosis of this class of bacteria in genital swabs submitted to veterinary diagnostic laboratories, not demanding expertise in mycoplasma culture and identification. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Sample preparation and fractionation for proteome analysis and cancer biomarker discovery by mass spectrometry.

    Science.gov (United States)

    Ahmed, Farid E

    2009-03-01

    Sample preparation and fractionation technologies are among the most crucial processes in proteomic analysis and biomarker discovery in solubilized samples. Chromatographic or electrophoretic proteomic technologies are also available for separation of cellular protein components. There are, however, considerable limitations in currently available proteomic technologies, as none of them allows for the analysis of the entire proteome in a single step because of the large number of peptides and because of the wide concentration dynamic range of the proteome in clinical blood samples. The results of any undertaken experiment depend on the condition of the starting material. Therefore, proper experimental design and pertinent sample preparation are essential to obtain meaningful results, particularly in comparative clinical proteomics, in which one is looking for minor differences between experimental (diseased) and control (nondiseased) samples. This review discusses problems associated with general and specialized strategies of sample preparation and fractionation, dealing with samples in solution or suspension, frozen tissue, or formalin-preserved archival tissue, and illustrates how sample processing might influence detection with mass spectrometric techniques. Strategies that dramatically improve the potential for cancer biomarker discovery in minimally invasive, blood-collected human samples are also presented.

  6. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    Science.gov (United States)

    Geuna, S

    2000-11-20

    Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterize the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.

  7. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    Science.gov (United States)

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  8. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.
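    The sample-size question studied here can be explored with a simple rarefaction (haplotype accumulation) sketch: repeatedly draw random subsamples of size n and count the distinct haplotypes recovered. The haplotype frequencies below are fabricated for illustration and do not correspond to any of the species in the study.

    ```python
    import random

    def haplotype_accumulation(haplotypes, max_n, reps=500, seed=1):
        # Mean number of distinct haplotypes found in random subsamples of size n
        rng = random.Random(seed)
        curve = []
        for n in range(1, max_n + 1):
            mean = sum(len(set(rng.sample(haplotypes, n)))
                       for _ in range(reps)) / reps
            curve.append(mean)
        return curve

    # Hypothetical population: 30 individuals carrying 6 haplotypes with skewed frequencies
    pop = ["H1"] * 12 + ["H2"] * 8 + ["H3"] * 4 + ["H4"] * 3 + ["H5"] * 2 + ["H6"]
    curve = haplotype_accumulation(pop, 15)
    ```

    The curve is non-decreasing and approaches the six true haplotypes as n grows; plotting where it begins to flatten is one way to judge, as the authors do, when adding more samples stops revealing new intraspecific variation.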

  9. 42 CFR 61.9 - Payments: Stipends; dependency allowances; travel allowances.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Payments: Stipends; dependency allowances; travel... FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.9 Payments: Stipends; dependency allowances; travel allowances. Payments for stipends, dependency allowances, and the travel allowances...

  10. Sustained Attention Across the Life Span in a Sample of 10,000: Dissociating Ability and Strategy.

    Science.gov (United States)

    Fortenbaugh, Francesca C; DeGutis, Joseph; Germine, Laura; Wilmer, Jeremy B; Grosso, Mallory; Russo, Kathryn; Esterman, Michael

    2015-09-01

    Normal and abnormal differences in sustained visual attention have long been of interest to scientists, educators, and clinicians. Still lacking, however, is a clear understanding of how sustained visual attention varies across the broad sweep of the human life span. In the present study, we filled this gap in two ways. First, using an unprecedentedly large 10,430-person sample, we modeled age-related differences with substantially greater precision than have prior efforts. Second, using the recently developed gradual-onset continuous performance test (gradCPT), we parsed sustained-attention performance over the life span into its ability and strategy components. We found that after the age of 15 years, the strategy and ability trajectories saliently diverge. Strategy becomes monotonically more conservative with age, whereas ability peaks in the early 40s and is followed by a gradual decline in older adults. These observed life-span trajectories for sustained attention are distinct from results of other life-span studies focusing on fluid and crystallized intelligence. © The Author(s) 2015.

  11. Direct quantification of lipopeptide biosurfactants in biological samples via HPLC and UPLC-MS requires sample modification with an organic solvent.

    Science.gov (United States)

    Biniarz, Piotr; Łukaszewicz, Marcin

    2017-06-01

    The rapid and accurate quantification of biosurfactants in biological samples is challenging. In contrast to the orcinol method for rhamnolipids, no simple biochemical method is available for the rapid quantification of lipopeptides. Various liquid chromatography (LC) methods are promising tools for relatively fast and exact quantification of lipopeptides. Here, we report strategies for the quantification of the lipopeptides pseudofactin and surfactin in bacterial cultures using different high- (HPLC) and ultra-performance liquid chromatography (UPLC) systems. We tested three strategies for sample pretreatment prior to LC analysis. In direct analysis (DA), bacterial cultures were injected directly and analyzed via LC. As a modification, we diluted the samples with methanol and detected an increase in lipopeptide recovery in the presence of methanol. Therefore, we suggest this simple modification as a tool for increasing the accuracy of LC methods. We also tested freeze-drying followed by solvent extraction (FDSE) as an alternative for the analysis of "heavy" samples. In FDSE, the bacterial cultures were freeze-dried, and the resulting powder was extracted with different solvents. Then, the organic extracts were analyzed via LC. Here, we determined the influence of the extracting solvent on lipopeptide recovery. HPLC methods allowed us to quantify pseudofactin and surfactin with run times of 15 and 20 min per sample, respectively, whereas UPLC quantification was as fast as 4 and 5.5 min per sample, respectively. Our methods provide highly accurate measurements and high recovery levels for lipopeptides. At the same time, UPLC-MS provides the possibility to identify lipopeptides and their structural isoforms.

  12. Volume Ray Casting with Peak Finding and Differential Sampling

    KAUST Repository

    Knoll, A.

    2009-11-01

    Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions. © 2009 IEEE.
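The core idea of the record above — explicitly solving for the isovalue inside the ray-marching loop rather than hoping a sample lands on it — amounts to root finding on the scalar field along the ray. The following is a minimal 1-D sketch under assumed simplifications (a scalar function of ray parameter t, bisection instead of the paper's analytic solve, no shading):

```python
def find_isovalue_crossing(f, t0, t1, iso, iters=32):
    """Bisection for f(t) == iso on [t0, t1], assuming f - iso changes
    sign across the interval (detected between successive samples)."""
    a, b = t0, t1
    fa = f(a) - iso
    for _ in range(iters):
        m = 0.5 * (a + b)
        fm = f(m) - iso
        if fa * fm <= 0.0:
            b = m                 # root lies in the left half
        else:
            a, fa = m, fm         # root lies in the right half
    return 0.5 * (a + b)

def raymarch_isosurface(f, t_start, t_end, iso, step):
    """March along the ray; whenever consecutive samples bracket the
    isovalue, refine the exact hit point instead of shading the sample."""
    hits = []
    t, prev = t_start, f(t_start)
    while t < t_end:
        t_next = min(t + step, t_end)
        cur = f(t_next)
        if (prev - iso) * (cur - iso) < 0.0:
            hits.append(find_isovalue_crossing(f, t, t_next, iso))
        t, prev = t_next, cur
    return hits
```

Because the crossing is refined after a coarse bracket, the rendered isosurface is artifact-free even though the marching step stays large — the same trade-off the paper exploits.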

  13. Future prices and market for SO2 allowances

    International Nuclear Information System (INIS)

    Sanghi, A.; Joseph, A.; Michael, K.; Munro, W.; Wang, J.

    1993-01-01

    The expected price of SO2 emission allowances is an important issue in energy and integrated resource planning activities. For example, the expected price of SO2 allowances is needed in order to evaluate alternative strategies for meeting the SO2 provisions of the Clean Air Act Amendments of 1990. In addition, the expected SO2 allowance price is important to state public utility regulators who must provide guidance on rate-making issues regarding utility compliance plans which involve allowance trading and direct investment in SO2 control technologies. Last but not least, the expected SO2 allowance price is an important determinant of the future market for natural gas and low-sulfur coal. The paper develops estimates of SO2 allowance prices over time by constructing national supply and demand curves for SO2 reductions. Both the supply and demand for SO2 reductions are based on an analysis of the sulfur content of fuels burned in 1990 by utilities throughout the United States, and on assumptions about plant retirements, the rate of new capacity growth, the types of new and replacement plants constructed, the costs of SO2 reduction measures, and legislation by midwest states to maintain the use of high-sulfur coal to protect local jobs. The paper shows that SO2 allowance prices will peak around the year 2000 at about $500 per ton, and will eventually fall to zero by about the year 2020. A sensitivity analysis indicates that the price of SO2 allowances is relatively insensitive to assumptions regarding the availability of natural gas or energy demand growth. However, SO2 allowance prices tend to be quite sensitive to assumptions regarding regulations which may force early retirement of existing power plants and possible legislation which may reduce CO2 emissions.

  14. Radial line-scans as representative sampling strategy in dried-droplet laser ablation of liquid samples deposited on pre-cut filter paper disks

    Energy Technology Data Exchange (ETDEWEB)

    Nischkauer, Winfried [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria); Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Vanhaecke, Frank [Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Bernacchi, Sébastien; Herwig, Christoph [Institute of Chemical Engineering, Vienna University of Technology, Vienna (Austria); Limbeck, Andreas, E-mail: Andreas.Limbeck@tuwien.ac.at [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria)

    2014-11-01

    Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL−1 with a reproducibility of 10 % relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with

  15. Fate of organic microcontaminants in wastewater treatment and river systems: An uncertainty assessment in view of sampling strategy, and compound consumption rate and degradability.

    Science.gov (United States)

    Aymerich, I; Acuña, V; Ort, C; Rodríguez-Roda, I; Corominas, Ll

    2017-11-15

    The growing awareness of the relevance of organic microcontaminants in the environment has led to a growing number of studies on attenuation of these compounds in wastewater treatment plants (WWTP) and rivers. However, the effects of the sampling strategies (frequency and duration of composite samples) on the attenuation estimates are largely unknown. Our goal was to assess how frequency and duration of composite samples influence uncertainty of the attenuation estimates in WWTPs and rivers. Furthermore, we also assessed how compound consumption rate and degradability influence uncertainty. The assessment was conducted through simulating the integrated wastewater system of Puigcerdà (NE Iberian Peninsula) using a sewer pattern generator and a coupled model of WWTP and river. Results showed that the sampling strategy is especially critical at the influent of the WWTP, particularly when the number of toilet flushes containing the compound of interest is small (≤100 toilet flushes with compound per day), and less critical at the effluent of the WWTP and in the river due to the mixing effects of the WWTP. For example, at the WWTP, when evaluating a compound that is present in 50 pulses·d−1 using a sampling frequency of 15 min to collect a 24-h composite sample, the attenuation uncertainty can range from 94% (0% degradability) to 9% (90% degradability). The estimation of attenuation in rivers is less critical than in WWTPs, as the attenuation uncertainty was lower than 10% for all evaluated scenarios. Interestingly, the errors in the estimates of attenuation are usually lower than those of loads for most sampling strategies and compound characteristics (e.g. consumption and degradability), although the opposite occurs for compounds with low consumption and inappropriate sampling strategies at the WWTP. Hence, when designing a sampling campaign, one should consider the influence of compounds' consumption and degradability as well as the desired level of accuracy in
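Why composite-sample uncertainty explodes when few pulses carry the compound can be shown with a toy model (not the Puigcerdà simulator): pulses of a fixed, assumed duration occur at random over a day, and a time-proportional composite is built from grab samples taken every `interval_s` seconds, so a pulse contributes only if a sampling instant falls inside it.

```python
import random

def composite_rms_error(n_pulses, interval_s, pulse_s=90, day_s=86400,
                        trials=400, seed=0):
    """Relative RMS error of a daily load estimated from grab samples
    taken every `interval_s` seconds and pooled into one composite.
    Pulses (e.g. toilet flushes carrying the compound) are short; a
    pulse is caught only if a sampling instant falls inside it."""
    rng = random.Random(seed)
    scale = interval_s / pulse_s      # each caught pulse stands for this many
    errs = []
    for _ in range(trials):
        starts = [rng.uniform(0, day_s - pulse_s) for _ in range(n_pulses)]
        # a pulse [s, s+pulse_s) contains a sampling instant (a multiple of
        # interval_s) iff its phase within the interval is late enough
        caught = sum(1 for s in starts
                     if s % interval_s > interval_s - pulse_s)
        est = caught * scale
        errs.append((est - n_pulses) / n_pulses)
    return (sum(e * e for e in errs) / trials) ** 0.5
```

With 15-min sampling (`interval_s=900`) the relative error shrinks roughly as 1/sqrt(n_pulses), which is consistent with the abstract's point that sampling strategy matters most for compounds present in few flushes per day.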

  16. 19 CFR 151.23 - Allowance for moisture in raw sugar.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Allowance for moisture in raw sugar. 151.23...; DEPARTMENT OF THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Sugars, Sirups, and Molasses § 151.23 Allowance for moisture in raw sugar. Inasmuch as the absorption of sea water or moisture...

  17. Actual distribution of Cronobacter spp. in industrial batches of powdered infant formula and consequences for performance of sampling strategies.

    Science.gov (United States)

    Jongenburger, I; Reij, M W; Boer, E P J; Gorris, L G M; Zwietering, M H

    2011-11-15

    The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study investigated the spatial distribution of Cronobacter spp. in powdered infant formula (PIF) on an industrial batch scale for both a recalled batch as well as a reference batch. Additionally, local spatial occurrence of clusters of Cronobacter cells was assessed, as well as the performance of typical sampling strategies to determine the presence of the microorganisms. The concentration of Cronobacter spp. was assessed in the course of the filling time of each batch, by taking samples of 333 g using the most probable number (MPN) enrichment technique. The occurrence of clusters of Cronobacter spp. cells was investigated by plate counting. From the recalled batch, 415 MPN samples were drawn. The expected heterogeneous distribution of Cronobacter spp. could be quantified from these samples, which showed no detectable level (detection limit of -2.52 log CFU/g) in 58% of samples, whilst in the remainder concentrations were found to be between -2.52 and 2.75 log CFU/g. The estimated average concentration in the recalled batch was -2.78 log CFU/g, with a standard deviation of 1.10 log CFU/g. The estimated average concentration in the reference batch was -4.41 log CFU/g, with 99% of the 93 samples being below the detection limit. In the recalled batch, clusters of cells occurred sporadically in 8 out of 2290 samples of 1 g taken. The two largest clusters contained 123 (2.09 log CFU/g) and 560 (2.75 log CFU/g) cells. Various sampling strategies were evaluated for the recalled batch. Taking more and smaller samples while keeping the total sampling weight constant considerably improved the performance of the sampling plans to detect such a type of contaminated batch. Compared to random sampling
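The finding that more, smaller samples at constant total weight detect a heterogeneously contaminated batch better can be reproduced with an illustrative model. Everything here is an assumption for the sketch, loosely inspired by the recalled batch (58 % of samples below detection, 333 g total weight): a sampled location is "clean" with some probability, otherwise its concentration is lognormal, and cell presence in a sample is Poisson.

```python
import math
import random

def detection_probability(n_samples, total_weight_g, mean_log10_conc,
                          sd_log10, frac_clean=0.58, trials=1500, seed=0):
    """P(at least one of n equal-weight samples tests positive) for a
    heterogeneously contaminated batch. Illustrative model: a location
    is 'clean' with probability frac_clean, otherwise its concentration
    (CFU/g) is lognormal; presence of >= 1 cell in a sample is Poisson."""
    rng = random.Random(seed)
    w = total_weight_g / n_samples
    hits = 0
    for _ in range(trials):
        for _ in range(n_samples):
            if rng.random() < frac_clean:
                continue                      # clean spot, nothing to find
            conc = 10 ** rng.gauss(mean_log10_conc, sd_log10)
            if rng.random() < 1.0 - math.exp(-conc * w):
                hits += 1
                break
    return hits / trials
```

Splitting the same 333 g into ten samples multiplies the chances of hitting a contaminated spot, which outweighs the lower per-sample weight — the mechanism behind the paper's sampling-plan comparison.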

  18. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    Science.gov (United States)

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively, respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.
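The segmentation step described above — splitting a single-shot ordering of projection views into interleaves so that each interleaf still covers k-space incoherently — reduces, at its simplest, to a strided partition. This sketch shows only that interleaving arithmetic, not the actual 3D trajectory design or the respiratory-gating model:

```python
def interleave_views(n_views, n_interleaves):
    """Partition a single-shot ordering of projection views into
    interleaves: interleaf j takes views j, j + n_interleaves,
    j + 2*n_interleaves, ... so each interleaf spreads over the
    whole ordering rather than forming a contiguous block."""
    return [list(range(j, n_views, n_interleaves))
            for j in range(n_interleaves)]
```

Because retrospective gating accepts or rejects whole stretches of acquisition time, spreading each interleaf across the full ordering keeps the accepted views close to uniformly distributed in k-space.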

  1. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic sampling...
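The base scheme being extended — linear systematic sampling with multiple random starts (Gautschi, 1957) — can be sketched in a few lines; the population list and parameters below are illustrative:

```python
import random

def systematic_sample(population, interval, n_starts, rng=None):
    """Linear systematic sampling with multiple random starts:
    choose n_starts distinct starts in [0, interval) and take every
    interval-th unit from each start."""
    rng = rng or random.Random()
    starts = rng.sample(range(interval), n_starts)
    out = []
    for s in starts:
        out.extend(population[i] for i in range(s, len(population), interval))
    return out
```

Using several starts yields several independent systematic subsamples, which is what allows an unbiased variance estimate — something a single-start systematic sample cannot provide.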

  2. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    Science.gov (United States)

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  3. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  4. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
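The two initial designs compared in the record above differ only in where the point is placed within each stratum. A minimal sketch of both (standard Latin hypercube construction; this is not the authors' code, and no space-filling optimization step is included):

```python
import random

def latin_hypercube(n, d, midpoint=True, rng=None):
    """Latin hypercube design in [0, 1)^d: each axis is cut into n
    strata and each stratum is used exactly once per axis.
    midpoint=True places the point at the stratum centre (midpoint
    LHS); False draws it uniformly inside the stratum (random LHS)."""
    rng = rng or random.Random()
    cols = []
    for _ in range(d):
        perm = list(range(n))       # one permutation of strata per axis
        rng.shuffle(perm)
        cols.append(perm)
    return [tuple((cols[j][i] + (0.5 if midpoint else rng.random())) / n
                  for j in range(d))
            for i in range(n)]
```

Either variant can then be handed to an OLHS optimizer as the initial design; the study's point is that starting from stratum centres tends to produce better space-filling optima.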

  5. Sampling strategies and materials for investigating large reactive particle complaints from Valley Village homeowners near a coal-fired power plant

    International Nuclear Information System (INIS)

    Chang, A.; Davis, H.; Frazar, B.; Haines, B.

    1997-01-01

    This paper presents Phase 3's sampling strategies, techniques, methods and substrates for assisting the District in resolving complaints involving yellowish-brown staining and spotting of homes, cars, etc. These spots could not be easily washed off and some were permanent. The sampling strategies for the three phases were based on Phase 1 -- the identification of the reactive particles, conducted in October 1989 by APCD and IITRI; Phase 2 -- a study of the size distribution and concentration of reactive particle deposition as a function of distance and direction, conducted by Radian and LG and E; and Phase 3 -- the determination of the frequency of soiling events over a full year's duration, conducted in 1995 by APCD and IITRI. The sampling methods included two primary substrates -- ACE sheets and painted steel, and four secondary substrates -- mailbox, aluminum siding, painted wood panels and roof tiles. The secondary substrates were the main objects of the Valley Village complaints. The sampling technique included five Valley Village (VV) soiling/staining assessment sites and one southwest of the power plant as a background/upwind site. The five VV sites northeast of the power plant covered a 50-degree sector extending 3/4 mile from the stacks. Hourly meteorological data for wind speeds and wind directions were collected. Based on this sampling technique, fifteen staining episodes were detected. Nine of them were in summer 1995

  6. ELIMINATION OF THE CHARACTERIZATION OF DWPF POUR STREAM SAMPLE AND THE GLASS FABRICATION AND TESTING OF THE DWPF SLUDGE BATCH QUALIFICATION SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Amoroso, J.; Peeler, D.; Edwards, T.

    2012-05-11

    In contrast, the variability study has significantly added value to the DWPF's qualification strategy. The variability study has evolved to become the primary aspect of the DWPF's compliance strategy as it has been shown to be versatile and capable of adapting to the DWPF's various and diverse waste streams and blending strategies. The variability study, which aims to ensure durability requirements and the PCT and chemical composition correlations are valid for the compositional region to be processed at the DWPF, must continue to be performed. Due to the importance of the variability study and its place in the DWPF's qualification strategy, it will also be discussed in this report. An analysis of historical data and Production Records indicated that the recommendation of the Six Sigma team to eliminate all characterization of pour stream glass samples and the glass fabrication and PCT performed with the qualification glass does not compromise the DWPF's current compliance plan. Furthermore, the DWPF should continue to produce an acceptable waste form following the remaining elements of the Glass Product Control Program, regardless of a sludge-only or coupled operations strategy. If the DWPF does decide to eliminate the characterization of pour stream samples, pour stream samples should continue to be collected for archival reasons, which would allow testing to be performed should any issues arise or new repository test methods be developed.

  7. Elimination Of The Characterization Of DWPF Pour Stream Sample And The Glass Fabrication And Testing Of The DWPF Sludge Batch Qualification Sample

    International Nuclear Information System (INIS)

    Amoroso, J.; Peeler, D.; Edwards, T.

    2012-01-01

    The variability study has significantly added value to the DWPF's qualification strategy. The variability study has evolved to become the primary aspect of the DWPF's compliance strategy as it has been shown to be versatile and capable of adapting to the DWPF's various and diverse waste streams and blending strategies. The variability study, which aims to ensure durability requirements and the PCT and chemical composition correlations are valid for the compositional region to be processed at the DWPF, must continue to be performed. Due to the importance of the variability study and its place in the DWPF's qualification strategy, it will also be discussed in this report. An analysis of historical data and Production Records indicated that the recommendation of the Six Sigma team to eliminate all characterization of pour stream glass samples and the glass fabrication and PCT performed with the qualification glass does not compromise the DWPF's current compliance plan. Furthermore, the DWPF should continue to produce an acceptable waste form following the remaining elements of the Glass Product Control Program, regardless of a sludge-only or coupled operations strategy. If the DWPF does decide to eliminate the characterization of pour stream samples, pour stream samples should continue to be collected for archival reasons, which would allow testing to be performed should any issues arise or new repository test methods be developed.

  8. Interactive Control System, Intended Strategy, Implemented Strategy and Emergent Strategy

    Directory of Open Access Journals (Sweden)

    Tubagus Ismail

    2012-09-01

    Full Text Available The purpose of this study was to examine the relationship between the management control system (MCS) and strategy formation processes, namely: intended strategy, emergent strategy and implemented strategy. The focus of the MCS in this study was the interactive control system. The study was based on Structural Equation Modeling (SEM) as its multivariate analysis instrument. The samples were upper-middle managers of manufacturing companies in Banten Province, DKI Jakarta Province and West Java Province. The AMOS 16 software was used as an additional instrument to resolve problems in SEM modeling. The study found that the interactive control system had a positive and significant influence on intended strategy, on implemented strategy, and on emergent strategy. The limitation of this study is that our empirical model only used a one-way relationship between the process of strategy formation and the interactive control system.

  9. A hybrid computational strategy to address WGS variant analysis in >5000 samples.

    Science.gov (United States)

    Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli

    2016-09-10

    The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of the computing resources available to most research labs. Other infrastructures, such as the AWS cloud environment and supercomputers, have limitations that make large-scale joint variant calling infeasible: infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling altogether. We present a high-throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of the AWS cloud, supercomputers and local high-performance computing infrastructures. We present a novel binning approach for large-scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole genome samples was produced in under 6 weeks using four state-of-the-art callers: SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM PowerPC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low
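    The binning idea behind this kind of scalable joint calling can be sketched in a few lines. The following is a hypothetical Python illustration, not the paper's actual implementation: the bin size, overlap and chromosome names are invented. The genome is split into fixed-size, slightly overlapping windows so that each window can be jointly called and imputed as an independent job on whichever platform is available.

```python
# Hypothetical sketch of genomic binning for parallel joint calling:
# split each chromosome into fixed-size, slightly overlapping bins so
# every bin can be processed as an independent job.

def make_bins(chrom_lengths, bin_size=1_000_000, overlap=10_000):
    """Return (chrom, start, end) tuples covering every chromosome."""
    bins = []
    for chrom, length in chrom_lengths.items():
        start = 0
        while start < length:
            end = min(start + bin_size + overlap, length)
            bins.append((chrom, start, end))
            start += bin_size
    return bins

# A 2.5 Mb toy chromosome yields three overlapping bins.
for b in make_bins({"chr20": 2_500_000}):
    print(b)
```

    Each (chrom, start, end) tuple then becomes one independent work unit; the small overlap lets downstream merging stitch calls together across bin boundaries.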

  10. 42 CFR 61.8 - Benefits: Stipends; dependency allowances; travel allowances; vacation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Benefits: Stipends; dependency allowances; travel...; dependency allowances; travel allowances; vacation. Individuals awarded regular fellowships shall be entitled...) Stipend. (b) Dependency allowances. (c) When authorized in advance, separate allowances for travel. Such...

  11. The Relationship Between The Brand Strategy And Business Strategy

    OpenAIRE

    Karaömer, Ahmed

    2013-01-01

    In this study, the relationships between three companies' brand strategies and business strategies were investigated, based on Aaker's view of brand architecture. The concept and its strategies are characterized by the driving roles brands possess. At the top of the spectrum, “House of Brands” allows the brands to have the entire driver role, which decreases moving down the spectrum: first comes “Endorsed Brands”, where the master brand has a small driver role, followed by “Subbrands” wher...

  12. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    Science.gov (United States)

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more pronounced at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy that is convenient for clinicians to use.

  13. A comparison of temporal and location-based sampling strategies for global positioning system-triggered electronic diaries.

    Science.gov (United States)

    Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander

    2016-11-21

    Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
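    At its core, a location-based trigger of the kind compared here reduces to a small decision rule evaluated against each incoming position fix. The sketch below is a hypothetical Python illustration: the land-use classes and the minimum gap between reports are invented, not the study's actual query system.

```python
# Hypothetical location-based triggering rule: fire a self-report when the
# participant is at a rarely visited land-use class and enough time has
# passed since the last report. Classes and threshold are illustrative.

RARE_LAND_USE = {"park", "recreation", "water"}
MIN_GAP_MINUTES = 30

def should_trigger(land_use, minutes_since_last):
    """Return True if a self-report should be triggered at this position."""
    return land_use in RARE_LAND_USE and minutes_since_last >= MIN_GAP_MINUTES

print(should_trigger("park", 45))         # rare class, gap long enough -> True
print(should_trigger("residential", 90))  # common class -> False
```

    In the study itself, the decision was made by a database query against land-use and population-density layers at the participant's coordinates; the rule above only illustrates the triggering logic.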

  14. Sampling strategies for millipedes (Diplopoda), centipedes ...

    African Journals Online (AJOL)

    At present considerable effort is being made to document and describe invertebrate diversity as part of numerous biodiversity conservation research projects. In order to determine diversity, rapid and effective sampling and estimation procedures are required and these need to be standardized for a particular group of ...

  15. PRE-EXÁMENES COMO UNA ESTRATEGIA DIDÁCTICA EN LOS CURSOS DE FÍSICA (SAMPLE TEST AS A TEACHING STRATEGY IN PHYSICS COURSES

    Directory of Open Access Journals (Sweden)

    Morales Ríos Herbert

    2010-04-01

    Full Text Available We discuss our experience of using sample tests as a teaching strategy to improve student grades in courses that belong to the College Physics Program. The main purpose of our experience was to find out the common mistakes, both in mathematics and in physics, made by the students and to correct them before the actual test, so that we could accomplish a formative evaluation. In particular, the evaluated subject was linear oscillations in the Classical Mechanics course. We describe what the strategy consists of, our motivation for using it, and both the professor and student roles. We analyze the results obtained in its implementation to conclude with the pros and cons of this teaching strategy and with its future applications as a useful tool for improving college teaching.

  16. Hybrid data acquisition and processing strategies with increased throughput and selectivity: pSMART analysis for global qualitative and quantitative analysis.

    Science.gov (United States)

    Prakash, Amol; Peterman, Scott; Ahmad, Shadab; Sarracino, David; Frewen, Barbara; Vogelsang, Maryann; Byram, Gregory; Krastins, Bryan; Vadali, Gouri; Lopez, Mary

    2014-12-05

    Data-dependent acquisition (DDA) and data-independent acquisition strategies (DIA) have both resulted in improved understanding of proteomics samples. Both strategies have advantages and disadvantages that are well-published, where DDA is typically applied for deep discovery and DIA may be used to create sample records. In this paper, we present a hybrid data acquisition and processing strategy (pSMART) that combines the strengths of both techniques and provides significant benefits for qualitative and quantitative peptide analysis. The performance of pSMART is compared to published DIA strategies in an experiment that allows the objective assessment of DIA performance with respect to interrogation of previously acquired MS data. The results of this experiment demonstrate that pSMART creates fewer decoy hits than a standard DIA strategy. Moreover, we show that pSMART is more selective, sensitive, and reproducible than either standard DIA or DDA strategies alone.

  17. Strategies for monitoring the emerging polar organic contaminants in water with emphasis on integrative passive sampling.

    Science.gov (United States)

    Söderström, Hanna; Lindberg, Richard H; Fick, Jerker

    2009-01-16

    Although polar organic contaminants (POCs) such as pharmaceuticals are considered some of today's most emerging contaminants, few of them are regulated or included in ongoing monitoring programs. However, growing concern among the public and researchers, together with new legislation within the European Union, the Registration, Evaluation and Authorisation of Chemicals (REACH) system, will increase the future need for simple, low-cost strategies for monitoring and risk assessment of POCs in aquatic environments. In this article, we overview the advantages and shortcomings of traditional and novel sampling techniques available for monitoring the emerging POCs in water. The benefits and drawbacks of using active and biological sampling are discussed and the principles of organic passive samplers (PS) presented. A detailed overview of the types of polar organic PS available, their classes of target compounds and their fields of application is given, and the considerations involved in using them, such as environmental effects and quality control, are discussed. The usefulness of biological sampling of POCs in water was found to be limited. Polar organic PS were considered the only available, yet efficient, alternative to active water sampling, due to their simplicity, low cost, lack of need for a power supply or maintenance, and the ability to collect time-integrative samples with one sample collection. However, polar organic PS need to be further developed before they can be used as standard in water quality monitoring programs.

  18. Two different strategies of host manipulation allow parasites to persist in intermediate-definitive host systems

    NARCIS (Netherlands)

    Vries, de L.J.; Langevelde, van F.

    2018-01-01

    Trophically transmitted parasites start their development in an intermediate host, before they finish the development in their definitive host when the definitive host preys on the intermediate host. In intermediate-definitive host systems, two strategies of host manipulation have evolved:

  19. Evaluation of sampling strategies to estimate crown biomass

    Science.gov (United States)

    Krishna P Poudel; Hailemariam Temesgen; Andrew N Gray

    2015-01-01

    Depending on tree and site characteristics crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire...

  20. Moral fiction or moral fact? The distinction between doing and allowing in medical ethics.

    Science.gov (United States)

    Huddle, Thomas S

    2013-06-01

    Opponents of physician-assisted suicide (PAS) maintain that physician withdrawal-of-life-sustaining-treatment cannot be morally equated to voluntary active euthanasia. PAS opponents generally distinguish these two kinds of act by positing a possible moral distinction between killing and allowing-to-die, ceteris paribus. While that distinction continues to be widely accepted in the public discourse, it has been more controversial among philosophers. Some ethicist PAS advocates are so certain that the distinction is invalid that they describe PAS opponents who hold to the distinction as in the grip of 'moral fictions'. The author contends that such a diagnosis is too hasty. The possibility of a moral distinction between active euthanasia and allowing-to-die has not been closed off by the argumentative strategies employed by these PAS advocates, including the contrasting cases strategy and the assimilation of doing and allowing to a common sense notion of causation. The philosophical debate over the doing/allowing distinction remains inconclusive, but physicians and others who rely upon that distinction in thinking about the ethics of end-of-life care need not give up on it in response to these arguments. © 2012 John Wiley & Sons Ltd.

  1. Evaluating alternative offering strategies for wind producers in a pool

    International Nuclear Information System (INIS)

    Rahimiyan, Morteza; Morales, Juan M.; Conejo, Antonio J.

    2011-01-01

    Highlights: → Out-of-sample analysis allows comparing diverse offers using real-world data. → Offering the best production forecast is not optimal for a wind producer. → Stochastic programming offers lead to maximum expected profit. → Offering the best production forecast is not generally optimal for risk control. → Stochastic programming offers lead to the best tradeoff profit versus risk. -- Abstract: As wind power technology matures and reaches break-even cost, wind producers find it increasingly attractive to participate in pool markets instead of being paid feed-in tariffs. The key issue is then how a wind producer should offer in the pool markets to achieve maximum profit while controlling the variability of such profit. This paper compares two families of offering strategies based, respectively, on a naive use of wind production forecasts and on stochastic programming models. These strategies are compared through a comprehensive out-of-sample chronological analysis based on real-world data. A number of relevant conclusions are then duly drawn.

  2. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

    Full Text Available Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find rigid body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method is based on the matching of certain geometric structures, like plane segments, the outliers and noise in the point cloud are considerably eliminated. Therefore, the proposed scan-matching algorithm is more robust than conventional methods. Besides, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As a scan-matching framework, an improved version of the normal distribution transform (NDT) is used. The probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, and the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment, a university campus, we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm is considered a special sampling strategy for scan-matching and compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. Thus, this study also shows the effect of subsampling on the performance of the NDT.
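    Grid-based subsampling, one of the strategies this record compares feature extraction against, is simple to state concretely. The Python sketch below is illustrative only, with an invented cell size: it keeps a single representative point per occupied voxel so the scan-matcher works on a reduced, evenly spread point set.

```python
# Illustrative grid-based subsampling: keep the first point seen in each
# voxel of a regular grid, reducing the cloud while preserving coverage.

def grid_subsample(points, cell=0.5):
    """points: iterable of (x, y, z) tuples; returns one point per voxel."""
    kept = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)  # integer voxel index
        kept.setdefault(key, p)                 # first point in voxel wins
    return list(kept.values())

cloud = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.05), (1.4, 0.1, 0.0)]
print(len(grid_subsample(cloud)))  # two occupied voxels -> 2 points kept
```

    The feature-based strategy in the paper replaces this uniform reduction with plane-segment detection, which is why it discards outliers that a plain grid would keep.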

  3. Should tobacco and alcohol companies be allowed to influence Australia's National Drug Strategy?

    Science.gov (United States)

    Freeman, Becky; MacKenzie, Ross; Daube, Mike

    2017-04-27

    Formation of Australia's National Drug Strategy (NDS) included an extensive consultation process that was open not only to community and public health stakeholders, but also to representatives of the tobacco and alcohol industries. Australia is bound by the World Health Organization Framework Convention on Tobacco Control, which requires governments to protect tobacco control measures from interference by the tobacco industry. NDS consultation submissions made by these conflicted industries are not publicly available for scrutiny. The NDS goals are at odds with the commercial agenda of industries that support regulatory stagnation, oppose and undermine effective action, ignore and distort evidence, and prioritise profits over health.

  4. The Viking X ray fluorescence experiment - Sampling strategies and laboratory simulations. [Mars soil sampling

    Science.gov (United States)

    Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.

    1977-01-01

    Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. At least six additional samples are planned for acquisition for each lander in the remaining Extended Mission (to January 1979). All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites, except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.

  5. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia.

    Science.gov (United States)

    Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein

    2016-08-22

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  6. Sampling stored product insect pests: a comparison of four statistical sampling models for probability of pest detection

    Science.gov (United States)

    Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...

  7. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by use of the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation of three-dimensional data, the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selective frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach.

  8. Molecular double-check strategy for the identification and characterization of European Lyssaviruses

    DEFF Research Database (Denmark)

    Fischer, Melina; Freuling, Conrad M.; Müller, Thomas

    2014-01-01

    The “gold standard” for post-mortem rabies diagnosis is the direct fluorescent antibody test (FAT). However, in the case of ante-mortem non-neural sample material or decomposed tissues, the FAT reaches its limit, and the use of molecular techniques can be advantageous. In this study, we developed......-systems for Rabies virus, European bat lyssavirus type 1 and 2 as well as Bokeloh bat lyssavirus. All assays were validated successfully with a comprehensive panel of lyssavirus positive samples, as well as negative material from various host species. This double-check strategy allows for both safe and sensitive...

  9. Behavioral Contexts, Food-Choice Coping Strategies, and Dietary Quality of a Multiethnic Sample of Employed Parents

    Science.gov (United States)

    Blake, Christine E.; Wethington, Elaine; Farrell, Tracy J.; Bisogni, Carole A.; Devine, Carol M.

    2012-01-01

    Employed parents’ work and family conditions provide behavioral contexts for their food choices. Relationships between employed parents’ food-choice coping strategies, behavioral contexts, and dietary quality were evaluated. Data on work and family conditions, sociodemographic characteristics, eating behavior, and dietary intake from two 24-hour dietary recalls were collected in a random sample cross-sectional pilot telephone survey in the fall of 2006. Black, white, and Latino employed mothers (n=25) and fathers (n=25) were recruited from a low/moderate income urban area in upstate New York. Hierarchical cluster analysis (Ward’s method) identified three clusters of parents differing in use of food-choice coping strategies (ie, Individualized Eating, Missing Meals, and Home Cooking). Cluster sociodemographic, work, and family characteristics were compared using χ2 and Fisher’s exact tests. Cluster differences in dietary quality (Healthy Eating Index 2005) were analyzed using analysis of variance. Clusters differed significantly (P≤0.05) on food-choice coping strategies, dietary quality, and behavioral contexts (ie, work schedule, marital status, partner’s employment, and number of children). Individualized Eating and Missing Meals clusters were characterized by nonstandard work hours, having a working partner, single parenthood and with family meals away from home, grabbing quick food instead of a meal, using convenience entrées at home, and missing meals or individualized eating. The Home Cooking cluster included considerably more married fathers with nonemployed spouses and more home-cooked family meals. Food-choice coping strategies affecting dietary quality reflect parents’ work and family conditions. Nutritional guidance and family policy need to consider these important behavioral contexts for family nutrition and health. PMID:21338739
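    Ward's method, the agglomerative criterion used for the cluster analysis in this record, merges at each step the pair of clusters whose union increases within-cluster variance the least. The pure-Python sketch below illustrates the idea on made-up one-dimensional data; the study itself clustered multivariate coping-strategy scores with standard statistical software.

```python
# Minimal illustration of Ward's agglomerative criterion on 1-D data:
# repeatedly merge the two clusters with the smallest Ward merge cost,
# cost(A, B) = |A||B| / (|A|+|B|) * (mean_A - mean_B)**2.

def ward_cluster(values, k):
    """Greedily merge singleton clusters down to k clusters."""
    clusters = [[v] for v in values]

    def cost(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return len(a) * len(b) / (len(a) + len(b)) * (ma - mb) ** 2

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: cost(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

print(ward_cluster([1.0, 1.2, 5.0, 5.1, 9.0], k=3))
# -> [[1.0, 1.2], [5.0, 5.1], [9.0]]
```

    The greedy pairwise search is O(n^3) overall, which is fine for the small samples typical of pilot studies like this one.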

  10. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards.

    Science.gov (United States)

    Bornstein, Marc H; Jager, Justin; Putnick, Diane L

    2013-12-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study's target population, whether they yield representative and generalizable estimates of subsamples within a study's target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce "noise" related to variation in subsamples and whether that "noise" can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting.
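    Of the four strategies this review evaluates, quota sampling is the most mechanical to state: recruit from an incoming stream of candidates until each demographic cell's quota is filled. The Python sketch below is purely illustrative; the cell labels and quota sizes are invented, not drawn from the article.

```python
# Illustrative quota sampling: accept candidates from a stream until each
# group's quota is met, then stop recruiting.

def quota_sample(candidates, quotas):
    """candidates: iterable of dicts with a 'group' key; quotas: group -> n."""
    filled = {g: [] for g in quotas}
    for c in candidates:
        g = c["group"]
        if g in filled and len(filled[g]) < quotas[g]:
            filled[g].append(c)
        if all(len(v) == quotas[g2] for g2, v in filled.items()):
            break  # every quota met; ignore the rest of the stream
    return filled

stream = [{"group": "girl"}] * 5 + [{"group": "boy"}] * 5
sample = quota_sample(stream, {"girl": 2, "boy": 2})
print({g: len(v) for g, v in sample.items()})  # {'girl': 2, 'boy': 2}
```

    The review's criticism of this design is visible in the code: whoever arrives first in each cell is taken, so the cells match the target proportions but are not probability samples of their subpopulations.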

  11. Preschool Boys' Development of Emotional Self-regulation Strategies in a Sample At-risk for Behavior Problems

    Science.gov (United States)

    Supplee, Lauren H.; Skuban, Emily Moye; Trentacosta, Christopher J.; Shaw, Daniel S.; Stoltz, Emilee

    2011-01-01

    Little longitudinal research has been conducted on changes in children's emotional self-regulation strategy (SRS) use after infancy, particularly for children at risk. The current study examined changes in boys' emotional SRS from toddlerhood through preschool. Repeated observational assessments using delay of gratification tasks at ages 2, 3, and 4 were examined with both variable- and person-oriented analyses in a low-income sample of boys (N = 117) at-risk for early problem behavior. Results were consistent with theory on emotional SRS development in young children. Children initially used more emotion-focused SRS (e.g., comfort seeking) and transitioned to greater use of planful SRS (e.g., distraction) by age 4. Person-oriented analysis using trajectory analysis found similar patterns from 2–4, with small groups of boys showing delayed movement away from emotion-focused strategies or delay in the onset of regular use of distraction. The results provide a foundation for future research to examine the development of SRS in low-income young children. PMID:21675542

  12. Biological sample collector

    Science.gov (United States)

    Murphy, Gloria A [French Camp, CA

    2010-09-07

    A biological sample collector is adapted to collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.

  13. Rare variant association analysis in case-parents studies by allowing for missing parental genotypes.

    Science.gov (United States)

    Li, Yumei; Xiang, Yang; Xu, Chao; Shen, Hui; Deng, Hongwen

    2018-01-15

    The development of next-generation sequencing technologies has facilitated the identification of rare variants. Family-based designs are commonly used to effectively control for population admixture and substructure, which is more prominent for rare variants. Case-parents studies, as typical strategies in family-based design, are widely used in rare variant-disease association analysis. Current methods in case-parents studies are based on complete case-parents data; however, parental genotypes may be missing in case-parents trios, and removing these data may lead to a loss in statistical power. The present study focuses on testing for rare variant-disease association in case-parents studies while allowing for missing parental genotypes. In this report, we extended the collapsing method for rare variant association analysis in case-parents studies to allow for missing parental genotypes, and investigated the performance of two methods using the difference of genotypes between affected offspring and their corresponding "complements" in case-parent trios within the TDT framework. Using simulations, we showed that, compared with methods using only complete case-parents data, the proposed strategy allowing for missing parental genotypes, or even adding unrelated affected individuals, can greatly improve the statistical power and meanwhile is not affected by population stratification. We conclude that adding case-parents data with missing parental genotypes to a complete case-parents data set can greatly improve the power of our strategy for rare variant-disease association.
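
    The collapsing idea in this record can be illustrated with a toy simulation. The sketch below is not the authors' exact statistic: it simulates complete trios only, collapses rare variants into a single carrier indicator, and compares affected offspring against their untransmitted "complements" with a McNemar-style test. All parameters (500 trios, 20 variants, MAF 0.01) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trios, n_variants, maf = 500, 20, 0.01           # hypothetical region and allele frequency

# Parent haplotypes: (trio, parent, haplotype, variant); True = rare allele present
hap = rng.random((n_trios, 2, 2, n_variants)) < maf

# Haplotype 0 of each parent is transmitted to the affected child;
# the two untransmitted haplotypes form the "complement" pseudo-individual.
child = hap[:, 0, 0] | hap[:, 1, 0]
complement = hap[:, 0, 1] | hap[:, 1, 1]

# Collapsing step: carrier = at least one rare variant anywhere in the region
child_carrier = child.any(axis=1)
comp_carrier = complement.any(axis=1)

# McNemar-style test on discordant trios; under the null (no association,
# as simulated here) the statistic is approximately chi-square with 1 df
b = int(np.sum(child_carrier & ~comp_carrier))
c = int(np.sum(~child_carrier & comp_carrier))
chi2 = (b - c) ** 2 / (b + c) if (b + c) else 0.0
print("discordant trios:", b, c, "chi2:", round(chi2, 2))
```

    Because transmission here is random by construction, the statistic should be small; an excess of carrier children over carrier complements would signal association.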

  14. Introducing CO2 Allowances, Higher Prices For All Consumers; Higher Revenues For Whom?

    NARCIS (Netherlands)

    Gurkan, G.; Langestraat, R.; Ozdemir, O.

    2013-01-01

    Abstract Introducing a ceiling on total carbon dioxide (CO2) emissions and allowing polluting industries to buy and sell permits to meet it (known as a cap-and-trade system) affects investment strategies, generation quantities, and prices in electricity markets. In this paper we analyze these

  15. Arbitrage strategy

    OpenAIRE

    Kardaras, Constantinos

    2010-01-01

    An arbitrage strategy allows a financial agent to make certain profit out of nothing, i.e., out of zero initial investment. This has to be disallowed on economic basis if the market is in equilibrium state, as opportunities for riskless profit would result in an instantaneous movement of prices of certain financial instruments. The principle of not allowing for arbitrage opportunities in financial markets has far-reaching consequences, most notably the option-pricing and hedging formulas in c...

  16. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  17. Relevance of sampling schemes in light of Ruelle's linear response theory

    International Nuclear Information System (INIS)

    Lucarini, Valerio; Wouters, Jeroen; Faranda, Davide; Kuna, Tobias

    2012-01-01

    We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that using a general functional decomposition for space–time dependent forcings, we can define elementary susceptibilities that allow us to construct the linear response of the system to general perturbations. Starting from the definition of SRB measure, we then study the consequence of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick

  18. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards

    Science.gov (United States)

    Bornstein, Marc H.; Jager, Justin; Putnick, Diane L.

    2014-01-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study’s target population, whether they yield representative and generalizable estimates of subsamples within a study’s target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce “noise” related to variation in subsamples and whether that “noise” can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting. PMID:25580049
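
    The contrast between probability and convenience sampling discussed above can be made concrete with a small simulation. This is an illustrative sketch, not from the paper: the exponential recruitment weighting that makes higher-SES members easier to reach is an assumption chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical population: outcome depends on socioeconomic status (SES)
ses = rng.normal(size=N)
outcome = 100 + 5 * ses + rng.normal(scale=10, size=N)

# Population-based probability sample: every member equally likely
prob_idx = rng.choice(N, size=500, replace=False)

# Convenience sample: assume higher-SES members are easier to recruit
w = np.exp(ses)
conv_idx = rng.choice(N, size=500, replace=False, p=w / w.sum())

print("population mean:       ", round(float(outcome.mean()), 1))
print("probability sample mean:", round(float(outcome[prob_idx].mean()), 1))
print("convenience sample mean:", round(float(outcome[conv_idx].mean()), 1))
```

    The probability sample tracks the population mean, while the convenience sample is biased upward by the SES-dependent recruitment, illustrating why the sampling strategy matters for generalizability.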

  19. Strategies for Distinguishing Abiotic Chemistry from Martian Biochemistry in Samples Returned from Mars

    Science.gov (United States)

    Glavin, D. P.; Burton, A. S.; Callahan, M. P.; Elsila, J. E.; Stern, J. C.; Dworkin, J. P.

    2012-01-01

    A key goal in the search for evidence of extinct or extant life on Mars will be the identification of chemical biosignatures including complex organic molecules common to all life on Earth. These include amino acids, the monomer building blocks of proteins and enzymes, and nucleobases, which serve as the structural basis of information storage in DNA and RNA. However, many of these organic compounds can also be formed abiotically as demonstrated by their prevalence in carbonaceous meteorites [1]. Therefore, an important challenge in the search for evidence of life on Mars will be distinguishing between abiotic chemistry of either meteoritic or martian origin from any chemical biosignatures from an extinct or extant martian biota. Although current robotic missions to Mars, including the 2011 Mars Science Laboratory (MSL) and the planned 2018 ExoMars rovers, will have the analytical capability needed to identify these key classes of organic molecules if present [2,3], return of a diverse suite of martian samples to Earth would allow for much more intensive laboratory studies using a broad array of extraction protocols and state-of-the-art analytical techniques for bulk and spatially resolved characterization, molecular detection, and isotopic and enantiomeric compositions that may be required for unambiguous confirmation of martian life. Here we will describe current state-of-the-art laboratory analytical techniques that have been used to characterize the abundance and distribution of amino acids and nucleobases in meteorites, Apollo samples, and comet-exposed materials returned by the Stardust mission with an emphasis on their molecular characteristics that can be used to distinguish abiotic chemistry from biochemistry as we know it. The study of organic compounds in carbonaceous meteorites is highly relevant to Mars sample return analysis, since exogenous organic matter should have accumulated in the martian regolith over the last several billion years and the

  20. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
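
    A minimal sketch of the reconstruction step described above, assuming the classic compressive-sampling setup: a signal that is sparse in a DCT dictionary is observed at random sampling instants and recovered with orthogonal matching pursuit. The signal length, number of samples, and sparsity pattern are all hypothetical; the paper's actual reconstruction procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k_sparse = 256, 128, 3

# Orthonormal DCT dictionary: column k is a cosine atom
t = np.arange(n)
D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n) * np.sqrt(2.0 / n)
D[:, 0] /= np.sqrt(2.0)

# Sparse spectrum -> time-domain signal of interest
s_true = np.zeros(n)
s_true[[7, 31, 90]] = [1.0, -0.7, 0.5]
x = D @ s_true

# Random sampling instants (the role of the high-resolution time-basis)
idx = np.sort(rng.choice(n, size=m, replace=False))
y, A = x[idx], D[idx, :]

# Orthogonal matching pursuit: greedily grow the support, refit by least squares
residual, support, coef = y.copy(), [], np.zeros(0)
for _ in range(k_sparse):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

s_hat = np.zeros(n)
s_hat[support] = coef
x_hat = D @ s_hat
print("max reconstruction error:", float(np.max(np.abs(x_hat - x))))
```

    With noiseless samples and well-separated atoms, the recovery is essentially exact even though only half the Nyquist-rate samples are used.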

  1. Uncertainty and sampling issues in tank characterization

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Pulsipher, B.A.; Kashporenko, D.M.

    1997-06-01

    A defensible characterization strategy must recognize that uncertainties are inherent in any measurement or estimate of interest and must employ statistical methods for quantifying and managing those uncertainties. Estimates of risk and therefore key decisions must incorporate knowledge about uncertainty. This report focuses on statistical methods that should be employed to ensure confident decision making and appropriate management of uncertainty. Sampling is a major source of uncertainty that deserves special consideration in the tank characterization strategy. The question of whether sampling will ever provide the reliable information needed to resolve safety issues is explored. The issue of sample representativeness must be resolved before sample information is reliable. Representativeness is a relative term but can be defined in terms of bias and precision. Currently, precision can be quantified and managed through an effective sampling and statistical analysis program. Quantifying bias is more difficult and is not being addressed under the current sampling strategies. Bias could be bounded by (1) employing new sampling methods that can obtain samples from other areas in the tanks, (2) putting in new risers on some worst-case tanks and comparing the results from existing risers with new risers, or (3) sampling tanks through risers under which no disturbance or activity has previously occurred. With some bound on bias and estimates of precision, various sampling strategies could be determined and shown to be either cost-effective or infeasible.
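
    The report's point that precision can be quantified and managed through sampling design can be illustrated with a quick Monte Carlo sketch (not from the report): composite samples built from more random increments yield mean estimates with proportionally smaller spread, roughly shrinking as 1/sqrt(n). The lognormal "tank" below is a hypothetical stand-in for waste heterogeneity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical heterogeneous tank: lognormal concentrations at 10,000 locations
population = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)

def composite_means(n_increments, reps=2000):
    """Mean of a composite sample built from n random increments, repeated."""
    picks = rng.choice(population, size=(reps, n_increments))
    return picks.mean(axis=1)

spread = {n: float(composite_means(n).std()) for n in (5, 20, 80)}
for n, s in spread.items():
    print(f"{n:3d} increments -> std of composite mean = {s:.3f}")
```

    The observed standard deviations quantify precision directly; bias, as the report notes, cannot be estimated this way and must be bounded by changing where samples can be taken.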

  2. New directions in childhood obesity research: how a comprehensive biorepository will allow better prediction of outcomes

    Directory of Open Access Journals (Sweden)

    Woo Jessica G

    2010-10-01

    Full Text Available Abstract Background Childhood obesity is associated with the early development of diseases such as type 2 diabetes and cardiovascular disease. Unfortunately, to date, traditional methods of research have failed to identify effective prevention and treatment strategies, and large numbers of children and adolescents continue to be at high risk of developing weight-related disease. Aim To establish a unique 'biorepository' of data and biological samples from overweight and obese children, in order to investigate the complex 'gene × environment' interactions that govern disease risk. Methods The 'Childhood Overweight BioRepository of Australia' collects baseline environmental, clinical and anthropometric data, alongside storage of blood samples for genetic, metabolic and hormonal profiles. Opportunities for longitudinal data collection have also been incorporated into the study design. National and international harmonisation of data and sample collection will achieve required statistical power. Results Ethical approval in the parent site has been obtained and early data indicate a high response rate among eligible participants (71%) with a high level of compliance for comprehensive data collection (range 56% to 97%) for individual study components. Multi-site ethical approval is now underway. Conclusions In time, it is anticipated that this comprehensive approach to data collection will allow early identification of individuals most susceptible to disease, as well as facilitating refinement of prevention and treatment programs.

  3. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia

    Directory of Open Access Journals (Sweden)

    Maria Hernandez-Valladares

    2016-08-01

    Full Text Available Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  4. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards

    OpenAIRE

    Bornstein, Marc H.; Jager, Justin; Putnick, Diane L.

    2013-01-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study’s t...

  5. A belief-based evolutionarily stable strategy

    OpenAIRE

    Deng, Xinyang; Wang, Zhen; Liu, Qi; Deng, Yong; Mahadevan, Sankaran

    2014-01-01

    As an equilibrium refinement of the Nash equilibrium, evolutionarily stable strategy (ESS) is a key concept in evolutionary game theory and has attracted growing interest. An ESS can be either a pure strategy or a mixed strategy. Even though the randomness is allowed in mixed strategy, the selection probability of pure strategy in a mixed strategy may fluctuate due to the impact of many factors. The fluctuation can lead to more uncertainty. In this paper, such uncertainty involved in mixed st...

  6. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

    Full Text Available We are concerned with optimal investment strategy for a dual risk model. We assume that the company can invest into a risk-free asset and a risky asset. Short-selling and borrowing money are allowed. Due to lack of iterated-expectation property, the Bellman Optimization Principle does not hold. Thus we investigate the precommitted strategy and time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with time-consistent strategy and find that these different strategies have different advantages: the former can make value function maximized at the original time t=0 and the latter strategy is time-consistent for the whole time horizon. Finally, numerical analysis is presented for our results.

  7. Quantum Chinos game: winning strategies through quantum fluctuations

    International Nuclear Information System (INIS)

    Guinea, F; Martin-Delgado, M A

    2003-01-01

    We apply several quantization schemes to simple versions of the Chinos game. Classically, for two players with one coin each, there is a symmetric stable strategy that allows each player to win half of the times on average. A partial quantization of the game (semiclassical) allows us to find a winning strategy for the second player, but it is unstable w.r.t. the classical strategy. However, in a fully quantum version of the game we find a winning strategy for the first player that is optimal: the symmetric classical situation is broken at the quantum level. (letter to the editor)

  8. Coping strategies: gender differences and development throughout life span.

    Science.gov (United States)

    Meléndez, Juan Carlos; Mayordomo, Teresa; Sancho, Patricia; Tomás, José Manuel

    2012-11-01

    Development during the life span implies coping with stressful events, and this coping may be done with several strategies. It could be useful to know whether these coping strategies differ as a consequence of personal characteristics. With this aim, this work uses the Coping with Stress Questionnaire (CAE) with a sample of 400 participants. Specifically, the effects of gender and age group (young people, middle age and elderly), as well as their interaction, on coping strategies are studied. With regard to age, on one hand, a decrement in the use of coping strategies centred on problem solving and social support seeking is hypothesised as age increases. On the other hand, the use of emotional coping is hypothesised to increase with age. With respect to gender, a larger use of emotional coping and social support seeking is hypothesised among women, and a larger use of problem solving among men. A MANOVA found significant effects for the two main effects (gender and age) as well as several interactions. Separate ANOVAs allowed us to test for potential differences in each of the coping strategies measured in the CAE. These results partially supported the hypotheses. Results are discussed in relation to the scientific literature on coping, age and gender.

  9. Soil Gas Sample Handling: Evaluation of Water Removal and Sample Ganging

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Brad G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Abrecht, David G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hayes, James C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mendoza, Donaldo P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Soil gas sampling is currently conducted in support of Nuclear Test Ban treaty verification. Soil gas samples are collected and analyzed for isotopes of interest. Some issues that can impact sampling and analysis of these samples are excess moisture and sample processing time. Here we discuss three potential improvements to the current sampling protocol: a desiccant for water removal, use of a molecular sieve to remove CO2 from the sample during collection, and a ganging manifold to allow composite analysis of multiple samples.

  10. Isotropic 3D cardiac cine MRI allows efficient sparse segmentation strategies based on 3D surface reconstruction.

    Science.gov (United States)

    Odille, Freddy; Bustin, Aurélien; Liu, Shufang; Chen, Bailiang; Vuissoz, Pierre-André; Felblinger, Jacques; Bonnemains, Laurent

    2018-05-01

    Segmentation of cardiac cine MRI data is routinely used for the volumetric analysis of cardiac function. Conventionally, 2D contours are drawn on short-axis (SAX) image stacks with relatively thick slices (typically 8 mm). Here, an acquisition/reconstruction strategy is used for obtaining isotropic 3D cine datasets; reformatted slices are then used to optimize the manual segmentation workflow. Isotropic 3D cine datasets were obtained from multiple 2D cine stacks (acquired during free-breathing in SAX and long-axis (LAX) orientations) using nonrigid motion correction (cine-GRICS method) and super-resolution. Several manual segmentation strategies were then compared, including conventional SAX segmentation, LAX segmentation in three views only, and combinations of SAX and LAX slices. An implicit B-spline surface reconstruction algorithm is proposed to reconstruct the left ventricular cavity surface from the sparse set of 2D contours. All tested sparse segmentation strategies were in good agreement, with Dice scores above 0.9 despite using fewer slices (3-6 sparse slices instead of 8-10 contiguous SAX slices). When compared to independent phase-contrast flow measurements, stroke volumes computed from four or six sparse slices had slightly higher precision than conventional SAX segmentation (error standard deviation of 5.4 mL against 6.1 mL) at the cost of slightly lower accuracy (bias of -1.2 mL against 0.2 mL). Functional parameters also showed a trend to improved precision, including end-diastolic volumes, end-systolic volumes, and ejection fractions. The postprocessing workflow of 3D isotropic cardiac imaging strategies can be optimized using sparse segmentation and 3D surface reconstruction. Magn Reson Med 79:2665-2675, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
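
    The Dice score used above to compare segmentation strategies is a simple overlap measure; below is a minimal sketch of its computation on two binary masks (the masks are hypothetical, not from the study).

```python
import numpy as np

def dice(a, b):
    """Dice similarity of two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two overlapping 6x6 square masks (hypothetical segmentations)
a = np.zeros((10, 10)); a[2:8, 2:8] = 1
b = np.zeros((10, 10)); b[3:9, 3:9] = 1
print(round(dice(a, b), 3))   # 2*25 / (36 + 36)
```

    A score of 1.0 means identical masks; the 0.9 threshold reported in the study indicates very high agreement between sparse and conventional segmentations.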

  11. Strategy under uncertainty.

    Science.gov (United States)

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  12. Intermittent search strategies

    Science.gov (United States)

    Bénichou, O.; Loverdo, C.; Moreau, M.; Voituriez, R.

    2011-01-01

    This review examines intermittent target search strategies, which combine phases of slow motion, allowing the searcher to detect the target, and phases of fast motion during which targets cannot be detected. It is first shown that intermittent search strategies are actually widely observed at various scales. At the macroscopic scale, this is, for example, the case of animals looking for food; at the microscopic scale, intermittent transport patterns are involved in a reaction pathway of DNA-binding proteins as well as in intracellular transport. Second, generic stochastic models are introduced, which show that intermittent strategies are efficient strategies that enable the minimization of search time. This suggests that the intrinsic efficiency of intermittent search strategies could justify their frequent observation in nature. Last, beyond these modeling aspects, it is proposed that intermittent strategies could also be used in a broader context to design and accelerate search processes.

  13. Should tobacco and alcohol companies be allowed to influence Australia’s National Drug Strategy?

    Directory of Open Access Journals (Sweden)

    Becky Freeman

    2017-04-01

    Full Text Available Formation of Australia’s National Drug Strategy (NDS included an extensive consultation process that was open not only to community and public health stakeholders, but also to representatives of the tobacco and alcohol industries. Australia is bound by the World Health Organization Framework Convention on Tobacco Control, which requires governments to protect tobacco control measures from interference by the tobacco industry. NDS consultation submissions made by these conflicted industries are not publicly available for scrutiny. The NDS goals are at odds with the commercial agenda of industries that support regulatory stagnation, oppose and undermine effective action, ignore and distort evidence, and prioritise profits over health.

  14. Hepatitis B virus DNA quantification with the three-in-one (3io) method allows accurate single-step differentiation of total HBV DNA and cccDNA in biopsy-size liver samples.

    Science.gov (United States)

    Taranta, Andrzej; Tien Sy, Bui; Zacher, Behrend Johan; Rogalska-Taranta, Magdalena; Manns, Michael Peter; Bock, Claus Thomas; Wursthorn, Karsten

    2014-08-01

    Hepatitis B virus (HBV) replicates via reverse transcription, converting its partially double-stranded genome into the covalently closed circular DNA (cccDNA). The long-lasting cccDNA serves as a replication intermediate in the nuclei of hepatocytes. It is an excellent, though evasive, parameter for monitoring the course of liver disease and treatment efficiency. The aim was to develop and test a new approach for HBV DNA quantification in serum and small-size liver samples. The p3io plasmid contains an HBV fragment and the human β-actin gene (hACTB) as a standard. Respective TaqMan probes were labeled with different fluorescent dyes. A triplex real-time PCR for simultaneous quantification of total HBV DNA, cccDNA and hACTB could be established. The three-in-one method allows simultaneous analysis of 3 targets with a lower limit of quantification of 48 copies per 20 μl PCR reaction and a wide range of linearity (R(2)>0.99). It was applied to DNA samples from HBV-infected patients. Total HBV DNA and cccDNA could be quantified in 32 and 22 of 33 FFPE preserved liver specimens, respectively. Total HBV DNA concentrations quantified by the 3io method remained comparable with the Cobas TaqMan HBV Test v2.0. The three-in-one protocol allows the single-step quantification of viral DNA in samples from different sources. Lower sample input, faster data acquisition, a lowered error and significantly lower costs are therefore the advantages of the method. Copyright © 2014 Elsevier B.V. All rights reserved.
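
    Absolute quantification in assays like the one above relies on a standard curve built from serial dilutions of a plasmid standard. The sketch below shows the generic Ct-to-copy-number arithmetic with made-up Ct values; the actual 3io calibration data and thresholds are not reproduced here.

```python
import numpy as np

# Hypothetical standard curve: serial dilutions of a plasmid standard
log10_copies = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7])       # made-up Ct measurements

slope, intercept = np.polyfit(log10_copies, ct, 1)  # Ct = slope*log10(copies) + b
efficiency = 10.0 ** (-1.0 / slope) - 1.0           # ~1.0 means ~100% PCR efficiency

def copies_from_ct(ct_sample):
    """Invert the standard curve for an unknown sample."""
    return 10.0 ** ((ct_sample - intercept) / slope)

print(round(float(slope), 2), round(float(efficiency), 2),
      round(copies_from_ct(25.0)))
```

    A slope near -3.32 corresponds to a doubling of template per cycle; in a triplex design, each target would be quantified this way against its own dye channel and standard.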

  15. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Table of contents (flattened): Exposure to Sampling: Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem. More Intricacies: Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  16. Relationship termination in emerging adulthood: Coping strategies as predictors of posttraumatic growth

    Directory of Open Access Journals (Sweden)

    Simona Zgaga

    2013-02-01

    Full Text Available Relationship termination happens relatively often in emerging adulthood but is nevertheless as distressing then as it is later in life. We examined the relationship between coping strategies and posttraumatic growth in a sample of 260 emerging adults whose heterosexual romantic relationships had been terminated at most two years before participating in the study. Participants completed The Posttraumatic Growth Inventory, The COPE Inventory and the Emotion Approach Coping scale. For the purposes of the study we also conceptualized a new coping inventory, related specifically to coping with relationship termination. While controlling for gender and age, the coping strategies explained 34% of the variability in posttraumatic growth. Statistically significant predictors of posttraumatic growth were problem-oriented coping strategies, strategies oriented towards emotions and other people, acceptance and positive self-motivation, as well as coping strategies that allow some distancing from the stressor. Results indicate that posttraumatic growth is related to problem-oriented and also to emotion-oriented coping strategies, which is reasonable since relationship termination is a stressor that cannot be eliminated. It is important that an individual who is facing it can cope well with the unpleasant emotions deriving from relationship termination.

  17. Novel technologies and an overall strategy to allow hazard assessment and risk prediction of chemicals, cosmetics, and drugs with animal-free methods.

    Science.gov (United States)

    Leist, Marcel; Lidbury, Brett A; Yang, Chihae; Hayden, Patrick J; Kelm, Jens M; Ringeissen, Stephanie; Detroyer, Ann; Meunier, Jean R; Rathman, James F; Jackson, George R; Stolper, Gina; Hasiwa, Nina

    2012-01-01

    Several alternative methods to replace animal experiments have been accepted by legal bodies. An even larger number of tests are under development or already in use for non-regulatory applications or for the generation of information stored in proprietary knowledge bases. The next step for the use of the different in vitro methods is their combination into integrated testing strategies (ITS) to get closer to the overall goal of predictive "in vitro-based risk evaluation processes." We introduce here a conceptual framework as the basis for future ITS and their use for risk evaluation without animal experiments. The framework allows incorporation of both individual tests and already integrated approaches. Illustrative examples for elements to be incorporated are drawn from the session "Innovative technologies" at the 8th World Congress on Alternatives and Animal Use in the Life Sciences, held in Montreal, 2011. For instance, LUHMES cells (conditionally immortalized human neurons) were presented as an example for a 2D cell system. The novel 3D platform developed by InSphero was chosen as an example for the design and use of scaffold-free, organotypic microtissues. The identification of critical pathways of toxicity (PoT) may be facilitated by approaches exemplified by the MatTek 3D model for human epithelial tissues with engineered toxicological reporter functions. The important role of in silico methods and of modeling based on various pre-existing data is demonstrated by Altamira's comprehensive approach to predicting a molecule's potential for skin irritancy. A final example demonstrates how natural variation in human genetics may be overcome using data analytic (pattern recognition) techniques borrowed from computer science and statistics. The overall hazard and risk assessment strategy integrating these different examples has been compiled in a graphical work flow.

  18. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... to determine how many languages from each phylum should be selected, given any required sample size....

  19. CHILD ALLOWANCE

    CERN Multimedia

    Human Resources Division

    2001-01-01

    HR Division wishes to clarify to members of the personnel that the allowance for a dependent child continues to be paid during all training courses ('stages'), apprenticeships, 'contrats de qualification', sandwich courses or other courses of similar nature. Any payment received for these training courses, including apprenticeships, is however deducted from the amount reimbursable as school fees. HR Division would also like to draw the attention of members of the personnel to the fact that any contract of employment will lead to the suppression of the child allowance and of the right to reimbursement of school fees.

  20. Sampling informative/complex a priori probability distributions using Gibbs sampling assisted by sequential simulation

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou

    2010-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
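
The Metropolis step at the core of such samplers is compact enough to sketch. Below is a minimal random-walk Metropolis sampler for a toy nonlinear inverse problem (observe d = m² + noise, infer m); the forward model, noise level, and tuning values are illustrative assumptions, not taken from the record above.

```python
import math
import random

def metropolis(log_posterior, x0, step, n_iter, seed=0):
    """Random-walk Metropolis: accept a proposal with probability
    min(1, posterior(x') / posterior(x))."""
    rng = random.Random(seed)
    x, logp = x0, log_posterior(x0)
    chain = []
    for _ in range(n_iter):
        xp = x + rng.gauss(0.0, step)          # symmetric proposal
        logpp = log_posterior(xp)
        if math.log(rng.random()) < logpp - logp:
            x, logp = xp, logpp                # accept
        chain.append(x)                        # rejection keeps current state
    return chain

# Toy nonlinear inverse problem: observe d = m**2 + noise, infer m.
d_obs, sigma = 4.0, 0.5

def log_post(m):
    # Gaussian likelihood around the nonlinear forward model g(m) = m**2,
    # with a flat prior as a deliberately simple stand-in for the complex
    # a priori information the abstract refers to.
    return -0.5 * ((d_obs - m * m) / sigma) ** 2

chain = metropolis(log_post, x0=1.0, step=0.3, n_iter=20000)
burned = chain[5000:]
mean_m = sum(burned) / len(burned)
```

With this flat prior the chain concentrates near the posterior mode reachable from the starting point (m ≈ 2); in the setting of the abstract, the flat prior would be replaced by the informative prior supplied by sequential simulation.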

  1. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    Science.gov (United States)

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is typically performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed using a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, accommodates any dosing regimen, and allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
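
The design loop described above (average the Fisher information over population draws, then pick the sampling times minimizing the worst parameter variance) can be sketched for a one-compartment model C(t) = (dose/V)·exp(−kt). Everything below is an illustrative assumption: the model, the priors, and an exhaustive search standing in for the paper's genetic algorithm.

```python
import math
import random
from itertools import combinations

def fim(times, V, k, sigma=0.1, dose=100.0):
    """2x2 Fisher information for C(t) = (dose/V)*exp(-k*t) with
    parameters (V, k) and i.i.d. Gaussian measurement noise."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        c = (dose / V) * math.exp(-k * t)
        g = [-c / V, -c * t]                   # sensitivities dC/dV, dC/dk
        for i in range(2):
            for j in range(2):
                F[i][j] += g[i] * g[j] / sigma**2
    return F

def worst_variance(F):
    """Largest diagonal entry of inv(F): the worst Cramer-Rao bound."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return float("inf") if det <= 1e-12 else max(F[0][0], F[1][1]) / det

rng = random.Random(1)
# Monte Carlo draws from an assumed population prior on (V, k)
population = [(rng.gauss(10.0, 1.0), rng.gauss(0.3, 0.03)) for _ in range(100)]

best = None
for times in combinations(range(1, 13), 3):    # cost constraint: 3 samples
    avg = [[0.0, 0.0], [0.0, 0.0]]
    for V, k in population:
        F = fim(times, V, k)
        for i in range(2):
            for j in range(2):
                avg[i][j] += F[i][j] / len(population)
    crit = worst_variance(avg)                 # minimax criterion
    if best is None or crit < best[0]:
        best = (crit, times)

best_times = best[1]
```

For realistic models with more parameters and candidate times, the search space explodes, which is why the paper resorts to a genetic algorithm rather than enumeration.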

  2. Spontaneous gestures influence strategy choices in problem solving.

    Science.gov (United States)

    Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro

    2011-09-01

    Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.

  3. A simple vibrating sample magnetometer for macroscopic samples

    Science.gov (United States)

    Lopez-Dominguez, V.; Quesada, A.; Guzmán-Mínguez, J. C.; Moreno, L.; Lere, M.; Spottorno, J.; Giacomone, F.; Fernández, J. F.; Hernando, A.; García, M. A.

    2018-03-01

    Here we present a simple model of a vibrating sample magnetometer (VSM). The system allows recording magnetization curves at room temperature with a resolution of the order of 0.01 emu and is appropriate for macroscopic samples. The setup can be mounted in different configurations depending on the requirements of the sample to be measured (mass, saturation magnetization, saturation field, etc.). We also include examples of curves obtained with our setup and compare them with curves measured using a standard commercial VSM, which confirms the reliability of our device.

  4. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  5. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research.

    Science.gov (United States)

    Gentles, Stephen J; Charles, Cathy; Nicholas, David B; Ploeg, Jenny; McKibbon, K Ann

    2016-10-11

    Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research. The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type

  6. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT mea...... assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with the group-sequential analysis...... strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis...
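
The alpha spending idea referenced above is easy to make concrete. The sketch below evaluates a Pocock-type Lan-DeMets spending function at the interim fractions 11/45, 23/45, and 34/45 from the three-interim strategy; the particular spending function is an assumption, since the abstract does not name the one actually used.

```python
import math

def pocock_spending(alpha, t):
    """Lan-DeMets Pocock-type spending function: the cumulative type I
    error allowed by information fraction t is alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

alpha, N = 0.05, 45
looks = [11, 23, 34, 45]          # three interim looks plus the final analysis
cumulative = [pocock_spending(alpha, n / N) for n in looks]
# alpha newly available at each look (what each sequential test may spend)
increments = [cumulative[0]] + [
    cumulative[i] - cumulative[i - 1] for i in range(1, len(cumulative))
]
```

At information fraction t = 1 the function returns exactly alpha, so the overall type I error is preserved regardless of how many interim looks are taken.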

  7. Waterborne pollutant sampling using porous suction samplers

    International Nuclear Information System (INIS)

    Baig, M.A.

    1997-01-01

    The common standard method for sampling waterborne pollutants in the vadose zone is core sampling followed by extraction of the pore fluid. This method does not allow repeated sampling at the same location at later times. An alternative approach for sampling fluids (waterborne pollutants) from both saturated and unsaturated regions of the vadose zone uses porous suction samplers. There are three types of porous suction samplers: vacuum-operated samplers, pressure-vacuum lysimeters, and high-pressure-vacuum samplers. Suction samplers operate in the range of 0-70 centibars and usually consist of ceramic or polytetrafluoroethylene (PTFE) cups; the operating range of PTFE cups is higher than that of ceramic cups. These samplers are well suited for in situ and repeated sampling from the same location. This paper discusses the physical properties and operating conditions of such samplers as utilized in environmental sampling. (author)

  8. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  9. Characteristics of HIV target CD4 T cells collected using different sampling methods from the genital tract of HIV seronegative women.

    Science.gov (United States)

    Iyer, Smita S; Sabula, Michael J; Mehta, C Christina; Haddad, Lisa B; Brown, Nakita L; Amara, Rama R; Ofotokun, Igho; Sheth, Anandi N

    2017-01-01

    Understanding the immune profile of CD4 T cells, the primary targets for HIV, in the female genital tract (FGT) is critical for evaluating and developing effective biomedical HIV prevention strategies in women. However, longitudinal investigation of HIV susceptibility markers expressed by FGT CD4 T cells has been hindered by low cellular yield and risk of sampling-associated trauma. We investigated three minimally invasive FGT sampling methods to characterize and compare CD4 T cell yield and phenotype with the goal of establishing feasible sampling strategies for immune profiling of mucosal CD4 T cells. FGT samples were collected bimonthly from 12 healthy HIV negative women of reproductive age in the following order: 1) cervicovaginal lavage (CVL), 2) two sequential endocervical flocked swabs (FS), and 3) two sequential endocervical cytobrushes (CB1, CB2). Cells were isolated and phenotyped via flow cytometry. CD4 T cell recovery was highest from each individual CB compared to either CVL or FS. CD4 T cells, regardless of sampling method, expressed CCR5 at higher levels relative to peripheral blood samples. Using three different mucosal sampling methods collected longitudinally, we demonstrate that CD4 T cells within the FGT express CCR5 and α4β7 and are highly activated, attributes which could act in concert to facilitate HIV acquisition. FS and CB sampling methods can allow for investigation of strategies to reduce HIV target cells in the FGT and could inform the design and interpretation of microbicide and vaccine studies in women.

  10. Strategy as Projects

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten; Ritter, Thomas; Andersen, Torben Juul

    This paper proposes the adoption of a project-based view to analyze strategy formation and strategic renewal over time. Projects are resource-committing, empirically traceable investments, and as such, particularly suitable for the analysis of different manifestations of intended strategies as well...... as post-hoc manifestations of deviant, even rebellious, actions taken in opposition to the initial strategy announcement. The paper presents an analytical framework (a 5x2 matrix) of ten different project categories that together allows researchers to investigate how strategic renewal is realized through...... the enactment of different types of project initiatives throughout the organization. The developed framework is validated by two field studies that outline the robustness of the proposed matrix. In addition to the demonstration of the advantages of the framework, we discuss the limitations of the strategy-as-projects...

  11. Evolutionary Stable Strategy

    Indian Academy of Sciences (India)

    IAS Admin

    Biologists refer to this as a 'limited war' or conventional (ritualistic) strategy (not ... Now allow us to explain a few things about cat behaviour, which will help .... getting injured, but it also helped save both energy and time, vital .... intelligent and.

  12. Data from: Two different strategies of host manipulation allow parasites to persist in intermediate-definitive host systems

    NARCIS (Netherlands)

    Vries, de Lana; Langevelde, van F.

    2017-01-01

    Trophically-transmitted parasites start their development in an intermediate host, before finishing their development in their definitive host when the definitive host preys on the intermediate host. In intermediate-definitive host systems, two strategies of host manipulation have evolved:

  13. Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation

    Science.gov (United States)

    Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.

    2004-01-01

    Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.

  14. Detailed characterization of welding fumes in personal exposure samples

    International Nuclear Information System (INIS)

    Quémerais, B; Mino, James; Amin, M R; Golshahi, H; Izadi, H

    2015-01-01

    The objective of the project was to develop a method allowing for detailed characterization of welding particles, including particle number concentration, size distribution, surface chemistry and chemical composition of individual particles, as well as metal concentration of various welding fumes, in personal exposure samples using regular sampling equipment. A sampling strategy was developed to evaluate the variation of the collection methods on mass concentration. Samples were collected with various samplers and filters at two different locations using our collection system. The first location used a robotic welding system, while the second involved manual welding. Collected samples were analysed for mass concentration using gravimetry and for metal concentration using ICP/OES. More advanced analysis was performed on selected filters using X-ray photoelectron spectroscopy to determine the surface composition of the particles, and X-ray diffraction to determine the chemical composition of the fumes. Results showed that the robotic system exhibited substantial spatial variation when the collection system was located close to the weld. Collection efficiency was found to be quite variable depending upon the type of filter. As well, metal concentrations in blank filters depended upon the type of filter, with MCE filters presenting the highest blank values. Results obtained with the XRD and XPS systems showed that it was possible to analyse a small amount of powdered welding fume sample, but results on filters were not conclusive. (paper)
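
The gravimetric step reduces to a one-line calculation: mass gained on the filter divided by the sampled air volume. The sketch below illustrates this with made-up numbers; the flow rate and filter masses are not values from the study.

```python
def mass_concentration_mg_m3(pre_mg, post_mg, flow_lpm, minutes):
    """Airborne mass concentration from a gravimetric filter sample:
    mass gained on the filter divided by the sampled air volume."""
    volume_m3 = flow_lpm * minutes / 1000.0   # litres -> cubic metres
    return (post_mg - pre_mg) / volume_m3

# e.g. a filter gaining 0.48 mg at 2 L/min over a 240 min sampling period
c = mass_concentration_mg_m3(10.00, 10.48, 2.0, 240.0)   # 0.48 mg / 0.48 m3
```

In practice the blank filter values discussed above would be subtracted from the mass gain before dividing, since they differ by filter type.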

  15. [Strategy for molecular testing in pulmonary carcinoma].

    Science.gov (United States)

    Penault-Llorca, Frédérique; Tixier, Lucie; Perrot, Loïc; Cayre, Anne

    2016-01-01

    Nowadays, the analysis of theranostic molecular markers is central to the management of lung cancer. As these tumors are diagnosed at an advanced stage in two thirds of cases, molecular screening is frequently performed on "small samples". The screening strategy starts with an accurate histopathological characterization, including on biopsies or cytological specimens. WHO 2015 provided a new classification for small biopsy and cytology specimens, defining categories such as non-small cell carcinoma (NSCC), favor adenocarcinoma (TTF1 positive), or favor squamous cell carcinoma (p40 positive). Only the NSCC, non-squamous tumors are eligible for molecular testing. A tissue-sparing strategy has to be organized for small biopsies. Tests corresponding to available drugs are prioritized. Blank slides are prepared for immunohistochemistry and in situ hybridization based tests such as ALK. DNA is then extracted for the other tests, EGFR mutation screening first, associated or not with KRAS. Then the emerging biomarkers (HER2, ROS1, RET, BRAF…), as well as potentially other markers in the context of clinical trials, can be tested. The spread of next-generation sequencing technologies, with their very sensitive all-in-one approach, will allow the identification of minority clones. Eventually, the development of liquid biopsies will provide the opportunity to monitor the emergence of resistant clones during treatment. This non-invasive approach allows patients with a contraindication to biopsy, or with non-contributive biopsies, to access molecular screening. Copyright © 2016. Published by Elsevier Masson SAS.

  16. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.

    Science.gov (United States)

    Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly

    2015-09-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

  17. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

    Science.gov (United States)

    Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly

    2013-01-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818

  18. The role of coping strategies and self-efficacy as predictors of life satisfaction in a sample of parents of children with autism spectrum disorder.

    Science.gov (United States)

    Luque Salas, Bárbara; Yáñez Rodríguez, Virginia; Tabernero Urbieta, Carmen; Cuadrado, Esther

    2017-02-01

    This research aims to understand the role of coping strategies and self-efficacy expectations as predictors of life satisfaction in a sample of parents of boys and girls diagnosed with autism spectrum disorder. A total of 129 parents (64 men and 65 women) answered a questionnaire comprising life satisfaction, coping strategy, and self-efficacy scales. Using a regression model, results show that the age of the child is associated with a lower level of satisfaction in parents. The results show that self-efficacy is the variable that best explains the level of satisfaction in mothers, while the use of problem solving explains a higher level of satisfaction in fathers. Men and women show similar levels of life satisfaction; however, significant differences were found in coping strategies, with women reporting greater use of emotional expression and social support strategies than men. The development of functional coping strategies and of a high level of self-efficacy represents a key tool for adapting to caring for children with autism. Our results indicate the necessity of early intervention with parents to promote coping strategies, self-efficacy, and a high level of life satisfaction.

  19. Time management strategies for research productivity.

    Science.gov (United States)

    Chase, Jo-Ana D; Topp, Robert; Smith, Carol E; Cohen, Marlene Z; Fahrenwald, Nancy; Zerwic, Julie J; Benefield, Lazelle E; Anderson, Cindy M; Conn, Vicki S

    2013-02-01

    Researchers function in a complex environment and carry multiple role responsibilities. This environment is prone to various distractions that can derail productivity and decrease efficiency. Effective time management allows researchers to maintain focus on their work, contributing to research productivity. Thus, improving time management skills is essential to developing and sustaining a successful program of research. This article presents time management strategies addressing behaviors surrounding time assessment, planning, and monitoring. Herein, the Western Journal of Nursing Research editorial board recommends strategies to enhance time management, including setting realistic goals, prioritizing, and optimizing planning. Involving a team, problem-solving barriers, and early management of potential distractions can facilitate maintaining focus on a research program. Continually evaluating the effectiveness of time management strategies allows researchers to identify areas of improvement and recognize progress.

  20. Risk evaluation of accident management strategies

    International Nuclear Information System (INIS)

    Dingman, S.; Camp, A.

    1992-01-01

    The use of Probabilistic Risk Assessment (PRA) methods to evaluate accident management strategies in nuclear power plants is discussed in this paper. The PRA framework allows an integrated evaluation to be performed, giving the full implications of a particular strategy. The methodology is demonstrated for a particular accident management strategy: intentional depressurization of the reactor coolant system to avoid containment pressurization during the ejection of molten debris at vessel breach.

  1. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
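
Uniform node sampling, the best-performing strategy in the study, is straightforward to apply to a temporal edge list. The sketch below is a generic illustration, not the authors' code.

```python
import random

def uniform_node_sample(events, fraction, seed=0):
    """Keep a uniformly random fraction of nodes; retain only the
    time-stamped contacts whose endpoints were both kept."""
    rng = random.Random(seed)
    nodes = sorted({u for u, v, t in events} | {v for u, v, t in events})
    kept = set(rng.sample(nodes, int(fraction * len(nodes))))
    return [(u, v, t) for u, v, t in events if u in kept and v in kept]

# Toy temporal network: (node, node, timestamp) triples
events = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (0, 3, 4), (1, 3, 5), (0, 2, 6)]
sub = uniform_node_sample(events, fraction=0.5)
```

Statistics such as link activity or simulated epidemic spread can then be computed on `sub` and compared against the full data to quantify the sampling bias for a given research question.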

  2. Long-term strategic asset allocation: An out-of-sample evaluation

    NARCIS (Netherlands)

    Diris, B.F.; Palm, F.C.; Schotman, P.C.

    We evaluate the out-of-sample performance of a long-term investor who follows an optimized dynamic trading strategy. Although the dynamic strategy is able to benefit from predictability out-of-sample, a short-term investor using a single-period market timing strategy would have realized an almost

  3. Integrating the Theory of Sampling into Underground Mine Grade Control Strategies

    Directory of Open Access Journals (Sweden)

    Simon C. Dominy

    2018-05-01

    Grade control in underground mines aims to deliver quality tonnes to the process plant via the accurate definition of ore and waste. It comprises a decision-making process including data collection and interpretation; local estimation; development and mining supervision; ore and waste destination tracking; and stockpile management. The foundation of any grade control programme is that of high-quality samples collected in a geological context. The requirement for quality samples has long been recognised, where they should be representative and fit-for-purpose. Once a sampling error is introduced, it propagates through all subsequent processes contributing to data uncertainty, which leads to poor decisions and financial loss. Proper application of the Theory of Sampling reduces errors during sample collection, preparation, and assaying. To achieve quality, sampling techniques must minimise delimitation, extraction, and preparation errors. Underground sampling methods include linear (chip and channel), grab (broken rock), and drill-based samples. Grade control staff should be well-trained and motivated, and operating staff should understand the critical need for grade control. Sampling must always be undertaken with a strong focus on safety and alternatives sought if the risk to humans is high. A quality control/quality assurance programme must be implemented, particularly when samples contribute to a reserve estimate. This paper assesses grade control sampling with emphasis on underground gold operations and presents recommendations for optimal practice through the application of the Theory of Sampling.
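
A central quantitative tool of the Theory of Sampling is Gy's fundamental sampling error relation, which links sample mass to achievable precision. The sketch below uses the simplified form s² = C·d³/Ms with an assumed sampling constant; in real grade control work, C (lumping Gy's shape, liberation, granulometric, and mineralogical factors) must be derived for the specific ore.

```python
def minimum_sample_mass_g(d_cm, C, rel_sd):
    """Minimum sample mass (g) from Gy's simplified relation
    s^2 = C * d^3 / Ms, with nominal top particle size d (cm) and
    sampling constant C (g/cm^3)."""
    return C * d_cm**3 / rel_sd**2

# Illustrative values only (C must be calibrated for the actual deposit):
# 1 cm top size, C = 50 g/cm^3, target 10 % relative standard deviation
mass = minimum_sample_mass_g(d_cm=1.0, C=50.0, rel_sd=0.10)   # -> 5000 g
```

The cubic dependence on top particle size is why crushing before splitting reduces the required sample mass so dramatically.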

  4. Clinical evaluation of a Mucorales-specific real-time PCR assay in tissue and serum samples.

    Science.gov (United States)

    Springer, Jan; Lackner, Michaela; Ensinger, Christian; Risslegger, Brigitte; Morton, Charles Oliver; Nachbaur, David; Lass-Flörl, Cornelia; Einsele, Hermann; Heinz, Werner J; Loeffler, Juergen

    2016-12-01

    Molecular diagnostic assays can accelerate the diagnosis of fungal infections and subsequently improve patient outcomes. In particular, the detection of infections due to Mucorales is still challenging for laboratories and physicians. The aim of this study was to evaluate a probe-based Mucorales-specific real-time PCR assay (Muc18S) using tissue and serum samples from patients suffering from invasive mucormycosis (IMM). This assay can detect a broad range of clinically relevant Mucorales species and can be used to complement existing diagnostic tests or to screen high-risk patients. An advantage of the Muc18S assay is that it exclusively detects Mucorales species, allowing detection of Mucorales DNA without sequencing within a few hours. In paraffin-embedded tissue samples this PCR-based method allowed rapid identification of Mucorales in comparison with standard methods and showed 91 % sensitivity in the IMM tissue samples. We also evaluated serum samples, an easily accessible material, from patients at risk of IMM. Mucorales DNA was detected in all patients with probable/proven IMM (100 %) and in 29 % of the possible cases. Detection of IMM in serum could enable a diagnosis up to 21 days earlier than current methods, including tissue samples, which were obtained mainly post-mortem. A screening strategy for high-risk patients, which would enable targeted treatment to improve patient outcomes, is therefore possible.

  5. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha-1) double spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth at each sampling point. Data were analyzed using descriptive statistics and geostatistical methods. Using statistical parameters, the adequate number of samples for analyzing the attributes under study was established, ranging from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted to the spherical model. Establishing the number of samples and the spatial variability of soil physical properties may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.
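
The "adequate number of samples" can be derived from a classical relation between the coefficient of variation of a property and the tolerable error around its mean. Whether the authors used exactly this formula is not stated, so treat the sketch below as a generic illustration.

```python
import math

def n_samples(cv_percent, error_percent, t=2.0):
    """Samples needed for the sample mean to fall within +/- error_percent
    of the true mean: n = (t * CV / e)^2, with t ~ 2 approximating the
    95 % Student's t quantile for moderate sample sizes."""
    return math.ceil((t * cv_percent / error_percent) ** 2)

# e.g. a property with a 15 % coefficient of variation, 10 % tolerable error
n = n_samples(15.0, 10.0)   # -> 9 sampling points
```

Properties with low variability (such as particle density) need only a handful of points under this relation, while highly variable properties drive the count up quadratically with CV.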

  6. Connectivity strategies for higher-order neural networks applied to pattern recognition

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1990-01-01

    Different strategies for non-fully connected HONNs (higher-order neural networks) are discussed, showing that by using such strategies an input field of 128 x 128 pixels can be attained while still achieving in-plane rotation and translation-invariant recognition. These techniques allow HONNs to be used with the larger input scenes required for practical pattern-recognition applications. The number of interconnections that must be stored has been reduced by a factor of approximately 200,000 in a T/C case and about 2000 in a Space Shuttle/F-18 case by using regional connectivity. Third-order networks have been simulated using several connection strategies. The method found to work best is regional connectivity. The main advantages of this strategy are the following: (1) it considers features of various scales within the image and thus gets a better sample of what the image looks like; (2) it is invariant to shape-preserving geometric transformations, such as translation and rotation; (3) the connections are predetermined so that no extra computations are necessary during run time; and (4) it does not require any extra storage for recording which connections were formed.

  7. Innovative recruitment using online networks: lessons learned from an online study of alcohol and other drug use utilizing a web-based, respondent-driven sampling (webRDS) strategy.

    Science.gov (United States)

    Bauermeister, José A; Zimmerman, Marc A; Johns, Michelle M; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik

    2012-09-01

    We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18-24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods.
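The comparison of prevalence estimates above rests on checking whether 95% confidence intervals overlap. A sketch of that check using Wald intervals, with hypothetical prevalences and sample sizes (not the study's figures):

```python
import math

def prop_ci(p_hat, n, z=1.96):
    """95% Wald confidence interval for an estimated prevalence p_hat
    from a sample of size n."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

def intervals_overlap(a, b):
    """True if two (lo, hi) intervals share any point."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical past-30-day alcohol prevalence: webRDS sample vs. national survey
webrds = prop_ci(0.61, 3500)
national = prop_ci(0.59, 22000)
print(intervals_overlap(webrds, national))
```

Note this is the simple interval-overlap heuristic; the study additionally corrected for webRDS clustering effects before forming its intervals.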

  8. Hot Zone Identification: Analyzing Effects of Data Sampling on Spam Clustering

    Directory of Open Access Journals (Sweden)

    Rasib Khan

    2014-03-01

    Full Text Available Email is the most common and comparatively the most efficient means of exchanging information in today's world. However, given the widespread use of emails in all sectors, they have been the target of spammers since the beginning. Filtering spam emails has now led to critical actions such as forensic activities based on mining spam email. The data mine for spam emails at the University of Alabama at Birmingham is considered to be one of the most prominent resources for mining and identifying spam sources. It is a widely researched repository used by researchers from different global organizations. The usual process of mining the spam data involves going through every email in the data mine and clustering them based on their different attributes. However, given the size of the data mine, it takes an exceptionally long time to execute the clustering mechanism each time. In this paper, we illustrate sampling as an efficient tool for data reduction that preserves the information within the clusters, allowing spam forensic experts to quickly and effectively identify the ‘hot zone’ of the spam campaigns. We provide a detailed comparative analysis of the quality of the clusters after sampling, the overall distribution of clusters in the spam data, and timing measurements for our sampling approach. Additionally, we present different strategies that allowed us to optimize the sampling process using data pre-processing and the database engine's computational resources, thus improving the performance of the clustering process.
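The data-reduction step described above, drawing a uniform sample from a corpus too large to cluster directly, can be done in one pass with reservoir sampling (Vitter's Algorithm R). This is a generic sketch of such a reduction step, not the authors' exact procedure:

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Uniform random sample of k items from a stream of unknown length,
    in one pass (Algorithm R). Each item ends up in the sample with
    probability k/n. Generic data-reduction step, not the paper's code."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item  # replace a random slot with prob k/(i+1)
    return reservoir

# Reduce a corpus of 100,000 message ids to 1,000 before clustering
sample = reservoir_sample(range(100_000), 1_000)
print(len(sample))
```

The reduced sample can then be fed to the clustering stage, cutting its cost by the sampling ratio while keeping the cluster structure approximately intact.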

  9. WRAP Module 1 sampling strategy and waste characterization alternatives study

    Energy Technology Data Exchange (ETDEWEB)

    Bergeson, C.L.

    1994-09-30

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities, in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste, and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods that are used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in a baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.

  10. WRAP Module 1 sampling strategy and waste characterization alternatives study

    International Nuclear Information System (INIS)

    Bergeson, C.L.

    1994-01-01

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities, in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste, and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods that are used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in a baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.

  11. A belief-based evolutionarily stable strategy.

    Science.gov (United States)

    Deng, Xinyang; Wang, Zhen; Liu, Qi; Deng, Yong; Mahadevan, Sankaran

    2014-11-21

    As an equilibrium refinement of the Nash equilibrium, the evolutionarily stable strategy (ESS) is a key concept in evolutionary game theory and has attracted growing interest. An ESS can be either a pure strategy or a mixed strategy. Even though randomness is allowed in a mixed strategy, the selection probability of each pure strategy within it may fluctuate due to the impact of many factors, and this fluctuation leads to more uncertainty. In this paper, such uncertainty involved in mixed strategies is further taken into consideration: a belief strategy is proposed in terms of Dempster-Shafer evidence theory. Furthermore, based on the proposed belief strategy, a belief-based ESS is developed. The belief strategy and belief-based ESS reduce to the mixed strategy and mixed ESS, respectively, and provide more realistic and powerful tools to describe interactions among agents. Copyright © 2014 Elsevier Ltd. All rights reserved.
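The classical ESS conditions that the belief-based ESS generalizes can be checked numerically for a symmetric two-strategy game: s is an ESS against a mutant t if E[s,s] > E[t,s], or E[s,s] = E[t,s] and E[s,t] > E[t,t]. A sketch using the textbook Hawk-Dove game (a standard example, not taken from this paper):

```python
def payoff(p, q, A):
    """Expected payoff of mixed strategy p against q in a symmetric
    2-strategy game with row-player payoff matrix A."""
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

def is_ess(s, A, mutants, eps=1e-9):
    """Maynard Smith's ESS conditions: s must beat every mutant t,
    either strictly against s, or by the tie-break against t itself."""
    for t in mutants:
        e_ss, e_ts = payoff(s, s, A), payoff(t, s, A)
        if e_ts > e_ss + eps:
            return False  # mutant strictly invades
        if abs(e_ts - e_ss) <= eps and payoff(t, t, A) >= payoff(s, t, A) - eps:
            return False  # tie-break condition fails
    return True

# Hawk-Dove game with V=2, C=4: the mixed ESS plays Hawk with prob V/C = 1/2
A = [[-1.0, 2.0], [0.0, 1.0]]
s = (0.5, 0.5)
mutants = [(m / 10, 1 - m / 10) for m in range(11) if m != 5]
print(is_ess(s, A, mutants))
```

At the mixed ESS every mutant earns the same payoff against s, so the tie-break condition E[s,t] > E[t,t] is what rejects invaders.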

  12. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of relying only on acidic extraction, as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, while antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious for polar compounds than for non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  13. Effects of Direct Fuel Injection Strategies on Cycle-by-Cycle Variability in a Gasoline Homogeneous Charge Compression Ignition Engine: Sample Entropy Analysis

    Directory of Open Access Journals (Sweden)

    Jacek Hunicz

    2015-01-01

    Full Text Available In this study we summarize and analyze experimental observations of cyclic variability in homogeneous charge compression ignition (HCCI combustion in a single-cylinder gasoline engine. The engine was configured with negative valve overlap (NVO to trap residual gases from prior cycles and thus enable auto-ignition in successive cycles. Correlations were developed between different fuel injection strategies and cycle average combustion and work output profiles. Hypothesized physical mechanisms based on these correlations were then compared with trends in cycle-by-cycle predictability as revealed by sample entropy. The results of these comparisons help to clarify how fuel injection strategy can interact with prior cycle effects to affect combustion stability and so contribute to design control methods for HCCI engines.
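Sample entropy, the predictability measure used in the study above, is SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within tolerance r of each other (Chebyshev distance, self-matches excluded) and A counts the same for length m+1. A compact illustrative implementation, not the authors' analysis code:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series: -ln(A/B), where B counts
    template-pair matches of length m and A of length m+1 (max-norm distance
    <= r, self-matches excluded). Lower values mean a more predictable,
    regular signal. Illustrative implementation."""
    n = len(series)
    def count(mm):
        templates = [series[i:i + mm] for i in range(n - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b)

# A perfectly alternating series is highly predictable: SampEn near 0
regular = [0.0, 1.0] * 30
print(round(sample_entropy(regular, m=2, r=0.2), 3))
```

Applied cycle-by-cycle to combustion metrics such as IMEP, a rising SampEn signals growing cyclic variability.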

  14. Ligand pose and orientational sampling in molecular docking.

    Directory of Open Access Journals (Sweden)

    Ryan G Coleman

    Full Text Available Molecular docking remains an important tool for structure-based screening to find new ligands and chemical probes. As docking ambitions grow to include new scoring function terms, and to address ever more targets, the reliability and extendability of the orientation sampling, and the throughput of the method, become pressing. Here we explore sampling techniques that eliminate stochastic behavior in DOCK3.6, allowing us to optimize the method for regularly variable sampling of orientations. This also enabled a focused effort to optimize the code for efficiency, with a three-fold increase in the speed of the program. This, in turn, facilitated extensive testing of the method on the 102 targets, 22,805 ligands and 1,411,214 decoys of the Directory of Useful Decoys-Enhanced (DUD-E) benchmarking set, at multiple levels of sampling. Encouragingly, we observe that as sampling increases from 50 to 500 to 2000 to 5000 to 20,000 molecular orientations in the binding site (and so from about 1×10^10 to 4×10^10 to 1×10^11 to 2×10^11 to 5×10^11 mean atoms scored per target, since multiple conformations are sampled per orientation), the enrichment of ligands over decoys monotonically increases for most DUD-E targets. Meanwhile, including internal electrostatics in the evaluation of ligand conformational energies, and restricting aromatic hydroxyls to low-energy rotamers, further improved enrichment values. Several of the strategies used here to improve the efficiency of the code are broadly applicable in the field.
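The ligand-over-decoy enrichment discussed above is often summarized as an enrichment factor at a fixed fraction of the score-ranked list: the share of true ligands recovered in the top x%, divided by x. A toy sketch with hypothetical scores and labels (not DUD-E data or DOCK's own metric code):

```python
def enrichment_factor(scores, labels, top_frac=0.01):
    """EF@x%: fraction of true ligands (label 1) found in the top-scoring
    x% of the ranked list, divided by x. Lower score = better (docking
    energies). Illustrative metric implementation."""
    ranked = [label for _, label in sorted(zip(scores, labels))]
    n_top = max(1, int(len(ranked) * top_frac))
    hits_top = sum(ranked[:n_top])
    return (hits_top / sum(labels)) / top_frac

# Toy screen: 5 ligands hidden among 100 molecules, 4 of them score well
scores = list(range(100))                     # pretend docking energies
labels = [1] * 4 + [0] * 90 + [1] * 1 + [0] * 5
print(enrichment_factor(scores, labels, 0.05))
```

An EF well above 1 means the scoring function concentrates true ligands near the top of the list far better than random selection would.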

  15. Statistical sampling strategies for survey of soil contamination

    NARCIS (Netherlands)

    Brus, D.J.

    2011-01-01

    This chapter reviews methods for selecting sampling locations in contaminated soils for three situations. In the first situation a global estimate of the soil contamination in an area is required. The result of the survey is a number or a series of numbers per contaminant, e.g. the estimated mean

  16. Some statistical and sampling needs for detecting spills or migration at commercial low-level radioactive waste disposal sites

    International Nuclear Information System (INIS)

    Thomas, J.M.; Eberhardt, L.L.; Skalski, J.R.; Simmons, M.A.

    1984-05-01

    As part of a larger study funded by the US Nuclear Regulatory Commission we have been investigating field sampling strategies and compositing as a means of detecting spills or migration at commercial low-level radioactive waste disposal sites. The overall project is designed to produce information for developing guidance on implementing 10 CFR part 61. Compositing (pooling samples) for detection is discussed first, followed by our development of a statistical test to allow a decision as to whether any component of a composite exceeds a prescribed maximum acceptable level. The question of optimal field sampling designs and an Apple computer program designed to show the difficulties in constructing efficient field designs and using compositing schemes are considered. 6 references, 3 figures, 3 tables
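The compositing logic above rests on a worst-case dilution argument: when k equal aliquots are pooled, a single contaminated member can be diluted by up to a factor of k, so a composite must be flagged for individual re-testing well below the nominal limit. A deliberately simplified version of such a screening rule (the report develops a proper statistical test around this idea):

```python
def composite_flags_followup(composite_conc, k, limit):
    """Conservative screen for composite samples: a composite of k equal
    aliquots can hide one member at up to k times the measured mean, so
    re-test the individual samples whenever k * composite >= limit.
    Simplified illustration of the compositing-for-detection idea."""
    return k * composite_conc >= limit

# Composite of 8 aliquots reads 0.6 (units of the limit, e.g. Bq/g)
# against a maximum acceptable level of 4.0
print(composite_flags_followup(0.6, 8, 4.0))
print(composite_flags_followup(0.4, 8, 4.0))
```

The trade-off is visible immediately: larger k means fewer analyses per survey but a lower effective decision threshold per composite, and hence more false triggers.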

  17. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    computationally intensive method. As such, many strategies were used to reduce the computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available at GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimization criteria, and d) allow adding or deleting points to/from an existing point pattern.
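Spatial simulated annealing of the kind spsann implements can be sketched in a few lines: perturb one sample point at a time by moving it to a random candidate location, and accept worse configurations with a probability that decays as the system cools. This is a generic illustration using mean squared shortest distance as the spatial coverage criterion, not the spsann code:

```python
import math
import random

def mssd(points, grid):
    """Mean squared shortest distance from each grid node to the nearest
    sample point (a standard spatial coverage criterion; lower is better)."""
    return sum(min((gx - px) ** 2 + (gy - py) ** 2 for px, py in points)
               for gx, gy in grid) / len(grid)

def spatial_annealing(points, candidates, grid, iters=2000, t0=1.0, seed=1):
    """Minimal spatial simulated annealing: move one point per iteration to
    a random candidate location; always accept improvements, and accept
    worse states with probability exp(-delta/temp) under linear cooling."""
    rng = random.Random(seed)
    pts = list(points)
    best = cur = mssd(pts, grid)
    for it in range(iters):
        temp = t0 * (1 - it / iters)
        i = rng.randrange(len(pts))
        old = pts[i]
        pts[i] = rng.choice(candidates)
        new = mssd(pts, grid)
        if new <= cur or rng.random() < math.exp(-(new - cur) / max(temp, 1e-9)):
            cur = new
            best = min(best, cur)
        else:
            pts[i] = old  # reject the perturbation
    return pts, best

grid = [(x / 9, y / 9) for x in range(10) for y in range(10)]
candidates = grid
start = [(0.0, 0.0)] * 5          # degenerate initial pattern: all points stacked
pts, score = spatial_annealing(start, candidates, grid)
print(score < mssd(start, grid))  # coverage improved
```

The finite candidate set mirrors strategy (b) in the abstract; a real implementation would also update the criterion incrementally rather than recomputing it, as in strategy (c).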

  18. Standard methods for sampling freshwater fishes: Opportunities for international collaboration

    Science.gov (United States)

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identify means to collaborate with scientists where standardization is limited but interest and need occur.

  19. The Effect of Vocabulary Self-Selection Strategy and Input Enhancement Strategy on the Vocabulary Knowledge of Iranian EFL Learners

    Science.gov (United States)

    Masoudi, Golfam

    2017-01-01

    The present study was designed to investigate empirically the effect of Vocabulary Self-Selection strategy and Input Enhancement strategy on the vocabulary knowledge of Iranian EFL Learners. After taking a diagnostic pretest, both experimental groups enrolled in two classes. Learners who practiced Vocabulary Self-Selection were allowed to…

  20. Sample preservation, transport and processing strategies for honeybee RNA extraction: Influence on RNA yield, quality, target quantification and data normalization.

    Science.gov (United States)

    Forsgren, Eva; Locke, Barbara; Semberg, Emilia; Laugen, Ane T; Miranda, Joachim R de

    2017-08-01

    Viral infections in managed honey bees are numerous, and most of them are caused by viruses with an RNA genome. Since RNA degrades rapidly, appropriate sample management and RNA extraction methods are imperative to get high quality RNA for downstream assays. This study evaluated the effect of various sampling-transport scenarios (combinations of temperature, RNA stabilizers, and duration of transport) on six RNA quality parameters: yield, purity, integrity, cDNA synthesis efficiency, target detection and quantification. The use of water and extraction buffer were also compared for a primary bee tissue homogenate prior to RNA extraction. The strategy least affected by time was preservation of samples at -80°C. All other regimens turned out to be poor alternatives unless the samples were frozen or processed within 24 h. Chemical stabilizers have the greatest impact on RNA quality, and adding an extra homogenization step (a QIAshredder™ homogenizer) to the extraction protocol significantly improves the RNA yield and chemical purity. This study confirms that RIN values (RNA Integrity Number) should be used cautiously with bee RNA. Using water for the primary homogenate has no negative effect on RNA quality as long as this step is no longer than 15 min. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Aromatic hydrocarbons in produced water from offshore oil and gas production. Test of sample strategy; Aromatiske kulbrinter i produceret vand fra offshore olie- og gas industrien. Test af proevetagningsstrategi

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, A.

    2005-07-01

    In co-operation with the Danish EPA, the National Environmental Research Institute (NERI) has carried out a series of measurements of aromatic hydrocarbons in produced water from an offshore oil and gas production platform in the Danish sector of the North Sea as part of the project 'Testing of sampling strategy for aromatic hydrocarbons in produced water from the offshore oil and gas industry'. The measurements included both volatile (BTEX: benzene, toluene, ethylbenzene and xylenes) and semi-volatile aromatic hydrocarbons: NPD (naphthalenes, phenanthrenes and dibenzothiophenes) and selected PAHs (polycyclic aromatic hydrocarbons). In total, 12 samples of produced water were collected at the Dan FF production platform in the North Sea by the operator, Maersk Oil and Gas, as four sets of three parallel samples between November 24 and December 02, 2004. After collection of the last set, the samples were shipped to NERI for analysis. The water samples were collected in 1 L glass bottles that were filled completely (without overfilling) and tightly closed. After sampling, the samples were preserved with hydrochloric acid and cooled below ambient temperature until being shipped off to NERI. Here all samples were analysed in duplicate, and the results show that BTEX levels were reduced compared to similar measurements carried out by NERI in 2002 and others. In this work, BTEX levels were approximately 5 mg/L, while similar studies showed levels in the range 0.5-35 mg/L. For NPD, levels were similar, 0.5-1.4 mg/L, while PAH levels seemed elevated: 0.1-0.4 mg/L in this work compared to 0.001-0.3 mg/L in similar studies. The applied sampling strategy has been tested by performing analysis of variance on the analytical data. The test of the analytical data has shown that the mean values of the three parallel samples collected in series constituted a good estimate of the levels at the time of sampling; thus, the variance between the parallel samples was not

  2. Technical Note: Comparison of storage strategies of sea surface microlayer samples

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2013-07-01

    Full Text Available The sea surface microlayer (SML is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time so the development of optimal storage protocols is paramount. We here briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA and the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.

  3. Sampling wild species to conserve genetic diversity

    Science.gov (United States)

    Sampling seed from natural populations of crop wild relatives requires choice of the locations to sample from and the amount of seed to sample. While this may seem like a simple choice, in fact careful planning of a collector’s sampling strategy is needed to ensure that a crop wild collection will ...

  4. High-pressure oxygenation of thin-wall YBCO single-domain samples

    International Nuclear Information System (INIS)

    Chaud, X; Savchuk, Y; Sergienko, N; Prikhna, T; Diko, P

    2008-01-01

    The oxygen annealing of ReBCO bulk material, necessary to achieve superconducting properties, usually induces micro- and macro-cracks. This leads to a crack-assisted oxygenation process that allows large bulk samples to be oxygenated faster than single crystals, but the excellent superconducting properties are offset by poor mechanical ones. A more progressive oxygenation strategy has been shown to reduce oxygenation cracks drastically; the problem is then to keep a reasonable annealing time. The concept of bulk Y123 single-domain samples with thin-wall geometry was introduced to bypass the inherent limitation of the slow oxygen diffusion rate, but by itself it is not enough. The use of a high oxygen pressure (16 MPa) speeds up the process further: it shifts the equilibrium phase diagram towards higher temperatures, i.e., higher diffusion rates, for a given oxygen content in the material. Remarkable results were obtained by applying such a high-pressure oxygen annealing process to thin-wall single-domain samples. The trapped field of 16 mm diameter Y123 thin-wall single-domain samples was doubled (0.6 T vs 0.3 T at 77 K) with an annealing time half as long (about 3 days). The initial development was made on thin bars. The advantage of the thin-wall geometry is that such an annealing can be applied directly to a much larger sample

  5. Selective hedging strategies for oil stockpiling

    International Nuclear Information System (INIS)

    Yun, Won-Cheol

    2006-01-01

    As a feasible option for improving the economics and operational efficiency of stockpiling by a public agency, this study suggests simple selective hedging strategies using forward contracts. The main advantage of these selective hedging strategies over previous ones is that they do not require predicting future spot prices, but instead use the sign and magnitude of the basis, which is readily available to the public. Using the weekly spot and forward prices of West Texas Intermediate for the period October 1997-August 2002, this study adopts an ex ante out-of-sample analysis to compare selective hedging performance against no-hedge and minimum-variance routine hedging strategies. To some extent, the selective hedging strategies dominate the traditional routine hedging strategy, but they do not improve upon the expected returns of the no-hedge case, which is mainly due to the characteristics of the out-of-sample period used in this analysis
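A selective hedge keyed to the basis can be stated as a one-line decision rule. The sketch below is a stylized version of the idea, with the basis defined here as forward minus spot and a hypothetical trigger threshold; the paper's actual rules and data differ:

```python
def hedge_decision(spot, forward, threshold=0.0):
    """Selective hedging rule: sell forward only when the basis
    (defined here as forward minus spot) exceeds a threshold, i.e. the
    market is in contango; otherwise stay unhedged. Stylized illustration
    of basis-triggered selective hedging, not the paper's exact rule."""
    basis = forward - spot
    return "hedge" if basis > threshold else "no-hedge"

# WTI-style example prices ($/bbl), purely illustrative
print(hedge_decision(60.0, 61.5))  # positive basis -> lock in the forward
print(hedge_decision(60.0, 59.0))  # negative basis -> remain unhedged
```

Because the rule conditions only on observable prices, it can be evaluated ex ante, which is exactly what the out-of-sample analysis above requires.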

  6. A Sensitivity Study of Human Errors in Optimizing Surveillance Test Interval (STI) and Allowed Outage Time (AOT) of Standby Safety System

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang

    1998-01-01

    In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs), and testing strategies of safety components in a nuclear power plant are prescribed in the plant technical specifications. In general, a standby safety system is required to be redundant (i.e., composed of multiple components), and these components are tested under either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of human errors associated with testing into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. Adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modelled. The final outcome of this study is an optimized human error domain, obtained from a 3-D human error sensitivity analysis by selecting a finely classified segment. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)
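A linear unavailability model of the kind referred to above combines a few additive terms per component. The sketch below is a textbook-style illustration under assumed contributions (random failures between tests, test downtime, and a human-error term for a test error that leaves the component down until the next test); it is not the paper's exact formulation or data:

```python
def average_unavailability(lam, sti_hours, test_duration, hep):
    """Average unavailability of a periodically tested standby component
    in a simple linear model:
      lam * T / 2   - random failures accumulating between tests,
      tau / T       - downtime while the test itself is running,
      hep           - human error during the test that leaves the component
                      undetected-failed until the next test.
    Illustrative textbook-style model, not the paper's exact one."""
    return lam * sti_hours / 2 + test_duration / sti_hours + hep

# Hypothetical numbers: lambda = 1e-5 /h, monthly test (T = 720 h),
# 2 h test outage, human error probability 1e-3
u = average_unavailability(1e-5, 720.0, 2.0, 1e-3)
print(round(u, 5))
```

Such a model makes the trade-off explicit: lengthening the STI shrinks the test-downtime term but grows the random-failure term, and the human-error term sets a floor that optimization cannot remove.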

  7. Determination of cadmium and lead in urine samples after dispersive solid–liquid extraction on multiwalled carbon nanotubes by slurry sampling electrothermal atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Álvarez Méndez, J.; Barciela García, J.; García Martín, S.; Peña Crecente, R.M.; Herrero Latorre, C., E-mail: carlos.herrero@usc.es

    2015-04-01

    A new method for the determination of Cd and Pb in urine samples has been developed. The method involves dispersive solid-phase extraction (DSPE), slurry sampling (SS), and subsequent electrothermal atomic absorption spectrometry (ETAAS). Oxidized multiwalled carbon nanotubes (MWCNTs) were used as the sorbent material. The isolated MWCNT/analyte aggregates were treated with nitric acid to form a slurry and both metals were determined directly by injecting the slurry into the ETAAS atomizer. The parameters that influence the adsorption of the metals on MWCNTs in the DSPE process, the formation and extraction of the slurry, and the ETAAS conditions were studied by different factorial design strategies. The detection and quantification limits obtained for Cd under optimized conditions were 9.7 and 32.3 ng/L, respectively, and for Pb these limits were 0.13 and 0.43 μg/L. The preconcentration factors achieved were 3.9 and 5.4. The RSD values (n = 10) were less than 4.1% and 5.9% for Cd and Pb, respectively. The accuracy of the method was assessed in recovery studies, with values in the range 96–102% obtained for Cd and 97–101% for Pb. In addition, the analysis of certified reference materials gave consistent results. The DSPE–SS–ETAAS method is a novel and useful strategy for the determination of Pb and Cd at low levels in human urine samples. The method is sensitive, fast, and free of matrix interferences, and it avoids the tedious and time-consuming on-column adsorption and elution steps associated with commonly used SPE procedures. The proposed method was used to determine Cd and Pb in urine samples of unexposed healthy people and satisfactory results were obtained. - Highlights: • Cd and Pb determination based on the combination of DSPE, SS and ETAAS • Urine matrix was eliminated using DSPE based on multiwalled carbon nanotubes. • Slurry sampling technique permitted the direct injection of sample into the ETAAS atomizer.
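Detection and quantification limits like those quoted above are conventionally derived from repeated blank measurements and the calibration slope (the 3-sigma/10-sigma convention). A generic sketch with hypothetical blank signals and slope, not this paper's data:

```python
from statistics import stdev

def detection_limits(blank_signals, slope):
    """IUPAC-style limits from repeated blanks: LOD = 3*s_blank/slope,
    LOQ = 10*s_blank/slope, where s_blank is the standard deviation of the
    blank signals and slope converts signal to concentration.
    Generic illustration of how LOD/LOQ figures are derived."""
    s = stdev(blank_signals)
    return 3 * s / slope, 10 * s / slope

# Hypothetical absorbance blanks and calibration slope (absorbance per ug/L)
blanks = [0.0021, 0.0018, 0.0024, 0.0019, 0.0022, 0.0020]
lod, loq = detection_limits(blanks, slope=0.015)
print(round(lod, 4), round(loq, 4))  # concentrations in ug/L
```

The ratio LOQ/LOD is fixed at 10/3 by construction, which is why quantification limits in such papers sit a little over three times the detection limits.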

  8. Alternating sample changer and an automatic sample changer for liquid scintillation counting of alpha-emitting materials

    International Nuclear Information System (INIS)

    Thorngate, J.H.

    1977-08-01

    Two sample changers are described that were designed for liquid scintillation counting of alpha-emitting samples prepared using solvent-extraction chemistry. One operates manually but changes samples without exposing the photomultiplier tube to light, allowing the high voltage to remain on for improved stability. The other is capable of automatically counting up to 39 samples. An electronic control for the automatic sample changer is also described

  9. A Quantitative Proteomics Approach to Clinical Research with Non-Traditional Samples

    Directory of Open Access Journals (Sweden)

    Rígel Licier

    2016-10-01

    Full Text Available The proper handling of samples to be analyzed by mass spectrometry (MS) can guarantee excellent results and a greater depth of analysis when working in quantitative proteomics. This is critical when trying to assess non-traditional sources such as ear wax, saliva, vitreous humor, aqueous humor, tears, nipple aspirate fluid, breast milk/colostrum, cervical-vaginal fluid, nasal secretions, bronchoalveolar lavage fluid, and stools. We intend to provide the investigator with relevant aspects of quantitative proteomics and to recognize the most recent clinical research work conducted with atypical samples and analyzed by quantitative proteomics. Having as reference the most recent and different approaches used with non-traditional sources allows us to compare new strategies in the development of novel experimental models. On the other hand, these references help us to contribute significantly to the understanding of the proportions of proteins in different proteomes of clinical interest and may lead to potential advances in the emerging field of precision medicine.

  10. A Quantitative Proteomics Approach to Clinical Research with Non-Traditional Samples.

    Science.gov (United States)

    Licier, Rígel; Miranda, Eric; Serrano, Horacio

    2016-10-17

    The proper handling of samples to be analyzed by mass spectrometry (MS) can guarantee excellent results and a greater depth of analysis when working in quantitative proteomics. This is critical when trying to assess non-traditional sources such as ear wax, saliva, vitreous humor, aqueous humor, tears, nipple aspirate fluid, breast milk/colostrum, cervical-vaginal fluid, nasal secretions, bronchoalveolar lavage fluid, and stools. We intend to provide the investigator with relevant aspects of quantitative proteomics and to recognize the most recent clinical research work conducted with atypical samples and analyzed by quantitative proteomics. Having as reference the most recent and different approaches used with non-traditional sources allows us to compare new strategies in the development of novel experimental models. On the other hand, these references help us to contribute significantly to the understanding of the proportions of proteins in different proteomes of clinical interest and may lead to potential advances in the emerging field of precision medicine.

  11. 40 CFR 82.8 - Grant of essential use allowances and critical use allowances.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 17 (2010-07-01 edition). Grant of essential use allowances and critical use allowances. Section 82.8, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY... Albemarle; Bill Clark Pest Control, Inc.; Burnside Services, Inc.; Cardinal Professional Products; Chemtura Corp...

  12. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
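    The cut-and-project construction itself is algebra-heavy, but a simple golden-ratio relative, the Fibonacci lattice, illustrates how an irrational slope yields non-periodic yet evenly spread sample sites. This is an illustrative stand-in, not the paper's exact construction:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def fibonacci_lattice(n):
    """n non-periodic but evenly spread sample sites in the unit square:
    x advances uniformly while y follows the golden-ratio rotation
    (k * phi) mod 1, which never repeats and fills the interval evenly."""
    return [((k + 0.5) / n, (k * PHI) % 1.0) for k in range(n)]

sites = fibonacci_lattice(256)
```

    Scaling the unit-square coordinates by the image width and height gives sample positions for an image of any size.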

  13. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields with category fields obtained by discrete simulation methods such as multipoint, sequential indicator simulation, and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not exactly reproduce the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without computing and inverting the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range, and GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than in the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it realistic to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
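    As a sketch of the coding-set idea, the following updates a toy GMRF with precision Q = dI − βA (A the 4-neighbour adjacency on a torus) in checkerboard order, so all same-colour sites are conditionally independent given the other colour. This is a simplified illustration; the paper's convolution-based, truncated variant is not reproduced:

```python
import math
import random

def gibbs_sweep(x, beta, d, rng):
    """One red-black Gibbs sweep for a GMRF with precision Q = d*I - beta*A.
    The conditional at site (i, j) is N(beta/d * sum(neighbours), 1/d);
    the two checkerboard colours are the 'coding sets'."""
    n = len(x)
    sd = 1.0 / math.sqrt(d)
    for colour in (0, 1):                      # red sites, then black sites
        for i in range(n):
            for j in range(n):
                if (i + j) % 2 != colour:
                    continue
                s = (x[(i - 1) % n][j] + x[(i + 1) % n][j]
                     + x[i][(j - 1) % n] + x[i][(j + 1) % n])
                x[i][j] = rng.gauss(beta * s / d, sd)
    return x

rng = random.Random(0)
n = 8
x = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
for _ in range(50):
    gibbs_sweep(x, beta=0.2, d=1.0, rng=rng)   # d > 4*beta keeps Q positive definite
```

    Because no two same-colour sites are neighbours, each colour could be updated in parallel (or, as the paper proposes, by convolution) rather than in this explicit double loop.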

  14. Synthetic Multiple-Imputation Procedure for Multistage Complex Samples

    Directory of Open Access Journals (Sweden)

    Zhou Hanzhi

    2016-03-01

    Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.
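    The inferential machinery the article borrows from the MI framework rests on Rubin's combining rules. A minimal sketch with illustrative numbers (not data from the article):

```python
import statistics

def rubin_combine(estimates, variances):
    """Rubin's rules: pooled point estimate q_bar and total variance
    T = U_bar + (1 + 1/m) * B, where U_bar is the average within-imputation
    variance and B the between-imputation variance over m imputations."""
    m = len(estimates)
    q_bar = sum(estimates) / m
    u_bar = sum(variances) / m
    b = statistics.variance(estimates)  # sample variance across imputations
    return q_bar, u_bar + (1 + 1 / m) * b

# Five hypothetical imputed-data estimates of a BMI percentile and their variances.
q, t = rubin_combine([24.1, 24.6, 23.9, 24.3, 24.4],
                     [0.30, 0.28, 0.31, 0.29, 0.30])
```

    The extra (1 + 1/m)·B term is what inflates the variance for the uncertainty added by imputation itself.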

  15. High-resolution X-ray diffraction with no sample preparation.

    Science.gov (United States)

    Hansford, G M; Turner, S M R; Degryse, P; Shortland, A J

    2017-07-01

    It is shown that energy-dispersive X-ray diffraction (EDXRD) implemented in a back-reflection geometry is extremely insensitive to sample morphology and positioning even in a high-resolution configuration. This technique allows high-quality X-ray diffraction analysis of samples that have not been prepared and is therefore completely non-destructive. The experimental technique was implemented on beamline B18 at the Diamond Light Source synchrotron in Oxfordshire, UK. The majority of the experiments in this study were performed with pre-characterized geological materials in order to elucidate the characteristics of this novel technique and to develop the analysis methods. Results are presented that demonstrate phase identification, the derivation of precise unit-cell parameters and extraction of microstructural information on unprepared rock samples and other sample types. A particular highlight was the identification of a specific polytype of a muscovite in an unprepared mica schist sample, avoiding the time-consuming and difficult preparation steps normally required to make this type of identification. The technique was also demonstrated in application to a small number of fossil and archaeological samples. Back-reflection EDXRD implemented in a high-resolution configuration shows great potential in the crystallographic analysis of cultural heritage artefacts for the purposes of scientific research such as provenancing, as well as contributing to the formulation of conservation strategies. Possibilities for moving the technique from the synchrotron into museums are discussed. The avoidance of the need to extract samples from high-value and rare objects is a highly significant advantage, applicable also in other potential research areas such as palaeontology, and the study of meteorites and planetary materials brought to Earth by sample-return missions.

  16. 34 CFR 656.30 - What are allowable costs and limitations on allowable costs?

    Science.gov (United States)

    2010-07-01

    ... FOREIGN LANGUAGE AND AREA STUDIES OR FOREIGN LANGUAGE AND INTERNATIONAL STUDIES What Conditions Must Be... 34 Education 3 (2010-07-01 edition). What are allowable costs and limitations on allowable costs? Section 656.30, Education, Regulations of the Offices of the Department of Education...

  17. Cross-sample validation provides enhanced proteome coverage in rat vocal fold mucosa.

    Directory of Open Access Journals (Sweden)

    Nathan V Welham

    2011-03-01

    The vocal fold mucosa is a biomechanically unique tissue comprised of a densely cellular epithelium, superficial to an extracellular matrix (ECM)-rich lamina propria. Such ECM-rich tissues are challenging to analyze using proteomic assays, primarily due to extensive crosslinking and glycosylation of the majority of high-Mr ECM proteins. In this study, we implemented an LC-MS/MS-based strategy to characterize the rat vocal fold mucosa proteome. Our sample preparation protocol successfully solubilized both proteins and certain high-Mr glycoconjugates and resulted in the identification of hundreds of mucosal proteins. A straightforward approach to the treatment of protein identifications attributed to single peptide hits allowed the retention of potentially important low-abundance identifications (validated by a cross-sample match and de novo interpretation of relevant spectra) while still eliminating potentially spurious identifications (global single peptide hits with no cross-sample match). The resulting vocal fold mucosa proteome was characterized by a wide range of cellular and extracellular proteins spanning 12 functional categories.

  18. Small sample whole-genome amplification

    Science.gov (United States)

    Hara, Christine; Nguyen, Christine; Wheeler, Elizabeth; Sorensen, Karen; Arroyo, Erin; Vrankovich, Greg; Christian, Allen

    2005-11-01

    Many challenges arise when trying to amplify and analyze human samples collected in the field due to limitations in sample quantity, and contamination of the starting material. Tests such as DNA fingerprinting and mitochondrial typing require a certain sample size and are carried out in large volume reactions; in cases where insufficient sample is present whole genome amplification (WGA) can be used. WGA allows very small quantities of DNA to be amplified in a way that enables subsequent DNA-based tests to be performed. A limiting step to WGA is sample preparation. To minimize the necessary sample size, we have developed two modifications of WGA: the first allows for an increase in amplified product from small, nanoscale, purified samples with the use of carrier DNA while the second is a single-step method for cleaning and amplifying samples all in one column. Conventional DNA cleanup involves binding the DNA to silica, washing away impurities, and then releasing the DNA for subsequent testing. We have eliminated losses associated with incomplete sample release, thereby decreasing the required amount of starting template for DNA testing. Both techniques address the limitations of sample size by providing ample copies of genomic samples. Carrier DNA, included in our WGA reactions, can be used when amplifying samples with the standard purification method, or can be used in conjunction with our single-step DNA purification technique to potentially further decrease the amount of starting sample necessary for future forensic DNA-based assays.

  19. Evaluation of Multiple Linear Regression-Based Limited Sampling Strategies for Enteric-Coated Mycophenolate Sodium in Adult Kidney Transplant Recipients.

    Science.gov (United States)

    Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E

    2018-04-01

    Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of the published LSSs for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasions, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient and compared with the second-occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined for the prediction of the full MPA AUC0-12. Prediction using a multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
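    The reference AUC0-12 and the bias/imprecision metrics used to judge each LSS can be sketched as follows; the concentration values are illustrative and the function names are ours, not the paper's:

```python
import statistics

def auc_trapezoid(times, concs):
    """Linear trapezoidal AUC over the sampled interval (e.g. AUC0-12)."""
    return sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2
               for i in range(len(times) - 1))

def bias_imprecision(predicted, observed):
    """Median percentage prediction error (bias) and median absolute
    percentage prediction error (imprecision) across patients."""
    pe = [100 * (p - o) / o for p, o in zip(predicted, observed)]
    return statistics.median(pe), statistics.median(abs(e) for e in pe)

# Toy concentration-time profile (h, mg/L) and toy LSS predictions vs. full AUCs.
auc = auc_trapezoid([0, 1, 2], [0, 2, 0])
bias, imprecision = bias_imprecision([11, 9], [10, 10])
```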

  20. q-Strategy spatial prisoner's dilemma game

    International Nuclear Information System (INIS)

    Li, Zhi-Hua; Fan, Hong-Yi; Xu, Wen-Long; Yang, Han-Xin

    2011-01-01

    We generalize the usual two-strategy prisoner's dilemma game to a multi-strategy game, in which the strategy variable s is allowed to take q different fractional values lying between 0 and 1. The fractional-valued strategies signify that individuals are not absolutely cooperative or defective; instead they can adopt intermediate strategies. Simulation results on 1D and 2D lattices show that, compared with the binary-strategy game, the multi-strategy game can sustain cooperation in more stringent defective environments. We give a comprehensive analysis of the distributions of the surviving strategies and compare pairwise the relative strengths and weaknesses of different strategies. It turns out that some intermediate strategies survive pure defection because they can reduce their exposure to exploitation while still benefiting from the spatial reciprocity effect. Our work may shed some light on intermediate behaviors in human society. - Highlights: → We propose a q-strategy prisoner's dilemma game with intermediate strategies. → The intermediate strategies characterize the extent of cooperation or defection. → We implement the model in a structured population. → The intermediate strategies can promote cooperation.
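    One natural way to extend the 2x2 payoffs to fractional strategies is bilinear interpolation between the four pure-strategy payoffs. The payoff parameters below (R, S, T, P) are illustrative assumptions, since the paper's exact parameterization is not given here:

```python
def payoff(x, y, R=1.0, S=0.0, T=1.3, P=0.1):
    """Payoff to a player at cooperation level x (0 = defect, 1 = cooperate)
    against an opponent at level y: bilinear extension of the 2x2 game with
    reward R, sucker's payoff S, temptation T, and punishment P."""
    return R * x * y + S * x * (1 - y) + T * (1 - x) * y + P * (1 - x) * (1 - y)

q = 5
strategies = [k / (q - 1) for k in range(q)]       # 0, 0.25, 0.5, 0.75, 1
table = {(x, y): payoff(x, y) for x in strategies for y in strategies}
```

    At x = y = 1 the payoff reduces to R and at x = y = 0 to P, so the pure strategies recover the ordinary two-strategy game.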

  1. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    Science.gov (United States)

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about twice as many peptides were detected as with the metagenomic database alone. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as with a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.

  2. Strategic Audit and Ownership Strategy

    Directory of Open Access Journals (Sweden)

    Mike Franz Wahl

    2015-10-01

    In the new global economy, ownership has become a central issue for organizational performance. Ownership strategy is where corporate governance meets strategic management. Highlighting a knowledge gap in the field of corporate governance, the author asks the central research question: "how to develop an ownership strategy?" The main purpose of this paper is to answer this original question and develop a better understanding of ownership strategies. Theoretically, there is evidence of a link between strategic audit and ownership strategy. Analysis of firm cases from Estonia supports the conclusion that the strategic audit is useful for systematically developing ownership strategies, which in turn could be a realistic alternative to complete contracts. The use of strategic audits gives the business owner an opportunity to analyze his own actions and behavior, learn, manage knowledge, and finally clearly express his will in the form of an ownership strategy.

  3. Simple PCR assays improve the sensitivity of HIV-1 subtype B drug resistance testing and allow linking of resistance mutations.

    Directory of Open Access Journals (Sweden)

    Jeffrey A Johnson

    BACKGROUND: The success of antiretroviral therapy is known to be compromised by drug-resistant HIV-1 at frequencies detectable by conventional bulk sequencing. Currently, there is a need to assess the clinical consequences of low-frequency drug-resistant variants occurring below the detection limit of conventional genotyping. Sensitive detection of drug-resistant subpopulations, however, requires simple and practical methods for routine testing. METHODOLOGY: We developed highly sensitive and simple real-time PCR assays for nine key drug resistance mutations and show that these tests overcome substantial sequence heterogeneity in HIV-1 clinical specimens. We specifically used early wildtype virus samples from the pre-antiretroviral-drug era to measure background reactivity and were able to define highly specific screening cut-offs that are up to 67-fold more sensitive than conventional genotyping. We also demonstrate that sequencing the mutation-specific PCR products provided a direct and novel strategy to further detect and link associated resistance mutations, allowing easy identification of multi-drug-resistant variants. Resistance mutation associations revealed in mutation-specific amplicon sequences were verified by clonal sequencing. SIGNIFICANCE: Combined, sensitive real-time PCR testing and mutation-specific amplicon sequencing provide a powerful and simple approach that allows for improved detection and evaluation of HIV-1 drug resistance mutations.

  4. Lifetime Prevalence of Suicide Attempts Among Sexual Minority Adults by Study Sampling Strategies: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Hottes, Travis Salway; Bogaert, Laura; Rhodes, Anne E; Brennan, David J; Gesink, Dionne

    2016-05-01

    Previous reviews have demonstrated a higher risk of suicide attempts for lesbian, gay, and bisexual (LGB) persons (sexual minorities) compared with heterosexual groups, but these were restricted to general population studies, thereby excluding individuals sampled through LGB community venues. Each sampling strategy, however, has particular methodological strengths and limitations. For instance, general population probability studies have defined sampling frames but are prone to information bias associated with underreporting of LGB identities. By contrast, LGB community surveys may support disclosure of sexuality but overrepresent individuals with strong LGB community attachment. To reassess the burden of suicide-related behavior among LGB adults, we directly compared estimates derived from population-based versus LGB community-based samples. In 2014, we searched the MEDLINE, EMBASE, PsycInfo, CINAHL, and Scopus databases for articles addressing suicide-related behavior (ideation, attempts) among sexual minorities. We selected quantitative studies of sexual minority adults conducted in nonclinical settings in the United States, Canada, Europe, Australia, and New Zealand. Random-effects meta-analysis and meta-regression assessed for a difference in prevalence of suicide-related behavior by sample type, adjusted for study- or sample-level variables, including context (year, country), methods (medium, response rate), and subgroup characteristics (age, gender, sexual minority construct). We examined residual heterogeneity using τ². We pooled 30 cross-sectional studies, including 21,201 sexual minority adults, generating the following lifetime prevalence estimates of suicide attempts: 4% (95% confidence interval [CI] = 3%, 5%) for heterosexual respondents to population surveys, 11% (95% CI = 8%, 15%) for LGB respondents to population surveys, and 20% (95% CI = 18%, 22%) for LGB respondents to community surveys (Figure 1). The difference in LGB estimates by sample

  5. Advanced pressure tube sampling tools

    International Nuclear Information System (INIS)

    Wittich, K.C.; King, J.M.

    2002-01-01

    Deuterium concentration is an important parameter that must be assessed to evaluate the fitness for service of CANDU pressure tubes. In-reactor pressure tube sampling allows accurate deuterium concentration assessment to be made without the expenses associated with fuel channel removal. This technology, which AECL has developed over the past fifteen years, has become the standard method for deuterium concentration assessment. AECL is developing a multi-head tool that would reduce in-reactor handling overhead by allowing one tool to sequentially sample at all four axial pressure tube locations before removal from the reactor. Four sets of independent cutting heads, like those on the existing sampling tools, facilitate this, incorporating proven technology demonstrated in the over 1400 in-reactor samples taken to date. The multi-head tool is delivered by AECL's Advanced Delivery Machine or other similar delivery machines. Further, AECL has developed an automated sample handling system that receives and processes the tool once it is out of the reactor. This system retrieves samples from the tool; dries, weighs, and places them in labelled vials; and then directs the vials into shielded shipping flasks. The multi-head wet sampling tool and the automated sample handling system are based on proven technology and offer continued savings and dose reduction to utilities in a competitive electricity market. (author)

  6. Atypical antipsychotics: trends in analysis and sample preparation of various biological samples.

    Science.gov (United States)

    Fragou, Domniki; Dotsika, Spyridoula; Sarafidou, Parthena; Samanidou, Victoria; Njau, Samuel; Kovatsi, Leda

    2012-05-01

    Atypical antipsychotics are increasingly popular and increasingly prescribed. In some countries, they can even be obtained over the counter, without a prescription, making their abuse quite easy. Although atypical antipsychotics are thought to be safer than typical antipsychotics, they still have severe side effects. Intoxications are not rare and some have a fatal outcome. Drug interactions involving atypical antipsychotics complicate patient management in clinical settings and the determination of the cause of death in fatalities. In view of the above, analytical strategies that can efficiently isolate atypical antipsychotics from a variety of biological samples and quantify them accurately, sensitively, and reliably are of utmost importance for both the clinical and the forensic toxicologist. In this review, we present and discuss novel analytical strategies developed from 2004 to the present day for the determination of atypical antipsychotics in various biological samples.

  7. Sample Results From The Interim Salt Disposition Program Macrobatch 7 Tank 21H Qualification Samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B.; Washington, A. L. II

    2013-08-08

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Macrobatch (Salt Batch) 7 for the Interim Salt Disposition Program (ISDP). An ARP and several ESS tests were also performed. This document reports characterization data on the samples from Tank 21H as well as simulated performance of ARP/MCU. No issues with the projected Salt Batch 7 strategy are identified, other than the presence of visible quantities of dark-colored solids. A demonstration of monosodium titanate (0.2 g/L) removal of strontium and actinides provided acceptable 4-hour average decontamination factors for Pu and Sr of 3.22 and 18.4, respectively. The four ESS tests also showed acceptable behavior, with cesium distribution ratio (D(Cs)) values of 15.96, 57.1, 58.6, and 65.6 for the MCU, cold blend, hot blend, and Next Generation Solvent (NGS) solvents, respectively. The predicted value for the MCU solvent was 13.2. Currently, there are no models that would allow a prediction of extraction behavior for the other three solvents. SRNL recommends that a model for predicting cesium-removal extraction behavior for the blended solvent and NGS be developed. While no outstanding issues were noted, the presence of solids in the samples should be investigated in future work. It is possible that the solids represent a potential reservoir of material (such as potassium) that could have an impact on MCU performance if it were to dissolve back into the feed solution. This salt batch is intended to be the first batch processed through MCU entirely using the new NGS-MCU solvent.

  8. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation, and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage for two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21, and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
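    A sketch of the semicurtailed stopping rule for one of the field-tested designs (n = 60, decision rule d = 33). This is one common formulation of semicurtailed LQAS, and the unbiased estimator corrections the paper derives are not reproduced here:

```python
def semicurtailed_classify(outcomes, n=60, d=33):
    """Classify a lot, stopping as soon as the decision is fixed:
    'accept' once d successes (vaccinated subjects) are observed,
    'reject' once n - d + 1 failures are observed.
    Returns the classification and the number of subjects actually sampled."""
    successes = failures = 0
    for k, outcome in enumerate(outcomes[:n], start=1):
        successes += outcome          # outcome: 1 = vaccinated, 0 = not
        failures += 1 - outcome
        if successes >= d:
            return "accept", k
        if failures >= n - d + 1:
            return "reject", k
    raise ValueError("need at least n outcomes to classify")

high = semicurtailed_classify([1] * 60)   # perfect coverage
low = semicurtailed_classify([0] * 60)    # zero coverage
```

    Estimating coverage naively from the curtailed counts is what introduces the bias the paper corrects: the sample size at stopping is itself random.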

  9. Non-Abelian strategies in quantum penny flip game

    Science.gov (United States)

    Mishima, Hiroaki

    2018-01-01

    In this paper, we formulate and analyze generalizations of the quantum penny flip game. In the penny flip game, one coin has two states, heads or tails, and two players apply alternating operations on the coin. In the original Meyer game, the first player is allowed to use quantum (i.e., non-commutative) operations, but the second player is still only allowed to use classical (i.e., commutative) operations. In our generalized games, both players are allowed to use non-commutative operations, with the second player being partially restricted in what operators they use. We show that even if the second player is allowed to use "phase-variable" operations, which are non-Abelian in general, the first player still has winning strategies. Furthermore, we show that even when the second player is allowed to choose one from two or more elements of the group U(2), the second player has winning strategies under certain conditions. These results suggest that there is often a method for restoring the quantum state disturbed by another agent.
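    The original Meyer game the abstract builds on can be checked directly: the first player applies a Hadamard before and after the second player's classical move, so the coin returns to heads with certainty either way. A minimal sketch (real amplitudes suffice here):

```python
import math

def apply(u, state):
    """Apply a 2x2 real unitary to the coin state [amp_heads, amp_tails]."""
    return [u[0][0] * state[0] + u[0][1] * state[1],
            u[1][0] * state[0] + u[1][1] * state[1]]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]    # Hadamard: player 1's quantum move
I2 = [[1, 0], [0, 1]]    # player 2 leaves the coin
X = [[0, 1], [1, 0]]     # player 2 flips the coin

# Meyer's winning strategy: H, then player 2's classical move, then H.
probs = []
for p2_move in (I2, X):
    state = apply(H, apply(p2_move, apply(H, [1.0, 0.0])))  # coin starts heads-up
    probs.append(state[0] ** 2)      # probability the coin shows heads
```

    After the first Hadamard the coin sits in an equal superposition that both classical moves leave unchanged, which is exactly the symmetry the non-Abelian generalizations in this paper relax.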

  10. Click strategies for single-molecule protein fluorescence.

    Science.gov (United States)

    Milles, Sigrid; Tyagi, Swati; Banterle, Niccolò; Koehler, Christine; VanDelinder, Virginia; Plass, Tilman; Neal, Adrian P; Lemke, Edward A

    2012-03-21

    Single-molecule methods have matured into central tools for studies in biology. Foerster resonance energy transfer (FRET) techniques, in particular, have been widely applied to study biomolecular structure and dynamics. The major bottleneck for a facile and general application of these studies arises from the need to label biological samples site-specifically with suitable fluorescent dyes. In this work, we present an optimized strategy combining click chemistry and the genetic encoding of unnatural amino acids (UAAs) to overcome this limitation for proteins. We performed a systematic study with a variety of clickable UAAs and explored their potential for high-resolution single-molecule FRET (smFRET). We determined all parameters that are essential for successful single-molecule studies, such as accessibility of the probes, expression yield of proteins, and quantitative labeling. Our multiparameter fluorescence analysis allowed us to gain new insights into the effects and photophysical properties of fluorescent dyes linked to various UAAs for smFRET measurements. This led us to determine that, from the extended tool set that we now present, genetically encoding propargyllysine has major advantages for state-of-the-art measurements compared to other UAAs. Using this optimized system, we present a biocompatible one-step dual-labeling strategy of the regulatory protein RanBP3 with full labeling position freedom. Our technique allowed us then to determine that the region encompassing two FxFG repeat sequences adopts a disordered but collapsed state. RanBP3 serves here as a prototypical protein that, due to its multiple cysteines, size, and partially disordered structure, is not readily accessible to any of the typical structure determination techniques such as smFRET, NMR, and X-ray crystallography.

  11. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution.

    Science.gov (United States)

    Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme

    2013-07-01

    The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most of the current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and more importantly do not allow the user to control explicitly the nucleotide distribution such as the GC-content in their sequences. However, the latter is an important criterion for large-scale applications as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable or better than local search methods), seedless (we remove the bias of the seed in local search heuristics) and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology overcomes both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
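The core idea of weighted sampling toward a target nucleotide distribution can be sketched in a few lines. This is an illustrative toy, not IncaRNAtion's algorithm (which also conditions on the target secondary structure); the even split of probability mass within the GC and AU groups, and the function name, are assumptions:

```python
import random

def sample_sequence(length, gc_target, rng=None):
    """Draw an RNA sequence whose *expected* GC-content equals gc_target
    by reweighting the four bases (illustrative sketch only)."""
    rng = rng or random.Random(42)
    # Split the target probability mass evenly inside each base group.
    bases = ("G", "C", "A", "U")
    probs = (gc_target / 2, gc_target / 2,
             (1 - gc_target) / 2, (1 - gc_target) / 2)
    return "".join(rng.choices(bases, weights=probs, k=length))

seq = sample_sequence(1000, 0.6)
gc_content = sum(seq.count(b) for b in "GC") / len(seq)
```

For long sequences the realized GC-content concentrates tightly around the target; structure constraints would then be enforced on top of this base distribution.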

  12. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  13. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample.

    Science.gov (United States)

    Rogal, Shari S; Yakovchenko, Vera; Waltz, Thomas J; Powell, Byron J; Kirchner, JoAnn E; Proctor, Enola K; Gonzalez, Rachel; Park, Angela; Ross, David; Morgan, Timothy R; Chartier, Maggie; Chinman, Matthew J

    2017-05-11

Hepatitis C virus (HCV) is a common and highly morbid illness. New medications with much higher cure rates have become the new evidence-based practice in the field. Understanding the implementation of these new medications nationally provides an opportunity to advance the understanding of the role of implementation strategies in clinical outcomes on a large scale. The Expert Recommendations for Implementing Change (ERIC) study defined discrete implementation strategies and clustered these strategies into groups. The present evaluation assessed the use of these strategies and clusters in the context of HCV treatment across the US Department of Veterans Affairs (VA), Veterans Health Administration, the largest provider of HCV care nationally. A 73-item survey was developed and sent electronically to all VA sites treating HCV, to assess whether or not a site used each ERIC-defined implementation strategy related to employing the new HCV medication in 2014. VA national data on the number of Veterans starting the new HCV medications at each site were collected. The associations between treatment starts and the number and type of implementation strategies were assessed. A total of 80 sites (62%) responded. Respondents endorsed an average of 25 ± 14 strategies. The number of treatment starts was positively correlated with the total number of strategies endorsed (r = 0.43). Sites in the highest quartile of treatment starts endorsed significantly more strategies, compared to 15 strategies in the lowest quartile, and there were significant differences in the types of strategies endorsed by sites in the highest and lowest quartiles. Four of the 10 top strategies for sites in the top quartile had significant correlations with treatment starts, compared to only 1 of the 10 top strategies in the bottom-quartile sites. Overall, only 3 of the top 15 most frequently used strategies were associated with treatment. These results suggest that sites that used a greater number of implementation

  14. What parents say about the allowance: Function of the allowance for parents of different economic incomes

    Directory of Open Access Journals (Sweden)

    Irani Lauer Lellis

    2012-06-01

Full Text Available The practice of giving an allowance is used by parents in different parts of the world and can contribute to the economic education of children. This study investigated the purposes of the allowance with 32 parents of varying incomes. We used the focus group technique and the Alceste software to analyze the data. The results yielded two classes related to the use of the allowance. These classes covered aspects of the allowance's role in socialization and education: it serves as an instrument of reward, but sometimes encourages bad habits in children. Parents' justifications concerning the amount of money to give their children, and when to stop giving an allowance, were also highlighted.   Keywords: allowance; economic socialization; parenting practices.

  15. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    Science.gov (United States)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  16. Diagnostic strategies using myoglobin measurement in myocardial infarction.

    Science.gov (United States)

    Plebani, M; Zaninotto, M

    1998-04-06

Myoglobin, a low molecular-weight heme protein (17,800 Da) present in both cardiac and skeletal muscle, is an old test with new perspectives. The advantages and disadvantages of myoglobin determination are well known. Myoglobin is the earliest known, commercially available biochemical marker of acute myocardial infarction (AMI), and its rapid kinetics make it a good early marker of reperfusion. However, since myoglobin is present in both skeletal and cardiac muscle, any damage to either muscle type results in its release into blood. Serum myoglobin levels are falsely elevated in conditions unrelated to AMI, such as skeletal muscle and neuromuscular disorders, renal failure, intramuscular injection, strenuous exercise, and after intake of certain toxins and drugs. New strategies for myoglobin measurement may resolve this limitation. These strategies include the combined measurement of myoglobin and either a skeletal-specific marker (carbonic anhydrase III) or a cardiac-specific marker (troponin I), as well as evaluation of myoglobin on serial samples. In particular, the diagnostic algorithm based on the combined measurement of myoglobin and troponin I, assuring a satisfactory analytical turnaround time, significantly improves the diagnostic efficiency of laboratory assessment of suspected AMI patients, allowing subsequent monitoring of coronary reperfusion.

  17. Sample-efficient Strategies for Learning in the Presence of Noise

    DEFF Research Database (Denmark)

    Cesa-Bianchi, N.; Dichterman, E.; Fischer, Paul

    1999-01-01

In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behavior of learning algorithms. We prove the first nontrivial sample complexity lower bound in this model by showing that order of ε/Δ² + d/Δ (up to logarithmic factors) examples are necessary for PAC learning any target class of {0,1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned with accuracy ε and malicious noise rate η ≥ ε/(1 + ε), irrespective of sample complexity). We also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d...

  18. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in material science, but there are only few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from material science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Rapid Sampling from Sealed Containers

    International Nuclear Information System (INIS)

    Johnston, R.G.; Garcia, A.R.E.; Martinez, R.K.; Baca, E.T.

    1999-01-01

    The authors have developed several different types of tools for sampling from sealed containers. These tools allow the user to rapidly drill into a closed container, extract a sample of its contents (gas, liquid, or free-flowing powder), and permanently reseal the point of entry. This is accomplished without exposing the user or the environment to the container contents, even while drilling. The entire process is completed in less than 15 seconds for a 55 gallon drum. Almost any kind of container can be sampled (regardless of the materials) with wall thicknesses up to 1.3 cm and internal pressures up to 8 atm. Samples can be taken from the top, sides, or bottom of a container. The sampling tools are inexpensive, small, and easy to use. They work with any battery-powered hand drill. This allows considerable safety, speed, flexibility, and maneuverability. The tools also permit the user to rapidly attach plumbing, a pressure relief valve, alarms, or other instrumentation to a container. Possible applications include drum venting, liquid transfer, container flushing, waste characterization, monitoring, sampling for archival or quality control purposes, emergency sampling by rapid response teams, counter-terrorism, non-proliferation and treaty verification, and use by law enforcement personnel during drug or environmental raids

  20. 49 CFR 266.11 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Allowable costs. 266.11 Section 266.11... TRANSPORTATION ACT § 266.11 Allowable costs. Allowable costs include only the following costs which are properly allocable to the work performed: Planning and program operation costs which are allowed under Federal...

  1. Strategy for fitting source strength and reconstruction procedure in radioactive particle tracking

    International Nuclear Information System (INIS)

    Mosorov, Volodymyr

    2015-01-01

The Radioactive Particle Tracking (RPT) technique is widely applied to study the dynamic properties of flows inside a reactor. Usually, a single radioactive particle that is neutrally buoyant with respect to the phase of interest is used as a tracer. The particle moves inside a 3D volume of interest, and its positions are determined by an array of scintillation detectors, which count the incoming photons. The particle position coordinates are calculated by a reconstruction procedure that solves a minimization problem between the measured counts and calibration data. Although previous studies have described the influence of specific factors on RPT resolution and sensitivity, the question of how to choose an appropriate source strength and reconstruction procedure for a given RPT setup has remained unsolved. This work describes and applies an original strategy for fitting both the source strength and the sampling time interval to a specified RPT setup to guarantee a required measurement accuracy. Additionally, the measurement accuracy of an RPT setup can be significantly increased by changing the reconstruction procedure. The results of the simulations, based on the Monte Carlo approach, demonstrate that the proposed strategy allows for the successful implementation of the As Low As Reasonably Achievable (ALARA) principle when designing the RPT setup. The limitations and drawbacks of the proposed procedure are also presented. - Highlights: • We develop an original strategy for fitting source strength and measurement time interval in the radioactive particle tracking (RPT) technique. • The proposed strategy allows for successful implementation of the ALARA (As Low As Reasonably Achievable) principle in designing an RPT setup. • The measurement accuracy of an RPT setup can be significantly increased by improving the reconstruction procedure. • The algorithm can be applied to monitor the motion of the radioactive tracer in a reactor
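The reconstruction step described above, minimizing the mismatch between measured detector counts and a calibration model, can be sketched as a grid search. Everything below (the 2D detector geometry, the idealized inverse-square count model, the source strength) is an illustrative assumption, not the paper's setup:

```python
# Hypothetical 2D setup: four detectors at the corners of a unit square.
DETECTORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def expected_counts(pos, strength=1.0e4):
    """Idealized calibration model: counts fall off with squared distance
    from the tracer (a real RPT setup uses measured calibration maps)."""
    return [strength / (0.01 + (pos[0] - dx) ** 2 + (pos[1] - dy) ** 2)
            for dx, dy in DETECTORS]

def reconstruct(measured, step=0.01):
    """Grid search for the tracer position minimizing the squared mismatch
    between measured and modeled counts -- the minimization step of the
    reconstruction procedure described in the abstract."""
    best_pos, best_err = (0.0, 0.0), float("inf")
    n = int(round(1.0 / step))
    for i in range(n + 1):
        for j in range(n + 1):
            pos = (i * step, j * step)
            err = sum((m - e) ** 2
                      for m, e in zip(measured, expected_counts(pos)))
            if err < best_err:
                best_pos, best_err = pos, err
    return best_pos

true_pos = (0.3, 0.7)
estimate = reconstruct(expected_counts(true_pos))
```

With noise-free synthetic counts the grid search recovers the tracer position to within one grid step; the paper's point is that with a real, Poisson-noisy source, the achievable accuracy depends jointly on source strength and sampling interval.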

  2. 34 CFR 304.21 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Allowable costs. 304.21 Section 304.21 Education... Grantee § 304.21 Allowable costs. In addition to the allowable costs established in the Education... allowable expenditures by projects funded under the program: (a) Cost of attendance, as defined in Title IV...

  3. 50 CFR 80.15 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Allowable costs. 80.15 Section 80.15... WILDLIFE RESTORATION AND DINGELL-JOHNSON SPORT FISH RESTORATION ACTS § 80.15 Allowable costs. (a) What are allowable costs? Allowable costs are costs that are necessary and reasonable for accomplishment of approved...

  4. Automated processing of forensic casework samples using robotic workstations equipped with nondisposable tips: contamination prevention.

    Science.gov (United States)

    Frégeau, Chantal J; Lett, C Marc; Elliott, Jim; Yensen, Craig; Fourney, Ron M

    2008-05-01

    An automated process has been developed for the analysis of forensic casework samples using TECAN Genesis RSP 150/8 or Freedom EVO liquid handling workstations equipped exclusively with nondisposable tips. Robot tip cleaning routines have been incorporated strategically within the DNA extraction process as well as at the end of each session. Alternative options were examined for cleaning the tips and different strategies were employed to verify cross-contamination. A 2% sodium hypochlorite wash (1/5th dilution of the 10.8% commercial bleach stock) proved to be the best overall approach for preventing cross-contamination of samples processed using our automated protocol. The bleach wash steps do not adversely impact the short tandem repeat (STR) profiles developed from DNA extracted robotically and allow for major cost savings through the implementation of fixed tips. We have demonstrated that robotic workstations equipped with fixed pipette tips can be used with confidence with properly designed tip washing routines to process casework samples using an adapted magnetic bead extraction protocol.

  5. Multiscale sampling model for motion integration.

    Science.gov (United States)

    Sherbakov, Lena; Yazdanbakhsh, Arash

    2013-09-30

Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within lamina and across multiple visual areas.

  6. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (trademarked)... Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous... into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  7. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will reflect optimally the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree, a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.

  8. 2 CFR 215.27 - Allowable costs.

    Science.gov (United States)

    2010-01-01

    ... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Allowable costs. 215.27 Section 215.27... § 215.27 Allowable costs. For each kind of recipient, there is a set of Federal principles for determining allowable costs. Allowability of costs shall be determined in accordance with the cost principles...

  9. 46 CFR 154.421 - Allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.421 Section 154.421 Shipping COAST... § 154.421 Allowable stress. The allowable stress for the integral tank structure must meet the American Bureau of Shipping's allowable stress for the vessel's hull published in “Rules for Building and Classing...

  10. Searching CLEF-IP by Strategy

    NARCIS (Netherlands)

    W. Alink (Wouter); R. Cornacchia (Roberto); A.P. de Vries (Arjen)

    2010-01-01

Tasks performed by intellectual property specialists are often ad hoc, and continuously require new approaches to search a collection of documents. We therefore investigate the benefits of a visual 'search strategy builder' to allow IP search experts to express their approach to

  11. Biological sampling for marine radioactivity monitoring

    International Nuclear Information System (INIS)

    Fowler, S.W.

    1997-01-01

    Strategies and methodologies for using marine organisms to monitor radioactivity in marine waters are presented. When the criteria for monitoring radioactivity is to determine routes of radionuclide transfer to man, the ''critical pathway'' approach is often applied. Alternatively, where information on ambient radionuclide levels and distributions is sought, the approach of selecting marine organisms as ''bioindicators'' of radioactivity is generally used. Whichever approach is applied, a great deal of knowledge is required about the physiology and ecology of the specific organism chosen. In addition, several criteria for qualifying as a bioindicator species are discussed; e.g., it must be a sedentary species which reflects the ambient radionuclide concentration at a given site, sufficiently long-lived to allow long-term temporal sampling, widely distributed to allow spatial comparisons, able to bioconcentrate the radionuclide to a relatively high degree, while showing a simple correlation between radionuclide content in its tissues with that in the surrounding waters. Useful hints on the appropriate species to use and the best way to collect and prepare organisms for radioanalysis are also given. It is concluded that benthic algae and bivalve molluscs generally offer the greatest potential for use as a ''bioindicator'' species in radionuclide biomonitoring programmes. Where knowledge on contribution to radiological dose is required, specific edible marine species should be the organisms of choice; however, both purposes can be served when the edible species chosen through critical pathway analysis is also an excellent bioaccumulator of the radionuclide of interest. (author)

  12. The Effect of Summarizing and Presentation Strategies

    Directory of Open Access Journals (Sweden)

    Hooshang Khoshsima

    2014-07-01

    Full Text Available The present study aimed to find out the effect of summarizing and presentation strategies on Iranian intermediate EFL learners’ reading comprehension. 61 students were selected and divided into two experimental and control groups. The homogeneity of their proficiency level was established using a TOEFL proficiency test. The experimental group used the two strategies three sessions each week for twenty weeks, while the control group was not trained on the strategies. After every two-week instruction, an immediate posttest was administered. At the end of the study, a post-test was administered to both groups. Paired-sample t-test and Independent sample t-test were used for analysis. The results of the study revealed that summarizing and presentation strategies had significant effect on promoting reading comprehension of intermediate EFL learners. It also indicated that the presentation strategy was significantly more effective on students’ reading comprehension.

  13. Succinct Sampling from Discrete Distributions

    DEFF Research Database (Denmark)

    Bringmann, Karl; Larsen, Kasper Green

    2013-01-01

    We revisit the classic problem of sampling from a discrete distribution: Given n non-negative w-bit integers x_1,...,x_n, the task is to build a data structure that allows sampling i with probability proportional to x_i. The classic solution is Walker's alias method that takes, when implemented...
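Walker's alias method mentioned in the abstract can be implemented compactly: after an O(n) table build, each draw costs O(1). This is the standard textbook construction, not the succinct-space variant the paper develops:

```python
import random

def build_alias(weights):
    """Preprocess non-negative weights into Walker's alias tables."""
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]   # scaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                          # column s tops up from l
        prob[l] -= 1.0 - prob[s]              # l donates the missing mass
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def alias_sample(prob, alias, rng):
    """O(1) draw: pick a column uniformly, then flip a biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

prob, alias = build_alias([1, 2, 3, 4])
rng = random.Random(0)
counts = [0, 0, 0, 0]
for _ in range(100_000):
    counts[alias_sample(prob, alias, rng)] += 1
```

For weights [1, 2, 3, 4] the empirical frequencies converge to 0.1, 0.2, 0.3 and 0.4; the paper's contribution is doing this with near-optimal space for w-bit integer weights.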

  14. 76 FR 32340 - Federal Travel Regulation; Temporary Duty (TDY) Travel Allowances (Taxes); Relocation Allowances...

    Science.gov (United States)

    2011-06-06

    ... reflection of the actual tax impact on the employee. Therefore, this proposed rule offers the one-year RITA... to estimate the additional income tax liability that you incur as a result of relocation benefits and... Allowances (Taxes); Relocation Allowances (Taxes) AGENCY: Office of Governmentwide Policy (OGP), General...

  15. 46 CFR 154.440 - Allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.440 Section 154.440 Shipping COAST... Tank Type A § 154.440 Allowable stress. (a) The allowable stresses for an independent tank type A must... Commandant (CG-522). (b) A greater allowable stress than required in paragraph (a)(1) of this section may be...

  16. BUSINESS STRATEGY, STRUCTURE AND ORGANIZATIONAL PERFORMANCE

    OpenAIRE

    CORINA GAVREA; ROXANA STEGEREAN; LIVIU ILIES

    2012-01-01

    Organizational structure and competitive strategy play an important role in gaining competitive advantage and improving organizational performance. The objective of this paper is to examine how organizational structure and strategy affects firm performance within a sample of 92 Romanian firms. The data used in this study was collected through a questionnaire used to quantify the three variables of interest: organizational performance, strategy and structure.

  17. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having their own strengths and limitations. However, not only the marker but also the methodology may vary in many ways, including the use of urinary or plasma clearance and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight or more blood samples. We collected the results of 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation with the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method was concordant within 10% of the multiple-sample method for 66.4%, 83.6%, 91.4% and 96.0% of measurements at the 120, 180, 240 and ≥300 min time points, respectively. Concordance was poorer at lower GFR levels, a trend that parallels increasing age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
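The multiple-sample slope-intercept calculation referred to above can be sketched in a few lines: a mono-exponential fit to the late plasma samples, followed by a correction for the neglected fast compartment. This is a simplified illustration; the Bröchner-Mortensen coefficients below are the widely cited adult values, and the units and synthetic data are assumptions, not the paper's protocol:

```python
import math

def slope_intercept_gfr(times_min, concs, dose):
    """Fit C(t) = C0 * exp(-k t) to the late samples by linear regression
    of ln(C) on t, take clearance = dose * k / C0, then apply the
    Broechner-Mortensen correction (coefficients assumed, see lead-in)."""
    n = len(times_min)
    xbar = sum(times_min) / n
    ybar = sum(math.log(c) for c in concs) / n
    sxy = sum((t - xbar) * (math.log(c) - ybar)
              for t, c in zip(times_min, concs))
    sxx = sum((t - xbar) ** 2 for t in times_min)
    k = -sxy / sxx                      # elimination rate constant (1/min)
    c0 = math.exp(ybar + k * xbar)      # extrapolated intercept
    cl = dose * k / c0                  # slope-intercept clearance
    return 0.990778 * cl - 0.001218 * cl ** 2

# Synthetic 3-point curve: k = 0.01 /min, C0 = 100, dose = 50000
times = [120, 180, 240]
concs = [100 * math.exp(-0.01 * t) for t in times]
gfr = slope_intercept_gfr(times, concs, 50_000)   # ≈ 4.92
```

The single-sample Jacobsson alternative instead estimates GFR from one concentration plus an assumed distribution volume, which is why it degrades at low GFR where the extrapolation is least reliable.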

  18. Deriving allowable properties of lumber : a practical guide for interpretation of ASTM standards

    Science.gov (United States)

    Alan Bendtsen; William L. Galligan

    1978-01-01

    The ASTM standards for establishing clear wood mechanical properties and for deriving structural grades and related allowable properties for visually graded lumber can be confusing and difficult for the uninitiated to interpret. This report provides a practical guide to using these standards for individuals not familiar with their application. Sample stress...

  19. Radar Doppler Processing with Nonuniform Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing. This is for the convenience of the processing. More recent performance enhancements in processor capability allow optimally processing nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
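    The nonuniform processing described above can be illustrated with a direct (nonuniform-DFT) periodogram: evaluating the matched filter on a candidate frequency grid works for arbitrary pulse timings, and irregular spacing removes the exact grating-lobe ambiguities of uniform sampling. The signal model, pulse timings and frequency grid below are illustrative assumptions, not taken from the report.

```python
import cmath
import math
import random

def doppler_periodogram(times, samples, freqs):
    """Matched-filter (nonuniform DFT) periodogram: |sum_n s_n e^{-j2*pi*f*t_n}|
    evaluated on a candidate frequency grid, valid for arbitrary t_n."""
    return [abs(sum(s * cmath.exp(-2j * math.pi * f * t)
                    for t, s in zip(times, samples))) for f in freqs]

random.seed(1)
f_true = 40.0                                        # Hz, unknown Doppler shift
times = sorted(random.random() for _ in range(200))  # nonuniform sample instants (s)
samples = [cmath.exp(2j * math.pi * f_true * t) for t in times]
freqs = [0.5 * k for k in range(241)]                # 0-120 Hz search grid
spectrum = doppler_periodogram(times, samples, freqs)
f_hat = freqs[spectrum.index(max(spectrum))]         # peak location, near f_true
```

    With random timings the peak at the true frequency is unambiguous, whereas uniform sampling at 200 Hz would alias any frequency offset by a multiple of the PRF onto the same bin.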

  20. Intelligent sampling for the measurement of structured surfaces

    International Nuclear Information System (INIS)

    Wang, J; Jiang, X; Blunt, L A; Scott, P J; Leach, R K

    2012-01-01

    Uniform sampling in metrology has known drawbacks such as coherent spectral aliasing and a lack of efficiency in terms of measuring time and data storage. The requirement for intelligent sampling strategies has been outlined over recent years, particularly where the measurement of structured surfaces is concerned. Most of the present research on intelligent sampling has focused on dimensional metrology using coordinate-measuring machines with little reported on the area of surface metrology. In the research reported here, potential intelligent sampling strategies for surface topography measurement of structured surfaces are investigated by using numerical simulation and experimental verification. The methods include the jittered uniform method, low-discrepancy pattern sampling and several adaptive methods which originate from computer graphics, coordinate metrology and previous research by the authors. By combining the use of advanced reconstruction methods and feature-based characterization techniques, the measurement performance of the sampling methods is studied using case studies. The advantages, stability and feasibility of these techniques for practical measurements are discussed. (paper)
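    Of the strategies surveyed above, the jittered uniform method is the simplest to sketch: one sample per grid cell, randomly displaced within the cell. The 1D version below is an illustrative sketch (the paper applies such strategies to areal surface topography measurement).

```python
import random

def jittered_uniform_1d(n, lo=0.0, hi=1.0, rng=random):
    """One sample per equal-width cell, uniformly jittered within its cell:
    keeps the even coverage of a regular grid while breaking the periodicity
    that causes coherent spectral aliasing."""
    w = (hi - lo) / n
    return [lo + (i + rng.random()) * w for i in range(n)]

random.seed(0)
pts = jittered_uniform_1d(16)   # one point in each of the 16 cells [i/16, (i+1)/16)
```

    Extending to 2D means jittering within each grid square independently; low-discrepancy patterns replace the random jitter with quasi-random offsets.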

  1. Crucial role of strategy updating for coexistence of strategies in interaction networks

    Science.gov (United States)

    Zhang, Jianlei; Zhang, Chunyan; Cao, Ming; Weissing, Franz J.

    2015-04-01

    Network models are useful tools for studying the dynamics of social interactions in a structured population. After a round of interactions with the players in their local neighborhood, players update their strategy based on the comparison of their own payoff with the payoff of one of their neighbors. Here we show that the assumptions made on strategy updating are of crucial importance for the strategy dynamics. In the first step, we demonstrate that seemingly small deviations from the standard assumptions on updating have major implications for the evolutionary outcome of two cooperation games: cooperation can more easily persist in a Prisoner's Dilemma game, while it can go more easily extinct in a Snowdrift game. To explain these outcomes, we develop a general model for the updating of states in a network that allows us to derive conditions for the steady-state coexistence of states (or strategies). The analysis reveals that coexistence crucially depends on the number of agents consulted for updating. We conclude that updating rules are as important for evolution on a network as network structure and the nature of the interaction.

  2. Sample preparation combined with electroanalysis to improve simultaneous determination of antibiotics in animal derived food samples.

    Science.gov (United States)

    da Silva, Wesley Pereira; de Oliveira, Luiz Henrique; Santos, André Luiz Dos; Ferreira, Valdir Souza; Trindade, Magno Aparecido Gonçalves

    2018-06-01

    A procedure based on liquid-liquid extraction (LLE) and phase separation using magnetically stirred salt-induced high-temperature liquid-liquid extraction (PS-MSSI-HT-LLE) was developed to extract and pre-concentrate ciprofloxacin (CIPRO) and enrofloxacin (ENRO) from animal food samples before electroanalysis. Firstly, simple LLE was used to extract the fluoroquinolones (FQs) from animal food samples, in which dilution was performed to reduce interference effects to below a tolerable threshold. Then, adapted PS-MSSI-HT-LLE protocols allowed re-extraction and further pre-concentration of target analytes in the diluted acid samples for simultaneous electrochemical quantification at low concentration levels. To improve peak separation in simultaneous detection, a baseline-corrected second-order derivative approach was applied. These approaches allowed quantification of target FQs from animal food samples spiked at levels of 0.80 to 2.00 µmol L-1 in chicken meat, with recovery values always higher than 80.5%, as well as in milk samples spiked at 4.00 µmol L-1, with recovery values close to 70.0%. Copyright © 2018 Elsevier Ltd. All rights reserved.
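    A second-order derivative of the kind used above to improve peak separation can be approximated numerically by central differences: the second derivative of any locally linear baseline is zero, so broad drift is suppressed while overlapping peaks sharpen. This is a minimal numerical sketch, not the authors' exact processing chain.

```python
def second_derivative(signal, h=1.0):
    """Central-difference second derivative of an evenly spaced trace.
    A locally linear baseline has zero second derivative, so broad drift
    is removed while peak features become narrower and easier to separate."""
    return [(signal[i - 1] - 2 * signal[i] + signal[i + 1]) / h ** 2
            for i in range(1, len(signal) - 1)]

baseline_only = [0.1 * i for i in range(9)]    # pure linear drift, no peaks
flat = second_derivative(baseline_only)        # all values essentially zero
```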

  3. The Ambiguity of Weeping. Baroque and Mannerist Discourses in Haynes' Far from Heaven and Sirk's All That Heaven Allows

    NARCIS (Netherlands)

    Post, J.A.

    2012-01-01

    Although Douglas Sirk’s All That Heaven Allows (1954) and Todd Haynes’ Far from Heaven (2002) are both characterized as melodramas, they address their spectators differently. The divergent (emotional) reactions towards both films are the effect of different rhetorical strategies: the first can be

  4. Stable isotope labeling strategy based on coding theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori, E-mail: kigawa@riken.jp [RIKEN Quantitative Biology Center (QBiC), Laboratory for Biomolecular Structure and Dynamics (Japan)

    2015-10-15

    We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labelled samples by increasing information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to combine with information technologies, such as error detection by check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low-solubility, and in living cells.

  5. Stable isotope labeling strategy based on coding theory

    International Nuclear Information System (INIS)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori

    2015-01-01

    We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labelled samples by increasing information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to combine with information technologies, such as error detection by check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low-solubility, and in living cells
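    The "encoding and decoding" idea in records 4 and 5 can be made concrete: with three labeled samples and three labeling levels there are 27 codewords, enough to assign a distinct code to each of the 19 non-proline amino acids, and noisy measured ratios can be decoded by nearest codeword. The level set {0, 0.5, 1} and the code assignment below are hypothetical illustrations, not the labeling scheme actually used in the paper.

```python
import itertools

AMINO_ACIDS = ["Ala", "Arg", "Asn", "Asp", "Cys", "Gln", "Glu", "Gly", "His",
               "Ile", "Leu", "Lys", "Met", "Phe", "Ser", "Thr", "Trp", "Tyr",
               "Val"]                     # the 19 non-proline amino acids

LEVELS = (0.0, 0.5, 1.0)                  # hypothetical labeling ratios per sample
# 3 samples x 3 levels -> 27 codewords, enough to encode 19 amino acid types
CODEBOOK = dict(zip(AMINO_ACIDS, itertools.product(LEVELS, repeat=3)))

def decode(measured_ratios):
    """Nearest-codeword decoding of the labeling ratios measured for one
    residue across the three samples (squared Euclidean distance)."""
    return min(CODEBOOK,
               key=lambda aa: sum((m - c) ** 2
                                  for m, c in zip(measured_ratios, CODEBOOK[aa])))
```

    Nearest-codeword decoding is what gives the scheme its robustness to imperfect labeling ratios; adding a check-digit sample, as the abstract suggests, would further allow error detection.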

  6. Spatial scan statistics to assess sampling strategy of antimicrobial resistance monitoring programme

    DEFF Research Database (Denmark)

    Vieira, Antonio; Houe, Hans; Wegener, Henrik Caspar

    2009-01-01

    The collection and analysis of data on antimicrobial resistance in human and animal populations are important for establishing a baseline of the occurrence of resistance and for determining trends over time. In animals, targeted monitoring with a stratified sampling plan is normally used. However...... sampled by the Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), by identifying spatial clusters of samples and detecting areas with significantly high or low sampling rates. These analyses were performed for each year and for the total 5-year study period for all...... by an antimicrobial monitoring program....

  7. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number IGSN, a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible.

  8. Analytical strategies for uranium determination in natural water and industrial effluents samples; Estrategias analiticas para determinacao de uranio em amostras de aguas e efluentes industriais

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Juracir Silva

    2011-07-01

    The work was developed under the project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and effluents of a uranium mine. The first one was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES) performed using factorial and Doehlert designs involving the factors: acid concentration, radio frequency power and nebuliser gas flow rate. Five emission lines were simultaneously studied (namely: 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO{sub 3}, CH{sub 3}COOH or HCl. The determinations in HNO{sub 3} medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for the five emission lines. Calcium caused interference in the emission intensity for some lines and iron did not interfere (at least up to 10 mg L{sup -1}) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with limit of quantification of 30 {mu}g L{sup -1} and precision expressed as RSD lower than 2.2% for uranium concentrations of either 500 and 1000 {mu}g L{sup -1}. In the second one, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing and

  9. Strategy for 90% autoverification of clinical chemistry and immunoassay test results using six sigma process improvement.

    Science.gov (United States)

    Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David

    2018-06-01

    Six Sigma involves a structured process improvement strategy that places processes on a pathway to continued improvement. The data presented here summarizes a project that took three clinical laboratories from autoverification processes that allowed about 40% to 60% of tests to be auto-verified to more than 90% of tests and samples auto-verified. The project schedule, metrics and targets, a description of the previous system and detailed information on the changes made to achieve greater than 90% auto-verification is presented for this Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) process improvement project.
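    A rule set of the kind such autoverification systems encode (range check, instrument flags, delta check against the patient's previous result) might be sketched as below; the rule names and thresholds are hypothetical, not those of the laboratories in the study.

```python
def autoverify(result, analytical_range, prev_result=None, delta_limit=None, flags=()):
    """Release a result without human review only if it lies inside the
    analytical range, carries no instrument flags, and passes an optional
    delta check against the patient's previous value."""
    lo, hi = analytical_range
    if flags:
        return False, "held: instrument flag"
    if not lo <= result <= hi:
        return False, "held: outside analytical range"
    if (prev_result is not None and delta_limit is not None
            and abs(result - prev_result) > delta_limit):
        return False, "held: delta check failed"
    return True, "auto-verified"
```

    In a real middleware configuration each analyte would carry its own range and delta limit, and held results would be routed to a technologist review queue.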

  10. A Sample-Based Forest Monitoring Strategy Using Landsat, AVHRR and MODIS Data to Estimate Gross Forest Cover Loss in Malaysia between 1990 and 2005

    Directory of Open Access Journals (Sweden)

    Peter Potapov

    2013-04-01

    Full Text Available Insular Southeast Asia is a hotspot of humid tropical forest cover loss. A sample-based monitoring approach quantifying forest cover loss from Landsat imagery was implemented to estimate gross forest cover loss for two eras, 1990–2000 and 2000–2005. For each time interval, a probability sample of 18.5 km × 18.5 km blocks was selected, and pairs of Landsat images acquired per sample block were interpreted to quantify forest cover area and gross forest cover loss. Stratified random sampling was implemented for 2000–2005 with MODIS-derived forest cover loss used to define the strata. A probability proportional to x (πpx) design was implemented for 1990–2000 with AVHRR-derived forest cover loss used as the x variable to increase the likelihood of including forest loss area in the sample. The estimated annual gross forest cover loss for Malaysia was 0.43 Mha/yr (SE = 0.04) during 1990–2000 and 0.64 Mha/yr (SE = 0.055) during 2000–2005. Our use of the πpx sampling design represents a first practical trial of this design for sampling satellite imagery. Although the design performed adequately in this study, a thorough comparative investigation of the πpx design relative to other sampling strategies is needed before general design recommendations can be put forth.
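    The πpx (probability proportional to x) idea above can be sketched with a with-replacement PPS draw and the Hansen-Hurwitz estimator: when the study variable y is nearly proportional to the auxiliary x, the terms y_i/p_i are nearly constant and the estimator's variance collapses, which is exactly why coarse-resolution AVHRR loss was used as the size variable. The population below is synthetic and purely illustrative.

```python
import bisect
import random

def pps_sample(sizes, n, rng=random):
    """Draw n units with replacement, selection probability proportional to
    an auxiliary size variable x (e.g. prior coarse-resolution loss area)."""
    total = float(sum(sizes))
    probs = [x / total for x in sizes]
    cum, acc = [], 0.0
    for p in probs:
        acc += p
        cum.append(acc)
    picks = [min(bisect.bisect_left(cum, rng.random()), len(cum) - 1)
             for _ in range(n)]
    return picks, probs

def hansen_hurwitz(y, picks, probs):
    """Unbiased estimator of the population total under PPS with replacement."""
    return sum(y[i] / probs[i] for i in picks) / len(picks)

random.seed(42)
sizes = list(range(1, 51))        # auxiliary variable x for 50 blocks
y = [2 * x for x in sizes]        # study variable, here exactly proportional to x
picks, probs = pps_sample(sizes, 100)
estimate = hansen_hurwitz(y, picks, probs)   # equals sum(y) = 2550 when y is proportional to x
```

    With y exactly proportional to x every draw contributes the same y_i/p_i, so the estimate is exact regardless of which blocks are drawn; in practice y is only correlated with x and the design reduces, rather than eliminates, the variance.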

  11. Dual-signal amplification strategy for ultrasensitive chemiluminescence detection of PDGF-BB in capillary electrophoresis.

    Science.gov (United States)

    Cao, Jun-Tao; Wang, Hui; Ren, Shu-Wei; Chen, Yong-Hong; Liu, Yan-Ming

    2015-12-01

    Many efforts have been made toward the achievement of high sensitivity in capillary electrophoresis coupled with chemiluminescence detection (CE-CL). This work describes a novel dual-signal amplification strategy for highly specific and ultrasensitive CL detection of human platelet-derived growth factor-BB (PDGF-BB) using both aptamer and horseradish peroxidase (HRP) modified gold nanoparticles (HRP-AuNPs-aptamer) as nanoprobes in CE. Both AuNPs and HRP in the nanoprobes could amplify the CL signals in the luminol-H2O2 CL system, owing to the excellent catalytic behavior of AuNPs and HRP in the CL system. Meanwhile, the high affinity of aptamer modified on the AuNPs allows detection with high specificity. As proof-of-concept, the proposed method was employed to quantify the concentration of PDGF-BB from 0.50 to 250 fM with a detection limit of 0.21 fM. The applicability of the assay was further demonstrated in the analysis of PDGF-BB in human serum samples with acceptable accuracy and reliability. The result of this study exhibits distinct advantages, such as high sensitivity, good specificity, simplicity, and very small sample consumption. The good performances of the proposed strategy provide a powerful avenue for ultrasensitive detection of rare proteins in biological sample, showing great promise in biochemical analysis. Copyright © 2015 John Wiley & Sons, Ltd.

  12. OPEC's strategies

    Energy Technology Data Exchange (ETDEWEB)

    Wirl, Franz [Vienna Univ. (Austria). Faculty of Business, Economics and Statistics

    2012-09-15

    This paper investigates rational explanations of OPEC's strategies. Accounting for market characteristics, in particular the sluggishness of demand and supply, allows price jumps to be explained as rational OPEC strategies from a narrow economic perspective (up and down) as well as from political objectives (at least up), due to the political payoff from standing up against the 'West'. Although the temptation to accrue this political payoff was and remains high, the narrow economic profit motive, coupled with imperfect cooperation among OPEC members, explains past price volatility and high prices much better than the usual reference to political events. A more specific prediction is that OPEC will switch back to setting prices, since the current quantity strategy encourages oil importing countries to appropriate rents, in particular in connection with the need to mitigate global warming. (orig.)

  13. An Energy Efficient Localization Strategy for Outdoor Objects based on Intelligent Light-Intensity Sampling

    OpenAIRE

    Sandnes, Frode Eika

    2010-01-01

    A simple and low cost strategy for implementing pervasive objects that identify and track their own geographical location is proposed. The strategy, which is not reliant on any GIS infrastructure such as GPS, is realized using an electronic artifact with a built in clock, a light sensor, or low-cost digital camera, persistent storage such as flash and sufficient computational circuitry to make elementary trigonometric computations. The object monitors the lighting conditions and thereby detec...

  14. LC-MS analysis of the plasma metabolome–a novel sample preparation strategy

    DEFF Research Database (Denmark)

    Skov, Kasper; Hadrup, Niels; Smedsgaard, Jørn

    2015-01-01

    Blood plasma is a well-known body fluid often analyzed in studies on the effects of toxic compounds, as physiological or chemically induced changes in the mammalian body are reflected in the plasma metabolome. Sample preparation prior to LC-MS based analysis of the plasma metabolome is a challenge...... as plasma contains compounds with very different properties. Besides proteins, which usually are precipitated with organic solvent, phospholipids are known to cause ion suppression in electrospray mass spectrometry. We have compared two different sample preparation techniques prior to LC-qTOF analysis...... of plasma samples: The first is protein precipitation; the second is protein precipitation followed by solid phase extraction with sub-fractionation into three sub-samples; a phospholipid, a lipid and a polar sub-fraction. Molecular feature extraction of the data files from LC-qTOF analysis of the samples...

  15. Reproductive Strategy and Sexual Conflict: Slow Life History Strategy Inhibits Negative Androcentrism

    Directory of Open Access Journals (Sweden)

    Paul R. Gladden

    2013-11-01

    Full Text Available Recent findings indicate that a slow Life History (LH) strategy factor is associated with increased levels of Executive Functioning (EF), increased emotional intelligence, decreased levels of sexually coercive behaviors, and decreased levels of negative ethnocentrism. Based on these findings, as well as the generative theory, we predicted that slow LH strategy should inhibit negative androcentrism (bias against women). A sample of undergraduates responded to a battery of questionnaires measuring various facets of their LH strategy (e.g., sociosexual orientation, mating effort, mate-value, psychopathy, executive functioning, and emotional intelligence) and various convergent measures of negative androcentrism. A structural model that the data fit well indicated a latent protective LH strategy trait predicted decreased negative androcentrism. This trait fully mediated the relationship between participant biological sex and androcentrism. We suggest that slow LH strategy may inhibit negative attitudes toward women because of relatively decreased intrasexual competition and intersexual conflict among slow LH strategists. DOI: 10.2458/azu_jmmss.v4i1.17774

  16. Evaluation of the Frequency for Gas Sampling for the High Burnup Confirmatory Data Project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marschman, Steven C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-05-01

    This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of 1) the types and magnitudes of gases that could be present in the project cask and 2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations.

  17. Impacts of human activities and sampling strategies on soil heavy metal distribution in a rapidly developing region of China.

    Science.gov (United States)

    Shao, Xuexin; Huang, Biao; Zhao, Yongcun; Sun, Weixia; Gu, Zhiquan; Qian, Weifei

    2014-06-01

    The impacts of industrial and agricultural activities on soil Cd, Hg, Pb, and Cu in Zhangjiagang City, a rapidly developing region in China, were evaluated using two sampling strategies. The soil Cu, Cd, and Pb concentrations near industrial locations were greater than those measured away from industrial locations. The converse was true for Hg. The top enrichment factor (TEF) values, calculated as the ratio of metal concentrations between the topsoil and subsoil, were greater near industrial locations than away from industrial locations and were further related to the industry type. Thus, the TEF is an effective index to distinguish sources of toxic elements not only between anthropogenic and geogenic but also among different industry types. Targeted soil sampling near industrial locations resulted in higher estimates of soil heavy metal levels. This study revealed that the soil heavy metal contamination was primarily limited to local areas near industrial locations, despite rapid development over the last 20 years. The prevention and remediation of the soil heavy metal pollution should focus on these high-risk areas in the future. Copyright © 2014 Elsevier Inc. All rights reserved.
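    The top enrichment factor (TEF) used above is a simple topsoil/subsoil ratio; a short sketch follows, with a hypothetical classification threshold (the study relates TEF to industry type rather than fixing a single cut-off).

```python
def top_enrichment_factor(topsoil_conc, subsoil_conc):
    """TEF = topsoil / subsoil concentration ratio. Values well above 1 point
    to surface (anthropogenic) inputs; values near 1 suggest geogenic origin."""
    return topsoil_conc / subsoil_conc

def likely_anthropogenic(tef, threshold=1.5):
    # threshold is an illustrative cut-off, not one proposed by the study
    return tef > threshold
```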

  18. METABOLITE CHARACTERIZATION IN SERUM SAMPLES FROM ...

    African Journals Online (AJOL)

    Preferred Customer

    fasting 10 mL of blood sample from each individual was taken and was allowed to clot in plastic tube for 2 h at room temperature. The serum was collected by centrifugation. The samples were stored under liquid nitrogen for NMR analysis. Before NMR analysis, 600 µL of the samples were taken in a 5 mm high quality NMR ...

  19. Sampling for validation of digital soil maps

    NARCIS (Netherlands)

    Brus, D.J.; Kempen, B.; Heuvelink, G.B.M.

    2011-01-01

    The increase in digital soil mapping around the world means that appropriate and efficient sampling strategies are needed for validation. Data used for calibrating a digital soil mapping model typically are non-random samples. In such a case we recommend collection of additional independent data and

  20. Sampling the Mouse Hippocampal Dentate Gyrus

    OpenAIRE

    Lisa Basler; Lisa Basler; Stephan Gerdes; David P. Wolfer; David P. Wolfer; David P. Wolfer; Lutz Slomianka; Lutz Slomianka

    2017-01-01

    Sampling is a critical step in procedures that generate quantitative morphological data in the neurosciences. Samples need to be representative to allow statistical evaluations, and samples need to deliver a precision that makes statistical evaluations not only possible but also meaningful. Sampling generated variability should, e.g., not be able to hide significant group differences from statistical detection if they are present. Estimators of the coefficient of error (CE) have been develope...

  1. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M_* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  2. 29 CFR 95.27 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... cost principles applicable to the entity incurring the costs. Thus, allowability of costs incurred by... Governments.” The allowability of costs incurred by non-profit organizations is determined in accordance with... Organizations.” The allowability of costs incurred by institutions of higher education is determined in...

  3. 24 CFR 84.27 - Allowable costs.

    Science.gov (United States)

    2010-04-01

    ... to the entity incurring the costs. Thus, allowability of costs incurred by State, local or federally..., “Cost Principles for State and Local Governments.” The allowability of costs incurred by non-profit...-Profit Organizations.” The allowability of costs incurred by institutions of higher education is...

  4. Virtual navigation strategies from childhood to senescence: evidence for changes across the life span

    Directory of Open Access Journals (Sweden)

    Veronique D Bohbot

    2012-11-01

    Full Text Available This study sought to investigate navigational strategies across the life span, by testing 8-year-old children to 80-year-old healthy older adults on the 4 on 8 virtual maze (4/8VM). The 4/8VM was previously developed to assess spontaneous navigational strategies, i.e. hippocampal-dependent spatial strategies (navigation by memorizing relationships between landmarks) versus caudate nucleus-dependent response strategies (memorizing a series of left and right turns from a given starting position). With the 4/8VM, we previously demonstrated greater fMRI activity and grey matter in the hippocampus of spatial learners relative to response learners. A sample of 599 healthy participants was tested in the current study. Results showed that 84.4% of children, 46.3% of young adults, and 39.3% of older adults spontaneously used spatial strategies (p < 0.0001). Our results suggest that while children predominantly use spatial strategies, the proportion of participants using spatial strategies decreases across the life span, in favor of response strategies. Factors promoting response strategies include repetition, reward and stress. Since response strategies can result from successful repetition of a behavioral pattern, we propose that the increase in response strategies is a biological adaptive mechanism that allows for the automatization of behavior such as walking in order to free up hippocampal-dependent resources. However, the downside of this shift from spatial to response strategies occurs if people stop building novel relationships, which occurs with repetition and routine, and thereby stop stimulating their hippocampus. Reduced fMRI activity and grey matter in the hippocampus were shown to correlate with cognitive deficits in normal aging. Therefore, these results have important implications regarding factors involved in healthy and successful aging.

  5. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions that are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active-learning reliability method combining a Kriging model with Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has a negligible effect on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies, aimed at selecting multiple sample points at a time, are introduced and compared. The improvement is verified through several troublesome examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
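    The adaptive sampling regions idea above can be reduced to a small selection step: restrict the Monte Carlo candidate pool to points whose probability density exceeds a threshold, then pick the candidate whose surrogate prediction is closest to the limit state g = 0. The function below is a deliberately simplified stand-in (it ignores the Kriging prediction variance, so it is not the full AK-MCS U-learning function); the names and the density threshold are illustrative.

```python
def next_training_point(candidates, g_hat, density, eps=1e-3):
    """Among Monte Carlo candidates whose probability density is non-negligible
    (the adaptive sampling region), pick the point whose surrogate-predicted
    response g_hat is closest to the limit state g = 0, i.e. the point that
    most refines the failure-boundary estimate."""
    region = [i for i, d in enumerate(density) if d > eps]
    best = min(region, key=lambda i: abs(g_hat[i]))
    return candidates[best]

candidates = ["A", "B", "C", "D"]
g_hat = [5.0, 0.1, -0.05, 2.0]     # surrogate predictions at the candidates
density = [0.5, 0.5, 1e-6, 0.5]    # "C" is outside the adaptive region
chosen = next_training_point(candidates, g_hat, density)   # -> "B"
```

    Candidate "C" sits closest to the limit state but in a negligible-density region, so it is excluded; this is precisely the waste of model evaluations that the adaptive sampling regions strategy avoids.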

  6. Development Strategy of Lanting Small Industry

    Directory of Open Access Journals (Sweden)

    Atika Tri Puspitasari

    2015-12-01

    Full Text Available This research aims to describe and analyze the strategies of production, marketing, human resources (labor), and capital. Data were collected through observation, interviews, documentation, questionnaires, and triangulation; sampling was purposive. Findings show that the production and marketing strategies rely on increased orders coupled with a trademark, the development of innovative flavour varieties, adjustment of the selling price to the price of production raw materials, cooperation between manufacturers and suppliers in the distribution of lanting, promotional activities through cooperation with the relevant trade agencies and services, and offering products online. The human resources strategy is the formation of industry groups in the village of Lemahduwur (though this is not running smoothly). The capital strategy starts from the producers' own capital, with profit retained as capital accumulation and additional capital raised around parties and feast days; it also includes increased access to capital and simple, routine financial administration and accounting. The advice given is that the government and manufacturers should improve human resources, technology development, marketing and capital, and that manufacturers should improve collaboration with raw-material suppliers, maintain their typical features and create a trademark.

  7. Adoption of Emissions Abating Technologies by U.S. Electricity Producing Firms Under the SO2 Emission Allowance Market

    Science.gov (United States)

    Creamer, Gregorio Bernardo

    The objective of this research is to determine the adaptation strategies that coal-based, electricity producing firms in the United States utilize to comply with the emission control regulations imposed by the SO2 Emissions Allowance Market created by the Clean Air Act Amendment of 1990, and the effect of market conditions on the decision making process. In particular, I take into consideration (1) the existence of carbon contracts for the provision of coal that may affect coal prices at the plant level, and (2) local and geographical conditions, as well as political arrangements, that may encourage firms to adopt strategies that appear socially less efficient. As the electricity producing sector is a regulated sector, firms do not necessarily behave in a way that maximizes the welfare of society when reacting to environmental regulations. In other words, profit maximization actions taken by the firm do not necessarily translate into utility maximization for society. Therefore, the environmental regulator has to direct firms into adopting strategies that are socially efficient, i.e., that maximize utility. The SO2 permit market is an instrument that allows each firm to reduce marginal emissions abatement costs according to its own production conditions and abatement costs. Companies will be driven to opt for a cost-minimizing emissions abatement strategy or a combination of abatement strategies when adapting to new environmental regulations or markets. Firms may adopt one or more of the following strategies to reduce abatement costs while meeting the emission constraints imposed by the SO2 Emissions Allowance Market: (1) continue with business as usual on the production site while buying SO2 permits to comply with environmental regulations, (2) switch to higher quality, lower sulfur coal inputs that will generate less SO2 emissions, or (3) adopt new emissions-abating technologies. A utility optimization condition is that the marginal value of each input

  8. 45 CFR 1180.56 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Allowable costs. 1180.56 Section 1180.56 Public... by a Grantee General Administrative Responsibilities § 1180.56 Allowable costs. (a) Determination of costs allowable under a grant is made in accordance with government-wide cost principles in applicable...

  9. 7 CFR 550.25 - Allowable costs.

    Science.gov (United States)

    2010-01-01

    ... cost principles applicable to the entity incurring the costs. Thus, allowability of costs incurred by... at 2 CFR part 225. The allowability of costs incurred by non-profit organizations is determined in... at 2 CFR part 230. The allowability of costs incurred by institutions of higher education is...

  10. 46 CFR 154.428 - Allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.428 Section 154.428 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR... § 154.428 Allowable stress. The membrane tank and the supporting insulation must have allowable stresses...

  11. Gaseous radiocarbon measurements of small samples

    International Nuclear Information System (INIS)

    Ruff, M.; Szidat, S.; Gaeggeler, H.W.; Suter, M.; Synal, H.-A.; Wacker, L.

    2010-01-01

    Radiocarbon dating by means of accelerator mass spectrometry (AMS) is a well-established method for samples containing carbon in the milligram range. However, the measurement of small samples containing less than 50 μg carbon often fails. It is difficult to graphitise these samples and the preparation is prone to contamination. To avoid graphitisation, a solution can be the direct measurement of carbon dioxide. The MICADAS, the smallest accelerator for radiocarbon dating in Zurich, is equipped with a hybrid Cs sputter ion source. It allows the measurement of both graphite targets and gaseous CO2 samples without any rebuilding. This work presents experience in dealing with small samples containing 1-40 μg carbon. 500 unknown samples from different environmental research fields have been measured so far, most of them with the gas ion source. These data are compared with earlier measurements of small graphite samples, the performance of the two different techniques is discussed, and the main contributions to the blank are determined. An analysis of blank and standard data measured over several years allowed a quantification of the contamination, which was found to be of the order of 55 ng and 750 ng carbon (50 pMC) for the gaseous and the graphite samples, respectively. For quality control, a number of certified standards were measured using the gas ion source to demonstrate the reliability of the data.
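    The contamination figures quoted above (about 55 ng carbon at 50 pMC for gas measurements) lend themselves to a simple constant-contamination mass-balance correction. A hedged Python sketch; the example sample mass and measured fraction modern are invented:

```python
def blank_corrected(F_meas, m_sample_ug, m_blank_ug=0.055, F_blank=0.5):
    """Correct a measured fraction modern for a constant contamination mass.

    Mass balance: F_meas * (m_s + m_b) = F_s * m_s + F_b * m_b,
    solved for the true sample value F_s. The defaults of 0.055 ug
    (55 ng) at 50 pMC correspond to the gas-measurement blank quoted
    in the abstract.
    """
    m_total = m_sample_ug + m_blank_ug
    return (F_meas * m_total - F_blank * m_blank_ug) / m_sample_ug

# e.g. a 5 ug carbon sample measured at F = 0.600 (invented values)
F_true = blank_corrected(0.600, 5.0)
```

As the sample mass shrinks toward the blank mass, the correction (and its uncertainty) grows, which is why measurements below a few μg carbon become difficult.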

  12. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when sample size and allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
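    The worst-case mechanism analysed in this record can be illustrated by simulation: under the null hypothesis, an experimenter observes the stage-1 data and then picks the second-stage sample size that maximizes the conditional rejection probability of the naive fixed-sample test. The stdlib-only Python sketch below uses a simplified one-sample z-test with known unit variance and an invented stage-1 size and n2 grid, not the paper's exact two-arm setting:

```python
import math
import random
from statistics import NormalDist

nd = NormalDist()
z_alpha = nd.inv_cdf(0.975)          # nominal one-sided alpha = 0.025

def worst_case_type1(n1=50, n2_grid=range(10, 501, 10), sims=2000, seed=1):
    """Average the maximal conditional type 1 error over stage-1 data under H0."""
    random.seed(seed)
    total = 0.0
    for _ in range(sims):
        s1 = random.gauss(0.0, math.sqrt(n1))   # stage-1 sum under H0
        # The naive test rejects if (s1 + s2) / sqrt(n1 + n2) > z_alpha;
        # the adversary picks n2 to maximize the conditional rejection
        # probability given the observed stage-1 sum.
        total += max(
            1.0 - nd.cdf((z_alpha * math.sqrt(n1 + n2) - s1) / math.sqrt(n2))
            for n2 in n2_grid)
    return total / sims

inflated_alpha = worst_case_type1()
```

Averaging the maximized conditional error over stage-1 outcomes gives an overall type 1 error rate well above the nominal 0.025, which is the kind of inflation the paper bounds.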

  13. Resilient Grid Operational Strategies

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-01

    Extreme weather-related disturbances, such as hurricanes, are a leading cause of grid outages historically. Although physical asset hardening is perhaps the most common way to mitigate the impacts of severe weather, operational strategies may be deployed to limit the extent of societal and economic losses associated with weather-related physical damage. The purpose of this study is to examine bulk power-system operational strategies that can be deployed to mitigate the impact of severe weather disruptions caused by hurricanes, thereby increasing grid resilience to maintain continuity of critical infrastructure during extreme weather. To estimate the impacts of resilient grid operational strategies, Los Alamos National Laboratory (LANL) developed a framework for hurricane probabilistic risk analysis (PRA). The probabilistic nature of this framework allows us to estimate the probability distribution of likely impacts, as opposed to the worst-case impacts. The project scope does not include strategies that are not operations related, such as transmission system hardening (e.g., undergrounding, transmission tower reinforcement and substation flood protection) and solutions in the distribution network.

  14. 75 FR 14442 - Federal Travel Regulation (FTR); Relocation Allowances-Relocation Income Tax Allowance (RITA) Tables

    Science.gov (United States)

    2010-03-25

    ... GENERAL SERVICES ADMINISTRATION [GSA Bulletin FTR 10-04] Federal Travel Regulation (FTR); Relocation Allowances-- Relocation Income Tax Allowance (RITA) Tables AGENCY: Office of Governmentwide Policy... (73 FR 35952) specifying that GSA would no longer publish the RITA tables found in 41 CFR Part 301-17...

  15. 49 CFR 19.27 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... applicable to the entity incurring the costs. Thus, allowability of costs incurred by State, local or... Circular A-87, “Cost Principles for State and Local Governments.” The allowability of costs incurred by non... Principles for Non-Profit Organizations.” The allowability of costs incurred by institutions of higher...

  16. 36 CFR 1210.27 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... applicable to the entity incurring the costs. Thus, allowability of costs incurred by State, local or... Circular A-87, “Cost Principles for State and Local Governments.” The allowability of costs incurred by non... Principles for Non-Profit Organizations.” The allowability of costs incurred by institutions of higher...

  17. 7 CFR 3019.27 - Allowable costs.

    Science.gov (United States)

    2010-01-01

    ... applicable to the entity incurring the costs. Thus, allowability of costs incurred by State, local or... Circular A-87, “Cost Principles for State and Local Governments.” The allowability of costs incurred by non... Principles for Non-Profit Organizations.” The allowability of costs incurred by institutions of higher...

  18. Limited-sampling strategy models for estimating the pharmacokinetic parameters of 4-methylaminoantipyrine, an active metabolite of dipyrone

    Directory of Open Access Journals (Sweden)

    Suarez-Kurtz G.

    2001-01-01

    Full Text Available Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R²>0.95, bias 0.85 of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R²>0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R²>0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.

  19. Self-scheduling and bidding strategies of thermal units with stochastic emission constraints

    International Nuclear Information System (INIS)

    Laia, R.; Pousinho, H.M.I.; Melíco, R.; Mendes, V.M.F.

    2015-01-01

    Highlights: • The management of thermal power plants is considered for different emission allowance levels. • The uncertainty on electricity price is considered by a set of scenarios. • A stochastic MILP approach allows devising the bidding strategies and hedging against price uncertainty and emission allowances. - Abstract: This paper addresses the self-scheduling problem for a thermal power producer taking part in a pool-based electricity market as a price-taker, holding bilateral contracts and subject to emission constraints. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and proficiency of the proposed approach in supporting bidding strategies

  20. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  1. A library of MiMICs allows tagging of genes and reversible, spatial and temporal knockdown of proteins in Drosophila

    Science.gov (United States)

    Nagarkar-Jaiswal, Sonal; Lee, Pei-Tseng; Campbell, Megan E; Chen, Kuchuan; Anguiano-Zarate, Stephanie; Cantu Gutierrez, Manuel; Busby, Theodore; Lin, Wen-Wen; He, Yuchun; Schulze, Karen L; Booth, Benjamin W; Evans-Holm, Martha; Venken, Koen JT; Levis, Robert W; Spradling, Allan C; Hoskins, Roger A; Bellen, Hugo J

    2015-01-01

    Here, we document a collection of ∼7434 MiMIC (Minos Mediated Integration Cassette) insertions of which 2854 are inserted in coding introns. They allowed us to create a library of 400 GFP-tagged genes. We show that 72% of internally tagged proteins are functional, and that more than 90% can be imaged in unfixed tissues. Moreover, the tagged mRNAs can be knocked down by RNAi against GFP (iGFPi), and the tagged proteins can be efficiently knocked down by deGradFP technology. The phenotypes associated with RNA and protein knockdown typically correspond to severe loss of function or null mutant phenotypes. Finally, we demonstrate reversible, spatial, and temporal knockdown of tagged proteins in larvae and adult flies. This new strategy and collection of strains allow unprecedented in vivo manipulations in flies for many genes. These strategies will likely extend to vertebrates. DOI: http://dx.doi.org/10.7554/eLife.05338.001 PMID:25824290

  2. 42 CFR 417.802 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Allowable costs. 417.802 Section 417.802 Public... PLANS Health Care Prepayment Plans § 417.802 Allowable costs. (a) General rule. The costs that are considered allowable for HCPP reimbursement are the same as those for reasonable cost HMOs and CMPs specified...

  3. 34 CFR 675.33 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... costs. An institution's share of allowable costs may be in cash or in the form of services. The... 34 Education 3 2010-07-01 2010-07-01 false Allowable costs. 675.33 Section 675.33 Education... costs. (a)(1) Allowable and unallowable costs. Except as provided in paragraph (a)(2) of this section...

  4. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) get samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to
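    The first design question above — how many traps are enough for reliable richness estimates — is usually answered with sample-based species accumulation curves. A stdlib-only Python sketch on a simulated community; the 60-species geometric abundance structure and per-trap catch size are invented, not the Venezuelan data:

```python
import random

random.seed(3)
# hypothetical community: 60 species with geometric-series relative abundances
weights = [0.92 ** i for i in range(60)]

def trap_catch(n_ind=30):
    # one baited trap: the set of species among n_ind captured individuals
    return set(random.choices(range(60), weights=weights, k=n_ind))

traps = [trap_catch() for _ in range(70)]

def accumulation(traps, permutations=50):
    # mean observed richness as traps are accumulated in random order
    n = len(traps)
    curve = [0.0] * n
    for _ in range(permutations):
        seen = set()
        for k, trap in enumerate(random.sample(traps, n)):
            seen |= trap
            curve[k] += len(seen)
    return [c / permutations for c in curve]

curve = accumulation(traps)
```

The trap count at which the averaged curve flattens indicates the sampling effort needed; on real data the per-trap species sets would come from the field samples rather than simulation.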

  5. 40 CFR 82.10 - Availability of consumption allowances in addition to baseline consumption allowances for class I...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Availability of consumption allowances in addition to baseline consumption allowances for class I controlled substances. 82.10 Section 82.10... STRATOSPHERIC OZONE Production and Consumption Controls § 82.10 Availability of consumption allowances in...

  6. Trends in Scottish newborn screening programme for congenital hypothyroidism 1980-2014: strategies for reducing age at notification after initial and repeat sampling.

    Science.gov (United States)

    Mansour, Chourouk; Ouarezki, Yasmine; Jones, Jeremy; Fitch, Moira; Smith, Sarah; Mason, Avril; Donaldson, Malcolm

    2017-10-01

    To determine ages at first capillary sampling and notification, and age at notification after second sampling, in Scottish newborns referred with elevated thyroid-stimulating hormone (TSH). Referrals between 1980 and 2014 inclusive were grouped into seven 5-year blocks and analysed according to agreed standards. Of 2 116 132 newborn infants screened, 919 were referred with capillary TSH elevation ≥8 mU/L, of whom 624 had definite (606) or probable (18) congenital hypothyroidism. Median age at first sampling fell from 7 to 5 days between 1980 and 2014 (standard 4-7 days), with 22, 8 and 3 infants sampled >7 days during 2000-2004, 2005-2009 and 2010-2014. Median age at notification was consistently ≤14 days, with the range during 2000-2004, 2005-2009 and 2010-2014 falling from 6-78 to 7-52 and 7-32 days, and with 12 (14.6%), 6 (5.6%) and 5 (4.3%) infants notified >14 days. However, 18/123 (14.6%) of infants undergoing second sampling from 2000 onwards breached the ≤26-day standard for notification. By 2010-2014, the 91 infants with confirmed congenital hypothyroidism showed a favourable median age at first sample (5 days), with the start of treatment (10.5 days) approaching the age at notification. Most standards for newborn thyroid screening are being met by the Scottish programme, but there is a need to reduce the age range at notification, particularly following second sampling. Strategies to improve screening performance include carrying out initial capillary sampling as close to 96 hours as possible, introducing 6-day laboratory reporting and using electronic transmission for communicating repeat requests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  7. Prediction strategies in a TV recommender system - Method and experiments

    NARCIS (Netherlands)

    van Setten, M.J.; Veenstra, M.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; Isaísas, P.; Karmakar, N.

    2003-01-01

    Predicting the interests of a user in information is an important process in personalized information systems. In this paper, we present a way to create prediction engines that allow prediction techniques to be easily combined into prediction strategies. Prediction strategies choose one or a

  8. Deciding on Family Holidays - Role Distribution and Strategies in Use

    DEFF Research Database (Denmark)

    Therkelsen, Anette

    2010-01-01

    this complexity in role distribution. Likewise in relation to decision-making strategies, contextual factors are helpful in explaining the strategies used, in particular the convention that holidays are an extraordinary “free space” which allows for more negotiation power being bestowed on children than...

  9. Detection of silver nanoparticles in parsley by solid sampling high-resolution-continuum source atomic absorption spectrometry.

    Science.gov (United States)

    Feichtmeier, Nadine S; Leopold, Kerstin

    2014-06-01

    In this work, we present a fast and simple approach for the detection of silver nanoparticles (AgNPs) in biological material (parsley) by solid sampling high-resolution continuum source atomic absorption spectrometry (HR-CS AAS). A novel evaluation strategy was developed in order to distinguish AgNPs from ionic silver and for sizing of AgNPs. For this purpose, atomisation delay was introduced as a significant indication of AgNPs, whereas atomisation rates allow distinction of 20-, 60-, and 80-nm AgNPs. Atomisation delays were found to be higher for samples containing silver ions than for samples containing silver nanoparticles. A maximum difference in atomisation delay normalised by the sample weight of 6.27 ± 0.96 s mg(-1) was obtained after optimisation of the furnace program of the AAS. For this purpose, a multivariate experimental design was used, varying atomisation temperature, atomisation heating rate and pyrolysis temperature. Atomisation rates were calculated as the slope at the first inflection point of the absorbance signals and correlated with the size of the AgNPs in the biological sample. Hence, solid sampling HR-CS AAS proved to be a promising tool for identifying and distinguishing silver nanoparticles from ionic silver directly in solid biological samples.
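    The two signal metrics this record introduces — atomisation delay (time until the absorbance transient rises above baseline) and atomisation rate (slope of the rising edge) — can be extracted from a transient as follows. A hedged stdlib-only Python sketch; the 5% onset threshold, the Gaussian test signal, and the sample weight are assumptions, not the published procedure.

```python
import math

def atomisation_metrics(t, absorbance, onset_frac=0.05):
    """Return (delay, rate): onset time and maximum rising slope of a transient."""
    peak = max(absorbance)
    onset = next(i for i, a in enumerate(absorbance) if a >= onset_frac * peak)
    delay = t[onset]
    rate = max((absorbance[i + 1] - absorbance[i]) / (t[i + 1] - t[i])
               for i in range(len(t) - 1))
    return delay, rate

# synthetic absorbance transient peaking at 5 s (invented test signal)
t = [i * 0.1 for i in range(100)]
signal = [math.exp(-((x - 5.0) / 0.8) ** 2) for x in t]
delay, rate = atomisation_metrics(t, signal)
# delay normalised by sample weight, as in the abstract (0.25 mg is invented)
delay_per_mg = delay / 0.25
```

Comparing the normalised delay and the rising-edge slope between samples is the kind of evaluation the abstract uses to separate ionic silver from AgNPs and to distinguish particle sizes.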

  10. Wideband 4-diode sampling circuit

    Science.gov (United States)

    Wojtulewicz, Andrzej; Radtke, Maciej

    2016-09-01

    The objective of this work was to develop a wide-band sampling circuit. The device should have the ability to collect samples of a very fast signal applied to its input, amplify them and prepare them for further processing. The study emphasizes the method of sampling pulse shaping. The use of an ultrafast pulse generator allows sampling of signals with a wide frequency spectrum, reaching several gigahertz. The device uses a pulse transformer to prepare symmetrical pulses. Their final shape is formed with the help of a step recovery diode, two coplanar strips and a Schottky diode. The resulting device can be used in a sampling oscilloscope, as well as in other measurement systems.

  11. The future(s) of emission allowances

    International Nuclear Information System (INIS)

    Rosenzweig, K.M.; Villarreal, J.A.

    1993-01-01

    The Clean Air Act Amendments of 1990 (CAAA) established a sulfur dioxide emission allowance system to be implemented by the US Environmental Protection Agency (EPA). Under the two-phase implementation of the program, electric utilities responsible for approximately 70 percent of SO2 emissions in the United States will be issued emission allowances, each representing authorization to emit one ton of sulfur dioxide during a specified calendar year or a later year. Allowances will be issued to utilities with electric-generating units affected by the CAAA limits, as well as to certain entities which may choose to opt in to the program. Each utility or other emission source must hold a number of allowances at least equal to its total SO2 emissions during any given year. Unused allowances may be sold, traded, or held in inventory for use against SO2 emissions in future years. Anyone can buy and hold allowances, including affected utilities, non-utility companies, SO2 allowance brokers and dealers, environmental groups, and individuals. During Phase I of the program, allowances equivalent to approximately 6.4 million tons of SO2 emissions will be allocated annually to a group of 110 large, high-SO2-emitting power plants. In Phase II, virtually all power-generating utilities (representing approximately 99.4 percent of total US utility emissions) will be subject to the program. The number of allowances issued will increase to approximately 8.9 million a year, with certain special allocations raising the actual number issued to 9.48 million between the years 2000 and 2009, and 8.95 million yearly thereafter. Thus, the CAAA goal of annual emissions of 9 million tons should be achieved by 2010, when virtually all US emission sources will be participating in the program.

  12. Comprehensive Study of the Flow Control Strategy in a Wirelessly Charged Centrifugal Microfluidic Platform with Two Rotation Axes.

    Science.gov (United States)

    Zhu, Yunzeng; Chen, Yiqi; Meng, Xiangrui; Wang, Jing; Lu, Ying; Xu, Youchun; Cheng, Jing

    2017-09-05

    Centrifugal microfluidics has been widely applied in the sample-in-answer-out systems for the analyses of nucleic acids, proteins, and small molecules. However, the inherent characteristic of unidirectional fluid propulsion limits the flexibility of these fluidic chips. Providing an extra degree of freedom to allow the unconstrained and reversible pumping of liquid is an effective strategy to address this limitation. In this study, a wirelessly charged centrifugal microfluidic platform with two rotation axes has been constructed and the flow control strategy in such platform with two degrees of freedom was comprehensively studied for the first time. Inductively coupled coils are installed on the platform to achieve wireless power transfer to the spinning stage. A micro servo motor is mounted on both sides of the stage to alter the orientation of the device around a secondary rotation axis on demand during stage rotation. The basic liquid operations on this platform, including directional transport of liquid, valving, metering, and mixing, are comprehensively studied and realized. Finally, a chip for the simultaneous determination of hexavalent chromium [Cr(VI)] and methanal in water samples is designed and tested based on the strategy presented in this paper, demonstrating the potential use of this platform for on-site environmental monitoring, food safety testing, and other life science applications.

  13. 50 CFR 85.41 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Allowable costs. 85.41 Section 85.41... Use/Acceptance of Funds § 85.41 Allowable costs. (a) Allowable grant costs are limited to those costs... applicable Federal cost principles in 43 CFR 12.60(b). Purchase of informational signs, program signs, and...

  14. 46 CFR 154.447 - Allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.447 Section 154.447 Shipping COAST... Tank Type B § 154.447 Allowable stress. (a) An independent tank type B designed from bodies of revolution must have allowable stresses 3 determined by the following formulae: 3 See Appendix B for stress...

  15. Protocol for Microplastics Sampling on the Sea Surface and Sample Analysis

    Science.gov (United States)

    Kovač Viršek, Manca; Palatinus, Andreja; Koren, Špela; Peterlin, Monika; Horvat, Petra; Kržan, Andrej

    2016-01-01

    Microplastic pollution in the marine environment is a scientific topic that has received increasing attention over the last decade. The majority of scientific publications address microplastic pollution of the sea surface. The protocol below describes the methodology for sampling, sample preparation, separation and chemical identification of microplastic particles. A manta net fixed on an »A frame« attached to the side of the vessel was used for sampling. Microplastic particles caught in the cod end of the net were separated from samples by visual identification and use of stereomicroscopes. Particles were analyzed for their size using an image analysis program and for their chemical structure using ATR-FTIR and micro FTIR spectroscopy. The described protocol is in line with recommendations for microplastics monitoring published by the Marine Strategy Framework Directive (MSFD) Technical Subgroup on Marine Litter. This written protocol with video guide will support the work of researchers that deal with microplastics monitoring all over the world. PMID:28060297

  16. A tactical asset allocation strategy that exploits variations in VIX

    OpenAIRE

    Richard Cloutier; Arsen Djatej; Dean Kiefer

    2017-01-01

    Buy and hold strategies make staying disciplined difficult for investors, especially given the variability of returns for different asset classes/strategies during divergent market conditions. Market timing strategies, on the other hand, present significant theoretical benefits, but in reality these benefits are difficult to obtain. Tactical asset allocation, where limited deviations from the strategic allocation are allowed, permits the portfolio manager to take advantage of market conditions...

  17. Polygyny, mate-guarding, and posthumous fertilization as alternative male mating strategies.

    Science.gov (United States)

    Zamudio, K R; Sinervo, B

    2000-12-19

    Alternative male mating strategies within populations are thought to be evolutionarily stable because different behaviors allow each male type to successfully gain access to females. Although alternative male strategies are widespread among animals, quantitative evidence for the success of discrete male strategies is available for only a few systems. We use nuclear microsatellites to estimate the paternity rates of three male lizard strategies previously modeled as a rock-paper-scissors game. Each strategy has strengths that allow it to outcompete one morph, and weaknesses that leave it vulnerable to the strategy of another. Blue-throated males mate-guard their females and avoid cuckoldry by yellow-throated "sneaker" males, but mate-guarding is ineffective against aggressive orange-throated neighbors. The ultradominant orange-throated males are highly polygynous and maintain large territories; they overpower blue-throated neighbors and cosire offspring with their females, but are often cuckolded by yellow-throated males. Finally, yellow-throated sneaker males sire offspring via secretive copulations and often share paternity of offspring within a female's clutch. Sneaker males sire more offspring posthumously, indicating that sperm competition may be an important component of their strategy.
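The rock-paper-scissors structure described above can be conveyed with standard replicator dynamics. The payoff matrix below is a generic zero-sum RPS matrix chosen purely for illustration, not one estimated from the lizard paternity data: orange beats blue, blue beats yellow, yellow beats orange, so morph frequencies cycle rather than settle.

```python
# Replicator-dynamics sketch of a rock-paper-scissors morph game
# (payoffs are illustrative, not fitted to the lizard data).
A = {  # row strategy's payoff against column strategy
    "orange": {"orange": 0, "blue": 1, "yellow": -1},
    "blue":   {"orange": -1, "blue": 0, "yellow": 1},
    "yellow": {"orange": 1, "blue": -1, "yellow": 0},
}
x = {"orange": 0.6, "blue": 0.3, "yellow": 0.1}  # initial morph frequencies
dt = 0.01
for _ in range(5000):  # forward-Euler integration of replicator dynamics
    fitness = {s: sum(A[s][t] * x[t] for t in x) for s in x}
    avg = sum(x[s] * fitness[s] for s in x)
    x = {s: x[s] + dt * x[s] * (fitness[s] - avg) for s in x}
print({s: round(v, 3) for s, v in x.items()})  # frequencies keep cycling
```

No morph goes extinct in this zero-sum setting, which mirrors the persistence of all three throat-color strategies in the population.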

  18. Dissecting the pathobiology of altered MRI signal in amyotrophic lateral sclerosis: A post mortem whole brain sampling strategy for the integration of ultra-high-field MRI and quantitative neuropathology.

    Science.gov (United States)

    Pallebage-Gamarallage, Menuka; Foxley, Sean; Menke, Ricarda A L; Huszar, Istvan N; Jenkinson, Mark; Tendler, Benjamin C; Wang, Chaoyue; Jbabdi, Saad; Turner, Martin R; Miller, Karla L; Ansorge, Olaf

    2018-03-13

    Amyotrophic lateral sclerosis (ALS) is a clinically and histopathologically heterogeneous neurodegenerative disorder, in which therapy is hindered by the rapid progression of disease and lack of biomarkers. Magnetic resonance imaging (MRI) has demonstrated its potential for detecting the pathological signature and tracking disease progression in ALS. However, the microstructural and molecular pathological substrate is poorly understood and generally defined histologically. One route to understanding and validating the pathophysiological correlates of MRI signal changes in ALS is to directly compare MRI to histology in post mortem human brains. The article delineates a universal whole brain sampling strategy of pathologically relevant grey matter (cortical and subcortical) and white matter tracts of interest suitable for histological evaluation and direct correlation with MRI. A standardised systematic sampling strategy that was compatible with co-registration of images across modalities was established for regions representing phosphorylated 43-kDa TAR DNA-binding protein (pTDP-43) patterns that were topographically recognisable with defined neuroanatomical landmarks. Moreover, tractography-guided sampling facilitated accurate delineation of white matter tracts of interest. A digital photography pipeline at various stages of sampling and histological processing was established to account for structural deformations that might impact alignment and registration of histological images to MRI volumes. Combined with quantitative digital histology image analysis, the proposed sampling strategy is suitable for routine implementation in a high-throughput manner for acquisition of large-scale histology datasets. Proof of concept was determined in the spinal cord of an ALS patient where multiple MRI modalities (T1, T2, FA and MD) demonstrated sensitivity to axonal degeneration and associated heightened inflammatory changes in the lateral corticospinal tract. Furthermore

  19. Making strategy: learning by doing.

    Science.gov (United States)

    Christensen, C M

    1997-01-01

    Companies find it difficult to change strategy for many reasons, but one stands out: strategic thinking is not a core managerial competence at most companies. Executives hone their capabilities by tackling problems over and over again. Changing strategy, however, is not usually a task that they face repeatedly. Once companies have found a strategy that works, they want to use it, not change it. Consequently, most managers do not develop a competence in strategic thinking. This Manager's Tool Kit presents a three-stage method executives can use to conceive and implement a creative and coherent strategy themselves. The first stage is to identify and map the driving forces that the company needs to address. The process of mapping provides strategy-making teams with visual representations of team members' assumptions; those pictures, in turn, enable managers to achieve consensus in determining the driving forces. Once a senior management team has formulated a new strategy, it must align the strategy with the company's resource-allocation process to make implementation possible. Senior management teams can translate their strategy into action by using aggregate project planning. And management teams that link strategy and innovation through that planning process will develop a competence in implementing strategic change. The author guides the reader through the three stages of strategy making by examining the case of a manufacturing company that was losing ground to competitors. After mapping the driving forces, the company's senior managers were able to devise a new strategy that allowed the business to maintain a competitive advantage in its industry.

  20. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    Science.gov (United States)

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...
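A minimal simulation, under the textbook assumption of perfect ranking, shows why ranked set sampling (RSS) can buy precision over simple random sampling (SRS) for the same number of measured units. All numbers here are illustrative, not from the study:

```python
import random, statistics

def rss_mean(pop, m=3):
    """Ranked set sample mean: for each rank r, draw m units, rank them
    (here by their true value, i.e. perfect ranking) and measure only
    the r-th ranked unit - m measurements in total."""
    sample = []
    for r in range(m):
        batch = sorted(random.sample(pop, m))
        sample.append(batch[r])
    return statistics.mean(sample)

def srs_mean(pop, n=3):
    """Simple random sample mean with the same number of measurements."""
    return statistics.mean(random.sample(pop, n))

random.seed(3)
pop = [random.gauss(0, 1) for _ in range(10_000)]
rss = [rss_mean(pop) for _ in range(2000)]
srs = [srs_mean(pop) for _ in range(2000)]
print(statistics.pvariance(rss), statistics.pvariance(srs))
```

With only three measured units per estimate, the RSS estimator's variance comes out clearly below the SRS variance, which is the precision gain the abstract alludes to; in practice ranking costs (field time, imperfect ranking) eat into that gain, which is the "total costs" issue the record raises.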

  1. An Optimal Sample Data Usage Strategy to Minimize Overfitting and Underfitting Effects in Regression Tree Models Based on Remotely-Sensed Data

    Directory of Open Access Journals (Sweden)

    Yingxin Gu

    2016-11-01

    Full Text Available Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sample data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, with values from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
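The train/test trade-off the study explores can be mimicked with a toy piecewise-constant model standing in for a regression tree, where the number of bins plays the role of the number of rules. The data and the binning model below are invented for illustration; the study itself used regression trees on Landsat/NDVI data:

```python
import random
from statistics import mean

def bin_tree_fit(xs, ys, k):
    """Fit a k-'rule' piecewise-constant model: split x into k
    equal-frequency bins and predict the mean y within each bin."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    rules = []  # list of (upper_bound, prediction)
    for i in range(k):
        chunk = pairs[i * n // k:(i + 1) * n // k]
        if chunk:
            rules.append((chunk[-1][0], mean(y for _, y in chunk)))
    return rules

def predict(rules, x):
    for upper, yhat in rules:
        if x <= upper:
            return yhat
    return rules[-1][1]

def mad(rules, xs, ys):
    """Mean absolute difference between predictions and observations."""
    return mean(abs(predict(rules, x) - y) for x, y in zip(xs, ys))

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(400)]
ys = [x ** 2 + random.gauss(0, 5) for x in xs]  # noisy synthetic signal

split = int(0.8 * len(xs))  # 80% training, as in the study's optimum
for k in (2, 6, 50):
    rules = bin_tree_fit(xs[:split], ys[:split], k)
    print(k, round(mad(rules, xs[:split], ys[:split]), 2),
             round(mad(rules, xs[split:], ys[split:]), 2))
```

Training MAD keeps falling as the number of "rules" grows, while test MAD bottoms out at a moderate rule count, which is the overfitting/underfitting pattern the authors used to pick a six-rule model.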

  2. Utility allowed returns and market extremes

    International Nuclear Information System (INIS)

    Murry, D.A.; Nan, G.D.; Harrington, B.M.

    1993-01-01

    In recent years interest rates have fluctuated from exceptionally high levels in the early 1980s to their current levels, the lowest in two decades. Observers and analysts generally have assumed that allowed returns set by regulatory commissions follow the movement of interest rates; indeed, some analysts use a risk premium method to estimate the cost of common equity, assuming a constant and linear relationship between interest rates and the cost of common equity. That suggests we could expect a relatively stable relationship between interest rates and allowed returns as well. However, a simple comparison of allowed returns and interest rates shows that this has not been the case in recent years. The relationship between market interest rates and the returns allowed by commissions varies and is obviously a great deal more complicated. Empirically, there appears to be only a narrow range in which market interest rates significantly affect the allowed returns on common stock set by state commissions, at least for electric and combination utilities. If rates are at historically low levels, allowed returns based largely on market rates will hasten subsequent rate filings, and commissions appear to look beyond the low rate levels. Conversely, it appears that regulators do not let historically high market rates determine allowed returns either. At either high or low interest levels, caution seems to be the policy.
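The constant risk premium method that the authors question can be stated in one line: the estimated cost of equity is the prevailing interest rate plus a fixed premium. A minimal sketch, with a 4% premium that is an assumed figure purely for illustration, not one taken from the article:

```python
# Risk premium method (illustrative): cost of equity = market interest
# rate + a constant premium. The 4% premium is an assumed number.
def risk_premium_cost_of_equity(treasury_yield, premium=0.04):
    return treasury_yield + premium

for y in (0.03, 0.08, 0.13):  # low, moderate and high rate environments
    print(f"{y:.0%} -> {risk_premium_cost_of_equity(y):.0%}")
```

The article's point is precisely that this linear pass-through does not describe commission behavior at rate extremes, where allowed returns appear largely decoupled from market rates.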

  3. Direct trace-elemental analysis of urine samples by laser ablation-inductively coupled plasma mass spectrometry after sample deposition on clinical filter papers.

    Science.gov (United States)

    Aramendía, Maite; Rello, Luis; Vanhaecke, Frank; Resano, Martín

    2012-10-16

    Collection of biological fluids on clinical filter papers offers important logistic advantages, although analysis of these specimens is far from straightforward. Concerning urine analysis, and particularly when direct trace-elemental analysis by laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) is the goal, several problems arise, such as lack of sensitivity or uneven distribution of the analytes on the filter paper, making it quite difficult to obtain reliable quantitative results. In this paper, a novel approach for urine collection is proposed that circumvents many of these problems. The methodology consists of the use of precut filter paper discs on which large amounts of sample can be retained in a single deposition. This provides higher amounts of the target analytes and, thus, sufficient sensitivity, and allows addition of an adequate internal standard at the clinical lab prior to analysis, making it suitable for a strategy based on unsupervised sample collection and subsequent analysis at referral centers. On the basis of this sampling methodology, an analytical method was developed for the direct determination of several elements in urine (Be, Bi, Cd, Co, Cu, Ni, Sb, Sn, Tl, Pb, and V) at the low μg L(-1) level by means of LA-ICPMS. The method developed provides good results in terms of accuracy and LODs (≤1 μg L(-1) for most of the analytes tested), with a precision in the range of 15%, fit for purpose for clinical control analysis.

  4. An Overview of Advanced SILAC-Labeling Strategies for Quantitative Proteomics.

    Science.gov (United States)

    Terzi, F; Cambridge, S

    2017-01-01

    Comparative, quantitative mass spectrometry of proteins provides great insight into protein abundance and function, but some molecular characteristics related to protein dynamics are not so easily obtained. Because the metabolic incorporation of stable amino acid isotopes allows the extraction of distinct temporal and spatial aspects of protein dynamics, the SILAC methodology is uniquely suited to be adapted for advanced labeling strategies. New SILAC strategies have emerged that allow deeper foraging into the complexity of cellular proteomes. Here, we review a few advanced SILAC-labeling strategies that have been published in recent years. Among them, different subsaturating-labeling as well as dual-labeling schemes are most prominent for a range of analyses, including those of neuronal proteomes, secretion, or cell-cell-induced stimulations. These recent developments suggest that much more information can be gained from proteomic analyses if the labeling strategies are specifically tailored toward the experimental design. © 2017 Elsevier Inc. All rights reserved.

  5. Reverse sample genome probing, a new technique for identification of bacteria in environmental samples by DNA hybridization, and its application to the identification of sulfate-reducing bacteria in oil field samples

    International Nuclear Information System (INIS)

    Voordouw, G.; Voordouw, J.K.; Karkhoff-Schweizer, R.R.; Fedorak, P.M.; Westlake, D.W.S.

    1991-01-01

    A novel method for identification of bacteria in environmental samples by DNA hybridization is presented. It is based on the fact that, even within a genus, the genomes of different bacteria may have little overall sequence homology. This allows the use of the labeled genomic DNA of a given bacterium (referred to as a standard) to probe for its presence and that of bacteria with highly homologous genomes in total DNA obtained from an environmental sample. Alternatively, total DNA extracted from the sample can be labeled and used to probe filters on which denatured chromosomal DNA from relevant bacterial standards has been spotted. The latter technique is referred to as reverse sample genome probing, since it is the reverse of the usual practice of deriving probes from reference bacteria for analyzing a DNA sample. Reverse sample genome probing allows identification of bacteria in a sample in a single step once a master filter with suitable standards has been developed. Application of reverse sample genome probing to the identification of sulfate-reducing bacteria in 31 samples obtained primarily from oil fields in the province of Alberta has indicated that there are at least 20 genotypically different sulfate-reducing bacteria in these samples.

  6. A new modeling strategy for third-order fast high-performance liquid chromatographic data with fluorescence detection. Quantitation of fluoroquinolones in water samples.

    Science.gov (United States)

    Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C

    2015-03-01

    Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples, containing three fluoroquinolones and uncalibrated interferences, were evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10 % for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.
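For readers unfamiliar with PARAFAC, a generic rank-R decomposition of a three-way array fitted by alternating least squares (ALS) can be sketched as follows. This is the textbook algorithm, not the augmented variant proposed in the paper, and the synthetic "time × emission × excitation" data are for illustration only:

```python
import numpy as np

def khatri_rao(C, B):
    """Column-wise Kronecker product; row index k*J + j."""
    K, R = C.shape
    J, _ = B.shape
    return (C[:, None, :] * B[None, :, :]).reshape(K * J, R)

def parafac(X, R, n_iter=20, seed=0):
    """Rank-R CP/PARAFAC decomposition of a 3-way array by ALS."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    X1 = X.transpose(0, 2, 1).reshape(I, -1)  # columns indexed k*J + j
    X2 = X.transpose(1, 2, 0).reshape(J, -1)  # columns indexed k*I + i
    X3 = X.transpose(2, 1, 0).reshape(K, -1)  # columns indexed j*I + i
    for _ in range(n_iter):
        A = X1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = X2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = X3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

# Synthetic rank-1 "elution time x emission x excitation" array.
rng = np.random.default_rng(1)
a, b, c = rng.random(8), rng.random(6), rng.random(5)
X = np.einsum('i,j,k->ijk', a, b, c)
A, B, C = parafac(X, R=1)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.abs(X - Xhat).max())  # essentially zero for exact-rank data
```

The paper's Augmented PARAFAC additionally stacks calibration and test runs along the elution-time mode before this kind of decomposition, which is what lets it absorb run-to-run changes in the chromatographic profiles.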

  7. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during LVCC sampling were efficiently merged into one step, using a bromine-thiourea strategy for ethylene and an OPA-NH 4 + strategy for SO 2, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO 2 from fruits. Trace ethylene and SO 2 from real fruit samples could be accurately quantified by this method, and SERS confirmed that the concentrations of the gas targets from real samples fluctuated only slightly during the entire LVCC sampling process. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. 40 CFR 73.27 - Special allowance reserve.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Special allowance reserve. 73.27 Section 73.27 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE ALLOWANCE SYSTEM Allowance Allocations § 73.27 Special allowance reserve. (a...

  9. GMOtrack: generator of cost-effective GMO testing strategies.

    Science.gov (United States)

    Novak, Petra Krau; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana

    2009-01-01

    Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
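The two-phase (screening-then-identification) cost optimization can be illustrated with a tiny brute-force model. All assay names, costs, and prior probabilities below are invented, and the cost model is deliberately simplified relative to GMOtrack (for instance, it ignores that one positive screening element triggers identification of every line carrying that element):

```python
from itertools import combinations

# Hypothetical priors: P(GMO line present in a routine sample).
GMOS = {"lineA": 0.30, "lineB": 0.10, "lineC": 0.05}
# Hypothetical screening assays: cost and the lines each one flags.
SCREENS = {
    "35S": (1.0, {"lineA", "lineB"}),
    "NOS": (1.0, {"lineB", "lineC"}),
}
ID_COST = 4.0  # cost of one event-specific identification test

def expected_cost(chosen):
    """Expected per-sample cost of a screening subset: screening is always
    run; a screened line is identified only when screening fires (prob ~ p),
    while an unscreened line must always be tested directly."""
    cost = sum(SCREENS[s][0] for s in chosen)
    covered = set().union(*(SCREENS[s][1] for s in chosen)) if chosen else set()
    for gmo, p in GMOS.items():
        cost += p * ID_COST if gmo in covered else ID_COST
    return cost

best = min((c for r in range(len(SCREENS) + 1)
            for c in combinations(SCREENS, r)), key=expected_cost)
print(best, expected_cost(best))
```

Even in this toy setting the optimum depends on the priors: lowering the prior probabilities makes broad screening cheaper than direct identification, which is why the paper's optimal strategies are laboratory-dependent.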

  10. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    Science.gov (United States)

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers could greatly abolish the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
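The flavor of a biased random walk along a collective variable can be conveyed with a minimal Wang-Landau-style flat-histogram sketch on a one-dimensional double well. This illustrates only the generalized-ensemble idea of filling in barriers along the chosen variable; it does not implement the orthogonal-space machinery of OSRW itself, and the potential and parameters are invented:

```python
import math, random

def double_well(x):
    """Toy double-well potential with a barrier at x = 0."""
    return (x * x - 1.0) ** 2 * 5.0

random.seed(1)
nbins, lo, hi = 40, -1.5, 1.5
bias = [0.0] * nbins     # accumulated bias along the collective variable
visits = [0] * nbins
f = 1.0                  # bias increment, reduced as sampling proceeds
x = -1.0                 # start in the left well
binof = lambda v: min(nbins - 1, max(0, int((v - lo) / (hi - lo) * nbins)))
for step in range(200000):
    xn = x + random.uniform(-0.1, 0.1)
    if lo < xn < hi:     # Metropolis step on the biased potential
        dE = (double_well(xn) + bias[binof(xn)]) \
             - (double_well(x) + bias[binof(x)])
        if dE <= 0 or random.random() < math.exp(-dE):
            x = xn
    bias[binof(x)] += f  # penalize the current bin -> pushes walker away
    visits[binof(x)] += 1
    if step % 50000 == 49999:
        f *= 0.5         # gradually refine the bias
print(min(visits), max(visits))  # every bin, barrier included, is visited
```

An unbiased walk at this effective temperature would stay trapped in one well; the growing bias flattens the free energy along x and forces barrier crossings, which is the "explicit barrier" half of the problem the OSRW strategy addresses (the hidden, orthogonal barriers require the additional bias the paper describes).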

  11. THE ESSENCE OF STRATEGY DEVELOPMENT COMPANY IN THE INTEGRATED STRUCTURE

    Directory of Open Access Journals (Sweden)

    A. I. Khorev

    2014-01-01

    Full Text Available Summary. The article first establishes a rational sequence for examining the nature of a development strategy for a company that belongs to an integrated structure. It then considers the notions of "strategy", "company development" and "integrated structure" separately, applying them to companies included in an integrated structure and distinguishing such a strategy from the development strategy of an autonomous company. Taking this nature into account, the article defines the functions that such a strategy must fulfil, and then sets out six steps describing the sequence in which the development strategy of a company within an integrated structure is formulated. This analysis yields a complete definition of the essence of the development strategy of a company included in an integrated structure, and locates the development strategy within the hierarchical structure of strategies. The development strategy of a company within an integrated structure, like that of an autonomous company, is a competitive strategy, and it distinguishes the strategies of cost leadership, differentiation and cost focusing; the authors also analyze the cost-optimization strategy. Based on this complete definition and the strategy's place in the hierarchical structure, the article defines the functions that corporate, competitive and functional strategies perform in the management of companies within an integrated structure, and characterizes the strategic decisions applied at the different levels of all three types of strategy. The article's research allows companies included in an integrated structure to define their place inside the

  12. q-Strategy spatial prisoner's dilemma game

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhi-Hua, E-mail: zhihuli@mail.ustc.edu.cn [Department of Material Science and Engineering, University of Science and Technology of China, Hefei 230026 (China); Fan, Hong-Yi [Department of Material Science and Engineering, University of Science and Technology of China, Hefei 230026 (China); Xu, Wen-Long [School of Computer Science and Technology, Beihang University, Beijing 100083 (China); Yang, Han-Xin [Department of Modern Physics, University of Science and Technology of China, Hefei 230026 (China)

    2011-09-26

    We generalize the usual two-strategy prisoner's dilemma game to a multi-strategy game, in which the strategy variable s is allowed to take q different fractional values lying between 0 and 1. The fractional-valued strategies signify that individuals are not absolutely cooperative or defective; instead, they can adopt intermediate strategies. Simulation results on 1D and 2D lattices show that, compared with the binary-strategy game, the multi-strategy game can sustain cooperation in more stringent defective environments. We give a comprehensive analysis of the distributions of the surviving strategies and compare pairwise the relative strengths and weaknesses of different strategies. It turns out that some intermediate strategies survive against pure defection because they can limit how much they are exploited while still benefiting from the spatial reciprocity effect. Our work may shed some light on intermediate behaviors in human society. -- Highlights: → We propose a q-strategy prisoner's dilemma game with intermediate strategies. → The intermediate strategies characterize the extent of cooperation or defection. → We implement the model in a structured population. → The intermediate strategies can promote cooperation.
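A toy version of such a model, reduced to a one-dimensional ring with imitate-the-better-neighbour updating and an assumed payoff parametrisation (R=1, T=b, S=P=0, with fractional strategies treated as cooperation levels), can be sketched as follows; the update rule and parameters are illustrative, not the paper's exact model:

```python
import random

q, N, b = 5, 200, 1.05                      # q strategy levels, ring size,
levels = [i / (q - 1) for i in range(q)]    # temptation b (assumed values)
random.seed(2)
s = [random.choice(levels) for _ in range(N)]

def payoff(si, sj):
    """Bilinear interpolation of the PD payoffs R=1, T=b, S=0, P=0 when
    si, sj are read as cooperation levels."""
    return si * sj * 1.0 + (1 - si) * sj * b

def total(i):
    """Accumulated payoff of site i against its two ring neighbours."""
    return payoff(s[i], s[(i - 1) % N]) + payoff(s[i], s[(i + 1) % N])

for _ in range(2000):                       # imitate a better-scoring neighbour
    i = random.randrange(N)
    j = (i + random.choice((-1, 1))) % N
    if total(j) > total(i):
        s[i] = s[j]
print(sorted(set(s)))                       # strategy levels still present
```

Tracking which fractional levels survive after many updates is the 1D analogue of the strategy-distribution analysis reported in the paper.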

  13. Sampling design for long-term regional trends in marine rocky intertidal communities

    Science.gov (United States)

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.
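The kind of power comparison described, many sites sampled coarsely versus few sites sampled intensively, can be sketched with a small Monte Carlo experiment. The effect size, variability, and detection rule below are invented and far cruder than the analyses used in the study:

```python
import random, statistics

def detects_decline(n_sites, years=10, decline=0.4, sd=6.0):
    """Simulate yearly mean cover from n_sites noisy sites under a linear
    decline, then apply a crude 'trend detected' rule to the fitted slope."""
    ts = list(range(years))
    means = [statistics.mean(50 - decline * t + random.gauss(0, sd)
                             for _ in range(n_sites)) for t in ts]
    tbar, ybar = statistics.mean(ts), statistics.mean(means)
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, means))
             / sum((t - tbar) ** 2 for t in ts))
    return slope < -decline / 2

def power(n_sites, reps=500):
    """Fraction of simulated surveys in which the decline is detected."""
    return statistics.mean(detects_decline(n_sites) for _ in range(reps))

random.seed(7)
p_few, p_many = power(6), power(25)
print(p_few, p_many)   # adding sites raises the detection probability
```

Holding total effort fixed and varying sites versus visits per site in such a simulation is a cheap way to explore the "sample more sites fewer times" conclusion before committing to a field design.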

  14. Research Award: Corporate Strategy and Evaluation Division

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    These one‐year, paid, in‐house programs of training and mentorship allow award ... and developmental evaluation, to assess and adjust their program strategies? ... Be either currently enrolled at a recognized university at the master's or ...

  15. THE EFFECT OF MARKETING STRATEGY ON CUSTOMER SATISFACTION AT PT BANK TABUNGAN NEGARA KCP KOPO

    Directory of Open Access Journals (Sweden)

    Riris Roisah

    2016-03-01

    Full Text Available ABSTRACT - Banking competition and business conditions are changing very fast. PT. Bank Tabungan Negara, as one of the banking institutions in Indonesia, was originally commissioned by the Indonesian government to be the only bank permitted to distribute mortgages; with the change in Indonesian government policy that allowed all banks to channel mortgages, Bank Tabungan Negara had to shift its business focus. This study aims to describe the marketing strategy, to measure the level of customer satisfaction, and to determine the effect of marketing strategy on customer satisfaction at PT. Bank Tabungan Negara KCP Kopo. The research method used is quantitative, employing statistical analysis, and the study is a survey that took a sample from the population and used a questionnaire as the measurement tool. The design is a descriptive-verificative method; the data were analyzed with linear regression, the coefficient of determination, and t-tests. Based on the findings and the t-test (partial), the marketing strategy has a significant positive effect on customer satisfaction. The contribution of the marketing strategy to customer satisfaction is shown by a coefficient of determination of 0.472, meaning that the marketing strategy explains 47.2% of customer satisfaction, while the remaining 52.8% is influenced by variables not examined. The results also showed R = 0.687; since R approaches 1, the marketing strategy variables influence customer satisfaction. Keywords: strategy marketing, marketing mix, customer satisfaction.
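The two reported statistics are mutually consistent: for a simple linear regression, the coefficient of determination is the square of the correlation coefficient, as a quick check confirms:

```python
# Consistency check on the abstract's reported figures:
# R = 0.687 implies R^2 = 0.687^2 = 0.472 (the stated coefficient
# of determination, i.e. 47.2% of variance explained).
r = 0.687
print(round(r * r, 3))
```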

  16. Sampling by electro-erosion on irradiated materials

    International Nuclear Information System (INIS)

    Riviere, M.; Pizzanelli, J.P.

    1986-05-01

    Sampling of irradiated materials, in particular for the study of the mechanical properties of steels in the FAST NEUTRON program, required the installation in a hot cell of an electro-erosion machining device. This device allows the machining of toughness, tensile and resilience test pieces. [fr]

  17. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    Science.gov (United States)

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
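The idea of balanced regional parallelization, splitting the genome into regions and dealing them evenly across workers, can be sketched as follows. The fixed windowing and round-robin assignment are illustrative only, not Churchill's actual load-balancing algorithm; the chromosome lengths are standard GRCh38 values:

```python
# Illustrative regional sharding of a genome across parallel workers.
CHROM_SIZES = {  # GRCh38 lengths for a few sequences
    "chr1": 248_956_422,
    "chr2": 242_193_529,
    "chrM": 16_569,
}

def regions(chrom_sizes, window=50_000_000):
    """Cut each chromosome into fixed-size regions (chrom, start, end)."""
    for chrom, size in chrom_sizes.items():
        for start in range(0, size, window):
            yield (chrom, start, min(start + window, size))

def assign(region_list, n_workers):
    """Deal regions round-robin so worker loads differ by at most one."""
    shards = [[] for _ in range(n_workers)]
    for i, reg in enumerate(region_list):
        shards[i % n_workers].append(reg)
    return shards

shards = assign(list(regions(CHROM_SIZES)), 4)
print([len(shard) for shard in shards])
```

In a real pipeline each shard would be aligned and variant-called independently, with region boundaries handled deterministically so that results merge reproducibly, which is the property Churchill emphasizes.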

  18. CRISPR/Cas9 allows efficient and complete knock-in of a destabilization domain-tagged essential protein in a human cell line, allowing rapid knockdown of protein function.

    Science.gov (United States)

    Park, Arnold; Won, Sohui T; Pentecost, Mickey; Bartkowski, Wojciech; Lee, Benhur

    2014-01-01

    Although modulation of protein levels is an important tool for study of protein function, it is difficult or impossible to knockdown or knockout genes that are critical for cell growth or viability. For such genes, a conditional knockdown approach would be valuable. The FKBP protein-based destabilization domain (DD)-tagging approach, which confers instability to the tagged protein in the absence of the compound Shield-1, has been shown to provide rapid control of protein levels determined by Shield-1 concentration. Although a strategy to knock-in DD-tagged protein at the endogenous loci has been employed in certain parasite studies, partly due to the relative ease of knock-in as a result of their mostly haploid lifecycles, this strategy has not been demonstrated in diploid or hyperploid mammalian cells due to the relative difficulty of achieving complete knock-in in all alleles. The recent advent of CRISPR/Cas9 homing endonuclease-mediated targeted genome cleavage has been shown to allow highly efficient homologous recombination at the targeted locus. We therefore assessed the feasibility of using CRISPR/Cas9 to achieve complete knock-in to DD-tag the essential gene Treacher Collins-Franceschetti syndrome 1 (TCOF1) in human 293T cells. Using a double antibiotic selection strategy to select clones with at least two knock-in alleles, we obtained numerous complete knock-in clones within three weeks of initial transfection. DD-TCOF1 expression in the knock-in cells was Shield-1 concentration-dependent, and removal of Shield-1 resulted in destabilization of DD-TCOF1 over the course of hours. We further confirmed that the tagged TCOF1 retained the nucleolar localization of the wild-type untagged protein, and that destabilization of DD-TCOF1 resulted in impaired cell growth, as expected for a gene implicated in ribosome biogenesis. CRISPR/Cas9-mediated homologous recombination to completely knock-in a DD tag likely represents a generalizable and efficient strategy to

  19. 40 CFR 60.4142 - Hg allowance allocations.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Hg allowance allocations. 60.4142... Coal-Fired Electric Steam Generating Units Hg Allowance Allocations § 60.4142 Hg allowance allocations. (a)(1) The baseline heat input (in MMBtu) used with respect to Hg allowance allocations under...

  20. Cryogenic Liquid Sample Acquisition System for Remote Space Applications

    Science.gov (United States)

    Mahaffy, Paul; Trainer, Melissa; Wegel, Don; Hawk, Douglas; Melek, Tony; Johnson, Christopher; Amato, Michael; Galloway, John

    2013-01-01

    There is a need to autonomously acquire cryogenic hydrocarbon liquid samples from remote planetary locations such as the lakes of Titan for instruments such as mass spectrometers. There are several problems that had to be solved to collect the right amount of cryogenic liquid sample into a warmer spacecraft, such as not allowing the sample to boil off or fractionate too early; controlling the intermediate and final pressures within carefully designed volumes; designing for various particulates and viscosities; designing to thermal, mass, and power-limited spacecraft interfaces; and reducing risk. Prior art inlets for similar instruments in spaceflight were designed primarily for atmospheric gas sampling and are not useful for this front-end application. These cryogenic liquid sample acquisition system designs for remote space applications allow for remote, autonomous, controlled sample collections of a range of challenging cryogenic sample types. The design can control the size of the sample, prevent fractionation, control pressures at various stages, and allow for various liquid sample levels. It is capable of collecting repeated samples autonomously in difficult low-temperature conditions often found in planetary missions. It is capable of collecting samples for use by instruments from difficult sample types such as cryogenic hydrocarbon (methane, ethane, and propane) mixtures with solid particulates such as those found on Titan. The design with a warm actuated valve is compatible with various spacecraft thermal and structural interfaces. The design uses controlled volumes, heaters, inlet and vent tubes, a cryogenic valve seat, inlet screens, temperature and cryogenic liquid sensors, seals, and vents to accomplish its task.

  1. Petrochemical producers gain advantage with novel business strategies

    International Nuclear Information System (INIS)

    Glauthier, T.; Kalkstein, H.; Williamson, R.

    1997-01-01

    After 50 years of gradual change in the petrochemicals industry, the rules of the game are rapidly being rewritten. Parity among competitors has made strategies based on minimizing costs increasingly ineffectual. Some competitors are now finding new business approaches that may allow them to leave others behind. Although the recent upturn in the chemical cycle has brought with it high utilization rates and encouraging financial returns for manufacturers of both petrochemicals and other chemicals, chemical managers need to be aware that the next downturn may erase these gains. The industry has experienced periods of poor financial performance in the past, and there is little reason to expect that the future will bring improvements. Until recently, petrochemical companies have generally pursued strategies focused on optimizing particular portions of the value chain. For the purposes of this article, it is helpful to think of the value chain in terms of four main business segments: feedstocks, products, production processes, and service/distribution. Some chemical companies have managed to avoid a competitive stalemate by developing strategies that have fundamentally changed the way the game is played. Granted, it will still be necessary to pursue maximum efficiency, but the emerging strategies will allow the companies that adopt them to differentiate themselves further than they otherwise could have done. These strategies are discussed.

  2. New strategy for evaluating grain cooking quality of progenies in dry bean breeding programs

    Directory of Open Access Journals (Sweden)

    Bruna Line Carvalho

    2017-04-01

    Full Text Available The methodology available for evaluating the cooking quality of dry beans is impractical for assessing a large number of progenies. The aims of this study were to propose a new strategy for evaluating the cooking quality of grains and to estimate genetic and phenotypic parameters using a selection index. A total of 256 progenies of the 13th cycle of a recurrent selection program were evaluated at three locations for yield, grain type, and cooked grains. Samples of grains from each progeny were placed in a cooker and the percentage of cooked grains was assessed. The new strategy for evaluating cooking quality was efficient because it allowed a nine-fold increase in the number of progenies evaluated per unit time in comparison to available methods. The absence of association between grain yield and percentage of cooked grains or grain type indicated that it is possible to select high yielding lines with excellent grain aspect and good cooking properties using a selection index.

  3. Combining Electrochemical Sensors with Miniaturized Sample Preparation for Rapid Detection in Clinical Samples

    Science.gov (United States)

    Bunyakul, Natinan; Baeumner, Antje J.

    2015-01-01

    Clinical analyses benefit world-wide from rapid and reliable diagnostic tests. New tests are sought with greatest demand not only for new analytes, but also to reduce the costs, complexity and lengthy analysis times of current techniques. Among the myriad of possibilities available today to develop new test systems, amperometric biosensors are prominent players—best represented by the ubiquitous amperometric-based glucose sensors. Electrochemical approaches in general require few, and often only simple, hardware components, are rugged and yet provide low limits of detection. They thus offer many of the desirable attributes for point-of-care/point-of-need tests. This review focuses on investigating the important integration of sample preparation with (primarily electrochemical) biosensors. Sample clean-up requirements, miniaturized sample preparation strategies, and their potential integration with sensors will be discussed, focusing on clinical sample analyses. PMID:25558994

  4. Spatiotemporally Representative and Cost-Efficient Sampling Design for Validation Activities in Wanglang Experimental Site

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2017-11-01

    Full Text Available Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We proposed a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy was constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs were limited to a sampling-feasible region established based on accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram from the sampled ESUs and that from the entire study area was used to assess the sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and the random sampling (OA = 63.1%) strategies. The introduction of the feasible-region constraint significantly reduces the labour-intensive in-situ characterization effort, at the expense of about a 9% loss in the spatiotemporal representativeness of the sampling. Our study will support the validation activities in the Wanglang experimental site, providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet) which need a spatially distributed and temporally fixed sampling design.
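
    The Overlapping Area criterion above is simply the intersection of two normalized frequency histograms: OA = sum over bins of min(f_sample, f_area). A minimal sketch of the computation (the bin count and NDVI values are illustrative assumptions, not the study's data):

```python
import numpy as np

def overlapping_area(sample_values, population_values, bins=20, value_range=(0.0, 1.0)):
    """Overlap between two normalized frequency histograms.

    OA = sum over bins of min(f_sample, f_population), where each f_*
    is a bin frequency normalized to sum to 1. OA = 1 means identical
    distributions; OA = 0 means no overlap at all.
    """
    f_s, _ = np.histogram(sample_values, bins=bins, range=value_range)
    f_p, _ = np.histogram(population_values, bins=bins, range=value_range)
    f_s = f_s / f_s.sum()
    f_p = f_p / f_p.sum()
    return float(np.minimum(f_s, f_p).sum())

# Hypothetical NDVI values: a small set of sampled ESUs vs. all pixels
rng = np.random.default_rng(0)
area_ndvi = rng.beta(5, 2, size=10_000)   # stand-in for the whole study area
esu_ndvi = rng.beta(5, 2, size=60)        # stand-in for the sampled ESUs
print(f"OA = {overlapping_area(esu_ndvi, area_ndvi):.1%}")  # high OA = representative
```

    OA = 1 means the sampled ESUs reproduce the area's NDVI distribution exactly, so values such as the 74.7% and 63.1% reported above can be compared directly on this scale.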

  5. Evaluation of spot and passive sampling for monitoring, flux estimation and risk assessment of pesticides within the constraints of a typical regulatory monitoring scheme.

    Science.gov (United States)

    Zhang, Zulin; Troldborg, Mads; Yates, Kyari; Osprey, Mark; Kerr, Christine; Hallett, Paul D; Baggaley, Nikki; Rhind, Stewart M; Dawson, Julian J C; Hough, Rupert L

    2016-11-01

    In many agricultural catchments of Europe and North America, pesticides occur at generally low concentrations with significant temporal variation. This poses several challenges for both monitoring and understanding ecological risks/impacts of these chemicals. This study aimed to compare the performance of passive and spot sampling strategies given the constraints of typical regulatory monitoring. Nine pesticides were investigated in a river currently undergoing regulatory monitoring (River Ugie, Scotland). Within this regulatory framework, spot and passive sampling were undertaken to understand spatiotemporal occurrence, mass loads and ecological risks. All the target pesticides were detected in water by both sampling strategies. Chlorotoluron was observed to be the dominant pesticide by both spot (maximum: 111.8ng/l, mean: 9.35ng/l) and passive sampling (maximum: 39.24ng/l, mean: 4.76ng/l). The annual pesticide loads were estimated to be 2735g and 1837g based on the spot and passive sampling data, respectively. The spatiotemporal trend suggested that agricultural activities were the primary source of the compounds, with variability in loads explained in large part by the timing of pesticide applications and rainfall. The risk assessment showed chlorotoluron and chlorpyrifos posed the highest ecological risks, with 23% of the chlorotoluron spot samples and 36% of the chlorpyrifos passive samples resulting in a Risk Quotient greater than 0.1. This suggests that mitigation measures might need to be taken to reduce the input of pesticides into the river. The overall comparison of the two sampling strategies supported the hypothesis that passive sampling tends to integrate the contaminants over a period of exposure and allows quantification of contamination at low concentrations. The results suggested that within a regulatory monitoring context passive sampling was more suitable for flux estimation and risk assessment of trace contaminants which cannot be diagnosed by spot sampling.
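
    The annual loads and Risk Quotients reported above follow standard definitions: load is the sum over sampling intervals of concentration times discharge volume, and RQ is the measured concentration divided by the predicted no-effect concentration (PNEC). A minimal sketch with invented numbers (the PNEC, concentrations and flows below are illustrative, not the study's data):

```python
def annual_load_g(conc_ng_per_l, flow_m3_per_s, interval_days):
    """Pesticide mass load in grams over the monitored intervals.

    Each interval contributes (mean concentration) x (discharge volume);
    1 m3 = 1000 L and 1 g = 1e9 ng.
    """
    total_ng = 0.0
    for c, q, d in zip(conc_ng_per_l, flow_m3_per_s, interval_days):
        litres = q * 1000.0 * 86_400 * d   # interval discharge in litres
        total_ng += c * litres
    return total_ng / 1e9

def risk_quotient(mec_ng_per_l, pnec_ng_per_l):
    """RQ = measured concentration / predicted no-effect concentration."""
    return mec_ng_per_l / pnec_ng_per_l

# Illustrative monthly means for one pesticide
conc = [5.0, 12.0, 40.0]   # ng/L
flow = [2.0, 3.5, 1.8]     # m3/s
days = [30, 31, 30]
print(round(annual_load_g(conc, flow, days), 1))   # grams over the three months
print(risk_quotient(40.0, 100.0))                  # hypothetical PNEC of 100 ng/L
```

    An RQ above the 0.1 screening threshold used in the study flags a sample for possible mitigation, as in the chlorotoluron and chlorpyrifos cases above.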

  6. Analysis of submicrogram samples by INAA

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, D J [National Aeronautics and Space Administration, Houston, TX (USA). Lyndon B. Johnson Space Center

    1990-12-20

    Procedures have been developed to increase the sensitivity of instrumental neutron activation analysis (INAA) so that cosmic-dust samples weighing only 10{sup -9}-10{sup -7} g are routinely analyzed for a sizable number of elements. The primary differences from standard techniques are: (1) irradiation of the samples is much more intense, (2) gamma ray assay of the samples is done using long counting times and large Ge detectors that are operated in an excellent low-background facility, (3) specially prepared glass standards are used, (4) samples are too small to be weighed routinely and concentrations must be obtained indirectly, (5) sample handling is much more difficult, and contamination of small samples with normally insignificant amounts of contaminants is difficult to prevent. In spite of the difficulties, INAA analyses have been done on 15 cosmic-dust particles and a large number of other stratospheric particles. Two-sigma detection limits for some elements are in the range of femtograms (10{sup -15} g), e.g. Co=11, Sc=0.9, Sm=0.2. A particle weighing just 0.2 ng was analyzed, obtaining abundances with relative analytical uncertainties of less than 10% for four elements (Fe, Co, Ni and Sc), which were sufficient to allow identification of the particle as chondritic interplanetary dust. Larger samples allow abundances of twenty or more elements to be obtained. (orig.).

  7. Sample vial inserts: A better approach for sampling heterogeneous slurry samples in the SRS Defense Waste Processing Facility

    International Nuclear Information System (INIS)

    Coleman, C.J.; Goode, S.R.

    1996-01-01

    A convenient and effective new approach for analyzing DWPF samples involves the use of inserts with volumes of 1.5--3 ml placed in the neck of 14 ml sample vials. The inserts have rims that conform to the rim of the vials so that they sit straight and stable in the vial. The DWPF tank sampling system fills the pre-weighed insert rather than the entire vial, so the vial functions only as the insert holder. The shielded cell operator then removes the vial cap and decants the insert containing the sample into a plastic bottle, crucible, etc., for analysis. Inert materials such as Teflon, plastic, and zirconium are used for the insert, so it is unnecessary to separate the insert from the sample for most analyses. The key advantage of taking DWPF samples in inserts rather than filling sample vials is that it provides a convenient and almost foolproof way of obtaining and handling small volumes of slurry samples in a shielded cell without corrupting the sample. Since the insert allows the entire sample to be analyzed, this approach eliminates the errors inherent in subsampling the heterogeneous slurries that comprise DWPF samples. Slurry samples can then be analyzed with confidence. Analysis times are dramatically reduced by eliminating the drying and vitrification steps normally used to produce a homogeneous solid sample. Direct dissolution and elemental analysis of slurry samples are achieved in 8 hours or less, compared with 40 hours for analysis of vitrified slurry samples. Comparison of samples taken in inserts versus full vials indicates that the insert does not significantly affect sample composition.

  8. Soil sampling for environmental contaminants

    International Nuclear Information System (INIS)

    2004-10-01

    The Consultants Meeting on Sampling Strategies, Sampling and Storage of Soil for Environmental Monitoring of Contaminants was organized by the International Atomic Energy Agency to evaluate methods for soil sampling in radionuclide monitoring and heavy metal surveys for identification of localized contamination (hot particles) in large-area surveys and screening experiments. A group of experts was invited by the IAEA to discuss and recommend methods for representative soil sampling for different kinds of environmental issues. The ultimate sinks for all kinds of contaminants dispersed within the natural environment through human activities are sediment and soil. Soil is a particularly difficult matrix for environmental pollution studies as it is generally composed of a multitude of geological and biological materials resulting from weathering and degradation, including particles of different sizes with varying surface and chemical properties. There are many different soil types, categorized according to their content of biological matter, from sandy soils to loam and peat soils, which makes analytical characterization even more complicated. Soil sampling for environmental monitoring of pollutants, therefore, is still a matter of debate in the community of soil, environmental and analytical sciences. The scope of the consultants meeting included evaluating existing techniques with regard to their practicability, reliability and applicability to different purposes, developing strategies of representative soil sampling for cases not yet considered by current techniques, and recommending validated techniques applicable to laboratories in developing Member States. This TECDOC includes a critical survey of existing approaches and their feasibility to be applied in developing countries. The report is valuable for radioanalytical laboratories in Member States, as it will assist them in quality control and the accreditation process.

  9. Pricing: A Normative Strategy in the Delivery of Human Services.

    Science.gov (United States)

    Moore, Stephen T.

    1995-01-01

    Discusses a normative strategy toward pricing human services, which will allow providers to develop pricing strategies within the context of organizational missions, goals, and values. Pricing is an effective tool for distributing resources and improving efficiency, and can be used as a tool for encouraging desired patterns of service utilization.…

  10. Predictors of Middle School Students' Use of Self- Handicapping Strategies.

    Science.gov (United States)

    Midgley, Carol; Urdan, Tim

    1995-01-01

    By procrastinating, allowing others to keep them from studying, deliberately not trying, and using other "self-handicapping" strategies, students can convey that those circumstances, rather than lack of ability, are the reasons for subsequent poor performance. Survey data from 256 eighth-grade students indicated that boys use those strategies more…

  11. Comprehensive Study of Human External Exposure to Organophosphate Flame Retardants via Air, Dust, and Hand Wipes: The Importance of Sampling and Assessment Strategy.

    Science.gov (United States)

    Xu, Fuchao; Giovanoulis, Georgios; van Waes, Sofie; Padilla-Sanchez, Juan Antonio; Papadopoulou, Eleni; Magnér, Jorgen; Haug, Line Småstuen; Neels, Hugo; Covaci, Adrian

    2016-07-19

    We compared the human exposure to organophosphate flame retardants (PFRs) via inhalation, dust ingestion, and dermal absorption using different sampling and assessment strategies. Air (indoor stationary air and personal ambient air), dust (floor dust and surface dust), and hand wipes were sampled from 61 participants and their houses. We found that stationary air contains higher levels of ΣPFRs (median = 163 ng/m(3), IQR = 161 ng/m(3)) than personal air (median = 44 ng/m(3), IQR = 55 ng/m(3)), suggesting that the stationary air sample could generate a larger bias for inhalation exposure assessment. Tris(chloropropyl) phosphate isomers (ΣTCPP) accounted for over 80% of ΣPFRs in both stationary and personal air. PFRs were frequently detected in both surface dust (ΣPFRs median = 33 100 ng/g, IQR = 62 300 ng/g) and floor dust (ΣPFRs median = 20 500 ng/g, IQR = 30 300 ng/g). Tris(2-butoxylethyl) phosphate (TBOEP) accounted for 40% and 60% of ΣPFRs in surface and floor dust, respectively, followed by ΣTCPP (30% and 20%, respectively). TBOEP (median = 46 ng, IQR = 69 ng) and ΣTCPP (median = 37 ng, IQR = 49 ng) were also frequently detected in hand wipe samples. For the first time, a comprehensive assessment of human exposure to PFRs via inhalation, dust ingestion, and dermal absorption was conducted with individual personal data rather than reference factors of the general population. Inhalation seems to be the major exposure pathway for ΣTCPP and tris(2-chloroethyl) phosphate (TCEP), while participants had higher exposure to TBOEP and triphenyl phosphate (TPHP) via dust ingestion. Estimated exposure to ΣPFRs was the highest with stationary air inhalation (median = 34 ng·kg bw(-1)·day(-1), IQR = 38 ng·kg bw(-1)·day(-1)), followed by surface dust ingestion (median = 13 ng·kg bw(-1)·day(-1), IQR = 28 ng·kg bw(-1)·day(-1)), floor dust ingestion and personal air inhalation. The median dermal exposure on hand wipes was 0.32 ng·kg bw(-1)·day(-1) (IQR
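
    The exposure estimates above (in ng·kg bw(-1)·day(-1)) come from the usual intake arithmetic: air concentration times inhalation rate, or dust concentration times dust-ingestion rate, divided by body weight. A sketch with generic adult reference factors (the study itself used per-participant data; the 16 m3/day, 0.05 g/day and 70 kg values here are illustrative assumptions):

```python
def inhalation_dose(c_air_ng_m3, inhal_rate_m3_day, bw_kg):
    """Daily inhalation intake in ng per kg body weight per day."""
    return c_air_ng_m3 * inhal_rate_m3_day / bw_kg

def dust_ingestion_dose(c_dust_ng_g, dust_intake_g_day, bw_kg):
    """Daily intake from incidental dust ingestion, ng/(kg bw x day)."""
    return c_dust_ng_g * dust_intake_g_day / bw_kg

# Illustrative adult defaults, not the per-participant data the study used
bw = 70.0   # kg body weight
print(round(inhalation_dose(163.0, 16.0, bw), 1))         # stationary-air median
print(round(dust_ingestion_dose(20_500.0, 0.05, bw), 1))  # floor dust, 50 mg/day
```

    With these generic factors, the stationary-air and floor-dust medians above (163 ng/m(3) and 20 500 ng/g) give roughly 37 and 15 ng·kg bw(-1)·day(-1), the same order as the reported medians of 34 and 13, which illustrates why per-participant factors shift the estimates.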

  12. Facebook or Twitter?: Effective recruitment strategies for family caregivers.

    Science.gov (United States)

    Herbell, Kayla; Zauszniewski, Jaclene A

    2018-06-01

    This brief details recent recruitment insights from a large all-online study of family caregivers that aimed to develop a measure to assess how family caregivers manage daily stresses. Online recruitment strategies included the use of Twitter and Facebook. Overall, 800 individuals responded to the recruitment strategy; 230 completed all study procedures. The most effective online recruitment strategy for targeting family caregivers was Facebook, yielding 86% of the sample. Future researchers may find the use of social media recruitment methods appealing because they are inexpensive, simple, and efficient methods for obtaining national samples. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Maternal Attachment Strategies and Emotion Regulation with Adolescent Offspring.

    Science.gov (United States)

    Kobak, Roger; And Others

    1994-01-01

    Examined the relationship between mothers' attachment strategies and emotion regulation in a sample of 42 families with 2 high school-aged siblings. Found that mothers with preoccupied strategies had difficulty regulating emotion during conversations with their older teenagers about them leaving home. Mothers with secure strategies perceived their…

  14. Dose reduction strategies for cardiac CT

    International Nuclear Information System (INIS)

    Midgley, S.M.; Einsiedel, P.; Langenberg, F.; Lui, E.

    2010-01-01

    Full text: Recent advances in CT technology have produced brighter X-ray sources, gantries capable of increased rotation speeds, faster scintillation materials arranged into multiple rows of detectors, and associated advances in 3D reconstruction methods. These innovations have allowed multi-detector CT to be turned to the diagnosis of cardiac abnormalities and complement traditional imaging techniques such as coronary angiography. This study examines the cardiac imaging solution offered by the Siemens Somatom Definition Dual Source 64 slice CT scanner. Our dose reduction strategies involve optimising the data acquisition protocols according to diagnostic task, patient size and heart rate. The relationship between scan parameters, image quality and patient dose is examined and verified against measurements with phantoms representing the standard size patient. The dose reduction strategies are reviewed with reference to survey results of patient dose. Some cases allow the insertion of shielding to protect radiosensitive organs, and results are presented to quantify the dose saving.

  15. Allowance Holdings and Transfers Data Inventory

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Allowance Holdings and Transfers Data Inventory contains measured data on holdings and transactions of allowances under the NOx Budget Trading Program (NBP), a...

  16. CHOMIK -Sampling Device of Penetrating Type for Russian Phobos Sample Return Mission

    Science.gov (United States)

    Seweryn, Karol; Grygorczuk, Jerzy; Rickmann, Hans; Morawski, Marek; Aleksashkin, Sergey; Banaszkiewicz, Marek; Drogosz, Michal; Gurgurewicz, Joanna; Kozlov, Oleg E.; Krolikowska-Soltan, Malgorzata; Sutugin, Sergiej E.; Wawrzaszek, Roman; Wisniewski, Lukasz; Zakharov, Alexander

    Measurements of the physical properties of planetary bodies allow scientists working in different fields of research to determine many important parameters. For example, the effective heat conductivity of the regolith can help with better understanding of processes occurring in the body's interior. Chemical and mineralogical composition gives us a chance to better understand the origin and evolution of the moons. In principle, such parameters of planetary bodies can be determined by three different measurement techniques: (i) in situ measurements, (ii) measurements of samples under laboratory conditions on Earth, and (iii) remote sensing measurements. Scientific missions which allow us to perform all types of measurements give us a chance not only to determine parameters but also to cross-calibrate the instruments. The Russian Phobos Sample Return (PhSR) mission is one of the few which allows all such types of measurements. The spacecraft will be equipped with remote sensing instruments (spectrometers, a long-wave radar and a dust counter), instruments for in-situ measurements (a gas chromatograph, seismometer, thermodetector and others), and also a robotic arm and sampling device. The PhSR mission will be launched in November 2011 on board a Zenit launch vehicle. About a year (11 months) later the vehicle will reach Martian orbit. It is anticipated that it will land on Phobos at the beginning of 2013. Take-off for the return will occur a month later, and the re-entry module containing a capsule that will hold the soil sample enclosed in a container will be on its way back to Earth. The 11 kg re-entry capsule with the container will land in Kazakhstan in mid-2014. A unique geological penetrator, CHOMIK, dedicated to the Phobos Sample Return space mission will be designed and manufactured at the Space Mechatronics and Robotics Laboratory, Space Research Centre Polish Academy of Sciences (SRC PAS) in Warsaw. Functionally, CHOMIK is based on the well-known MUPUS

  17. Forecasting the Allocation Ratio of Carbon Emission Allowance Currency for 2020 and 2030 in China

    Directory of Open Access Journals (Sweden)

    Shihong Zeng

    2016-07-01

    Full Text Available Many countries and scholars have used various strategies to improve and optimize the allocation ratios for carbon emission allowances. This issue is more urgent for China due to the uneven development across the country. This paper proposes a new method that divides low-carbon economy development processes into two separate periods: from 2020 to 2029 and from 2030 to 2050. These two periods have unique requirements and emissions reduction potential; therefore, they must involve different allocation methods, so that reduction behaviors do not stall the development of regional low-carbon economies. During the first period, a more deterministic economic development approach for the carbon emission allowance allocation ratio should be used. During the second period, more adaptive and optimized policy guidance should be employed. We developed a low-carbon economy index evaluation system using the entropy weight method to measure information filtering levels. We conducted vector autoregressive correlation tests, consulted 60 experts for the fuzzy analytic hierarchy process, and we conducted max-min standardized data processing tests. This article presents first- and second-period carbon emission allowance models in combination with a low-carbon economy index evaluation system. Finally, we forecast reasonable carbon emission allowance allocation ratios for China for the periods starting in 2020 and 2030. A good allocation ratio for the carbon emission allowance can help boost China’s economic development and help the country reach its energy conservation and emissions reduction goals.
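
    The entropy weight method mentioned above assigns indicator weights from the information content of the data itself: indicators that vary little across alternatives carry high entropy and receive low weight. A minimal sketch of the standard procedure (the 4x3 indicator matrix is invented for illustration; benefit-type, non-negative indicators are assumed):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (n alternatives x m indicators) matrix.

    Steps: normalize each column to shares p_ij, compute the entropy
    e_j = -(1/ln n) * sum_i p_ij ln p_ij, then weight each indicator by
    its divergence d_j = 1 - e_j, normalized to sum to 1.
    """
    X = np.asarray(X, dtype=float)
    n, _ = X.shape
    p = X / X.sum(axis=0)              # share of each alternative per indicator
    k = 1.0 / np.log(n)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    e = -k * plogp.sum(axis=0)         # entropy per indicator, in [0, 1]
    d = 1.0 - e                        # degree of divergence
    return d / d.sum()

# Toy matrix: 4 regions x 3 low-carbon indicators (illustrative numbers)
X = [[0.9, 0.2, 30.0],
     [0.8, 0.2, 31.0],
     [0.1, 0.2, 29.0],
     [0.5, 0.2, 30.0]]
print(np.round(entropy_weights(X), 3))  # most weight on the most variable indicator
```

    Note how the constant second indicator carries no information and so receives (near-)zero weight; this data-driven weighting is what "measuring information filtering levels" refers to above.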

  18. 40 CFR 82.20 - Availability of consumption allowances in addition to baseline consumption allowances for class...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Availability of consumption allowances in addition to baseline consumption allowances for class II controlled substances. 82.20 Section 82...) PROTECTION OF STRATOSPHERIC OZONE Production and Consumption Controls § 82.20 Availability of consumption...

  19. Building up Autonomy Through Reading Strategies

    Directory of Open Access Journals (Sweden)

    Alexander Izquierdo Castillo

    2014-10-01

    Full Text Available This article reports on an action research project conducted with six ninth grade students in a rural public school in Colombia. The purpose of the study was to determine how the implementation of three reading strategies (skimming, scanning, and making predictions), when reading topics selected by learners, helps them to improve their reading comprehension and promotes their autonomy in the learning process. The results show that these learners developed some autonomous features such as making decisions for learning and doing assigned homework, increasing reading awareness and motivation. Additionally, the training on reading strategies allowed them to succeed in their reading comprehension. We conclude that these reading strategies are tools that take learners along the path of autonomy.

  20. Service Quality Strategy: Implementation in Algarve Hotels

    OpenAIRE

    Carlos J. F. Cândido

    2010-01-01

    This chapter addresses the problem of service quality strategy implementation and undertakes a tentative validation of three models. The first focuses on service quality, as a function of quality gaps, while the second and third ones examine strategy implementation. The models aim to help to explain how to implement a service quality strategy that simultaneously avoids quality gaps and resistance to change. Sample data has been collected through questionnaires distributed within the p...

  1. Hemodialysis: stressors and coping strategies.

    Science.gov (United States)

    Ahmad, Muayyad M; Al Nazly, Eman K

    2015-01-01

    End-stage renal disease (ESRD) is an irreversible and life-threatening condition. In Jordan, the number of ESRD patients treated with hemodialysis is on the rise. Identifying stressors and coping strategies used by patients with ESRD may help nurses and health care providers to gain a clearer understanding of the condition of these patients and thus institute effective care planning. The purpose of this study was to identify stressors perceived by Jordanian patients on hemodialysis, and the coping strategies used by them. A convenience sample of 131 Jordanian men and women was recruited from outpatient dialysis units in four hospitals. Stressors perceived by participants on hemodialysis and their coping strategies were measured using the Hemodialysis Stressor Scale and the Ways of Coping Scale-Revised. Findings showed that the mean psychosocial stressor score of patients on hemodialysis was higher than the mean physiological stressor score. Among the coping strategies, positive reappraisal had the highest mean and accepting responsibility the lowest. Attention should be focused on the psychosocial stressors of patients on hemodialysis and on helping patients utilize the coping strategies that help to alleviate the stressors. The most used coping strategy was positive reappraisal, which includes faith and prayer.

  2. Lunar Sample Compendium

    Science.gov (United States)

    Meyer, Charles

    2005-01-01

    The purpose of the Lunar Sample Compendium will be to inform scientists, astronauts and the public about the various lunar samples that have been returned from the Moon. This Compendium will be organized rock by rock in the manner of a catalog, but will not be as comprehensive, nor as complete, as the various lunar sample catalogs that are available. Likewise, this Compendium will not duplicate the various excellent books and reviews on the subject of lunar samples (Cadogen 1981, Heiken et al. 1991, Papike et al. 1998, Warren 2003, Eugster 2003). However, it is thought that an online Compendium, such as this, will prove useful to scientists proposing to study individual lunar samples and should help provide backup information for lunar sample displays. This Compendium will allow easy access to the scientific literature by briefly summarizing the significant findings of each rock along with the documentation of where the detailed scientific data are to be found. In general, discussion and interpretation of the results is left to the formal reviews found in the scientific literature. An advantage of this Compendium will be that it can be updated, expanded and corrected as need be.

  3. Natural strategies for photosynthetic light harvesting

    NARCIS (Netherlands)

    Croce, R.; van Amerongen, H.

    2014-01-01

    Photosynthetic organisms are crucial for life on Earth as they provide food and oxygen and are at the basis of most energy resources. They have a large variety of light-harvesting strategies that allow them to live nearly everywhere that sunlight can penetrate. They have adapted their pigmentation

  4. An optimal tuning strategy for tidal turbines.

    Science.gov (United States)

    Vennell, Ross

    2016-11-01

    Tuning wind and tidal turbines is critical to maximizing their power output. Adopting a wind turbine tuning strategy of maximizing the output at any given time is shown to be an extremely poor strategy for large arrays of tidal turbines in channels. This 'impatient-tuning strategy' results in far lower power output, much higher structural loads and greater environmental impacts due to flow reduction than an existing 'patient-tuning strategy' which maximizes the power output averaged over the tidal cycle. This paper presents a 'smart patient tuning strategy', which can increase array output by up to 35% over the existing strategy. This smart strategy forgoes some power generation early in the half tidal cycle in order to allow stronger flows to develop later in the cycle. It extracts enough power from these stronger flows to produce more power from the cycle as a whole than the existing strategy. Surprisingly, the smart strategy can often extract more power without increasing maximum structural loads on the turbines, while also maintaining stronger flows along the channel. This paper also shows that, counterintuitively, for some tuning strategies imposing a cap on turbine power output to limit loads can increase a turbine's average power output.

  5. Clean Air Markets - Allowances Query Wizard

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Allowances Query Wizard is part of a suite of Clean Air Markets-related tools that are accessible at http://camddataandmaps.epa.gov/gdm/index.cfm. The Allowances...

  6. Comparison of Value Generation Strategies Between Planned and Emerging Strategies: A Study Based on Games of Companies

    Directory of Open Access Journals (Sweden)

    Marcos Paixão Garcez

    2012-06-01

    Full Text Available This study aims to analyze the economic results of planned strategies compared to emergent strategies in decision-making. The theoretical background emphasizes several aspects: the evolution of the strategy concept over time, the typology of strategies proposed by Mintzberg, the comparison between competition and cooperation, and the use of a business simulator as a tool for business research. As a controlled experiment, the EGS simulator (Simulated Management Exercise) allowed comparison of the economic results of the two decision-making situations. The findings show that when planned strategies were implemented without corrections, the value generated (expressed by the internal rate of return, IRR = 1.51%) was greater than in the case of emerging strategies adjusted in three periods (IRR = 1.40%). Comparing the two situations, there is a value-added advantage of 7.86% in favor of the planned strategies, suggesting that competition may be responsible for the decrease in value in a real environment. Analyzing the performance levels reached by the competitors, the ranking results show no association between planned and emerging strategies. Although business simulators can be considered weak approximations of the business environment, the experiment contributed new evidence of increased competition in oligopoly industries and a new methodological approach for studying this phenomenon.

  7. Research-Grade 3D Virtual Astromaterials Samples: Novel Visualization of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Benefit Curation, Research, and Education

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K. R.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2017-01-01

    NASA's vast and growing collections of astromaterials are both scientifically and culturally significant, requiring unique preservation strategies that need to be recurrently updated to contemporary technological capabilities and increasing accessibility demands. New technologies have made it possible to advance documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. Our interdisciplinary team has developed a method to create 3D Virtual Astromaterials Samples (VAS) of the existing collections of Apollo Lunar Samples and Antarctic Meteorites. Research-grade 3D VAS will virtually put these samples in the hands of researchers and educators worldwide, increasing accessibility and visibility of these significant collections. With new sample return missions on the horizon, it is of primary importance to develop advanced curation standards for documentation and visualization methodologies.

  8. Simultaneous Release and Labeling of O- and N-Glycans Allowing for Rapid Glycomic Analysis by Online LC-UV-ESI-MS/MS.

    Science.gov (United States)

    Wang, Chengjian; Lu, Yu; Han, Jianli; Jin, Wanjun; Li, Lingmei; Zhang, Ying; Song, Xuezheng; Huang, Linjuan; Wang, Zhongfu

    2018-05-24

    Most glycoproteins and biological protein samples undergo both O- and N-glycosylation, making characterization of their structures very complicated and time-consuming. Nevertheless, to fully understand the biological functions of glycosylation, both glycosylation forms need to be analyzed. Herein we report a versatile, convenient one-pot method in which O- and N-glycans are simultaneously released from glycoproteins and chromogenically labeled in situ, and are thus available for further characterization. In this procedure, glycoproteins are incubated with 1-phenyl-3-methyl-5-pyrazolone (PMP) in aqueous ammonium hydroxide, releasing O-glycans from the protein backbones by β-elimination and liberating N-glycans by alkaline hydrolysis. The released glycans are promptly derivatized with PMP in situ by Knoevenagel condensation and Michael addition, with peeling degradation almost completely prevented. The recovered mixture of O- and N-glycans as bis-PMP derivatives features strong ultraviolet (UV) absorbance and hydrophobicity, allowing high-resolution chromatographic separation and high-sensitivity spectrometric detection. Using this technique, O- and N-glycans were simultaneously prepared from model glycoproteins and complex biological samples, without significant peeling, desialylation, deacetylation, desulfation or other side reactions, and were then comprehensively analyzed by online HILIC-UV-ESI-MS/MS and RP-HPLC-UV-ESI-MS/MS, through which some novel O- and N-glycan structures were found for the first time. This method provides a simple, versatile strategy for high-throughput glycomic analysis.

  9. Stress and coping strategies in a sample of South African managers involved in post-graduate managerial studies

    Directory of Open Access Journals (Sweden)

    Judora J. Spangenberg

    2000-06-01

    Full Text Available To examine the relationships between stress levels and, respectively, stressor appraisal, coping strategies and biographical variables, 107 managers completed a biographical questionnaire, the Experience of Work and Life Circumstances Questionnaire, and the Coping Strategy Indicator. Significant negative correlations were found between stress levels and appraisal scores on all work-related stressors. An avoidant coping strategy explained significant variance in stress levels in a model also containing social support-seeking and problem-solving coping strategies. It was concluded that an avoidant coping strategy probably contributed to increased stress levels. Female managers experienced significantly higher stress levels and used a social support-seeking coping strategy significantly more than male managers did.

  10. Comprehensive national energy strategy

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-04-01

    This Comprehensive National Energy Strategy sets forth a set of five common sense goals for national energy policy: (1) improve the efficiency of the energy system, (2) ensure against energy disruptions, (3) promote energy production and use in ways that respect health and environmental values, (4) expand future energy choices, and (5) cooperate internationally on global issues. These goals are further elaborated by a series of objectives and strategies to illustrate how the goals will be achieved. Taken together, the goals, objectives, and strategies form a blueprint for the specific programs, projects, initiatives, investments, and other actions that will be developed and undertaken by the Federal Government, with significant emphasis on the importance of the scientific and technological advancements that will allow implementation of this Comprehensive National Energy Strategy. Moreover, the statutory requirement of regular submissions of national energy policy plans ensures that this framework can be modified to reflect evolving conditions, such as better knowledge of our surroundings, changes in energy markets, and advances in technology. This Strategy, then, should be thought of as a living document. Finally, this plan benefited from the comments and suggestions of numerous individuals and organizations, both inside and outside of government. The Summary of Public Comments, located at the end of this document, describes the public participation process and summarizes the comments that were received. 8 figs.

  11. Relationship with Parents and Coping Strategies in Adolescents of Lima

    Directory of Open Access Journals (Sweden)

    Tomás P. Caycho

    2016-04-01

    Full Text Available This correlational and comparative study aims to determine the relationship between the perception of the relationship with parents and coping strategies in a sample of 320 students chosen through non-probabilistic sampling: 156 men (48.75%) and 164 women (51.25%). To that end, the Children's Report of Parental Behavior Inventory and the Adolescent Coping Scale were used as information-gathering instruments. The results suggest that there are statistically significant correlations between some dimensions of the perception of the relationship with parents and coping strategies in the sample studied. Finally, with regard to the perception of the parenting styles of both mother and father, no significant differences were found between men and women, except for extreme autonomy of the father, on which men score higher than women. There were no statistically significant differences in coping strategies in relation to gender.

  12. A soil-specific agro-ecological strategy for sustainable production in Argentina farm fields

    Science.gov (United States)

    Zamora, Martin; Barbera, Agustin; Castro-Franco, Mauricio; Hansson, Alejandro; Domenech, Marisa

    2017-04-01

    The continuous increase in the frequency and dosage of pesticides, glyphosate and fertilizers, the deterioration of soil structure, biotic balance and fertility, and groundwater pollution are characteristics of the current Argentinian agricultural model. In this context, agro-ecological innovations are needed to develop a truly sustainable agriculture while enhancing the food supply. Precision agriculture technologies can strengthen the expansion of agro-ecological farming in experimental farm fields. The aim of this study was to propose a soil-specific agro-ecological strategy for sustainable production at field scale, focused on the use of soil sensors and digital soil mapping techniques. This strategy was developed on a 15-hectare transitional agro-ecological farm field located at the Barrow Experimental Station (Lat: -38.322844, Lon: -60.25572), Argentina. The strategy included five steps: (i) measure apparent electrical conductivity (ECa) and elevation within the agro-ecological farm field; (ii) apply a clustering method using the MULTISPATI-PCA algorithm to delimit three soil-specific zones (Z1, Z2 and Z3); (iii) determine three soil sampling points per zone using the conditioned Latin hypercube method, with elevation and ECa as auxiliary information; (iv) collect soil samples at 2-10 cm depth at each point and determine in the laboratory: total organic carbon content (TOC), cation-exchange capacity (CEC), pH and phosphorus availability (P-Bray); in addition, soil bulk density (SBD) was measured at 0-20 cm depth. Finally, (v) a management strategy was recommended for each soil-specific zone. Important differences in soil properties among zones suggest that the strategy developed was able to support a soil-specific agro-ecological practice management. pH and P-Bray were significantly (pfertilizer and also rotating plots with high stocking rate. The aim is to increase soil organic matter content and CEC. 
Furthermore, P content will be
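
    The zoning step (ii) above can be sketched with a generic clustering routine. The following is a minimal sketch that uses plain k-means with deterministic farthest-first seeding as a stand-in for the MULTISPATI-PCA clustering actually used in the study; the (ECa, elevation) values are hypothetical.

```python
def zone_clusters(points, k=3, n_iter=20):
    """Plain k-means with farthest-first seeding, as a simple stand-in
    for the MULTISPATI-PCA clustering in the study: group field cells
    into k management zones from (ECa, elevation) measurements."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    # farthest-first initialization keeps the sketch deterministic
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))

    assign = [0] * len(points)
    for _ in range(n_iter):
        # assign each cell to its nearest center, then recompute centers
        assign = [min(range(k), key=lambda c: d2(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return assign

# Hypothetical (ECa in mS/m, elevation in m) values for six field cells
field = [(12.0, 101.0), (12.5, 101.2), (30.0, 99.0),
         (31.0, 99.2), (55.0, 97.5), (54.0, 97.8)]
zones = zone_clusters(field, k=3)
```

    Each cell ends up labeled with a zone index; in a real workflow the zone map would then drive the conditioned Latin hypercube sampling of step (iii).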

  13. An energy-efficient adaptive sampling scheme for wireless sensor networks

    NARCIS (Netherlands)

    Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.

    2013-01-01

    Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy to reduce number of sampling nodes and/or sampling frequencies while
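
    The general idea of exploiting temporal correlation can be sketched as follows; this is a generic adaptive-sampling rule, not the paper's algorithm, and the threshold and interval bounds are illustrative.

```python
def adapt_interval(readings, interval, threshold=0.5,
                   min_interval=1, max_interval=60):
    """Generic adaptive-sampling rule (not the paper's algorithm):
    if recent readings are stable, lengthen the sampling interval
    to save energy; if they change quickly, shorten it."""
    if len(readings) < 2:
        return interval
    change = abs(readings[-1] - readings[-2])
    if change < threshold:
        return min(interval * 2, max_interval)   # stable: back off
    return max(interval // 2, min_interval)      # changing: sample faster

iv = 8
iv = adapt_interval([20.0, 20.1], iv)   # stable readings -> iv becomes 16
iv = adapt_interval([20.1, 23.0], iv)   # rapid change    -> iv back to 8
```

    A node running this loop samples rarely when the signal is flat and densely when it varies, which is the energy/fidelity trade-off the record describes.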

  14. Optimal Bidding and Operation of a Power Plant with Solvent-Based Carbon Capture under a CO2 Allowance Market: A Solution with a Reinforcement Learning-Based Sarsa Temporal-Difference Algorithm

    Directory of Open Access Journals (Sweden)

    Ziang Li

    2017-04-01

    Full Text Available In this paper, a reinforcement learning (RL-based Sarsa temporal-difference (TD algorithm is applied to search for a unified bidding and operation strategy for a coal-fired power plant with monoethanolamine (MEA-based post-combustion carbon capture under different carbon dioxide (CO2 allowance market conditions. The objective of the decision maker for the power plant is to maximize the discounted cumulative profit during the power plant lifetime. Two constraints are considered for the objective formulation. Firstly, the tradeoff between the energy-intensive carbon capture and the electricity generation should be made under presumed fixed fuel consumption. Secondly, the CO2 allowances purchased from the CO2 allowance market should be approximately equal to the quantity of CO2 emission from power generation. Three case studies are demonstrated thereafter. In the first case, we show the convergence of the Sarsa TD algorithm and find a deterministic optimal bidding and operation strategy. In the second case, compared with the independently designed operation and bidding strategies discussed in most of the relevant literature, the Sarsa TD-based unified bidding and operation strategy with time-varying flexible market-oriented CO2 capture levels is demonstrated to help the power plant decision maker gain a higher discounted cumulative profit. In the third case, a competitor operating another power plant identical to the preceding plant is considered under the same CO2 allowance market. The competitor also has carbon capture facilities but applies a different strategy to earn profits. The discounted cumulative profits of the two power plants are then compared, thus exhibiting the competitiveness of the power plant that is using the unified bidding and operation strategy explored by the Sarsa TD algorithm.
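
    The Sarsa temporal-difference update at the core of such an approach can be sketched on a toy problem. This is a generic tabular Sarsa with epsilon-greedy exploration and random starts on a hypothetical 5-state chain, not the paper's power plant and CO2-market model.

```python
import random

def sarsa(n_states, n_actions, step, n_episodes=3000,
          alpha=0.1, gamma=0.95, epsilon=0.2, seed=0):
    """Tabular Sarsa: on-policy temporal-difference control with
    epsilon-greedy action selection and random (exploring) starts."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def choose(s):
        if rng.random() < epsilon:
            return rng.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])

    for _ in range(n_episodes):
        s = rng.randrange(n_states)   # exploring starts
        a = choose(s)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = choose(s2)
            # on-policy target: uses the action actually chosen next
            target = r + (0.0 if done else gamma * Q[s2][a2])
            Q[s][a] += alpha * (target - Q[s][a])
            s, a = s2, a2
    return Q

# Hypothetical 5-state chain: action 1 walks right toward a unit
# reward at the far end; action 0 quits immediately with no reward.
def step(s, a):
    if a == 0:
        return s, 0.0, True
    if s == 4:
        return s, 1.0, True
    return s + 1, 0.0, False

Q = sarsa(n_states=5, n_actions=2, step=step)
```

    After training, the learned Q-values favor walking toward the reward from every state, which is the same bootstrapped credit assignment that lets the paper's agent learn a joint bidding/operation policy from cumulative profit.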

  15. Effective Teaching Strategies for Predicting Reading Growth in English Language Learners

    Science.gov (United States)

    Melgarejo, Melina

    2017-01-01

    The goal of the present study was to examine how effective use of teaching strategies predict reading growth among a sample of English Language Learners. The study specifically examined whether the types of teaching strategies that predict growth in decoding skills also predict growth in comprehension skills. The sample consisted of students in…

  16. A responsible remediation strategy

    International Nuclear Information System (INIS)

    Knowles, C.R.

    1992-01-01

    This paper deals with an approach to cleaning up the residue of 150 years of intense urban and industrial development in the United States. The discussion focuses on several choices and strategies that business can adopt given the existing environmental laws and the socio-economic trends of the 1990's. The thesis of this paper is that the best business strategy for dealing with environmental liabilities is to act affirmatively and aggressively. An aggressive, pro-active approach to environmental remediation liabilities makes good business sense. It allows a company to learn the true size of the problem early. Early assessment and prioritization allows one to control the course and conduct of the cleanup. Early voluntary action is always viewed favorably by agencies. It gives one control over spending patterns which has value in and of itself. Voluntary cleanups are certainly faster and invariably more efficient. And they attain clearly acceptable standards. The volunteering company that takes the lead in a multi-party site finds that the courts are supportive in helping the volunteer collect from recalcitrant polluters. All of these pluses have a direct and positive impact on the bottom line and that means that the aggressive approach is the right thing to do for both stockholders and the communities where a business exists

  17. Sensemaking Strategies for Ethical Decision-making.

    Science.gov (United States)

    Caughron, Jay J; Antes, Alison L; Stenmark, Cheryl K; Thiel, Chaise E; Wang, Xiaoqian; Mumford, Michael D

    2011-01-01

    The current study uses a sensemaking model and thinking strategies identified in earlier research to examine ethical decision-making. Using a sample of 163 undergraduates, a low-fidelity simulation approach is used to study the effects that personal involvement (in causing the problem and in experiencing its outcomes) could have on the use of cognitive reasoning strategies that have been shown to promote ethical decision-making. A mediated model is presented which suggests that environmental factors influence reasoning strategies, reasoning strategies influence sensemaking, and sensemaking in turn influences ethical decision-making. Findings were mixed but generally supported the hypothesized model. Interestingly, framing the outcomes of ethically charged situations in terms of global organizational outcomes rather than personal outcomes was found to promote the use of pro-ethical cognitive reasoning strategies.

  18. Lateral sample motion in the plate-rod impact experiments

    International Nuclear Information System (INIS)

    Zaretsky, Eugene; Levi-Hevroni, David; Shvarts, Dov; Ofer, Dror

    2000-01-01

    The velocity of the lateral motion of cylindrical samples (9 mm diameter, 20 mm length) impacted by WHA impactors of 5 mm thickness was monitored by VISAR at different points of the sample surface, at distances of 1 to 4 mm from the impacted edge. The impactors were accelerated in a 25-mm pneumatic gun up to velocities of about 300 m/sec. Integrating the VISAR data recorded at the different surface points after impact at the same velocity allows the changes in sample shape during the initial period of deformation to be obtained. The character of the lateral motion was found to differ between samples made of WHA and of the commercial titanium alloy Ti-6Al-4V. 2-D numerical simulation of the impact leads to the conclusion that the work hardening of the alloys is responsible for this difference.

  19. Developmental Strategy For Effective Sampling To Detect Possible Nutrient Fluxes In Oligotrophic Coastal Reef Waters In The Caribbean

    Science.gov (United States)

    Mendoza, W. G.; Corredor, J. E.; Ko, D.; Zika, R. G.; Mooers, C. N.

    2008-05-01

    The increasing effort to develop coastal ocean observing systems (COOS) at various institutions has gained momentum due to their high value for climate, environmental, economic, and health issues. The stress contributed by nutrients to the coral reef ecosystem is among the many problems targeted for resolution with this system. Traditional nutrient sampling has been inadequate for resolving episodic nutrient fluxes in reef regions because of temporal and spatial variability. This paper illustrates a sampling strategy that uses COOS information to identify areas needing critical investigation. The area investigated is within the Puerto Rico subdomain (60-70°W, 15-20°N), and the Caribbean Time Series (CaTS), World Ocean Circulation Experiment (WOCE), Intra-Americas Sea (IAS) ocean nowcast/forecast system (IASNFS), and other COOS-related online datasets are utilized. Nutrient profiles indicate that nitrate is undetectable in the upper 50 m, apparently due to high biological consumption. Nutrients are delivered to Puerto Rico, particularly at the CaTS station, either via a meridional jet formed from opposing cyclonic and anticyclonic eddies or by wind-driven upwelling. The strong vertical fluctuation in the upper 50 m shows a high anomaly in temperature and salinity and a strong cross-correlation signal. High chlorophyll a concentration, corresponding to seasonally high nutrient influx, coincides with higher precipitation accumulation rates and apparent riverine input from the Amazon and Orinoco Rivers during the summer (August) rather than the winter (February) season. The non-detectability of nutrients in the upper 50 m reflects poor sampling frequency or the absence of a nutrient analysis method sensitive enough to capture episodic events. 
Thus, this paper was able to determine the range of depths and concentrations that need to be critically investigated to determine nutrient fluxes, nutrient sources, and climatological factors that can affect nutrient delivery

  20. Comparison of strategies for substantiating freedom from scrapie in a sheep flock.

    Science.gov (United States)

    Durand, Benoit; Martinez, Marie-José; Calavas, Didier; Ducrot, Christian

    2009-04-30

    The public health threat represented by the potential circulation of the bovine spongiform encephalopathy agent in the sheep population has led European animal health authorities to launch large screening and genetic selection programmes. If demonstrated, such circulation would have dramatic economic consequences for the sheep-breeding sector. In this context, it is important to evaluate the feasibility of qualification procedures that would allow sheep breeders to demonstrate that their flocks are free from scrapie. Classical approaches, based on surveys designed to detect disease presence, do not account for the specificities of scrapie: genetic variation in susceptibility and the absence of a routinely available live diagnostic test. Adapting these approaches leads to a paradoxical situation in which more testing is needed to substantiate disease freedom in genetically resistant flocks than in susceptible flocks, whereas the probability of disease freedom is a priori higher in the former than in the latter. The goal of this study was to propose, evaluate and compare several qualification strategies for demonstrating that a flock is free from scrapie. A probabilistic framework was defined that accounts for the specificities of scrapie and resolves the preceding paradox. Six qualification strategies were defined that combine genotyping data, diagnostic test results and flock pedigree. These were compared in two types of simulated flocks: resistant and susceptible. Two strategies allowed disease freedom to be demonstrated within several years for the majority of simulated flocks: one in which all the flock's animals are genotyped, and one in which only founder animals are genotyped, the flock pedigree being known. In both cases, diagnostic tests are performed on culled animals. The least costly strategy varied according to the genetic context (resistant or susceptible) and to the relative costs of a genotyping exam and a diagnostic test. This work demonstrates that
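
    The paradox can be illustrated with the classical freedom-from-disease survey calculation, used here as a simplified stand-in for the paper's probabilistic framework: the confidence gained from a fixed number of tests falls as the design prevalence drops, which is what happens when a flock's genotypes are resistant. The prevalences and test sensitivity below are hypothetical.

```python
def prob_detect(n_tested, design_prevalence, sensitivity=0.9):
    """Classical survey calculation (not the paper's model): probability
    that testing n animals detects disease present at the design
    prevalence, given imperfect test sensitivity."""
    p_miss_one = 1.0 - design_prevalence * sensitivity
    return 1.0 - p_miss_one ** n_tested

# A resistant genotype lowers the effective prevalence among tested
# animals, so the same 60 tests buy far less confidence: the paradox
# described above.
conf_susceptible = prob_detect(60, design_prevalence=0.05)
conf_resistant = prob_detect(60, design_prevalence=0.01)
```

    With these illustrative numbers, 60 tests give high confidence in a susceptible flock but well under 50% confidence in a resistant one, motivating strategies that bring genotyping data and pedigree into the calculation.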

  1. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was obtained using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
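
    The sampling-based propagation itself can be sketched with a toy analytic model standing in for the MCNPX transport calculation: draw the uncertain input, push each draw through the model, and summarize the output spread. The linear response coefficient and the 1σ input uncertainty below are hypothetical.

```python
import random
import statistics

def propagate(model, draw_input, n, seed=0):
    """Sampling-based uncertainty propagation: push n random input
    draws through the model and summarize the output distribution."""
    rng = random.Random(seed)
    outputs = [model(draw_input(rng)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy analytic stand-in for the MCNPX run: k_eff responds linearly
# to a perturbed poison radius (coefficients are hypothetical).
def model(radius_cm):
    return 1.0 - 0.05 * (radius_cm - 0.5)

# Hypothetical 1-sigma uncertainty of 0.01 cm on a 0.5 cm radius.
def draw_input(rng):
    return rng.gauss(0.5, 0.01)

mean_k, sigma_k = propagate(model, draw_input, n=93)
# sigma_k comes out close to 0.05 * 0.01 = 5e-4, i.e. about 50 pcm
```

    The same n = 93 used in the record appears here only to show the mechanics; in the real study each "model" call is a full Monte Carlo transport run, which is why the sample size versus particles-per-run trade-off matters.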

  2. Strategy development management of Multimodal Transport Network

    Directory of Open Access Journals (Sweden)

    Nesterova Natalia S.

    2016-01-01

    Full Text Available The article gives a brief overview of work on the development of transport infrastructure for multimodal transportation and the integration of the Russian transport system into international transport corridors. The technology for controlling the strategy that changes the shape and capacity of the Multimodal Transport Network (MTN) is considered as part of the methodology for the design and development of the MTN. This technology allows strategic and operational management of the strategy's implementation, based on the use of the balanced scorecard.

  3. Strategy selection in the minority game

    Science.gov (United States)

    D'hulst, R.; Rodgers, G. J.

    2000-04-01

    We investigate the dynamics of the choice of an active strategy in the minority game. A history distribution is introduced as an analytical tool to study the asymmetry between the two choices offered to the agents. Its properties are studied numerically. It allows us to show that the departure from uniformity in the initial attribution of strategies to the agents is important even in the efficient market. Also, an approximate expression for the variance of the number of agents at one side in the efficient phase is proposed. All the analytical propositions are supported by numerical simulations of the system.
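
    The basic minority game analyzed in the record can be simulated in a few lines. The sketch below uses the standard setup (N agents, memory m, S fixed random strategies per agent, virtual scoring); parameter values are illustrative.

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2,
                  n_steps=2000, seed=0):
    """Basic minority game: each agent holds fixed random strategies
    (lookup tables over the last `memory` outcomes) and plays the one
    with the best virtual score; the minority side wins each round."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # a strategy is a tuple of +1/-1 actions, one per possible history
    agents = [[tuple(rng.choice((-1, 1)) for _ in range(n_hist))
               for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = 0          # last `memory` outcomes packed into an integer
    attendance = []
    for _ in range(n_steps):
        actions = []
        for i, strats in enumerate(agents):
            best = max(range(n_strategies), key=lambda s: scores[i][s])
            actions.append(strats[best][history])
        A = sum(actions)             # aggregate choice
        attendance.append(A)
        winner = -1 if A > 0 else 1  # minority side wins
        for i, strats in enumerate(agents):
            for s in range(n_strategies):
                if strats[s][history] == winner:
                    scores[i][s] += 1
        history = ((history << 1) | (1 if winner == 1 else 0)) % n_hist
    return attendance

att = minority_game()
```

    The variance of the attendance series is the quantity whose behavior in the efficient phase the record discusses; with an odd number of agents there is always a strict minority.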

  4. 24 CFR 85.22 - Allowable costs.

    Science.gov (United States)

    2010-04-01

    ... uniform cost accounting standards that comply with cost principles acceptable to the Federal agency. ... TRIBAL GOVERNMENTS Post-Award Requirements Financial Administration § 85.22 Allowable costs. (a... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each kind...

  5. 32 CFR 33.22 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... accounting standards that comply with cost principles acceptable to the Federal agency. ... Post-Award Requirements Financial Administration § 33.22 Allowable costs. (a) Limitation on use of... allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each kind of organization...

  6. 45 CFR 2541.220 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... accounting standards that comply with cost principles acceptable to the Federal agency. ... the grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set of Federal principles for determining allowable costs. Allowable costs will be determined in...

  7. 28 CFR 70.27 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... AND AGREEMENTS (INCLUDING SUBAWARDS) WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 70.27 Allowable costs. (a... Organizations.” The allowability of costs incurred by institutions of higher education is determined in...

  8. 38 CFR 49.27 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 49.27 Allowable...-Profit Organizations.” The allowability of costs incurred by institutions of higher education is...

  9. 40 CFR 30.27 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 30.27 Allowable...-Profit Organizations.” The allowability of costs incurred by institutions of higher education is...

  10. adLIMS: a customized open source software that allows bridging clinical and basic molecular research studies.

    Science.gov (United States)

    Calabria, Andrea; Spinozzi, Giulio; Benedicenti, Fabrizio; Tenderini, Erika; Montini, Eugenio

    2015-01-01

    Many biological laboratories that deal with genomic samples face the problem of sample tracking, both for pure laboratory management and for efficiency. Our laboratory exploits PCR techniques and Next Generation Sequencing (NGS) methods to perform high-throughput integration site monitoring in different clinical trials and scientific projects. Because of the huge number of samples that we process every year, which result in hundreds of millions of sequencing reads, we need to standardize data management and tracking systems, building a scalable and flexible structure with web-based interfaces, usually called a Laboratory Information Management System (LIMS). We started by collecting end-users' requirements, comprising the desired functionalities of the system and its Graphical User Interfaces (GUI), and then evaluated available tools that could address those requirements, spanning from pure LIMS to Content Management Systems (CMS) up to enterprise information systems. Our analysis identified ADempiere ERP, an open-source Enterprise Resource Planning system written in Java J2EE, as the best software; it also natively implements some highly desirable technological features, such as high usability and modularity, which grant use-case flexibility and software scalability for custom solutions. We extended and customized ADempiere ERP to fulfil LIMS requirements and developed adLIMS. It has been validated by our end-users, who verified its functionalities and GUIs through test cases for PCR samples and pre-sequencing data, and it is currently in use in our laboratories. adLIMS implements authorization and authentication policies, allowing management of multiple users and definition of roles that grant specific permissions, operations and data views to each user. For example, adLIMS allows creating sample sheets from stored data using the available export operations. 
This simplicity and process standardization may avoid manual errors and information backtracking, features
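
    The role-based authorization model described above can be sketched as follows; this is a minimal illustration, and the role names, permissions and sample fields are hypothetical, not adLIMS's actual schema:

    ```python
    # Minimal sketch of role-based access control with a sample-sheet export,
    # in the spirit of the policies the abstract describes. Role names,
    # permissions and sample fields are hypothetical.
    ROLE_PERMISSIONS = {
        "lab_technician": {"create_sample", "view_sample"},
        "lab_manager": {"create_sample", "view_sample", "export_sample_sheet"},
    }

    class User:
        def __init__(self, name, role):
            self.name, self.role = name, role

        def can(self, operation):
            # A user may perform only the operations granted to their role.
            return operation in ROLE_PERMISSIONS.get(self.role, set())

    def export_sample_sheet(user, samples):
        """Export stored sample records as a CSV-style sheet, if allowed."""
        if not user.can("export_sample_sheet"):
            raise PermissionError(f"{user.name} may not export sample sheets")
        header = "sample_id,assay"
        rows = [f"{s['id']},{s['assay']}" for s in samples]
        return "\n".join([header] + rows)
    ```

    Centralizing the role-to-permission mapping in one table is what makes it easy to add a new role or restrict a view without touching the export logic itself.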

  11. Abnormal Returns and Contrarian Strategies

    Directory of Open Access Journals (Sweden)

    Ivana Dall'Agnol

    2003-12-01

    We test the hypothesis that strategies which are long on portfolios of loser stocks and short on portfolios of winner stocks generate abnormal returns in Brazil. This type of evidence for the US stock market was interpreted by De Bondt and Thaler (1985) as reflecting systematic evaluation mistakes caused by investors' overreaction to news related to firm performance. We found evidence of the profitability of contrarian strategies for horizons from 3 months to 3 years in a sample of stock returns from BOVESPA and SOMA from 1986 to 2000. The strategies are more profitable for shorter horizons. Therefore, there was no trace of the momentum effect found by Jegadeesh and Titman (1993) for the same horizons with US data. There remain unexplained positive returns for contrarian strategies after accounting for risk, size, and liquidity. We also found that the strategies' profitability is reduced after the Real Plan, which suggests that the Brazilian stock market became more efficient after inflation stabilization.
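
    The long-loser/short-winner portfolio construction tested above can be sketched as follows; the tickers, equal weighting and single formation/holding horizon are illustrative assumptions, not the paper's exact methodology:

    ```python
    def contrarian_return(past_returns, future_returns, k):
        """Return of a contrarian portfolio: long the k past losers, short
        the k past winners, equal weights within each leg.

        past_returns / future_returns: dicts {ticker: return} over the
        formation and holding horizons respectively (illustrative inputs).
        """
        ranked = sorted(past_returns, key=past_returns.get)  # worst to best
        losers, winners = ranked[:k], ranked[-k:]
        long_leg = sum(future_returns[t] for t in losers) / k
        short_leg = sum(future_returns[t] for t in winners) / k
        return long_leg - short_leg
    ```

    A positive value over the holding horizon is the "contrarian profit" the paper looks for; the momentum effect would show up as this quantity being negative.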

  12. Discourse-voice regulatory strategies in the psychotherapeutic interaction: a state-space dynamics analysis.

    Science.gov (United States)

    Tomicic, Alemka; Martínez, Claudio; Pérez, J Carola; Hollenstein, Tom; Angulo, Salvador; Gerstmann, Adam; Barroux, Isabelle; Krause, Mariane

    2015-01-01

    This study seeks to provide evidence of the dynamics associated with the configurations of discourse-voice regulatory strategies in patient-therapist interactions in relevant episodes within psychotherapeutic sessions. Its central assumption is that discourses manifest themselves differently in terms of their prosodic characteristics according to their regulatory functions in a system of interactions. The association between discourse and vocal quality in patients and therapists was analyzed in a sample of 153 relevant episodes taken from 164 sessions of five psychotherapies using the state space grid (SSG) method, a graphical tool based on the dynamic systems theory (DST). The results showed eight recurrent and stable discourse-voice regulatory strategies of the patients and three of the therapists. Also, four specific groups of these discourse-voice strategies were identified. The latter were interpreted as regulatory configurations, that is to say, as emergent self-organized groups of discourse-voice regulatory strategies constituting specific interactional systems. Both regulatory strategies and their configurations differed between two types of relevant episodes: Change Episodes and Rupture Episodes. As a whole, these results support the assumption that speaking and listening, as dimensions of the interaction that takes place during therapeutic conversation, occur at different levels. The study not only shows that these dimensions are dependent on each other, but also that they function as a complex and dynamic whole in therapeutic dialog, generating relational offers which allow the patient and the therapist to regulate each other and shape the psychotherapeutic process that characterizes each type of relevant episode.

  13. Discourse-Voice Regulatory Strategies in the Psychotherapeutic Interaction: A State-Space Dynamics Analysis

    Directory of Open Access Journals (Sweden)

    Alemka eTomicic

    2015-04-01

    This study seeks to provide evidence of the dynamics associated with the configurations of discourse-voice regulatory strategies in patient-therapist interactions in relevant episodes within psychotherapeutic sessions. Its central assumption is that discourses manifest themselves differently in terms of their prosodic characteristics according to their regulatory functions in a system of interactions. The association between discourse and vocal quality in patients and therapists was analyzed in a sample of 153 relevant episodes taken from 164 sessions of five psychotherapies using the State Space Grid (SSG) method, a graphical tool based on the Dynamic Systems Theory (DST). The results showed eight recurrent and stable discourse-voice regulatory strategies of the patients and three of the therapists. Also, four specific groups of these discourse-voice strategies were identified. The latter were interpreted as regulatory configurations, that is to say, as emergent self-organized groups of discourse-voice regulatory strategies constituting specific interactional systems. Both regulatory strategies and their configurations differed between two types of relevant episodes: Change Episodes and Rupture Episodes. As a whole, these results support the assumption that speaking and listening, as dimensions of the interaction that takes place during therapeutic conversation, occur at different levels. The study not only shows that these dimensions are dependent on each other, but also that they function as a complex and dynamic whole in therapeutic dialogue, generating relational offers which allow the patient and the therapist to regulate each other and shape the psychotherapeutic process that characterizes each type of relevant episode.

  14. Peers Influence Mathematics Strategy Use in Early Elementary School

    Science.gov (United States)

    Carr, Martha; Barned, Nicole; Otumfuor, Beryl

    2016-01-01

    This study examined the impact of performance goals on arithmetic strategy use, and how same-sex peer groups contributed to the selection of strategies used by first-graders. It was hypothesized that gender differences in strategy use are a function of performance goals and the influence of same-sex peers. Using a sample of 75 first grade…

  15. Device for radioactivity measurement of liquid samples

    International Nuclear Information System (INIS)

    Lamaziere, J.

    1983-01-01

    The device for low-activity gamma measurements comprises an automatic changer for sample transfer from a conveyor to a measuring chamber. The conveyor includes a horizontal table on which sample holders are regularly distributed. A lift allows vertical motion of a plate for exposure in front of a detector. [fr]

  16. 29 CFR 97.22 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... accounting standards that comply with cost principles acceptable to the Federal agency. [53 FR 8069, 8087... LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 97.22 Allowable costs. (a... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each kind...

  17. 34 CFR 74.27 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... Procedures or uniform cost accounting standards that comply with cost principles acceptable to ED. (b) The... OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial... principles for determining allowable costs. Allowability of costs are determined in accordance with the cost...

  18. 44 CFR 13.22 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... uniform cost accounting standards that comply with cost principles acceptable to the Federal agency. ... STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 13.22 Allowable costs. (a... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each kind...

  19. 36 CFR 1207.22 - Allowable costs.

    Science.gov (United States)

    2010-07-01

    ... uniform cost accounting standards that comply with cost principles acceptable to the Federal agency. ... GOVERNMENTS Post-Award Requirements Financial Administration § 1207.22 Allowable costs. (a) Limitation on use... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each kind...

  20. 45 CFR 2543.27 - Allowable costs.

    Science.gov (United States)

    2010-10-01

    ... GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 2543.27 Allowable costs. For each kind... Organizations.” The allowability of costs incurred by institutions of higher education is determined in...

  1. 20 CFR 435.27 - Allowable costs.

    Science.gov (United States)

    2010-04-01

    ... AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, OTHER NON-PROFIT ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 435.27 Allowable costs. For each kind... Organizations.” (c) Allowability of costs incurred by institutions of higher education is determined in...

  2. Reading Skills and Strategies: Assessing Primary School Students’ Awareness in L1 and EFL Strategy Use

    Directory of Open Access Journals (Sweden)

    Evdokimos Aivazoglou

    2014-09-01

    The present study was designed and conducted with the purpose of assessing primary school students' awareness of GL1 (Greek as a first language) and EFL (English as a foreign language) strategy use and investigating the relations between the reported reading strategy use in the first (L1) and foreign language (FL). The sample (455 students attending the fifth and sixth grades of primary schools in Northern Greece) was first categorized into skilled and less skilled L1 and EFL readers through screening reading comprehension tests, one in L1 and one in FL, before filling in the reading strategy questionnaires. The findings revealed participants' preference for “problem solving” strategies, with “global strategies” coming next. Girls proved to be more aware of their reading strategy use, while boys reported more frequent use in both languages. Also, skilled readers were found to use reading strategies more effectively, and appeared to be more flexible in transferring strategies from L1 to FL compared to less skilled readers.

  3. Message strategies in direct-to-consumer pharmaceutical advertising: a content analysis using Taylor's six-segment message strategy wheel.

    Science.gov (United States)

    Tsai, Wan-Hsiu Sunny; Lancaster, Alyse R

    2012-01-01

    This exploratory study applies Taylor's (1999) six-segment message strategy wheel to direct-to-consumer (DTC) pharmaceutical television commercials to understand message strategies adopted by pharmaceutical advertisers to persuade consumers. A convenience sample of 96 DTC commercial campaigns was analyzed. The results suggest that most DTC drug ads used a combination approach, providing consumers with medical and drug information while simultaneously appealing to the viewer's ego-related needs and desires. In contrast to ration and ego strategies, other approaches including routine, acute need, and social are relatively uncommon while sensory was the least common message strategy. Findings thus recognized the educational value of DTC commercials.

  4. Human Life History Strategies.

    Science.gov (United States)

    Chua, Kristine J; Lukaszewski, Aaron W; Grant, DeMond M; Sng, Oliver

    2017-01-01

    Human life history (LH) strategies are theoretically regulated by developmental exposure to environmental cues that ancestrally predicted LH-relevant world states (e.g., risk of morbidity-mortality). Recent modeling work has raised the question of whether the association of childhood family factors with adult LH variation arises via (i) direct sampling of external environmental cues during development and/or (ii) calibration of LH strategies to internal somatic condition (i.e., health), which itself reflects exposure to variably favorable environments. The present research tested between these possibilities through three online surveys involving a total of over 26,000 participants. Participants completed questionnaires assessing components of self-reported environmental harshness (i.e., socioeconomic status, family neglect, and neighborhood crime), health status, and various LH-related psychological and behavioral phenotypes (e.g., mating strategies, paranoia, and anxiety), modeled as a unidimensional latent variable. Structural equation models suggested that exposure to harsh ecologies had direct effects on latent LH strategy as well as indirect effects on latent LH strategy mediated via health status. These findings suggest that human LH strategies may be calibrated to both external and internal cues and that such calibrational effects manifest in a wide range of psychological and behavioral phenotypes.

  5. Human Life History Strategies

    Directory of Open Access Journals (Sweden)

    Kristine J. Chua

    2016-12-01

    Human life history (LH) strategies are theoretically regulated by developmental exposure to environmental cues that ancestrally predicted LH-relevant world states (e.g., risk of morbidity–mortality). Recent modeling work has raised the question of whether the association of childhood family factors with adult LH variation arises via (i) direct sampling of external environmental cues during development and/or (ii) calibration of LH strategies to internal somatic condition (i.e., health), which itself reflects exposure to variably favorable environments. The present research tested between these possibilities through three online surveys involving a total of over 26,000 participants. Participants completed questionnaires assessing components of self-reported environmental harshness (i.e., socioeconomic status, family neglect, and neighborhood crime), health status, and various LH-related psychological and behavioral phenotypes (e.g., mating strategies, paranoia, and anxiety), modeled as a unidimensional latent variable. Structural equation models suggested that exposure to harsh ecologies had direct effects on latent LH strategy as well as indirect effects on latent LH strategy mediated via health status. These findings suggest that human LH strategies may be calibrated to both external and internal cues and that such calibrational effects manifest in a wide range of psychological and behavioral phenotypes.

  6. Strategy elimination in games with interaction structures

    NARCIS (Netherlands)

    Witzel, A.; Apt, K.R.; Zvesper, J.A.

    2009-01-01

    We study games in the presence of an interaction structure, which allows players to communicate their preferences, assuming that each player initially only knows his own preferences. We study the outcomes of iterated elimination of strictly dominated strategies (IESDS) that can be obtained in any…

  7. 40 CFR 52.1174 - Control strategy: Ozone.

    Science.gov (United States)

    2010-07-01

    ... the allowable emissions resulting from the application of the CTG presumptive norm. The State must... Forms, Reference Tables, and General Instructions, along with an implementation strategy for the State's... implement one or more appropriate contingency measure(s) which are contained in the contingency plan...

  8. BioSAXS Sample Changer: a robotic sample changer for rapid and reliable high-throughput X-ray solution scattering experiments.

    Science.gov (United States)

    Round, Adam; Felisaz, Franck; Fodinger, Lukas; Gobbo, Alexandre; Huet, Julien; Villard, Cyril; Blanchet, Clement E; Pernot, Petra; McSweeney, Sean; Roessle, Manfred; Svergun, Dmitri I; Cipriani, Florent

    2015-01-01

    Small-angle X-ray scattering (SAXS) of macromolecules in solution is in increasing demand by an ever more diverse research community, both academic and industrial. To better serve user needs, and to allow automated and high-throughput operation, a sample changer (BioSAXS Sample Changer) that is able to perform unattended measurements of up to several hundred samples per day has been developed. The Sample Changer is able to handle and expose sample volumes of down to 5 µl with a measurement/cleaning cycle of under 1 min. The samples are stored in standard 96-well plates and the data are collected in a vacuum-mounted capillary with automated positioning of the solution in the X-ray beam. Fast and efficient capillary cleaning avoids cross-contamination and ensures reproducibility of the measurements. Independent temperature control for the well storage and for the measurement capillary allows the samples to be kept cool while still collecting data at physiological temperatures. The Sample Changer has been installed at three major third-generation synchrotrons: on the BM29 beamline at the European Synchrotron Radiation Facility (ESRF), the P12 beamline at the PETRA-III synchrotron (EMBL@PETRA-III) and the I22/B21 beamlines at Diamond Light Source, with the latter being the first commercial unit supplied by Bruker ASC.

  9. 46 CFR 54.25-5 - Corrosion allowance.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Corrosion allowance. 54.25-5 Section 54.25-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PRESSURE VESSELS Construction With Carbon, Alloy, and Heat Treated Steels § 54.25-5 Corrosion allowance. The corrosion allowance...

  10. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Science.gov (United States)

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
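
    The band-limited angular model can be illustrated with a simplified fibre-symmetric analogue of the spherical-harmonics expansion used in the article: even-order Legendre terms in cos θ, which enforce the inversion symmetry of a SAXS reciprocal-space map. The coefficients and the anisotropy ratio below are illustrative assumptions, not the paper's parameterization:

    ```python
    import math

    # Closed forms of the even-order Legendre polynomials P0, P2, P4.
    _EVEN_LEGENDRE = (
        lambda x: 1.0,
        lambda x: (3 * x ** 2 - 1) / 2,
        lambda x: (35 * x ** 4 - 30 * x ** 2 + 3) / 8,
    )

    def azimuthal_intensity(theta, coeffs):
        """Band-limited model I(theta) = sum_k a_k * P_{2k}(cos theta).

        Only even orders appear, so I(theta) = I(pi - theta), matching the
        inversion symmetry of scattering data. `coeffs` holds up to three
        coefficients (a_0, a_2, a_4); values here are illustrative.
        """
        x = math.cos(theta)
        return sum(a * P(x) for a, P in zip(coeffs, _EVEN_LEGENDRE))

    def degree_of_orientation(coeffs):
        """Simple anisotropy measure: ratio of the first anisotropic
        coefficient to the isotropic one (a hypothetical convention)."""
        return abs(coeffs[1]) / coeffs[0] if len(coeffs) > 1 else 0.0
    ```

    An isotropic sample reduces to the single a₀ term (constant intensity over angle); growing higher-order coefficients sharpen the angular profile, which is the intuition behind fitting such expansions per voxel in the tensor-tomography reconstruction.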

  11. Surface reconstruction through poisson disk sampling.

    Directory of Open Access Journals (Sweden)

    Wenguang Hou

    This paper generates an approximate Voronoi diagram in the geodesic metric for a set of unbiased samples selected from the original points; the mesh model of the seeds is then constructed on the basis of this Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy circumvents the problem that geodesic distances among neighboring points are sensitive to the nearest-neighbor definition. The reconstructed model is thus a level-of-detail representation of the original points, so our main motivation is to deal with redundant scattered points. In the implementation, Poisson disk sampling is used to select the seeds and helps to produce the Voronoi diagram; adaptive reconstructions can be achieved by slightly changing the uniform strategy for selecting seeds. The behavior of the method is investigated and accuracy evaluations are carried out. Experimental results show that the proposed method is reliable and effective.
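
    The seed-selection step can be illustrated with planar Poisson disk sampling via Bridson's dart-throwing algorithm; the paper samples on a surface in the geodesic metric, so the Euclidean 2D sketch below only demonstrates the minimum-distance property that makes the seeds "unbiased":

    ```python
    import math
    import random

    def poisson_disk_2d(width, height, r, k=30, seed=0):
        """Bridson-style Poisson disk sampling: returns points that are
        pairwise at least r apart, using a background grid for fast checks."""
        rng = random.Random(seed)
        cell = r / math.sqrt(2)                 # each grid cell holds <= 1 sample
        gw, gh = int(width / cell) + 1, int(height / cell) + 1
        grid = [[None] * gh for _ in range(gw)]

        def fits(p):
            gx, gy = int(p[0] / cell), int(p[1] / cell)
            for i in range(max(gx - 2, 0), min(gx + 3, gw)):
                for j in range(max(gy - 2, 0), min(gy + 3, gh)):
                    q = grid[i][j]
                    if q is not None and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < r * r:
                        return False
            return True

        first = (rng.uniform(0, width), rng.uniform(0, height))
        samples, active = [first], [first]
        grid[int(first[0] / cell)][int(first[1] / cell)] = first
        while active:
            idx = rng.randrange(len(active))
            base = active[idx]
            for _ in range(k):                  # k candidates in the annulus [r, 2r]
                ang = rng.uniform(0, 2 * math.pi)
                rad = rng.uniform(r, 2 * r)
                p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
                if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                    samples.append(p)
                    active.append(p)
                    grid[int(p[0] / cell)][int(p[1] / cell)] = p
                    break
            else:                               # no candidate fit: retire this point
                active.pop(idx)
        return samples
    ```

    Varying the radius r over the domain is the natural way to obtain the adaptive (non-uniform) seed selection the abstract mentions.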

  12. 22 CFR 135.22 - Allowable costs.

    Science.gov (United States)

    2010-04-01

    ... Procedures, or uniform cost accounting standards that comply with cost principles acceptable to the Federal... AGREEMENTS TO STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 135.22 Allowable... principles. For each kind of organization, there is a set of Federal principles for determining allowable...

  13. 20 CFR 631.84 - Allowable projects.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Allowable projects. 631.84 Section 631.84... THE JOB TRAINING PARTNERSHIP ACT Disaster Relief Employment Assistance § 631.84 Allowable projects...) Shall be used exclusively to provide employment on projects that provide food, clothing, shelter and...

  14. 10 CFR 600.317 - Allowable costs.

    Science.gov (United States)

    2010-01-01

    ... to the type of entity incurring the cost as follows: (1) For-profit organizations. Allowability of costs incurred by for-profit organizations and those nonprofit organizations listed in Attachment C to... specifically authorized in the award document. (2) Other types of organizations. Allowability of costs incurred...

  15. 15 CFR 14.27 - Allowable costs.

    Science.gov (United States)

    2010-01-01

    ... GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, OTHER NON-PROFIT, AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 14.27 Allowable costs. For each kind of... Organizations.” The allowability of costs incurred by institutions of higher education is determined in...

  16. Influence of short-term sampling parameters on the uncertainty of the Lden environmental noise indicator

    International Nuclear Information System (INIS)

    Mateus, M; Carrilho, J Dias; Da Silva, M Gameiro

    2015-01-01

    The present study deals with the influence of sampling parameters on the uncertainty of the equivalent noise level in environmental noise measurements. The study was carried out by testing different sampling strategies through resampling trials over continuous noise-monitoring files obtained previously at an urban location in the city of Coimbra, Portugal. In short-term measurements, not only the duration of the sampling episodes but also their number influences the uncertainty of the result. This influence is higher for the time periods in which sound levels vary more, such as the night period. In this period, if the two parameters (duration and number of sampling episodes) are not carefully selected, the uncertainty can reach very high values, causing a loss of precision of the measurements. With the data obtained, the influence of the sampling parameters on the uncertainty of the long-term noise indicator was investigated, calculated according to the method proposed in Draft 1st CD ISO 1996-2:2012. It was verified that this method allows a general methodology to be defined that enables the sampling parameters to be set once the required precision level is fixed. For the three reference periods defined for environmental noise (day, evening and night), it was possible to derive a two-variable power law representing the uncertainty of the determined values as a function of the two sampling parameters: duration of the sampling episodes and number of episodes.
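
    The resampling experiment described above can be sketched as follows; the synthetic level series, the episode scheme and the dB energy-averaging convention are illustrative assumptions, not the study's measured Coimbra data:

    ```python
    import math
    import random

    def leq(levels):
        """Energy-equivalent continuous level (dB) of a sequence of
        short-term levels: Leq = 10 log10(mean of 10^(L/10))."""
        return 10 * math.log10(sum(10 ** (l / 10) for l in levels) / len(levels))

    def sampling_uncertainty(series, n_episodes, dur, trials=500, seed=1):
        """Standard deviation of the error of Leq estimates obtained from
        `n_episodes` randomly placed episodes of `dur` consecutive samples
        each, over `trials` resampling runs against the full-series Leq."""
        rng = random.Random(seed)
        ref = leq(series)
        errs = []
        for _ in range(trials):
            picks = []
            for _ in range(n_episodes):
                s = rng.randrange(len(series) - dur)
                picks.extend(series[s:s + dur])
            errs.append(leq(picks) - ref)
        m = sum(errs) / trials
        return math.sqrt(sum((e - m) ** 2 for e in errs) / (trials - 1))
    ```

    Running this over a grid of (duration, number of episodes) pairs is what allows a two-variable power law for the uncertainty to be fitted, in the spirit of the study's methodology.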

  17. Simple and versatile modifications allowing time gated spectral acquisition, imaging and lifetime profiling on conventional wide-field microscopes

    International Nuclear Information System (INIS)

    Pal, Robert; Beeby, Andrew

    2014-01-01

    An inverted microscope has been adapted to allow time-gated imaging and spectroscopy to be carried out on samples containing responsive lanthanide probes. The adaptation employs readily available components, including a pulsed light source, time-gated camera, spectrometer and photon counting detector, allowing imaging, emission spectroscopy and lifetime measurements. Each component is controlled by a suite of software written in LabVIEW and is powered via conventional USB ports. (technical note)

  18. ATTITUDES OF FOOTBALL PLAYERS OF DIFFERENT SPORTS EXPERIENCE ON THE ALLOWED MEANS OF STIMULATING RECOVERY

    Directory of Open Access Journals (Sweden)

    Miroslav Smajić

    2013-07-01

    Allowed means of stimulating recovery are substances and physiological procedures that increase athletes' performance, make recovery more efficient and improve sports results that are very difficult to achieve through the usual training methods; they can be considered useful as long as they are used as a supplement to training. Used rationally, they significantly increase the organism's resistance to training and competition loads and favorably affect the elimination of general and local fatigue. The aim of the research was to examine and analyze the attitudes of footballers of different sports experience toward the allowed means of stimulating recovery. The sample of examinees consisted of 120 footballers of different sports experience (group I: 62 players with 4-8 years of sports experience; group II: 58 players with 9-14 years of sports experience). The sample of variables consisted of a system of 10 attitudes (claims), each attitude comprising 5 verbal categories (scored from -2 to +2). Multivariate analysis of variance (MANOVA) and univariate analysis of variance (ANOVA) were applied to determine the multivariate and univariate significance of differences between the two groups of footballers. Generally it can be concluded that players with less experience attach less importance to some of the allowed recovery stimuli, and that the level of information increases with sports experience.

  19. The effect of the marketing mix strategy on price at the Jenang “Mirah” home industry, Ponorogo

    Directory of Open Access Journals (Sweden)

    Prasetiyani Ika Saputri

    2017-08-01

    The aim of this study was to determine the marketing mix strategy of the Jenang "MIRAH" home industry in Ponorogo and the pricing of its products. The marketing mix strategy is one of the factors in pricing. The sample in this study was obtained with saturated (census) sampling, totalling 39 people. Simple linear regression gave Y = 43.477 + 0.558X, meaning that if the marketing mix strategy increases by 1%, price will increase by 0.558, other factors held constant. The t-value for the marketing mix strategy variable (X) was 9.440 with a significance level of 0.000; because 9.440 > 1.68488 (the t-table value) and 0.000 < 0.05, the research hypothesis rejects H0 and accepts Ha. The R² of 0.670 indicates that 67% of the variation in price is explained by the marketing mix strategy, while the remaining 33% is influenced by other factors not examined. It is concluded that the marketing mix strategy affects price at the Jenang "MIRAH" home industry in Ponorogo.
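
    The kind of fit reported above can be reproduced in outline with ordinary least squares; the helper below is a generic sketch of simple linear regression with R², not the study's actual computation or data:

    ```python
    def simple_linreg(x, y):
        """Ordinary least squares for y = a + b*x; returns (a, b, R^2)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        b = sxy / sxx                      # slope
        a = my - b * mx                    # intercept
        ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return a, b, 1 - ss_res / ss_tot   # R^2 = explained share of variance
    ```

    An R² of 0.670, as in the abstract, simply means 67% of the variance of y is explained by the fitted line, which is the interpretation the authors give.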

  20. A reexamination and extension of international strategy-structure theory

    OpenAIRE

    Wolf, Joachim; Egelhoff, William G.

    2001-01-01

    Using a sample of 95 German firms, the study finds general support for the traditional fits of international strategy-structure theory. Employing an information-processing perspective, the study conceptually and empirically extends existing theory (1) to address strategy-structure fit for various types of matrix structure, and (2) by adding two new elements of international strategy to the existing international strategy-structure model: the level of international transfers and level of forei...