WorldWideScience

Sample records for small non-random sample

  1. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
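
    The gap between nominal and actual error rates that this abstract describes is easy to reproduce. The following sketch is not from the paper; it is a minimal Monte Carlo illustration, assuming an exponential null so the sampling distribution is skewed, that checks the actual rejection rate of a nominal 5% one-sample t-test at n = 10:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, alpha = 10, 20_000, 0.05
t_crit = 2.262  # two-sided 5% critical value of Student's t with n - 1 = 9 df

rejections = 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)  # skewed null: true mean is exactly 1
    t = (x.mean() - 1.0) / (x.std(ddof=1) / np.sqrt(n))
    rejections += abs(t) > t_crit

rate = rejections / reps
print(f"nominal level {alpha}, actual rejection rate {rate:.3f}")
```

    With skewed data and small n, the observed rate lands well above the declared 5%, which is exactly the kind of false-positive inflation the authors analyze with Edgeworth expansions.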

  2. An econometric method for estimating population parameters from non-random samples: An application to clinical case finding.

    Science.gov (United States)

    Burger, Rulof P; McLaren, Zoë M

    2017-09-01

    The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases went undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
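
    The article's specific representations are not reproduced in this record, but their flavor can be illustrated with the well-known pairwise-difference identity s² = Σ_{i<j}(x_i − x_j)² / (n(n − 1)), which lets one compute the sample variance of a small integer sample by mental arithmetic:

```python
import numpy as np
from itertools import combinations

def variance_pairwise(xs):
    # s^2 = sum over pairs i < j of (x_i - x_j)^2, divided by n(n - 1)
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

data = [2, 5, 9]  # n = 3: squared differences 9, 49, 16 sum to 74; 74 / 6 = 12.33...
print(variance_pairwise(data))      # agrees with the usual definition below
print(float(np.var(data, ddof=1)))  # standard sample variance with divisor n - 1
```

    For n = 3 or 4 the pairwise sum has only 3 or 6 terms, so no calculator is needed when the observations are integers.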

  4. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique was developed for constructing adequate mathematical models from small passive samples, under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be drawn.

  5. Small sample whole-genome amplification

    Science.gov (United States)

    Hara, Christine; Nguyen, Christine; Wheeler, Elizabeth; Sorensen, Karen; Arroyo, Erin; Vrankovich, Greg; Christian, Allen

    2005-11-01

    Many challenges arise when trying to amplify and analyze human samples collected in the field due to limitations in sample quantity and contamination of the starting material. Tests such as DNA fingerprinting and mitochondrial typing require a certain sample size and are carried out in large volume reactions; in cases where insufficient sample is present, whole genome amplification (WGA) can be used. WGA allows very small quantities of DNA to be amplified in a way that enables subsequent DNA-based tests to be performed. A limiting step to WGA is sample preparation. To minimize the necessary sample size, we have developed two modifications of WGA: the first allows for an increase in amplified product from small, nanoscale, purified samples with the use of carrier DNA, while the second is a single-step method for cleaning and amplifying samples all in one column. Conventional DNA cleanup involves binding the DNA to silica, washing away impurities, and then releasing the DNA for subsequent testing. We have eliminated losses associated with incomplete sample release, thereby decreasing the required amount of starting template for DNA testing. Both techniques address the limitations of sample size by providing ample copies of genomic samples. Carrier DNA, included in our WGA reactions, can be used when amplifying samples with the standard purification method, or can be used in conjunction with our single-step DNA purification technique to potentially further decrease the amount of starting sample necessary for future forensic DNA-based assays.

  6. Small-sample-worth perturbation methods

    International Nuclear Information System (INIS)

    1985-01-01

    It has been assumed that the perturbed region, R_p, is large enough so that: (1) even without a great deal of biasing there is a substantial probability that an average source-neutron will enter it; and (2) once having entered, the neutron is likely to make several collisions in R_p during its lifetime. Unfortunately, neither assumption is valid for the typical configurations one encounters in small-sample-worth experiments. In such experiments one measures the reactivity change which is induced when a very small void in a critical assembly is filled with a sample of some test material. Only a minute fraction of the fission-source neutrons ever gets into the sample and, of those neutrons that do, most emerge uncollided. Monte Carlo computations of small-sample-worth perturbations are described.

  7. Gaseous radiocarbon measurements of small samples

    International Nuclear Information System (INIS)

    Ruff, M.; Szidat, S.; Gaeggeler, H.W.; Suter, M.; Synal, H.-A.; Wacker, L.

    2010-01-01

    Radiocarbon dating by means of accelerator mass spectrometry (AMS) is a well-established method for samples containing carbon in the milligram range. However, the measurement of small samples containing less than 50 μg carbon often fails. It is difficult to graphitise these samples and the preparation is prone to contamination. To avoid graphitisation, a solution can be the direct measurement of carbon dioxide. The MICADAS, the smallest accelerator for radiocarbon dating in Zurich, is equipped with a hybrid Cs sputter ion source that allows the measurement of both graphite targets and gaseous CO 2 samples without any rebuilding. This work presents experience in dealing with small samples containing 1-40 μg carbon. To date, 500 unknown samples from different environmental research fields have been measured, most of them with the gas ion source. These data are compared with earlier measurements of small graphite samples. The performance of the two techniques is discussed and the main contributions to the blank are determined. An analysis of blank and standard data measured over several years allowed a quantification of the contamination, which was found to be of the order of 55 ng and 750 ng carbon (50 pMC) for the gaseous and the graphite samples, respectively. For quality control, a number of certified standards were measured using the gas ion source to demonstrate the reliability of the data.
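
    Contamination figures like those above feed directly into a simple mass-balance blank correction. The sketch below is illustrative, not from the paper: the 55 ng / 50 pMC blank is taken from the abstract, while the sample mass and measured value are invented. It solves f_meas·(m_s + m_b) = f_s·m_s + f_b·m_b for the sample's true pMC:

```python
def blank_correct(f_meas_pmc, m_sample_ug, m_blank_ug=0.055, f_blank_pmc=50.0):
    # Mass balance: f_meas * (m_s + m_b) = f_s * m_s + f_b * m_b; solve for f_s.
    total = m_sample_ug + m_blank_ug
    return (f_meas_pmc * total - f_blank_pmc * m_blank_ug) / m_sample_ug

# Hypothetical 10 ug sample measured at 10.22 pMC, with a 55 ng, 50 pMC blank:
corrected = blank_correct(10.22, 10.0)
print(f"blank-corrected value: {corrected:.2f} pMC")
```

    Because the blank is near-modern (50 pMC), the correction matters most for old, carbon-poor samples; here it pulls the measured 10.22 pMC back to about 10.0 pMC.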

  8. A Geology Sampling System for Small Bodies

    Science.gov (United States)

    Naids, Adam J.; Hood, Anthony D.; Abell, Paul; Graff, Trevor; Buffington, Jesse

    2016-01-01

    Human exploration of microgravity bodies is being investigated as a precursor to a Mars surface mission. Asteroids, comets, dwarf planets, and the moons of Mars all fall into this microgravity category, and some are being discussed as potential mission targets. Obtaining geological samples for return to Earth will be a major objective for any mission to a small body. Currently, the knowledge base for geology sampling in microgravity is in its infancy. Humans interacting with non-engineered surfaces in a microgravity environment pose unique challenges. In preparation for such missions, a team at the NASA Johnson Space Center has been working to gain experience on how to safely obtain numerous sample types in such an environment. This paper describes the types of samples the science community is interested in, highlights notable prototype work, and discusses an integrated geology sampling solution.

  9. Accelerator mass spectrometry of small biological samples.

    Science.gov (United States)

    Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran

    2008-12-01

    Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of micrograms. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A 12C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.

  10. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    Full Text Available The side of social research that uses small samples for the production of micro data today faces operating difficulties due to the privacy law. The privacy code is an important and necessary law because it guarantees the rights of Italian citizens, as already happens in other countries of the world. However, it does not seem appropriate to further limit the data-production possibilities of the national research centres. Those possibilities are, moreover, already compromised by insufficient funds, a problem that is becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples for activities of direct interest and importance to citizens, such as data collection carried out on the basis of opinion polls by the research centres of the Italian CNR and some universities.

  11. Transportable high sensitivity small sample radiometric calorimeter

    International Nuclear Information System (INIS)

    Wetzel, J.R.; Biddle, R.S.; Cordova, B.S.; Sampson, T.E.; Dye, H.R.; McDow, J.G.

    1998-01-01

    A new small-sample, high-sensitivity transportable radiometric calorimeter, which can be operated in different modes, includes an electrical calibration method, and can be used to develop secondary standards, is described in this presentation. Data from preliminary tests are presented to indicate the precision and accuracy of the instrument. The calorimeter and temperature-controlled bath at present require only a 30-in. by 20-in. tabletop area. The calorimeter is operated from a laptop computer system using a unique measurement module capable of monitoring all necessary calorimeter signals. The calorimeter can be operated in the normal calorimeter equilibration mode or, as a comparison instrument, using twin chambers and an external electrical calibration method. The sample chamber is 0.75 in. (1.9 cm) in diameter by 2.5 in. (6.35 cm) long. This size will accommodate most 238 Pu heat standards manufactured in the past. The power range runs from 0.001 W up to 20 W, with the high end limited only by sample size.

  12. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However, this approach does not allow an assessment of the force-multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments it is expensive, both in time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and resampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
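
    A Monte Carlo power comparison of this kind is straightforward to set up. The sketch below is a minimal version under assumed conditions (normal populations, a shift alternative, n = 8 per group; the paper's actual war-gaming scenarios are not reproduced here), estimating the power of the Mann-Whitney U test and of a permutation test on the difference of means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, n_perm, alpha = 8, 500, 400, 0.05

def perm_pvalue(x, y):
    # Two-sample permutation test on |difference of means|.
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        hits += abs(pooled[:n].mean() - pooled[n:].mean()) >= observed
    return (hits + 1) / (n_perm + 1)

power_mw = power_perm = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(1.2, 1.0, n)  # assumed effect: a 1.2-sd shift between systems
    power_mw += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
    power_perm += perm_pvalue(x, y) < alpha

print(f"Mann-Whitney power: {power_mw / reps:.2f}")
print(f"Permutation power:  {power_perm / reps:.2f}")
```

    With samples this small, both estimated powers sit well below 1, which is why the choice among rank, runs, and resampling tests matters for war-gaming datasets.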

  13. Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.

    Science.gov (United States)

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek, whenever possible, to explore their implications for more general problems that appear in research with small samples but concern all areas of prevention research. This special section comprises two parts: the first aims to provide input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of the broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.

  14. Survey of serum concentrations of dioxins, furans, and coplanar polychlorinated biphenyls in a small non-random sample of U.S. residents

    Energy Technology Data Exchange (ETDEWEB)

    Grassman, J. [Brooklyn Coll. CUNY, Health and Nutrition Sciences, Brooklyn, New York, NY (United States); Patterson, D.G. Jr.; Needham, L.L. [National Center for Environmental Health, Centers for Disease Control and Prevention, Atlanta, GA (United States); Spencer, D.L.; Masten, S.A. [Environmental Toxicology Program, National Inst. of Environmental Health Sciences, Research Triangle Park, NC (United States)

    2004-09-15

    This cross-sectional assessment of serum dioxin concentrations was conducted as part of a larger study to examine the relationship between dioxin exposure and gene expression in peripheral blood mononuclear cells. Recent reports indicate that environmental levels of dioxins have declined since the mid-1980s. Except for the recent National Health and Nutrition Examination Survey (NHANES), there has been little systematic surveillance of serum dioxin levels in the US general population. Here, we report the serum concentrations of 22 congeners of dioxins and dioxin-like compounds and their relationship with age, sex, smoking, and meat consumption.

  15. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of a small sample, it is improper to use traditional probability theory for parameter estimation with small samples. The Bayes Bootstrap method is often used in engineering practice, but it has its own limitations. In this article an improvement to the Bayes Bootstrap method is presented: it extends the number of samples by numerical simulation without changing the distributional characteristics of the original small sample, and it yields accurate interval estimates for small samples. Finally, the improved Bootstrap method is applied via Monte Carlo simulation to specific small-sample problems, demonstrating its effectiveness and practicability.
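
    For reference, the baseline Bayes Bootstrap that the article sets out to improve can be sketched in a few lines: posterior draws of the mean are formed by reweighting the observed points with Dirichlet(1, …, 1) weights (Rubin's Bayesian bootstrap), and an interval estimate is read off the quantiles of those draws. The data here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def bayes_bootstrap_ci(xs, n_draws=5000, level=0.95):
    # Each posterior draw of the mean reweights the observed points with
    # Dirichlet(1, ..., 1) weights instead of resampling them with replacement.
    xs = np.asarray(xs, dtype=float)
    weights = rng.dirichlet(np.ones(len(xs)), size=n_draws)
    draws = weights @ xs
    lo, hi = np.quantile(draws, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

sample = [4.1, 5.3, 4.8, 5.9, 4.4]  # an invented sample of size n = 5
lo, hi = bayes_bootstrap_ci(sample)
print(f"95% interval for the mean: ({lo:.2f}, {hi:.2f})")
```

    One known limitation, which motivates improvements like the article's, is that every draw is confined to the convex hull of the observed values, so the interval can never extend beyond the sample's range.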

  16. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  17. Development of electric discharge equipment for small specimen sampling

    International Nuclear Information System (INIS)

    Okamoto, Koji; Kitagawa, Hideaki; Kusumoto, Junichi; Kanaya, Akihiro; Kobayashi, Toshimi

    2009-01-01

    We have developed on-site electric discharge sampling equipment that can effectively take samples, such as small specimens, from the surface of plant components. Compared with conventional sampling equipment, our equipment can take samples that are thinner in depth and larger in area. In addition, the effect on the component can be kept to a minimum, and the thermally affected zone produced in the material by the electric discharge is small enough to be ignored. Our equipment is therefore well suited to taking samples for tests such as residual life evaluation.

  18. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  19. A thermostat for precise measurements of thermoresistance of small samples

    International Nuclear Information System (INIS)

    Rusinowski, Z.; Slowinski, B.; Winiewski, R.

    1996-01-01

    In this work a simple experimental set-up is described, with special attention paid to the important problem of thermal stability in thermoresistance measurements of small manganin samples.

  20. Radioenzymatic assay for trimethoprim in very small serum samples.

    OpenAIRE

    Yogev, R; Melick, C; Tan-Pong, L

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved.

  1. Radioenzymatic assay for trimethoprim in very small serum samples

    International Nuclear Information System (INIS)

    Yogev, R.; Melick, C.; Tan-Pong, L.

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved

  2. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small PuO 2 powder samples of homogeneous source material, as well as for dried aliquots of plutonium nitrate solutions. (author)

  3. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  4. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed

  5. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other on a bias reduction. Simulations show that the performances of both bias-adjusted methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Multi-element analysis of small biological samples

    International Nuclear Information System (INIS)

    Rokita, E.; Cafmeyer, J.; Maenhaut, W.

    1983-01-01

    A method combining PIXE and INAA was developed to determine the elemental composition of small biological samples. The method needs virtually no sample preparation, and less than 1 mg is sufficient for the analysis. The method was used to determine up to 18 elements in leaves of herbaceous plants from Cracow. The factors which influence the elemental composition of leaves and the possible use of leaves as an environmental pollution indicator are discussed

  7. Conversion of Small Algal Oil Sample to JP-8

    Science.gov (United States)

    2012-01-01

    Hydrocracking of the algal oil to SPK was carried out in a small-scale lab hydroprocessing plant with a down-flow trickle-bed configuration capable of retaining 25 cc of catalyst bed. In the catalytic deoxygenation stage, the sample's …content, combined with its acidity, is a challenge to reactor metallurgy. Nonetheless, an attempt was made to convert this sample to JP-8.

  8. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems for heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
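
    A minimal sampled-data consensus update can be simulated directly. The sketch below is a simplification of the setting above (homogeneous single-integrator agents on a ring, no sampling delay; the paper's heterogeneous, delayed protocol is more involved): it iterates x(k+1) = (I − hL)x(k) with sampling period h and checks convergence to the initial average:

```python
import numpy as np

# Graph Laplacian of an undirected 4-agent ring.
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

h = 0.1                              # sampling period; stable since h < 2 / lambda_max(L) = 0.5
x = np.array([1.0, 5.0, -2.0, 4.0])  # initial agent states
avg = x.mean()                       # the stationary consensus value

for _ in range(500):
    x = x - h * (L @ x)              # sampled-data update x(k+1) = (I - h L) x(k)

print(x, "consensus value:", avg)
```

    Because the Laplacian of a connected undirected graph has a zero eigenvalue with the all-ones eigenvector, the average of the states is invariant under the update, and all agents converge to it when h is small enough.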

  9. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    For example, MacGibbon and Tomberlin (1989) considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  10. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  11. Systematic studies of small scintillators for new sampling calorimeter

    Indian Academy of Sciences (India)

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to produce better position resolution for the international linear collider (ILC) experiment. As part of this R&D study, small plastic scintillators of different sizes, thicknesses, and wrapping reflectors are ...

  12. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  13. Testing of Small Graphite Samples for Nuclear Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Julie Chapman

    2010-11-01

    Accurately determining the mechanical properties of small irradiated samples is crucial to predicting the behavior of the overall irradiated graphite components within a Very High Temperature Reactor. The sample size allowed in a material test reactor, however, is limited, and this poses some difficulties with respect to mechanical testing. In the case of graphite with a larger grain size, a small sample may exhibit characteristics not representative of the bulk material, leading to inaccuracies in the data. A study to determine a potential size effect on the tensile strength was pursued under the Next Generation Nuclear Plant program. It focused first on optimizing the tensile testing procedure identified in the American Society for Testing and Materials (ASTM) Standard C 781-08. Once the testing procedure was verified, a size effect was assessed by gradually reducing the diameter of the specimens. By monitoring the material response, a size effect was successfully identified.

  14. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  15. Exploratory Factor Analysis With Small Samples and Missing Data.

    Science.gov (United States)

    McNeish, Daniel

    2017-01-01

    Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small sample size requirements for EFA models; however, these models have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.
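
  The predictive mean matching (PMM) imputation favored above can be sketched briefly. This is a minimal single-imputation illustration with simulated data (all variables and the donor rule are hypothetical simplifications; production workflows use multiple imputation before the EFA step):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated toy data: x fully observed, y missing at random (~25%)
n = 60
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)
miss = rng.random(n) < 0.25
y_obs = y.copy()
y_obs[miss] = np.nan

# PMM: regress y on x over complete cases, then fill each missing y
# with the observed value of the donor whose predicted y is closest
# to the missing case's prediction.
obs = ~np.isnan(y_obs)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X[obs], y_obs[obs], rcond=None)
pred = X @ beta

y_imp = y_obs.copy()
for i in np.where(~obs)[0]:
    donor = np.argmin(np.abs(pred[obs] - pred[i]))
    y_imp[i] = y_obs[obs][donor]
```

  Because every imputed value is copied from an observed case, PMM cannot produce implausible values outside the observed range, which is part of its appeal for small samples.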

  16. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    International Nuclear Information System (INIS)

    Kathy Bennett; Sherri Sherwood; Rhonda Robinson

    2006-01-01

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  17. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Kathy; Sherwood, Sherri; Robinson, Rhonda

    2006-08-15

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  18. Radiocarbon measurements of small gaseous samples at CologneAMS

    Science.gov (United States)

    Stolz, A.; Dewald, A.; Altenkirch, R.; Herb, S.; Heinze, S.; Schiffer, M.; Feuerstein, C.; Müller-Gatermann, C.; Wotte, A.; Rethemeyer, J.; Dunai, T.

    2017-09-01

    A second SO-110 B (Arnold et al., 2010) ion source was installed at the 6 MV CologneAMS for the measurement of gaseous samples. For the gas supply, a dedicated device from Ionplus AG was connected to the ion source. Special effort was devoted to determining optimized operation parameters for the ion source, which give a high carbon current output and a high ¹⁴C⁻ yield. The latter is essential in cases where only small samples are available. Additionally, a modified immersion lens and modified target pieces were tested, and the target position was optimized.

  19. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)

  20. Real-time definition of non-randomness in the distribution of genomic events.

    Directory of Open Access Journals (Sweden)

    Ulrich Abel

    Features such as mutations or structural characteristics can be non-randomly or non-uniformly distributed within a genome. So far, computer simulations were required for statistical inferences on the distribution of sequence motifs. Here, we show that these analyses are possible using an analytical, mathematical approach. For the assessment of non-randomness, our calculations only require information such as genome size, the number of (sampled) sequence motifs and distance parameters. We have developed computer programs evaluating our analytical formulas for the real-time determination of expected values and p-values. This approach permits a flexible cluster definition that can be applied to most effectively identify non-random or non-uniform sequence motif distributions. As an example, we show the effectiveness and reliability of our mathematical approach on clinical retroviral vector integration site distributions.
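
  The simplest analytical building block behind such calculations can be sketched directly: for one fixed window, the count of independently and uniformly placed motifs that fall inside it is binomial, which yields a closed-form tail probability. The numbers below are hypothetical, and the sketch ignores the scan over all windows that a full cluster definition must correct for:

```python
from math import comb

def window_tail_prob(G, n, w, k):
    # P(at least k of n uniformly placed motifs fall in one fixed
    # window of length w within a genome of length G): binomial tail.
    p = w / G
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# e.g. 10 of 500 integration sites in a single 50 kb window of a 3 Gb genome
pval = window_tail_prob(3_000_000_000, 500, 50_000, 10)
```

  Only genome size, motif count and a distance (window) parameter enter the formula, matching the inputs the abstract lists.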

  1. Local heterogeneity effects on small-sample worths

    International Nuclear Information System (INIS)

    Schaefer, R.W.

    1986-01-01

    One of the parameters usually measured in a fast reactor critical assembly is the reactivity associated with inserting a small sample of a material into the core (sample worth). Local heterogeneities introduced by the worth measurement techniques can have a significant effect on the sample worth. Unfortunately, the capability is lacking to model some of the heterogeneity effects associated with the experimental technique traditionally used at ANL (the radial tube technique). It has been suggested that these effects could account for a large portion of what remains of the longstanding central worth discrepancy. The purpose of this paper is to describe a large body of experimental data - most of which has never been reported - that shows the effect of radial tube-related local heterogeneities

  2. Research of pneumatic control transmission system for small irradiation samples

    International Nuclear Information System (INIS)

    Bai Zhongxiong; Zhang Haibing; Rong Ru; Zhang Tao

    2008-01-01

    In order to reduce the absorbed dose to the operator, pneumatic control has been adopted to realize the rapid transmission of small irradiation samples. The on/off state of the pneumatic circuit and the directions of the rapid transmission system are controlled by the electrical control part. The main program initializes the system, detects the position of the manual/automatic change-over switch, and calls the corresponding subprogram to perform automatic or manual operation. The automatic subprogram performs automatic sample transmission; the manual subprogram handles deflation and the back-and-forth movement of the irradiation samples. This paper introduces in detail the implementation of the system, in terms of both hardware and software design. (authors)

  3. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., a small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates.
The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
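
  The kind of coverage analysis the study argues for is easy to set up as a Monte Carlo check. The sketch below is not the study's design; it simply estimates the empirical coverage of a nominal 95% Fisher-z confidence interval for a correlation at a small sample size, the quantity whose undercoverage the abstract highlights:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n, reps = 0.5, 30, 2000            # true correlation, sample size, replications
cov_mat = np.array([[1.0, rho], [rho, 1.0]])

hits = 0
for _ in range(reps):
    d = rng.multivariate_normal([0.0, 0.0], cov_mat, size=n)
    r = np.corrcoef(d.T)[0, 1]
    z, se = np.arctanh(r), 1.0 / np.sqrt(n - 3)   # Fisher z and its standard error
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    hits += lo <= rho <= hi

coverage = hits / reps                   # should sit near the nominal 0.95
```

  Swapping the interval construction for the estimator under study (RML, AGLS, or a Bayesian credibility interval) turns the same loop into the comparison the abstract describes.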

  4. Use of the small gas proportional counters for the carbon-14 measurement of very small samples

    International Nuclear Information System (INIS)

    Sayre, E.V.; Harbottle, G.; Stoenner, R.W.; Otlet, R.L.; Evans, G.V.

    1981-01-01

    Two recent developments are described: the first is the mass-spectrometric separation of ¹⁴C and ¹²C ions, followed by counting of the ¹⁴C, while the second is the extension of conventional proportional counter operation, using CO₂ as counting gas, to very small counters and samples. Although the second method is slow (months of counting time are required for 10 mg of carbon), it does not require operator intervention and many samples may be counted simultaneously. Also, it costs only a fraction of the capital expense of an accelerator installation. The development, construction and operation of suitable small counters are described, and the results of three actual dating studies involving milligram-scale carbon samples will be given. None of these could have been carried out if conventional, gram-sized samples had been needed. New installations based on the use of these counters are under construction or in the planning stages. These are located at Brookhaven Laboratory, the National Bureau of Standards (USA) and Harwell (UK). The Harwell installation, which is in an advanced stage of construction, will be described in outline. The main significance of the small-counter method is that, although it will not suffice to measure the smallest (much less than 10 mg) or oldest samples, it will permit existing radiocarbon laboratories to extend their capability considerably in the direction of smaller samples, at modest expense

  5. Thermal neutron absorption cross section of small samples

    International Nuclear Information System (INIS)

    Nghiep, T.D.; Vinh, T.T.; Son, N.N.; Vuong, T.V.; Hung, N.T.

    1989-01-01

    A modified steady method for determining the macroscopic thermal neutron absorption cross section of small samples of 500 cm³ volume is described. The method uses a moderating block of paraffin, a Pu-Be neutron source emitting 1.1×10⁶ n·s⁻¹, an SNM-14 counter and ordinary counting equipment. Cross sections in the interval from 2.6 to 1.3×10⁴ (in units of 10⁻³ cm² g⁻¹) were measured. The experimental data are described by calculation formulae. 7 refs.; 4 figs

  6. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    Science.gov (United States)

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small-volume samples. Pulsed-dc-ESI utilizes a constant high voltage to induce the generation of single-polarity pulsed electrospray remotely. This method significantly boosts sample economy, obtaining several minutes of MS signal duration from a sample of merely picoliter volume. The elongated MS signal duration enables us to collect abundant MS(2) information on components of interest in a small-volume sample for systematic analysis. The method has been successfully applied to single-cell metabolomics analysis. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples.

  7. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to assume or know the probability distribution that generated the original sample. In this work we used a small set of soybean yield data and physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, construct the confidence intervals for the parameters and identify the points that had great influence on the estimated parameters. (Author)
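
  The non-parametric (case-resampling) bootstrap described above can be sketched in a few lines. The data here are simulated stand-ins, not the soybean measurements, and the percentile interval is only one of several bootstrap CI constructions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical small sample: one soil property x, yield y
n = 20
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), x])

# Resample cases with replacement, refit, and collect the slope
B = 2000
slopes = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    slopes[b] = beta[1]

lo, hi = np.percentile(slopes, [2.5, 97.5])   # percentile bootstrap 95% CI
```

  Influential points can be flagged with the same resampling machinery by tracking how much each case's inclusion shifts the estimates.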

  8. Measurement of phthalates in small samples of mammalian tissue

    International Nuclear Information System (INIS)

    Acott, P.D.; Murphy, M.G.; Ogborn, M.R.; Crocker, J.F.S.

    1987-01-01

    Di-(2-ethylhexyl)-phthalate (DEHP) is a phthalic acid ester that is used as a plasticizer in polyvinyl chloride products, many of which have widespread medical application. DEHP has been shown to be leached from products used for storage and delivery of blood transfusions during procedures such as plasmaphoresis, hemodialysis and open heart surgery. Results of studies in this laboratory have suggested that there is an association between the absorption and deposition of DEHP (and/or related chemicals) in the kidney and the acquired renal cystic disease (ACD) frequently seen in patients who have undergone prolonged dialysis treatment. In order to determine the relationship between the two, it has been necessary to establish a method for extracting and accurately quantitating minute amounts of these chemicals in small tissue samples. The authors have now established such a method using kidneys from normal rats and from a rat model for ACD

  9. Evaluation of energy deposition by 153Sm in small samples

    International Nuclear Information System (INIS)

    Cury, M.I.C.; Siqueira, P.T.D.; Yoriyaz, H.; Coelho, P.R.P.; Da Silva, M.A.; Okazaki, K.

    2002-01-01

    Aim: This work presents evaluations of the dose absorbed by 'in vitro' blood cultures when mixed with ¹⁵³Sm solutions of different concentrations. Although ¹⁵³Sm is used as a radiopharmaceutical mainly due to its beta emission, which is short-range radiation, it also emits gamma radiation, which has longer-range penetration. It is therefore difficult to determine the dose absorbed by small samples, for which the infinite-medium approximation is no longer valid. Materials and Methods: MCNP-4C (Monte Carlo N-Particle transport code) has been used to perform the evaluations. It is not a deterministic code that calculates the value of a specific quantity by solving the physical equations involved in the problem, but a virtual experiment in which the events related to the problem are simulated and the quantities of concern are tallied. MCNP also stands out for its ability to specify any problem geometrically. These features, among others, however, make MCNP a time-consuming code. The simulated problem consists of a cylindrical plastic tube with 1.5 cm internal diameter and 0.1 cm wall thickness, with a 2.0 cm high conic bottom end, so that the represented sample has a volume of 4.0 ml (1 ml of blood and 3 ml of culture medium). To evaluate the energy deposition in the blood culture for each ¹⁵³Sm decay, the problem was divided into 3 steps to account for the β⁻ emissions (which have a continuous spectrum), the gammas, and the conversion and Auger electron emissions. Afterwards each emission contribution was weighted and summed to give the final value. Besides this radiation 'fragmentation', simulations were performed for many different amounts of ¹⁵³Sm solution added to the sample, covering a range from 1 μl to 0.5 ml. Results: The average energy per disintegration of ¹⁵³Sm is 331 keV [1]. Gammas account for 63 keV and β⁻, conversion and Auger electrons account for 268 keV. The simulations performed showed an average energy deposition of 260 keV

  10. Cerebral Small Vessel Disease: Cognition, Mood, Daily Functioning, and Imaging Findings from a Small Pilot Sample

    Directory of Open Access Journals (Sweden)

    John G. Baker

    2012-04-01

    Cerebral small vessel disease, a leading cause of cognitive decline, is considered a relatively homogeneous disease process, and it can co-occur with Alzheimer's disease. Clinical reports of magnetic resonance imaging (MRI)/computed tomography and single photon emission computed tomography (SPECT) imaging and neuropsychological testing for a small pilot sample of 14 patients are presented to illustrate disease characteristics through findings from structural and functional imaging and cognitive assessment. Participants showed some decreases in executive functioning, attention, processing speed, and memory retrieval, consistent with previous literature. An older subgroup showed lower age-corrected scores at a single time point compared to younger participants. Performance on a computer-administered cognitive measure showed a slight overall decline over a period of 8–28 months. For a case study with mild neuropsychology findings, the MRI report was normal while the SPECT report identified perfusion abnormalities. Future research can test whether advances in imaging analysis allow for identification of cerebral small vessel disease before changes are detected in cognition.

  11. Systematic studies of small scintillators for new sampling calorimeter

    International Nuclear Information System (INIS)

    Jacosalem, E.P.; Sanchez, A.L.C.; Bacala, A.M.; Iba, S.; Nakajima, N.; Ono, H.; Miyata, H.

    2007-01-01

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to produce better position resolution for the international linear collider (ILC) experiment. As part of this R and D study, small plastic scintillators of different sizes, thicknesses and wrapping reflectors are systematically studied. The scintillation light due to beta rays from a collimated ⁹⁰Sr source is collected from the scintillator by a wavelength-shifting (WLS) fiber and converted into electrical signals at the PMT. The wrapping that gives the best light yield is determined by comparing the measured pulse height of each 10 × 40 × 2 mm strip scintillator covered with 3M reflective mirror film, teflon, white paint, black tape, gold, aluminum and white paint+teflon. The dependence of pulse height on position, length and thickness is measured for the 3M reflective mirror film and teflon wrapped scintillators. Results show that the 3M radiant mirror film-wrapped scintillator has the greatest light yield, with an average of 9.2 photoelectrons. It is observed that light yield increases slightly with scintillator length, but increases by about 100% when the WLS fiber diameter is increased from 1.0 mm to 1.6 mm. The position dependence measurement along the strip scintillator showed the uniformity of light transmission from the sensor to the PMT. A dip across the strip is observed which is 40% of the maximum pulse height. The pulse height of the block-type scintillator, on the other hand, is found to be almost proportional to scintillator thickness. (author)

  12. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  13. Is a 'convenience' sample useful for estimating immunization coverage in a small population?

    Science.gov (United States)

    Weir, Jean E; Jones, Carrie

    2008-01-01

    Rapid survey methodologies are widely used for assessing immunization coverage in developing countries, approximating true stratified random sampling. Non-random ('convenience') sampling is not considered appropriate for estimating immunization coverage rates but has the advantages of low cost and expediency. We assessed the validity of a convenience sample of children presenting to a travelling clinic by comparing the coverage rate in the convenience sample to the true coverage established by surveying every child in three villages in rural Papua New Guinea. The rate of DTP immunization coverage as estimated by the convenience sample was within 10% of the true coverage when the proportion of children in the sample was two-thirds or when only children over the age of one year were counted, but differed by 11% when the sample included only 53% of the children and all eligible children were counted. The convenience sample may be sufficiently accurate for reporting purposes and is useful for identifying areas of low coverage.
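
  The bias mechanism at issue, clinic attendance correlated with immunization status, can be illustrated with a small simulation (all numbers are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical village: 300 children, true immunization coverage 70%
N, true_p = 300, 0.70
immunized = rng.random(N) < true_p

# Convenience sample: immunized (more health-seeking) families are
# somewhat more likely to present to the travelling clinic.
attend_prob = np.where(immunized, 0.70, 0.55)
attends = rng.random(N) < attend_prob

true_coverage = immunized.mean()
sample_coverage = immunized[attends].mean()
bias = sample_coverage - true_coverage   # positive when attenders are better covered
```

  As the attendance gap between the two groups narrows, or the attending fraction approaches the whole population, the convenience estimate converges on the true coverage, consistent with the finding that accuracy improved when two-thirds of the children were sampled.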

  14. Collateral Information for Equating in Small Samples: A Preliminary Investigation

    Science.gov (United States)

    Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles

    2011-01-01

    This article describes a preliminary investigation of an empirical Bayes (EB) procedure for using collateral information to improve equating of scores on test forms taken by small numbers of examinees. Resampling studies were done on two different forms of the same test. In each study, EB and non-EB versions of two equating methods--chained linear…

  15. Safety evaluation of small samples for isotope production

    International Nuclear Information System (INIS)

    Sharma, Archana; Singh, Tej; Varde, P.V.

    2015-09-01

    Radioactive isotopes are widely used in basic and applied science and engineering, most notably as environmental and industrial tracers and in medical imaging procedures. The production of radioisotopes constitutes an important activity of the Indian nuclear program. Since its initial criticality, the DHRUVA reactor has facilitated the regular supply of most of the radioisotopes required in the country for applications in the fields of medicine, industry and agriculture. In-pile irradiation of samples requires a prior estimation of the sample reactivity load, heating rate, activity developed and the shielding thickness required for post-irradiation handling. This report highlights the contributions of the DHRUVA reactor and explains in detail the methodologies used in the safety evaluation of in-pile irradiation samples. (author)

  16. A high-efficiency neutron coincidence counter for small samples

    International Nuclear Information System (INIS)

    Miller, M.C.; Menlove, H.O.; Russo, P.A.

    1991-01-01

    The inventory sample coincidence counter (INVS) has been modified to enhance its performance. The new design is suitable for use with a glove-box sample well (in-line application) as well as in the standard at-line mode. The counter has been redesigned to count more efficiently and to be less sensitive to variations in sample position. These factors lead to a higher degree of precision and accuracy in a given counting period and allow for the practical use of the INVS counter with gamma-ray isotopics to obtain a plutonium assay independent of operator declarations and time-consuming chemical analysis. A calculational study was performed using the Los Alamos transport code MCNP to optimize the design parameters. 5 refs., 7 figs., 8 tabs

  17. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in a reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is a normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, subjective information about the parameter and sampling data on its independent variables are transformed into a Bayesian prior on (μ, h). The desired estimates are obtained from either the prior or the posterior, which is formed by combining the prior with the sampling data. Computing methods are described and examples are presented as demonstrations.
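
  The normal-gamma conjugate update referred to above has a closed form. The sketch below implements the standard update for precision h ~ Gamma(a, b) and mu | h ~ Normal(m, 1/(k·h)); the prior hyperparameters and data are invented for illustration:

```python
def normal_gamma_update(m0, k0, a0, b0, xs):
    # Standard conjugate update for (mu, h) given observations xs.
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((v - xbar) ** 2 for v in xs)
    k_n = k0 + n
    m_n = (k0 * m0 + n * xbar) / k_n
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * ss + 0.5 * k0 * n * (xbar - m0) ** 2 / k_n
    return m_n, k_n, a_n, b_n

# Vague-ish prior updated with a small sample
m_n, k_n, a_n, b_n = normal_gamma_update(0.0, 1.0, 1.0, 1.0,
                                         [4.8, 5.1, 5.3, 4.9, 5.0])
post_mean_mu = m_n           # posterior mean of mu
post_mean_h = a_n / b_n      # posterior mean of the precision h
```

  Estimates from the prior alone follow by skipping the data terms, matching the option of estimating from either the prior or the posterior.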

  18. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...

  19. Mars ascent propulsion options for small sample return vehicles

    International Nuclear Information System (INIS)

    Whitehead, J. C.

    1997-01-01

    An unprecedented combination of high propellant fraction and small size is required for affordable-scale Mars return, regardless of the number of stages, or whether Mars orbit rendezvous or in-situ propellant options are used. Conventional space propulsion technology is too heavy, even without structure or other stage subsystems. The application of launch vehicle design principles to the development of new hardware on a tiny scale is therefore suggested. Miniature pump-fed rocket engines fed by low pressure tanks can help to meet this challenge. New concepts for engine cycles using piston pumps are described, and development issues are outlined

  20. Advanced path sampling of the kinetic network of small proteins

    NARCIS (Netherlands)

    Du, W.

    2014-01-01

    This thesis is focused on developing advanced path sampling simulation methods to study protein folding and unfolding, and to build kinetic equilibrium networks describing these processes. In Chapter 1, basic knowledge of protein structure and folding theories is introduced, along with a brief overview

  1. Small sample approach, and statistical and epidemiological aspects

    NARCIS (Netherlands)

    Offringa, Martin; van der Lee, Hanneke

    2011-01-01

    In this chapter, the design of pharmacokinetic studies and phase III trials in children is discussed. Classical approaches and relatively novel approaches, which may be more useful in the context of drug research in children, are discussed. The burden of repeated blood sampling in pediatric

  2. Standard Format for Chromatographic-polarimetric System small samples assessment

    International Nuclear Information System (INIS)

    Naranjo, S.; Fajer, V.; Fonfria, C.; Patinno, R.

    2012-01-01

    The treatment of samples containing optically active substances, evaluated as part of quality control of raw material entering an industrial process and during the modifications applied to it to reach the desired final composition, remains an unsolved problem for many industries, the sugarcane industry among them. The difficulties are often compounded because the samples to be evaluated are no larger than one milliliter. Reduced gel beds in G-10 and G-50 chromatographic columns with an inner diameter of 16 mm instead of 25 mm, and bed heights adjustable to requirements by means of sliding stoppers to increase analytical power, were evaluated with glucose and sucrose standards at concentrations from 1 to 10 g/dL, using 1-ml aliquots without undesirable dilutions that could affect either detection or the chromatographic profile. Assays with seaweed extracts gave good results, which are shown. The advantages of determining the concentration of a separated substance from the height of its peak, and the resulting savings in time and reagents, are established. The expanded uncertainty of samples in both systems is compared. Several programs for data acquisition, storage, and processing are also presented. (Author)

  3. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R²). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
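As a rough illustration of the screening idea behind the VIF technique compared above, the following sketch computes variance inflation factors with plain least squares. The data, the near-collinear construction, and the common VIF > 10 rule of thumb are assumptions for illustration, not details from the study:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ coef
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = x1 + 0.1 * rng.normal(size=50)   # nearly collinear with x1
x3 = rng.normal(size=50)              # independent predictor
X = np.column_stack([x1, x2, x3])
v = vif(X)
print(v)  # VIFs for x1 and x2 are large; x3 stays near 1
```

Columns whose VIF exceeds the chosen threshold would be flagged before fitting the OLS model.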

  4. Automated Sampling and Extraction of Krypton from Small Air Samples for Kr-85 Measurement Using Atom Trap Trace Analysis

    International Nuclear Information System (INIS)

    Hebel, S.; Hands, J.; Goering, F.; Kirchner, G.; Purtschert, R.

    2015-01-01

    Atom Trap Trace Analysis (ATTA) provides the capability of measuring the Krypton-85 concentration in microlitre amounts of krypton extracted from air samples of about 1 litre. This sample size is sufficiently small to allow for a range of applications, including on-site spot sampling and continuous sampling over periods of several hours. All samples can be easily handled and transported to an off-site laboratory for ATTA measurement, or stored and analyzed on demand. Bayesian sampling methodologies can be applied by blending samples for bulk measurement and performing in-depth analysis as required. A prerequisite for measurement is the extraction of a pure krypton fraction from the sample. This paper introduces an extraction unit able to isolate the krypton in small ambient air samples with high speed, high efficiency and in a fully automated manner using a combination of cryogenic distillation and gas chromatography. Air samples are collected using an automated smart sampler developed in-house to achieve a constant sampling rate over adjustable time periods ranging from 5 minutes to 3 hours per sample. The smart sampler can be deployed in the field and operate on battery power for one week to take up to 60 air samples. This high flexibility of sampling and the fast, robust sample preparation are a valuable tool for research and the application of Kr-85 measurements to novel Safeguards procedures. (author)

  5. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    evaluation of ODT shows that it can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction to solve problems related to the use of uncertain and unreliable negative samples and those related to the great demand in computational resources.

  6. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the others. The test results show that the power consumption variability caused by the key components as a

  7. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  8. Mechanical characteristics of historic mortars from tests on small-sample non-standard specimens

    Czech Academy of Sciences Publication Activity Database

    Drdácký, Miloš; Slížková, Zuzana

    2008-01-01

    Roč. 17, č. 1 (2008), s. 20-29 ISSN 1407-7353 R&D Projects: GA ČR(CZ) GA103/06/1609 Institutional research plan: CEZ:AV0Z20710524 Keywords : small-sample non-standard testing * lime * historic mortar Subject RIV: AL - Art, Architecture, Cultural Heritage

  9. Variation of mutational burden in healthy human tissues suggests non-random strand segregation and allows measuring somatic mutation rates.

    Science.gov (United States)

    Werner, Benjamin; Sottoriva, Andrea

    2018-06-01

    The immortal strand hypothesis posits that stem cells could produce differentiated progeny while conserving the original template strand, thus avoiding accumulating somatic mutations. However, quantitating the extent of non-random DNA strand segregation in human stem cells remains difficult in vivo. Here we show that the change of the mean and variance of the mutational burden with age in healthy human tissues allows estimating strand segregation probabilities and somatic mutation rates. We analysed deep sequencing data from healthy human colon, small intestine, liver, skin and brain. We found highly effective non-random DNA strand segregation in all adult tissues (mean strand segregation probability: 0.98, standard error bounds (0.97, 0.99)). In contrast, non-random strand segregation efficiency is reduced to 0.87 (0.78, 0.88) in neural tissue during early development, suggesting stem cell pool expansions due to symmetric self-renewal. Healthy somatic mutation rates differed across tissue types, ranging from 3.5 × 10⁻⁹/bp/division in small intestine to 1.6 × 10⁻⁷/bp/division in skin.

  10. Variation of mutational burden in healthy human tissues suggests non-random strand segregation and allows measuring somatic mutation rates.

    Directory of Open Access Journals (Sweden)

    Benjamin Werner

    2018-06-01

    The immortal strand hypothesis posits that stem cells could produce differentiated progeny while conserving the original template strand, thus avoiding accumulating somatic mutations. However, quantitating the extent of non-random DNA strand segregation in human stem cells remains difficult in vivo. Here we show that the change of the mean and variance of the mutational burden with age in healthy human tissues allows estimating strand segregation probabilities and somatic mutation rates. We analysed deep sequencing data from healthy human colon, small intestine, liver, skin and brain. We found highly effective non-random DNA strand segregation in all adult tissues (mean strand segregation probability: 0.98, standard error bounds (0.97, 0.99)). In contrast, non-random strand segregation efficiency is reduced to 0.87 (0.78, 0.88) in neural tissue during early development, suggesting stem cell pool expansions due to symmetric self-renewal. Healthy somatic mutation rates differed across tissue types, ranging from 3.5 × 10⁻⁹/bp/division in small intestine to 1.6 × 10⁻⁷/bp/division in skin.

  11. TableSim--A program for analysis of small-sample categorical data.

    Science.gov (United States)

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.
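The kind of exact small-sample P-value TableSim computes for a 2-way table can be illustrated, for the special case of a 2×2 table, by direct hypergeometric enumeration (Fisher's exact test). This sketch is in Python rather than Fortran 90, and the example table is invented:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact P-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    total = comb(n, c1)
    def prob(x):  # P(first cell = x) under fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / total
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # small epsilon guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

p = fisher_exact_2x2(8, 2, 1, 5)  # a small-sample table where chi-square is unreliable
print(p)
```

Because every admissible table is enumerated, the P-value is exact rather than an asymptotic approximation, which is the point of such programs at small sample sizes.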

  12. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  13. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    Methods for interval estimation of the sample mean are described: the classical method, the bootstrap method, the Bayesian bootstrap method, the jackknife method, and the spread method of the empirical characteristic distribution function. Numerical calculations of the mean intervals are carried out for sample sizes of 4, 5, and 6. The results indicate that the bootstrap and Bayesian bootstrap methods are much more appropriate than the others in small-sample situations. (authors)
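One of the compared techniques, the percentile bootstrap, can be sketched for a very small sample as follows. The data values and replication count are illustrative, and this is not the authors' implementation:

```python
import random
import statistics

def bootstrap_mean_ci(sample, level=0.95, reps=10_000, seed=1):
    """Percentile-bootstrap confidence interval for the mean: resample
    with replacement, take the mean of each resample, and read off the
    empirical quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    n = len(sample)
    means = sorted(
        statistics.fmean(rng.choices(sample, k=n)) for _ in range(reps))
    alpha = (1 - level) / 2
    return means[int(alpha * reps)], means[int((1 - alpha) * reps) - 1]

sample = [4.1, 5.0, 4.6, 5.3, 4.4]   # n = 5, the small-sample setting studied
lo, hi = bootstrap_mean_ci(sample)
print(lo, hi)
```

Unlike the classical t-interval, this makes no normality assumption, which is one reason resampling methods fare better at such tiny sample sizes.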

  14. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Total uncertainty budget evaluation of a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by the relative method or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  15. The Accuracy of Inference in Small Samples of Dynamic Panel Data Models

    NARCIS (Netherlands)

    Bun, M.J.G.; Kiviet, J.F.

    2001-01-01

    Through Monte Carlo experiments the small sample behavior is examined of various inference techniques for dynamic panel data models when both the time-series and cross-section dimensions of the data set are small. The LSDV technique and corrected versions of it are compared with IV and GMM

  16. Rules of attraction: The role of bait in small mammal sampling at ...

    African Journals Online (AJOL)

    Baits or lures are commonly used for surveying small mammal communities, not only because they attract large numbers of these animals, but also because they provide sustenance for trapped individuals. In this study we used Sherman live traps with five bait treatments to sample small mammal populations at three ...

  17. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  18. Integrating sphere based reflectance measurements for small-area semiconductor samples

    Science.gov (United States)

    Saylan, S.; Howells, C. T.; Dahlem, M. S.

    2018-05-01

    This article describes a method that enables reflectance spectroscopy of small semiconductor samples using an integrating sphere, without the use of additional optical elements. We employed an inexpensive sample holder to measure the reflectance of different samples through 2-, 3-, and 4.5-mm-diameter apertures and applied a mathematical formulation to remove the bias in the measured spectra caused by illumination of the holder. Using the proposed method, the reflectance of samples fabricated using expensive or rare materials and/or low-throughput processes can be measured. It can also be used to infer the internal quantum efficiency of small-area, research-level solar cells. Moreover, small samples that reflect light at large angles or scatter it may also be measured reliably, since an integrating sphere is insensitive to directionality.

  19. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small to moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating weight matrices via multinomial sampling.
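A minimal sketch of the multinomial-weighting formulation described above, applied to Pearson's sample correlation coefficient. It is written in NumPy rather than the paper's R, and the simulated data are illustrative:

```python
import numpy as np

def vectorized_bootstrap_corr(x, y, reps=2000, seed=0):
    """Bootstrap replications of Pearson's r via multinomial weighting:
    instead of resampling the data, draw multinomial counts, convert them
    to weights, and obtain every replication's sample moments with a
    single matrix multiplication."""
    rng = np.random.default_rng(seed)
    n = len(x)
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=reps) / n  # reps x n weights
    M = np.column_stack([x, y, x * x, y * y, x * y])            # n x 5 moment terms
    mx, my, mxx, myy, mxy = (W @ M).T                           # weighted moments
    cov = mxy - mx * my
    return cov / np.sqrt((mxx - mx ** 2) * (myy - my ** 2))

rng = np.random.default_rng(42)
x = rng.normal(size=30)
y = x + rng.normal(size=30)
r_boot = vectorized_bootstrap_corr(x, y)
print(r_boot.mean(), r_boot.std())  # bootstrap distribution of r
```

All 2000 replications come from one matrix product, with no Python-level loop over resamples, which is the speed advantage the abstract reports for matrix-oriented languages.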

  20. Small Sample Properties of the Wilcoxon Signed Rank Test with Discontinuous and Dependent Observations

    OpenAIRE

    Nadine Chlass; Jens J. Krueger

    2007-01-01

    This Monte Carlo study investigates the sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample dependence, between-sample dependence, and the presence of ties. Our results show that these assumption violations induce severe size distortions and entail power losses. Surprisingly, the consequences vary substantially with other properties the data may display. Results provided are particularly relevant for experimental set...

  1. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, a requirement that often is impossible to meet in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean ± 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
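Several of the compared estimators can be sketched as follows. The simulated skewed values stand in for the canine creatinine data, and a log transform stands in for the full Box-Cox step, so this is an illustration of the approaches, not a reproduction of the study:

```python
import numpy as np

def reference_limits(values):
    """Estimate 2.5th/97.5th reference limits from a small sample by
    several of the approaches compared above; a log transform stands in
    for the full Box-Cox step."""
    v = np.asarray(values, dtype=float)
    est = {
        "min-max": (v.min(), v.max()),
        "mean +/- 2 SD": (v.mean() - 2 * v.std(ddof=1),
                          v.mean() + 2 * v.std(ddof=1)),
        "nonparametric percentiles": tuple(np.percentile(v, [2.5, 97.5])),
    }
    logv = np.log(v)  # parametric limits on transformed values, back-transformed
    est["mean +/- 2 SD (log scale)"] = tuple(
        np.exp([logv.mean() - 2 * logv.std(ddof=1),
                logv.mean() + 2 * logv.std(ddof=1)]))
    return est

rng = np.random.default_rng(3)
values = rng.lognormal(mean=4.4, sigma=0.25, size=27)  # skewed sample, n = 27
est = reference_limits(values)
for method, (lo, hi) in est.items():
    print(f"{method:>28s}: {lo:6.1f} - {hi:6.1f}")
```

On a skewed sample the untransformed mean ± 2 SD limits can fall well off the percentile-based ones, which is the disagreement the study quantifies.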

  2. Method to make accurate concentration and isotopic measurements for small gas samples

    Science.gov (United States)

    Palmer, M. R.; Wahl, E.; Cunningham, K. L.

    2013-12-01

    Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water head space experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow Cavity Ring-Down (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of 40-ml samples or smaller. However, the system, called the Small Sample Isotope Module (SSIM), dilutes the sample during delivery with inert carrier gas, which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module, which we call 'double injection': two 20-ml portions of the 40-ml sample are introduced to the analyzer; the first injection flushes out the diluting gas and the second injection is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and that same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.

  3. Accelerator mass spectrometry of ultra-small samples with applications in the biosciences

    International Nuclear Information System (INIS)

    Salehpour, Mehran; Håkansson, Karl; Possnert, Göran

    2013-01-01

    An overview is presented covering the biological accelerator mass spectrometry activities at Uppsala University. The research utilizes the Uppsala University Tandem Laboratory facilities, including a 5 MV Pelletron tandem accelerator and two stable isotope ratio mass spectrometers. In addition, a dedicated sample preparation laboratory for biological samples with natural activity is in use, as well as another laboratory specifically for 14C-labeled samples. A variety of ongoing projects are described and presented. Examples are: (1) Ultra-small sample AMS. We routinely analyze samples with masses in the 5–10 μg C range. Data is presented regarding the sample preparation method. (2) Bomb-peak biological dating of ultra-small samples. A long-term project is presented where purified and cell-specific DNA from various parts of the human body, including the heart and the brain, are analyzed with the aim of extracting the regeneration rates of the various human cells. (3) Biological dating of various human biopsies, including atherosclerosis-related plaques, is presented. The average build-up time of the surgically removed human carotid plaques has been measured and correlated with various data, including the level of insulin in the blood. (4) In addition to standard microdosing-type measurements using small pharmaceutical drugs, pre-clinical pharmacokinetic data from a macromolecular drug candidate are discussed.

  4. Accelerator mass spectrometry of ultra-small samples with applications in the biosciences

    Energy Technology Data Exchange (ETDEWEB)

    Salehpour, Mehran, E-mail: mehran.salehpour@physics.uu.se [Department of Physics and Astronomy, Ion Physics, PO Box 516, SE-751 20 Uppsala (Sweden); Hakansson, Karl; Possnert, Goeran [Department of Physics and Astronomy, Ion Physics, PO Box 516, SE-751 20 Uppsala (Sweden)

    2013-01-15

    An overview is presented covering the biological accelerator mass spectrometry activities at Uppsala University. The research utilizes the Uppsala University Tandem Laboratory facilities, including a 5 MV Pelletron tandem accelerator and two stable isotope ratio mass spectrometers. In addition, a dedicated sample preparation laboratory for biological samples with natural activity is in use, as well as another laboratory specifically for 14C-labeled samples. A variety of ongoing projects are described and presented. Examples are: (1) Ultra-small sample AMS. We routinely analyze samples with masses in the 5–10 μg C range. Data is presented regarding the sample preparation method. (2) Bomb-peak biological dating of ultra-small samples. A long-term project is presented where purified and cell-specific DNA from various parts of the human body, including the heart and the brain, are analyzed with the aim of extracting the regeneration rates of the various human cells. (3) Biological dating of various human biopsies, including atherosclerosis-related plaques, is presented. The average build-up time of the surgically removed human carotid plaques has been measured and correlated with various data, including the level of insulin in the blood. (4) In addition to standard microdosing-type measurements using small pharmaceutical drugs, pre-clinical pharmacokinetic data from a macromolecular drug candidate are discussed.

  5. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample sizes systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of the AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P < 0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance with small SS seems to be an inherent characteristic of the ROC technique that has not previously been described.

  6. Rural and small-town attitudes about alcohol use during pregnancy: a community and provider sample.

    Science.gov (United States)

    Logan, T K; Walker, Robert; Nagle, Laura; Lewis, Jimmie; Wiesenhahn, Donna

    2003-01-01

    While there has been considerable research on prenatal alcohol use, there have been limited studies focused on women in rural and small-town environments. This two-part study examines gender differences in attitudes and perceived barriers to intervention in a large community sample of persons living in rural and small-town environments in Kentucky (n = 3,346). The study also examines rural and small-town prenatal service providers' perceptions of barriers to assessment and intervention with pregnant substance abusers (n = 138). Surveys were administered to a convenience sample of employees and customers from 16 rural and small-town community outlets. There were 1,503 males (45%) and 1,843 females (55%), ranging in age from under 18 to over 66 years. Surveys were also mailed to prenatal providers in county health departments of the 13-county study area, with 138 of 149 responding. Overall, results of the community sample suggest that neither males nor females were knowledgeable about the harmful effects of alcohol use during pregnancy. Results also indicate substantial gender differences in alcohol attitudes, knowledge, and perceived barriers. Further, prenatal care providers identified several barriers to assessment and treatment of pregnant women with alcohol use problems in rural and small-town communities, including lack of knowledge and comfort with assessment as well as a lack of available and accessible treatment for referrals.

  7. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  8. Bayesian estimation of P(X > x) from a small sample of Gaussian data

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2017-01-01

    The classical statistical uncertainty problem of estimation of upper tail probabilities on the basis of a small sample of observations of a Gaussian random variable is considered. Predictive posterior estimation is discussed, adopting the standard statistical model with diffuse priors of the two...

  9. Sensitivity study of micro four-point probe measurements on small samples

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Hansen, Torben Mikael

    2010-01-01

    probes than near the outer ones. The sensitive area is defined for infinite film, circular, square, and rectangular test pads, and convergent sensitivities are observed for small samples. The simulations show that the Hall sheet resistance RH in micro Hall measurements with position error suppression...

  10. A scanning tunneling microscope capable of imaging specified micron-scale small samples.

    Science.gov (United States)

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has one end fixed to the sliding piece and its opposite end holding the tip, which points to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand-cut tip, made from 0.1 mm thin Pt/Ir wire, to an isolated individual 32.5 × 32.5 μm² graphite flake. Good atomic-resolution images and high-quality tunneling current spectra for that specified tiny flake were obtained under ambient conditions with high repeatability over one month, showing the high long-term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show a prominent low-frequency resonance related to the tip mount, which further confirms the stability of the STM structure.

  12. System for sampling liquids in small jugs obturated by screwed taps

    International Nuclear Information System (INIS)

    Besnier, J.

    1995-01-01

    This invention describes a machine which automatically samples liquids from small jugs sealed by screwed taps. The device can be placed in an isolated room in order to work with radioactive liquids. The machine can be divided into three main parts: a module that catches the jug, in order to take and fix it; a module that opens and closes it; and a sampling module. The latter draws the liquid using a suction device and transfers it to a container, so that the sample can be analysed. (TEC)

  13. Respondent-driven sampling and the recruitment of people with small injecting networks.

    Science.gov (United States)

    Paquette, Dana; Bryant, Joanne; de Wit, John

    2012-05-01

    Respondent-driven sampling (RDS) is a form of chain-referral sampling, similar to snowball sampling, which was developed to reach hidden populations such as people who inject drugs (PWID). RDS is said to reach members of a hidden population that may not be accessible through other sampling methods. However, less attention has been paid to whether there are segments of the population that are more likely to be missed by RDS. This study examined the ability of RDS to capture people with small injecting networks. A study of PWID, using RDS, was conducted in 2009 in Sydney, Australia. The size of participants' injecting networks was examined by recruitment chain and wave. Participants' injecting network characteristics were compared to those of participants from a separate pharmacy-based study. A logistic regression analysis was conducted to examine the characteristics independently associated with having small injecting networks, using the combined RDS and pharmacy-based samples. In comparison with the pharmacy-recruited participants, RDS participants were almost 80% less likely to have small injecting networks, after adjusting for other variables. RDS participants were also more likely to have their injecting networks form a larger proportion of those in their social networks, and to have acquaintances as part of their injecting networks. Compared to those with larger injecting networks, individuals with small injecting networks were equally likely to engage in receptive sharing of injecting equipment, but less likely to have had contact with prevention services. These findings suggest that those with small injecting networks are an important group to recruit, and that RDS is less likely to capture these individuals.

  14. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Mixed-methods research currently lacks methods that more fully integrate qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies, helping to reveal, for example, participants' motives for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
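    A rough sketch of the first study's design: simulate binary code matrices for two latent participant profiles, cluster them, and score cluster assignment up to label switching. The code probabilities, the sample of 50, and the choice of Jaccard distance with average-linkage hierarchical clustering are assumptions made for this illustration, not details taken from the article:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Simulate coded interviews: 50 participants, 12 binary codes,
# two latent profiles with complementary code probabilities.
p_a = np.array([0.9, 0.8, 0.8, 0.7, 0.2, 0.1, 0.1, 0.2, 0.1, 0.2, 0.1, 0.1])
p_b = 1.0 - p_a
true = np.repeat([0, 1], 25)
X = np.vstack([rng.random(12) < (p_a if t == 0 else p_b) for t in true]).astype(int)

# Hierarchical clustering on Jaccard distances (suited to binary codes)
Z = linkage(pdist(X, metric="jaccard"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust") - 1

# Cluster assignment accuracy, up to label switching
acc = max(np.mean(labels == true), np.mean(labels != true))
```

    With profiles this well separated, recovery is near-perfect even at n = 50; weaker profiles would be the interesting stress test.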

  16. EDXRF applied to the chemical element determination of small invertebrate samples

    International Nuclear Information System (INIS)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de

    2015-01-01

    Energy-dispersive X-ray fluorescence (EDXRF) is a fast analytical technique of easy operation; it nevertheless demands reliable analytical curves because of the intrinsic matrix dependence and interference during the analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented and a group of chemical elements can be determined in diverse biological matrices, depending on the chemical element concentration. For invertebrates in particular, EDXRF offers the possibility of analysing small samples, using a collimator that directs the incident X-rays onto a small surface of the analysed sample. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analysed. After sampling by using pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered by XRF polyethylene film. Analyses were performed at a pressure lower than 30 Pa, varying voltage and electric current according to the chemical element to be analysed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry (GFAAS) after acid treatment (a mixture of nitric acid and hydrogen peroxide) of the samples. Compared to the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the En numbers were in the range -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (less than 100 mg), with the advantage of preserving the samples. (author)

  18. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk, with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small-scale DSTs to support scale-up estimates of full-scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high-level waste to the WTP. WRPS has performed small-scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST behave similarly and that full-scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper focuses on the analytical data collected from mixing, sampling, and batch transfer testing in the small-scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data are provided. The paper then discusses the processing and manipulation of the data that is necessary to begin evaluating sampling and batch transfer performance. This discussion also includes the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests.

  19. Biota dose assessment of small mammals sampled near uranium mines in northern Arizona

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Minter, K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kuhne, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kubilius, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2018-01-09

    In 2015, the U. S. Geological Survey (USGS) collected approximately 50 small mammal carcasses from Northern Arizona uranium mines and other background locations. Based on the highest gross alpha results, 11 small mammal samples were selected for radioisotopic analyses. None of the background samples had significant gross alpha results. The 11 small mammals were identified relative to the three ‘indicator’ mines located south of Fredonia, AZ on the Kanab Plateau (Kanab North Mine, Pinenut Mine, and Arizona 1 Mine) (Figure 1-1), which are operated by Energy Fuels Resources Inc. (EFRI). EFRI annually reports soil analysis for uranium and radium-226 using Arizona Department of Environmental Quality (ADEQ)-approved Standard Operating Procedures for Soil Sampling (EFRI 2016a, 2016b, 2017). In combination with the USGS small mammal radioisotopic tissue analyses, a biota dose assessment was completed by Savannah River National Laboratory (SRNL) using the RESidual RADioactivity-BIOTA (RESRAD-BIOTA, V. 1.8) dose assessment tool provided by the Argonne National Laboratory (ANL 2017).

  20. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Science.gov (United States)

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean-scaled statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra and Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness in small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas, either open- or closed-book, were used to illustrate the real-world performance of this statistic.

  1. Statistical issues in reporting quality data: small samples and casemix variation.

    Science.gov (United States)

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in the analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With a broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
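    The 'shrinkage' estimation mentioned for small-sample units can be sketched in an empirical-Bayes style, assuming for brevity that the between-unit variance tau2 and the within-unit variance sigma2 are known (in practice both are estimated from the data):

```python
import numpy as np

def shrink_unit_means(unit_scores, tau2, sigma2):
    """Pull each unit's raw mean toward the grand mean with a weight
    equal to the reliability of that unit's mean, tau2/(tau2 + sigma2/n):
    units with few patients are shrunk hardest."""
    grand = np.mean([np.mean(s) for s in unit_scores])
    estimates = []
    for s in unit_scores:
        n = len(s)
        w = tau2 / (tau2 + sigma2 / n)   # reliability of this unit's mean
        estimates.append(w * np.mean(s) + (1.0 - w) * grand)
    return estimates

# Three hypothetical units: a large unit scoring 0, a tiny and a large unit scoring 1.
units = [[0.0] * 50, [1.0] * 4, [1.0] * 100]
est = shrink_unit_means(units, tau2=0.5, sigma2=1.0)
# the n = 4 unit is pulled further toward the grand mean than the n = 100 unit
```

    The weight w is exactly the reliability quantity discussed in the abstract, so units whose reliability falls below a threshold such as 0.9 report estimates visibly closer to the overall average.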

  2. Microdochium nivale and Microdochium majus in seed samples of Danish small grain cereals

    DEFF Research Database (Denmark)

    Nielsen, L. K.; Justesen, A. F.; Jensen, J. D.

    2013-01-01

    Microdochium nivale and Microdochium majus are two of the fungal species found in the Fusarium head blight (FHB) complex infecting small grain cereals. Quantitative real-time PCR assays were designed to separate the two Microdochium species based on the translation elongation factor 1a gene (TEF-1a) and used to analyse a total of 374 seed samples of wheat, barley, triticale, rye and oat sampled from farmers' fields across Denmark from 2003 to 2007. Both fungal species were detected in the five cereal species, but M. majus showed a higher prevalence than M. nivale in most years in all cereal species except rye, in which M. nivale represented a larger proportion of the biomass and was more prevalent than M. majus in some samples. Historical samples of wheat and barley from 1957 to 2000 similarly showed a strong prevalence of M. majus over M. nivale, indicating that M. majus has been the main...

  3. Basic distribution free identification tests for small size samples of environmental data

    International Nuclear Information System (INIS)

    Federico, A.G.; Musmeci, F.

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and the assumption of normal distributions is often unrealistic. On the other hand, the spread of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem, introducing the feasibility of two non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
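    The 'full resampling' approach alluded to above is essentially a Monte Carlo permutation test: under the hypothesis that both small samples were drawn from the same population, every relabelling of the pooled data is equally probable. A generic sketch (the standard test, not necessarily the authors' exact program):

```python
import numpy as np

def permutation_test(x, y, n_perm=20000, seed=0):
    """Two-sample permutation test on the absolute difference of means.
    Returns the fraction of random relabellings of the pooled data whose
    statistic is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    nx = len(x)
    observed = abs(np.mean(x) - np.mean(y))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:nx].mean() - pooled[nx:].mean()) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0

x = np.arange(8) * 0.1        # two invented small samples
y = x + 5.0                   # clearly shifted
p = permutation_test(x, y)    # very small p-value
```

    No distributional assumption is made, which is the point of the abstract: with samples this small, exact or Monte Carlo relabelling replaces asymptotic normal theory.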

  4. Quantifying predictability through information theory: small sample estimation in a non-Gaussian framework

    International Nuclear Information System (INIS)

    Haven, Kyle; Majda, Andrew; Abramov, Rafail

    2005-01-01

    Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short-term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic, computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was produced by a given prediction strategy. The sample utility is the minimum predictability, with a statistical level of confidence, which is implied by the data. Two practical algorithms for measuring such a sample utility are developed here. The first technique is based on the statistical method of null-hypothesis testing, while the second is based upon a central limit theorem for the relative entropy of moment-based probability densities. These techniques are tested on known probability densities with parameterized bimodality and skewness, and then applied to the Lorenz '96 model, a recently developed 'toy' climate model with chaotic dynamics mimicking the atmosphere. The results show a detection of non-Gaussian tendencies of prediction densities at small ensemble sizes, with between 50 and 100 members, at a 95% confidence level.

  5. Suitability of small diagnostic peripheral-blood samples for cell-therapy studies.

    Science.gov (United States)

    Stephanou, Coralea; Papasavva, Panayiota; Zachariou, Myria; Patsali, Petros; Epitropou, Marilena; Ladas, Petros; Al-Abdulla, Ruba; Christou, Soteroulla; Antoniou, Michael N; Lederer, Carsten W; Kleanthous, Marina

    2017-02-01

    Primary hematopoietic stem and progenitor cells (HSPCs) are key components of cell-based therapies for blood disorders and are thus the authentic substrate for related research. We propose that ubiquitous small-volume diagnostic samples represent a readily available and as yet untapped resource of primary patient-derived cells for cell- and gene-therapy studies. In the present study we compare isolation and storage methods for HSPCs from normal and thalassemic small-volume blood samples, considering genotype, density-gradient versus lysis-based cell isolation, and cryostorage media with different serum contents. Downstream analyses include viability, recovery, differentiation in semi-solid media, and performance in liquid cultures and viral transductions. We demonstrate that HSPCs isolated either by ammonium-chloride potassium (ACK)-based lysis or by gradient isolation are suitable for functional analyses in clonogenic assays, high-level HSPC expansion, and efficient lentiviral transduction. For cryostorage of cells, gradient isolation is superior to ACK lysis, and cryostorage in freezing media containing 50% fetal bovine serum demonstrated good results across all tested criteria. For assays on freshly isolated cells, ACK lysis performed similarly to, and for thalassemic samples better than, gradient isolation, at a fraction of the cost and hands-on time. All isolation and storage methods show considerable variation within sample groups, but this is particularly acute for density-gradient isolation of thalassemic samples. This study demonstrates the suitability of small-volume blood samples for storage and preclinical studies, opening up the research field of HSPC and gene therapy to any blood diagnostic laboratory with corresponding bioethics approval for experimental use of surplus material. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  6. Taking sputum samples from small children with cystic fibrosis: a matter of cooperation

    DEFF Research Database (Denmark)

    Pehn, Mette; Bregnballe, Vibeke

    2014-01-01

    Objectives: An important part of disease control in the Danish guidelines for care of patients with cystic fibrosis (CF) is a monthly sputum sample obtained by tracheal suction. Coping with this unpleasant procedure in small children depends heavily on support from parents and nurse. The objective of this study was to develop a tool to help parents and children cope with tracheal suctioning. Methods: Three short videos showing how nurses perform tracheal suctioning to get a sputum sample from small children with cystic fibrosis were made. The videos were shown to and discussed with parents and children to help them identify their own challenges in coping with the procedure. The study was carried out in the outpatient clinic at the CF centre, Aarhus University Hospital. Results: The videos are a useful tool to convince the parents, nurses and children from the age of about four years...

  7. Auto-validating von Neumann rejection sampling from small phylogenetic tree spaces

    Directory of Open Access Journals (Sweden)

    York Thomas

    2009-01-01

    Background: In phylogenetic inference one is interested in obtaining samples from the posterior distribution over the tree space on the basis of some observed DNA sequence data. One of the simplest sampling methods is the rejection sampler due to von Neumann. Here we introduce an auto-validating version of the rejection sampler, via interval analysis, to rigorously draw samples from posterior distributions over small phylogenetic tree spaces. Results: The posterior samples from the auto-validating sampler are used to rigorously (i) estimate posterior probabilities for different rooted topologies based on mitochondrial DNA from human, chimpanzee and gorilla, (ii) conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites from three primates, and (iii) obtain a posterior estimate of the human-neanderthal divergence time. Conclusion: This solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over rooted and unrooted small tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
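    Stripped of the interval-analysis machinery that makes it auto-validating, the von Neumann rejection sampler itself is only a few lines. A sketch with an invented Beta(2, 2) target, unrelated to the tree spaces of the paper:

```python
import numpy as np

def rejection_sample(target_pdf, proposal_draw, proposal_pdf, M, n, seed=0):
    """von Neumann rejection sampling: accept a proposal draw x with
    probability target_pdf(x) / (M * proposal_pdf(x)), where M bounds
    the density ratio; accepted draws are i.i.d. from the target."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = proposal_draw(rng)
        if rng.random() * M * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return np.array(out)

# Beta(2, 2) via a Uniform(0, 1) proposal; the target density peaks
# at 1.5 (at x = 0.5), so M = 1.5 is a valid envelope constant.
beta_pdf = lambda t: 6.0 * t * (1.0 - t)
draws = rejection_sample(beta_pdf, lambda r: r.random(), lambda t: 1.0, 1.5, 5000)
```

    The paper's contribution is replacing the hand-derived bound M with a rigorous interval-arithmetic enclosure of the posterior density, so the accept/reject decision is provably correct despite floating-point error.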

  8. Mass amplifying probe for sensitive fluorescence anisotropy detection of small molecules in complex biological samples.

    Science.gov (United States)

    Cui, Liang; Zou, Yuan; Lin, Ninghang; Zhu, Zhi; Jenkins, Gareth; Yang, Chaoyong James

    2012-07-03

    Fluorescence anisotropy (FA) is a reliable and excellent choice for fluorescence sensing. One of the key factors influencing the FA value of any molecule is the molar mass of the molecule being measured. As a result, FA methods with functional nucleic acid aptamers have been limited to macromolecules such as proteins and are generally not applicable to the analysis of small molecules, because their molecular masses are relatively too small to produce observable FA changes. We report here a molecular-mass-amplifying strategy to construct anisotropy aptamer probes for small molecules. The probe is designed in such a way that only when a target molecule binds to the probe does it activate its binding ability to an anisotropy amplifier (a high-molecular-mass molecule such as a protein), thus significantly increasing the molecular mass and FA value of the probe/target complex. Specifically, a mass-amplifying probe (MAP) consists of a targeting aptamer domain against a target molecule and a molecular-mass-amplifying aptamer domain for the amplifier protein. The probe is initially rendered inactive by a small blocking strand partially complementary to both the target aptamer and the amplifier protein aptamer, so that the mass-amplifying aptamer domain does not bind to the amplifier protein unless the probe has been activated by the target. In this way, we prepared two probes, each comprising a target (ATP and cocaine, respectively) aptamer, a thrombin (as the mass amplifier) aptamer, and a fluorophore. Both probes worked well against their corresponding small-molecule targets, and the detection limits for ATP and cocaine were 0.5 μM and 0.8 μM, respectively. More importantly, because FA is less affected by environmental interferences, ATP in cell media and cocaine in urine were directly detected without any tedious sample pretreatment. Our results established that our molecular-mass-amplifying strategy can be used to design aptamer probes for rapid, sensitive, and selective

  9. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric versus nonparametric analysis, especially with non-normal data. Some methodologists have questioned the validity of parametric tests in such instances and suggested nonparametric tests; others have found nonparametric tests to be too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in a nonparametric bootstrap test that may overcome the problems associated with small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except the Cauchy and extreme-variability lognormal distributions; in such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also performed better than the alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means, and for validating one-way analysis of variance results, for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
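
The pooled-resampling idea in this abstract can be sketched in a few lines: under the null hypothesis both groups share one distribution, so each bootstrap replicate redraws both samples (with replacement) from the pooled data, and the observed t statistic is compared against that null ensemble. A minimal illustration, not the authors' code; the Welch-type statistic and the replicate count are our assumptions:

```python
import math
import random
import statistics

def t_stat(x, y):
    """Welch-type t statistic for two independent samples."""
    nx, ny = len(x), len(y)
    denom = math.sqrt(statistics.variance(x) / nx + statistics.variance(y) / ny)
    if denom == 0.0:  # degenerate bootstrap draw (all values equal)
        return 0.0 if statistics.mean(x) == statistics.mean(y) else math.inf
    return (statistics.mean(x) - statistics.mean(y)) / denom

def pooled_bootstrap_test(x, y, n_boot=2000, seed=0):
    """Two-sided bootstrap test of equal means with pooled resampling:
    under H0 both groups share one distribution, so each replicate
    redraws len(x) and len(y) values with replacement from the pool."""
    rng = random.Random(seed)
    pooled = x + y
    t_obs = abs(t_stat(x, y))
    exceed = sum(
        abs(t_stat([rng.choice(pooled) for _ in range(len(x))],
                   [rng.choice(pooled) for _ in range(len(y))])) >= t_obs
        for _ in range(n_boot)
    )
    return (exceed + 1) / (n_boot + 1)  # add-one form avoids a zero p-value
```

Because resampling is with replacement rather than by permutation, the null ensemble remains well defined even at very small n, which is the advantage the abstract describes.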

  10. Report of the advisory group meeting on elemental analysis of extremely small samples

    International Nuclear Information System (INIS)

    2002-01-01

    This publication contains a summary of the discussions held at the meeting, with a brief description and comparative characteristics of the most common nuclear analytical techniques used for the analysis of very small samples, as well as the conclusions of the meeting. Some aspects of reference materials and quality control are also discussed. The publication also contains individual contributions made by the participants, each of these papers having been provided with an abstract and indexed separately.

  11. Enrichment and determination of small amounts of 90Sr/90Y in water samples

    International Nuclear Information System (INIS)

    Mundschenk, H.

    1979-01-01

    Small amounts of 90Sr/90Y can be concentrated from large volumes of surface water (100 l) by precipitation of the phosphates, using bentonite as an adsorber matrix. In the case of samples containing little or no suspended matter (tap water, ground water, sea water), the daughter 90Y can be extracted directly by using filter beds impregnated with HDEHP. The applicability of both techniques is demonstrated under realistic conditions. (orig.)

  12. A simple technique for measuring the superconducting critical temperature of small (≥10 μg) samples

    International Nuclear Information System (INIS)

    Pereira, R.F.R.; Meyer, E.; Silveira, M.F. da.

    1983-01-01

    A simple technique for measuring the superconducting critical temperature of small (≥10 μg) samples is described. The apparatus is built in the form of a probe, which can be introduced directly into a liquid He storage dewar and permits the determination of the critical temperature, with an imprecision of ±0.05 K above 4.2 K, in about 10 minutes. (Author)

  13. A Non-Random Walk Down Hollywood Boulevard

    DEFF Research Database (Denmark)

    Lepori, Gabriele

    …affect (i.e. grief, proxied by the death of Hollywood Walk of Fame celebrities) on people’s willingness to invest in risky assets (proxied by the daily performance of the U.S. stock market). Using a sample of 1,374 celebrity deaths over the period 1926–2009 and controlling for seasonalities, economic/environmental factors, and market liquidity, I find that the death of popular and beloved celebrities is immediately followed by a 16 basis point increase in stock returns, which is consistent with a rise in the net demand for risky instruments. I also find evidence that the size of this celebrity-death effect…

  14. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  15. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity

    Science.gov (United States)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc to directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.
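
The measure-and-correct approach described above reduces to a simple steady-state energy balance: subtract the separately determined parasitic heat flow from the heater power, then apply Fourier's law to what remains. A schematic calculation; the numbers and the simple planar geometry are illustrative assumptions, not values from the paper:

```python
def thermal_conductivity(q_heater, q_loss, thickness, area, t_hot, t_cold):
    """Steady-state hot plate estimate without guard heaters: the heat
    crossing the sample is the heater power minus the measured stray
    loss to the surroundings; Fourier's law then gives
    k = q * L / (A * dT), in W/(m*K), for SI inputs."""
    q_sample = q_heater - q_loss  # W actually flowing through the sample
    return q_sample * thickness / (area * (t_hot - t_cold))

# Example: 50 mW heater power, 20 mW measured stray loss, a 3 mm thick
# sample of 5 cm^2 cross-section held across a 5 K temperature difference.
k = thermal_conductivity(0.050, 0.020, 0.003, 5e-4, 25.0, 20.0)
```

The resulting k of about 0.036 W/(m*K) is in the range of air, the regime the method targets; the accuracy of the approach hinges entirely on how well q_loss is characterized.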

  16. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
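
The sequential logic here is the standard Wald SPRT: accumulate a log-likelihood ratio over observations and stop as soon as it crosses one of two boundaries set by the desired error rates; until then the verdict is "keep sampling", which is the abstract's third group. A generic Bernoulli sketch, not the authors' TDT-specific implementation:

```python
import math

def sprt_bernoulli(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test of H0: p = p0 versus
    H1: p = p1 on a stream of 0/1 outcomes. Returns a decision
    ('accept_h0' / 'accept_h1' / 'continue') and the number of
    observations consumed."""
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "accept_h0", n
        if llr >= upper:
            return "accept_h1", n
    return "continue", len(observations)   # not enough evidence yet
```

The 'continue' outcome is exactly the flexibility the abstract credits to SPRT over TDT: ambiguous SNPs are deferred rather than forced into one of two classes.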

  17. Improvement of 137Cs analysis in small volume seawater samples using the Ogoya underground facility

    International Nuclear Information System (INIS)

    Hirose, K.; Komura, K.; Kanazawa University, Ishikawa; Aoyama, M.; Igarashi, Y.

    2008-01-01

    137Cs in seawater is one of the most powerful tracers of water motion. Large volumes of samples have traditionally been required for the determination of 137Cs in seawater. This paper describes improvements to the separation and purification processes for 137Cs in seawater, including purification of 137Cs using hexachloroplatinic acid in addition to ammonium phosphomolybdate (AMP) precipitation. As a result, we succeeded in determining 137Cs in seawater with a sample volume as small as 10 liters by using ultra-low background gamma-spectrometry in the Ogoya underground facility. The 137Cs detection limit was about 0.1 mBq (counting time: 10⁶ s). This method is applied to determine 137Cs in small samples of South Pacific deep waters. (author)

  18. Determination of phosphorus in small amounts of protein samples by ICP-MS.

    Science.gov (United States)

    Becker, J Sabine; Boulyga, Sergei F; Pickhardt, Carola; Becker, J; Buddrus, Stefan; Przybylski, Michael

    2003-02-01

    Inductively coupled plasma mass spectrometry (ICP-MS) is used for phosphorus determination in protein samples. A small amount of solid protein sample (down to 1 µg) or of digested protein solution (1–10 µL) was denatured in nitric acid and hydrogen peroxide by closed-microvessel microwave digestion. Phosphorus determination was performed with an optimized analytical method using a double-focusing sector field inductively coupled plasma mass spectrometer (ICP-SFMS) and quadrupole-based ICP-MS (ICP-QMS). For quality control of the phosphorus determination, a certified reference material (CRM), single cell proteins (BCR 273) with a high phosphorus content of 26.8 ± 0.4 mg g⁻¹, was analyzed. To reduce the required sample amount as far as possible, the homogeneity of CRM BCR 273 was investigated. The relative standard deviation and measurement accuracy in ICP-QMS were within 2%, 3.5%, 11% and 12% when using CRM BCR 273 sample weights of 40 mg, 5 mg, 1 mg and 0.3 mg, respectively. The lowest possible sample weight for an accurate phosphorus analysis in protein samples by ICP-MS is discussed. The analytical method developed was applied to the analysis of homogeneous protein samples in very low amounts [1–100 µg of solid protein sample, e.g. beta-casein, or down to 1 µL of protein or digest in solution (e.g., tau protein)]. A further reduction of the diluted protein solution volume was achieved by the application of flow injection in ICP-SFMS, which is discussed with reference to real protein digests after protein separation using 2D gel electrophoresis. The detection limits for phosphorus in biological samples by ICP-SFMS were down to the ng g⁻¹ level. The present work discusses the figures of merit for the determination of phosphorus in small amounts of protein sample by ICP-SFMS in comparison to ICP-QMS.

  19. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with a χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with fewer than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of cyst nematode population densities.
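
The 17% precision criterion above is simply the coefficient of variation of repeated bulk-sample counts; a small helper makes the check explicit. The counts below are made-up illustrative data, not the study's:

```python
import statistics

def sampling_cv(counts):
    """Coefficient of variation (%) of repeated bulk-sample cyst counts,
    used as the precision measure for judging a sampling scheme."""
    return 100.0 * statistics.stdev(counts) / statistics.mean(counts)

# Five 10-core bulk samples (cysts per 100 g soil) from one hypothetical
# 1-m^2 plot above the 90-cyst contamination level:
cv = sampling_cv([95, 110, 88, 102, 97])
precise_enough = cv <= 17.0  # the precision target recommended above
```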

  20. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    Directory of Open Access Journals (Sweden)

    L. Renbaum-Wolff

    2013-01-01

    Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10⁻³ and 10³ Pascal seconds (Pa s) in samples that cover a range of chemical properties, with real-time relative humidity and temperature control; hence, the technique should be well suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.

  1. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    Science.gov (United States)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study was to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than with the other three calibration methods in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application…

  2. Nano-Scale Sample Acquisition Systems for Small Class Exploration Spacecraft

    Science.gov (United States)

    Paulsen, G.

    2015-12-01

    The paradigm for space exploration is changing. Large and expensive missions are very rare, and the space community is turning to smaller, lighter, and less expensive missions that can still accomplish significant exploration. These missions are also within reach of commercial companies such as the Google Lunar X Prize teams that develop small-scale lunar missions. Recent commercial endeavors such as Planet Labs, Inc. and Skybox Imaging, Inc. show that there are new benefits and business models associated with the miniaturization of space hardware. The Nano-Scale Sample Acquisition System includes the NanoDrill for capture of small rock cores and PlanetVac for capture of surface regolith. These two systems are part of the ongoing effort to develop "Micro Sampling" systems for deployment by small spacecraft with limited payload capacities. The ideal applications include prospecting missions to the Moon and asteroids. The MicroDrill is a rotary-percussive coring drill that captures cores 7 mm in diameter and up to 2 cm long. The drill weighs less than 1 kg and can capture a core from a 40 MPa strength rock within a few minutes, with less than 10 W of power and less than 10 N of preload. The PlanetVac is a pneumatic regolith acquisition system that can capture a surface sample in a touch-and-go maneuver. These sampling systems were integrated within the footpads of a commercial quadcopter for testing. As such, they could also be used by geologists on Earth to explore difficult-to-reach locations.

  3. Colonic stem cell data are consistent with the immortal model of stem cell division under non-random strand segregation.

    Science.gov (United States)

    Walters, K

    2009-06-01

    Colonic stem cells are thought to reside towards the base of crypts of the colon, but their numbers and proliferation mechanisms are not well characterized. A defining property of stem cells is that they are able to divide asymmetrically, but it is not known whether they always divide asymmetrically (immortal model) or whether there are occasional symmetrical divisions (stochastic model). By measuring the diversity of methylation patterns in colon crypt samples, a recent study found evidence in favour of the stochastic model, assuming random segregation of stem cell DNA strands during cell division. Here, preferential segregation of the template strand is considered, consistent with the 'immortal strand hypothesis', and its effect on the conclusions of previously published results is explored. For a sample of crypts, it is shown how, under the immortal model, to calculate the mean and variance of the number of unique methylation patterns allowing for non-random strand segregation, and how to compare them with those observed. The calculated mean and variance are consistent with an immortal model that incorporates non-random strand segregation for a range of stem cell numbers and levels of preferential strand segregation. Allowing for preferential strand segregation considerably alters previously published conclusions relating to stem cell numbers and turnover mechanisms. Evidence in favour of the stochastic model may not be as strong as previously thought.

  4. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    International Nuclear Information System (INIS)

    Jannot, Yves; Godefroy, Justine; Degiovanni, Alain; Grigorova-Moutiers, Veneta

    2017-01-01

    Hot plate methods under steady-state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T₀ and T₁ of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. The latter method can be used only if the thickness/width ratio of the sample is sufficiently low, while the guarded hot plate method requires large-width samples (typical cross-section of 0.6 × 0.6 m²); neither method can therefore be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T₀ and T₁ relative to the ambient temperature Tₐ, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It is shown that these optimal values do not depend on the size or on the thermal conductivity of the samples (in the range 0.015–0.2 W m⁻¹ K⁻¹), but only on Tₐ. The experimental results obtained validate the method for several reference samples for values of the thickness/width ratio up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m². (paper)

  5. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    Science.gov (United States)

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or treatment, the incidence of an adverse event (AE) is always a concern. When sample sizes are small, large-sample inferential approaches to an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement, and the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
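
Quantities (1) and (2) above fall out of a conjugate Beta-Binomial model: with a Beta(a, b) prior and x AEs in n patients, the posterior is Beta(a + x, b + n − x), and threshold probabilities or bounds are posterior tail areas. A Monte Carlo sketch using only the standard library; the uniform Beta(1, 1) prior and the numbers in the usage are illustrative choices, not the article's:

```python
import random

def ae_confidence(n_patients, n_events, threshold,
                  prior=(1.0, 1.0), draws=100_000, seed=1):
    """Posterior probability that the AE incidence proportion p exceeds
    `threshold`: a Beta(a, b) prior with a Binomial likelihood gives a
    Beta(a + events, b + non-events) posterior, sampled here by Monte
    Carlo with the stdlib's betavariate."""
    a, b = prior
    rng = random.Random(seed)
    a_post, b_post = a + n_events, b + (n_patients - n_events)
    hits = sum(rng.betavariate(a_post, b_post) > threshold
               for _ in range(draws))
    return hits / draws

def ae_upper_bound(n_patients, n_events, conf=0.95,
                   prior=(1.0, 1.0), draws=100_000, seed=1):
    """Upper bound on p holding with posterior confidence `conf`,
    i.e. the conf-quantile of the Beta posterior."""
    a, b = prior
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(a + n_events, b + n_patients - n_events)
                     for _ in range(draws))
    return samples[int(conf * draws)]
```

For example, after observing 0 AEs in 30 patients, `ae_upper_bound(30, 0)` gives a 95% upper bound near 0.09, close to the familiar "rule of three" value 3/n = 0.1; a closed-form incomplete-beta evaluation would replace the sampling in production code.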

  6. Tools for Inspecting and Sampling Waste in Underground Radioactive Storage Tanks with Small Access Riser Openings

    International Nuclear Information System (INIS)

    Nance, T.A.

    1998-01-01

    Underground storage tanks with 2-inch to 3-inch diameter access ports at the Department of Energy's Savannah River Site have been used to store radioactive solvents and sludge. In order to close these tanks, their contents first need to be quantified in terms of volume and chemical and radioactive characteristics. To provide information on the volume of waste contained within the tanks, a small remote inspection system was needed. This inspection system was designed to provide lighting, pan and tilt capabilities, zoom, and color video in an inexpensive package. The system also needed to be operated inside a plastic tent built over the access port to contain any contamination exiting from the port. The system had to be built to travel into the small port opening, through the riser pipe, into the tank evacuated space, and back out of the riser pipe and access port, with no possibility of being caught and blocking the access riser. Long thin plates were found in many access riser pipes, blocking the inspection system from penetrating into the tank interiors. Retrieval tools were developed to clear the plates from the tanks, and sampling devices were developed to sample the solvent while providing safe containment for the samples. This paper discusses the inspection systems, the tools for clearing access pipes, and the solvent sampling tools developed to evaluate the contents of the underground solvent storage tanks

  7. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. A gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. First, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in test analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.

  8. Simultaneous small-sample comparisons in longitudinal or multi-endpoint trials using multiple marginal models

    DEFF Research Database (Denmark)

    Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A

    2018-01-01

    Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels…

  9. Small Sample Reactivity Measurements in the RRR/SEG Facility: Reanalysis using TRIPOLI-4

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, Andrew [Idaho National Lab. (INL), Idaho Falls, ID (United States); Palmiotti, Guiseppe [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This work involved reanalyzing the RRR/SEG integral experiments performed at the Rossendorf facility in Germany throughout the 1970s and 80s. These small sample reactivity worth measurements were carried out using the pile oscillator technique for many different fission products, structural materials, and standards. The coupled fast-thermal system was designed such that the measurements would provide insight into elemental data, specifically the competing effects between neutron capture and scatter. Comparing the measured to calculated reactivity values can then provide adjustment criteria to ultimately improve nuclear data for fast reactor designs. Due to the extremely small reactivity effects measured (typically less than 1 pcm) and the specific heterogeneity of the core, the tool chosen for this analysis was TRIPOLI-4. This code allows for high-fidelity 3-dimensional geometric modeling, and the most recent (unreleased) version is capable of exact perturbation theory.

  10. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data under two designs: (1) the two eyes of a subject in different comparison groups; (2) the two eyes of a subject in the same comparison group; under various sample sizes (5–50), inter-eye correlations (0–0.75) and effect sizes (0–0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test based on generalized estimating equations (GEE), and the F-test based on a linear mixed effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM performed better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performed uniformly well: the two-sample t-test (on the average of the two eyes or on a random eye) achieved better control of the type I error but yielded lower statistical power. In both designs, the GEE Wald test inflated the type I error rate and the GEE score test had lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in different comparison groups, and the t-test using the average of the two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the analysis approach.
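
For design 2 (both eyes in the same group), the recommendation above amounts to collapsing each subject's two correlated eye measurements to their average before a between-group comparison. A sketch of that reduction, paired with a permutation p-value so that no t reference distribution is needed at small n; the (right, left) data layout and the permutation substitute are our assumptions, not the paper's code:

```python
import math
import random
import statistics

def eye_average_t(group_a, group_b):
    """Collapse each subject's two correlated eye measurements to their
    average, then compare groups with a Welch t statistic. Each group
    is a list of (right_eye, left_eye) tuples."""
    a = [(r + l) / 2 for r, l in group_a]
    b = [(r + l) / 2 for r, l in group_b]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return diff / se

def permutation_p(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the averaged-eye comparison:
    subjects (not eyes) are shuffled between groups, which preserves
    the inter-eye correlation within each subject."""
    rng = random.Random(seed)
    t_obs = abs(eye_average_t(group_a, group_b))
    combined = list(group_a) + list(group_b)
    na = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(combined)
        if abs(eye_average_t(combined[:na], combined[na:])) >= t_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Averaging the eyes keeps the subject as the unit of analysis, which is why it controls the type I error where a naive eye-level two-sample test would not.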

  11. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ~20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.

  12. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Jesse; Townson, Paul; Vanatta, Matt [EnergySolutions, Engineering and Technology Group, Richland, WA, 99354 (United States)

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009 DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity and batch transfer performance data at two different scales. These data will be used to examine the baseline capacity for a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and to provide scale-up information to predict full-scale operational performance. This information will in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The Small Scale Mixing Demonstration (SSMD) platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make-up facilities. All tank internals have been modeled, including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank and to collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales. This work involved a combination of unique instruments to understand the three-dimensional distribution of solids using a combination of Coriolis meter measurements, in situ chord length distribution

  13. Static, Mixed-Array Total Evaporation for Improved Quantitation of Plutonium Minor Isotopes in Small Samples

    Science.gov (United States)

    Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; Spencer, Khalil J.

    2016-06-01

    Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low-abundance isotope ratios. Here we examine static, mixed-array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small-sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.

  14. Improving Sleep for Hospitalized Antepartum Patients: A Non-Randomized Controlled Pilot Study.

    Science.gov (United States)

    Lee, Kathryn A; Gay, Caryl L

    2017-12-15

    To evaluate the feasibility and efficacy of a hospital-based protocol for improving sleep in high-risk antepartum patients, sleep measures were compared during 1 week of hospitalization before and after implementing a Sleep Improvement Protocol for Antepartum Patients (SIP-AP). A non-randomized convenience sample of usual-care controls was compared to a subsequent intervention sample after the protocol was implemented. Women were eligible if they spoke English, were medically stable, were pregnant for at least 20 weeks, and had been hospitalized at least 24 hours; 25 pregnant women had sufficient data for analyses (11 controls, 14 intervention). Sleep was assessed in three ways: the Pittsburgh Sleep Quality Index, completed after obtaining consent, to estimate sleep quality prior to hospital admission; a sleep diary, completed each hospital day; and the General Sleep Disturbance Scale, completed at 7 days or prior to hospital discharge. Symptoms that could affect sleep were assessed with the Memorial Symptom Assessment Scale. Both groups recorded similar sleep duration (7 hours), but the intervention group had fewer symptoms and significantly (P = .015) lower sleep disturbance scores (53.1 ± 14.5) than controls (71.9 ± 18.8). Participant feedback about the intervention was positive, although adherence to components of the intervention protocol was variable. This pilot study provides evidence of the feasibility and preliminary efficacy of the SIP-AP intervention for reducing symptoms and improving sleep of antepartum patients during hospitalization. Further detailed evaluation of specific components of this protocol is warranted, and other types of hospitalized patients may benefit from unit-based modifications to this SIP-AP protocol. © 2017 American Academy of Sleep Medicine

  15. Use of aspiration method for collecting brain samples for rabies diagnosis in small wild animals.

    Science.gov (United States)

    Iamamoto, K; Quadros, J; Queiroz, L H

    2011-02-01

    In developing countries such as Brazil, where canine rabies is still a considerable problem, samples from wildlife species are infrequently collected and submitted for rabies screening. A collaborative study involving environmental biologists and veterinarians was established for rabies epidemiological research in a specific ecological area of São Paulo State, Brazil. The wild animals' brains must be collected without skull damage because skull measurements are important for identifying the captured animal species. For this purpose, samples from bats and small mammals were collected by an aspiration method, inserting a plastic pipette into the brain through the foramen magnum. Beyond the progressive increase in the use of the plastic pipette technique across studies, this method could foster collaborative research between wildlife scientists and rabies epidemiologists, thus improving rabies surveillance. © 2009 Blackwell Verlag GmbH.

  16. Investigation of Phase Transition-Based Tethered Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Scharf, Daniel; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential of shape memory actuation to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and possible return to Earth. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  17. Modeling and Testing of Phase Transition-Based Deployable Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Keim, Jason; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential of shape memory actuation to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and return. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  18. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Vivek [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cai, Guowei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gribok, Andrei V. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mahadevan, Sankaran [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure, based on heterogeneous measurements, in order to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and the factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code, based on the Multiphysics Object Oriented Simulation Environment. The implemented model is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and in sodium hydroxide solution at elevated temperature to study how the ingress of sodium and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples, and the results are summarized.
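
    The percolation idea can be sketched on a small 2D lattice: sites where ASR is randomly initiated either do or do not form a connected path across the domain, which is what links local reaction to bulk transport properties. The code below is a generic site-percolation check, not the GRIZZLY implementation:

```python
import random
from collections import deque

def percolates(n, p, seed=0):
    """Randomly 'initiate ASR' at sites of an n x n lattice with probability p,
    then check whether damaged sites connect the top row to the bottom row
    via 4-neighbour adjacency (breadth-first search)."""
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    q = deque((0, j) for j in range(n) if grid[0][j])
    for _, j in q:
        seen[0][j] = True
    while q:
        i, j = q.popleft()
        if i == n - 1:          # reached the bottom row: spanning cluster
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and not seen[a][b]:
                seen[a][b] = True
                q.append((a, b))
    return False
```

    Well above the 2D site-percolation threshold (about 0.593) a spanning cluster is essentially certain; well below it, essentially impossible.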

  19. Laparoscopic Complete Mesocolic Excision versus Open Complete Mesocolic Excision for Transverse Colon Cancer: Long-Term Survival Results of a Prospective Single Centre Non-Randomized Study.

    Science.gov (United States)

    Storli, Kristian Eeg; Eide, Geir Egil

    2016-01-01

    Laparoscopic complete mesocolic excision (CME) used in the treatment of transverse colon cancer has been questioned on the basis of its technical challenges. The aim of this study was to evaluate the medium- and long-term clinical and survival outcomes after laparoscopic and open CME for transverse colon cancer and to compare the two approaches. This was a retrospective non-randomized study of patients with prospectively registered data on open and laparoscopic CME for transverse colon cancer, tumour-node-metastasis stages I-III, operated on between 2007 and 2014. It was a single-centre study in a community teaching hospital. A total of 56 patients with transverse colon cancer were included, excluding those with tumours in the colonic flexures. The outcomes were 4-year time to recurrence (TTR) and cancer-specific survival (CSS); morbidity was also measured. The 4-year TTR was 93.9% in the laparoscopic group and 91.3% in the open group (p = 0.71). The 4-year CSS was 97.0% in the laparoscopic group and 91.3% in the open group (p = 0.42). This was a single-institution study with a small sample size. The results suggest that the laparoscopic CME approach might be the preferred approach for transverse colon cancer, especially regarding its benefits in terms of short-term morbidity, length of stay and oncological outcome. © 2016 S. Karger AG, Basel.

  20. Predicting Antitumor Activity of Peptides by Consensus of Regression Models Trained on a Small Data Sample

    Directory of Open Access Journals (Sweden)

    Ivanka Jerić

    2011-11-01

    Predicting the antitumor activity of compounds using regression models trained on a small number of compounds with measured biological activity is an ill-posed inverse problem; yet it occurs very often within the academic community. To counteract, to some extent, the overfitting caused by small training data, we propose using a consensus of six regression models for predicting the biological activity of a virtual library of compounds. The QSAR descriptors of 22 compounds related to the opioid growth factor (OGF, Tyr-Gly-Gly-Phe-Met) with known antitumor activity were used to train the regression models: a feed-forward artificial neural network, the k-nearest neighbor, sparseness-constrained linear regression, and the linear and nonlinear (with polynomial and Gaussian kernels) support vector machines. The regression models were applied to a virtual library of 429 compounds, resulting in six lists of candidate compounds ranked by predicted antitumor activity. The highly ranked candidate compounds were synthesized, characterized and tested for antiproliferative activity. Some of the prepared peptides showed more pronounced activity than the native OGF; however, they were less active than highly ranked compounds selected previously by the radial basis function support vector machine (RBF SVM) regression model. The ill-posedness of the related inverse problem causes unstable behavior of trained regression models on test data. These results point to the high complexity of prediction based on regression models trained on a small data sample.
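
    The consensus step can be sketched independently of the six fitted models: given each model's predicted activities, rank the candidates within each model and order them by mean rank. The model names and scores below are hypothetical illustration data:

```python
def consensus_rank(score_lists):
    """score_lists: dict of model name -> {compound: predicted activity}.
    Rank compounds within each model (rank 1 = most active), then order
    compounds by their mean rank across all models."""
    compounds = list(next(iter(score_lists.values())))
    mean_rank = {}
    for c in compounds:
        ranks = []
        for scores in score_lists.values():
            ordered = sorted(scores, key=scores.get, reverse=True)
            ranks.append(ordered.index(c) + 1)
        mean_rank[c] = sum(ranks) / len(ranks)
    return sorted(mean_rank, key=mean_rank.get)

# hypothetical predicted activities from three of the trained models
models = {
    "knn": {"A": 0.9, "B": 0.5, "C": 0.7, "D": 0.1},
    "svm": {"A": 0.8, "B": 0.6, "C": 0.9, "D": 0.2},
    "ann": {"A": 0.7, "B": 0.4, "C": 0.8, "D": 0.3},
}
consensus = consensus_rank(models)
```

    Averaging ranks rather than raw scores keeps any single overfit model from dominating the shortlist of compounds sent to synthesis.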

  1. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools, and also statistical challenges, to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p, small n" paradigm, the traditional Hotelling's T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets, and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
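
    A minimal sketch of a shrinkage-based diagonal Hotelling-type statistic: the covariance matrix is replaced by its diagonal, and each per-feature variance is shrunk toward the average variance so the statistic stays stable when p exceeds n. The convex-combination shrinkage with a fixed weight below is a simplification of the paper's estimator, not its exact form:

```python
import statistics

def diag_hotelling_shrunk(X, mu0, lam=0.5):
    """X: list of n samples, each a list of p features; mu0: null mean vector.
    One-sample diagonal Hotelling-type statistic
        T2 = n * sum_j (xbar_j - mu0_j)^2 / s_j~^2,
    where each variance is shrunk toward the average variance:
        s_j~^2 = lam * mean(s^2) + (1 - lam) * s_j^2."""
    n, p = len(X), len(X[0])
    means = [statistics.mean(col) for col in zip(*X)]
    varis = [statistics.variance(col) for col in zip(*X)]
    vbar = sum(varis) / p
    shrunk = [lam * vbar + (1 - lam) * v for v in varis]
    return n * sum((m - m0) ** 2 / s for m, m0, s in zip(means, mu0, shrunk))

# works even with p > n, where the full covariance matrix would be singular
X = [[1, 2, 3, 4], [2, 3, 4, 5], [0, 1, 2, 3]]   # n = 3 samples, p = 4 features
t2 = diag_hotelling_shrunk(X, mu0=[0, 0, 0, 0])
```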

  2. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-01-01

    DNA sequencing techniques bring novel tools, and also statistical challenges, to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p, small n" paradigm, the traditional Hotelling's T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets, and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.

  3. Evaluation applications of instrument calibration research findings in psychology for very small samples

    Science.gov (United States)

    Fisher, W. P., Jr.; Petry, P.

    2016-11-01

    Many published research studies document item calibration invariance across samples using Rasch's probabilistic models for measurement. A new approach to outcomes evaluation for very small samples was employed for two workshop series focused on stress reduction and joyful living, conducted for health system employees and caregivers since 2012. Rasch-calibrated self-report instruments measuring depression, anxiety and stress, and the joyful living effects of mindfulness behaviors were identified in peer-reviewed journal articles. Items from one instrument were modified for use with a US population, other items were simplified, and some new items were written. Participants rated their depression, anxiety and stress, and the effects of their mindfulness behaviors before and after each workshop series. The numbers of participants providing both pre- and post-workshop data were low (16 and 14). Analysis of these small data sets produces results showing that, with some exceptions, the item hierarchies defining the constructs retained the same invariant profiles they had exhibited in the published research (correlations, not disattenuated, ranging from 0.85 to 0.96). In addition, comparisons of the pre- and post-workshop measures for the three constructs showed substantively and statistically significant changes. Implications for program evaluation comparisons, quality improvement efforts, and the organization of communications concerning outcomes in clinical fields are explored.

  4. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    Science.gov (United States)

    Hong, Xinguo; Hao, Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.
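
    The 2D scan strategy is easy to reproduce: expose a grid of points spaced about 1.5 beam widths apart, so each exposure hits sample volume not reached by the damaged region of previous shots. The helper below is an illustrative sketch (units arbitrary), not the beamline control code:

```python
import math

def scan_positions(width, height, beam, factor=1.5):
    """Grid of exposure points over a width x height sample window,
    spaced `factor` beam widths apart, so that each shot probes material
    outside the radiation-damaged region of earlier shots."""
    step = factor * beam
    nx = int(math.floor(width / step + 1e-9)) + 1
    ny = int(math.floor(height / step + 1e-9)) + 1
    return [(i * step, j * step) for j in range(ny) for i in range(nx)]

# e.g. a 0.9 x 0.9 window with a 0.1-wide beam -> 7 x 7 = 49 exposure points
pts = scan_positions(0.9, 0.9, 0.1)
```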

  5. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    International Nuclear Information System (INIS)

    Hong Xinguo; Hao Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.

  6. The use of secondary ion mass spectrometry in forensic analyses of ultra-small samples

    Science.gov (United States)

    Cliff, John

    2010-05-01

    It is becoming increasingly important in forensic science to perform chemical and isotopic analyses on very small sample sizes. Moreover, in some instances the signature of interest may be incorporated in a vast background, making analyses impossible by bulk methods. Recent advances in instrumentation make secondary ion mass spectrometry (SIMS) a powerful tool to apply to these problems. As an introduction, we present three types of forensic analyses in which SIMS may be useful. The causal organism of anthrax (Bacillus anthracis) chelates Ca and other metals during spore formation. Thus, the spores contain a trace element signature related to the growth medium that produced the organisms. Although other techniques have been shown to be useful in analyzing these signatures, the sample size requirements are generally relatively large. We have shown that time-of-flight SIMS (TOF-SIMS), combined with multivariate analysis, can clearly separate Bacillus sp. cultures prepared in different growth media using analytical spot sizes containing approximately one nanogram of spores. An important emerging field in forensic analysis is the provenance of fecal pollution. The strategy of choice for these analyses, developing host-specific nucleic acid probes, has met with considerable difficulty due to lack of specificity of the probes. One potentially fruitful strategy is to combine in situ nucleic acid probing with high-precision isotopic analyses. Bulk analyses of human and bovine fecal bacteria, for example, indicate a relative difference in δ13C content of about 4 per mil. We have shown that sample sizes of several nanograms can be analyzed with the IMS 1280 with precisions capable of separating two per mil differences in δ13C. The NanoSIMS 50 is capable of much better spatial resolution than the IMS 1280, albeit at a cost of analytical precision. Nevertheless, we have documented precision capable of separating five per mil differences in δ13C using analytical spots containing

  7. Vertical Sampling Scales for Atmospheric Boundary Layer Measurements from Small Unmanned Aircraft Systems (sUAS)

    Directory of Open Access Journals (Sweden)

    Benjamin L. Hemingway

    2017-09-01

    The lowest portion of the Earth's atmosphere, known as the atmospheric boundary layer (ABL), plays an important role in the formation of weather events. Simple meteorological measurements collected from within the ABL, such as temperature, pressure, humidity, and wind velocity, are key to understanding the exchange of energy within this region, but conventional surveillance techniques such as towers, radar, weather balloons, and satellites do not provide adequate spatial and/or temporal coverage for monitoring weather events. Small unmanned aircraft, or aerial, systems (sUAS) provide a versatile, dynamic platform for atmospheric sensing that can provide higher spatio-temporal sampling frequencies than available through most satellite sensing methods. They are also able to sense portions of the atmosphere that cannot be measured from ground-based radar, weather stations, or weather balloons, and have the potential to fill gaps in atmospheric sampling. However, research on the vertical sampling scales for collecting atmospheric measurements from sUAS, and the variability of these scales across atmospheric phenomena (e.g., temperature and humidity), is needed. The objective of this study is to use variogram analysis, a common geostatistical technique, to determine optimal spatial sampling scales for two atmospheric variables (temperature and relative humidity) captured from sUAS. Results show that vertical sampling scales of approximately 3 m for temperature and 1.5–2 m for relative humidity were sufficient to capture the spatial structure of these phenomena under the conditions tested. Future work is needed to model these scales across the entire ABL as well as under variable conditions.
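
    The variogram analysis used here can be sketched for a regularly spaced vertical profile: for each lag h, the semivariance is half the mean squared difference between measurements h apart. The helper below is a generic empirical variogram assuming evenly spaced samples, not the study's exact workflow:

```python
def empirical_variogram(z, dz, max_lag):
    """Empirical semivariance gamma(h) for a regularly spaced profile:
    z[i] is measured at altitude i * dz.  For each lag h = k * dz,
        gamma(h) = (1 / (2 * N(h))) * sum_i (z[i + k] - z[i])^2.
    Returns a dict mapping lag (in the units of dz) -> gamma."""
    gamma = {}
    n = len(z)
    for k in range(1, int(max_lag / dz) + 1):
        sq_diffs = [(z[i + k] - z[i]) ** 2 for i in range(n - k)]
        gamma[k * dz] = sum(sq_diffs) / (2 * len(sq_diffs))
    return gamma

# a purely linear profile (e.g. temperature rising steadily with altitude)
profile = [float(i) for i in range(10)]
gamma = empirical_variogram(profile, dz=1.0, max_lag=3.0)
```

    The lag at which gamma stops growing (the range) indicates the vertical scale beyond which measurements are effectively uncorrelated, which is how sampling scales such as the 3 m and 1.5–2 m figures are read off.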

  8. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    Environmental scientific research requires a detector that has sensitivity low enough to reveal the presence of any contaminant in the sample at a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low capacitance germanium well detector manufactured using small anode technology capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well, and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over the existing coaxial and Well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0 - 2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance the detection sensitivity resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra Mathematical efficiency calibration method (In situ Object Calibration Software or ISOCS, and Laboratory Source-less Calibration Software or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable

  9. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    Environmental scientific research requires a detector that has sensitivity low enough to reveal the presence of any contaminant in the sample at a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low capacitance germanium well detector manufactured using small anode technology capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well, and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over the existing coaxial and Well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0 - 2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance the detection sensitivity resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra Mathematical efficiency calibration method (In situ Object Calibration Software or ISOCS, and Laboratory Source-less Calibration Software or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable

  10. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    Science.gov (United States)

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared with state-of-the-art methods.
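
    The iterative mode-seeking in step 1 can be illustrated with a one-dimensional mean-shift on pixel intensities: starting from an initial guess, repeatedly average the intensities within a bandwidth until the estimate settles on the nearest density mode. This is a generic sketch, not the paper's exact algorithm:

```python
def mode_seek(values, start, bandwidth, iters=50):
    """1-D mean-shift: move an estimate toward the nearest intensity mode
    by repeatedly averaging the values within +/- bandwidth of it."""
    m = start
    for _ in range(iters):
        window = [v for v in values if abs(v - m) <= bandwidth]
        if not window:
            break
        new = sum(window) / len(window)
        if abs(new - m) < 1e-9:   # converged onto a mode
            break
        m = new
    return m

# hypothetical pixel intensities: a dark region of interest (~60) against
# a bright background (~200); the seed decides which mode is found
intensities = [58, 60, 62, 59, 61, 198, 200, 202, 199, 201]
```

    Because each seed converges to its nearest mode, the method adapts to changes in the number and position of intensity clusters without prior knowledge, matching the property claimed above.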

  11. Perspectives of an acoustic–electrostatic/electrodynamic hybrid levitator for small fluid and solid samples

    International Nuclear Information System (INIS)

    Lierke, E G; Holitzner, L

    2008-01-01

    The feasibility of an acoustic–electrostatic hybrid levitator for small fluid and solid samples is evaluated. A proposed design and its theoretical assessment are based on the optional implementation of simple hardware components (ring electrodes) and standard laboratory equipment into typical commercial ultrasonic standing wave levitators. These levitators allow precise electrical charging of drops during syringe- or ink-jet-type deployment. The homogeneous electric 'Millikan field' between the grounded ultrasonic transducer and the electrically charged reflector provides an axial compensation of the sample weight in an indifferent equilibrium, which can be balanced by using commercial optical position sensors in combination with standard electronic PID position control. Radial electrostatic repulsion forces between the charged sample and concentric ring electrodes of the same polarity provide stable positioning at the centre of the levitator. The levitator can be used in a pure acoustic or electrostatic mode or in a hybrid combination of both subsystems. Analytical evaluations of the radial–axial force profiles are verified with detailed numerical finite element calculations under consideration of alternative boundary conditions. The simple hardware modification with implemented double-ring electrodes in ac/dc operation is also feasible for an electrodynamic/acoustic hybrid levitator.

  12. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Reasonable prediction makes significant practical sense for stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, for the next one-step-ahead forecast, the data window rolls forward by adding the most recently derived prediction while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique owing to its improved forecasting accuracy, applicability to limited and unstable data situations, and small computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
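    The rolling mechanism described above (fit an AR model on a short window, predict one step, append the prediction, drop the oldest value) can be sketched as follows. The AR order, window length, and least-squares fit are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def ar_one_step(window, p=2):
    """Fit AR(p) by least squares on `window` and predict one step ahead."""
    w = np.asarray(window, dtype=float)
    # Row t of the design matrix holds the p previous values of w[t].
    X = np.column_stack([w[p - k - 1 : -(k + 1)] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])        # intercept term
    y = w[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    last = w[-1 : -p - 1 : -1]                       # w[t-1], ..., w[t-p]
    return coef[0] + coef[1:] @ last

def rolling_forecast(series, horizon, p=2, window_size=12):
    """Roll the window forward, appending each 1-step prediction and
    deleting the oldest value (the paper's rolling mechanism; p and
    window_size here are illustrative choices)."""
    window = list(series[-window_size:])
    preds = []
    for _ in range(horizon):
        yhat = ar_one_step(window, p)
        preds.append(float(yhat))
        window = window[1:] + [float(yhat)]          # roll the data window
    return preds

# On a purely linear series the scheme extrapolates the trend exactly:
preds = rolling_forecast(list(range(1, 13)), horizon=3)
```

    Each forecast reuses only the most recent `window_size` values, which is what makes the scheme usable with small or unstable samples.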

  13. Sensitive power compensated scanning calorimeter for analysis of phase transformations in small samples

    International Nuclear Information System (INIS)

    Lopeandia, A.F.; Cerdo, Ll.; Clavaguera-Mora, M.T.; Arana, Leonel R.; Jensen, K.F.; Munoz, F.J.; Rodriguez-Viejo, J.

    2005-01-01

    We have designed and developed a sensitive scanning calorimeter for use with microgram or submicrogram thin-film or powder samples. Semiconductor processing techniques are used to fabricate membrane based microreactors with a small heat capacity of the addenda, 120 nJ/K at room temperature. At heating rates below 10 K/s the heat released or absorbed by the sample during a given transformation is compensated through a resistive Pt heater by a digital controller so that the calorimeter works as a power compensated device. Its use and dynamic sensitivity are demonstrated by analyzing the melting behavior of thin films of indium and high density polyethylene. Melting enthalpies in the range of 40-250 μJ for sample masses on the order of 1.5 μg have been measured with accuracy better than 5% at heating rates ∼0.2 K/s. The signal-to-noise ratio, limited by the electronic setup, is 200 nW

  14. The small sample uncertainty aspect in relation to bullwhip effect measurement

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2009-01-01

    The bullwhip effect as a concept has been known for almost half a century, starting with the Forrester effect. The bullwhip effect is observed in many supply chains, and it is generally accepted as a potential menace. Despite this fact, the bullwhip effect still seems to be first and foremost a conceptual phenomenon. This paper intends primarily to investigate why this might be so, and thereby to investigate the various aspects, possibilities and obstacles that must be taken into account when considering the potential practical use and measurement of the bullwhip effect in order to actually get the supply chain under control. This paper puts special emphasis on the unavoidable small-sample uncertainty aspects relating to the measurement or estimation of the bullwhip effect.
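    A common way to measure the bullwhip effect is the variance amplification ratio Var(orders)/Var(demand). The sketch below estimates that ratio from short demand histories of a toy order-up-to policy to show the small-sample scatter the paper is concerned with; the policy, sample length, and parameters are illustrative assumptions, not the paper's model.

```python
import random
import statistics

def bullwhip_ratio(demand, orders):
    """A common bullwhip measure: Var(orders) / Var(demand)."""
    return statistics.variance(orders) / statistics.variance(demand)

def order_up_to(demand, alpha=0.3):
    """Toy order-up-to policy with exponential-smoothing forecasts; it
    amplifies demand variability (illustrative only)."""
    forecast = demand[0]
    orders = []
    for d in demand:
        prev = forecast
        forecast = alpha * d + (1 - alpha) * prev   # update the forecast
        orders.append(d + (forecast - prev))        # replenish + target change
    return orders

# Monte Carlo: estimating the ratio from only 20 periods gives widely
# scattered values around the true amplification -- the small-sample
# uncertainty at issue.
random.seed(1)
estimates = []
for _ in range(500):
    demand = [random.gauss(100, 10) for _ in range(20)]
    estimates.append(bullwhip_ratio(demand, order_up_to(demand)))
lo, hi = min(estimates), max(estimates)
```

    The wide spread between `lo` and `hi` for the very same supply chain illustrates why a single short-sample bullwhip estimate is hard to act on.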

  15. Determination of 35S-aminoacyl-transfer ribonucleic acid specific radioactivity in small tissue samples

    International Nuclear Information System (INIS)

    Samarel, A.M.; Ogunro, E.A.; Ferguson, A.G.; Lesch, M.

    1981-01-01

    Rate determination of protein synthesis utilizing tracer amino acid incorporation requires accurate assessment of the specific radioactivity of the labeled precursor aminoacyl-tRNA pool. Previously published methods presumably useful for the measurement of any aminoacyl-tRNA were unsuccessful when applied to [35S]methionine, due to the unique chemical properties of this amino acid. Herein we describe modifications of these methods necessary for the measurement of 35S-aminoacyl-tRNA specific radioactivity from small tissue samples incubated in the presence of [35S]methionine. The use of [35S]methionine of high specific radioactivity enables analysis of the methionyl-tRNA from less than 100 mg of tissue. Conditions for optimal recovery of 35S-labeled dansyl-amino acid derivatives are presented and possible applications of this method are discussed

  16. Determination of 35S-aminoacyl-transfer ribonucleic acid specific radioactivity in small tissue samples

    Energy Technology Data Exchange (ETDEWEB)

    Samarel, A.M.; Ogunro, E.A.; Ferguson, A.G.; Lesch, M.

    1981-11-15

    Rate determination of protein synthesis utilizing tracer amino acid incorporation requires accurate assessment of the specific radioactivity of the labeled precursor aminoacyl-tRNA pool. Previously published methods presumably useful for the measurement of any aminoacyl-tRNA were unsuccessful when applied to [35S]methionine, due to the unique chemical properties of this amino acid. Herein we describe modifications of these methods necessary for the measurement of 35S-aminoacyl-tRNA specific radioactivity from small tissue samples incubated in the presence of [35S]methionine. The use of [35S]methionine of high specific radioactivity enables analysis of the methionyl-tRNA from less than 100 mg of tissue. Conditions for optimal recovery of 35S-labeled dansyl-amino acid derivatives are presented and possible applications of this method are discussed.

  17. Basic distribution free identification tests for small size samples of environmental data

    Energy Technology Data Exchange (ETDEWEB)

    Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain few data points, and the assumption of normal distributions is often not realistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on a massive use of CPU resources. The paper reviews the problem and introduces two feasible non-parametric approaches based on the intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
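    The "full resampling" idea can be sketched as a two-sample permutation test on the difference of means: under the null hypothesis the pooled observations are exchangeable, so the observed statistic is compared against its reshuffled distribution. The statistic, data, and permutation count here are illustrative assumptions, not the report's exact tests.

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means.

    Approximates exhaustive resampling by random permutations of the
    pooled sample; no normality assumption is needed.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    n_a, extreme = len(a), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)   # add-one rule avoids p = 0

# Small samples, as is typical for environmental data:
p_same = permutation_test([1.1, 0.9, 1.0, 1.2], [1.0, 1.05, 0.95, 1.1])
p_shifted = permutation_test([1.1, 0.9, 1.0, 1.2], [2.0, 2.1, 1.9, 2.2])
```

    With samples of four values each the reference distribution has only 70 distinct splits, so the smallest attainable p-value is itself limited, which is exactly the small-sample regime the report addresses.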

  18. Sampling versus systematic full lymphatic dissection in surgical treatment of non-small cell lung cancer.

    Science.gov (United States)

    Koulaxouzidis, Georgios; Karagkiouzis, Grigorios; Konstantinou, Marios; Gkiozos, Ioannis; Syrigos, Konstantinos

    2013-04-22

    The extent of mediastinal lymph node assessment during surgery for non-small cell lung cancer remains controversial. Different techniques are used, ranging from simple visual inspection of the unopened mediastinum to an extended bilateral lymph node dissection. Furthermore, different terms are used to define these techniques. Sampling is the removal of one or more lymph nodes under the guidance of pre-operative findings. Systematic (full) nodal dissection is the removal of all mediastinal tissue containing the lymph nodes systematically within anatomical landmarks. A Medline search was conducted to identify articles in the English language that addressed the role of mediastinal lymph node resection in the treatment of non-small cell lung cancer. Reasons given for favoring full lymphatic dissection include complete resection, improved nodal staging and better local control due to resection of undetected micrometastases. Arguments against routine full lymphatic dissection are increased morbidity, increased operative time, and lack of evidence of improved survival. For complete resection of non-small cell lung cancer, many authors recommend a systematic nodal dissection as the standard approach during surgery, and suggest that this provides both adequate nodal staging and guarantees complete resection. Whether extending the lymph node dissection influences survival or recurrence rate is still not known. There are valid arguments in favor, in terms not only of improved local control but also of improved long-term survival. However, the impact of lymph node dissection on long-term survival should be further assessed by large-scale multicenter randomized trials.

  19. Small population size of Pribilof Rock Sandpipers confirmed through distance-sampling surveys in Alaska

    Science.gov (United States)

    Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.

    2012-01-01

    The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.

  20. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    Science.gov (United States)

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  1. Weighted piecewise LDA for solving the small sample size problem in face verification.

    Science.gov (United States)

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.

  2. Identification of multiple mRNA and DNA sequences from small tissue samples isolated by laser-assisted microdissection.

    Science.gov (United States)

    Bernsen, M R; Dijkman, H B; de Vries, E; Figdor, C G; Ruiter, D J; Adema, G J; van Muijen, G N

    1998-10-01

    Molecular analysis of small tissue samples has become increasingly important in biomedical studies. Using a laser dissection microscope and modified nucleic acid isolation protocols, we demonstrate that multiple mRNA as well as DNA sequences can be identified from a single-cell sample. In addition, we show that the specificity of procurement of tissue samples is not compromised by smear contamination resulting from scraping of the microtome knife during sectioning of lesions. The procedures described herein thus allow for efficient RT-PCR or PCR analysis of multiple nucleic acid sequences from small tissue samples obtained by laser-assisted microdissection.

  3. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

    Science.gov (United States)

    2010-07-01

    40 CFR, Protection of Environment; Part 90—Control of Emissions from Nonroad Spark-Ignition Engines at or Below 19 Kilowatts; Subpart F—Selective Enforcement Auditing; Pt. 90, Subpt. F, App. A—Appendix A to Subpart F of Part 90—Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

  4. Evaluating the biological potential in samples returned from planetary satellites and small solar system bodies: framework for decision making

    National Research Council Canada - National Science Library

    National Research Council Staff; Space Studies Board; Division on Engineering and Physical Sciences; National Research Council; National Academy of Sciences

    ... from Planetary Satellites and Small Solar System Bodies: Framework for Decision Making. Task Group on Sample Return from Small Solar System Bodies, Space Studies Board, Commission on Physical Sciences, Mathematics, and Applications, National Research Council. National Academy Press, Washington, D.C., 1998

  5. Radioisotopic method for the measurement of lipolysis in small samples of human adipose tissue

    International Nuclear Information System (INIS)

    Leibel, R.L.; Hirsch, J.; Berry, E.M.; Gruen, R.K.

    1984-01-01

    To facilitate the study of adrenoreceptor response in small needle biopsy samples of human subcutaneous adipose tissue, we developed a dual radioisotopic technique for measuring lipolysis rate. Aliquots (20-75 mg) of adipose tissue fragments were incubated in a buffered albumin medium containing [3H]palmitate and [14C]glucose, each of high specific activity. In neutral glycerides synthesized in this system, [14C]glucose is incorporated exclusively into the glyceride-glycerol moiety and 3H appears solely in the esterified fatty acid. Alpha-2 and beta-1 adrenoreceptor activation of tissue incubated in this system does not alter rates of 14C-labeled glyceride accumulation, but does produce a respective increase or decrease in the specific activity of fatty acids esterified into newly synthesized glycerides. This alteration in esterified fatty acid specific activity is reflected in the 14C:3H ratio in newly synthesized triglycerides extracted from the incubated adipose tissue. There is a high correlation (r = 0.90) between the 14C:3H ratio in triglycerides and the rate of lipolysis as reflected in glycerol release into the incubation medium. The degree of adrenoreceptor activation by various concentrations of lipolytic and anti-lipolytic substances can be assessed by comparing this ratio in stimulated tissue to that characterizing unstimulated tissue or the incubation medium. This technique permits the study of very small, unweighed tissue biopsy fragments, the only limitation on sensitivity being the specific activity of the medium glucose and palmitate. It is, therefore, useful for serial examinations of adipose tissue adrenoreceptor dose-response characteristics under a variety of clinical circumstances

  6. Filter Bank Regularized Common Spatial Pattern Ensemble for Small Sample Motor Imagery Classification.

    Science.gov (United States)

    Park, Sang-Hoon; Lee, David; Lee, Sang-Goog

    2018-02-01

    For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Since the active frequency range differs for each subject, it is also inconvenient to set the frequency range anew every time. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided by using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, we select features according to mutual information based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, we classify using an ensemble based on the features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP. Compared with the filter bank R-CSP ( , ), which is a parameter selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
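    The core CSP computation underlying this pipeline is a generalized eigendecomposition of the class-conditional covariance matrices; in a filter bank it is simply repeated per frequency sub-band. The sketch below shows only that core step with log-variance features on toy data; the regularization, mutual-information selection, and ensemble stages of the paper are omitted, and all data shapes are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns 2*n_pairs spatial filters whose outputs have maximally
    different variance between the classes (core CSP only).
    """
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))        # trace-normalize each trial
        return np.mean(covs, axis=0)

    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, keep].T

def log_var_features(trial, W):
    """Standard CSP feature: normalized log-variance of filtered signals."""
    z = W @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())

rng = np.random.default_rng(0)
# Toy data: class A has extra variance on channel 0, class B on channel 3.
def make_trials(boost_ch, n=30, ch=4, t=200):
    x = rng.normal(size=(n, ch, t))
    x[:, boost_ch, :] *= 3.0
    return x

A, B = make_trials(0), make_trials(3)
W = csp_filters(A, B)
fa = log_var_features(A[0], W)
```

    Because these filters are estimated from sample covariances, too few trials make `Ca` and `Cb` noisy, which is exactly the SSS degradation the paper's regularized ensemble targets.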

  7. Retrospective biodosimetry with small tooth enamel samples using K-Band and X-Band

    International Nuclear Information System (INIS)

    Gomez, Jorge A.; Kinoshita, Angela; Leonor, Sergio J.; Belmonte, Gustavo C.; Baffa, Oswaldo

    2011-01-01

    In an attempt to make in vitro electron spin resonance (ESR) retrospective dosimetry of tooth enamel a less invasive method, experiments using X-Band and K-Band were performed, aiming to determine conditions that could be used in cases of accidental exposure. First, a small prism of enamel was removed and ground with an agate mortar and pestle until the particles reached a diameter of less than approximately 0.5 mm. This enamel extraction process resulted in a lower signal artifact compared with direct enamel extraction performed with a diamond burr abrasion. The manual grinding of the enamel does not lead to any induced ESR signal artifact, whereas the use of a diamond burr at low speed produces a signal artifact equivalent to the dosimetric signal induced by a dose of 500 mGy of gamma irradiation. A mass of 25 mg of enamel was removed from a sound molar tooth previously irradiated in vitro with a dose of 100 mGy. This amount of enamel was enough to detect the dosimetric signal in a standard X-Band spectrometer. However, using a K-Band spectrometer, sample masses between 5 and 10 mg were sufficient to obtain the same sensitivity. An overall evaluation of the uncertainties involved in this and other dosimetric assessments performed at our laboratory indicates that it is possible at K-Band to estimate a 100 mGy dose with 25% accuracy. In addition, the use of K-Band also presented higher sensitivity and allowed the use of a smaller sample mass in comparison with X-Band. Finally, the restoration process performed on the tooth after extraction of the 25 mg of enamel is described. This was conducted by dental treatment using photopolymerizable resin, which enabled complete recovery of the tooth from the functional and aesthetic viewpoint, showing that this procedure can be minimally invasive.

  8. Retrospective biodosimetry with small tooth enamel samples using K-Band and X-Band

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Jorge A. [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Kinoshita, Angela [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Universidade Sagrado Coracao - USC, 17011-160 Bauru, Sao Paulo (Brazil); Leonor, Sergio J. [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Belmonte, Gustavo C. [Universidade Sagrado Coracao - USC, 17011-160 Bauru, Sao Paulo (Brazil); Baffa, Oswaldo, E-mail: baffa@usp.br [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil)

    2011-09-15

    In an attempt to make in vitro electron spin resonance (ESR) retrospective dosimetry of tooth enamel a less invasive method, experiments using X-Band and K-Band were performed, aiming to determine conditions that could be used in cases of accidental exposure. First, a small prism of enamel was removed and ground with an agate mortar and pestle until the particles reached a diameter of less than approximately 0.5 mm. This enamel extraction process resulted in a lower signal artifact compared with direct enamel extraction performed with a diamond burr abrasion. The manual grinding of the enamel does not lead to any induced ESR signal artifact, whereas the use of a diamond burr at low speed produces a signal artifact equivalent to the dosimetric signal induced by a dose of 500 mGy of gamma irradiation. A mass of 25 mg of enamel was removed from a sound molar tooth previously irradiated in vitro with a dose of 100 mGy. This amount of enamel was enough to detect the dosimetric signal in a standard X-Band spectrometer. However, using a K-Band spectrometer, sample masses between 5 and 10 mg were sufficient to obtain the same sensitivity. An overall evaluation of the uncertainties involved in this and other dosimetric assessments performed at our laboratory indicates that it is possible at K-Band to estimate a 100 mGy dose with 25% accuracy. In addition, the use of K-Band also presented higher sensitivity and allowed the use of a smaller sample mass in comparison with X-Band. Finally, the restoration process performed on the tooth after extraction of the 25 mg of enamel is described. This was conducted by dental treatment using photopolymerizable resin, which enabled complete recovery of the tooth from the functional and aesthetic viewpoint, showing that this procedure can be minimally invasive.

  9. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
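    The traditional discrete/discrete constrained least-squares filter that the paper starts from has the closed frequency-domain form F̂ = H*·G / (|H|² + γ|P|²), with H the blur transfer function and P a discrete Laplacian enforcing smoothness. The sketch below implements only that classical filter; the paper's continuous/discrete/continuous extension and its small spatial kernel are not reproduced here, and the test image and γ are assumptions.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    """Classical constrained least-squares restoration in the frequency domain.

    Restoration filter: conj(H) / (|H|^2 + gamma * |P|^2), where P is the
    discrete Laplacian used as the smoothness constraint.
    """
    shape = blurred.shape
    H = np.fft.fft2(psf, s=shape)
    lap = np.zeros(shape)
    lap[:3, :3] = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
    P = np.fft.fft2(np.roll(lap, (-1, -1), axis=(0, 1)))   # center at origin
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * G
    return np.real(np.fft.ifft2(F))

# Blur a simple test image with a 3x3 box PSF, then restore it.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = np.full((3, 3), 1.0 / 9.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = cls_restore(blurred, psf, gamma=1e-3)
```

    The parameter γ trades fidelity against smoothness: γ → 0 recovers the (noise-amplifying) inverse filter, while large γ over-smooths the result.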

  10. Bootstrap-DEA analysis of BRICS’ energy efficiency based on small sample data

    International Nuclear Information System (INIS)

    Song, Ma-Lin; Zhang, Lin-Ling; Liu, Wei; Fisher, Ron

    2013-01-01

    Highlights: ► The BRICS economies have flourished, with increasing energy consumption. ► Analyses and comparisons of energy efficiency are conducted among the BRICS. ► As a whole, BRICS energy efficiency is low but shows a growing trend. ► The BRICS should adopt relevant energy policies based on their own conditions. - Abstract: As representatives of many emerging economies, the BRICS economies have developed greatly in recent years. Meanwhile, the proportion of world energy consumption attributable to the BRICS has increased. Therefore, it is significant to analyze and compare energy efficiency among them. This paper first utilizes a Super-SBM model to measure and calculate the energy efficiency of the BRICS, then analyzes their present status and development trend. Further, the Bootstrap method is applied to modify the DEA values derived from small sample data, and finally the relationship between energy efficiency and carbon emissions is measured. Results show that the energy efficiency of the BRICS as a whole is low but has a quickly increasing trend. Also, the relationship between energy efficiency and carbon emissions varies from country to country because of their different energy structures. The governments of the BRICS should make relevant energy policies according to their own conditions

  11. The use of commercially available PC-interface cards for elemental mapping in small samples using XRF

    International Nuclear Information System (INIS)

    Abu Bakar bin Ghazali; Hoyes Garnet

    1991-01-01

    This paper demonstrates the use of ADC and reed-relay cards to scan a small sample and acquire X-ray fluorescence data. The result shows the distribution of an element, such as zinc, in the sample by means of colours signifying the concentration

  12. Non-random mating and convergence over time for alcohol consumption, smoking, and exercise: the Nord-Trøndelag Health Study.

    Science.gov (United States)

    Ask, Helga; Rognmo, Kamilla; Torvik, Fartein Ask; Røysamb, Espen; Tambs, Kristian

    2012-05-01

    Spouses tend to have similar lifestyles. We explored the degree to which spouse similarity in alcohol use, smoking, and physical exercise is caused by non-random mating or convergence. We used data collected for the Nord-Trøndelag Health Study from 1984 to 1986 and prospective registry information about when and with whom people entered marriage/cohabitation between 1970 and 2000. Our sample included 19,599 married/cohabitating couples and 1,551 future couples that were to marry/cohabitate in the 14-16 years following data collection. All couples were grouped according to the duration between data collection and entering into marriage/cohabitation. Age-adjusted polychoric spouse correlations were used as the dependent variables in non-linear segmented regression analysis; the independent variable was time. The results indicate that spouse concordance in lifestyle is due to both non-random mating and convergence. Non-random mating appeared to be strongest for smoking. Convergence in alcohol use and smoking was evident during the period prior to marriage/cohabitation, whereas convergence in exercise was evident throughout life. Reduced spouse similarity in smoking with relationship duration may reflect secular trends.

  13. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F,G,H,.... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations, implied by convolutive application of the logical operators for every possible pair of confidence intervals, reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
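    The interval AND/OR/NOT operators the abstract refers to follow from the classical Fréchet inequalities, which bound the probability of a conjunction or disjunction when nothing is known about dependence. A minimal sketch (the interval values are made-up examples):

```python
def and_interval(a, b):
    """Frechet bounds on P(E and F) under unknown dependence.

    a, b are (low, high) probability intervals; the result is the
    tightest interval containing P(E and F) for any dependence."""
    (al, ah), (bl, bh) = a, b
    return (max(0.0, al + bl - 1.0), min(ah, bh))

def or_interval(a, b):
    """Frechet bounds on P(E or F) under unknown dependence."""
    (al, ah), (bl, bh) = a, b
    return (max(al, bl), min(1.0, ah + bh))

def not_interval(a):
    """Complement of an interval probability."""
    al, ah = a
    return (1.0 - ah, 1.0 - al)

# Two subevent probabilities known only up to intervals:
f = (0.2, 0.4)
g = (0.7, 0.9)
both = and_interval(f, g)       # widest case: events may barely overlap
either = or_interval(f, g)
```

    Feeding a nested stack of confidence intervals through these operators level by level gives the level-wise iteration the abstract describes.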

  14. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
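The Firth correction penalizes the likelihood with the Jeffreys invariant prior, which reduces small-sample bias and keeps estimates finite even under complete separation. The record applies it to Cox regression; the following hedged sketch illustrates the same penalty for logistic regression (the data and function are illustrative, not the trial's analysis):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def firth_logistic(X, y):
    """Logistic regression maximizing l(b) + 0.5*log|X'WX| (Jeffreys penalty)."""
    def neg_penalized_loglik(b):
        eta = X @ b
        p = expit(eta)
        loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
        W = p * (1.0 - p)
        _, logdet = np.linalg.slogdet(X.T @ (X * W[:, None]))
        return -(loglik + 0.5 * logdet)
    return minimize(neg_penalized_loglik, np.zeros(X.shape[1]), method="BFGS").x

# separable toy data: ordinary ML estimates diverge, the Firth fit stays finite
X = np.array([[1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
beta = firth_logistic(X, y)
```

On this separable toy data set, ordinary maximum likelihood diverges, while the penalized fit settles near a slope of log 5 ≈ 1.6, the value obtained by adding 1/2 to each cell of the underlying 2 x 2 table.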

  15. Quantum superposition of the state discrete spectrum of mathematical correlation molecule for small samples of biometric data

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-06-01

Introduction: The study aims to decrease the number of errors in calculating the correlation coefficient from small test samples. Materials and Methods: We used a simulation tool for the distribution functions of the density values of the correlation coefficient in small samples. A method for quantizing the data yields a discrete spectrum of states of one variety of correlation functional. This allows us to consider the proposed structure as a mathematical correlation molecule, described by an analogue of the continuous quantum Schrödinger equation. Results: The chi-squared Pearson's molecule on small samples enhances the power of the classical chi-squared test by up to 20 times. The mathematical correlation molecule described in the article has similar properties; in the future it should reduce the calculation errors of classical correlation coefficients in small samples. Discussion and Conclusions: The authors suggest that there are infinitely many mathematical molecules that are similar in their properties to actual physical molecules. Schrödinger equations are not unique; analogues can be constructed for each mathematical molecule. A mathematical synthesis of molecules can be expected for a large number of known statistical tests and statistical moments. All this should make it possible to reduce calculation errors due to quantum effects that occur in small test samples.

  16. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result, it requires a huge sample size. This makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for a small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined based on a significant variance reduction when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated from LHS for the small signal stability application produce the same result as the IDEAL values starting from a sample size of 100. This shows that about 100 random variable samples generated using the LHS method are good enough to produce reasonable results for practical purposes in small signal stability applications. It is also revealed that LHS has the least variance when the experiment is repeated 100 times, compared to the SRS technique, signifying the robustness of LHS over SRS. A 100-point LHS sample produces the same result as the conventional method with a sample size of 50,000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
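The variance-reduction behavior reported above can be reproduced in miniature with scipy's quasi-Monte Carlo module; this sketch estimates a simple expectation rather than eigenvalues, and all sizes are illustrative:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(42)
n, reps = 64, 200
f = lambda u: u ** 2          # estimate E[f(U)] = 1/3 for U ~ Uniform(0, 1)

def variance_of_mean(method):
    means = []
    for _ in range(reps):
        if method == "lhs":
            u = qmc.LatinHypercube(d=1, seed=rng).random(n).ravel()
        else:                  # simple random sampling
            u = rng.random(n)
        means.append(f(u).mean())
    return np.var(means)

srs_var = variance_of_mean("srs")
lhs_var = variance_of_mean("lhs")   # far smaller at the same sample size
```

Because a one-dimensional LHS design places exactly one point in each of the n equal-probability strata, the variance of the mean estimate for a smooth integrand drops far below the SRS value at the same sample size.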

  17. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    Directory of Open Access Journals (Sweden)

    Richard C. Zangar

    2004-01-01

Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples, such as nipple aspirate fluid (NAF) or early-stage tumors, are inherently small. Other samples such as serum are collected in larger volumes, but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  18. Growth-induced strong pinning sites in laser ablated YBa2Cu3O7-δ films with a non-random distribution

    International Nuclear Information System (INIS)

    Huijbregtse, J.M.; Klaassen, F.C.; Geest, R.C.F. van der; Dam, B.; Griessen, R.

    1999-01-01

Recently, the authors showed that natural linear defects are the origin of the high critical currents in laser ablated YBa 2 Cu 3 O 7-δ films. Combining wet-chemical etching and Atomic Force Microscopy, they find that these dislocations are created by island coalescence during growth. Consequently, the defect density can be reproducibly varied by manipulating the density of growth islands, which in turn depends on the substrate temperature. Interestingly, the radial defect distribution function approaches zero at small distances, indicating short range order. Therefore, they are now able to study vortex matter in films with a tailored non-random distribution of natural strong pinning sites.

  19. Correction of confounding bias in non-randomized studies by appropriate weighting.

    Science.gov (United States)

    Schmoor, Claudia; Gall, Christine; Stampf, Susanne; Graf, Erika

    2011-03-01

    In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
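The "reweighting" view of the propensity-score approach described here corresponds to inverse-probability-of-treatment weighting (IPTW). A hedged simulation sketch (generic and illustrative, not the breast cancer study's analysis; for simplicity it reuses the true treatment model as the propensity score rather than estimating it):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                         # confounder
p = 1.0 / (1.0 + np.exp(-1.5 * x))             # propensity: x drives treatment
t = rng.binomial(1, p)
y = 2.0 * t + 3.0 * x + rng.normal(size=n)     # true treatment effect = 2

naive = y[t == 1].mean() - y[t == 0].mean()    # confounded, biased upward

# Hajek-normalized inverse-probability-of-treatment weighting
w1, w0 = t / p, (1 - t) / (1 - p)
iptw = (w1 @ y) / w1.sum() - (w0 @ y) / w0.sum()
```

The naive group difference absorbs the confounder's effect, while reweighting each subject by the inverse probability of the treatment actually received recovers an estimate close to the true effect of 2.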

  20. Non-random distribution of instability-associated chromosomal rearrangement breakpoints in human lymphoblastoid cells

    International Nuclear Information System (INIS)

    Moore, Stephen R.; Papworth, David; Grosovsky, Andrew J.

    2006-01-01

    Genomic instability is observed in tumors and in a large fraction of the progeny surviving irradiation. One of the best-characterized phenotypic manifestations of genomic instability is delayed chromosome aberrations. Our working hypothesis for the current study was that if genomic instability is in part attributable to cis mechanisms, we should observe a non-random distribution of chromosomes or sites involved in instability-associated rearrangements, regardless of radiation quality, dose, or trans factor expression. We report here the karyotypic examination of 296 instability-associated chromosomal rearrangement breaksites (IACRB) from 118 unstable TK6 human B lymphoblast, and isogenic derivative, clones. When we tested whether IACRB were distributed across the chromosomes based on target size, a significant non-random distribution was evident (p < 0.00001), and three IACRB hotspots (chromosomes 11, 12, and 22) and one IACRB coldspot (chromosome 2) were identified. Statistical analysis at the chromosomal band-level identified four IACRB hotspots accounting for 20% of all instability-associated breaks, two of which account for over 14% of all IACRB. Further, analysis of independent clones provided evidence within 14 individual clones of IACRB clustering at the chromosomal band level, suggesting a predisposition for further breaks after an initial break at some chromosomal bands. All of these events, independently, or when taken together, were highly unlikely to have occurred by chance (p < 0.000001). These IACRB band-level cluster hotspots were observed independent of radiation quality, dose, or cellular p53 status. The non-random distribution of instability-associated chromosomal rearrangements described here significantly differs from the distribution that was observed in a first-division post-irradiation metaphase analysis (p = 0.0004). Taken together, these results suggest that genomic instability may be in part driven by chromosomal cis mechanisms
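The significance tests reported here are, in essence, goodness-of-fit tests of observed breakpoint counts against expectations proportional to target size. A hedged toy example with invented counts (not the TK6 data):

```python
import numpy as np
from scipy.stats import chisquare

# hypothetical breakpoint counts over five equally sized target regions
observed = np.array([40, 10, 10, 10, 10])
sizes = np.array([1.0, 1.0, 1.0, 1.0, 1.0])    # relative target sizes

expected = sizes / sizes.sum() * observed.sum()
chi2, pvalue = chisquare(observed, f_exp=expected)
# a tiny p-value flags the first region as a hotspot
```

With unequal chromosome sizes, the same test applies; only the expected counts change.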

  1. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2018-04-24

A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, including isotopologues of carbon and oxygen obtained from gas and biological samples. The instrument further provides isotopic images of microbial communities that allow tracking of nutrients at the single cell level. It targets naturally occurring variations in carbon and oxygen isotopes, avoiding the need for expensive isotopically labeled mixtures and allowing study of samples taken from the field without modification. The process also permits in vivo sampling, enabling real-time ambient studies of microbial communities.

  2. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2016-03-29

A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, including isotopologues of carbon and oxygen obtained from gas and biological samples. The instrument further provides isotopic images of microbial communities that allow tracking of nutrients at the single cell level. It targets naturally occurring variations in carbon and oxygen isotopes, avoiding the need for expensive isotopically labeled mixtures and allowing study of samples taken from the field without modification. The method also permits in vivo sampling, enabling real-time ambient studies of microbial communities.

  3. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    Science.gov (United States)

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

To determine the frequency of sampling in small water distribution systems (distribution. We carried out two sampling programs to monitor the water distribution system in a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, while the assumption of a negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for four samples. However, the hypothesis of homogeneity was rejected (p-value samples (beta = 0.24) than with 21 (beta = 0.05). For this small network, determining the sample size according to the heterogeneity hypothesis strengthens the statement that the water is drinkable, compared with the homogeneity assumption.

  4. High-speed imaging upgrade for a standard sample scanning atomic force microscope using small cantilevers

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Jonathan D.; Nievergelt, Adrian; Erickson, Blake W.; Yang, Chen; Dukic, Maja; Fantner, Georg E., E-mail: georg.fantner@epfl.ch [Ecole Polytechnique Fédérale de Lausanne, Lausanne (Switzerland)

    2014-09-15

    We present an atomic force microscope (AFM) head for optical beam deflection on small cantilevers. Our AFM head is designed to be small in size, easily integrated into a commercial AFM system, and has a modular architecture facilitating exchange of the optical and electronic assemblies. We present two different designs for both the optical beam deflection and the electronic readout systems, and evaluate their performance. Using small cantilevers with our AFM head on an otherwise unmodified commercial AFM system, we are able to take tapping mode images approximately 5–10 times faster compared to the same AFM system using large cantilevers. By using additional scanner turnaround resonance compensation and a controller designed for high-speed AFM imaging, we show tapping mode imaging of lipid bilayers at line scan rates of 100–500 Hz for scan areas of several micrometers in size.

  5. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
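The kind of computer-simulated experiment described above is straightforward to reproduce: draw two groups of size n, apply the comparison method, and tally the error rates. A hedged sketch with illustrative settings (n = 9, t-test at p = 5%):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n, sims, alpha = 9, 4000, 0.05

def rejection_rate(effect):
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / sims

type_i = rejection_rate(0.0)   # no true effect: false-positive rate near alpha
power = rejection_rate(2.0)    # strong (two-standard-deviation) effect
```

Varying `n` and `effect` in this loop reproduces the trade-off the record quantifies: with n = 9 the nominal error level holds and strong effects are detected reliably, while much smaller groups leave weak effects undetectable.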

  6. A TIMS-based method for the high precision measurements of the three-isotope potassium composition of small samples

    DEFF Research Database (Denmark)

    Wielandt, Daniel Kim Peel; Bizzarro, Martin

    2011-01-01

    A novel thermal ionization mass spectrometry (TIMS) method for the three-isotope analysis of K has been developed, and ion chromatographic methods for the separation of K have been adapted for the processing of small samples. The precise measurement of K-isotopes is challenged by the presence of ...

  7. Adiponectin levels measured in dried blood spot samples from neonates born small and appropriate for gestational age

    DEFF Research Database (Denmark)

    Klamer, A; Skogstrand, Kristin; Hougaard, D M

    2007-01-01

    Adiponectin levels measured in neonatal dried blood spot samples (DBSS) might be affected by both prematurity and being born small for gestational age (SGA). The aim of the study was to measure adiponectin levels in routinely collected neonatal DBSS taken on day 5 (range 3-12) postnatal from...

  8. Classification of natural formations based on their optical characteristics using small volumes of samples

    Science.gov (United States)

    Abramovich, N. S.; Kovalev, A. A.; Plyuta, V. Y.

    1986-02-01

A computer algorithm has been developed to classify the spectral bands of natural scenes on Earth according to their optical characteristics. The algorithm is written in FORTRAN-IV and can be used in spectral data processing programs requiring small data loads. The spectral classifications of several different types of green vegetation canopies are given in order to illustrate the effectiveness of the algorithm.

  9. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment

  10. Thermal neutron absorption cross-section for small samples (experiments in cylindrical geometry)

    International Nuclear Information System (INIS)

    Czubek, J.A.; Drozdowicz, K.; Igielski, A.; Krynicka-Drozdowicz, E.; Woznicka, U.

    1982-01-01

Measurement results for thermal neutron macroscopic absorption cross-sections Σsub(a) obtained with the cylindrical sample-moderator system are presented. Experiments for liquid (water solutions of H 3 BO 3 ) and solid (crushed basalts) samples are reported. Solid samples have been saturated with the H 3 BO 3 ''poisoning'' solution. The accuracy obtained for the determination of the absorption cross-section of the solid material was σ(Σsub(ma)) = (1.2-2.2) c.u. when porosity was measured with an accuracy of σ(phi) = 0.001-0.002. The dispersion of the Σsub(ma) data obtained for basalts (taken from different quarries) was higher than the accuracy of the measurement. All experimental data for the fundamental decay constants lambda 0 , together with the whole information about the samples, are given. (author)

  11. Conditional estimation of local pooled dispersion parameter in small-sample RNA-Seq data improves differential expression test.

    Science.gov (United States)

    Gim, Jungsoo; Won, Sungho; Park, Taesung

    2016-10-01

High throughput sequencing technology in transcriptomics studies contributes to the understanding of gene regulation mechanisms and their cellular function, but also increases the need for accurate statistical methods to assess quantitative differences between experiments. Many methods have been developed to account for the specifics of count data: non-normality, a dependence of the variance on the mean, and small sample size. Among these issues, the small number of samples in typical experiments remains a challenge. Here we present a method for differential analysis of count data, using conditional estimation of local pooled dispersion parameters. A comprehensive evaluation of our proposed method for differential gene expression analysis, using both simulated and real data sets, shows that the proposed method is more powerful than other existing methods while controlling the false discovery rate. By introducing conditional estimation of local pooled dispersion parameters, we overcome the limitation of small power and enable a powerful quantitative analysis focused on differential expression testing with a small number of samples.
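The authors' estimator is not reproduced here; as a generic, hedged sketch of the local-pooling idea, one can compute a method-of-moments negative-binomial dispersion per gene and shrink it toward the median of genes with similar mean expression (the function name and settings are illustrative):

```python
import numpy as np

def locally_pooled_dispersion(counts, window=50, weight=0.5):
    """Method-of-moments NB dispersion (var = mu + phi*mu^2) per gene,
    shrunk toward the median of genes with similar mean expression."""
    mu = counts.mean(axis=1)
    var = counts.var(axis=1, ddof=1)
    phi = np.maximum((var - mu) / np.maximum(mu ** 2, 1e-12), 1e-8)
    order = np.argsort(mu)            # rank genes by mean expression
    pooled = np.empty_like(phi)
    half = window // 2
    for rank, gene in enumerate(order):
        lo, hi = max(0, rank - half), min(len(order), rank + half + 1)
        pooled[gene] = np.median(phi[order[lo:hi]])
    return (1.0 - weight) * phi + weight * pooled

# simulated gamma-Poisson counts: 2000 genes, only 4 samples, true phi = 0.1
rng = np.random.default_rng(3)
counts = rng.poisson(rng.gamma(10.0, 10.0, size=(2000, 4)))
phi_hat = locally_pooled_dispersion(counts)
```

With only four samples the per-gene estimates are very noisy; borrowing strength from neighboring genes stabilizes them, which is the motivation for pooled-dispersion approaches in small-sample RNA-Seq analysis.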

  12. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    Science.gov (United States)

Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I.

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050

  13. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

    We describe and validate a hybrid image and blood sampling (HIBS) method to derive the input function for quantification of microPET mice data. The HIBS algorithm derives the peak of the input function from the image, which is corrected for recovery, while the tail is derived from 5 to 6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50-mCi bolus administration of 18 FDG. In total, 21 blood samples (blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session to compare against the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted in comparison to the HIBS-derived curve due to temporal limitations and delay in blood sampling, which affected the rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results, and thus, is a substitute for bsPTAC

  14. Preparing and measuring ultra-small radiocarbon samples with the ARTEMIS AMS facility in Saclay, France

    Energy Technology Data Exchange (ETDEWEB)

    Delque-Kolic, E., E-mail: emmanuelle.delque-kolic@cea.fr [LMC14, CEA Saclay, Batiment 450 Porte 4E, 91191 Gif sur Yvette (France); Comby-Zerbino, C.; Ferkane, S.; Moreau, C.; Dumoulin, J.P.; Caffy, I.; Souprayen, C.; Quiles, A.; Bavay, D.; Hain, S.; Setti, V. [LMC14, CEA Saclay, Batiment 450 Porte 4E, 91191 Gif sur Yvette (France)

    2013-01-15

The ARTEMIS facility in Saclay France measures, on average, 4500 samples a year for French organizations working in an array of fields, including environmental sciences, archeology and hydrology. In response to an increasing demand for the isolation of specific soil compounds and organic water fractions, we were motivated to evaluate our ability to reduce microgram samples using our standard graphitization lines and to measure the graphite thus obtained with our 3MV NEC Pelletron AMS. Our reduction facility consists of two fully automated graphitization lines. Each line has 12 reduction reactors with a reduction volume of 18 ml for the first line and 12 ml for the second. Under routine conditions, we determined that we could reduce the samples down to 10 μg of carbon, even if the graphitization yield is consequently affected by the lower sample mass. Our results when testing different Fe/C ratios suggest that an amount of 1.5 mg of Fe powder was ideal (instead of lower amounts of catalyst) to prevent the sample from deteriorating too quickly under the Cs+ beam, and to facilitate pressing procedures. Several sets of microsamples produced from HOxI standard, international references and backgrounds were measured. When measuring 14 C-free wood charcoal and HOxI samples we determined that our modern and dead blanks, due to the various preparation steps, were of 1.1 ± 0.8 and 0.2 ± 0.1 μg, respectively. The results presented here were obtained for IAEA-C1, 14 C-free wood, IAEA-C6, IAEA-C2 and FIRI C.

  15. Preparing and measuring ultra-small radiocarbon samples with the ARTEMIS AMS facility in Saclay, France

    International Nuclear Information System (INIS)

    Delqué-Količ, E.; Comby-Zerbino, C.; Ferkane, S.; Moreau, C.; Dumoulin, J.P.; Caffy, I.; Souprayen, C.; Quilès, A.; Bavay, D.; Hain, S.; Setti, V.

    2013-01-01

    The ARTEMIS facility in Saclay France measures, on average, 4500 samples a year for French organizations working in an array of fields, including environmental sciences, archeology and hydrology. In response to an increasing demand for the isolation of specific soil compounds and organic water fractions, we were motivated to evaluate our ability to reduce microgram samples using our standard graphitization lines and to measure the graphite thus obtained with our 3MV NEC Pelletron AMS. Our reduction facility consists of two fully automated graphitization lines. Each line has 12 reduction reactors with a reduction volume of 18 ml for the first line and 12 ml for the second. Under routine conditions, we determined that we could reduce the samples down to 10 μg of carbon, even if the graphitization yield is consequently affected by the lower sample mass. Our results when testing different Fe/C ratios suggest that an amount of 1.5 mg of Fe powder was ideal (instead of lower amounts of catalyst) to prevent the sample from deteriorating too quickly under the Cs+ beam, and to facilitate pressing procedures. Several sets of microsamples produced from HOxI standard, international references and backgrounds were measured. When measuring 14 C-free wood charcoal and HOxI samples we determined that our modern and dead blanks, due to the various preparation steps, were of 1.1 ± 0.8 and 0.2 ± 0.1 μg, respectively. The results presented here were obtained for IAEA-C1, 14 C-free wood, IAEA-C6, IAEA-C2 and FIRI C.

  16. Small sample analysis using sputter atomization/resonance ionization mass spectrometry

    International Nuclear Information System (INIS)

    Christie, W.H.; Goeringer, D.E.

    1986-01-01

    We have used secondary ion mass spectrometry (SIMS) to investigate the emission of ions via argon sputtering from U metal, UO 2 , and U 3 O 8 samples. We have also used laser resonance ionization techniques to study argon-sputtered neutral atoms and molecules emitted from these same samples. For the case of U metal, a significant enhancement in detection sensitivity for U is obtained via SA/RIMS. For U in the fully oxidized form (U 3 O 8 ), SA/RIMS offers no improvement in U detection sensitivity over conventional SIMS when sputtering with argon. 9 refs., 1 fig., 2 tabs

  17. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  18. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
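The knapsack-based classifier selection described above can be illustrated with a generic 0-1 knapsack dynamic program: each candidate base classifier carries a value (validation accuracy) and a weight (a redundancy cost against the rest of the pool), and the selection maximizes total accuracy under a redundancy budget. This is a minimal sketch, not the paper's tailored variant; the accuracy values, redundancy weights, and budget below are invented for illustration.

```python
# Generic 0-1 knapsack sketch for ensemble selection. Values are validation
# accuracies; weights are hypothetical redundancy costs; the budget caps
# total redundancy so that accurate-but-similar classifiers are not all kept.

def knapsack_select(values, weights, budget):
    """Return (best_total_value, chosen_indices) for the 0-1 knapsack problem."""
    n = len(values)
    dp = [0.0] * (budget + 1)                       # dp[w] = best value at budget w
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for w in range(budget, weights[i] - 1, -1): # descending: each item used once
            cand = dp[w - weights[i]] + values[i]
            if cand > dp[w]:
                dp[w] = cand
                keep[i][w] = True
    # Backtrack to recover which classifiers were selected
    chosen, w = [], budget
    for i in range(n - 1, -1, -1):
        if keep[i][w]:
            chosen.append(i)
            w -= weights[i]
    return dp[budget], sorted(chosen)

# Illustrative pool: 5 base classifiers
acc = [0.71, 0.68, 0.74, 0.66, 0.70]   # validation accuracy (value)
red = [3, 2, 4, 1, 2]                  # redundancy cost (weight), hypothetical units
best, picked = knapsack_select(acc, red, budget=6)
```

Here the classifier with the single highest accuracy (index 2) is skipped because its redundancy cost crowds out two cheaper, jointly more accurate classifiers, which is the kind of diversity/accuracy trade-off the abstract refers to.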

  19. Small-kernel, constrained least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  20. The Dirichlet-Multinomial model for multivariate randomized response data and small samples

    NARCIS (Netherlands)

    Avetisyan, Marianna; Fox, Gerardus J.A.

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The

  1. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    Science.gov (United States)

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  2. In situ sampling of small volumes of soil solution using modified micro-suction cups

    NARCIS (Netherlands)

    Shen, Jianbo; Hoffland, E.

    2007-01-01

    Two modified designs of micro-pore-water samplers were tested for their capacity to collect unbiased soil solution samples containing zinc and citrate. The samplers had either ceramic or polyethersulfone (PES) suction cups. Laboratory tests of the micro-samplers were conducted using (a) standard

  3. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistic or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four distribution models of counts (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitat, and sample characteristics. The performance of the models was closely related to samples' statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, either based on the most suitable distribution assumption or on an asymptotical, distribution-free (Student's) method. Student's method generally corresponded to narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.
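The model-comparison step described in the abstract can be sketched as follows: fit a Poisson and a negative binomial distribution to an overdispersed count sample and compare them by BIC. The sketch uses moment-based negative binomial estimates as a stand-in for full maximum likelihood, and a simulated sample rather than the fish data.

```python
# Compare Poisson vs negative binomial fits to overdispersed counts by BIC.
# Moment estimates approximate the NB fit; they require sample variance > mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)   # overdispersed toy sample

n = len(counts)
m, v = counts.mean(), counts.var(ddof=1)

# Poisson: the MLE of the rate is the sample mean (1 free parameter)
ll_pois = stats.poisson.logpmf(counts, m).sum()
bic_pois = 1 * np.log(n) - 2 * ll_pois

# Negative binomial: moment estimates (2 free parameters)
r = m**2 / (v - m)          # "size" parameter
p = m / v                   # success probability in scipy's (n, p) form
ll_nb = stats.nbinom.logpmf(counts, r, p).sum()
bic_nb = 2 * np.log(n) - 2 * ll_nb

best = "negative binomial" if bic_nb < bic_pois else "Poisson"
```

On strongly overdispersed data like this, the negative binomial's extra parameter more than pays for its BIC penalty, mirroring the study's finding that it was the best model for most samples.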

  4. Calculation code of heterogeneity effects for analysis of small sample reactivity worth

    International Nuclear Information System (INIS)

    Okajima, Shigeaki; Mukaiyama, Takehiko; Maeda, Akio.

    1988-03-01

    The discrepancy between experimental and calculated central reactivity worths has been one of the most significant issues in the analysis of fast reactor critical experiments. Two effects have been pointed out as possible causes of the discrepancy that should be taken into account in the calculation: one is the local heterogeneity effect associated with the measurement geometry; the other is the heterogeneity effect on the distribution of the intracell adjoint flux. In order to evaluate these effects in the analysis of FCA actinide sample reactivity worths, a calculation code based on the collision probability method was developed. The code can handle the sample size effect, which is one of the local heterogeneity effects, as well as the intracell adjoint heterogeneity effect. (author)

  5. Gravimetric and volumetric approaches adapted for hydrogen sorption measurements with in situ conditioning on small sorbent samples

    International Nuclear Information System (INIS)

    Poirier, E.; Chahine, R.; Tessier, A.; Bose, T.K.

    2005-01-01

    We present high sensitivity (0 to 1 bar, 295 K) gravimetric and volumetric hydrogen sorption measurement systems adapted for in situ sample conditioning at high temperature and high vacuum. These systems are designed especially for experiments on sorbents available in small masses (mg) and requiring thorough degassing prior to sorption measurements. Uncertainty analysis from instrumental specifications and hydrogen absorption measurements on palladium are presented. The gravimetric and volumetric systems yield cross-checkable results within about 0.05 wt % on samples weighing from (3 to 25) mg. Hydrogen storage capacities of single-walled carbon nanotubes measured at 1 bar and 295 K with both systems are presented

  6. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
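The min-min Pareto-front selection underlying the MO-REMC approach can be sketched as a simple non-dominated filter over two energy-like objectives, both to be minimized. The objective names and score values below are illustrative placeholders, not RosettaLigand output.

```python
# Keep only conformations not dominated in the min-min sense: a point is
# dominated if some other point is no worse in both objectives and strictly
# better in at least one.

def pareto_front(points):
    """Return indices of non-dominated points (both objectives minimized)."""
    front = []
    for i, (a1, b1) in enumerate(points):
        dominated = any(
            (a2 <= a1 and b2 <= b1) and (a2 < a1 or b2 < b1)
            for j, (a2, b2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# e.g. (interface_score, total_score) for five candidate conformations
scores = [(-5.0, -12.0), (-4.0, -15.0), (-3.0, -9.0), (-5.5, -11.0), (-4.5, -13.0)]
front = pareto_front(scores)   # non-dominated candidates kept as replicas
```

In the actual method, the conformations on this front become the lower-energy replicas favored for exchange; the quadratic-cost filter here is adequate for the small replica counts involved.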

  7. Non-random intrachromosomal distribution of radiation-induced chromatid aberrations in Vicia faba. [Aberration clustering

    Energy Technology Data Exchange (ETDEWEB)

    Schubert, I; Rieger, R [Akademie der Wissenschaften der DDR, Gatersleben. Zentralinst. fuer Genetik und Kulturpflanzenforschung

    1976-04-01

    A reconstructed karyotype of Vicia faba, with all chromosomes individually distinguishable, was treated with X-rays, fast neutrons, and ³H-uridine (³HU). The distribution within metaphase chromosomes of induced chromatid aberrations was non-random for all agents used. Aberration clustering, in part agent specific, occurred in chromosome segments containing heterochromatin as defined by the presence of G bands. The pattern of aberration clustering found after treatment with ³HU did not allow the recognition of chromosome regions active in transcription during treatment. Furthermore, it was impossible to obtain unambiguous indications of the presence of AT- and GC-base clusters from the patterns of ³HT- and ³HC-induced chromatid aberrations, respectively. Possible reasons underlying these observations are discussed.

  8. Determination of Organic Pollutants in Small Samples of Groundwaters by Liquid-Liquid Extraction and Capillary Gas Chromatography

    DEFF Research Database (Denmark)

    Harrison, I.; Leader, R.U.; Higgo, J.J.W.

    1994-01-01

    A method is presented for the determination of 22 organic compounds in polluted groundwaters. The method includes liquid-liquid extraction of the base/neutral organics from small, alkaline groundwater samples, followed by derivatisation and liquid-liquid extraction of phenolic compounds after neutralisation. The extracts were analysed by capillary gas chromatography. Dual detection by flame ionisation and electron capture was used to reduce analysis time.

  9. A summary of methods of predicting reliability life of nuclear equipment with small samples

    International Nuclear Information System (INIS)

    Liao Weixian

    2000-03-01

    Some nuclear equipment is manufactured in small batches, e.g., 1-3 sets, and its service life may be very difficult to determine experimentally for economic and technical reasons. A method combining theoretical analysis with material tests to predict the life of equipment is put forward, based on the fact that equipment consists of parts or elements made of different materials. The whole life of an equipment part consists of the crack-forming life (i.e., the fatigue life or the damage accumulation life) and the crack-extension life. Methods of predicting machine life are systematically summarized, with emphasis on those that use theoretical analysis in place of large-scale prototype experiments. Methods and steps for predicting reliability life are also described, taking into consideration the randomness of various variables and parameters in engineering. Finally, the latest advances and trends in machine life prediction are discussed.

  10. An Inset CT Specimen for Evaluating Fracture in Small Samples of Material

    Science.gov (United States)

    Yahyazadehfar, M.; Nazari, A.; Kruzic, J.J.; Quinn, G.D.; Arola, D.

    2013-01-01

    In evaluations on the fracture behavior of hard tissues and many biomaterials, the volume of material available to study is not always sufficient to apply a standard method of practice. In the present study an inset Compact Tension (inset CT) specimen is described, which uses a small cube of material (approximately 2×2×2 mm³) that is molded within a secondary material to form the compact tension geometry. A generalized equation describing the Mode I stress intensity was developed for the specimen using the solutions from a finite element model that was defined over permissible crack lengths, variations in specimen geometry, and a range in elastic properties of the inset and mold materials. A validation of the generalized equation was performed using estimates for the fracture toughness of a commercial dental composite via the “inset CT” specimen and the standard geometry defined by ASTM E399. Results showed that the average fracture toughness obtained from the new specimen (1.23 ± 0.02 MPa·m^0.5) was within 2% of that from the standard. Applications of the inset CT specimen are presented for experimental evaluations on the crack growth resistance of dental enamel and root dentin, including their fracture resistance curves. Potential errors in adopting this specimen are then discussed, including the effects of debonding between the inset and molding material on the estimated stress intensity distribution. Results of the investigation show that the inset CT specimen offers a viable approach for studying the fracture behavior of small volumes of structural materials. PMID:24268892

  11. Density-viscosity product of small-volume ionic liquid samples using quartz crystal impedance analysis.

    Science.gov (United States)

    McHale, Glen; Hardacre, Chris; Ge, Rile; Doy, Nicola; Allen, Ray W K; MacInnes, Jordan M; Bown, Mark R; Newton, Michael I

    2008-08-01

    Quartz crystal impedance analysis has been developed as a technique to assess whether room-temperature ionic liquids are Newtonian fluids and as a small-volume method for determining the values of their viscosity-density product, ρη. Changes in the impedance spectrum of a 5-MHz fundamental frequency quartz crystal induced by a water-miscible room-temperature ionic liquid, 1-butyl-3-methylimidazolium trifluoromethylsulfonate ([C4mim][OTf]), were measured. From coupled frequency shift and bandwidth changes as the concentration was varied from 0 to 100% ionic liquid, it was determined that this liquid provided a Newtonian response. A second, water-immiscible ionic liquid, 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [C4mim][NTf2], with concentration varied using methanol, was tested and also found to provide a Newtonian response. In both cases, the values of the square root of the viscosity-density product deduced from the small-volume quartz crystal technique were consistent with those measured using a viscometer and density meter. The third harmonic of the crystal was found to provide the closest agreement between the two measurement methods; the pure ionic liquids had the largest difference, of approximately 10%. In addition, 18 pure ionic liquids were tested, and for 12 of these, good-quality frequency shift and bandwidth data were obtained; these 12 all had a Newtonian response. The frequency shift of the third harmonic was found to vary linearly with the square root of the viscosity-density product of the pure ionic liquids up to a value of √(ρη) of approximately 18 kg m^(-2) s^(-1/2), but with a slope 10% smaller than that predicted by the Kanazawa and Gordon equation. It is envisaged that the quartz crystal technique could be used in a high-throughput microfluidic system for characterizing ionic liquids.
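The comparison in this abstract hinges on the Kanazawa-Gordon relation, which predicts the frequency shift of a quartz crystal immersed in a Newtonian liquid: Δf = -f0^(3/2) · sqrt(ρη / (π μq ρq)). A minimal sketch, using standard literature values for AT-cut quartz and water at room temperature as a sanity check (not values from the study):

```python
# Kanazawa-Gordon frequency shift for a quartz crystal in a Newtonian liquid.
import math

MU_Q = 2.947e10    # shear modulus of AT-cut quartz, Pa (literature value)
RHO_Q = 2648.0     # density of quartz, kg/m^3 (literature value)

def kanazawa_shift(f0, rho, eta):
    """Frequency shift (Hz) for fundamental frequency f0 (Hz) in a liquid of
    density rho (kg/m^3) and viscosity eta (Pa*s)."""
    return -(f0 ** 1.5) * math.sqrt(rho * eta / (math.pi * MU_Q * RHO_Q))

# 5 MHz crystal in water (rho = 1000 kg/m^3, eta = 1.0e-3 Pa*s):
df = kanazawa_shift(5e6, 1000.0, 1.0e-3)   # about -714 Hz
```

Because Δf scales with sqrt(ρη), a measured shift can be inverted to give the viscosity-density product, which is exactly the quantity the small-volume technique reports.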

  12. Interstitial water studies on small core samples, Deep Sea Drilling Project, Leg 5

    Science.gov (United States)

    Manheim, F. T.; Chan, K.M.; Sayles, F.L.

    1970-01-01

    Leg 5 samples fall into two categories with respect to interstitial water composition: 1) rapidly deposited terrigenous or appreciably terrigenous deposits, such as in Hole 35 (western Escanaba trough, off Cape Mendocino, California); and, 2) slowly deposited pelagic clays and biogenic muds and oozes. Interstitial waters in the former show modest to slight variations in chloride and sodium, but drastic changes in non-conservative ions such as magnesium and sulfate. The pelagic deposits show only relatively minor changes in both conservative and non-conservative pore fluid constituents. As was pointed out in earlier Leg Reports, it is believed that much of the variation in chloride in pore fluids within individual holes is attributable to the manipulation of samples on board ship and in the laboratory. On the other hand, the scatter in sodium is due in part to analytical error (on the order of 2 to 3 per cent, in terms of a standard deviation), and it probably accounts for most of the discrepancies in total anion and cation balance. All constituents reported here, with the exception of bulk water content, were analyzed on water samples which were sealed in plastic tubes aboard ship and were subsequently opened and divided into weighed aliquots in the laboratory. Analytical methods follow the atomic absorption, wet chemical and emission spectrochemical techniques briefly summarized in previous reports, e.g. Manheim et al., 1969, and Chan and Manheim, 1970. The authors acknowledge assistance from W. Sunda, D. Kerr, C. Lawson and H. Richards, and thank D. Spencer, P. Brewer and E. Degens for allowing the use of equipment and laboratory facilities.

  13. exTAS - next-generation TAS for small samples and extreme conditions

    International Nuclear Information System (INIS)

    Kulda, J.; Hiess, A.

    2011-01-01

    The currently used implementation of horizontally and vertically focusing optics in three-axis spectrometers (TAS) permits efficient studies of excitations in sub-cm³-sized single crystals. With the present proposal we wish to stimulate a further paradigm shift into the domain of mm³-sized samples. exTAS combines highly focused mm-sized focal spots, boosting the sensitivity limits, with a spectrometer layout down-scaled to a table-top size to provide high flexibility in optimizing acceptance angles and to achieve sub-millimeter positioning accuracy. (authors)

  14. Sophistication of 14C measurement at JAEA-AMS-MUTSU. Attempt on a small quantity of sample

    International Nuclear Information System (INIS)

    Tanaka, Takayuki; Kabuto, Shoji; Kinoshita, Naoki; Yamamoto, Nobuo

    2010-01-01

    In investigations of substance dynamics using molecular weight and chemical fractionation, the use of 14C measurement by accelerator mass spectrometry (AMS) has begun. As a result of fractionation, the sample contents required for AMS measurement have been reduced, and we expect this trend toward smaller samples to accelerate steadily in the future. Because 14C measurement by the AMS established at the Mutsu office currently requires about 2 mg of sample, our AMS lags behind the others in this trend. We therefore attempted to reduce the sample content needed for 14C measurement with our AMS. In this study, we modified the shape of the target piece in which the sample is packed, which is required for radiocarbon measurement by our AMS. Moreover, we improved the apparatus used to pack the sample. As a result of these improvements, we found that it is possible to measure 14C with our AMS using as little as about 0.5 mg of sample. (author)

  15. A method for multiple sequential analyses of macrophage functions using a small single cell sample

    Directory of Open Access Journals (Sweden)

    F.R.F. Nascimento

    2003-09-01

    Full Text Available Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II molecules (MHC II). Multiple microassays have been developed to measure these parameters. Usually each assay requires 2-5 × 10⁵ cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as little as ≤2 × 10⁵ macrophages per well, we determined sequentially the oxidative burst (H2O2), nitric oxide production and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H2O2 release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.

  16. Aspects of working with manipulators and small samples in an αβγ-box

    International Nuclear Information System (INIS)

    Zubler, Robert; Bertsch, Johannes; Heimgartner, Peter

    2007-01-01

    The Laboratory for Materials Behaviour, operator of the Hotlab and part of the Paul Scherrer Institute (PSI), studies corrosion and mechanical phenomena of irradiated fuel rod cladding materials. To improve the options for mechanical tests, a heavily shielded (αβγ) universal electro-mechanical testing machine has been installed. The machine is equipped with an 800 °C furnace. The furnace chamber is part of the inner α-box and can be flushed with inert gas. The specimen can be observed by camera during the tests. The foreseen active specimens are very small and cannot be handled by hand. Before starting active tests, tools and installations had to be improved and a great deal of manipulator practice had to be acquired. For the operational permit, granted by the authorities (Swiss Federal Nuclear Safety Inspectorate, HSK), extensive safety data concerning furnace cooling, air pressure and γ-shielding had to be collected. Up to now various inactive tests have been performed. Besides the operational and safety features, results of inactive mechanical tests and tests for active commissioning are presented. (authors)

  17. Liquid-chromatographic analysis for cyclosporine with use of a microbore column and small sample volume.

    Science.gov (United States)

    Annesley, T; Matz, K; Balogh, L; Clayton, L; Giacherio, D

    1986-07-01

    This liquid-chromatographic assay requires 0.2 to 0.5 mL of whole blood, avoids the use of diethyl ether, and consumes only 10 to 20% of the solvents used in prior methods. Sample preparation involves an acidic extraction with methyl-t-butyl ether, performed in a 13 X 100 mm disposable glass tube, then a short second extraction of the organic phase with sodium hydroxide. After evaporation of the methyl-t-butyl ether, chromatography is performed on an "Astec" 2.0-mm (i.d.) octyl column. We compared results by this procedure with those by use of earlier larger-scale extractions and their respective 4.6-mm (i.d.) columns; analytical recoveries of cyclosporins A and D were comparable with previous findings and results for patients' specimens were equivalent, but the microbore columns provided greatly increased resolution and sensitivity.

  18. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.
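The partition-stratify-select steps above amount to stratified random sampling of plots. A minimal sketch follows; the stratum names, plot counts, and per-stratum sample size are invented for illustration, and the manual itself performs these steps in ArcGIS and Excel rather than in code.

```python
# Stratified random selection of survey plots: draw a simple random sample
# of plot ids within each stratum, capped at the stratum size.
import random

def stratified_sample(plots, n_per_stratum, seed=0):
    """plots: dict mapping stratum name -> list of plot ids.
    Returns a dict mapping each stratum to its sampled, sorted plot ids."""
    rng = random.Random(seed)   # fixed seed makes the selection reproducible
    return {
        stratum: sorted(rng.sample(ids, min(n_per_stratum, len(ids))))
        for stratum, ids in plots.items()
    }

# Hypothetical landscape of 112 plots in three strata
plots = {
    "high_density": list(range(0, 40)),
    "low_density": list(range(40, 100)),
    "riparian": list(range(100, 112)),
}
survey = stratified_sample(plots, n_per_stratum=8)
```

Sampling within strata rather than from the pooled plot list guarantees that rare strata (such as the small riparian class here) are represented in the survey.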

  19. Targeted histology sampling from atypical small acinar proliferation area detected by repeat transrectal prostate biopsy

    Directory of Open Access Journals (Sweden)

    A. V. Karman

    2017-01-01

    Full Text Available Objective: to define the approach to the management of patients with a detected ASAP area. Materials and methods. In the time period from 2012 through 2015, 494 patients with previously negative biopsy and remaining suspicion of prostate cancer (PCa) were examined. The patients underwent repeat 24-core multifocal prostate biopsy, with additional tissue samples taken from suspicious areas detected by multiparametric magnetic resonance imaging and transrectal ultrasound. An isolated ASAP area was found in 127 (25.7 %) of the 494 examined men. All of them were offered repeat targeted transrectal biopsy of this area. Targeted transrectal ultrasound-guided biopsy of the ASAP area was performed in 56 (44.1 %) of the 127 patients, 53 of them being included in the final analysis. Results. PCa was diagnosed in 14 (26.4 %) of the 53 patients, their mean age being 64.4 ± 6.9 years. The average level of prostate-specific antigen (PSA) in PCa patients was 6.8 ± 3.0 ng/ml, in those with benign lesions – 9.3 ± 6.5 ng/ml; the percentage ratio of free/total PSA with PCa was 16.2 ± 7.8 %, with benign lesions – 23.3 ± 7.7 %; PSA density in PCa patients was 0.14 ± 0.07 ng/ml/cm³, in those with benign lesions – 0.15 ± 0.12 ng/ml/cm³. Therefore, when an ASAP area is detected in repeat prostate biopsy samples, it is advisable to perform targeted extended biopsy of this area.

  20. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    Science.gov (United States)

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an on-going failing regimen. Our aim was also to investigate the impact of different cross-validation strategies and different loss functions. Four different splits between training and validation sets were tested with two loss functions. Six statistical methods were compared. We assessed performance by evaluating R(2) values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided similar results to those of this new predictor. Slight discrepancies arise between the two loss functions investigated, and also between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model classified around 80% of patients correctly. The difference between the lower and higher rates is around 10 percent. The number of mutations retained in different learners also varies from one to 41. Conclusions. The more recent Super Learner methodology, combining the predictions of many learners, provided good performance on our small dataset.
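The "discrete Super Learner" comparison described above can be sketched as estimating each candidate learner's cross-validated risk and selecting the minimizer. The two toy learners here (a constant mean predictor and a one-variable linear fit) are placeholders for the six methods compared in the study, and the data are simulated, not the Jaguar trial data.

```python
# Discrete Super Learner sketch: pick the learner with the lowest
# k-fold cross-validated mean squared error.
import numpy as np

def cv_risk(fit, predict, x, y, k=5):
    """k-fold cross-validated MSE of one learner."""
    folds = np.array_split(np.arange(len(x)), k)
    errs = []
    for fold in folds:
        mask = np.ones(len(x), bool)
        mask[fold] = False                       # hold out this fold
        model = fit(x[mask], y[mask])
        errs.append(np.mean((y[fold] - predict(model, x[fold])) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + rng.normal(0, 1, 100)              # truly linear toy signal

learners = {
    "mean": (lambda xs, ys: ys.mean(),
             lambda m, xs: np.full(len(xs), m)),
    "linear": (lambda xs, ys: np.polyfit(xs, ys, 1),
               lambda m, xs: np.polyval(m, xs)),
}
risks = {name: cv_risk(f, p, x, y) for name, (f, p) in learners.items()}
best = min(risks, key=risks.get)                 # discrete Super Learner pick
```

The full Super Learner goes one step further, fitting a weighted combination of the learners' cross-validated predictions rather than selecting a single winner; the discrete version above is the simpler comparison baseline the abstract mentions.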

  1. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
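The large-sample branch of the KSTEST algorithm evaluates Smirnov's asymptotic series for the limiting distribution of the KS statistic, Q(λ) = 2 Σ_{j≥1} (-1)^(j-1) exp(-2 j² λ²), which gives the significance level for λ = sqrt(n)·D_n. A direct sketch of that series (the small-sample table lookup is not reproduced here):

```python
# Smirnov's asymptotic significance level for the one-sample KS test.
import math

def smirnov_q(lam, terms=100):
    """Q(lambda): asymptotic tail probability of the KS statistic.
    The alternating series converges extremely fast for lam > 0."""
    if lam <= 0:
        return 1.0
    return 2.0 * sum(
        (-1) ** (j - 1) * math.exp(-2.0 * j * j * lam * lam)
        for j in range(1, terms + 1)
    )

# At the classical 5% critical value lambda ≈ 1.358, Q is close to 0.05:
p = smirnov_q(1.358)
```

In practice only a handful of terms matter, since the exponents grow like j²; the series is unreliable only for small n, which is exactly why KSTEST switches to Birnbaum's tabulated values when n ≤ 80.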

  2. Precise Th/U-dating of small and heavily coated samples of deep sea corals

    Science.gov (United States)

    Lomitschka, Michael; Mangini, Augusto

    1999-07-01

    Marine carbonate skeletons like deep-sea corals are frequently coated with iron and manganese oxides/hydroxides which adsorb additional thorium and uranium from the sea water. A new cleaning procedure has been developed to reduce this contamination. In this additional cleaning step, a solution of Na2EDTA (disodium EDTA) and ascorbic acid is used whose composition is optimised especially for samples of about 20 mg. It was first tested on aliquots of a reef-building coral which had been artificially contaminated with powdered ferromanganese nodule. Applied to heavily contaminated deep-sea corals (Scleractinia), it reduced excess 230Th by a further order of magnitude beyond the usual cleaning procedures. The measurement of at least three fractions of different contamination, together with an additional standard correction for contaminated carbonates, results in Th/U-ages corrected for the authigenic component. Good agreement between Th/U- and 14C-ages can be achieved even for extremely coated corals.

  3. Detection of seizures from small samples using nonlinear dynamic system theory.

    Science.gov (United States)

    Yaylali, I; Koçak, H; Jayakar, P

    1996-07-01

    The electroencephalogram (EEG), like many other biological phenomena, is quite likely governed by nonlinear dynamics. Certain characteristics of the underlying dynamics have recently been quantified by computing the correlation dimensions (D2) of EEG time series data. In this paper, D2 of the unbiased autocovariance function of the scalp EEG data was used to detect electrographic seizure activity. Digital EEG data were acquired at a sampling rate of 200 Hz per channel and organized in continuous frames (duration 2.56 s, 512 data points). To increase the reliability of D2 computations with short duration data, raw EEG data were initially simplified using unbiased autocovariance analysis to highlight the periodic activity that is present during seizures. The D2 computation was then performed from the unbiased autocovariance function of each channel using the Grassberger-Procaccia method with Theiler's box-assisted correlation algorithm. Even with short duration data, this preprocessing proved to be computationally robust and displayed no significant sensitivity to implementation details such as the choices of embedding dimension and box size. The system successfully identified various types of seizures in clinical studies.
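The preprocessing step is easy to make concrete. Below is a minimal sketch (ours, not the paper's implementation) of the unbiased autocovariance function applied to one 512-point frame at the paper's 200 Hz sampling rate: a 10 Hz sinusoid stands in for rhythmic seizure activity, and the autocovariance preserves its periodicity, which is what the D2 computation is then applied to.

```python
import math

def unbiased_autocovariance(x, max_lag):
    # c(k) = (1 / (n - k)) * sum_t (x_t - mean) * (x_{t+k} - mean)
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    return [sum(dev[t] * dev[t + k] for t in range(n - k)) / (n - k)
            for k in range(max_lag + 1)]

# One continuous frame as in the paper: 512 samples at 200 Hz (2.56 s).
fs, n = 200, 512
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]  # 10 Hz rhythm
acov = unbiased_autocovariance(x, 40)
```

For a 10 Hz rhythm the period is 20 samples, so the autocovariance peaks near lag 20 and dips negative near lag 10, reproducing the oscillation that the correlation-dimension analysis then characterises.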

  4. Large-area imaging reveals biologically driven non-random spatial patterns of corals at a remote reef

    Science.gov (United States)

    Edwards, Clinton B.; Eynaud, Yoan; Williams, Gareth J.; Pedersen, Nicole E.; Zgliczynski, Brian J.; Gleason, Arthur C. R.; Smith, Jennifer E.; Sandin, Stuart A.

    2017-12-01

    For sessile organisms such as reef-building corals, differences in the degree of dispersion of individuals across a landscape may result from important differences in life-history strategies or may reflect patterns of habitat availability. Descriptions of spatial patterns can thus be useful not only for the identification of key biological and physical mechanisms structuring an ecosystem, but also by providing the data necessary to generate and test ecological theory. Here, we used an in situ imaging technique to create large-area photomosaics of 16 plots at Palmyra Atoll, central Pacific, each covering 100 m2 of benthic habitat. We mapped the location of 44,008 coral colonies and identified each to the lowest taxonomic level possible. Using metrics of spatial dispersion, we tested for departures from spatial randomness. We also used targeted model fitting to explore candidate processes leading to differences in spatial patterns among taxa. Most taxa were clustered and the degree of clustering varied by taxon. A small number of taxa did not significantly depart from randomness and none revealed evidence of spatial uniformity. Importantly, taxa that readily fragment or tolerate stress through partial mortality were more clustered. With little exception, clustering patterns were consistent with models of fragmentation and dispersal limitation. In some taxa, dispersion was linearly related to abundance, suggesting density dependence of spatial patterning. The spatial patterns of stony corals are non-random and reflect fundamental life-history characteristics of the taxa, suggesting that the reef landscape may, in many cases, have important elements of spatial predictability.
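One standard dispersion metric of the kind the study used can illustrate the clustered-versus-random distinction (the index choice and data below are ours, not the paper's): the Clark-Evans index R, the observed mean nearest-neighbour distance divided by its expectation 1/(2*sqrt(density)) under complete spatial randomness. R < 1 indicates clustering, R near 1 randomness, and R > 1 uniform spacing.

```python
import math
import random

def clark_evans(points, area):
    # R = mean nearest-neighbour distance / expected distance under CSR.
    n = len(points)
    nn = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if j != i)
        nn.append(d)
    observed = sum(nn) / n
    expected = 0.5 / math.sqrt(n / area)
    return observed / expected

random.seed(2)
# Random pattern: 200 points in a 10 m x 10 m plot.
rand_pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
# Clustered pattern: the same number of points scattered around 10 parents,
# mimicking taxa that spread by fragmentation.
parents = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]
clus_pts = [(px + random.gauss(0, 0.2), py + random.gauss(0, 0.2))
            for px, py in parents for _ in range(20)]

R_rand = clark_evans(rand_pts, 100.0)
R_clus = clark_evans(clus_pts, 100.0)
```

This simple version ignores edge corrections, which matter for formal inference on real plots; it is meant only to show why fragmenting taxa register as strongly clustered.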

  5. Split Hopkinson Resonant Bar Test for Sonic-Frequency Acoustic Velocity and Attenuation Measurements of Small, Isotropic Geologic Samples

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S.

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to the problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample placed between a pair of long metal extension bars with an attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with X-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, extension- and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.

  6. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.f [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the effect of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.

  7. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the effect of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.

  8. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution decreased the power of the MIMIC model for detecting uniform DIF by 0.33% and 0.47%, respectively. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  9. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    Science.gov (United States)

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution decreased the power of the MIMIC model for detecting uniform DIF by 0.33% and 0.47%, respectively. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  10. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments

    Directory of Open Access Journals (Sweden)

    Wim Bras

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  11. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments.

    Science.gov (United States)

    Bras, Wim; Koizumi, Satoshi; Terrill, Nicholas J

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  12. Non-random alkylation of DNA sequences induced in vivo by chemical mutagens

    Energy Technology Data Exchange (ETDEWEB)

    Durante, M.; Geri, C.; Bonatti, S.; Parenti, R. (Universita di Pisa (Italy))

    1989-08-01

    Previous studies of the interaction of alkylating agents on the eukaryotic genome support the idea that induction of DNA adducts is at specific genomic sites. Here we show molecular and cytological evidence that alkylation is rather specific. Mammalian cell cultures were exposed to different doses of mutagens and the DNA was analyzed by density gradient ultracentrifugation, hydroxylapatite fractionation, and by restriction enzyme analysis. Studies with the labelled mutagens N-ethyl-N-nitrosourea and N-methyl-N'-nitro-N-nitrosoguanidine show that there is a non-random distribution of the adducts. The adducts are found more frequently in A-T, G-C rich satellite DNA and highly repetitive sequences. Analysis with restriction enzymes shows that both methyl and ethyl groups influence the restriction patterns of the enzymes HpaII and MspI that recognize specific endogenous DNA methylation. These data suggest, as a subsequent mechanism, a modification in the pattern of the normal endogenous methylation of 5-methylcytosine.

  13. Random and non-random mating populations: Evolutionary dynamics in meiotic drive.

    Science.gov (United States)

    Sarkar, Bijan

    2016-01-01

    Game theoretic tools are utilized to analyze a one-locus continuous selection model of sex-specific meiotic drive by considering nonequivalence of the viabilities of reciprocal heterozygotes that might be noticed at an imprinted locus. The model draws attention to the role of viability selections of different types in examining the stable nature of polymorphic equilibrium. A bridge between population genetics and evolutionary game theory has been built up by applying the concept of the Fundamental Theorem of Natural Selection. In addition to pointing out the influences of male and female segregation ratios on selection, the configuration structure reveals some notable results, e.g., that Hardy-Weinberg frequencies hold in replicator dynamics, that faster evolution occurs at the maximized variance fitness, that mixed Evolutionarily Stable Strategies (ESS) exist in asymmetric games, and that evolution tends to follow not only a 1:1 sex ratio but also a 1:1 ratio of different alleles at a particular gene locus. Through construction of replicator dynamics in the group selection framework, our selection model introduces a redefined basis of game theory to incorporate non-random mating, where a mating parameter associated with population structure depends on the social structure. The model also exposes the fact that the number of polymorphic equilibria depends on the algebraic expression of the population structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A compact time-of-flight SANS instrument optimised for measurements of small sample volumes at the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Kynde, Søren, E-mail: kynde@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark); Hewitt Klenø, Kaspar [Niels Bohr Institute, University of Copenhagen (Denmark); Nagy, Gergely [SINQ, Paul Scherrer Institute (Switzerland); Mortensen, Kell; Lefmann, Kim [Niels Bohr Institute, University of Copenhagen (Denmark); Kohlbrecher, Joachim, E-mail: Joachim.kohlbrecher@psi.ch [SINQ, Paul Scherrer Institute (Switzerland); Arleth, Lise, E-mail: arleth@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark)

    2014-11-11

    The high flux at the European Spallation Source (ESS) will allow for performing experiments with relatively small beam-sizes while maintaining a high intensity of the incoming beam. The pulsed nature of the source makes the facility optimal for time-of-flight small-angle neutron scattering (ToF-SANS). We find that a relatively compact SANS instrument becomes the optimal choice in order to obtain the widest possible q-range in a single setting and the best possible exploitation of the neutrons in each pulse, hence obtaining the highest possible flux at the sample position. The instrument proposed in the present article is optimised for performing fast measurements of small scattering volumes, typically down to 2×2×2 mm³, while covering a broad q-range from about 0.005 1/Å to 0.5 1/Å in a single instrument setting. This q-range corresponds to that available at a typical good BioSAXS instrument and is relevant for a wide set of biomacromolecular samples. A central advantage of covering the whole q-range in a single setting is that each sample has to be loaded only once. This makes it convenient to use the fully automated high-throughput flow-through sample changers commonly applied at modern synchrotron BioSAXS facilities. The central drawback of choosing a very compact instrument is that the resolution in terms of δλ/λ obtained with the short-wavelength neutrons becomes worse than what is usually the standard at state-of-the-art SANS instruments. Our McStas-based simulations of the instrument performance for a set of characteristic biomacromolecular samples show that the resulting smearing effects still have relatively minor effects on the obtained data and can be compensated for in the data analysis. However, in cases where a better resolution is required in combination with the large simultaneous q-range characteristic of the instrument, we show that this can be obtained by inserting a set of choppers.
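The quoted q-range follows from the standard scattering relation q = (4π/λ)·sin(θ), with 2θ the scattering angle; a short sketch makes the unit conversion explicit (the wavelength and angle below are illustrative values, not the instrument's design parameters).

```python
import math

def q_inv_angstrom(wavelength_aa, two_theta_rad):
    # q = (4*pi / lambda) * sin(2theta / 2), with lambda in Angstrom,
    # giving q in 1/Angstrom.
    return 4.0 * math.pi / wavelength_aa * math.sin(two_theta_rad / 2.0)

# Example: 5 Angstrom neutrons scattered through 2 degrees.
q = q_inv_angstrom(5.0, math.radians(2.0))
```

Sweeping the wavelength band of a pulse and the detector angles in this formula is what lets a single compact setting span roughly 0.005 to 0.5 1/Å.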

  15. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using Mplus

    Science.gov (United States)

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  16. A new CF-IRMS system for quantifying stable isotopes of carbon monoxide from ice cores and small air samples

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2010-10-01

    We present a new analysis technique for stable isotope ratios (δ13C and δ18O) of atmospheric carbon monoxide (CO) from ice core samples. The technique is an online cryogenic vacuum extraction followed by continuous-flow isotope ratio mass spectrometry (CF-IRMS); it can also be used with small air samples. The CO extraction system includes two multi-loop cryogenic cleanup traps, a chemical oxidant for oxidation to CO2, a cryogenic collection trap, a cryofocusing unit, gas chromatography purification, and subsequent injection into a Finnigan Delta Plus IRMS. Analytical precision of 0.2‰ (±1σ) for δ13C and 0.6‰ (±1σ) for δ18O can be obtained for 100 mL (STP) air samples with CO mixing ratios ranging from 60 ppbv to 140 ppbv (~268–625 pmol CO). Six South Pole ice core samples from depths ranging from 133 m to 177 m were processed for CO isotope analysis after wet extraction. To our knowledge, this is the first measurement of stable isotopes of CO in ice core air.
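The δ notation used for these ratios has a simple standard definition, sketched below. The sample ratio is hypothetical; only the VPDB reference ratio is a tabulated constant, and the conversion itself is the conventional one, not anything specific to this instrument.

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in permil (‰),
# where R is the heavy/light isotope ratio (e.g. 13C/12C).

VPDB_13C = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta_permil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

# A hypothetical sample slightly depleted in 13C relative to VPDB:
r_sample = 0.0109284
d13c = delta_permil(r_sample, VPDB_13C)
```

A depleted sample like this comes out near -27‰, in the range typical of combustion-derived CO; the quoted 0.2‰ precision is what makes such source distinctions possible.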

  17. Forecasting elections with mere recognition from small, lousy samples: A comparison of collective recognition, wisdom of crowds, and representative polls

    Directory of Open Access Journals (Sweden)

    Wolfgang Gaissmeier

    2011-02-01

    We investigated the extent to which the human capacity for recognition helps to forecast political elections: we compared naive recognition-based election forecasts computed from convenience samples of citizens' recognition of party names to (i) standard polling forecasts computed from representative samples of citizens' voting intentions, and to (ii) simple---and typically very accurate---wisdom-of-crowds forecasts computed from the same convenience samples of citizens' aggregated hunches about election results. Results from four major German elections show that mere recognition of party names forecast the parties' electoral success fairly well. Recognition-based forecasts were most competitive with the other models when forecasting the smaller parties' success and for small sample sizes. However, wisdom-of-crowds forecasts outperformed recognition-based forecasts in most cases. It seems that wisdom-of-crowds forecasts are able to draw on the benefits of recognition while at the same time avoiding its downsides, such as lack of discrimination among very famous parties or recognition caused by factors unrelated to electoral success. Yet it seems that a simple extension of the recognition-based forecasts---asking people what proportion of the population would recognize a party instead of whether they themselves recognize it---is also able to eliminate these downsides.
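The two convenience-sample forecasts being compared are easy to state in code. The sketch below uses invented party labels, recognition rates, and guesses (none of them from the study): the recognition-based forecast assigns each party a share proportional to the fraction of respondents who recognise its name, while the wisdom-of-crowds forecast simply averages the respondents' own guesses of the result.

```python
# Hypothetical convenience-sample data: fraction of respondents recognising
# each party name, and each respondent's guessed vote shares (percent).
recognition = {"A": 1.00, "B": 0.95, "C": 0.60, "D": 0.25}
guesses = [
    {"A": 40, "B": 35, "C": 15, "D": 10},
    {"A": 38, "B": 34, "C": 18, "D": 10},
    {"A": 42, "B": 30, "C": 16, "D": 12},
]

# Recognition-based forecast: normalise recognition rates to sum to 100%.
total_rec = sum(recognition.values())
recognition_forecast = {p: 100 * r / total_rec for p, r in recognition.items()}

# Wisdom-of-crowds forecast: average the respondents' guessed shares.
crowd_forecast = {p: sum(g[p] for g in guesses) / len(guesses)
                  for p in recognition}
```

The toy data also shows the "lack of discrimination" problem the abstract mentions: parties A and B are both recognised by nearly everyone, so the recognition forecast barely separates them even when the crowd's guesses do.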

  18. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    Science.gov (United States)

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie

    2018-01-01

    Karst areas are typical ecologically fragile areas, and stony desertification has become the most serious ecological and economic problem in these areas worldwide as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km2, 4.50 km2, and 1.87 km2, respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content. PMID:29652811

  19. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    Directory of Open Access Journals (Sweden)

    Zhenming Zhang

    2018-04-01

    Karst areas are typical ecologically fragile areas, and stony desertification has become the most serious ecological and economic problem in these areas worldwide as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km2, 4.50 km2, and 1.87 km2, respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.

  20. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds.

    Science.gov (United States)

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei

    2018-04-13

    Karst areas are typical ecologically fragile areas, and stony desertification has become the most serious ecological and economic problem in these areas worldwide as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
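The grid-subsampling scheme behind these comparisons can be sketched with toy data (ours, not the Houzhai measurements): from a fine grid of rock-exposure values, coarser grids are formed by taking every 2nd, 3rd, ... point, and each subsample's mean and coefficient of variation are compared with the full-grid values.

```python
import random

random.seed(3)
size = 60  # a 60 x 60 fine grid of hypothetical rock-exposure values (%)
grid = [[random.uniform(0, 100) for _ in range(size)] for _ in range(size)]

def subsample_stats(stride):
    # Take every `stride`-th grid point in both directions (stride 2 on a
    # 150 m grid corresponds to a 300 m grid, and so on) and return the
    # mean and coefficient of variation of the subsample.
    vals = [grid[i][j] for i in range(0, size, stride)
            for j in range(0, size, stride)]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    cv = (var ** 0.5) / mean
    return mean, cv

full_mean, full_cv = subsample_stats(1)
coarse_means = [subsample_stats(s)[0] for s in (2, 3, 4, 5, 6)]
```

On spatially uncorrelated toy data the subsample means simply fluctuate around the full-grid mean, mirroring the paper's observation; the systematic drop in maxima and rise in variability the authors report reflects the real field's spatial structure, which this sketch does not model.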

  1. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviours. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only very few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
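The Importance Sampling half of the method is worth making concrete (the Kriging metamodel and active-learning loop are omitted here, so this is only an illustration of the underlying idea, not AK-IS itself): to estimate a small failure probability such as P(X > 4) for standard normal X, samples are drawn from a density centred at the FORM design point and reweighted by the likelihood ratio, instead of waiting for rare hits under crude Monte Carlo.

```python
import math
import random

def failure_prob_is(threshold, n, seed=0):
    # Importance sampling estimate of P(X > threshold), X ~ N(0, 1),
    # with proposal N(threshold, 1) centred at the design point.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            # Likelihood ratio phi(x) / phi(x - threshold)
            # = exp(threshold^2 / 2 - threshold * x).
            total += math.exp(threshold * threshold / 2.0 - threshold * x)
    return total / n

p_hat = failure_prob_is(4.0, 100_000)
# Exact reference value: P(X > 4) = 0.5 * erfc(4 / sqrt(2)).
p_exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
```

Crude Monte Carlo would need on the order of 10⁷ samples to see even a handful of failures at this probability level (about 3 × 10⁻⁵); the shifted proposal makes roughly half the samples informative, which is why combining it with a cheap metamodel, as AK-IS does, is so effective for time-demanding models.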

  2. Application of inductively coupled plasma mass spectrometry for multielement analysis in small sample amounts of thyroid tissue from Chernobyl area

    International Nuclear Information System (INIS)

    Becker, J.S.; Dietze, H.J.; Boulyga, S.F.; Bazhanova, N.N.; Kanash, N.V.; Malenchenko, A.F.

    2000-01-01

    As a result of the Chernobyl nuclear power plant accident in 1986, thyroid pathologies occurred among children in some regions of Belarus. Besides the irradiation of children's thyroids by radioactive iodine and caesium nuclides, toxic elements from fallout are a direct risk to health. Inductively coupled plasma quadrupole-based mass spectrometry (ICP-MS) and instrumental neutron activation analysis (INAA) were used for multielement determination in small amounts (1-10 mg) of human thyroid tissue samples. The accuracy of the applied analytical technique for small biological sample amounts was checked using the NIST standard reference material Oyster Tissue (SRM 1566b). Almost all essential elements, as well as a number of toxic elements such as Cd, Pb, Hg and U, were determined in a multitude of human thyroid tissues by quadrupole-based ICP-MS using micronebulization. In general, thyroid tissue affected by pathology is characterized by higher calcium content. Some other elements, among them Sr, Zn, Fe, Mn, V, As, Cr, Ni, Pb, U, Ba and Sb, were also accumulated in such tissue. The results obtained will be used as initial material for further specific studies of the role of particular elements in thyroid pathology development.

  3. Antibiotic Resistance in Animal and Environmental Samples Associated with Small-Scale Poultry Farming in Northwestern Ecuador.

    Science.gov (United States)

    Braykov, Nikolay P; Eisenberg, Joseph N S; Grossman, Marissa; Zhang, Lixin; Vasco, Karla; Cevallos, William; Muñoz, Diana; Acevedo, Andrés; Moser, Kara A; Marrs, Carl F; Foxman, Betsy; Trostle, James; Trueba, Gabriel; Levy, Karen

    2016-01-01

    The effects of animal agriculture on the spread of antibiotic resistance (AR) are cross-cutting and thus require a multidisciplinary perspective. Here we use ecological, epidemiological, and ethnographic methods to examine populations of Escherichia coli circulating in the production poultry farming environment versus the domestic environment in rural Ecuador, where small-scale poultry production employing nontherapeutic antibiotics is increasingly common. We sampled 262 "production birds" (commercially raised broiler chickens and laying hens) and 455 "household birds" (raised for domestic use) and household and coop environmental samples from 17 villages between 2010 and 2013. We analyzed data on zones of inhibition from Kirby-Bauer tests, rather than established clinical breakpoints for AR, to distinguish between populations of organisms. We saw significantly higher levels of AR in bacteria from production versus household birds; resistance to at least one of amoxicillin-clavulanate, cephalothin, cefotaxime, and gentamicin was found in 52.8% of production bird isolates and 16% of household ones. A strain jointly resistant to the 4 drugs was exclusive to a subset of isolates from production birds (7.6%) and coop surfaces (6.5%) and was associated with a particular purchase site. The prevalence of AR in production birds declined with bird age. This study characterizes antibiotic resistance (AR) in E. coli isolates from small-scale poultry production environments versus domestic environments in rural Ecuador, where such backyard poultry operations have become established over the past decade. Our previous research in the region suggests that introduction of AR bacteria through travel and commerce may be an important source of AR in villages of this region. This report extends the prior analysis by examining small-scale production chicken farming as a potential source of resistant strains. Our results suggest that AR strains associated with poultry production likely originate from sources outside the study

  4. Mutational status of synchronous and metachronous tumor samples in patients with metastatic non-small-cell lung cancer

    International Nuclear Information System (INIS)

    Quéré, Gilles; Descourt, Renaud; Robinet, Gilles; Autret, Sandrine; Raguenes, Odile; Fercot, Brigitte; Alemany, Pierre; Uguen, Arnaud; Férec, Claude; Quintin-Roué, Isabelle; Le Gac, Gérald

    2016-01-01

    Despite reported discordance between the mutational status of primary lung cancers and their metastases, metastatic sites are rarely biopsied and targeted therapy is guided by genetic biomarkers detected in the primary tumor. This situation is mostly explained by the apparent stability of EGFR-activating mutations. Given the dramatic increase in the range of candidate drugs and high rates of drug resistance, rebiopsy or liquid biopsy may become widespread. The purpose of this study was to test genetic biomarkers used in clinical practice (EGFR, ALK) and candidate biomarkers identified by the French National Cancer Institute (KRAS, BRAF, PIK3CA, HER2) in patients with metastatic non-small-cell lung cancer for whom two tumor samples were available. A retrospective study identified 88 tumor samples collected synchronously or metachronously, from the same or two different sites, in 44 patients. Mutation analysis used SNaPshot (EGFR, KRAS, BRAF missense mutations), pyrosequencing (EGFR and PIK3CA missense mutations), sizing assays (EGFR and HER2 indels) and IHC and/or FISH (ALK rearrangements). About half the patients (52%) harbored at least one mutation. Five patients had an activating mutation of EGFR in both the primary tumor and the metastasis. The T790M resistance mutation was detected in metastases in 3 patients with acquired resistance to EGFR tyrosine kinase inhibitors. FISH showed discordance in ALK status between a small biopsy sample and the surgical specimen. KRAS mutations were observed in 36% of samples, six patients (14%) having discordant genotypes; all discordances concerned sampling from different sites. Two patients (5%) showed PIK3CA mutations. One metastasis harbored both PIK3CA and KRAS mutations, while the synchronously sampled primary tumor was mutation free. No mutations were detected in BRAF and HER2. This study highlighted noteworthy intra-individual discordance in KRAS mutational status, whereas EGFR status was stable. Intratumoral

  5. Identification of potential small molecule allosteric modulator sites on IL-1R1 ectodomain using accelerated conformational sampling method.

    Directory of Open Access Journals (Sweden)

    Chao-Yie Yang

    Full Text Available The interleukin-1 receptor (IL-1R) is the founding member of the interleukin 1 receptor family, which activates the innate immune response by binding to cytokines. Reports have shown that dysregulation of cytokine production leads to aberrant immune cell activation, which contributes to auto-inflammatory disorders and diseases. Current therapeutic strategies focus on utilizing antibodies or chimeric cytokine biologics. The large protein-protein interaction interface between cytokine receptor and cytokine poses a challenge in identifying binding sites for small molecule inhibitor development. Based on the significant conformational change of the IL-1R type 1 (IL-1R1) ectodomain upon binding to different ligands observed in crystal structures, we hypothesized that transient small molecule binding sites may exist when IL-1R1 undergoes conformational transition and thus may be suitable for inhibitor development. Here, we employed accelerated molecular dynamics (MD) simulation to efficiently sample the conformational space of the IL-1R1 ectodomain. Representative IL-1R1 ectodomain conformations determined from hierarchical cluster analysis were analyzed by the SiteMap program, which led to the identification of small molecule binding sites at the protein-protein interaction interface and allosteric modulator locations. The cosolvent mapping analysis using phenol as the probe molecule further confirms the allosteric modulator site as a binding hotspot. The eight highest-ranked fragment molecules identified from in silico screening at the modulator site were evaluated by MD simulations. Four of them restricted the IL-1R1 dynamical motion to the inactive conformational space. The strategy from this study, subject to in vitro experimental validation, can be useful to identify small molecule compounds targeting the allosteric modulator sites of IL-1R and prevent IL-1R from binding to cytokine by trapping IL-1R in inactive conformations.

  6. Analytical Method for Carbon and Oxygen Isotope of Small Carbonate Samples with the GasBench Ⅱ-IRMS Device

    Directory of Open Access Journals (Sweden)

    LIANG Cui-cui

    2015-01-01

    Full Text Available An analytical method for measuring carbon and oxygen isotopic compositions of trace amounts of carbonate (>15 μg) was established using a Delta V Advantage isotope ratio MS coupled with a GasBench Ⅱ. Different trace amounts (5-50 μg) of a carbonate standard sample (IAEA-CO-1) were measured by GasBench Ⅱ with 12 mL and 3.7 mL vials. When the sample weight was less than 40 μg and the sample was acidified in 12 mL vials, most standard deviations of δ13C and δ18O were more than 0.1‰, which could not satisfy high-precision requirements. When the sample weight was greater than 15 μg and the sample was acidified in 3.7 mL vials, standard deviations for δ13C and δ18O were 0.01‰-0.07‰ and 0.01‰-0.08‰, respectively, which satisfied high-precision requirements. Therefore, with small 3.7 mL vials used to increase the concentration of carbon dioxide in the headspace, carbonate samples as small as 15 μg can be analyzed routinely by a GasBench Ⅱ continuous-flow IRMS. Meanwhile, the linear relationship between sample weight and peak area was strong (R²>0.9932) and can be used to determine the carbon content of carbonate samples.
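
    The weight-versus-peak-area calibration mentioned at the end can be sketched as an ordinary least-squares line fit; the calibration points below are hypothetical, not the paper's measurements:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, plus the coefficient of determination R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

# Hypothetical calibration points: carbonate weight (ug) vs. CO2 peak area (arbitrary units).
weights = [15, 20, 25, 30, 40, 50]
areas = [3.1, 4.0, 5.2, 6.1, 8.2, 10.1]
a, b, r2 = fit_line(weights, areas)

# Invert the line to estimate an unknown sample's carbonate weight from its peak area.
unknown_area = 7.0
est_weight = (unknown_area - b) / a
```

With a strong linear fit (high R²), inverting the calibration line gives the carbon content of an unknown sample directly from its measured peak area, which is the use the abstract describes.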

  7. Success and failure rates of tumor genotyping techniques in routine pathological samples with non-small-cell lung cancer.

    Science.gov (United States)

    Vanderlaan, Paul A; Yamaguchi, Norihiro; Folch, Erik; Boucher, David H; Kent, Michael S; Gangadharan, Sidharta P; Majid, Adnan; Goldstein, Michael A; Huberman, Mark S; Kocher, Olivier N; Costa, Daniel B

    2014-04-01

    Identification of some somatic molecular alterations in non-small-cell lung cancer (NSCLC) has become evidence-based practice. The success and failure rates of using commercially available tumor genotyping techniques in routine day-to-day NSCLC pathology samples are not well described. We sought to evaluate the success and failure rates of EGFR mutation, KRAS mutation, and ALK FISH testing in a cohort of lung cancers subjected to routine clinical tumor genotyping. Clinicopathologic data and tumor genotyping success and failure rates were retrospectively compiled and analyzed from 381 patient-tumor samples. Among these 381 patients with lung cancer, the mean age was 65 years, 61.2% were women, 75.9% were white, 27.8% were never smokers, 73.8% had advanced NSCLC and 86.1% had adenocarcinoma histology. The tumor tissue was obtained from surgical specimens in 48.8%, core needle biopsies in 17.9%, and as cell blocks from aspirates or fluid in 33.3% of cases. Anatomic sites for tissue collection included lung (49.3%), lymph nodes (22.3%), pleura (11.8%), bone (6.0%), and brain (6.0%), among others. The overall success rate for EGFR mutation analysis was 94.2%, for KRAS mutation 91.6% and for ALK FISH 91.6%. The highest failure rates were observed when the tissue was obtained from image-guided percutaneous transthoracic core-needle biopsies (31.8%, 27.3%, and 35.3% for EGFR, KRAS, and ALK tests, respectively) and bone specimens (23.1%, 15.4%, and 23.1%, respectively). In specimens obtained from bone, the failure rates were significantly higher for biopsies than resection specimens (40% vs. 0%, p=0.024 for EGFR) and for decalcified compared to non-decalcified samples (60% vs. 5.5%, p=0.021 for EGFR). Tumor genotyping techniques are feasible in most samples, outside small image-guided percutaneous transthoracic core-needle biopsies and bone samples from core biopsies with decalcification, and therefore expansion of routine tumor genotyping into the care of patients with NSCLC may not require special
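
    Comparisons of small failure counts such as 40% vs. 0% are typically made with Fisher's exact test. A stdlib sketch of the two-sided test is below; the 2x2 counts are illustrative stand-ins, not the study's actual cell counts, which the abstract does not report:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same margins
    whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p_table(x):             # probability of the table with x in cell (1,1)
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 4/10 genotyping failures in bone biopsies
# vs. 0/12 in resection specimens (illustrative numbers only).
p = fisher_exact_two_sided(4, 6, 0, 12)
```

The tolerance factor guards against floating-point noise when deciding which tables are "as extreme as" the observed one, a standard precaution in exact-test implementations.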

  8. The nucleoid protein Dps binds genomic DNA of Escherichia coli in a non-random manner

    Science.gov (United States)

    Kondrashov, F. A.; Toshchakov, S. V.; Dominova, I.; Shvyreva, U. S.; Vrublevskaya, V. V.; Morenkov, O. S.; Panyukov, V. V.

    2017-01-01

    Dps is a multifunctional homododecameric protein that oxidizes Fe2+ ions, accumulating them in the form of Fe2O3 within its protein cavity, interacts with DNA, tightly condensing the bacterial nucleoid upon starvation, and performs several other functions. In the two decades since the discovery of this protein, its ferroxidase activity has become rather well studied, but the mechanism of Dps interaction with DNA still remains enigmatic. The crucial role of lysine residues in the unstructured N-terminal tails led to the conventional point of view that Dps binds DNA without sequence or structural specificity. However, deletion of dps changed the profile of proteins in starved cells, a SELEX screen revealed genomic regions preferentially bound in vitro, and a certain affinity of Dps for artificial branched molecules was detected by atomic force microscopy. Here we report a non-random distribution of Dps binding sites across the bacterial chromosome in exponentially growing cells and show their enrichment with inverted repeats prone to form secondary structures. We found that the Dps-bound regions overlap with sites occupied by other nucleoid proteins and contain overrepresented motifs typical of their consensus sequences. Of the two types of genomic domains with extensive protein occupancy, which can be either highly expressed or transcriptionally silent, only those enriched with RNA polymerase molecules were preferentially occupied by Dps. In the dps-null mutant, we therefore observed differentially altered expression of several targeted genes and found suppressed transcription from the dps promoter. In most cases this can be explained by the relieved interference with Dps for nucleoid proteins exploiting sequence-specific modes of DNA binding.
Thus, protecting bacterial cells from different stresses during exponential growth, Dps can modulate transcriptional integrity of the bacterial chromosome hampering RNA biosynthesis from some genes via competition with RNA polymerase

  9. A Non-randomized Controlled Trial of EMDR on Affective Symptoms in Patients With Glioblastoma Multiforme

    Directory of Open Access Journals (Sweden)

    Monika Szpringer

    2018-05-01

    Full Text Available Glioblastoma multiforme (GBM) is a highly aggressive brain cancer, and survival after diagnosis is less than 2 years. GBM patients are therefore especially prone to co-occurring psychological conditions such as anxiety and depressive disorders. Furthermore, aggressive medical therapies affect patients' lives, undermining their sense of meaning and coherence. The main aim of this study was to determine the effectiveness of Eye Movement Desensitization and Reprocessing (EMDR) therapy on anxiety, depression and sense of coherence in patients with GBM. Thirty-seven GBM-diagnosed women were included in this trial and received standard medical care. Of those, 18 patients were treated over 4 months with 10–12 individual EMDR sessions (60–90 minutes each). Nineteen GBM patients served as a non-randomized control group, as they consented to psychological evaluations but not to a psychotherapeutic intervention. The groups were homogeneous in terms of gender, age, educational level and treatment, but not in anxiety and depressive levels at baseline. All patients were evaluated at baseline, after treatment (4 months) and at follow-up (a further 4 months) using the Hospital Anxiety and Depression Scale (HADS-M) and the Sense of Coherence Scale (SOC-29). Caregivers in both groups were interviewed with the Patient Caregiver Questionnaire after the 4-month follow-up. Statistical analyses were conducted using ANOVA statistics, correlation and regression analysis. Results showed a statistically significant decrease in anxiety, depression and anger in the EMDR group when compared to the control group. EMDR therapy also had a positive impact upon the sense of coherence level in the experimental group, whereas in the control group this declined. Finally, the caregivers reported beneficial outcomes of the EMDR therapy, with less anxiety- and anger-related behaviors in patients in the experimental group compared to the control group. This study is the first to show

  10. Functional redundancy patterns reveal non-random assembly rules in a species-rich marine assemblage.

    Directory of Open Access Journals (Sweden)

    Nicolas Guillemot

    Full Text Available The relationship between the species and functional diversity of assemblages is fundamental in ecology because it contains key information on functional redundancy, and functionally redundant ecosystems are thought to be more resilient, resistant and stable. However, this relationship is poorly understood and undocumented for species-rich coastal marine ecosystems. Here, we used underwater visual censuses to examine the patterns of functional redundancy for one of the most diverse vertebrate assemblages, the coral reef fishes of New Caledonia, South Pacific. First, we found that the relationship between functional and species diversity displayed a non-asymptotic power-shaped curve, implying that rare functions and species mainly occur in highly diverse assemblages. Second, we showed that the distribution of species amongst possible functions was significantly different from a random distribution up to a threshold of ∼90 species/transect. Redundancy patterns for each function further revealed that some functions displayed fast rates of increase in redundancy at low species diversity, whereas others only became redundant past a certain threshold. This suggests non-random assembly rules and the existence of some primordial functions that must be fulfilled first so that coral reef fish assemblages can gain a basic ecological structure. Lastly, we found little effect of habitat on the shape of the functional-species diversity relationship and on the redundancy of functions, although habitat is known to largely determine assemblage characteristics such as species composition, biomass, and abundance. Our study shows that low functional redundancy is characteristic of this highly diverse fish assemblage, and, therefore, that even species-rich ecosystems such as coral reefs may be vulnerable to the removal of a few keystone species.
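
    The test against a random distribution of species amongst functions can be illustrated with a simple null model: assign each species a function uniformly at random and compare the number of functions filled. All numbers below (30 functional groups, the clustered "observed" assemblage) are hypothetical, not the New Caledonia data:

```python
import random

random.seed(0)

N_FUNCTIONS = 30   # hypothetical number of functional groups in the assemblage

def functions_filled(assignment):
    """Number of distinct functions represented in a species-to-function map."""
    return len(set(assignment))

def null_expectation(n_species, trials=2000):
    """Mean functions filled when each species draws its function uniformly at random."""
    total = 0
    for _ in range(trials):
        total += functions_filled(
            [random.randrange(N_FUNCTIONS) for _ in range(n_species)])
    return total / trials

# A hypothetical observed assemblage clustered on a few functions:
observed = [0, 0, 0, 1, 1, 2, 2, 2, 3, 4] * 3        # 30 species, only 5 functions
print(functions_filled(observed), null_expectation(30))
```

When the observed count of filled functions falls well below the null expectation, species are packed into functions non-randomly, which is the kind of departure the abstract reports below the ∼90 species/transect threshold.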

  11. The effect of albedo neutrons on the neutron multiplication of small plutonium oxide samples in a PNCC chamber

    CERN Document Server

    Bourva, L C A; Weaver, D R

    2002-01-01

    This paper describes how to evaluate the effect of neutrons reflected from parts of a passive neutron coincidence chamber on the neutron leakage self-multiplication, M_L, of a fissile sample. It is shown that albedo neutrons contribute, in the case of small plutonium bearing samples, to a significant part of M_L, and that their effect has to be taken into account in the relationship between the measured coincidence count rates and the ²⁴⁰Pu effective mass of the sample. A simple one-interaction model has been used to write the balance of neutron gains and losses in the material when exposed to the re-entrant neutron flux. The energy and intensity profiles of the re-entrant flux have been parameterised using Monte Carlo MCNP™ calculations. This technique has been implemented for the On Site Laboratory neutron/gamma counter within the existing MEPL 1.0 code for the determination of the neutron leakage self-multiplication. Benchmark tests of the resulting MEPL 2.0 code with MC...

  12. Simultaneous extraction and clean-up of polychlorinated biphenyls and their metabolites from small tissue samples using pressurized liquid extraction

    Science.gov (United States)

    Kania-Korwel, Izabela; Zhao, Hongxia; Norstrom, Karin; Li, Xueshu; Hornbuckle, Keri C.; Lehmler, Hans-Joachim

    2008-01-01

    A pressurized liquid extraction-based method for the simultaneous extraction and in situ clean-up of polychlorinated biphenyls (PCBs), hydroxylated (OH)-PCBs and methylsulfonyl (MeSO2)-PCBs from small (<0.5 g) tissue samples was developed and validated. Extraction of a laboratory reference material with hexane:dichloromethane:methanol (48:43:9, v/v) and Florisil as a fat retainer allowed efficient recovery of PCBs (78–112%; RSD: 13–37%), OH-PCBs (46±2%; RSD: 4%) and MeSO2-PCBs (89±21%; RSD: 24%). Comparable results were obtained with an established analysis method for PCBs, OH-PCBs and MeSO2-PCBs. PMID:19019378

  13. CA II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. III. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF 14 CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Parisi, M. C.; Clariá, J. J.; Marcionni, N. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba, CP 5000 (Argentina); Geisler, D.; Villanova, S. [Departamento de Astronomía, Universidad de Concepción Casilla 160-C, Concepción (Chile); Sarajedini, A. [Department of Astronomy, University of Florida P.O. Box 112055, Gainesville, FL 32611 (United States); Grocholski, A. J., E-mail: celeste@oac.uncor.edu, E-mail: claria@oac.uncor.edu, E-mail: nmarcionni@oac.uncor.edu, E-mail: dgeisler@astro-udec.cl, E-mail: svillanova@astro-udec.cl, E-mail: ata@astro.ufl.edu, E-mail: grocholski@phys.lsu.edu [Department of Physics and Astronomy, Louisiana State University 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803-4001 (United States)

    2015-05-15

    We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca II lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km s⁻¹, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca II triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.

  14. MaxEnt’s parameter configuration and small samples: are we paying attention to recommendations? A systematic review

    Directory of Open Access Journals (Sweden)

    Narkis S. Morales

    2017-03-01

    Full Text Available Environmental niche modeling (ENM) is commonly used to develop probabilistic maps of species distribution. Among available ENM techniques, MaxEnt has become one of the most popular tools for modeling species distribution, with hundreds of peer-reviewed articles published each year. MaxEnt's popularity is mainly due to its graphical interface and automatic parameter configuration capabilities. However, recent studies have shown that using the default automatic configuration may not always be appropriate because it can produce non-optimal models, particularly when dealing with a small number of species presence points. Thus, the recommendation is to evaluate the best potential combination of parameters (feature classes and regularization multiplier) to select the most appropriate model. In this work we reviewed 244 articles published between 2013 and 2015 to assess whether researchers are following recommendations to avoid using the default parameter configuration when dealing with small sample sizes, or if they are using MaxEnt as a “black box tool.” Our results show that in only 16% of the analyzed articles did the authors evaluate the best feature classes, in 6.9% the best regularization multipliers, and in a meager 3.7% both parameters simultaneously before producing the definitive distribution model. We analyzed 20 articles to quantify the potential differences in resulting outputs when using software default parameters instead of the alternative best model. Results from our analysis reveal important differences between the use of default parameters and the best model approach, especially in the total area identified as suitable for the assessed species and the specific areas that are identified as suitable by both modelling approaches. These results are worrying, because publications are potentially reporting over-complex or over-simplistic models that can undermine the applicability of their results. Of particular importance

  15. Multidisciplinary intervention reducing readmissions in medical inpatients: a prospective, non-randomized study

    Directory of Open Access Journals (Sweden)

    Torisson G

    2013-09-01

    Full Text Available Gustav Torisson,1 Lennart Minthon,1 Lars Stavenow,2 Elisabet Londos1 1Clinical Memory Research Unit, Department of Clinical Sciences, Lund University, 2Department of Internal Medicine, Skåne University Hospital, Malmö, Sweden Background: The purpose of this study was to examine whether a multidisciplinary intervention targeting drug-related problems, cognitive impairment, and discharge miscommunication could reduce readmissions in a general hospital population. Methods: This prospective, non-randomized intervention study was carried out at the department of general internal medicine at a tertiary university hospital. Two hundred medical inpatients living in the community and aged over 60 years were included. Ninety-nine patients received interventions and 101 received standard care. Control/intervention allocation was determined by geographic selection. Interventions consisted of a comprehensive medication review, improved discharge planning, post-discharge telephone follow-up, and liaison with the patient's general practitioner. The main outcome measures recorded were readmissions and hospital nights 12 months after discharge. Separate analyses were made for 12-month survivors and from an intention-to-treat perspective. Comparative analyses were made between groups as well as within groups over time. Results: After 12 months, survivors in the control group had 125 readmissions in total, compared with 58 in the intervention group (Mann–Whitney U test, P = 0.02). For hospital nights, the numbers were 1,228 and 492, respectively (P = 0.009). Yearly admissions had increased from the previous year in the control group from 77 to 125 (Wilcoxon signed-rank test, P = 0.002) and decreased from 75 to 58 in the intervention group (P = 0.25). From the intention-to-treat perspective, the same general pattern was observed but was not significant (1,827 versus 1,008 hospital nights, Mann–Whitney test, P = 0.054). Conclusion: A multidisciplinary approach
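
    The Mann–Whitney U statistic used for the readmission comparison can be computed by hand from midranks. The per-patient counts below are hypothetical illustrations, not the study's data:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a`, using midranks for ties."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1        # average 1-based rank for the tied block
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    r_a = sum(ranks[: len(a)])           # rank sum of sample `a`
    return r_a - len(a) * (len(a) + 1) / 2

# Hypothetical per-patient readmission counts (control vs. intervention).
control = [2, 3, 1, 4, 2, 3, 5, 1, 2, 3]
intervention = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1]
u = mann_whitney_u(control, intervention)
```

U ranges from 0 to n₁·n₂; values far from n₁·n₂/2, as here, indicate that one group's counts are stochastically larger, which is then converted to a P value via the U distribution or a normal approximation.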

  16. Ca II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. I. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF CLUSTERS

    International Nuclear Information System (INIS)

    Parisi, M. C.; Claria, J. J.; Grocholski, A. J.; Geisler, D.; Sarajedini, A.

    2009-01-01

    We have obtained near-infrared spectra covering the Ca II triplet lines for a large number of stars associated with 16 Small Magellanic Cloud (SMC) clusters using the VLT + FORS2. These data comprise the largest available sample of SMC clusters with spectroscopically derived abundances and velocities. Our clusters span a wide range of ages and provide good areal coverage of the galaxy. Cluster members are selected using a combination of their positions relative to the cluster center as well as their location in the color-magnitude diagram, abundances, and radial velocities (RVs). We determine mean cluster velocities to typically 2.7 km s⁻¹ and metallicities to 0.05 dex (random errors), from an average of 6.4 members per cluster. By combining our clusters with previously published results, we compile a sample of 25 clusters on a homogeneous metallicity scale and with relatively small metallicity errors, and thereby investigate the metallicity distribution, metallicity gradient, and age-metallicity relation (AMR) of the SMC cluster system. For all 25 clusters in our expanded sample, the mean metallicity [Fe/H] = -0.96 with σ = 0.19. The metallicity distribution may possibly be bimodal, with peaks at ∼-0.9 dex and -1.15 dex. Similar to the Large Magellanic Cloud (LMC), the SMC cluster system gives no indication of a radial metallicity gradient. However, intermediate-age SMC clusters are both significantly more metal-poor and have a larger metallicity spread than their LMC counterparts. Our AMR shows evidence for three phases: a very early (>11 Gyr) phase in which the metallicity reached ∼-1.2 dex, a long intermediate phase from ∼10 to 3 Gyr in which the metallicity only slightly increased, and a final phase from 3 to 1 Gyr ago in which the rate of enrichment was substantially faster. We find good overall agreement with the model of Pagel and Tautvaisiene, which assumes a burst of star formation at 4 Gyr. Finally, we find that the mean RV of the cluster system

  17. Overweight and obesity in Slovak high school students and body composition indicators: a non-randomized cross-sectional study

    Directory of Open Access Journals (Sweden)

    Bibiana Vadasova

    2016-08-01

    Full Text Available Abstract Background Physical development can be considered an indicator of the overall health status of the youth population. Currently, it appears that the increasing trend in the prevalence of obesity among children and youths has stopped in a number of countries worldwide. Studies point to the fact that adolescence is a critical period for the development of obesity. Body mass index (BMI) seems to be an orientation parameter in the assessment of the prevalence of obesity, but it is not sufficient for more accurate identification of at-risk individuals. The purpose of this study was to evaluate the association between BMI percentile zones, as health risks for being overweight and obese, and body composition indicators in high-school students from the Prešov region (Slovakia). Methods A non-randomized cross-sectional study in high school students from the Prešov region (Slovakia) was conducted. The research sample consisted of 1014 participants (boys n = 466, girls n = 549). Body composition was measured using direct segmental multi-frequency bioelectrical impedance analysis (DSM-BIA). To examine the association between obesity and selected body composition indicators, Kruskal-Wallis ANOVA and Eta² were used. The relationship between selected body composition indicators and percentile BMI zones was determined using the Kendall tau correlation. Results In groups with different BMI percentile zones (normal weight, overweight, obese), ANOVA showed significant differences for girls and boys (p < .05) with high effect size (η² ≥ .26) in body weight, body fat mass index, body fat percentage, fat free mass index, fat-free mass percentage, visceral fat area, waist-to-hip ratio, waist circumference, protein mass and mineral mass. The highest degree of correlation among boys was between BMI values indicating overweight and obesity and fat free mass index and waist circumference, respectively (τ = .71 and τ = .70, respectively). In girls, the highest
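
    The Kendall tau correlation used above can be sketched in its simplest form, tau-a (no tie correction; published analyses more often use the tie-corrected tau-b). The BMI-zone and waist-circumference values below are hypothetical, not the study's measurements:

```python
def kendall_tau_a(xs, ys):
    """Kendall's tau-a: (concordant - discordant pairs) / (n*(n-1)/2), no tie correction."""
    n = len(xs)
    num = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = (xs[i] > xs[j]) - (xs[i] < xs[j])   # sign of the x difference
            b = (ys[i] > ys[j]) - (ys[i] < ys[j])   # sign of the y difference
            num += a * b                            # +1 concordant, -1 discordant, 0 tie
    return num / (n * (n - 1) / 2)

# Hypothetical data: BMI percentile zone (0 = normal, 1 = overweight, 2 = obese)
# against waist circumference in cm.
zones = [0, 0, 0, 1, 1, 1, 2, 2, 2]
waist = [62, 65, 70, 74, 78, 80, 88, 91, 95]
tau = kendall_tau_a(zones, waist)
```

Because the zone variable is heavily tied, tau-a is pulled below 1 even for a perfectly monotone relationship; that is why tie-corrected variants (tau-b) are preferred for grouped data like percentile zones.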

  18. Effectiveness of a 'Global Postural Reeducation' program for persistent Low Back Pain: a non-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Violante Francesco S

    2010-12-01

    Full Text Available Abstract Background The aim of this non-randomized controlled trial was to evaluate the effectiveness of a Global Postural Reeducation (GPR) program as compared to a Stabilization Exercise (SE) program in subjects with persistent low back pain (LBP) at short- and mid-term follow-up (i.e. 3 and 6 months). Methods According to inclusion and exclusion criteria, 100 patients with a primary complaint of persistent LBP were enrolled in the study: 50 were allocated to the GPR group and 50 to the SE group. Primary outcome measures were the Roland and Morris Disability Questionnaire (RMDQ) and the Oswestry Disability Index (ODI). Secondary outcome measures were the lumbar Visual Analogue Scale (VAS) and the Fingertip-to-floor test (FFT). Data were collected at baseline and at 3/6 months by health care professionals unaware of the study. An intention-to-treat approach was used to analyze participants according to the group to which they were originally assigned. Results Of the 100 patients initially included in the study, 78 patients completed the study: 42 in the GPR group and 36 in the SE group. At baseline, the two groups did not differ significantly with respect to gender, age, BMI and outcome measures. Comparing the differences between groups at short- and mid-term follow-up, the GPR group revealed a significant reduction (from baseline) in all outcome measures with respect to the SE group. The ordered logistic regression model showed an increased likelihood of definitive improvement (reduction from baseline of at least 30% in RMDQ and VAS scores) for the GPR group compared to the SE group (OR 3.9, 95% CI 2.7 to 5.7). Conclusions Our findings suggest that a GPR intervention in subjects with persistent LBP induces a greater improvement in pain and disability as compared to a SE program. These results must be confirmed by further studies with higher methodological standards, including randomization, larger sample size, longer follow-up and subgrouping of the LBP subjects. Trial

  19. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version

    Science.gov (United States)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
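The estimator behind G-SAMPLE can be illustrated with a toy model. Under Gaussian sensor noise, maximum-likelihood estimation of an unknown added mass from force and acceleration data reduces to least squares; everything below (spacecraft mass, noise level, data) is a made-up sketch of the idea, not the flight algorithm.

```python
# Toy sketch of the G-SAMPLE idea (not the flight implementation): with
# Gaussian noise, the ML estimate of total mass from force/acceleration
# pairs is the least-squares solution; the known spacecraft mass is then
# subtracted to identify the collected sample mass. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
m_spacecraft = 500.0   # known spacecraft mass, kg (hypothetical)
m_sample = 1.0         # true collected sample mass, kg (to be estimated)

accel = rng.uniform(0.01, 0.05, size=1000)          # thruster accelerations, m/s^2
force = (m_spacecraft + m_sample) * accel           # ideal thrust forces, N
force_meas = force + rng.normal(0, 0.2, size=1000)  # noisy force-sensor readings

# Least-squares (= ML under Gaussian noise) estimate of total mass.
m_total_hat = np.sum(force_meas * accel) / np.sum(accel ** 2)
m_sample_hat = m_total_hat - m_spacecraft
print(f"estimated sample mass: {m_sample_hat:.2f} kg")
```

The real method additionally models thruster-profile error and spacecraft parameter uncertainty, which is why its error budget (110 g at 95% confidence for a 1000 g sample) is wider than this idealized sketch would suggest.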

  20. Home based telemedicine intervention for patients with uncontrolled hypertension: - a real life - non-randomized study

    Science.gov (United States)

    2014-01-01

    Background Control of blood pressure is frequently inadequate in spite of the availability of several classes of well tolerated and effective antihypertensive drugs. Several factors, including the use of suboptimal doses of drugs, inadequate or ineffective treatments and poor drug compliance, may be the reason for this phenomenon. The aim of the current non-randomized study was to evaluate the effectiveness of a Home-Based Telemedicine service in patients with uncontrolled hypertension. Methods 74 patients were enrolled in the Home-Based Telemedicine group and 94 patients in the Usual Care group. At baseline and at the end of the study, patients in both groups were seen in a cardiology office. Patients in the Home-Based Telemedicine group were additionally followed by a physician-nurse through scheduled and unscheduled telephone appointments. These patients also received a blood pressure measuring device that could transmit the readings to a central data monitor via a secure data connection. Results During the study period (80 ± 25 days), a total of 17401 blood pressure measurements were taken in the Home-Based Telemedicine group, corresponding to 236 ± 136 readings per patient and a mean of 3 ± 1.7 daily measurements. The scheduled telephone contacts (initiated by the nurse) equaled 5.2 ± 4.3/patient (370 in total) and the unscheduled telephone contacts (initiated by the patients) equaled 0.4 ± 0.9/patient (30 in total). The mean systolic blood pressure values decreased from 153 ± 19 mmHg to 130 ± 15 mmHg (p < 0.0001) at the end of the study, and diastolic blood pressure values decreased from 89 ± 10 mmHg to 76 ± 11 mmHg (p < 0.0001). In the Usual Care group, the mean systolic blood pressure values decreased from 156 ± 16 mmHg to 149 ± 17 mmHg (p < 0.05) at the end of the study, and diastolic blood pressure values decreased from 90 ± 8 mmHg to 86 ± 9 mmHg (p < 0.05). The changes in drug

  1. Determination of degree of RBC agglutination for blood typing using a small quantity of blood sample in a microfluidic system.

    Science.gov (United States)

    Chang, Yaw-Jen; Ho, Ching-Yuan; Zhou, Xin-Miao; Yen, Hsiu-Rong

    2018-04-15

    Blood typing assay is a critical test to ensure the serological compatibility of a donor and an intended recipient prior to a blood transfusion. This paper presents a microfluidic blood typing system that uses a small quantity of blood sample to determine the degree of agglutination of red blood cells (RBCs). Two measuring methods were proposed: impedimetric measurement and electroanalytical measurement. The charge transfer resistance in the impedimetric measurement and the power parameter in the electroanalytical measurement were used for the analysis of agglutination level. From the experimental results, both measuring methods provide quantitative results, and the parameters are linearly and monotonically related to the degree of RBC agglutination. However, the electroanalytical measurement is more reliable than the impedimetric technique because the impedimetric measurement may suffer from many influencing factors, such as chip conditions. Five levels from non-agglutination (level 0) to strong agglutination (level 4+) can be discriminated in this study, conforming to the clinical requirement to prevent any risks in transfusion. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Probability estimation of rare extreme events in the case of small samples: Technique and examples of analysis of earthquake catalogs

    Science.gov (United States)

    Pisarenko, V. F.; Rodkin, M. V.; Rukavishnikova, T. A.

    2017-11-01

    The most general approach to studying the recurrence law in the area of the rare largest events is associated with the use of the limit theorems of the theory of extreme values. In this paper, we use the Generalized Pareto Distribution (GPD). The unknown GPD parameters are typically determined by the method of maximum likelihood (ML). However, the ML estimation is only optimal for fairly large samples (>200-300), whereas in many practically important cases there are only dozens of large events. It is shown that in the case of a small number of events, the highest accuracy when using the GPD is provided by the method of quantiles (MQ). In order to illustrate the obtained methodological results, we have formed compiled data sets characterizing the tails of the distributions for typical subduction zones, regions of intracontinental seismicity, and zones of midoceanic (MO) ridges. This approach paves the way for designing a new method for seismic risk assessment. Here, instead of the unstable characteristic, the uppermost possible magnitude Mmax, it is recommended to use the quantiles of the distribution of random maxima over a future time interval. The results of calculating such quantiles are presented.
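The method of quantiles favored in the abstract can be sketched by matching two empirical quantiles to the GPD quantile function Q(p) = (σ/ξ)((1−p)^(−ξ) − 1). The quantile levels (0.5, 0.9), the true parameters, and the sample are my illustrative choices, not the paper's.

```python
# Sketch of a quantile-matching (MQ) fit of the Generalized Pareto
# Distribution. Levels p1, p2 and all data are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import genpareto

XI_TRUE, SIGMA_TRUE = 0.3, 1.0
rng = np.random.default_rng(2)
x = genpareto.rvs(XI_TRUE, scale=SIGMA_TRUE, size=5000, random_state=rng)

p1, p2 = 0.5, 0.9
q1, q2 = np.quantile(x, [p1, p2])

# GPD quantile function: Q(p) = (sigma/xi) * ((1-p)^(-xi) - 1) for xi != 0.
# The ratio Q(p2)/Q(p1) depends only on xi, so solve for xi first.
def ratio_mismatch(xi):
    return ((1 - p2) ** -xi - 1) / ((1 - p1) ** -xi - 1) - q2 / q1

xi_hat = brentq(ratio_mismatch, 1e-6, 2.0)
sigma_hat = xi_hat * q1 / ((1 - p1) ** -xi_hat - 1)
print(f"xi_hat = {xi_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```

Because the estimator uses only two order statistics, it avoids the likelihood surface pathologies that make ML unreliable for the dozens-of-events samples the abstract describes.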

  3. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying robust methods (with Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
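The study's core simulation can be re-created in miniature. The settings below (one test, 500 replicates, a unit-σ lognormal alternative) are my own simplifications, not the paper's design, but they show how sensitivity and specificity of a normality test are estimated at n = 30.

```python
# Minimal re-creation of the study's idea (my own settings, not the
# paper's): how often does the Shapiro-Wilk test correctly keep
# (specificity) or correctly reject (sensitivity) normality at n = 30?
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(3)
n, reps, alpha = 30, 500, 0.05

keep_gaussian = sum(shapiro(rng.normal(size=n)).pvalue >= alpha
                    for _ in range(reps))
reject_lognormal = sum(shapiro(rng.lognormal(size=n)).pvalue < alpha
                       for _ in range(reps))

specificity = keep_gaussian / reps      # ~1 - alpha for Gaussian parents
sensitivity = reject_lognormal / reps   # power against a lognormal parent
print(f"specificity = {specificity:.2f}, sensitivity = {sensitivity:.2f}")
```

Note that "poor specificity" in the paper's sense concerns wrongly accepting non-Gaussian parents as Gaussian; varying the alternative distribution in this loop shows how quickly power collapses for milder skew.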

  4. Critical assessment of the performance of electronic moisture analyzers for small amounts of environmental samples and biological reference materials.

    Science.gov (United States)

    Krachler, M

    2001-12-01

    Two electronic moisture analyzers were critically evaluated with regard to their suitability for determining moisture in small amounts of environmental matrices such as leaves, needles, soil, peat, sediments, and sewage sludge, as well as various biological reference materials. To this end, several homogeneous bulk materials were prepared and subsequently employed for the development and optimization of all analytical procedures. The key features of the moisture analyzers included a halogen or ceramic heater and an integrated balance with a resolution of 0.1 mg, which is an essential prerequisite for obtaining precise results. Oven drying of the bulk materials in a conventional oven at 105 °C until constant mass served as the reference method. A heating temperature of 65 °C was found to provide accurate and precise results for almost all matrices investigated. To further improve the accuracy and precision, other critical parameters such as handling of sample pans, standby temperature, and measurement delay were optimized. Because of its sluggish heating behavior, the performance of the ceramic radiator was inferior to that of the halogen heater, which produced moisture results comparable to those obtained by oven drying. The developed drying procedures were successfully applied to the fast moisture analysis (1.4-6.3 min) of certified biological reference materials of similar provenance to the investigated bulk materials. Moisture results for 200 mg aliquots ranged from 1.4 to 7.8%, and good agreement was obtained between the recommended drying procedure for the reference materials and the electronic moisture analyzers, with absolute uncertainties amounting to 0.1% and 0.2-0.3%, respectively.

  5. Potential use of the non-random distribution of N2 and N2O mole masses in the atmosphere as a tool for tracing atmospheric mixing and isotope fractionation processes

    International Nuclear Information System (INIS)

    Well, R.; Langel, R.; Reineking, A.

    2002-01-01

    The variation in the natural abundance of 15N in atmospheric gas species is often used to determine the mixing of trace gases from different sources. With conventional budget calculations, one unknown quantity can be determined if the remaining quantities are known. From 15N tracer studies in soils with highly enriched 15N-nitrate, a procedure is known to calculate the mixing of atmospheric and soil-derived N2 based on the measurement of the 30/28 and 29/28 ratios in gas samples collected from soil covers. Because of the non-random distribution of the mole masses 30N2, 29N2 and 28N2 in the mixing gas, it is possible to calculate two quantities simultaneously, i.e. the mixing ratio of atmospheric and soil-derived N2, and the isotopic signature of the soil-derived N2. Routine standard measurements of laboratory air had suggested a non-random distribution of N2 mole masses. The objective of this study was to investigate and explain the existence of non-random distributions of 15N15N, 14N15N and 14N14N in N2 and N2O in environmental samples. The calculation of theoretical isotope data resulting from hypothetical mixing of two sources differing in 15N natural abundance demonstrated that the deviation from an ideal random distribution of mole masses is not detectable with the current precision of mass spectrometry. 15N analysis of N2 or N2O was conducted with randomised and non-randomised replicate samples of different origin. 15N abundances as calculated from 29/28 ratios were generally higher in randomised samples. The differences between the treatments ranged between 0.05 and 0.17 per mille δ15N. It was concluded that the observed randomisation effect is probably caused by 15N15N fractionation during environmental processes. (author)

  6. The challenge of NSCLC diagnosis and predictive analysis on small samples. Practical approach of a working group

    DEFF Research Database (Denmark)

    Thunnissen, Erik; Kerr, Keith M; Herth, Felix J F

    2012-01-01

    Until recently, the division of pulmonary carcinomas into small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) was adequate for therapy selection. Due to the emergence of new treatment options subtyping of NSCLC and predictive testing have become mandatory. A practical approach to...

  7. Microstructural descriptors and cellular automata simulation of the effects of non-random nuclei location on recrystallization in two dimensions

    Directory of Open Access Journals (Sweden)

    Paulo Rangel Rios

    2006-06-01

    Full Text Available The effect of non-random nuclei location and the efficiency of microstructural descriptors in assessing such a situation are studied. Cellular automata simulation of recrystallization in two dimensions is carried out to simulate microstructural evolution for nuclei distributions ranging from a periodic arrangement to clusters of nuclei. The simulation results are compared in detail with the microstructural descriptors normally used to follow transformation evolution. It is shown that the contiguity is particularly relevant for detecting microstructural deviations from randomness. This work focuses on recrystallization, but its results are applicable to any nucleation and growth transformation.

  8. Effectiveness of the “What’s Up!” Intervention to Reduce Stigma and Psychometric Properties of the Youth Program Questionnaire (YPQ: Results from a Cluster Non-randomized Controlled Trial Conducted in Catalan High Schools

    Directory of Open Access Journals (Sweden)

    Laura Andrés-Rodríguez

    2017-09-01

    Full Text Available Mental disorders are highly prevalent in the general population, and people who experience them are frequently stigmatized. Stigma has a very negative impact on social, academic/professional, and personal life. Considering the high rates of mental disorders among children and adolescents (13.4%) and how critical this age is in the formation of nuclear beliefs, many campaigns to combat stigma have been developed in the last decade, with mixed results. The OBERTAMENT initiative has produced various anti-stigma campaigns in Catalonia (Spain). In the present study, the main objective was to report on the effectiveness of the OBERTAMENT “What’s up!” intervention, a curricular intervention including education and social contact conducted by the teachers in the classroom with teenagers aged between 14 and 18. Prior to this, we examined the psychometric properties of the Youth Program Questionnaire (YPQ), our main outcome measure, in terms of dimensionality, reliability, and validity. A cluster non-randomized controlled trial was conducted to assess this intervention, which was tested in nine high schools situated in the Barcelona region. A convenience sample of 261 students formed the intervention group and 132 the control group (52% women, mean age = 14, SD = 0.47). The assignment to study conditions was conducted by the Departament d’Ensenyament (Department of Education), Generalitat de Catalunya (Catalan Government). Participants were evaluated at baseline, post-intervention, and 9-month follow-up. The main outcome measure of this study was the YPQ. The Reported and Intended Behavior Scale (RIBS) was used as a secondary outcome measure. The statistical analysis indicated that the YPQ possesses a two-factor structure (stereotypical attitudes and intended behavior) and sound psychometric properties. The multilevel mixed-effects models revealed statistically significant interactions for both study measures, and post hoc intragroup analyses revealed a

  9. Automated microfluidic sample-preparation platform for high-throughput structural investigation of proteins by small-angle X-ray scattering

    DEFF Research Database (Denmark)

    Lafleur, Josiane P.; Snakenborg, Detlef; Nielsen, Søren Skou

    2011-01-01

    A new microfluidic sample-preparation system is presented for the structural investigation of proteins using small-angle X-ray scattering (SAXS) at synchrotrons. The system includes hardware and software features for precise fluidic control, sample mixing by diffusion, automated X-ray exposure control, UV absorbance measurements and automated data analysis. As little as 15 µl of sample is required to perform a complete analysis cycle, including sample mixing, SAXS measurement, continuous UV absorbance measurements, and cleaning of the channels and X-ray cell with buffer. The complete analysis...

  10. Effects of Sample Impurities on the Analysis of MS2 Bacteriophage by Small-Angle Neutron Scattering

    National Research Council Canada - National Science Library

    Elashvili, Ilya; Wick, Charles H; Kuzmanovic, Deborah A; Krueger, Susan; O'Connell, Catherine

    2005-01-01

    ... Small molecular weight impurities distort the resolution and sharpness of contrast variation peaks in structural data obtained by SANS of the bacteriophage MS2...

  11. Development of an evaluation method for fracture mechanical tests on small samples based on a cohesive zone model

    International Nuclear Information System (INIS)

    Mahler, Michael

    2016-01-01

    The safety and reliability of nuclear power plants of the fourth generation is an important issue. It relies on a reliable assessment of the components, for which, among other things, fracture mechanical material properties are required. The irradiation present in the power plants significantly affects the material properties, which therefore need to be determined on irradiated material. Often only small amounts of irradiated material are available for characterization. In that case it is not possible to manufacture the sufficiently large specimens that are necessary for fracture mechanical testing in agreement with the standard; small specimens must be used. From this follows the idea of this study, in which the fracture toughness is predicted with the developed method based on tests of small specimens. For this purpose, the fracture process including the crack growth is described with a continuum mechanical approach using the finite element method and the cohesive zone model. The experiments on small specimens are used for parameter identification of the cohesive zone model. The two parameters of the cohesive zone model are determined by tensile tests on notched specimens (cohesive stress) and by parameter fitting to the fracture behavior of small specimens (cohesive energy). To account for the different triaxialities of the specimens, the cohesive stress is used as a function of the triaxiality. After parameter identification, a large specimen can be simulated with the cohesive zone parameters derived from the small specimens. The predicted fracture toughness of this large specimen fulfills the size requirements of the standard (ASTM E1820 or ASTM E399), in contrast to the small specimen. This method can be used for ductile and brittle material behavior and was validated in this work. In summary, this method offers the possibility to determine the fracture toughness indirectly based on small specimen testing. Its main advantage is the small required specimen volume. Thereby massively

  12. Discovering non-random segregation of sister chromatids: The naïve treatment of a premature discovery

    Directory of Open Access Journals (Sweden)

    Karl G. Lark

    2013-02-01

    Full Text Available The discovery of non-random chromosome segregation is discussed from the perspective of what was known in 1965 and 1966. The distinction between daughter, parent or grandparent strands of DNA was developed in a bacterial system and led to the discovery that multiple copies of DNA elements of bacteria are not distributed randomly with respect to the age of the template strand. Experiments with higher eukaryotic cells demonstrated that during mitosis Mendel’s laws were violated; and the initial serendipitous choice of eukaryotic cell system led to the striking example of non-random segregation of parent and grandparent DNA template strands in primary cultures of cells derived from mouse embryos. Attempts to extrapolate these findings to established TC lines demonstrated that the property could be lost. Experiments using plant root tips demonstrated that the phenomenon exists in plants and that it was, at some level, under genetic control. Despite publication in major journals and symposia (Lark et al. 1966a; Lark 1967a, 1967b, 1969, 1969a, 1969b), the potential implications of these findings were ignored for several decades. Here we explore possible reasons for the prematurity (Stent, 1972) of this discovery.

  13. Network meta-analysis incorporating randomized controlled trials and non-randomized comparative cohort studies for assessing the safety and effectiveness of medical treatments: challenges and opportunities

    OpenAIRE

    Cameron, Chris; Fireman, Bruce; Hutton, Brian; Clifford, Tammy; Coyle, Doug; Wells, George; Dormuth, Colin R.; Platt, Robert; Toh, Sengwee

    2015-01-01

    Network meta-analysis is increasingly used to allow comparison of multiple treatment alternatives simultaneously, some of which may not have been compared directly in primary research studies. The majority of network meta-analyses published to date have incorporated data from randomized controlled trials (RCTs) only; however, inclusion of non-randomized studies may sometimes be considered. Non-randomized studies can complement RCTs or address some of their limitations, such as short follow-up...

  14. Analysis and comparison of fish growth from small samples of length-at-age data : Detection of sexual dimorphism in Eurasian perch as an example

    NARCIS (Netherlands)

    Mooij, WM; Van Rooij, JM; Wijnhoven, S

    A relatively simple approach is presented for statistical analysis and comparison of fish growth patterns inferred from size-at-age data. It can be used for any growth model and small sample sizes. Bootstrapping is used to generate confidence regions for the model parameters and for size and growth
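The bootstrap approach described in this record can be sketched with a von Bertalanffy growth model on synthetic length-at-age data. Everything below (the model choice, true parameters, sample size, and resample count) is an illustrative assumption, not the paper's dataset; non-converging resamples are simply skipped.

```python
# Hedged sketch: bootstrap confidence interval for a von Bertalanffy
# growth parameter (L_inf) from a small length-at-age sample. Data and
# settings are illustrative, not from the Eurasian perch study.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(age, l_inf, k, t0):
    return l_inf * (1 - np.exp(-k * (age - t0)))

rng = np.random.default_rng(4)
age = np.repeat(np.arange(1, 9), 3)                  # small sample: 24 fish
length = von_bertalanffy(age, 40.0, 0.4, -0.5) + rng.normal(0, 1.0, age.size)

p0 = (35.0, 0.3, 0.0)
boot_linf = []
for _ in range(300):                                 # bootstrap resamples
    idx = rng.integers(0, age.size, age.size)
    try:
        popt, _ = curve_fit(von_bertalanffy, age[idx], length[idx],
                            p0=p0, maxfev=10000)
    except RuntimeError:                             # skip failed fits
        continue
    boot_linf.append(popt[0])

lo, hi = np.percentile(boot_linf, [2.5, 97.5])
print(f"L_inf 95% bootstrap CI: ({lo:.1f}, {hi:.1f})")
```

Comparing two groups (e.g. sexes) then amounts to checking whether their bootstrap confidence regions for (L_inf, k) overlap, which is the kind of small-sample comparison the record advocates.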

  15. Investigation of the Effect of Small Hardening Spots Created on the Sample Surface by Laser Complex with Solid-State Laser

    Science.gov (United States)

    Nozdrina, O.; Zykov, I.; Melnikov, A.; Tsipilev, V.; Turanov, S.

    2018-03-01

    This paper describes the results of an investigation of the effect of small hardening spots (about 1 mm) created on the surface of a sample by a laser complex with a solid-state laser. The melted area of the steel sample does not exceed 5%. The change in steel microhardness in the region subjected to laser treatment is studied. A graph of the dependence of sample deformation on tension is also presented. As a result, changes in the yield plateau and plastic properties were detected. The flow lines were tracked in a series of speckle photographs, showing how a millimeter-scale surface inhomogeneity can influence the deformation and strength properties of steel.

  16. Laparoscopic distal pancreatectomy: results of a prospective non-randomized study from a tertiary center.

    Science.gov (United States)

    Palanivelu, C; Shetty, R; Jani, K; Sendhilkumar, K; Rajan, P S; Maheshkumar, G S

    2007-03-01

    Though laparoscopic distal pancreatectomy for benign conditions was first described in the early 1990s, it has not become as popular as other laparoscopic surgeries. Published literature on this topic consists of several case reports and a handful of small series. We present our experience, which, to the best of our knowledge, is the largest series reported to date. Since 1998, 22 patients have undergone distal pancreatectomy at our institute. The technique of distal pancreatosplenectomy, as well as spleen-preserving distal pancreatectomy, is described. Four males and 18 females in the age range of 12-69 years underwent operation. Splenic preservation was possible in 7 patients. The tumor diameter ranged from 2.1 cm to 7.4 cm. The mean operating time was 215 min. The mean length of incision required for specimen retrieval was 3.4 cm. All patients were started on a liquid diet on the first postoperative day. The median hospital stay was 4 days. One patient developed a pancreatic fistula that was managed conservatively. At the end of an average follow-up of 4.6 years, no recurrence has been reported. Laparoscopic distal pancreatectomy is a safe procedure, with minimal morbidity, rapid recovery, and short hospital stay. In appropriate cases, splenic preservation is feasible.

  17. Approaches for cytogenetic and molecular analyses of small flow-sorted cell populations from childhood leukemia bone marrow samples

    DEFF Research Database (Denmark)

    Obro, Nina Friesgaard; Madsen, Hans O.; Ryder, Lars Peter

    2011-01-01

    defined cell populations with subsequent analyses of leukemia-associated cytogenetic and molecular marker. The approaches described here optimize the use of the same tube of unfixed, antibody-stained BM cells for flow-sorting of small cell populations and subsequent exploratory FISH and PCR-based analyses....

  18. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
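The Mann-Whitney connection mentioned in the abstract can be made concrete: the AUC equals U/(n1·n2). The interval below uses the classical Hanley-McNeil variance with a Wald construction, which is one of many possible CI methods and is not claimed to be the paper's recommended variant; the data are synthetic.

```python
# Sketch: AUC from the Mann-Whitney U statistic, with a Wald-type CI
# using the Hanley & McNeil (1982) variance approximation. One classical
# construction among the 29 the paper compares; data are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu, norm

rng = np.random.default_rng(5)
healthy = rng.normal(0.0, 1.0, size=25)   # small samples, in the spirit
diseased = rng.normal(1.5, 1.0, size=25)  # of the paper's setting

u = mannwhitneyu(diseased, healthy, alternative="two-sided").statistic
n1, n2 = len(diseased), len(healthy)
auc = u / (n1 * n2)

# Hanley-McNeil variance approximation for the AUC.
q1 = auc / (2 - auc)
q2 = 2 * auc**2 / (1 + auc)
var = (auc * (1 - auc) + (n1 - 1) * (q1 - auc**2)
       + (n2 - 1) * (q2 - auc**2)) / (n1 * n2)
z = norm.ppf(0.975)
lo, hi = auc - z * np.sqrt(var), auc + z * np.sqrt(var)
print(f"AUC = {auc:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

At small n and high true AUC, a Wald interval like this can cross 1, which is exactly the kind of small-sample defect that motivates the paper's comparison of alternative constructions.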

  19. Estimating the treatment effect from non-randomized studies: The example of reduced intensity conditioning allogeneic stem cell transplantation in hematological diseases

    Directory of Open Access Journals (Sweden)

    Resche-Rigon Matthieu

    2012-08-01

    Full Text Available Abstract Background In some clinical situations, for which RCTs are rare or impossible, the majority of the evidence comes from observational studies, but standard estimations could be biased because they ignore covariates that confound treatment decisions and outcomes. Methods Three observational studies were conducted to assess the benefit of Allo-SCT in the hematological malignancies of multiple myeloma, follicular lymphoma and Hodgkin’s disease. Two statistical analyses were performed: the propensity score (PS) matching approach and the inverse probability weighting (IPW) approach. Results Based on PS-matched samples, a survival benefit in MM patients treated by Allo-SCT, as compared to similar non-allo treated patients, was observed with an HR of death at 0.35 (95% CI: 0.14-0.88). Similar results were observed in HD, 0.23 (0.07-0.80), but not in FL, 1.28 (0.43-3.77). Estimated benefits of Allo-SCT for the original population using IPW were erased, with HRs for death at 0.72 (0.37-1.39) for MM patients, 0.60 (0.19-1.89) for HD patients, and 2.02 (0.88-4.66) for FL patients. Conclusion Differences in estimated benefits rely on whether the underlying population to which they apply is an ideal randomized experimental population (PS) or the original population (IPW). These useful methods should be employed when assessing the effects of innovative treatment in non-randomized experiments.
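The IPW idea from this record can be demonstrated on a toy dataset (nothing below comes from the transplant studies; the confounding structure and effect size are invented). A confounder drives both treatment assignment and outcome, so the naive group difference is biased, while weighting by the inverse of an estimated propensity score recovers the true effect.

```python
# Toy demonstration of inverse probability weighting (IPW) on synthetic,
# confounded data. The propensity model is a logistic regression fitted
# by Newton's method in plain NumPy; true treatment effect = 2.0.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)                         # confounder
p_true = 1 / (1 + np.exp(-x))                  # true propensity score
t = rng.binomial(1, p_true)                    # treatment assignment
y = 2.0 * t + 3.0 * x + rng.normal(size=n)     # outcome; effect = 2.0

# Fit P(T=1|X) by logistic regression (Newton-Raphson iterations).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    w = p * (1 - p)
    beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (t - p))
ps = 1 / (1 + np.exp(-X @ beta))

naive = y[t == 1].mean() - y[t == 0].mean()    # biased by confounding
ipw = (np.sum(t * y / ps) / np.sum(t / ps)
       - np.sum((1 - t) * y / (1 - ps)) / np.sum((1 - t) / (1 - ps)))
print(f"naive = {naive:.2f}, IPW = {ipw:.2f} (truth: 2.0)")
```

As the abstract notes, PS matching and IPW answer subtly different questions: matching estimates the effect in the matched (quasi-randomized) population, while IPW reweights back to the original population, which is why the two approaches can disagree.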

  20. Non-Random Variability in Functional Composition of Coral Reef Fish Communities along an Environmental Gradient.

    Science.gov (United States)

    Plass-Johnson, Jeremiah G; Taylor, Marc H; Husain, Aidah A A; Teichberg, Mirta C; Ferse, Sebastian C A

    2016-01-01

    Changes in the coral reef complex can affect predator-prey relationships, resource availability and niche utilisation in the associated fish community, which may be reflected in decreased stability of the functional traits present in a community. This is because particular traits may be favoured by a changing environment, or by habitat degradation. Furthermore, other traits can be selected against because degradation can relax the association between fishes and benthic habitat. We characterised six important ecological traits for fish species occurring at seven sites across a disturbed coral reef archipelago in Indonesia, where reefs have been exposed to eutrophication and destructive fishing practices for decades. Functional diversity was assessed using two complementary indices (FRic and RaoQ) and correlated to important environmental factors (live coral cover and rugosity, representing local reef health, and distance from shore, representing a cross-shelf environmental gradient). Indices were examined for both a change in their mean, as well as temporal (short-term; hours) and spatial (cross-shelf) variability, to assess whether fish-habitat association became relaxed along with habitat degradation. Furthermore, variability in individual traits was examined to identify the traits that are most affected by habitat change. Increases in the general reef health indicators, live coral cover and rugosity (correlated with distance from the mainland), were associated with decreases in the variability of functional diversity and with community-level changes in the abundance of several traits (notably home range size, maximum length, microalgae, detritus and small invertebrate feeding and reproductive turnover). A decrease in coral cover increased variability of RaoQ while rugosity and distance both inversely affected variability of FRic; however, averages for these indices did not reveal patterns associated with the environment. These results suggest that increased

  1. A novel device for batch-wise isolation of α-cellulose from small-amount wholewood samples

    OpenAIRE

    T. Wieloch; Gerhard Helle; Ingo Heinrich; Michael Voigt; P. Schyma

    2011-01-01

    A novel device for the chemical isolation of α-cellulose from the wholewood material of tree rings was designed by the Potsdam Dendro Laboratory. It allows the simultaneous treatment of up to several hundred micro-samples. Key features are the batch-wise exchange of the chemical solutions, the reusability of all major parts, and the easy and unambiguous labelling of each individual sample. Compared with classical methods, labour intensity and running costs are significantly reduced.

  2. Sucralfate gel as a radioprotector against radiation induced dermatitis in a hypo-fractionated schedule: a non-randomized study.

    Science.gov (United States)

    Kouloulias, V; Asimakopoulos, C; Tolia, M; Filippou, G; Platoni, K; Dilvoi, M; Beli, I; Georgakopoulos, J; Patatoukas, G; Kelekis, N

    2013-04-01

    External beam radiotherapy with high doses provokes many acute skin reactions, such as erythema and moist desquamation. Many topical preparations are used for skin care in radiation oncology departments. Sucralfate humid gel, a colloidal physical form of the anti-ulcer drug sucralfate, promotes epithelial regeneration and activates cell proliferation. Based on this knowledge, we performed a non-randomized clinical trial to evaluate the efficacy of topical sucralfate gel in 30 breast cancer patients receiving postoperative accelerated hypofractionated photon beam therapy. The comparison was performed against 30 patients serving as historical controls. The acute skin reaction was significantly milder in the group receiving the sucralfate gel (p<0.05, Mann-Whitney test), and 90% of the patients had no evidence of radiation-induced skin toxicity. No sucralfate gel-related toxicity was reported by any patient in this study. Randomized trials with more patients are needed for more definitive results.

  3. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Science.gov (United States)

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies are driving the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods able to deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. HPLC-MS/MS, which is widely used and able to cope with small volumes, is susceptible to matrix effects. These hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the high-throughput demands of a clinical setting. The challenges faced, advances made, and experience gained in solid-phase extraction are presented here through the bioanalytical method development and validation of low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method, comprising sample extraction by solid-phase extraction, was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972

  4. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Directory of Open Access Journals (Sweden)

    Bjoern B. Burckhardt

    2015-01-01

    Full Text Available In the USA and Europe, medicines agencies are driving the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods able to deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. HPLC-MS/MS, which is widely used and able to cope with small volumes, is susceptible to matrix effects. These hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the high-throughput demands of a clinical setting. The challenges faced, advances made, and experience gained in solid-phase extraction are presented here through the bioanalytical method development and validation of low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method, comprising sample extraction by solid-phase extraction, was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers.

  5. A method for analysing small samples of floral pollen for free and protein-bound amino acids.

    Science.gov (United States)

    Stabler, Daniel; Power, Eileen F; Borland, Anne M; Barnes, Jeremy D; Wright, Geraldine A

    2018-02-01

    Pollen provides floral visitors with essential nutrients including proteins, lipids, vitamins and minerals. As an important nutrient resource for pollinators, including honeybees and bumblebees, pollen quality is of growing interest in assessing the nutrition available to foraging bees. To date, quantifying the protein-bound amino acids in pollen has been difficult, and methods rely on large amounts of pollen, typically more than 1 g. More usual is to estimate a crude protein value based on the nitrogen content of pollen; however, such methods provide no information on the distribution of the essential and non-essential amino acids constituting the proteins. Here, we describe a method of microwave-assisted acid hydrolysis using low amounts of pollen that allows exploration of amino acid composition, quantified using ultra-high-performance liquid chromatography (UHPLC), and a back-calculation to estimate the crude protein content of pollen. Reliable analysis of protein-bound and free amino acids, as well as an estimation of crude protein concentration, was obtained from pollen samples as low as 1 mg. Greater variation in both protein-bound and free amino acids was found in smaller pollen sample sizes; to account for this, we suggest a correction factor to apply to specific sample sizes of pollen in order to estimate total crude protein content. The method described in this paper will allow researchers to explore the composition of amino acids in pollen and will aid research assessing the nutrition available to pollinating animals. This method will be particularly useful in assaying the pollen of wild plants, from which it is difficult to obtain large sample weights.

  6. Analysis of Reflectance and Transmittance Measurements on Absorbing and Scattering Small Samples Using a Modified Integrating Sphere Setup

    DEFF Research Database (Denmark)

    Jernshøj, Kit Drescher; Hassing, Søren

    2009-01-01

    The purpose of the article is to analyse reflectance and transmittance measurements on small scattering and absorbing samples. Small samples, such as green leaves, pose a particular experimental challenge when the sample beam has a larger cross-section than the sample to be measured. The experimental errors introduced...

  7. Multi-actinide analysis with AMS for ultra-trace determination and small sample sizes: advantages and drawbacks

    Energy Technology Data Exchange (ETDEWEB)

    Quinto, Francesca; Lagos, Markus; Plaschke, Markus; Schaefer, Thorsten; Geckeis, Horst [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology (Germany); Steier, Peter; Golser, Robin [VERA Laboratory, Faculty of Physics, University of Vienna (Austria)

    2016-07-01

    With the abundance sensitivities of AMS for U-236, Np-237 and Pu-239 relative to U-238 at levels lower than 1E-15, a simultaneous determination of several actinides without previous chemical separation from each other is possible. The actinides are extracted from the matrix elements via an iron hydroxide co-precipitation and the nuclides sequentially measured from the same sputter target. This simplified method allows for the use of non-isotopic tracers and consequently the determination of Np-237 and Am-243 for which isotopic tracers with the degree of purity required by ultra-trace mass-spectrometric analysis are not available. With detection limits of circa 1E+4 atoms in a sample, 1E+8 atoms are determined with circa 1 % relative uncertainty due to counting statistics. This allows for an unprecedented reduction of the sample size down to 100 ml of natural water. However, the use of non-isotopic tracers introduces a dominating uncertainty of up to 30 % related to the reproducibility of the results. The advantages and drawbacks of the novel method will be presented with the aid of recent results from the CFM Project at the Grimsel Test Site and from the investigation of global fallout in environmental samples.
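    The quoted 1 % relative uncertainty follows from Poisson counting statistics (σ/N = 1/√N). The sketch below back-computes the implied number of detected events; the ~1E-4 overall detection efficiency is our inference from the quoted figures (1E+4-atom detection limit, 1 % at 1E+8 atoms), not a value stated in the record.

    ```python
    import math

    def rel_counting_uncertainty(detected_events):
        """Relative (1-sigma) uncertainty from Poisson counting statistics."""
        return 1.0 / math.sqrt(detected_events)

    # A detection limit of ~1E+4 atoms together with ~1 % uncertainty at
    # 1E+8 atoms in the sample implies roughly 1E+4 detected events, i.e.
    # an overall efficiency on the order of 1E-4 (our inference).
    atoms_in_sample = 1e8
    assumed_efficiency = 1e-4
    detected = atoms_in_sample * assumed_efficiency
    print(f"relative uncertainty: {rel_counting_uncertainty(detected):.1%}")  # 1.0%
    ```
    
    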

  8. SampleCNN: End-to-End Deep Convolutional Neural Networks Using Very Small Filters for Music Classification

    Directory of Open Access Journals (Sweden)

    Jongpil Lee

    2018-01-01

    Full Text Available Convolutional Neural Networks (CNN have been applied to diverse machine learning tasks for different modalities of raw data in an end-to-end fashion. In the audio domain, a raw waveform-based approach has been explored to directly learn hierarchical characteristics of audio. However, the majority of previous studies have limited their model capacity by taking a frame-level structure similar to short-time Fourier transforms. We previously proposed a CNN architecture which learns representations using sample-level filters beyond typical frame-level input representations. The architecture showed comparable performance to the spectrogram-based CNN model in music auto-tagging. In this paper, we extend the previous work in three ways. First, considering the sample-level model requires much longer training time, we progressively downsample the input signals and examine how it affects the performance. Second, we extend the model using multi-level and multi-scale feature aggregation technique and subsequently conduct transfer learning for several music classification tasks. Finally, we visualize filters learned by the sample-level CNN in each layer to identify hierarchically learned features and show that they are sensitive to log-scaled frequency.
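    The core idea, tiny size-3/stride-3 sample-level filters whose stacking grows the receptive field as 3^n, can be shown with a single-channel NumPy toy. The real SampleCNN is multi-channel with learned weights; here the filter is a fixed averaging kernel purely for illustration.

    ```python
    import numpy as np

    def conv1d(x, w, stride):
        """Valid 1-D correlation with the given stride (single channel)."""
        k = len(w)
        n_out = (len(x) - k) // stride + 1
        return np.array([np.dot(x[i * stride:i * stride + k], w) for i in range(n_out)])

    def sample_level_stack(x, n_layers, w=None):
        """Stack of size-3, stride-3 conv + ReLU layers, SampleCNN-style."""
        w = np.ones(3) / 3 if w is None else w      # fixed averaging kernel (toy)
        for _ in range(n_layers):
            x = np.maximum(conv1d(x, w, stride=3), 0.0)
        return x

    def receptive_field(n_layers, k=3):
        """With stride == filter size, each output sees k**n_layers input samples."""
        return k ** n_layers

    # 3**5 = 243 input samples collapse to a single output after 5 layers
    out = sample_level_stack(np.ones(243), n_layers=5)
    print(out.shape, receptive_field(5))  # (1,) 243
    ```

    Nine such layers already cover 3^9 = 19683 raw samples, which is why frame-level windowing becomes unnecessary.
    
    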

  9. A Simple Method for Automated Solid Phase Extraction of Water Samples for Immunological Analysis of Small Pollutants.

    Science.gov (United States)

    Heub, Sarah; Tscharner, Noe; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2016-01-01

    A new method for solid-phase extraction (SPE) of environmental water samples is proposed. The developed prototype is cost-efficient and user-friendly, and enables rapid, automated and simple SPE. The pre-concentrated solution is compatible with analysis by immunoassay, having a low organic solvent content. A method is described for the extraction and pre-concentration of the natural hormone 17β-estradiol in 100 ml water samples. Reversed-phase SPE is performed with octadecyl-silica sorbent, and elution is done with 200 µl of methanol 50% v/v. The eluent is then diluted with deionized water to lower the methanol content. After manual preparation of the SPE column, the overall procedure is performed automatically within 1 hr. At the end of the process, the estradiol concentration is measured using a commercial enzyme-linked immunosorbent assay (ELISA). A 100-fold pre-concentration is achieved, and the methanol content is only 10% v/v. Full recoveries of the molecule are achieved with 1 ng/L spiked deionized and synthetic sea water samples.
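    The pre-concentration arithmetic in this record can be checked in a few lines. The 5-fold dilution factor is inferred from the stated 50% to 10% v/v methanol reduction, not given explicitly:

    ```python
    sample_volume_ml = 100.0   # water sample loaded on the SPE column
    eluate_volume_ml = 0.2     # 200 µl eluate
    methanol_initial = 0.50    # 50% v/v methanol in the eluate

    # Bringing 50% methanol down to 10% v/v requires a 5-fold dilution
    dilution_factor = methanol_initial / 0.10
    final_volume_ml = eluate_volume_ml * dilution_factor

    preconcentration = sample_volume_ml / final_volume_ml  # net enrichment factor
    methanol_final = methanol_initial / dilution_factor
    print(preconcentration, methanol_final)  # 100.0 0.1
    ```

    So the 500-fold enrichment on the column, divided by the 5-fold dilution needed for immunoassay compatibility, gives exactly the 100-fold net pre-concentration reported.
    
    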

  10. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries, yet there is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, a statistical approach based on the generalized additive model (GAM) and the bootstrap has not been used for fault diagnosis in fermentation processes, much less in the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM, with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate as covariates, captured 99.6 % of the variance in glutamate production during the fermentation process. The bootstrap was then used to quantify the uncertainty of the glutamate production estimated from the fitted GAM, using a 95 % confidence interval. The proposed approach was then applied to online fault diagnosis in abnormal fermentation processes of glutamate, with a fault defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when fermentation conditions returned to normal. The proposed approach required only a small sample set from normal fermentation experiments to establish the model, and thereafter only online recorded data on fermentation parameters for fault diagnosis. The approach based on GAM and bootstrap thus provides a new and effective way to diagnose faults in the fermentation process of glutamate with small sample sets.
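    A minimal sketch of the same idea, using a polynomial smoother as a stand-in for the GAM (a GAM library such as pyGAM would be used in practice) and a residual bootstrap to form the 95 % band. The fermentation curve, noise level, and fault margin below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 40, 80)                     # fermentation time (h, illustrative)
    prod = 50.0 / (1.0 + np.exp(-(t - 20) / 4))    # "true" production curve (made up)
    train = prod + rng.normal(0, 1.0, t.size)      # data from a normal run

    # Polynomial smoother as a stand-in for the fitted GAM
    coef = np.polyfit(t, train, 5)
    fit = np.polyval(coef, t)
    resid = train - fit

    # Residual bootstrap: 95 % confidence band for the fitted curve
    boot = np.empty((500, t.size))
    for b in range(500):
        yb = fit + rng.choice(resid, t.size, replace=True)
        boot[b] = np.polyval(np.polyfit(t, yb, 5), t)
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)

    def is_fault(obs, lo, hi, margin=3.0):
        """Flag observations falling outside the band plus a noise margin."""
        return (obs < lo - margin) | (obs > hi + margin)
    ```

    On a new run, `is_fault` flags the time points where observed production leaves the band, and flags clearing again marks the end of the fault, mirroring the start/end detection described in the abstract.
    
    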

  11. Sampling for validation of digital soil maps

    NARCIS (Netherlands)

    Brus, D.J.; Kempen, B.; Heuvelink, G.B.M.

    2011-01-01

    The increase in digital soil mapping around the world means that appropriate and efficient sampling strategies are needed for validation. Data used for calibrating a digital soil mapping model typically are non-random samples. In such a case we recommend collection of additional independent data and

  12. Detection of Small Numbers of Campylobacter jejuni and Campylobacter coli Cells in Environmental Water, Sewage, and Food Samples by a Seminested PCR Assay

    Science.gov (United States)

    Waage, Astrid S.; Vardund, Traute; Lund, Vidar; Kapperud, Georg

    1999-01-01

    A rapid and sensitive assay was developed for detection of small numbers of Campylobacter jejuni and Campylobacter coli cells in environmental water, sewage, and food samples. Water and sewage samples were filtered, and the filters were enriched overnight in a nonselective medium. The enrichment cultures were prepared for PCR by a rapid and simple procedure consisting of centrifugation, proteinase K treatment, and boiling. A seminested PCR based on specific amplification of the intergenic sequence between the two Campylobacter flagellin genes, flaA and flaB, was performed, and the PCR products were visualized by agarose gel electrophoresis. The assay allowed us to detect 3 to 15 CFU of C. jejuni per 100 ml in water samples containing a background flora consisting of up to 8,700 heterotrophic organisms per ml and 10,000 CFU of coliform bacteria per 100 ml. Dilution of the enriched cultures 1:10 with sterile broth prior to the PCR was sometimes necessary to obtain positive results. The assay was also conducted with food samples analyzed with or without overnight enrichment. As few as ≤3 CFU per g of food could be detected with samples subjected to overnight enrichment, while variable results were obtained for samples analyzed without prior enrichment. This rapid and sensitive nested PCR assay provides a useful tool for specific detection of C. jejuni or C. coli in drinking water, as well as environmental water, sewage, and food samples containing high levels of background organisms. PMID:10103261

  13. Sample types applied for molecular diagnosis of therapeutic management of advanced non-small cell lung cancer in the precision medicine.

    Science.gov (United States)

    Han, Yanxi; Li, Jinming

    2017-10-26

    In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, owing to disparities in sampling, preparation and preservation procedures, inter- and intratumor heterogeneity, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them can individually be "one size fits all". Therefore, in this review, we summarize the strengths and weaknesses of the different sample types widely used in clinical practice, offer solutions to reduce the negative impact of the samples, and propose an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and to formulate individually tailored treatment plans for NSCLC patients in advanced stages.

  14. A comparison of turtle sampling methods in a small lake in Standing Stone State Park, Overton County, Tennessee

    Science.gov (United States)

    Weber, A.; Layzer, James B.

    2011-01-01

    We used basking traps and hoop nets to sample turtles in Standing Stone Lake at 2-week intervals from May to November 2006. In alternate weeks, we conducted visual basking surveys. We collected and observed four species of turtles: spiny softshell (Apalone spinifera), northern map turtle (Graptemys geographica), pond slider (Trachemys scripta), and snapping turtle (Chelydra serpentina). Relative abundances varied greatly among sampling methods. To varying degrees, all methods were species selective. Population estimates from mark-recapture of three species, basking counts, and hoop net catches indicated that pond sliders were the most abundant species, but northern map turtles were 8× more abundant than pond sliders in basking trap catches. We saw relatively few snapping turtles basking even though population estimates indicated they were the second most abundant species. Populations of all species were dominated by adult individuals. Sex ratios of three species differed significantly from 1:1. Visual surveys were the most efficient method for determining the presence of species, but capture methods were necessary to obtain size and sex data.

  15. MDMA-assisted psychotherapy using low doses in a small sample of women with chronic posttraumatic stress disorder.

    Science.gov (United States)

    Bouso, José Carlos; Doblin, Rick; Farré, Magí; Alcázar, Miguel Angel; Gómez-Jarabo, Gregorio

    2008-09-01

    The purpose of this study was to investigate the safety of different doses of MDMA-assisted psychotherapy administered in a psychotherapeutic setting to women with chronic PTSD secondary to a sexual assault, and also to obtain preliminary data regarding efficacy. Although this study was originally planned to include 29 subjects, political pressures led to the closing of the study before it could be finished, at which time only six subjects had been treated. Preliminary results from those six subjects are presented here. We found that low doses of MDMA (between 50 and 75 mg) were both psychologically and physiologically safe for all the subjects. Future studies in larger samples and using larger doses are needed in order to further clarify the safety and efficacy of MDMA in the clinical setting in subjects with PTSD.

  16. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
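    Heckman's classical (non-robust) two-stage estimator discussed in this record is straightforward to sketch: a probit first stage for the selection equation, then OLS on the selected sample augmented with the inverse Mills ratio. The simulation below is illustrative; all parameter values are made up, and an exclusion restriction (w appears only in the selection equation) is assumed:

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(0)
    n = 20000

    # Selection equation: s = 1 if 0.5 + 1.0*w + u > 0, with u ~ N(0,1)
    w = rng.normal(size=n)
    u = rng.normal(size=n)
    s = (0.5 + 1.0 * w + u) > 0

    # Outcome equation, observed only when s: y = 1 + 2*x + e, corr(u, e) = 0.7
    x = rng.normal(size=n)
    e = 0.7 * u + np.sqrt(1 - 0.7**2) * rng.normal(size=n)
    y = 1.0 + 2.0 * x + e

    # Stage 1: probit MLE for P(s = 1 | w)
    W = np.column_stack([np.ones(n), w])
    def negll(g):
        xb = W @ g
        return -(stats.norm.logcdf(xb)[s].sum() + stats.norm.logcdf(-xb)[~s].sum())
    g_hat = optimize.minimize(negll, np.zeros(2), method="BFGS").x

    # Stage 2: OLS of y on [1, x, inverse Mills ratio] over the selected sample
    xb_sel = (W @ g_hat)[s]
    imr = stats.norm.pdf(xb_sel) / stats.norm.cdf(xb_sel)
    X2 = np.column_stack([np.ones(s.sum()), x[s], imr])
    beta_hat, *_ = np.linalg.lstsq(X2, y[s], rcond=None)
    print(beta_hat)  # ≈ [1.0, 2.0, 0.7]; the IMR coefficient recovers rho*sigma
    ```

    Dropping the IMR column reproduces the selection bias the abstract refers to; the robustified version proposed by the authors replaces these classical stages with bounded-influence counterparts.
    
    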

  17. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  18. A rheo-optical apparatus for real time kinetic studies on shear-induced alignment of self-assembled soft matter with small sample volumes

    Science.gov (United States)

    Laiho, Ari; Ikkala, Olli

    2007-01-01

    In soft materials, self-assembled nanoscale structures can enable new functionalities, but a general problem is to align such local structures towards monodomain overall order. To achieve shear alignment in a controlled manner, a novel type of rheo-optical apparatus has been developed here that requires only small sample volumes and allows in situ monitoring of the alignment process during shear. Both the amplitude and orientation angles of low-level linear birefringence and dichroism are measured while the sample is subjected to large-amplitude oscillatory shear flow. The apparatus is based on a commercial rheometer on which we have constructed a flow cell consisting of two quartz teeth. The lower tooth can be set in oscillatory motion, whereas the upper one is connected to the force transducers of the rheometer. A custom-made cylindrical oven allows operation of the flow cell at elevated temperatures up to 200 °C. Only a small sample volume is needed (from 9 to 25 mm³), which makes the apparatus suitable especially for studying new materials that are usually obtainable only in small quantities. Using this apparatus, the flow alignment kinetics of a lamellar polystyrene-b-polyisoprene diblock copolymer is studied during shear under two different conditions, which lead to parallel and perpendicular alignment of the lamellae. The open device geometry even allows combined in situ optical/X-ray characterization of the alignment process with small-angle X-ray scattering, using concepts shown by Polushkin et al. [Macromolecules 36, 1421 (2003)].

  19. Triacylglycerol Analysis in Human Milk and Other Mammalian Species: Small-Scale Sample Preparation, Characterization, and Statistical Classification Using HPLC-ELSD Profiles.

    Science.gov (United States)

    Ten-Doménech, Isabel; Beltrán-Iturat, Eduardo; Herrero-Martínez, José Manuel; Sancho-Llopis, Juan Vicente; Simó-Alfonso, Ernesto Francisco

    2015-06-24

    In this work, a method for the separation of triacylglycerols (TAGs) present in human milk and from other mammalian species by reversed-phase high-performance liquid chromatography using a core-shell particle packed column with UV and evaporative light-scattering detectors is described. Under optimal conditions, a mobile phase containing acetonitrile/n-pentanol at 10 °C gave an excellent resolution among more than 50 TAG peaks. A small-scale method for fat extraction in these milks (particularly of interest for human milk samples) using minimal amounts of sample and reagents was also developed. The proposed extraction protocol and the traditional method were compared, giving similar results, with respect to the total fat and relative TAG contents. Finally, a statistical study based on linear discriminant analysis on the TAG composition of different types of milks (human, cow, sheep, and goat) was carried out to differentiate the samples according to their mammalian origin.

  20. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Science.gov (United States)

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  1. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    Science.gov (United States)

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with 2 negative binomial distributions, QNB is based on 4 independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken care of. In addition, unlike the DRME approach, which relies on the input control samples alone for estimating the background, QNB uses a more robust estimator of gene expression that combines information from both input and IP samples, which can largely improve testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.
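    The flavor of such count-based tests can be illustrated with a much-simplified moment-based sketch. This is not the QNB algorithm itself (which links four NB distributions through local regressions); it only uses the same NB2 parameterization, var = μ + φμ², and a delta-method z-test on the IP/input log-ratio between two conditions:

    ```python
    import numpy as np
    from scipy import stats

    def nb_moments(counts):
        """Method-of-moments NB estimates: mean mu and dispersion phi,
        under the NB2 parameterization var = mu + phi * mu**2."""
        mu = counts.mean()
        var = counts.var(ddof=1)
        phi = max((var - mu) / mu**2, 0.0)
        return mu, phi

    def log_ratio_and_var(ip, inp):
        """log(mean IP / mean input) and its delta-method variance."""
        mu_ip, phi_ip = nb_moments(ip)
        mu_in, phi_in = nb_moments(inp)
        v_ip = (mu_ip + phi_ip * mu_ip**2) / len(ip)   # variance of the IP mean
        v_in = (mu_in + phi_in * mu_in**2) / len(inp)  # variance of the input mean
        return np.log(mu_ip / mu_in), v_ip / mu_ip**2 + v_in / mu_in**2

    def diff_methylation_test(ip1, in1, ip2, in2):
        """z-test for a change in the IP/input ratio between two conditions."""
        r1, v1 = log_ratio_and_var(ip1, in1)
        r2, v2 = log_ratio_and_var(ip2, in2)
        z = (r1 - r2) / np.sqrt(v1 + v2)
        return z, 2 * stats.norm.sf(abs(z))
    ```

    Feeding in replicate count vectors for IP and input in each condition yields a z-statistic and two-sided p-value per gene; simulated NB counts with a genuine ratio change give small p-values, while unchanged genes do not.
    
    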

  2. Ochratoxin A in raisins and currants: basic extraction procedure used in two small marketing surveys of the occurrence and control of the heterogeneity of the toxins in samples.

    Science.gov (United States)

    Möller, T E; Nyberg, M

    2003-11-01

    A basic extraction procedure for the analysis of ochratoxin A (OTA) in currants and raisins is described, together with the occurrence of OTA and a check on the heterogeneity of the toxin in samples bought for two small marketing surveys (1999/2000 and 2001/02). Most samples in the surveys were divided into two subsamples that were individually prepared as slurries and analysed separately. The limit of quantification for the method was estimated as 0.1 microg kg(-1), and recoveries of 85, 90 and 115% were achieved in recovery experiments at 10, 5 and 0.1 microg kg(-1), respectively. Of all 118 subsamples analysed in the surveys, 96 (84%) contained ochratoxin A at levels above the quantification limit, and five samples (4%) contained more than the European Community limit of 10 microg kg(-1). The OTA concentrations found in the first survey varied widely. Big differences were often observed between the individual subsamples of an original sample, indicating a widely heterogeneous distribution of the toxin. Data from the repeatability test as well as recovery experiments on the same slurries showed that preparation of slurries as described here seemed to give a homogeneous and representative sample. The extraction with the basic sodium bicarbonate-methanol mixture used in the surveys gave similar or somewhat higher OTA values for some samples tested in comparison with a weak phosphoric acid water-methanol extraction mixture.

  3. Oxygen consumption during mineralization of organic compounds in water samples from a small sub-tropical reservoir (Brazil)

    Directory of Open Access Journals (Sweden)

    Cunha-Santino Marcela Bianchessi da

    2003-01-01

    Assays were carried out to evaluate the oxygen consumption resulting from the mineralization of different organic compounds: glucose, sucrose, starch, tannic acid, lysine and glycine. The compounds were added to 1 l of water sample from the Monjolinho Reservoir. Dissolved oxygen and dissolved organic carbon were monitored during 20 days and the results were fitted to a first-order kinetics model. During the 20 days of experiments, the oxygen consumption varied from 4.5 mg.l-1 (tannic acid) to 71.5 mg.l-1 (glucose). The highest deoxygenation rate (kD) was observed for the mineralization of tannic acid (0.321 day-1), followed by glycine, starch, lysine, sucrose and glucose (0.1004, 0.0504, 0.0486, 0.0251 and 0.0158 day-1, respectively). From theoretical calculations and oxygen and carbon concentrations we obtained the stoichiometry of the mineralization processes. Stoichiometric values varied from 0.17 (tannic acid) to 2.55 (sucrose).
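The first-order kinetics fit described above can be sketched as follows; the data points below are synthetic stand-ins (with parameters chosen to resemble the glucose-like curve), not the study's measurements.

```python
# Sketch of fitting oxygen-consumption data to a first-order kinetics model,
# OC(t) = OC_max * (1 - exp(-kD * t)). Data are synthetic illustrations.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, oc_max, kd):
    return oc_max * (1.0 - np.exp(-kd * t))

t_days = np.arange(0, 21)              # 20-day incubation
true_max, true_kd = 70.0, 0.16         # invented "glucose-like" parameters
rng = np.random.default_rng(1)
oc_obs = first_order(t_days, true_max, true_kd) + rng.normal(0, 1.0, t_days.size)

(oc_max_hat, kd_hat), _ = curve_fit(first_order, t_days, oc_obs, p0=[50.0, 0.1])
print(f"OC_max ~ {oc_max_hat:.1f} mg/L, kD ~ {kd_hat:.3f} per day")
```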

  4. Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods.

    Directory of Open Access Journals (Sweden)

    Nawar Shara

    Kidney and cardiovascular disease are widespread among populations with a high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing-at-random models and one non-missing-at-random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.
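The distinction between randomly and non-randomly missing data that drives this comparison can be illustrated with a toy simulation; the variable, sample size, and missingness probabilities below are invented for illustration and are not the SHS analysis.

```python
# Toy illustration of why the missingness mechanism matters, assuming a
# synthetic "renal function" variable; not the Strong Heart Study analysis.
import numpy as np

rng = np.random.default_rng(2)
true = rng.normal(70, 15, 10_000)      # invented eGFR-like values

# MCAR: drop 30% of values completely at random.
mcar = true.copy()
mcar[rng.random(true.size) < 0.3] = np.nan

# MNAR: subjects with low values are more likely to be missing,
# mimicking dropout driven by morbidity/mortality.
p_miss = np.clip(0.8 - (true - true.min()) / np.ptp(true), 0.05, 0.8)
mnar = true.copy()
mnar[rng.random(true.size) < p_miss] = np.nan

# Listwise deletion amounts to taking the complete-case mean.
print(f"true mean     : {true.mean():.1f}")
print(f"MCAR complete : {np.nanmean(mcar):.1f}  (nearly unbiased)")
print(f"MNAR complete : {np.nanmean(mnar):.1f}  (biased upward)")
```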

  5. Does balneotherapy with low radon concentration in water influence the endocrine system? A controlled non-randomized pilot study.

    Science.gov (United States)

    Nagy, Katalin; Berhés, István; Kovács, Tibor; Kávási, Norbert; Somlai, János; Bender, Tamás

    2009-08-01

    The radon bath is a well-established modality of balneotherapy for the management of degenerative musculoskeletal disorders. The present study was conducted to ascertain whether baths of relatively low (80 Bq/l) radon concentration have any influence on the functioning of the endocrine system. In the study, a non-randomized pilot study, 27 patients with degenerative musculoskeletal disorders received 30-min radon baths (of 31-32 degrees C temperature and 80 Bq/l average radon concentration) daily, for 15 days. Twenty-five patients with matching pathologies were subjected to balneotherapy according to the same protocol, using thermal water with negligible radon content (6 Bq/l). Serum thyroid stimulating hormone, prolactin, cortisol, adrenocorticotropic hormone, and dehydroepiandrosterone levels were measured before and after a balneotherapy course of 15 sessions. Comparison of the accumulated data using the Wilcoxon test did not reveal any significant difference between pre- and post-treatment values or between the two patient groups. It is noted that while the beneficial effects of balneotherapy with radon-containing water on degenerative disorders are widely known, only a few data have been published in the literature on its effect on endocrine functions. The present study failed to demonstrate any substantial effect of thermal water with relatively low radon content on the functioning of the endocrine system.

  6. Aneuploidy in immortalized human mesenchymal stem cells with non-random loss of chromosome 13 in culture.

    Science.gov (United States)

    Takeuchi, Masao; Takeuchi, Kikuko; Ozawa, Yutaka; Kohara, Akihiro; Mizusawa, Hiroshi

    2009-01-01

    Aneuploidy (an abnormal number of chromosomes) is commonly observed in most human cancer cells, highlighting the need to examine chromosomal instability in tumorigenesis. Previously, the immortalized human mesenchymal stem cell line UE6E7T-3 was shown to undergo a preferential loss of one copy of chromosome 13 after prolonged culture. Here, the loss of chromosome 13 was found to be caused by chromosome missegregation during mitosis, which involved unequal segregation, exclusion of the misaligned chromosome 13 on the metaphase plate, and trapping of chromosome 13 in the midbody region, as observed by fluorescence in situ hybridization. Near-diploid aneuploidy, not tetraploidy, was the direct result. The loss of chromosome 13 was non-random, and was detected by analysis of microsatellites and single nucleotide polymorphism-based loss of heterozygosity (LOH). Of the five microsatellite loci on chromosome 13, four loci showed microsatellite instability at an early stage in culture, and LOH was apparent at a late stage in culture. These results suggest that the microsatellite mutations cause changes in centromere integrity provoking loss of this chromosome in the UE6E7T-3 cell line. Thus, these results support the use of this cell line as a useful model for understanding the mechanism of aneuploid formation in cell cultures.

  7. Treatment of age-related subfoveal neovascular membranes by teletherapy. Results of a non-randomized study

    Energy Technology Data Exchange (ETDEWEB)

    Subasi, M.; Akmansu, M.; Or, M. [Gazi Univ., Ankara (Turkey). Medical Faculty

    1999-03-01

    This investigation was designed to determine whether low-dose radiation to the macular region could influence the natural course of age-related subfoveal neovascularization. Thirty-one patients with subfoveal membranes due to age-related macular degeneration (ARMD) were treated with 12 Gy of 6 MV X-rays, and 72 untreated patients served as a control group. Both groups were followed up. At six months of follow-up, visual acuity was maintained in 54.8% and improved in 25.8% of patients treated by radiotherapy. In the control group, visual acuity showed deterioration in 55.5%. There was a significant difference between the treated and untreated groups (p<0.01). Significant neovascular membrane regression or stabilization was recorded in 61.3% of treated patients at six months post-radiation, whereas the membranes in all 72 control patients showed progressive enlargement. This non-randomized study suggested that low doses of radiation may be an alternative treatment for ARMD without an immediate drop in visual acuity or significant radiation morbidity. (author)

  8. Small polaron hopping conduction in samples of ceramic La1.4Sr1.6Mn2O7.06

    International Nuclear Information System (INIS)

    Nakatsugawa, H.; Iguchi, E.; Jung, W.H.; Munakata, F.

    1999-01-01

    The ceramic sample of La1.4Sr1.6Mn2O7.06 exhibits the metal-insulator transition and a negative magnetoresistance in the vicinity of the Curie temperature (TC ∼ 100 K). The dc magnetic susceptibility between 100 K and 280 K is nearly constant and decreases gradually with increasing temperature above 280 K. The measurements of dc resistivity and the thermoelectric power indicate that small polaron hopping conduction takes place at T > 280 K. The spin ordering due to the two-dimensional d(x2-y2) state occurring at T > 280 K is directly related to the hopping conduction above 280 K, although the spin ordering due to the one-dimensional d(3z2-r2) state takes place at T > TC. The two-dimensional d(x2-y2) state extending within the MnO2 sheets starts to narrow and leads to carrier localisation at 280 K. The effective number of holes in this sample estimated from the thermoelectric power is considerably smaller than the nominal value. This indicates that the small polaron hopping conduction takes place predominantly within the in-plane MnO2 sheets. A discussion is given of the experimental results of the ceramic sample of La2/3Ca1/3MnO2.98. Copyright (1999) CSIRO Australia

  9. [Can venous iron and tranexamic acid reduce the transfusion need? Report on a non randomized, case control study].

    Science.gov (United States)

    Essola, L; Kouégnigan Rérambiah, L; Obame, R; Issembè, H; Sima Zué, A

    2017-06-01

    To evaluate whether the combination of injectable iron and tranexamic acid allows significant transfusion savings in myomectomies and hysterectomies. This was a prospective, non-randomized study conducted over 8 months (from January 2013 to August 2013). Included were patients undergoing hysterectomy or myomectomy who had a hemoglobin level greater than or equal to 8 g/dl and less than 12 g/dl. Two groups were compared: group A, consisting of patients for whom packed red cells were ordered, and group B, in which patients received intravenous iron preoperatively and tranexamic acid perioperatively. The pre- and postoperative hemoglobin levels, the average number of blood units per patient, and the estimated blood loss were compared. The transfusion economy was evaluated. During this period, 87 patients with a mean age of 40±9 years (range: 23-70 years) were included according to our criteria: 44 patients in group A and 43 patients in group B. The initial mean hemoglobin in both groups was 9.1±0.7 g/dl. In group B, after iron administration, the mean hemoglobin was 11.3±0.7 g/dl. The average number of red blood cell units received intraoperatively per patient in group A was 1.54±0.51. The estimated blood loss was significantly greater (P=0.0002) in group A (571.6±237.1 ml) than in group B (213.7±131.7 ml). No transfusion was performed in group B. The combination of intravenous iron and tranexamic acid resulted in a reduction of transfusion requirements in our setting. It could be integrated into the strategy for sparing blood transfusion in scheduled surgery with hemorrhagic risks. Copyright © 2017. Published by Elsevier SAS.

  10. A school intervention for mental health literacy in adolescents: effects of a non-randomized cluster controlled trial

    Science.gov (United States)

    2013-01-01

    Background “Mental health for everyone” is a school program for mental health literacy and prevention aimed at secondary schools (13–15 yrs). The main aim was to investigate whether mental health literacy could be improved by a 3-day universal education programme by: a) improving naming of symptom profiles of mental disorder, b) reducing prejudiced beliefs, and c) improving knowledge about where to seek help for mental health problems. A secondary aim was to investigate whether adolescent sex and age influenced the above-mentioned variables. A third aim was to investigate whether prejudiced beliefs influenced knowledge about available help. Method This non-randomized cluster controlled trial included 1070 adolescents (53.9% boys, mean age 14 yrs) from three schools in a Norwegian town. One school (n = 520) received the intervention, and two schools (n = 550) formed the control group. Pre-test and follow-up were three months apart. Linear mixed models and generalized estimating equations models were employed for analysis. Results Mental health literacy improved contingent on the intervention, and there was a shift towards suggesting primary health care as a place to seek help. Those with more prejudiced beliefs did not suggest places to seek help for mental health problems. Generally, girls and older adolescents recognized symptom profiles better and had lower levels of prejudiced beliefs. Conclusions A low-cost general school program may improve mental health literacy in adolescents. Gender-specific programs and attention to the age and maturity of the students should be considered when mental health literacy programmes are designed and tried out. Prejudice should be addressed before imparting information about mental health issues. PMID:24053381

  11. Exceptional diversity, non-random distribution, and rapid evolution of retroelements in the B73 maize genome.

    Directory of Open Access Journals (Sweden)

    Regina S Baucom

    2009-11-01

    Recent comprehensive sequence analysis of the maize genome now permits detailed discovery and description of all transposable elements (TEs) in this complex nuclear environment. Reiteratively optimized structural and homology criteria were used in the computer-assisted search for retroelements, TEs that transpose by reverse transcription of an RNA intermediate, with the final results verified by manual inspection. Retroelements were found to occupy the majority (>75%) of the nuclear genome in maize inbred B73. Unprecedented genetic diversity was discovered in the long terminal repeat (LTR) retrotransposon class of retroelements, with >400 families (>350 newly discovered) contributing >31,000 intact elements. The two other classes of retroelements, SINEs (four families) and LINEs (at least 30 families), were observed to contribute 1,991 and approximately 35,000 copies, respectively, or a combined approximately 1% of the B73 nuclear genome. With regard to fully intact elements, the median copy number for all retroelement families in maize was 2, because >250 LTR retrotransposon families contained only one or two intact members that could be detected in the B73 draft sequence. The majority, perhaps all, of the investigated retroelement families exhibited non-random dispersal across the maize genome, with LINEs, SINEs, and many low-copy-number LTR retrotransposons exhibiting a bias for accumulation in gene-rich regions. In contrast, most (but not all) medium- and high-copy-number LTR retrotransposons were found to preferentially accumulate in gene-poor regions like pericentromeric heterochromatin, while a few high-copy-number families exhibited the opposite bias. Regions of the genome with the highest LTR retrotransposon density contained the lowest LTR retrotransposon diversity. These results indicate that the maize genome provides a great number of different niches for the survival and procreation of a great variety of retroelements that have evolved to

  12. Telomere disruption results in non-random formation of de novo dicentric chromosomes involving acrocentric human chromosomes.

    Directory of Open Access Journals (Sweden)

    Kaitlin M Stimpson

    2010-08-01

    Genome rearrangement often produces chromosomes with two centromeres (dicentrics) that are inherently unstable because of bridge formation and breakage during cell division. However, mammalian dicentrics, and particularly those in humans, can be quite stable, usually because one centromere is functionally silenced. Molecular mechanisms of centromere inactivation are poorly understood since there are few systems to experimentally create dicentric human chromosomes. Here, we describe a human cell culture model that enriches for de novo dicentrics. We demonstrate that transient disruption of human telomere structure non-randomly produces dicentric fusions involving acrocentric chromosomes. The induced dicentrics vary in structure near fusion breakpoints and, like naturally occurring dicentrics, exhibit various inter-centromeric distances. Many functional dicentrics persist for months after formation. Even those with distantly spaced centromeres remain functionally dicentric for 20 cell generations. Other dicentrics within the population reflect centromere inactivation. In some cases, centromere inactivation occurs by an apparently epigenetic mechanism. In other dicentrics, the size of the alpha-satellite DNA array associated with CENP-A is reduced compared to the same array before dicentric formation. Extra-chromosomal fragments that contained CENP-A often appear in the same cells as dicentrics. Some of these fragments are derived from the same alpha-satellite DNA array as inactivated centromeres. Our results indicate that dicentric human chromosomes undergo alternative fates after formation. Many retain two active centromeres and are stable through multiple cell divisions. Others undergo centromere inactivation. This event occurs within a broad temporal window and can involve deletion of chromatin that marks the locus as a site for CENP-A maintenance/replenishment.

  13. High-throughput analysis using non-depletive SPME: challenges and applications to the determination of free and total concentrations in small sample volumes.

    Science.gov (United States)

    Boyacı, Ezel; Bojko, Barbara; Reyes-Garcés, Nathaly; Poole, Justen J; Gómez-Ríos, Germán Augusto; Teixeira, Alexandre; Nicol, Beate; Pawliszyn, Janusz

    2018-01-18

    In vitro high-throughput non-depletive quantitation of chemicals in biofluids is of growing interest in many areas. Some of the challenges facing researchers include the limited volume of biofluids, rapid and high-throughput sampling requirements, and the lack of reliable methods. Coupled to the above, growing interest in the monitoring of kinetics and dynamics of miniaturized biosystems has spurred the demand for development of novel and revolutionary methodologies for analysis of biofluids. The applicability of solid-phase microextraction (SPME) is investigated as a potential technology to fulfill the aforementioned requirements. As analytes with sufficient diversity in their physicochemical features, nicotine, N,N-Diethyl-meta-toluamide, and diclofenac were selected as test compounds for the study. The objective was to develop methodologies that would allow repeated non-depletive sampling from 96-well plates, using 100 µL of sample. Initially, thin film-SPME was investigated. Results revealed substantial depletion and consequent disruption in the system. Therefore, new ultra-thin coated fibers were developed. The applicability of this device to the described sampling scenario was tested by determining the protein binding of the analytes. Results showed good agreement with rapid equilibrium dialysis. The presented method allows high-throughput analysis using small volumes, enabling fast reliable free and total concentration determinations without disruption of system equilibrium.

  14. Information in small neuronal ensemble activity in the hippocampal CA1 during delayed non-matching to sample performance in rats

    Directory of Open Access Journals (Sweden)

    Takahashi Susumu

    2009-09-01

    Abstract Background: The matrix-like organization of the hippocampus, with its several inputs and outputs, has given rise to several theories related to hippocampal information processing. Single-cell electrophysiological studies and studies of lesions or genetically altered animals using recognition memory tasks, such as delayed non-matching-to-sample (DNMS) tasks, support the theories. However, a complete understanding of hippocampal function necessitates knowledge of the encoding of information by multiple neurons in a single trial. The role of neuronal ensembles in the hippocampal CA1 during a DNMS task was assessed quantitatively in this study using multi-neuronal recordings and an artificial neural network classifier as a decoder. Results: The activity of small neuronal ensembles (6-18 cells) over brief time intervals (2-50 ms) contains accurate information specifically related to the matching/non-matching of continuously presented stimuli (stimulus comparison). The accuracy of the combination of neurons pooled over all the ensembles was markedly lower than that of the ensembles over all examined time intervals. Conclusion: The results show that the spatiotemporal patterns of spiking activity among cells in a small neuronal ensemble contain much information that is specifically useful for stimulus comparison. Small neuronal networks in the hippocampal CA1 might therefore act as a comparator during recognition memory tasks.

  15. An Improved Metabolism Grey Model for Predicting Small Samples with a Singular Datum and Its Application to Sulfur Dioxide Emissions in China

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2016-01-01

    This study proposes an improved metabolism grey model [IMGM(1,1)] to predict small samples with a singular datum, which is a common phenomenon in daily economic data. This new model combines the fitting advantage of the conventional GM(1,1) in small samples and the additional advantages of the MGM(1,1) with new real-time data, while overcoming the limitations of both the conventional GM(1,1) and MGM(1,1), whose predicted results are vulnerable to any singular datum. Thus, this model can be classified as an improved grey prediction model. Its improvements are illustrated through a case study of sulfur dioxide emissions in China from 2007 to 2013, with a singular datum in 2011. Some features of this model are presented based on the error analysis in the case study. Results suggest that if action is not taken immediately, sulfur dioxide emissions in 2016 will surpass the standard level required by the Twelfth Five-Year Plan proposed by the China State Council.
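For readers unfamiliar with grey models, a minimal sketch of the conventional GM(1,1) that IMGM(1,1) extends is given below; the emission series is illustrative, not the paper's data.

```python
# Minimal sketch of the conventional GM(1,1) grey model that IMGM(1,1) builds
# on: fit on a short series, then forecast a few steps. Data are illustrative.
import numpy as np

def gm11(x0, steps=1):
    """Fit GM(1,1) to series x0; return in-sample fit plus forecasts."""
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

so2 = np.array([24.7, 23.2, 22.2, 21.8, 22.2, 21.2, 20.4])  # made-up series
fit = gm11(so2, steps=2)
print("fitted + 2-step forecast:", fit.round(2))
```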

  16. Comparison of Time-of-flight and Multicollector ICP Mass Spectrometers for Measuring Actinides in Small Samples using single shot Laser Ablation

    International Nuclear Information System (INIS)

    R.S. Houk; D.B. Aeschliman; S.J. Bajic; D. Baldwin

    2005-01-01

    The objective of these experiments is to evaluate the performance of two types of ICP-MS device for measurement of actinide isotopes by laser ablation (LA) ICP-MS. The key advantage of ICP-MS compared to monitoring of radioactive decay is that the element need not decay during the measurement time. Hence ICP-MS is much faster for long-lived radionuclides. The LA process yields a transient signal. When spatially resolved analysis is required for small samples, the laser ablation sample pulse lasts only ∼10 seconds. It is difficult to measure signals at several isotopes with analyzers that are scanned for such a short sample transient. In this work, a time-of-flight (TOF) ICP-MS device, the GBC Optimass 8000 (Figure 1) is one instrument used. Strictly speaking, ions at different m/z values are not measured simultaneously in TOF. However, they are measured in very rapid sequence with little or no compromise between the number of m/z values monitored and the performance. Ions can be measured throughout the m/z range in single sample transients by TOF. The other ICP-MS instrument used is a magnetic sector multicollector MS, the NU Plasma 1700 (Figure 2). Up to 8 adjacent m/z values can be monitored at one setting of the magnetic field and accelerating voltage. Three of these m/z values can be measured with an electron multiplier. This device is usually used for high precision isotope ratio measurements with the Faraday cup detectors. The electron multipliers have much higher sensitivity. In our experience with the scanning magnetic sector instrument in Ames, these devices have the highest sensitivity and lowest background of any ICP-MS device. The ability to monitor several ions simultaneously, or nearly so, should make these devices valuable for the intended application: measurement of actinide isotopes at low concentrations in very small samples for nonproliferation purposes. The primary sample analyzed was an urban dust pellet reference material, NIST 1648. The

  17. Big news in small samples

    NARCIS (Netherlands)

    P.C. Schotman (Peter); S. Straetmans; C.G. de Vries (Casper)

    1997-01-01

    Univariate time series regressions of the forex return on the forward premium generate mostly negative slope coefficients. Simple and refined panel estimation techniques yield slope estimates that are much closer to unity. We explain the two apparently opposing results by allowing for

  18. Small Boat Bottomfish Sampling Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fishing operations that focus on targeting bottomfish (mostly juvenile opakapaka) that are independent of a larger research vessel, i.e. the Oscar Elton Sette.

  19. Use of a 137Cs re-sampling technique to investigate temporal changes in soil erosion and sediment mobilisation for a small forested catchment in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus

    2014-01-01

    Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly 137Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954–1998 with that for the period 1999–2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the 137Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure

  20. Miniaturizing 3D assay for high-throughput drug and genetic screens for small patient-derived tumor samples (Conference Presentation)

    Science.gov (United States)

    Rotem, Asaf; Garraway, Levi; Su, Mei-Ju; Basu, Anindita; Regev, Aviv; Struhl, Kevin

    2017-02-01

    Three-dimensional growth conditions reflect the natural environment of cancer cells and are crucial for drug screens. We developed a 3D assay for cellular transformation that involves growth in low attachment (GILA) conditions and is strongly correlated with the 50-year-old benchmark assay, soft agar. Using GILA, we performed high-throughput screens for drugs and genes that selectively inhibit or increase transformation, but not proliferation. This phenotypic approach is complementary to our genetic approach, which utilizes single-cell RNA-sequencing of a patient sample to identify putative oncogenes that confer sensitivity to drugs designed to specifically inhibit the identified oncoprotein. Currently, we are dealing with a big challenge in our field: the limited number of cells that can be extracted from a biopsy. Small patient-derived samples are hard to test in traditional multiwell plates, so it is helpful to miniaturize the culture area and the experimental system. We managed to design a suitable microfluidic device for a limited number of cells and perform the assay using image analysis. We aim to test drugs on tumor cells outside of the patient's body and recommend the ideal treatment tailored to the individual. This device will help minimize biopsy-sampling volumes and minimize interventions in the patient's tumor.

  1. High-resolution characterization of sequence signatures due to non-random cleavage of cell-free DNA.

    Science.gov (United States)

    Chandrananda, Dineika; Thorne, Natalie P; Bahlo, Melanie

    2015-06-17

    High-throughput sequencing of cell-free DNA fragments found in human plasma has been used to non-invasively detect fetal aneuploidy, monitor organ transplants and investigate tumor DNA. However, many biological properties of this extracellular genetic material remain unknown. Research that further characterizes circulating DNA could substantially increase its diagnostic value by allowing the application of more sophisticated bioinformatics tools that lead to an improved signal-to-noise ratio in the sequencing data. In this study, we investigate various features of cell-free DNA in plasma using deep-sequencing data from two pregnant women (>70X, >50X) and compare them with matched cellular DNA. We utilize a descriptive approach to examine how the biological cleavage of cell-free DNA affects different sequence signatures such as fragment lengths, sequence motifs at fragment ends and the distribution of cleavage sites along the genome. We show that the size distributions of these cell-free DNA molecules are dependent on their autosomal and mitochondrial origin as well as the genomic location within chromosomes. DNA mapping to particular microsatellites and alpha repeat elements displays unique size signatures. We show how cell-free fragments occur in clusters along the genome, localizing to nucleosomal arrays and being preferentially cleaved at linker regions, by correlating the mapping locations of these fragments with ENCODE annotation of chromatin organization. Our work further demonstrates that cell-free autosomal DNA cleavage is sequence dependent. The region spanning up to 10 positions on either side of the DNA cleavage site shows a consistent pattern of preference for specific nucleotides. This sequence motif is present in cleavage sites localized to nucleosomal cores and linker regions but is absent in nucleosome-free mitochondrial DNA. These background signals in cell-free DNA sequencing data stem from the non-random biological cleavage of these fragments. This
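One of the simpler sequence signatures discussed above, the nucleotide motif at fragment ends, can be tallied in a few lines; the fragment sequences below are made up for illustration, whereas real input would be aligned plasma-sequencing reads.

```python
# Illustrative tally of the motif at each fragment's 5' end (first 4 bases).
# The fragment sequences are invented; real data would come from aligned reads.
from collections import Counter

fragments = [
    "CCCATGGTCA", "CCCTAGGATC", "CCCAGTTCAG",
    "TTTGACCAGT", "CCCGGATACC", "CCCTTAGGCA",
]

end_motifs = Counter(frag[:4] for frag in fragments)
for motif, n in end_motifs.most_common():
    print(motif, n)
```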

  2. Improvement of functional constipation with kiwifruit intake in a Mediterranean patient population: An open, non-randomized pilot study

    Directory of Open Access Journals (Sweden)

    Oriol Cunillera

    2015-07-01

    Introduction: Kiwifruit consumption has been shown to improve functional constipation in a healthy elderly population, according to studies in New Zealand and China. The aim of this study was to evaluate the effect of kiwifruit intake on functional constipation in a Mediterranean patient population characterized by its distinctive nutritional habits. Material and Methods: An open, non-controlled and non-randomized longitudinal study was conducted in 46 patients with constipation (Rome III criteria). Patients were monitored for five weeks: no kiwifruit in weeks 1 and 2, and three kiwifruit per day (green kiwifruit, Actinidia deliciosa var. Hayward) in weeks 3-5. Bristol Scale score, volume of stools, and ease of defecation were self-reported daily. The evolution of the categorical variables was tested using the Bhapkar test; functional data methodology was used for continuous variables, and Generalized Estimating Equation (GEE) models were adjusted. Results: The percentage of patients with ≥3 stools per week increased from 82.61% (95% CI: 69–91.2) at week 1 to 97.78% (95% CI: 87.4–99.9) at week 2 of kiwifruit intake, with 76.09% (95% CI: 61.9–86.2) responding during the first week. The reporting of stable ideal stools increased from 17.39% (95% CI: 8.8–31) at week 2 to 33.33% (95% CI: 21.3–48) at week 5. According to the GEE models, the number of depositions increased significantly (p-values<0.001), by 0.398 daily units in the first week of intake and up to 0.593 daily units at week 5; significant improvements in ease of evacuation and volume of evacuation were found from the first week of intake (all p-values<0.001). Conclusions: The intake of three kiwifruits per day significantly improves the quality of evacuation (number of depositions, volume, consistency and ease) in a Mediterranean patient population suffering from functional constipation.

  3. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
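The shift and crossings rules can be sketched directly. The thresholds below follow one common formulation (an assumption, not necessarily the exact cut-offs of the simulation study): a shift signal when the longest run of points on one side of the median exceeds round(log2(n)) + 3, and a crossings signal when the series crosses the median fewer times than the 5th percentile of a binomial(n − 1, 0.5) distribution.

```python
import math
from statistics import median

def run_chart_signals(values):
    """Flag non-random variation relative to the median using the shift
    (longest run) and crossings rules. Points falling exactly on the
    median are dropped, as is conventional for run charts."""
    med = median(values)
    sides = [1 if v > med else -1 for v in values if v != med]
    n = len(sides)
    longest = run = 1
    crossings = 0
    for prev, cur in zip(sides, sides[1:]):
        if cur == prev:
            run += 1
            longest = max(longest, run)
        else:
            run = 1
            crossings += 1
    run_limit = round(math.log2(n)) + 3  # assumed shift threshold
    # smallest k with P(X <= k) >= 0.05 for X ~ Binomial(n - 1, 0.5)
    cum, k = 0.0, 0
    while True:
        cum += math.comb(n - 1, k) * 0.5 ** (n - 1)
        if cum >= 0.05:
            break
        k += 1
    return {"shift": longest > run_limit, "crossings": crossings < k}
```

A stable process that oscillates around the median triggers neither rule, while a series whose centre jumps halfway through triggers both, which is the behaviour the simulation study evaluates.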

  4. Context matters: volunteer bias, small sample size, and the value of comparison groups in the assessment of research-based undergraduate introductory biology lab courses.

    Science.gov (United States)

    Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J

    2013-01-01

The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, the lack of a comparison group, and small sample sizes can be limitations of a study design that affect the interpretation of the effectiveness of a course.

  5. Context Matters: Volunteer Bias, Small Sample Size, and the Value of Comparison Groups in the Assessment of Research-Based Undergraduate Introductory Biology Lab Courses

    Directory of Open Access Journals (Sweden)

    Sara E. Brownell

    2013-08-01

Full Text Available The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, the lack of a comparison group, and small sample sizes can be limitations of a study design that affect the interpretation of the effectiveness of a course.

  6. A technique of evaluating most probable stochastic variables from a small number of samples and their accuracies and degrees of confidence

    Energy Technology Data Exchange (ETDEWEB)

    Katoh, K. [Ibaraki Pref. Univ. Health Sci. (Japan)]

    1997-12-31

    A problem of estimating the stochastic characteristics of a population from a small number of samples is solved as an inverse problem, from the viewpoint of information theory and with Bayesian statistics. For both the Poisson process and the Bernoulli process, the most probable values of the characteristics of the mother population, and their accuracies and degrees of confidence, are successfully obtained. Mathematical expressions are given for the general case, where a limited amount of information and/or knowledge of the stochastic characteristics is available, and for the special case where no a priori information or knowledge is available. Mathematical properties of the solutions obtained and their practical application to problems in radiation measurement are also discussed.
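For the Bernoulli-process case, the flavour of such an estimate can be sketched with a flat prior, under which the posterior for the success probability after k successes in n trials is Beta(k + 1, n − k + 1); the posterior mode plays the role of the "most probable value" and a credible interval quantifies the degree of confidence. This is a minimal stdlib sketch of that standard Bayesian machinery, not the paper's exact information-theoretic treatment:

```python
def bernoulli_posterior(successes, trials, grid=10001):
    """Posterior over p for k successes in n Bernoulli trials under a
    flat prior: Beta(k + 1, n - k + 1). Returns the posterior mode and
    a central 95% credible interval computed on an evenly spaced grid."""
    k, n = successes, trials
    ps = [i / (grid - 1) for i in range(grid)]
    # unnormalised Beta(k + 1, n - k + 1) density at each grid point
    dens = [p**k * (1 - p)**(n - k) for p in ps]
    total = sum(dens)
    cdf, lo, hi = 0.0, None, None
    for p, d in zip(ps, dens):
        cdf += d / total
        if lo is None and cdf >= 0.025:
            lo = p
        if hi is None and cdf >= 0.975:
            hi = p
    mode = k / n  # argmax of p**k * (1 - p)**(n - k)
    return mode, (lo, hi)
```

Even with zero observed successes the interval remains informative (its upper bound is roughly 3/n, the familiar "rule of three"), which illustrates how small-sample Bayesian estimates stay well defined where frequentist point estimates degenerate.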

  7. Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals.

    Science.gov (United States)

    Carlisle, J B

    2017-08-01

    Randomised, controlled trials have been retracted after publication because of data fabrication and inadequate ethical approval. Fabricated data have included baseline variables, for instance, age, height or weight. Statistical tests can determine the probability of the distribution of means, given their standard deviation and the number of participants in each group. Randomised, controlled trials have been retracted after the data distributions have been calculated as improbable. Most retracted trials have been written by anaesthetists and published by specialist anaesthetic journals. I wanted to explore whether the distribution of baseline data in trials was consistent with the expected distribution. I wanted to determine whether trials retracted after publication had distributions different to trials that have not been retracted. I wanted to determine whether data distributions in trials published in specialist anaesthetic journals have been different to distributions in non-specialist medical journals. I analysed the distribution of 72,261 means of 29,789 variables in 5087 randomised, controlled trials published in eight journals between January 2000 and December 2015: Anaesthesia (399); Anesthesia and Analgesia (1288); Anesthesiology (541); British Journal of Anaesthesia (618); Canadian Journal of Anesthesia (384); European Journal of Anaesthesiology (404); Journal of the American Medical Association (518) and New England Journal of Medicine (935). I chose these journals as I had electronic access to the full text. Trial p values were distorted by an excess of baseline means that were similar and an excess that were dissimilar: 763/5015 (15.2%) trials that had not been retracted from publication had p values that were within 0.05 of 0 or 1 (expected 10%), that is, a 5.2% excess, p = 1.2 × 10⁻⁷. The p values of 31/72 (43%) trials that had been retracted after publication were within 0.05 of 0 or 1, a rate different to that for unretracted trials, p = 1.03 × 10⁻¹⁰. The difference between the distributions of these two subgroups was confirmed by comparison of their overall distributions, p = 5.3 × 10⁻¹⁵. Each journal exhibited the same abnormal distribution of baseline means. There was no difference in distributions of baseline means for 1453 trials in non-anaesthetic journals and 3634 trials in anaesthetic journals, p = 0.30. The rate of retractions from JAMA and NEJM, 6/1453 or 1 in 242, was one-quarter the rate from the six anaesthetic journals, 66/3634 or 1 in 55, relative risk (99%CI) 0.23 (0.08-0.68), p = 0.00022. A probability threshold of 1 in 10,000 identified 8/72 (11%) retracted trials (7 by Fujii et al.) and 82/5015 (1.6%) unretracted trials. Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10¹⁵ (equivalent to one drop of water in 20,000 Olympic-sized swimming pools). A probability threshold of 1 in 100 for two or more trials by the same author identified three authors of retracted trials (Boldt, Fujii and Reuben) and 21 first or corresponding authors of 65 unretracted trials. Fraud, unintentional error, correlation, stratified allocation and poor methodology might have contributed to the excess of randomised, controlled trials with similar or dissimilar means, a pattern that was common to all the surveyed journals. It is likely that this work will lead to the identification, correction and retraction of hitherto unretracted randomised, controlled trials. © 2017 The Association of Anaesthetists of Great Britain and Ireland.
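The logic Carlisle exploits is that, under genuine random allocation, the p value comparing baseline means between trial arms is uniformly distributed on (0, 1), so roughly 10% of honest trials should have p within 0.05 of 0 or 1. A minimal sketch of that idea, using a normal approximation to the two-sample t test on summary data (a simplification adequate for large trials, not Carlisle's exact procedure):

```python
import math
import random

def baseline_p(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sided p value for a difference in baseline means, computed
    from summary statistics with a normal approximation to the t test
    (an assumption of this sketch, reasonable for large groups)."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    z = (mean1 - mean2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Under genuine randomisation, baseline p values are ~Uniform(0, 1):
random.seed(0)
ps = []
for _ in range(2000):
    a = [random.gauss(50, 10) for _ in range(100)]
    b = [random.gauss(50, 10) for _ in range(100)]
    m1, m2 = sum(a) / 100, sum(b) / 100
    s1 = math.sqrt(sum((x - m1) ** 2 for x in a) / 99)
    s2 = math.sqrt(sum((x - m2) ** 2 for x in b) / 99)
    ps.append(baseline_p(m1, s1, 100, m2, s2, 100))
extreme = sum(p < 0.05 or p > 0.95 for p in ps) / len(ps)  # expect ~0.10
```

A collection of trials whose `extreme` fraction departs materially from 10%, as in the 15.2% observed above, is what flags the data as improbable.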

  8. Decomposition and forecasting analysis of China's energy efficiency: An application of three-dimensional decomposition and small-sample hybrid models

    International Nuclear Information System (INIS)

    Meng, Ming; Shang, Wei; Zhao, Xiaoli; Niu, Dongxiao; Li, Wei

    2015-01-01

    The coordinated actions of the central and the provincial governments are important in improving China's energy efficiency. This paper uses a three-dimensional decomposition model to measure the contribution of each province in improving the country's energy efficiency and a small-sample hybrid model to forecast this contribution. Empirical analysis draws the following conclusions, which are useful for the central government in adjusting its provincial energy-related policies. (a) There are two important areas for the Chinese government to improve its energy efficiency: adjusting the provincial economic structure and controlling the number of small-scale private industrial enterprises; (b) except for a few outliers, the energy efficiency growth rates of the northern provinces are higher than those of the southern provinces, and provinces with high growth rates tend to converge geographically; (c) with regard to the energy sustainable development level, Beijing, Tianjin, Jiangxi, and Shaanxi are the best performers and Heilongjiang, Shanxi, Shanghai, and Guizhou are the worst performers; (d) by 2020, China's energy efficiency may reach 24.75 thousand yuan per ton of standard coal; and (e) three development scenarios are designed to forecast China's energy consumption in 2012–2020. - Highlights: • Decomposition and forecasting models are used to analyze China's energy efficiency. • China should focus on the small industrial enterprises and local protectionism. • The energy sustainable development level of each province is evaluated. • Geographic distribution characteristics of energy efficiency changes are revealed. • Future energy efficiency and energy consumption are forecasted.

  9. A semi-nested real-time PCR method to detect low chimerism percentage in small quantity of hematopoietic stem cell transplant DNA samples.

    Science.gov (United States)

    Aloisio, Michelangelo; Bortot, Barbara; Gandin, Ilaria; Severini, Giovanni Maria; Athanasakis, Emmanouil

    2017-02-01

    Chimerism status evaluation of post-allogeneic hematopoietic stem cell transplantation samples is essential to predict post-transplant relapse. The most commonly used technique capable of detecting small increments of chimerism is quantitative real-time PCR. Although this method is already used in several laboratories, previously described protocols often lack sensitivity, and the amount of DNA required for each chimerism analysis is too high. In the present study, we compared a novel semi-nested allele-specific real-time PCR (sNAS-qPCR) protocol with our in-house standard allele-specific real-time PCR (gAS-qPCR) protocol. We selected two genetic markers and analyzed the technical parameters (slope, y-intercept, R², and standard deviation) used to determine the performance of the two protocols. The sNAS-qPCR protocol showed better sensitivity and precision. Moreover, the sNAS-qPCR protocol requires, as input, only 10 ng of DNA, at least 10-fold less than the gAS-qPCR protocols described in the literature. Finally, the proposed sNAS-qPCR protocol could prove very useful for performing chimerism analysis with a small amount of DNA, as in the case of blood cell subsets.
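The technical parameters the study compares (slope, y-intercept, R²) come from a standard curve of Ct against log10 of input DNA. A generic sketch of how they are computed, together with the implied amplification efficiency (the efficiency formula is standard qPCR practice, not something specific to this paper; the example dilution series is invented):

```python
import math

def standard_curve(quantities, cts):
    """Least-squares fit of Ct against log10(input DNA) for a qPCR
    standard curve; returns slope, y-intercept, R^2 and the implied
    amplification efficiency E = 10**(-1/slope) - 1. A slope near
    -3.32 corresponds to ~100% efficiency, i.e. doubling each cycle."""
    xs = [math.log10(q) for q in quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, cts))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, cts))
    ss_tot = sum((y - my) ** 2 for y in cts)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2, 10 ** (-1 / slope) - 1
```

Comparing these fitted parameters (and the standard deviation of replicates) across protocols is how sensitivity and precision claims like those above are substantiated.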

  10. Design and Baseline Findings of a Multi-site Non-randomized Evaluation of the Effect of a Health Programme on Microfinance Clients in India

    OpenAIRE

    Saha, Somen

    2013-01-01

    Microfinance is the provision of financial services for the poor. Health programs delivered through microfinance have the potential to address several access barriers to health. We report the design and baseline findings of a multi-site non-randomized evaluation of the effect of a health program on the members of two microfinance organizations from the Karnataka and Gujarat states of India. Villages identified for roll-out of health services with microfinance were pair-matched with microfinance-only villages...

  11. An Approach to Distinguish between Plasticity and Non-random Distributions of Behavioral Types Along Urban Gradients in a Wild Passerine Bird

    Directory of Open Access Journals (Sweden)

    Philipp Sprau

    2017-08-01

    Full Text Available The impact of urbanization has been widely studied in the context of species diversity and life history evolution. Behavioral adaptation, by contrast, remains poorly understood because empirical studies rarely investigate the relative importance of two key mechanisms: plastic responses vs. non-random distributions of behavioral types. We propose here an approach that enables the simultaneous estimation of the respective roles of these distinct mechanisms. We investigated why risky behaviors are often associated with urbanization, using an urban nest box population of great tits (Parus major) as a study system. We simultaneously and repeatedly quantified individual behavior (aggression and flight initiation distance) as well as environmental factors characterizing the level of urbanization (numbers of pedestrians, cars and cyclists). This enabled us to statistically distinguish plastic responses from patterns of non-random distributions of behavioral types. Data analyses revealed that individuals did not plastically adjust their behavior to the level of urbanization. Behavioral types were instead non-randomly distributed: bold birds occurred more frequently in areas with more cars and fewer pedestrians, while shy individuals were predominantly found in areas with fewer cars and more pedestrians. These novel findings imply a major role for behavioral types in the evolutionary ecology of urban environments and call for the full integration of among- and within-individual variation in urban ecological studies.

  12. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages for most of the UK hardwood species studied. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, and including explanatory covariates.
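Capture-mark-recapture reasoning, in its simplest two-occasion form, uses the overlap between sampling occasions to estimate how many individuals went undetected. The paper fits a far richer model (survival and relocation probabilities with covariates over repeated censuses); the Chapman-corrected Lincoln-Petersen estimator below is only meant to illustrate how recapture information enters such methods:

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population
    size: n1 individuals marked on occasion one, n2 caught on occasion
    two, of which m were recaptures. Assumes a closed population with
    equal catchability -- much simpler than the CJS-type model used in
    the paper, but it shows the core idea: few recaptures imply many
    individuals were missed."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

For example, marking 100 trees, relocating 100, and finding only 20 overlaps implies a population of roughly 485; as the recapture fraction rises toward 1, the estimate collapses to the number actually seen.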

  13. A rapid procedure for the determination of thorium, uranium, cadmium and molybdenum in small sediment samples by inductively coupled plasma-mass spectrometry: application in Chesapeake Bay

    International Nuclear Information System (INIS)

    Zheng, Y.; Weinman, B.; Cronin, T.; Fleisher, M.Q.; Anderson, R.F.

    2003-01-01

    This paper describes a rapid procedure that allows precise analysis of Mo, Cd, U and Th in sediment samples as small as 10 mg, using a novel approach that combines 'pseudo' isotope dilution for Th with conventional isotope dilution for Mo, Cd and U by ICP-MS. Long-term reproducibility of the method is between 2.5 and 5%, with the advantage of rapid analysis on a single digestion of the sediment sample and the potential of adding other elements of interest if so desired. Application of this method to two piston cores collected near the mouth of the Patuxent River in Chesapeake Bay showed that the accumulation of authigenic Mo and Cd varied in response to the changing bottom water redox conditions, with anoxia showing consistent oscillations throughout both pre-industrial and industrial times. Accumulation of authigenic U shows consistent oscillations as well, without any apparent increase in productivity related to anoxic trends. Degrees of Mo and Cd enrichment also inversely correlate with halophilic microfaunal assemblages already established as paleoclimate proxies within the bay, indicating that bottom water anoxia is driven in part by the amount of freshwater discharge that the area receives.

  14. Thermal transfer and apparent-dose distributions in poorly bleached mortar samples: results from single grains and small aliquots of quartz

    International Nuclear Information System (INIS)

    Jain, M.; Thomsen, K.J.; Boetter-Jensen, L.; Murray, A.S.

    2004-01-01

    In the assessment of doses received from a nuclear accident, considerable attention has been paid to retrospective dosimetry using the optically stimulated luminescence (OSL) of heated materials such as bricks and tiles. Quartz extracted from these artefacts was heated during manufacture; this process releases all the prior trapped charge and simultaneously sensitises the quartz. Unfortunately, unheated materials such as mortar and concrete are more common in industrial sites and particularly in nuclear installations. These materials are usually exposed to daylight during quarrying and construction, but in general this exposure is insufficient to completely empty (bleach) any geological trapped charge. This leads to a distribution of apparent doses in the sample at the time of construction, with only some (if any) grains exposed to sufficient light to be considered well bleached for OSL dosimetry. The challenge in using such materials as retrospective dosemeters is in identifying these well-bleached grains when an accident dose has been superimposed on the original dose distribution. We investigate here, using OSL, the background dose in three different mortar samples: render, whitewash and inner wall plaster from a building built in 1964. These samples are found to be both poorly bleached and weakly sensitive (only 0.3% of grains giving a detectable dose response). We study thermal transfer in single grains of quartz, investigate the grain-size dependence of bleaching in the size range 90-300 μm and compare the dose distributions obtained from small-aliquot and single-grain procedures. A comparison of three different methods, viz. (a) first 5%, (b) probability plot and (c) comparison of internal and external uncertainties, is made for equivalent dose estimation. The results have implications for accident dosimetry, archaeological studies and dating of poorly bleached sediments.

  15. Ultra-trace plutonium determination in small volume seawater by sector field inductively coupled plasma mass spectrometry with application to Fukushima seawater samples.

    Science.gov (United States)

    Bu, Wenting; Zheng, Jian; Guo, Qiuju; Aono, Tatsuo; Tagami, Keiko; Uchida, Shigeo; Tazoe, Hirofumi; Yamada, Masatoshi

    2014-04-11

    Long-term monitoring of Pu isotopes in seawater is required for assessing Pu contamination in the marine environment from the Fukushima Dai-ichi Nuclear Power Plant accident. In this study, we established an accurate and precise analytical method based on anion-exchange chromatography and SF-ICP-MS. This method was able to determine Pu isotopes in seawater samples with small volumes (20–60 L). The U decontamination factor was 3×10⁷–1×10⁸, which provided sufficient removal of interfering U from the seawater samples. The estimated limits of detection for ²³⁹Pu and ²⁴⁰Pu were 0.11 fg mL⁻¹ and 0.08 fg mL⁻¹, respectively, which corresponded to 0.01 mBq m⁻³ for ²³⁹Pu and 0.03 mBq m⁻³ for ²⁴⁰Pu when a 20 L volume of seawater was measured. We achieved good precision (2.9%) and accuracy (0.8%) for measurement of the ²⁴⁰Pu/²³⁹Pu atom ratio in the standard Pu solution with a ²³⁹Pu concentration of 11 fg mL⁻¹ and a ²⁴⁰Pu concentration of 2.7 fg mL⁻¹. Seawater reference materials were used for the method validation, and both the ²³⁹⁺²⁴⁰Pu activities and ²⁴⁰Pu/²³⁹Pu atom ratios agreed well with the expected values. Surface and bottom seawater samples collected off Fukushima in the western North Pacific since March 2011 were analyzed. Our results suggested that there was no significant variation of the Pu distribution in seawater in the investigated areas compared to the distribution before the accident. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Spectroelectrochemical Sensing Based on Multimode Selectivity simultaneously Achievable in a Single Device. 11. Design and Evaluation of a Small Portable Sensor for the Determination of Ferrocyanide in Hanford Waste Samples

    International Nuclear Information System (INIS)

    Stegemiller, Michael L.; Heineman, William R.; Seliskar, Carl J.; Ridgway, Thomas H.; Bryan, Samuel A.; Hubler, Timothy L.; Sell, Richard L.

    2003-01-01

    Spectroelectrochemical sensing based on multimode selectivity simultaneously achievable in a single device. 11. Design and evaluation of a small portable sensor for the determination of ferrocyanide in Hanford waste samples

  17. Performance of next-generation sequencing on small tumor specimens and/or low tumor content samples using a commercially available platform.

    Directory of Open Access Journals (Sweden)

    Scott Morris

    Full Text Available Next-generation sequencing (NGS) tests are usually performed on relatively small core biopsy or fine needle aspiration (FNA) samples. Data is limited on what amount of tumor by volume or minimum number of FNA passes is needed to yield sufficient material for running NGS. We sought to identify the amount of tumor needed for running the PCDx NGS platform. 2,723 consecutive tumor tissues of all cancer types were queried and reviewed for inclusion. Information on tumor volume, success of performing NGS, and results of NGS were compiled. Assessment of sequence analysis, mutation calling and sensitivity, quality control, drug associations, and data aggregation and analysis was performed. 6.4% of samples were rejected from all testing due to insufficient tumor quantity. The number of genes with insufficient sensitivity to make definitive mutation calls increased as the percentage of tumor decreased, reaching statistical significance below 5% tumor content. The number of drug associations also decreased with a lower percentage of tumor, but this difference only became significant between 1-3%. The number of drug associations did decrease with smaller tissue size, as expected. Neither specimen size nor percentage of tumor affected the ability to pass mRNA quality control. A tumor area of 10 mm² provides a good margin of error for specimens to yield adequate drug association results. Specimen suitability remains a major obstacle to clinical NGS testing. We determined that PCR-based library creation methods allow the use of smaller specimens, and those with a lower percentage of tumor cells, to be run on the PCDx NGS platform.

  18. Measurement of large asymptotic reactor periods (from about 10³ to 4·10⁴ sec) to determine reactivity effects of small samples

    International Nuclear Information System (INIS)

    Grinevich, F.A.; Evchuk, A.I.; Klimentov, V.B.; Tyzh, A.V.; Churkin, Yu.I.; Yaroshevich, O.I.

    1977-01-01

    All investigation programs on fast reactor physics include measurements of low reactivity values, (1–0.01)×10⁻⁵ ΔK/K. An application of the pile oscillator technique for this purpose requires a special critical assembly for installation of the oscillator. Thus it is of interest to develop relatively simple methods; in particular, one such method is the asymptotic period method, which is widely used for low reactivity measurements. The description of the method and the equipment developed for low reactivity determination from measurements of the steady-state reactor period is presented. The equipment has been tested on the BTS-2 fast-thermal critical assembly. Measurement results on the reactivity effects of small samples in the fast zone centre are given. It is shown that the application of the method of measuring long steady-state periods, with the developed and tested equipment, enables a reactivity of (1±0.02)×10⁻⁵ ΔK/K to be determined at a critical assembly power of 5 to 10 W. The disadvantage of the method presented is the time lost in reaching the steady-state period, which results in greater sensitivity of the method to reactivity drifts.

  19. The study protocol for a non-randomized controlled clinical trial using a genotype-guided strategy in a dataset of patients who undergone percutaneous coronary intervention with stent

    Directory of Open Access Journals (Sweden)

    Cristina Lucía Dávila-Fajardo

    2017-02-01

    Full Text Available This article contains data related to the research article entitled “Results of genotype–guided antiplatelet therapy in patients undergone percutaneous coronary intervention with stent” (J. Sánchez-Ramos, C.L. Dávila-Fajardo, P. Toledo Frías, X. Díaz Villamarín, L.J. Martínez-González, S. Martínez Huertas, F. Burillo Gómez, J. Caballero Borrego, A. Bautista Pavés, M.C. Marín Guzmán, J.A. Ramirez Hernández, C. Correa Vilches, J. Cabeza Barrera, 2016) [1]. This data article reports, for the first time, the protocol of a non-randomized clinical trial that tests whether a CYP2C19/ABCB1 genotype-guided strategy, in which the choice of antiplatelet therapy is based on the genetic test, reduces the rates of cardiovascular events and bleeding compared to a non-tailored strategy in patients who underwent percutaneous coronary intervention (PCI) with stent. The data included in this article are: design and setting of the study, study population, inclusion and exclusion criteria, definition of the intervention, objectives, variables (baseline characteristics and during follow-up), study procedures, collection and treatment of the biological sample, genotyping, withdrawal criteria, sample size, statistical analysis, ethical aspects, information sheet and consent form. The authors confirm that this study has been registered in EudraCT (EudraCT: 2016-001294-33).

  20. Criticality Safety Evaluation for Small Sample Preparation and Non-Destructive Assay (NDA) Operations in Wing 7 Basement of the CMR Facility

    Energy Technology Data Exchange (ETDEWEB)

    Kunkle, Paige Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zhang, Ning [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-02

    Nuclear Criticality Safety (NCS) has reviewed the fissionable material small sample preparation and NDA operations in Wing 7 Basement of the CMR Facility. This is a Level-1 evaluation conducted in accordance with NCS-AP-004 [Reference 1], formerly NCS-GUIDE-01, and the guidance set forth on use of the Standard Criticality Safety Requirements (SCSRs) [Reference 2]. As stated in Reference 2, the criticality safety evaluation consists of both the SCSR CSED and the SCSR Application CSED. The SCSR CSED is a Level-3 CSED [Reference 3]. This Level-1 CSED is the SCSR Application CSED. This SCSR Application (Level-1) evaluation does not derive controls; it simply applies controls derived from the SCSR CSED (Level-3) to the operations conducted here. The controls derived in the SCSR CSED (Level-3) were evaluated via the process described in Section 6.6.5 of SD-130 (also reproduced in Section 4.3.5 of NCS-AP-004 [Reference 1]) and were determined not to meet the requirements for consideration of elevation into the safety basis documentation for CMR. According to the guidance set forth on use of the SCSRs [Reference 2], the SCSR CSED (Level-3) is also applicable to the CMR Facility because the process and the normal and credible abnormal conditions in question are bounded by those that are described in the SCSR CSED. The controls derived in the SCSR CSED include allowances for solid materials and solution operations. Based on the operations conducted at this location, there are less-than-accountable (LTA) amounts of ²³³U. Based on the evaluation documented herein, the normal and credible abnormal conditions that might arise during the execution of this process will remain subcritical with the following recommended controls.

  1. Mediastinal lymph node dissection versus mediastinal lymph node sampling for early stage non-small cell lung cancer: a systematic review and meta-analysis.

    Science.gov (United States)

    Huang, Xiongfeng; Wang, Jianmin; Chen, Qiao; Jiang, Jielin

    2014-01-01

    This systematic review and meta-analysis aimed to evaluate the overall survival, local recurrence, distant metastasis, and complications of mediastinal lymph node dissection (MLND) versus mediastinal lymph node sampling (MLNS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients. A systematic search of published literature was conducted using the main databases (MEDLINE, PubMed, EMBASE, and Cochrane databases) to identify relevant randomized controlled trials that compared MLND vs. MLNS in NSCLC patients. The methodological quality of the included randomized controlled trials was assessed according to the criteria from the Cochrane Handbook for Systematic Reviews of Interventions (Version 5.1.0). Meta-analysis was performed using The Cochrane Collaboration's Review Manager 5.3. The results of the meta-analysis were expressed as hazard ratio (HR) or risk ratio (RR), with their corresponding 95% confidence interval (CI). We included results reported from six randomized controlled trials, with a total of 1,791 patients included in the primary meta-analysis. Compared to MLNS, there was no statistically significant difference for MLND in overall survival (HR = 0.77, 95% CI 0.55 to 1.08; P = 0.13). In addition, the results indicated that the local recurrence rate (RR = 0.93, 95% CI 0.68 to 1.28; P = 0.67), distant metastasis rate (RR = 0.88, 95% CI 0.74 to 1.04; P = 0.15), and total complications rate (RR = 1.10, 95% CI 0.67 to 1.79; P = 0.72) were similar, with no significant difference found between the two groups. Results for overall survival, local recurrence rate, and distant metastasis rate were similar between MLND and MLNS in early stage NSCLC patients. There was no evidence that MLND increased complications compared with MLNS. Whether or not MLND is superior to MLNS for stage II-IIIA disease remains to be determined.
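Pooled ratios like the HR and RRs above come from inverse-variance meta-analysis: each study's log ratio is weighted by the inverse of its variance, which can be recovered from the reported 95% CI. A fixed-effect sketch of that calculation (the review used RevMan 5.3 and may have applied random-effects weighting; this is illustrative, not a reproduction of its analysis):

```python
import math

def pooled_ratio(estimates):
    """Fixed-effect inverse-variance pooling of ratio measures (HR/RR):
    combine log ratios weighted by 1/SE^2, with each SE recovered from
    the study's 95% CI, then back-transform the pooled estimate and CI.
    `estimates` is a list of (ratio, ci_low, ci_high) tuples."""
    logs, weights = [], []
    for r, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        logs.append(math.log(r))
        weights.append(1 / se**2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))
```

With a single study the function returns (approximately) that study's own estimate and CI; adding a second study with the same result leaves the point estimate unchanged but narrows the interval, which is the mechanism by which pooling six trials sharpens the overall survival comparison.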

  2. Evaluation of physical activity interventions in children via the reach, efficacy/effectiveness, adoption, implementation, and maintenance (RE-AIM) framework: A systematic review of randomized and non-randomized trials.

    Science.gov (United States)

    McGoey, Tara; Root, Zach; Bruner, Mark W; Law, Barbi

    2016-01-01

    Existing reviews of physical activity (PA) interventions designed to increase PA behavior exclusively in children (ages 5 to 11 years) focus primarily on the efficacy (e.g., internal validity) of the interventions without addressing the applicability of the results in terms of generalizability and translatability (e.g., external validity). This review used the RE-AIM (Reach, Efficacy/Effectiveness, Adoption, Implementation, Maintenance) framework to measure the degree to which randomized and non-randomized PA interventions in children report on internal and external validity factors. A systematic search for controlled interventions conducted within the past 12 years identified 78 studies that met the inclusion criteria. Based on the RE-AIM criteria, most of the studies focused on elements of internal validity (e.g., sample size, intervention location and efficacy/effectiveness) with minimal reporting of external validity indicators (e.g., representativeness of participants, start-up costs, protocol fidelity and sustainability). Results of this RE-AIM review emphasize the need for future PA interventions in children to report on real-world challenges and limitations, and to highlight considerations for translating evidence-based results into health promotion practice. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Probability calculus of fractional order and fractional Taylor's series application to Fokker-Planck equation and information of non-random functions

    International Nuclear Information System (INIS)

    Jumarie, Guy

    2009-01-01

    A probability distribution of fractional (or fractal) order is defined by the measure μ(dx) = p(x)(dx)^α, 0 < α < 1. Combining this definition with the fractional Taylor's series f(x + h) = E_α(D_x^α h^α)f(x) provided by the modified Riemann-Liouville definition, one can expand a probability calculus parallel to the standard one. A Fourier's transform of fractional order using the Mittag-Leffler function is introduced, together with its inversion formula, and it provides a suitable generalization of the characteristic function of fractal random variables. It appears that the state moments of fractional order are more especially relevant. The main properties of this fractional probability calculus are outlined; it is shown that it provides a sound approach to Fokker-Planck equations which are fractional in both space and time, and it provides new results in the information theory of non-random functions.
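    The Mittag-Leffler function E_α that appears in the fractional Taylor's series is defined by the series E_α(z) = Σ_{k≥0} z^k / Γ(αk + 1). A minimal sketch evaluating the truncated series (illustrating only the definition, not the fractional calculus itself):

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """Truncated series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).

    For alpha = 1 this reduces to the ordinary exponential, which is the
    sense in which the fractional Taylor's series generalizes the usual one.
    The raw series is only practical for modest |z|.
    """
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))
```

    A quick sanity check is that `mittag_leffler(1.0, z)` agrees with `math.exp(z)` to machine precision for small z.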

  4. A non-randomized [corrected] controlled trial of the active music engagement (AME) intervention on children with cancer.

    Science.gov (United States)

    Robb, Sheri L; Clair, Alicia A; Watanabe, Masayo; Monahan, Patrick O; Azzouz, Faouzi; Stouffer, Janice W; Ebberts, Allison; Darsie, Emily; Whitmer, Courtney; Walker, Joey; Nelson, Kirsten; Hanson-Abromeit, Deanna; Lane, Deforia; Hannan, Ann

    2008-07-01

    Coping theorists argue that environmental factors affect how children perceive and respond to stressful events such as cancer. However, few studies have investigated how particular interventions can change coping behaviors. The active music engagement (AME) intervention was designed to counter stressful qualities of the in-patient hospital environment by introducing three forms of environmental support. The purpose of this multi-site randomized controlled trial was to determine the efficacy of the AME intervention on three coping-related behaviors (i.e. positive facial affect, active engagement, and initiation). Eighty-three participants, ages 4-7, were randomly assigned to one of three conditions: AME (n = 27), music listening (ML; n = 28), or audio storybooks (ASB; n = 28). Conditions were videotaped to facilitate behavioral data collection using time-sampling procedures. After adjusting for baseline differences, repeated measures analyses indicated that AME participants had a significantly higher frequency of coping-related behaviors compared with ML or ASB. Positive facial affect and active engagement were significantly higher during AME compared with ML and ASB (p<0.0001). Initiation was significantly higher during AME than ASB (p<0.05). This study supports the use of the AME intervention to encourage coping-related behaviors in hospitalized children aged 4-7 receiving cancer treatment. (c) 2007 John Wiley & Sons, Ltd.

  5. Body Mass Index, family lifestyle, physical activity and eating behavior on a sample of primary school students in a small town of Western Sicily

    Directory of Open Access Journals (Sweden)

    Enza Sidoti

    2009-09-01

    Background: Obesity is currently a discernible issue in prosperous Western society and is dramatically increasing in children and adolescents. Many studies indicate that obesity in childhood may become a chronic disease in adulthood and, particularly, that the severely overweight have an increased risk of death from cardiovascular disease. Understanding the determinants of lifestyle and behavior in a person's youth and attempting to change children's habits is considered a key strategy in the primary prevention of obesity. This study aims to find a correlation between Body Mass Index (BMI), physical activity and eating behavior, and to identify possible risks, protective factors and directions for interventions on incorrect nutritional/physical activity and intra-familial lifestyles in a sample of young adolescents in a small town of Western Sicily.

    Methods: The research surveyed the entire population of the last three curricular years of two primary schools in a town of western Sicily (n=294). The instrument used for the survey was a questionnaire containing 20 different items with multiple-choice answers. Personal information, physical activity and eating behaviors were collected for both parents and students, in order to cross students' and parents' characteristics. Data were codified and statistical analysis was performed using the Statistica and Openstat software.

    Results: The data obtained demonstrated a relevant percentage (18%) of obese children. The prevalence of overweight was high as well (23%), and many in this area (12%) were at risk, since they were on the limits of the lower class. A significant association was found between the percentage of students classified as having an elevated BMI and a sedentary habit and/or an incorrect eating behavior. Among the overweight and obese children a direct statistical association was also shown between the weight of their

  6. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    Science.gov (United States)

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
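    One way to see how regularization combined with the singular value decomposition counters multicollinearity is through the standard ridge "filter factors": writing the predictor matrix as X = U diag(d) Vᵀ, the regularized solution keeps the fraction d²/(d² + λ) of each SVD component. This is a generic sketch of that identity, not the paper's Bayesian model:

```python
def ridge_filter_factors(singular_values, lam):
    """Shrinkage applied to each SVD component under ridge regularization.

    Small singular values correspond to near-collinear predictor
    directions that destabilize ordinary least squares; regularization
    damps exactly those components while leaving well-determined
    directions nearly untouched.
    """
    return [d**2 / (d**2 + lam) for d in singular_values]

# One large and one tiny singular value: the well-determined direction is
# kept almost fully, the ill-conditioned one is strongly suppressed.
factors = ridge_filter_factors([10.0, 0.1], lam=1.0)
```

    With more field plots the singular values are better estimated and λ can be reduced; with only 50 plots the damping is what keeps predictions stable.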

  7. Stratospheric Air Sub-sampler (SAS) and its application to analysis of Delta O-17(CO2) from small air samples collected with an AirCore

    NARCIS (Netherlands)

    Mrozek, Dorota Janina; van der Veen, Carina; Hofmann, Magdalena E. G.; Chen, Huilin; Kivi, Rigel; Heikkinen, Pauli; Rockmann, Thomas

    2016-01-01

    We present the set-up and a scientific application of the Stratospheric Air Sub-sampler (SAS), a device to collect and to store the vertical profile of air collected with an AirCore (Karion et al., 2010) in numerous sub-samples for later analysis in the laboratory. The SAS described here is a 20m

  8. Extreme-temperature lab on a chip for optogalvanic spectroscopy of ultra small samples - key components and a first integration attempt

    International Nuclear Information System (INIS)

    Berglund, Martin; Khaji, Zahra; Persson, Anders; Sturesson, Peter; Breivik, Johan Söderberg; Thornell, Greger; Klintberg, Lena

    2016-01-01

    This is a short summary of the authors' recent R&D on valves, combustors, plasma sources, and pressure and temperature sensors, realized in high-temperature co-fired ceramics, and an account of the first attempt to monolithically integrate them to form a lab on a chip for sample administration, preparation and analysis, as a stage in optogalvanic spectroscopy. (paper)

  9. Using Web2.0 social network technology for sampling framework identification and respondent recruitment: experiences with a small-scale experiment

    NARCIS (Netherlands)

    Grigolon, A.B.; Kemperman, A.D.A.M.; Timmermans, H.J.P.

    2011-01-01

    In this paper, we report the results of a small-scale experiment to explore the potential of using social network technology for respondent recruitment. Of particular interest are the following questions: (i) can social media be used for the identification of sampling frames, (ii) what response rates

  10. Stevens Pond: A postglacial pollen diagram from a small Typha Swamp in Northwestern Minnesota, interpreted from pollen indicators and surface samples

    NARCIS (Netherlands)

    Janssen, C.R.

    1967-01-01

    The pollen assemblages of a core in the conifer-hardwood formation in northwestern Minnesota are compared with the floristics of the recent vegetation in the region. Percentage levels of the main tree components have been compared first with those from recent surface samples taken at the same short

  11. Development of a standard data base for FBR core nuclear design (XIII). Analysis of small sample reactivity experiments at ZPPR-9

    International Nuclear Information System (INIS)

    Sato, Wakaei; Fukushima, Manabu; Ishikawa, Makoto

    2000-09-01

    A comprehensive study to evaluate and accumulate the abundant results of fast reactor physics is now in progress at the O-arai Engineering Center to improve analytical methods and the prediction accuracy of nuclear design for large fast breeder cores such as future commercial FBRs. The present report summarizes the analytical results of the sample reactivity experiments at the ZPPR-9 core, which had not yet been evaluated with the latest analytical method. The intention of the work is to extend and further generalize the standard data base for FBR core nuclear design. The analysis of the sample reactivity experiments (samples: PU-30, U-6, DU-6, SS-1 and B-1) at the ZPPR-9 core in the JUPITER series, using the latest nuclear data library JENDL-3.2 and the analytical method established by the JUPITER analysis, can be summarized as follows. The region-averaged final C/E values generally agreed with unity within 5% differences in the inner core region. However, the C/E values of every sample showed a radial space dependency, increasing from the center to the core edge; the discrepancy for B-1 was the largest, at 10%. Next, the influence of the present ZPPR-9 sample reactivity results on the cross-section adjustment was evaluated. The reference case was a unified cross-section set, ADJ98, based on the recent JUPITER analysis. In conclusion, the present analytical results have sufficient physical consistency with the other JUPITER data, and qualify as part of the standard data base for FBR nuclear design. (author)

  12. A high sensitivity SQUID-method for the measurement of magnetic susceptibility of small samples in the temperature range 1.5 K-40 K and application on small palladium particles

    International Nuclear Information System (INIS)

    Tu Nguyen Quang.

    1979-01-01

    In this paper a method is developed for magnetic susceptibility measurements which is superior to the common methods. The method is based on the SQUID principle (Superconducting Quantum Interference Device), using the tunnel effect of a superconducting point contact and magnetic flux quantization for measuring electric and magnetic quantities. Due to this refined method, susceptibility changes of very small palladium particles with respect to the bulk could be detected in the temperature range 1.5 K-40 K. In addition, susceptibility differences of particle distributions with different mean diameters (81 Angstroem and 65 Angstroem) have been measured for the first time. A quantitative comparison of the measurements with theoretical results shows satisfactory agreement. (orig./WBU) [de

  13. Accuracy assessment of digital surface models based on a small format action camera in a North-East Hungarian sample area

    Directory of Open Access Journals (Sweden)

    Barkóczi Norbert

    2017-01-01

    The use of small-format digital action cameras has increased in the past few years in various applications, due to their low cost, flexibility and reliability. We can mount these small cameras on several devices, such as unmanned aerial vehicles (UAVs), and create 3D models with photogrammetric techniques. Whether creating or receiving these kinds of databases, one of the most important questions will always be how accurate these systems are and what accuracy can be achieved. We gathered the overlapping images, created point clouds, and then generated 21 different digital surface models (DSMs). The models differed in the number of images used and in the flight height. We repeated the flights three times to compare the same models with each other. In addition, we measured 129 reference points with RTK-GPS to compare the height differences with the cell values extracted from each DSM. The results showed that a higher flight height has lower errors, and the optimal air base distance is one fourth of the flying height in both cases. The lowest median was 0.08 m, for the model with 180 m flight height and 50 m air base distance. Raising the number of images does not increase the overall accuracy. The connection between the amount of error and the distance from the nearest GCP is not linear in every case.
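    The checkpoint comparison described above amounts to taking the median of absolute residuals between reference elevations and the DSM cell values at the same locations. A minimal sketch with hypothetical elevations (the study's own values are not reproduced):

```python
from statistics import median

def dsm_height_errors(reference, dsm_values):
    """Per-point height residuals between checkpoints and DSM cells.

    `reference` and `dsm_values` are matched lists of elevations (m) at
    the same locations, e.g. RTK-GPS points vs. extracted raster cells;
    the median absolute residual is a robust accuracy summary.
    """
    residuals = [abs(r - d) for r, d in zip(reference, dsm_values)]
    return residuals, median(residuals)

# Hypothetical checkpoint elevations vs. extracted DSM cell values (m)
_, med = dsm_height_errors([101.20, 98.75, 103.40],
                           [101.30, 98.70, 103.55])
```

    The median is preferred over the mean here because single gross outliers (e.g. a checkpoint under vegetation) would otherwise dominate the summary.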

  14. Method of determining coking temperature of coke. [Experimental method of determining final coking temperature using a small sample and calibration graph

    Energy Technology Data Exchange (ETDEWEB)

    Mel' nichuk, A.Yu.; Bondarenko, A.K.; Fialkov, B.S.; Khegay, L.U.; Khvan, L.A.; Muzyzhuk, V.D.; Zakharov, A.G.; Zelenskiy, V.P.

    1985-01-01

    The coking temperature of coke should be determined from the magnitude of the ionization current of the medium during heating (3°/min) of a coke sample (2 g, fraction < 0.2 mm) in an oxidation medium with air supply (1 l/min). The coking temperature is determined from the maximum magnitude of the current using a calibration graph constructed from the analysis of coke samples obtained with different final coking temperatures. The discrepancy between the established coking temperature and that determined by the proposed method is 8-19°, versus 26-43° for that determined from the electrical resistance of coke. In addition to high accuracy, this method reduces the time required for the analysis.
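    Reading a final coking temperature off the calibration graph amounts to interpolating between reference points of known coking temperature. A sketch with hypothetical (peak current, temperature) calibration pairs, purely for illustration:

```python
from bisect import bisect_left

def coking_temperature(current, calibration):
    """Read a coking temperature off a calibration curve by interpolation.

    `calibration` is a list of (peak_ionization_current, temperature)
    pairs sorted by current, built from reference cokes with known final
    coking temperatures; values outside the curve are clamped to its ends.
    """
    currents = [c for c, _ in calibration]
    i = bisect_left(currents, current)
    if i == 0:
        return calibration[0][1]
    if i == len(calibration):
        return calibration[-1][1]
    (c0, t0), (c1, t1) = calibration[i - 1], calibration[i]
    # linear interpolation between the two bracketing calibration points
    return t0 + (t1 - t0) * (current - c0) / (c1 - c0)

# Hypothetical calibration pairs (arbitrary current units, temperature in °C)
temp = coking_temperature(5.0, [(2.0, 900.0), (4.0, 1000.0), (8.0, 1100.0)])
```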

  15. Outsourcing cytological samples to a referral laboratory for EGFR testing in non-small cell lung cancer: does theory meet practice?

    Science.gov (United States)

    Vigliar, E; Malapelle, U; Bellevicine, C; de Luca, C; Troncone, G

    2015-10-01

    Guidelines from the College of American Pathologists (CAP), the International Association for the Study of Lung Cancer (IASLC) and the Association for Molecular Pathology (AMP) consider cytology suitable for testing epidermal growth factor receptor (EGFR) mutations in lung adenocarcinoma. The guidelines recommend that cytopathologists first discuss the possibility of testing squamous cell carcinomas (SqCC) in multidisciplinary meetings. Second, cell blocks should be analysed rather than smear preparations and, third, specimens should be sent to external molecular laboratories within three working days of receiving requests. This study monitored how these recommendations are met in practice. Our laboratory received 596 requests from cytologists from 13 different institutions. For each case, the cytological diagnosis, cytopreparation type, and time between the request and sample mailing were compared with the recommendations. Of the 596 samples, 32 (5.4%) had been reported as SqCC. Three of these (9.4%) showed EGFR mutation. Cytological slides, either ThinPrep™ (51.2%) or direct smears (43.2%), were more frequently received than cell blocks (5.7%). The mean time between the oncologist's request and specimen dispatch was 5.8 working days. The occurrence of mutations in samples reported as SqCC was higher than expected. This questions the reliability of the original diagnosis, reinforcing the recommendation to evaluate the opportunity for testing non-adenocarcinoma cytology on a case-by-case basis. In spite of the CAP/IASLC/AMP recommendations, cell blocks were underutilized for EGFR testing, but cytological slides were suitable for DNA analyses. Significant efforts are needed to avoid delays in outsourcing cytological samples for EGFR testing. © 2014 John Wiley & Sons Ltd.

  16. An open-label, non-randomized comparison of venlafaxine and gabapentin as monotherapy or adjuvant therapy in the management of neuropathic pain in patients with peripheral neuropathy

    Directory of Open Access Journals (Sweden)

    William Eardley

    2010-04-01

    William Eardley, Cory Toth, Department of Clinical Neurosciences and the University of Calgary, Calgary, AB, Canada. Abstract: Although many therapies are used in the management of neuropathic pain (NeP) due to polyneuropathy (PN), few comparison studies exist. We performed a prospective, non-randomized, unblinded efficacy comparison of the serotonin-norepinephrine reuptake inhibitor venlafaxine, as either monotherapy or adjuvant therapy, with a first-line medication for NeP, gabapentin, in patients with PN-related NeP. VAS pain scores were assessed after 3 and 6 months in the intervention groups and in a cohort of patients receiving no pharmacotherapy. In a total of 223 patients, we analyzed pain quantity and quality (visual analogue scale [VAS] score, Brief Pain Inventory [BPI]) and quality of life and health status measures (EuroQol 5 Domains [EQ-5D], Medical Outcomes Sleep Study Scale [MOSSS], Hospital Anxiety and Depression Scale [HADS] and Short Form 36 Health Survey [SF-36]) after 6 months of therapy. Significant improvements in VAS pain scores occurred in all treatment groups after 6 months. Improvements in aspects of daily life and anxiety were identified in all treatment groups. Our data suggest that monotherapy or adjuvant therapy with venlafaxine is comparable to gabapentin for NeP management. We advocate head-to-head, randomized, double-blinded studies of current NeP therapies. Keywords: peripheral neuropathy, neuropathic pain, pharmacotherapy, venlafaxine, gabapentin

  17. Validation of the k-filtering technique for a signal composed of random-phase plane waves and non-random coherent structures

    Directory of Open Access Journals (Sweden)

    O. W. Roberts

    2014-12-01

    Recent observations of astrophysical magnetic fields have shown the presence of fluctuations that are wave-like (propagating in the plasma frame) and those described as structure-like (advected by the plasma bulk velocity). Typically, with single-spacecraft missions it is impossible to differentiate between these two types of fluctuation, due to the inherent spatio-temporal ambiguity associated with a single-point measurement. However, missions such as Cluster, which comprise multiple spacecraft, have allowed temporal and spatial changes to be resolved, using techniques such as k filtering. While this technique does not assume Taylor's hypothesis, it requires both weak stationarity of the time series and that the fluctuations can be described by a superposition of plane waves with random phases. In this paper we test whether the method can cope with a synthetic signal which is composed of a combination of non-random-phase coherent structures with a mean radius d and a mean separation λ, as well as plane waves with random phase.

  18. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or a sample whose preparation is no more complex than dissolution in a given solvent. The latter process alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect extraction. Here, it is very likely that a large number of components will not dissolve and are, therefore, eliminated by a simple filtration process. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times, enrichment is necessary; that is, the component of interest is present in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  19. Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.

    Science.gov (United States)

    Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel

    2015-12-01

    Many people living in low- and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought, since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
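    The basic reason survey weights matter is that under non-random sampling a design-weighted estimate differs from the naive sample mean. A minimal sketch of the weighted point estimate only (the paper's variance estimator and hierarchical smoothing are not reproduced here), with hypothetical respondents:

```python
def weighted_rate(outcomes, weights):
    """Design-weighted estimate of a rate from survey data.

    `outcomes` are 0/1 indicators (e.g. died before age five) and
    `weights` are the survey design weights; simply averaging the
    indicators would be biased when sampling probabilities vary.
    """
    total_w = sum(weights)
    return sum(w * y for w, y in zip(weights, outcomes)) / total_w

# Three hypothetical respondents: the weighted rate differs from the
# naive mean (1/3) because the positive outcome carries a small weight.
rate = weighted_rate([1, 0, 0], [0.5, 2.0, 1.5])
```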

  20. Determination of proguanil and metabolites in small sample volumes of whole blood stored on filter paper by high-performance liquid chromatography.

    Science.gov (United States)

    Kolawole, J A; Taylor, R B; Moody, R R

    1995-12-01

    A method is reported for the determination of proguanil and its two metabolites, cycloguanil and 4-chlorophenylbiguanide, in whole blood and plasma samples obtained by thumbprick and stored dry on filter paper. The sample preparation involves liquid extraction from the filter paper and subsequent solid-phase extraction using C8 Bond-Elut cartridges. Separation and quantification are by a previously reported ion-pairing high-performance liquid chromatographic system with ODS Hypersil as the stationary phase and a 50:50 acetonitrile-pH 2 phosphate buffer mobile phase containing 200 mM sodium dodecylsulphate as the ion-pairing agent. The analytical characteristics of the method are reported. Representative concentrations are shown as a function of time for a human subject after ingestion of a single 200-mg dose of proguanil hydrochloride. Typical concentration ranges detected by the proposed method in human subjects were: proguanil 12-900 ng/ml, cycloguanil 16-44 ng/ml and 4-chlorophenylbiguanide 1.5-10 ng/ml in whole blood.

  1. Combining land use information and small stream sampling with PCR-based methods for better characterization of diffuse sources of human fecal pollution.

    Science.gov (United States)

    Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C

    2011-07-01

    Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters; however, these methods do not discern between human and other animal sources of fecal pollution, making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data can be useful for identifying diffuse sources of human fecal pollution in receiving waters. To test this assertion, this study monitored nine headwatersheds over a two-year period potentially impacted by faulty septic systems and leaky sanitary sewer lines. Human fecal pollution was measured using three different human-associated qPCR methods, and a positive significant correlation was seen between the abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities, suggesting septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.

  2. A new set-up for simultaneous high-precision measurements of CO2, δ13C-CO2 and δ18O-CO2 on small ice core samples

    Science.gov (United States)

    Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas

    2016-08-01

    Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement

  3. Assessment of real-time PCR method for detection of EGFR mutation using both supernatant and cell pellet of malignant pleural effusion samples from non-small-cell lung cancer patients.

    Science.gov (United States)

    Shin, Saeam; Kim, Juwon; Kim, Yoonjung; Cho, Sun-Mi; Lee, Kyung-A

    2017-10-26

    EGFR mutation is an emerging biomarker for treatment selection in non-small-cell lung cancer (NSCLC) patients. However, optimal mutation detection is hindered by complications associated with the biopsy procedure, tumor heterogeneity and the limited sensitivity of test methodology. In this study, we evaluated the diagnostic utility of real-time PCR using malignant pleural effusion samples. A total of 77 pleural fluid samples from 77 NSCLC patients were tested using the cobas EGFR mutation test (Roche Molecular Systems). Pleural fluid was centrifuged, and the separated cell pellets and supernatants were tested in parallel. Results were compared with Sanger sequencing and/or peptide nucleic acid (PNA)-mediated PCR clamping of matched tumor tissue or pleural fluid samples. All samples showed valid real-time PCR results in one or more DNA samples extracted from the cell pellets and supernatants. Compared with other molecular methods, the sensitivity of the real-time PCR method was 100%. The concordance rate of real-time PCR and Sanger sequencing plus PNA-mediated PCR clamping was 98.7%. We have confirmed that real-time PCR using pleural fluid had a high concordance rate compared to conventional methods, with no failed samples. Our data demonstrate that parallel real-time PCR testing using supernatant and cell pellet could offer a reliable and robust surrogate strategy when tissue is not available.
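    Sensitivity and concordance figures of the kind quoted above are simple ratios over matched mutation calls: sensitivity is detected positives over reference positives, and concordance is agreeing calls over all specimens. A sketch with hypothetical specimen calls, not the study's data:

```python
def sensitivity_and_concordance(test_calls, reference_calls):
    """Sensitivity and overall concordance of one assay against another.

    Both inputs are lists of booleans (mutation detected or not) for the
    same specimens, in the same order.
    """
    pairs = list(zip(test_calls, reference_calls))
    ref_pos = [t for t, r in pairs if r]           # test results where reference is positive
    sens = sum(ref_pos) / len(ref_pos) if ref_pos else float("nan")
    conc = sum(t == r for t, r in pairs) / len(pairs)
    return sens, conc

# Hypothetical calls for 8 specimens (test assay vs. reference method)
sens, conc = sensitivity_and_concordance(
    [True, True, False, False, True, False, False, False],
    [True, True, False, False, True, False, False, True])
```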

  4. Thermal transfer and apparent-dose distributions in poorly bleached mortar samples: Results from single grains and small aliquots of quartz

    DEFF Research Database (Denmark)

    Jain, M.; Thomsen, Kristina Jørkov; Bøtter-Jensen, L.

    2004-01-01

    In the assessment of doses received from a nuclear accident, considerable attention has been paid to retrospective dosimetry using the optically stimulated luminescence (OSL) of heated materials such as bricks and tiles. Quartz extracted from these artefacts was heated during manufacture; this process releases all the prior trapped charge and simultaneously sensitises the quartz. Unfortunately, unheated materials such as mortar and concrete are more common in industrial sites and particularly in nuclear installations. These materials are usually exposed to daylight during quarrying ... dosimetry. The challenge in using such materials as retrospective dosemeters is in identifying these well-bleached grains when an accident dose has been superimposed on the original dose distribution. We investigate here, using OSL, the background dose in three different mortar samples: render, whitewash...

  5. Evaluation of an educational "toolbox" for improving nursing staff competence and psychosocial work environment in elderly care: results of a prospective, non-randomized controlled intervention.

    Science.gov (United States)

    Arnetz, J E; Hasson, H

    2007-07-01

    Lack of professional development opportunities among nursing staff is a major concern in elderly care and has been associated with work dissatisfaction and staff turnover. There is a lack of prospective, controlled studies evaluating the effects of educational interventions on nursing competence and work satisfaction. The aim of this study was to evaluate the possible effects of an educational "toolbox" intervention on nursing staff ratings of their competence, psychosocial work environment and overall work satisfaction. The study was a prospective, non-randomized, controlled intervention. Participants were nursing staff in two municipal elderly care organizations in western Sweden. In an initial questionnaire survey, nursing staff in the intervention municipality described several areas in which they felt a need for competence development. Measurement instruments and educational materials for improving staff knowledge and work practices were then collated by researchers and managers in a "toolbox". Nursing staff ratings of their competence and work were measured pre- and post-intervention by questionnaire. Staff ratings in the intervention municipality were compared to staff ratings in the reference municipality, where no toolbox was introduced. Nursing staff ratings of their competence and psychosocial work environment, including overall work satisfaction, improved significantly over time in the intervention municipality compared to the reference group. Both competence and work environment ratings were largely unchanged among reference municipality staff. Multivariate analysis revealed a significant interaction effect between municipalities over time for nursing staff ratings of participation, leadership, performance feedback and skills development. Staff ratings for these four scales improved significantly in the intervention municipality as compared to the reference municipality. In summary, compared to a reference municipality, nursing staff ratings of their competence and psychosocial work environment improved significantly following the toolbox intervention.

  6. Non-random temporary emigration and the robust design: Conditions for bias at the end of a time series: Section VIII

    Science.gov (United States)

    Langtimm, Catherine A.

    2008-01-01

    Deviations from model assumptions in the application of capture–recapture models to real life situations can introduce unknown bias. Understanding the type and magnitude of bias under these conditions is important to interpreting model results. In a robust design analysis of long-term photo-documented sighting histories of the endangered Florida manatee, I found high survival rates, high rates of non-random temporary emigration, significant time-dependence, and a diversity of factors affecting temporary emigration that made it difficult to model emigration in any meaningful fashion. Examination of the time-dependent survival estimates indicated a suspicious drop in survival rates near the end of the time series that persisted when the original capture histories were truncated and reanalyzed under a shorter time frame. Given the wide swings in manatee emigration estimates from year to year, a likely source of bias in survival was the convention to resolve confounding of the last survival probability in a time-dependent model with the last emigration probabilities by setting the last unmeasurable emigration probability equal to the previous year’s probability when the equality was actually false. Results of a series of simulations demonstrated that if the unmeasurable temporary emigration probabilities in the last time period were not accurately modeled, an estimation model with significant annual variation in survival probabilities and emigration probabilities produced bias in survival estimates at the end of the study or time series being explored. Furthermore, the bias propagated back in time beyond the last two time periods and the number of years affected varied positively with survival and emigration probabilities. Truncating the data to a shorter time frame and reanalyzing demonstrated that with additional years of data surviving temporary emigrants eventually return and are detected, thus in subsequent analysis unbiased estimates are eventually realized.

  7. Lifetime health effects and medical costs of integrated stroke services - a non-randomized controlled cluster-trial based life table approach

    Directory of Open Access Journals (Sweden)

    Dirks Maaike

    2010-11-01

    Background Economic evaluation of stroke services indicates that such services may lead to improved quality of life at affordable cost. The present study assesses the lifetime health impact and cost consequences of stroke in an integrated service setting. Methods The EDISSE study is a prospective non-randomized controlled cluster trial that compared stroke services (n = 151 patients) to usual care (n = 187 patients). Health status and cost trial-data were entered in multi-dimensional stroke life-tables. The tables distinguish four levels of disability, defined by the modified Rankin scale. Quality-of-life scores (EuroQoL-5D) and transition and survival probabilities are based on concurrent Dutch follow-up studies. Outcomes are quality-adjusted life years lived and lifetime medical cost by disability category. An economic analysis compares outcomes from a successful stroke service to usual care, by bootstrapping individual costs and effects data from patients in each arm. Results Lifetime costs and QALYs after stroke depend on age-of-onset of first-ever stroke. Lifetime QALYs after stroke are 2.42 (90% CI -0.49 to 2.75) for male patients in usual care and 2.75 (-0.61 to 6.26) for females. Lifetime costs for men in the usual care setting are €39,335 (15,951; 79,837) and €42,944 (14,081; 95,944) for women. A comparison with the stroke service results in an ICER of €11,685 saved per QALY gained (€14,211 and €7,745 for men and women, respectively). This stroke service is cost-effective with 90% certainty. Conclusions Our analysis shows the potential for large health benefits and cost savings of stroke services, taking a lifetime perspective, also in other European settings.
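The bootstrap step described in the Methods can be sketched in a few lines. The patient records below are toy values, not EDISSE data, and the function name is ours:

```python
import random

def bootstrap_icer(arm_service, arm_usual, n_boot=1000, seed=42):
    """Each arm is a list of (cost, qaly) patient records. Resample
    patients with replacement within each arm, then compute the ICER
    as (mean cost difference) / (mean QALY difference)."""
    rng = random.Random(seed)
    icers = []
    for _ in range(n_boot):
        s = [rng.choice(arm_service) for _ in arm_service]
        u = [rng.choice(arm_usual) for _ in arm_usual]
        d_cost = sum(c for c, _ in s) / len(s) - sum(c for c, _ in u) / len(u)
        d_qaly = sum(q for _, q in s) / len(s) - sum(q for _, q in u) / len(u)
        if d_qaly:  # skip degenerate resamples with no QALY difference
            icers.append(d_cost / d_qaly)
    return icers

# Toy data: the service arm is cheaper and yields more QALYs.
service = [(30000.0, 3.0), (32000.0, 2.8), (28000.0, 3.2)]
usual = [(40000.0, 2.5), (38000.0, 2.4), (42000.0, 2.6)]
icers = bootstrap_icer(service, usual, n_boot=200)
```

The empirical 2.5th/97.5th percentiles of `icers` then give a bootstrap interval; a negative cost difference with a positive QALY gain corresponds to the "saved per QALY gained" framing reported above.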

  8. An open-label, non-randomized study of the pharmacokinetics of the nutritional supplement nicotinamide riboside (NR) and its effects on blood NAD+ levels in healthy volunteers.

    Directory of Open Access Journals (Sweden)

    Sophia E Airhart

    The co-primary objectives of this study were to determine the human pharmacokinetics (PK) of oral NR and the effect of NR on whole blood nicotinamide adenine dinucleotide (NAD+) levels. Though mitochondrial dysfunction plays a critical role in the development and progression of heart failure, no mitochondria-targeted therapies have been translated into clinical practice. Recent murine studies have reported associations between imbalances in the NADH/NAD+ ratio and mitochondrial dysfunction in multiple tissues, including myocardium. Moreover, an NAD+ precursor, nicotinamide mononucleotide, improved cardiac function, while another NAD+ precursor, nicotinamide riboside (NR), improved mitochondrial function in muscle, liver and brown adipose tissue. Thus, PK studies of NR in humans are critical for future clinical trials. In this non-randomized, open-label PK study of 8 healthy volunteers, 250 mg NR was orally administered on Days 1 and 2, then uptitrated to a peak dose of 1000 mg twice daily on Days 7 and 8. On the morning of Day 9, subjects completed a 24-hour PK study after receiving 1000 mg NR at t = 0. Whole-blood levels of NR, clinical blood chemistry, and NAD+ levels were analyzed. Oral NR was well tolerated with no adverse events. Significant increases comparing baseline to mean concentrations at steady state (Cave,ss) were observed for both NR (p = 0.03) and NAD+ (p = 0.001); the latter increased by 100%. Absolute changes from baseline to Day 9 in NR and NAD+ levels correlated highly (R2 = 0.72, p = 0.008). Because NR increases circulating NAD+ in humans, NR may have potential as a therapy in patients with mitochondrial dysfunction due to genetic and/or acquired diseases.

  9. A prospective, non-randomized phase II trial of Trastuzumab and Capecitabine in patients with HER2 expressing metastasized pancreatic cancer

    Directory of Open Access Journals (Sweden)

    Endlicher Esther

    2009-01-01

    Background Pancreatic cancer is the fourth most common cause of cancer-related death in Western countries. Advances in surgical techniques, radiation and chemotherapy have had almost no impact on the long-term survival of affected patients. Therefore, the need for better treatment strategies is urgent. HER2, a receptor tyrosine kinase of the EGFR family involved in signal transduction pathways leading to cell growth and differentiation, is overexpressed in a number of cancers, including breast and pancreatic cancer. While in breast cancer HER2 has already been successfully used as a treatment target, there are only limited data evaluating the effects of inhibiting HER2 tyrosine kinases in patients with pancreatic cancer. Methods Here we report the design of a prospective, non-randomized, multi-centered Phase II clinical study evaluating the effects of the fluoropyrimidine carbamate Capecitabine (Xeloda®) and the monoclonal anti-HER2 antibody Trastuzumab (Herceptin®) in patients with non-resectable, HER2-overexpressing pancreatic cancer. Patients eligible for the study will receive Trastuzumab infusions on days 1, 8 and 15 concomitant with the oral intake of Capecitabine from day 1 to day 14 of each three-week cycle. Cycles will be repeated until tumor progression. A total of 37 patients will be enrolled, with an interim analysis after 23 patients. Discussion The primary end point of the study is to determine progression-free survival after 12 weeks of bimodal treatment with the chemotherapeutic agent Capecitabine and the anti-HER2 antibody Trastuzumab. Secondary end points include patients' survival, toxicity analysis, quality of life, the correlation of HER2 overexpression and clinical response to Trastuzumab treatment and, finally, the correlation of CA19-9 plasma levels and progression-free intervals.

  10. Effectiveness of a peer-led HIV prevention intervention in secondary schools in Rwanda: results from a non-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Michielsen Kristien

    2012-09-01

    Background While the HIV epidemic is levelling off in sub-Saharan Africa, it remains at an unacceptably high level. Young people aged 15-24 years remain particularly vulnerable, resulting in a regional HIV prevalence of 1.4% in young men and 3.3% in young women. This study assesses the effectiveness of a peer-led HIV prevention intervention in secondary schools in Rwanda on young people's sexual behavior, HIV knowledge and attitudes. Methods In a non-randomized longitudinal controlled trial, fourteen schools were selected in two neighboring districts in Rwanda: Bugesera (intervention) and Rwamagana (control). Students (n = 1950) in eight intervention and six control schools participated in three surveys (baseline, six and twelve months into the intervention). Analysis was done using linear and logistic regression with generalized estimating equations adjusted for propensity score. Results The overall retention rate was 72%. Time trends in sexual risk behavior (being sexually active, sex in the last six months, condom use at last sex) were not significantly different in students from intervention and control schools, nor was the intervention associated with increased knowledge, perceived severity or perceived susceptibility. It did significantly reduce reported stigma. Conclusions Analyzing this and other interventions, we identified several reasons for the observed limited effectiveness of peer education: 1) intervention activities (spreading information) are not tuned to objectives (changing behavior); 2) young people prefer receiving HIV information from sources other than peers; 3) outcome indicators are not adequate, and the context of the relationship in which sex occurs is ignored. The effectiveness of peer education may increase through integration in holistic interventions and redefining peer educators' role as focal points for sensitization and referral to experts and services. Finally, we argue that a

  11. An open-label, non-randomized study of the pharmacokinetics of the nutritional supplement nicotinamide riboside (NR) and its effects on blood NAD+ levels in healthy volunteers.

    Science.gov (United States)

    Airhart, Sophia E; Shireman, Laura M; Risler, Linda J; Anderson, Gail D; Nagana Gowda, G A; Raftery, Daniel; Tian, Rong; Shen, Danny D; O'Brien, Kevin D

    2017-01-01

    The co-primary objectives of this study were to determine the human pharmacokinetics (PK) of oral NR and the effect of NR on whole blood nicotinamide adenine dinucleotide (NAD+) levels. Though mitochondrial dysfunction plays a critical role in the development and progression of heart failure, no mitochondria-targeted therapies have been translated into clinical practice. Recent murine studies have reported associations between imbalances in the NADH/NAD+ ratio and mitochondrial dysfunction in multiple tissues, including myocardium. Moreover, an NAD+ precursor, nicotinamide mononucleotide, improved cardiac function, while another NAD+ precursor, nicotinamide riboside (NR), improved mitochondrial function in muscle, liver and brown adipose tissue. Thus, PK studies of NR in humans are critical for future clinical trials. In this non-randomized, open-label PK study of 8 healthy volunteers, 250 mg NR was orally administered on Days 1 and 2, then uptitrated to a peak dose of 1000 mg twice daily on Days 7 and 8. On the morning of Day 9, subjects completed a 24-hour PK study after receiving 1000 mg NR at t = 0. Whole-blood levels of NR, clinical blood chemistry, and NAD+ levels were analyzed. Oral NR was well tolerated with no adverse events. Significant increases comparing baseline to mean concentrations at steady state (Cave,ss) were observed for both NR (p = 0.03) and NAD+ (p = 0.001); the latter increased by 100%. Absolute changes from baseline to Day 9 in NR and NAD+ levels correlated highly (R2 = 0.72, p = 0.008). Because NR increases circulating NAD+ in humans, NR may have potential as a therapy in patients with mitochondrial dysfunction due to genetic and/or acquired diseases.
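The reported R2 is the squared Pearson correlation of the paired per-subject changes. A minimal sketch with toy numbers (not study data):

```python
def pearson_r2(xs, ys):
    """Squared Pearson correlation between paired observations,
    e.g. per-subject changes in NR and NAD+ levels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Perfectly linear paired changes give R2 = 1.0.
print(pearson_r2([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))
```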

  12. Increasing capacity to deliver diabetes self-management education: results of the DESMOND lay educator non-randomized controlled equivalence trial.

    Science.gov (United States)

    Carey, M E; Mandalia, P K; Daly, H; Gray, L J; Hale, R; Martin Stacey, L; Taub, N; Skinner, T C; Stone, M; Heller, S; Khunti, K; Davies, M J

    2014-11-01

    To develop and test a format of delivery of diabetes self-management education by paired professional and lay educators. We conducted an equivalence trial with non-randomized participant allocation to a Diabetes Education and Self Management for Ongoing and Newly Diagnosed Type 2 diabetes (DESMOND) course, delivered in the standard format by two trained healthcare professional educators (to the control group) or by one trained lay educator and one professional educator (to the intervention group). A total of 260 people with Type 2 diabetes diagnosed within the previous 12 months were referred for self-management education as part of routine care and attended either a control or intervention format DESMOND course. The primary outcome measure was change in illness coherence score (derived from the Diabetes Illness Perception Questionnaire-Revised) between baseline and 4 months after attending education sessions. Secondary outcome measures included change in HbA1c level. The trial was conducted in four primary care organizations across England and Scotland. The 95% CI for the between-group difference in positive change in coherence scores was within the pre-set limits of equivalence (difference = 0.22, 95% CI -1.07 to 1.52). Equivalent changes related to secondary outcome measures were also observed, including equivalent reductions in HbA1c levels. Diabetes education delivered jointly by a trained lay person and a healthcare professional educator sharing the same educator role can provide equivalent patient benefits. This could provide a method that increases capacity, maintains quality and is cost-effective, while increasing access to self-management education. © 2014 The Authors. Diabetic Medicine © 2014 Diabetes UK.
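The equivalence criterion used above — the confidence interval for the between-group difference lying entirely inside pre-set limits — reduces to a bounds check. A sketch with hypothetical numbers; the trial's actual pre-set limits are not given in this summary:

```python
def within_equivalence(ci_low, ci_high, margin):
    """Equivalence is claimed when the whole confidence interval for the
    between-group difference lies strictly inside (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

# A CI of (-1.1, 1.5) sits inside hypothetical limits of +/-2 ...
print(within_equivalence(-1.1, 1.5, 2.0))   # -> True
# ... but a CI reaching 2.5 crosses the upper limit.
print(within_equivalence(-1.1, 2.5, 2.0))   # -> False
```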

  13. Effect of cognitive behavioral group therapy for recovery of self-esteem on community-living individuals with mental illness: Non-randomized controlled trial.

    Science.gov (United States)

    Kunikata, Hiroko; Yoshinaga, Naoki; Nakajima, Kazuo

    2016-10-01

    The aim of this study was to examine over a 12-month post-intervention period whether the participation of community-living individuals with mental illness in cognitive behavioral group therapy for recovery of self-esteem (CBGTRS) resulted in improved outcomes. This was a non-randomized controlled trial. The participants were persons with mental illness who resided in communities in the Chugoku region of Japan. In total, 41 were assigned to an experimental group (CBGTRS intervention, 12 group sessions), and 21 to a control group. Outcome indices (self-esteem, moods, cognition, subjective well-being, psychiatric symptoms) were measured for the experimental group prior to intervention (T0), immediately post-intervention (T1), and at 3 (T2) and 12 (T3) months post-intervention. The control group was measured at the same intervals. For the experimental group, self-esteem scores at T1, T2, and T3 were significantly higher than at T0. Moods and cognition scores remained significantly low until T2. Scores for Inadequate Mental Mastery in the subjective well-being index had not decreased by T3. Confidence in Coping remained significantly high until T2. Psychiatric symptoms scores at T1, T2, and T3 were significantly lower than at T0. The means and standard errors for self-esteem and Inadequate Mental Mastery increased until T3, and those for Tension-Anxiety, Depression-Dejection, and Confusion decreased until T2. From within-group trends and between-group differences in self-esteem, we conclude that CBGTRS may have a relatively long-term effect on self-esteem recovery. T2 is the turning point for moods and cognition; thus, follow-up is needed 3 months following the initial program. © 2016 The Authors. Psychiatry and Clinical Neurosciences © 2016 Japanese Society of Psychiatry and Neurology.

  14. Two-stage revision surgery with preformed spacers and cementless implants for septic hip arthritis: a prospective, non-randomized cohort study

    Directory of Open Access Journals (Sweden)

    Logoluso Nicola

    2011-05-01

    Background Outcome data on two-stage revision surgery for deep infection after septic hip arthritis are limited and inconsistent. This study presents the medium-term results of a new, standardized two-stage arthroplasty with preformed hip spacers and cementless implants in a consecutive series of adult patients with septic arthritis of the hip treated according to the same protocol. Methods Nineteen patients (20 hips) were enrolled in this prospective, non-randomized cohort study between 2000 and 2008. The first stage comprised femoral head resection, debridement, and insertion of a preformed, commercially available, antibiotic-loaded cement hip spacer. After eradication of infection, a cementless total hip arthroplasty was implanted in the second stage. Patients were assessed for infection recurrence, pain (visual analog scale [VAS]) and hip joint function (Harris Hip score). Results The mean time between first diagnosis of infection and revision surgery was 5.8 ± 9.0 months; the average duration of follow-up was 56.6 (range, 24 - 104) months; all 20 hips were successfully converted to a prosthesis an average of 22 ± 5.1 weeks after spacer implantation. Reinfection after total hip joint replacement occurred in 1 patient. The mean VAS pain score improved from 48 (range, 35 - 84) pre-operatively to 18 (range, 0 - 38) prior to spacer removal and to 8 (range, 0 - 15) at the last follow-up assessment after prosthesis implantation. The average Harris Hip score improved from 27.5 before surgery, to 61.8 between the two stages, to 92.3 at the final follow-up assessment. Conclusions Satisfactory outcomes can be obtained with two-stage revision hip arthroplasty using preformed spacers and cementless implants for prosthetic hip joint infections of various etiologies.

  15. Nonlinearity and thresholds in dose-response relationships for carcinogenicity due to sampling variation, logarithmic dose scaling, or small differences in individual susceptibility

    International Nuclear Information System (INIS)

    Lutz, W.K.; Gaylor, D.W.; Conolly, R.B.; Lutz, R.W.

    2005-01-01

    Nonlinear and threshold-like shapes of dose-response curves are often observed in tests for carcinogenicity. Here, we present three examples where an apparent threshold is spurious and can be misleading for low-dose extrapolation and human cancer risk assessment. Case 1: For experiments that are not replicated, such as rodent bioassays for carcinogenicity, random variation can lead to misinterpretation of the result. This situation was simulated by 20 random binomial samplings of 50 animals per group, assuming a true linear dose response from 5% to 25% tumor incidence at arbitrary dose levels 0, 0.5, 1, 2, and 4. Linearity was suggested by only 8 of the 20 simulations. Four simulations did not reveal the carcinogenicity at all. Three exhibited thresholds, and two showed nonmonotonic behavior with a decrease at low dose, followed by a significant increase at high dose ('hormesis'). Case 2: Logarithmic representation of the dose axis transforms a straight line into a sublinear (up-bent) curve, which can be misinterpreted to indicate a threshold. This is most pronounced if the dose scale includes a wide low-dose range. Linear regression of net tumor incidences and intersection with the dose axis results in an apparent threshold, even with an underlying true linear dose-incidence relationship. Case 3: Nonlinear shapes of dose-cancer incidence curves are rarely seen with epidemiological data in humans. The discrepancy with rodent data may in part be explained by a wider span of individual susceptibilities for tumor induction in humans, due to a more diverse genetic background and modulation by co-carcinogenic lifestyle factors. Linear extrapolation of a human cancer risk could therefore be appropriate even if animal bioassays show nonlinearity.
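Case 1 can be reproduced in a few lines. The sketch below assumes the linear truth p = 0.05 + 0.05 · dose implied by 5% incidence at dose 0 and 25% at dose 4; the function name and seeds are ours:

```python
import random

def simulate_bioassay(doses, n_animals=50, seed=None):
    """One unreplicated carcinogenicity bioassay: per dose group, draw
    the tumor-bearing animal count from a binomial with the true linear
    incidence p = 0.05 + 0.05 * dose (5% background up to 25% at dose 4)."""
    rng = random.Random(seed)
    counts = []
    for d in doses:
        p = 0.05 + 0.05 * d
        counts.append(sum(rng.random() < p for _ in range(n_animals)))
    return counts

# 20 simulated studies at the doses used in the abstract; random variation
# alone can make some runs look threshold-like or even hormetic.
doses = [0, 0.5, 1, 2, 4]
studies = [simulate_bioassay(doses, seed=s) for s in range(20)]
```

Plotting each row of `studies` against `doses` (incidence = count / 50) illustrates how often a truly linear response is misread as a threshold.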

  16. Continuous-flow liquid microjunction surface sampling probe connected on-line with high-performance liquid chromatography/mass spectrometry for spatially resolved analysis of small molecules and proteins.

    Science.gov (United States)

    Van Berkel, Gary J; Kertesz, Vilmos

    2013-06-30

    A continuous-flow liquid microjunction surface sampling probe extracts soluble material from surfaces for direct ionization and detection by mass spectrometry. Demonstrated here is the on-line coupling of such a probe with high-performance liquid chromatography/mass spectrometry (HPLC/MS) enabling extraction, separation and detection of small molecules and proteins from surfaces in a spatially resolved (~0.5 mm diameter spots) manner. A continuous-flow liquid microjunction surface sampling probe was connected to a six-port, two-position valve for extract collection and injection to an HPLC column. A QTRAP® 5500 hybrid triple quadrupole linear ion trap equipped with a Turbo V™ ion source operated in positive electrospray ionization (ESI) mode was used for all experiments. The system operation was tested with the extraction, separation and detection of propranolol and associated metabolites from drug dosed tissues, caffeine from a coffee bean, cocaine from paper currency, and proteins from dried sheep blood spots on paper. Confirmed in the tissue were the parent drug and two different hydroxypropranolol glucuronides. The mass spectrometric response for these compounds from different locations in the liver showed an increase with increasing extraction time (5, 20 and 40 s). For on-line separation and detection/identification of extracted proteins from dried sheep blood spots, two major protein peaks dominated the chromatogram and could be correlated with the expected masses for the hemoglobin α and β chains. Spatially resolved sampling, separation, and detection of small molecules and proteins from surfaces can be accomplished using a continuous-flow liquid microjunction surface sampling probe coupled on-line with HPLC/MS detection. Published in 2013. This article is a U.S. Government work and is in the public domain in the USA.

  17. A psycho-educational HIV/STI prevention intervention for internally displaced women in Leogane, Haiti: results from a non-randomized cohort pilot study.

    Science.gov (United States)

    Logie, Carmen H; Daniel, CarolAnn; Newman, Peter A; Weaver, James; Loutfy, Mona R

    2014-01-01

    Little evidence exists regarding efficacious HIV and sexually transmitted infection (STI) prevention interventions with internally displaced populations. Internally displaced women are at elevated risk for HIV/STI due to limited access to health services, heightened poverty and social network breakdown. The FASY (Famn an Aksyon Pou Sante' Yo) (Women Taking Action For Their Health) study examined the effectiveness of a peer health worker (PHW)-delivered psycho-educational HIV/STI pilot study with internally displaced women in Leogane, Haiti. This was a non-randomized cohort pilot study. Participants completed a computer-assisted pre-test programmed on Android tablet PCs, followed by an HIV/STI educational video-based session and a 6-week psycho-educational group program of weekly meetings. Participants completed a post-test upon completion of group sessions. The primary outcome was HIV knowledge; our pre-specified index of clinically significant change was an effect size of 0.30. Secondary outcomes included: STI knowledge, condom use, social support, resilient coping, depression and relationship control. We used mixed-effects regression to calculate mean outcome pre-post score change. This study was registered (clinicaltrials.gov, NCT01492829). Between January 1 and April 30, 2012 we assigned 200 participants to the study. The majority of participants (n = 176, 88%) completed the study and were followed up at 8 weeks, finishing April 30, 2012. Adjusted for socio-demographic characteristics, HIV knowledge (β = 4.81; 95% CI 4.36-5.26), STI knowledge (β = 0.84; 95% CI 0.70-0.99), condom use (AOR = 4.05, 95% CI 1.86-8.83), and depression (β = -0.63, 95% CI -0.88 to -0.39) scores showed statistically significant change post-intervention (p < 0.05). This pilot study evaluated a PHW-delivered psycho-educational HIV/STI prevention intervention among internally displaced women in post-earthquake Haiti. Pilot studies are an important approach to understanding the feasibility and scientific impacts of HIV prevention strategies in disaster contexts. Study results may inform HIV prevention interventions among internally displaced women in

  18. TP53 status and taxane-platinum versus platinum-based therapy in ovarian cancer patients: A non-randomized retrospective study

    Directory of Open Access Journals (Sweden)

    Markowska Janina

    2008-01-01

    Background Taxane-platinum therapy (TP) has replaced platinum-based therapy (PC or PAC), DNA-damaging chemotherapy, in the postoperative treatment of ovarian cancer patients; however, it is not always effective. The TP53 protein plays a differential role in the response to DNA-damaging agents and taxanes. We sought to define profiles of patients who benefit the most from TP and also of those who can be treated with PC. Methods We compared the effectiveness of PC/PAC (n = 253) and TP (n = 199) with respect to tumor TP53 accumulation in ovarian cancer patients with FIGO stage IIB-IV disease; this was a non-randomized retrospective study. Immunohistochemical analysis was performed on 452 archival tumors; univariate and multivariate analysis by the Cox and logistic regression models was performed in all patients and in subgroups with TP53 accumulation [TP53(+)] and without TP53 accumulation [TP53(-)]. Results The advantage of taxane-platinum therapy over platinum-based therapy was seen in the TP53(+), and not in the TP53(-), group. In the TP53(+) group taxane-platinum therapy enhanced the probability of complete remission (p = .018), platinum sensitivity (p = .014), platinum highly sensitive response (p = .038) and longer survival (OS, p = .008). Poor tumor differentiation diminished the advantage from taxane-platinum therapy in the TP53(+) group. In the TP53(-) group PC/PAC was at least as efficient as taxane-platinum therapy, and it enhanced the chance of a platinum highly sensitive response (p = .010). However, in the TP53(-) group taxane-platinum therapy possibly diminished the risk of death in patients over 53 yrs (p = .077). Among factors that positively interacted with taxane-platinum therapy in some analyses were endometrioid and clear cell type, FIGO III stage, bulky residual tumor, more advanced age of patient and moderate tumor differentiation. Conclusion Our results suggest that taxane-platinum therapy is particularly justified in patients with TP53(+) tumors or older

  19. Evaluation of an advanced pressure ulcer management protocol followed by trained wound, ostomy, and continence nurses: a non-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Kaitani T

    2015-02-01

    Toshiko Kaitani,1 Gojiro Nakagami,2 Junko Sugama,3 Masahiro Tachi,4 Yutaka Matsuyama,5 Yoshiki Miyachi,6 Takashi Nagase,2 Yukie Takemura,7 Hiromi Sanada2 1School of Nursing, Sapporo City University, Hokkaido, Japan; 2Department of Gerontological Nursing/Wound Care Management, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; 3Department of Clinical Nursing, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, Kanazawa, Japan; 4Department of Plastic Surgery, Tohoku University Graduate School of Medicine, Miyagi, Japan; 5Department of Biostatistics, School of Public Health, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; 6Department of Dermatology, Kyoto University Graduate School of Medicine, Kyoto, Japan; 7Department of Nursing, Research Hospital, The Institute of Medical Science, The University of Tokyo, Tokyo, Japan Aims and objectives: We investigated the effectiveness and safety of an advanced pressure ulcer (PU) management protocol comprising (1) ultrasonography to assess the deep tissue, (2) use of a non-contact thermometer to detect critical colonization, (3) conservative sharp debridement, (4) dressing selection, (5) negative pressure wound therapy, and (6) vibration therapy, in comparison with those of a conventional approach. Each protocol was followed by trained wound, ostomy, and continence nurses (WOCNs). Background: At present, there is no systematic PU management protocol for nurses that includes appropriate assessment and intervention techniques for deep tissue injury and critical colonization. In Japan, there is no such protocol that nurses can follow without a physician's orders. Design and methods: This was a prospective non-randomized controlled trial. Over a 3-week period, we evaluated the effectiveness of the advanced protocol by comparing PU severity and healing on the basis of the DESIGN-R scale and the presence of patients' discomfort. We recruited ten WOCNs to follow

  20. A psycho-educational HIV/STI prevention intervention for internally displaced women in Leogane, Haiti: results from a non-randomized cohort pilot study.

    Directory of Open Access Journals (Sweden)

    Carmen H Logie

    Full Text Available Little evidence exists regarding efficacious HIV and sexually transmitted infections (STI) prevention interventions with internally displaced populations. Internally displaced women are at elevated risk for HIV/STI due to limited access to health services, heightened poverty and social network breakdown. The FASY (Famn an Aksyon Pou Sante' Yo; Women Taking Action For Their Health) study examined the effectiveness of a peer health worker (PHW)-delivered psycho-educational HIV/STI pilot intervention with internally displaced women in Leogane, Haiti. This was a non-randomized cohort pilot study. Participants completed a computer-assisted pre-test programmed on Android tablet PCs followed by an HIV/STI educational video-based session and a 6-week psycho-educational group program of weekly meetings. Participants completed a post-test upon completion of group sessions. The primary outcome was HIV knowledge; our pre-specified index of clinically significant change was an effect size of 0.30. Secondary outcomes included: STI knowledge, condom use, social support, resilient coping, depression and relationship control. We used mixed-effects regression to calculate mean outcome pre-post score change. This study was registered (clinicaltrials.gov, NCT01492829). Between January 1 and April 30, 2012, we assigned 200 participants to the study. The majority of participants (n = 176, 88%) completed the study and were followed up at 8 weeks, finishing April 30, 2012. Adjusted for socio-demographic characteristics, HIV knowledge (β = 4.81; 95% CI 4.36-5.26), STI knowledge (β = 0.84; 95% CI 0.70-0.99), condom use (AOR = 4.05, 95% CI 1.86-8.83), and depression (β = -0.63, 95% CI -0.88 to -0.39) scores showed statistically significant change post-intervention (p < 0.05). This pilot study evaluated a PHW psycho-educational HIV/STI prevention intervention among internally displaced women in post-earthquake Haiti. Pilot studies are an important approach to understand feasibility and scientific
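
    The pre-specified index of clinically significant change above is a standardized effect size. As a minimal sketch (with hypothetical scores, not the study's data), a paired pre-post effect size can be computed as the mean change divided by the standard deviation of the changes:

```python
from statistics import mean, stdev

def cohens_d_paired(pre, post):
    """Standardized mean change for paired pre/post scores:
    d = mean(post - pre) / sd(post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / stdev(diffs)

# Hypothetical HIV-knowledge scores for five participants
pre = [10, 12, 9, 11, 10]
post = [15, 16, 14, 17, 14]
print(round(cohens_d_paired(pre, post), 2))
```

    A value above the pre-specified 0.30 threshold would count as clinically significant under the study's criterion.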

  1. Gender Segregation in Small Firms

    OpenAIRE

    Kenneth R Troske; William J Carrington

    1992-01-01

    This paper studies interfirm gender segregation in a unique sample of small employers. We focus on small firms because previous research on interfirm segregation has studied only large firms and because it is easier to link the demographic characteristics of employers and employees in small firms. This latter feature permits an assessment of the role of employer discrimination in creating gender segregation. Our first finding is that interfirm segregation is prevalent among small employers. I...

  2. Predictor sort sampling and one-sided confidence bounds on quantiles

    Science.gov (United States)

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

  3. "HOOF-Print" Genotyping and Haplotype Inference Discriminates among Brucella spp Isolates From a Small Spatial Scale

    Science.gov (United States)

    We demonstrate that the “HOOF-Print” assay provides high power to discriminate among Brucella isolates collected on a small spatial scale (within Portugal). Additionally, we illustrate how haplotype identification using non-random association among markers allows resolution of B. melitensis biovars ...

  4. Is the alpha-beta ratio of prostate cancer really low? A prospective, non-randomized trial comparing standard and hyperfractionated conformal radiation therapy

    International Nuclear Information System (INIS)

    Valdagni, Riccardo; Italia, Corrado; Montanaro, Paolo; Lanceni, Angelo; Lattuada, Paola; Magnani, Tiziana; Fiorino, Claudio; Nahum, Alan

    2005-01-01

    Background and purpose: The objectives of the current study were to compare genito-urinary (GU) and gastro-intestinal (GI) toxicities as well as biochemical control (bRFS) in prostate cancer, utilizing conventional (2.0 Gy daily) (STD) or hyperfractionated (HFX) conformal irradiation (CRT). HFX (1.2 Gy BID) was chosen as a radiobiological method to try to reduce long term sequelae without compromising local control. Patients and methods: Three-hundred-and-seventy consecutive patients (pts) entered this prospective, non-randomized trial in the period January 1993-January 2003; 209 were treated with STD and 161 with HFX CRT. All were evaluable for acute toxicity analysis, 179 (STD) and 151 pts (HFX) being evaluable for late sequelae and bRFS analyses. Pt characteristics were not statistically different in the two groups. CRT consisted of a 4-field technique for prostate and/or pelvic nodes and a 5-field boost with rectal shielding. Median doses were 74 and 79.2 Gy for STD and HFX patients respectively, the latter dose being isoeffective for tumour control assuming α/β = 10 (EQD2 = 73.9 Gy). Median follow-up was 29.4 months (25.2 mos for STD; 37.7 mos for HFX; P<0.01). The two regimens were compared in terms of acute and late GU and GI toxicities and 5-year bRFS by univariate and multivariate analyses. Results: Acute grade ≥2 GU toxicity was higher in the STD group (48.6% versus 37.3% in HFX, P=0.03), while no significant difference was found for acute GI toxicity. Late grade ≥2 GU and GI toxicities were lower in the HFX group (5-year actuarial rate: GU: 10.1% versus 20.3%, P=0.05; GI: 6.0% versus 10.6%, P=0.18). Five-year bRFS were 70% (±13.8%, 95% CI) and 82.6% (±7.2%) for STD and HFX, respectively (P=0.44); a trend favouring HFX was found in the subgroup of pts who did not receive hormonal therapy (5-year bRFS: 85.9%±12.4% versus 63.9%±23.8%, P=0.15). Multivariate analysis revealed only risk groups and age statistically related to bRFS but not fractionation
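
    The isoeffective dose quoted above follows from the standard linear-quadratic conversion to an equivalent dose in 2-Gy fractions. A minimal sketch, using the study's own figures (79.2 Gy delivered at 1.2 Gy per fraction, α/β = 10):

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2-Gy fractions (linear-quadratic model):
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2 + alpha_beta)

# HFX arm: 79.2 Gy in 1.2 Gy fractions, assuming alpha/beta = 10
print(round(eqd2(79.2, 1.2, 10), 1))  # 73.9, matching the reported EQD2
```

    The same function shows why the low α/β debated in the title matters: with α/β = 1.5 the identical 79.2 Gy schedule is worth only about 61 Gy in 2-Gy equivalents.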

  5. EGFR T790M mutation testing of non-small cell lung cancer tissue and blood samples artificially spiked with circulating cell-free tumor DNA: results of a round robin trial.

    Science.gov (United States)

    Fassunke, Jana; Ihle, Michaela Angelika; Lenze, Dido; Lehmann, Annika; Hummel, Michael; Vollbrecht, Claudia; Penzel, Roland; Volckmar, Anna-Lena; Stenzinger, Albrecht; Endris, Volker; Jung, Andreas; Lehmann, Ulrich; Zeugner, Silke; Baretton, Gustavo; Kreipe, Hans; Schirmacher, Peter; Kirchner, Thomas; Dietel, Manfred; Büttner, Reinhard; Merkelbach-Bruse, Sabine

    2017-10-01

    The European Commission (EC) recently approved osimertinib for the treatment of adult patients with locally advanced or metastatic non-small-cell lung cancer (NSCLC) harboring EGFR T790M mutations. Besides tissue-based testing, blood samples containing cell-free circulating tumor DNA (ctDNA) can be used to interrogate T790M status. Herein, we describe the conditions and results of a round robin trial (RRT) for T790M mutation testing in NSCLC tissue specimens and peripheral blood samples spiked with cell line DNA mimicking tumor-derived ctDNA. The underlying objectives of this two-stage external quality assessment (EQA) approach were (a) to evaluate the accuracy of T790M mutation testing across multiple centers and (b) to investigate whether liquid biopsy-based testing for T790M mutations in spiked blood samples is feasible in routine diagnostics. Based on a successfully completed internal phase I RRT, an open RRT for EGFR T790M mutation testing in tumor tissue and blood samples was initiated. In total, 48 pathology centers participated in the EQA. Of these, 47 (97.9%) centers submitted their analyses within the pre-defined time frame, and 44 (tissue) and 40 (plasma), respectively, successfully passed the test. The overall success rates in the RRT phase II were 91.7% (tissue) and 83.3% (blood), respectively. Thirty-eight out of 48 participants (79.2%) successfully passed both parts of the RRT. The RRT for blood-based EGFR testing initiated in Germany is, to the best of our knowledge, the first of its kind in Europe. In summary, our results demonstrate that blood-based genotyping for EGFR resistance mutations can be successfully integrated into routine molecular diagnostics, complementing the array of molecular methods already available at pathology centers in Germany.

  6. Enabling optical metrology on small 5×5 µm² in-cell targets to support flexible sampling and higher order overlay and CD control for advanced logic device nodes

    Science.gov (United States)

    Salerno, Antonio; de la Fuente, Isabel; Hsu, Zack; Tai, Alan; Chang, Hammer; McNamara, Elliott; Cramer, Hugo; Li, Daoping

    2018-03-01

    In next generation Logic devices, overlay control requirements shrink to sub-2.5 nm on-product overlay. Historically, on-product overlay has been defined by the overlay capability of after-develop in-scribe targets. However, due to their design and dimensions, the after-development metrology targets are not completely representative of the final overlay of the device. In addition, they are confined to the scribe-lane area, which limits the sampling possibilities. To address these two issues, metrology on structures matching the device structure, and which can be sampled with high density across the device, is required. Conventional after-etch CDSEM techniques on logic devices present difficulties in discerning the layers of interest, potentially destructive charging effects and, finally, long measurement times [1][2][3]. Altogether, these limit the sampling densities and make CDSEM less attractive for control applications. Optical metrology can overcome most of these limitations. Such measurement, however, does require repetitive structures. This requirement is not fulfilled by logic devices, as the features vary in pitch and CD over the exposure field. The solution is to use small targets, with a maximum pad size of 5×5 µm², which can easily be placed in the logic cell area. These targets share the process and architecture of the device features of interest, but with a modified design that replicates the device layout as closely as possible, allowing for in-device metrology for both CD and overlay. This solution enables measuring closer to the actual product feature location and, not being limited to scribe-lanes, opens the possibility of higher-density sampling schemes across the field. In summary, these targets become the facilitator of in-device metrology (IDM), enabling measurement of both in-device overlay and the CD parameters of interest, and can deliver accurate, high-throughput, dense after-etch measurements for Logic

  7. Stereotactic body radiotherapy for medically inoperable patients with stage I non-small cell lung cancer - A first report of toxicity related to COPD/CVD in a non-randomized prospective phase II study

    International Nuclear Information System (INIS)

    Baumann, Pia; Nyman, Jan; Hoyer, Morten; Gagliardi, Giovanna; Lax, Ingmar; Wennberg, Berit; Drugge, Ninni; Ekberg, Lars; Friesland, Signe; Johansson, Karl-Axel; Lund, Jo-Asmund; Morhed, Elisabeth; Nilsson, Kristina; Levin, Nina; Paludan, Merete; Sederholm, Christer; Traberg, Anders; Wittgren, Lena; Lewensohn, Rolf

    2008-01-01

    Background and Aims: In a retrospective study using stereotactic body radiotherapy (SBRT) in medically inoperable patients with stage I NSCLC we previously reported a local control rate of 88% utilizing a median dose of 15 Gy × 3. This report records the toxicity encountered in a prospective phase II trial, and its relation to coexisting chronic obstructive pulmonary disease (COPD) and cardiovascular disease (CVD). Material and methods: Sixty patients were entered in the study between August 2003 and September 2005. Fifty-seven patients (T1 65%, T2 35%) with a median age of 75 years (59-87 years) were evaluable. The baseline mean FEV1% was 64% and median Karnofsky index was 80. A total dose of 45 Gy was delivered in three fractions at the 67% isodose of the PTV. Clinical, pulmonary and radiological evaluations were made at 6 weeks, 3, 6, 9, 12, 18, and 36 months post-SBRT. Toxicity was graded according to CTC v2.0 and performance status was graded according to the Karnofsky scale. Results: At a median follow-up of 23 months, 2 patients had relapsed locally. No grade 4 or 5 toxicity was reported. Grade 3 toxicity was seen in 12 patients (21%). There was no significant decline of FEV1% during follow-up. Low grade pneumonitis developed to the same extent in the CVD (3/17, 18%) and COPD (7/40, 18%) groups. The incidence of fibrosis was 9/17 (53%) and of pleural effusion 8/17 (47%) in the CVD group, compared with 13/40 (33%) and 5/40 (13%) in the COPD group. Conclusion: SBRT for stage I NSCLC patients who are medically inoperable because of COPD and CVD results in a favourable local control rate with a low incidence of grade 3 and no grade 4 or 5 toxicity

  8. Small Data

    NARCIS (Netherlands)

    S. Pemberton (Steven)

    2014-01-01

    The term “Open Data” often goes hand in hand with the term “Big Data”, where large data sets get released allowing for analysis, but the Cinderella of the Open Data ball is Small Data, small amounts of data, nonetheless possibly essential, that are too small to be put in some database or

  9. KONTAMINASI MERKURI PADA SAMPEL LINGKUNGAN DAN FAKTOR RISIKO PADA MASYARAKAT DARI KEGIATAN PENAMBANGAN EMAS SKALA KECIL KRUENG SABEE PROVINSI ACEH (Mercury Contamination in the Environmental Samples and Risk Factors in Inhabitants of the Small Scale Gold

    Directory of Open Access Journals (Sweden)

    Sofia Sofia

    2016-09-01

    Full Text Available ABSTRACT (translated from Indonesian) Small-scale gold mining using the amalgamation technique can introduce mercury (Hg) into the environment and humans. A study of Hg contamination in drinking water, fish and human head hair, and of risk factors in humans, was conducted in the Krueng Sabee area, Aceh Province. Sample collection and Hg testing followed SNI, EPA and WHO procedures. A cross-sectional survey was carried out in four villages with 72 randomly selected respondents; interviews using a structured questionnaire obtained information on health risk factors. Hg concentrations were measured by Cold Vapor Atomic Absorption Spectrophotometry for water and fish samples and by Inductively Coupled Plasma Mass Spectrometry for head-hair samples. Data were analysed by analysis of variance, independent-samples t-test and one-sample t-test, and a prediction model was produced by multiple linear regression. The results showed Hg concentrations of 0.24 ± 0.25 µg/L in well-water samples; in fish, 149.46 ± 2.00 µg/g for Rastrellinger kanagurta, 58.6 ± 3.01 µg/g for Selaroides sp. and 46.3 ± 2.98 µg/g for Euthynnus affinis; and in head hair from 11.2 ± 4.02 µg/g to 48.3 ± 22.29 µg/g. The risk factors influencing respondents' Hg concentrations were working status in Krueng Sabee, location, length of residence, miner status and duration of amalgam-burner use; together these factors accounted for 45.8% of the Hg accumulation in respondents' head hair. ABSTRACT Small-scale gold mining activities with an amalgamation process can contribute to the entry of mercury (Hg) into the environment and humans. Research on Hg contamination in drinking water, fish, human head hair, and risk factors has been conducted in the area of Krueng Sabee, Aceh Province. Methods of samples collection and Hg concentrations testing conducted

  10. 'They say Islam has a solution for everything, so why are there no guidelines for this?' Ethical dilemmas associated with the births and deaths of infants with fatal abnormalities from a small sample of Pakistani Muslim couples in Britain.

    Science.gov (United States)

    Shaw, Alison

    2012-11-01

    This paper presents ethical dilemmas concerning the termination of pregnancy, the management of childbirth, and the withdrawal of life-support from infants in special care, for a small sample of British Pakistani Muslim parents of babies diagnosed with fatal abnormalities. Case studies illustrating these dilemmas are taken from a qualitative study of 66 families of Pakistani origin referred to a genetics clinic in Southern England. The paper shows how parents negotiated between the authoritative knowledge of their doctors, religious experts, and senior family members in response to the ethical dilemmas they faced. There was little knowledge or open discussion of the view that Islam permits the termination of pregnancy for serious or fatal abnormality within 120 days and there was considerable disquiet over the idea of ending a pregnancy. For some parents, whether their newborn baby would draw breath was a main worry, with implications for the baby's Muslim identity and for the recognition of loss the parents would receive from family and community. This concern sometimes conflicted with doctors' concerns to minimize risk to future pregnancies by not performing a Caesarean delivery if a baby is sure to die. The paper also identifies parents' concerns and feelings of wrong-doing regarding the withdrawal of artificial life-support from infants with multiple abnormalities. The conclusion considers some of the implications of these observations for the counselling and support of Muslim parents following the pre- or neo-natal diagnosis of fatal abnormalities in their children. © 2011 Blackwell Publishing Ltd.

  11. Small Data

    OpenAIRE

    Pemberton, Steven

    2014-01-01

    The term “Open Data” often goes hand in hand with the term “Big Data”, where large data sets get released allowing for analysis, but the Cinderella of the Open Data ball is Small Data, small amounts of data, nonetheless possibly essential, that are too small to be put in some database or online dataset to be put to use. RDFa is a technology that allows Cinderella to go to the ball.

  12. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner-radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  13. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
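
    The Horvitz–Thompson approach mentioned above weights each sampled unit by the inverse of its inclusion probability, giving an unbiased estimate of a population total under unequal-probability sampling. A minimal sketch with hypothetical values and inclusion probabilities:

```python
def horvitz_thompson_total(values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total:
    sum of y_i / pi_i over the sampled units."""
    return sum(y / p for y, p in zip(values, inclusion_probs))

# Hypothetical sample of 3 units drawn with unequal inclusion probabilities
y = [4.0, 10.0, 6.0]
pi = [0.2, 0.5, 0.3]
print(horvitz_thompson_total(y, pi))
```

    Units that were unlikely to be sampled carry more weight, which is what makes the estimator design-unbiased.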

  14. Infant-mother attachment can be improved through group intervention: a preliminary evaluation in Spain in a non-randomized controlled trial.

    Science.gov (United States)

    Torres, Bárbara; Alonso-Arbiol, Itziar; Cantero, María José; Abubakar, Amina

    2011-11-01

    The quality of infant-mother attachment has been linked to competence in different domains of child development. Research indicates that early intervention can enhance the quality of infant-mother attachment, though its efficacy in a group format has yet to be evaluated. The current study is aimed at examining the usefulness of a group intervention in enhancing infant-mother attachment. An intervention aimed at addressing aspects such as maternal responsivity, sensitivity and childrearing behaviour was developed by the researchers and experienced psychologists. The intervention spanned a period of 14 months starting from the third quarter of pregnancy. The intervention was evaluated among 24 mothers from the Basque region of Spain. The sample consisted of children of both genders in a similar proportion: 45.8% were boys and 54.2% were girls. The children in this sample were full-term born and did not present symptoms of any serious pre- or postnatal complications. The intervention had a statistically non-significant medium effect. Infants whose mothers had received the intervention showed higher rates of secure attachment compared to children from the control group, as assessed by the Strange Situation observation procedure. A potentially significant confounding variable, maternal attachment, was balanced across the intervention and comparison groups. We can tentatively point out that a group intervention may enhance the quality of infant-mother attachment. Nevertheless, because the study design was not randomized, the results of this study remain preliminary and need replication in a full randomized controlled trial designed study.

  15. Sampling in epidemiological research: issues, hazards and pitfalls

    Science.gov (United States)

    Tyrer, Stephen; Heyman, Bob

    2016-01-01

    Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficient number of those selected actually complete a survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research. PMID:27087985
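
    The distinction drawn above between probability and convenience samples can be sketched in a few lines (hypothetical sampling frame, not data from the editorial):

```python
import random

# Hypothetical sampling frame of 1,000 ordered units
population = list(range(1000))

random.seed(42)
# Probability sample: every unit has a known, equal chance of selection
prob_sample = random.sample(population, 50)

# Convenience sample: the first 50 units to hand -- easy, but biased
convenience_sample = population[:50]

print(sum(convenience_sample) / 50)  # 24.5, far from the frame mean of 499.5
```

    The convenience sample's mean is wildly unrepresentative of the frame, illustrating the sampling bias the editorial warns about; the random sample carries no such systematic distortion.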

  16. Impact of the β-Lactam Resistance Modifier (−)-Epicatechin Gallate on the Non-Random Distribution of Phospholipids across the Cytoplasmic Membrane of Staphylococcus aureus

    Directory of Open Access Journals (Sweden)

    Helena Rosado

    2015-07-01

    Full Text Available The polyphenol (−)-epicatechin gallate (ECg) inserts into the cytoplasmic membrane (CM) of methicillin-resistant Staphylococcus aureus (MRSA) and reversibly abrogates resistance to β-lactam antibiotics. ECg elicits an increase in MRSA cell size and induces thickened cell walls. As ECg partially delocalizes penicillin-binding protein PBP2 from the septal division site, reduces PBP2 and PBP2a complexation and induces CM remodelling, we examined the impact of ECg membrane intercalation on phospholipid distribution across the CM and determined if ECg affects the equatorial, orthogonal mode of division. The major phospholipids of the staphylococcal CM, lysylphosphatidylglycerol (LPG), phosphatidylglycerol (PG), and cardiolipin (CL), were distributed in highly asymmetric fashion; 95%–97% of LPG was associated with the inner leaflet whereas PG (~90%) and CL (~80%) were found predominantly in the outer leaflet. ECg elicited small, significant changes in LPG distribution. Atomic force microscopy established that ECg-exposed cells divided in similar fashion to control bacteria, with a thickened band of encircling peptidoglycan representing the most recent plane of cell division, less distinct ribs indicative of previous sites of orthogonal division and concentric rings and “knobbles” representing stages of peptidoglycan remodelling during the cell cycle. Preservation of staphylococcal membrane lipid asymmetry and mode of division in sequential orthogonal planes appear key features of ECg-induced stress.

  17. Patient Safety Outcomes in Small Urban and Small Rural Hospitals

    Science.gov (United States)

    Vartak, Smruti; Ward, Marcia M.; Vaughn, Thomas E.

    2010-01-01

    Purpose: To assess patient safety outcomes in small urban and small rural hospitals and to examine the relationship of hospital and patient factors to patient safety outcomes. Methods: The Nationwide Inpatient Sample and American Hospital Association annual survey data were used for analyses. To increase comparability, the study sample was…

  18. Non-Random Distribution of 5S rDNA Sites and Its Association with 45S rDNA in Plant Chromosomes.

    Science.gov (United States)

    Roa, Fernando; Guerra, Marcelo

    2015-01-01

    5S and 45S rDNA sites are the best mapped chromosome regions in eukaryotic chromosomes. In this work, a database was built gathering information about the position and number of 5S rDNA sites in 784 plant species, aiming to identify patterns of distribution along the chromosomes and its correlation with the position of 45S rDNA sites. Data revealed that in most karyotypes (54.5%, including polyploids) two 5S rDNA sites (a single pair) are present, with 58.7% of all sites occurring in the short arm, mainly in the proximal region. In karyotypes of angiosperms with only 1 pair of sites (single sites) they are mostly found in the proximal region (52.0%), whereas in karyotypes with multiple sites the location varies according to the average chromosome size. Karyotypes with multiple sites and small chromosomes (6 µm) more commonly show terminal or interstitial sites. In species with holokinetic chromosomes, the modal value of sites per karyotype was also 2, but they were found mainly in a terminal position. Adjacent 5S and 45S rDNA sites were often found in the short arm, reflecting the preferential distribution of both sites in this arm. The high frequency of genera with at least 1 species with adjacent 5S and 45S sites reveals that this association appeared several times during angiosperm evolution, but it has been maintained only rarely as the dominant array in plant genera. © 2015 S. Karger AG, Basel.

  19. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
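
    The balancing idea described above can be sketched crudely: among candidate samples, retain only those whose covariate mean equals the known population mean. (Real balanced designs such as the cube method achieve this by construction; the exhaustive search below is only feasible for a toy frame.)

```python
import random
from itertools import combinations
from statistics import mean

# Toy frame: covariate x with a known population mean
x = [1, 2, 3, 4, 5, 6]
pop_mean = mean(x)  # 3.5

# Keep only size-2 samples balanced on x (sample mean == population mean)
balanced = [s for s in combinations(range(len(x)), 2)
            if mean(x[i] for i in s) == pop_mean]

random.seed(1)
chosen = random.choice(balanced)  # draw one balanced sample at random
print(sorted(x[i] for i in chosen))
```

    Any sample drawn this way reproduces the covariate mean exactly, which is what makes balanced designs attractive when the soil property of interest is linearly related to the covariate.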

  20. Small hydro

    International Nuclear Information System (INIS)

    Bennett, K.; Tung, T.

    1995-01-01

    A small hydro plant in Canada is defined as any project between 1 MW and 15 MW but the international standard is 10 MW. The global market for small hydro development was considered good. There are some 1000 to 2000 MW of generating capacity being added each year. In Canada, growth potential is considered small, primarily in remote areas, but significant growth is anticipated in Eastern Europe, Africa and Asia. Canada with its expertise in engineering, manufacturing and development is considered to have a good chance to take advantage of these growing markets

  1. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
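
    Basic Thompson sampling, which ensemble sampling approximates, is tractable in the Bernoulli-bandit special case: each arm keeps a Beta posterior over its win rate, and at each step the arm with the highest posterior draw is played. A minimal sketch with hypothetical win rates (not the paper's code):

```python
import random

def thompson_step(successes, failures):
    """One Thompson-sampling step for Bernoulli arms: draw a win rate
    from each arm's Beta(successes+1, failures+1) posterior and play
    the arm with the highest draw."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return draws.index(max(draws))

random.seed(0)
true_p = [0.3, 0.6]  # two hypothetical arms; arm 1 is better
succ, fail = [0, 0], [0, 0]
for _ in range(2000):
    arm = thompson_step(succ, fail)
    if random.random() < true_p[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1
print(succ[1] + fail[1] > succ[0] + fail[0])  # better arm played more often
```

    Ensemble sampling replaces the exact posterior with a small ensemble of models so the same draw-and-maximize step remains tractable for complex models such as neural networks.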

  2. Non-randomized therapy trial to determine the safety and efficacy of heavy ion radiotherapy in patients with non-resectable osteosarcoma

    Directory of Open Access Journals (Sweden)

    Bischof Marc

    2010-03-01

    Full Text Available Abstract Background Osteosarcoma is the most common primary malignant bone tumor in children and adolescents. For effective treatment, local control of the tumor is absolutely critical, because the chances of long term survival are EURAMOS1. Local photon radiotherapy has previously been used in small series and in an uncontrolled, highly individualized fashion, which, however, documented that high dose radiotherapy can, in principle, be used to achieve local control. Generally the radiation dose that is necessary for a curative approach can hardly be achieved with conventional photon radiotherapy in patients with non-resectable tumors that are usually located near radiosensitive critical organs such as the brain, the spine or the pelvis. In these cases particle radiotherapy (proton therapy (PT)/heavy ion therapy (HIT)) may offer a promising new alternative. Moreover, compared with photons, heavy ion beams provide a higher physical selectivity because of their finite depth coverage in tissue. They achieve a higher relative biological effectiveness. Phase I/II dose escalation studies of HIT in adults with non-resectable bone and soft tissue sarcomas have already shown favorable results. Methods/Design This is a monocenter, single-arm study for patients ≥ 6 years of age with non-resectable osteosarcoma. The desired target dose is 60-66 Cobalt Gray Equivalent (GyE) with 45 Gy PT (proton therapy) and a carbon ion boost of 15-21 GyE. Weekly fractionation of 5-6 × 3 GyE is used. PT/HIT will be administered exclusively at the Ion Radiotherapy Center in Heidelberg. Furthermore, FDG-PET imaging characteristics of non-resectable osteosarcoma before and after PT/HIT will be investigated prospectively. Systemic disease before and after PT/HIT is targeted by standard chemotherapy protocols and is not part of this trial. Discussion The primary objectives of this trial are the determination of feasibility and toxicity of HIT. Secondary objectives are tumor response

  3. Effectiveness of an intervention in increasing the provision of preventive care by community mental health services: a non-randomized, multiple baseline implementation trial.

    Science.gov (United States)

    Bartlem, Kate M; Bowman, Jenny; Freund, Megan; Wye, Paula M; Barker, Daniel; McElwaine, Kathleen M; Wolfenden, Luke; Campbell, Elizabeth M; McElduff, Patrick; Gillham, Karen; Wiggers, John

    2016-04-02

    Relative to the general population, people with a mental illness are more likely to have modifiable chronic disease health risk behaviours. Care to reduce such risks is not routinely provided by community mental health clinicians. This study aimed to determine the effectiveness of an intervention in increasing the provision of preventive care by such clinicians addressing four chronic disease risk behaviours. A multiple baseline trial was undertaken in two groups of community mental health services in New South Wales, Australia (2011-2014). A 12-month practice change intervention was sequentially implemented in each group. Outcome data were collected continuously via telephone interviews with a random sample of clients over a 3-year period, from 6 months pre-intervention in the first group, to 6 months post intervention in the second group. Outcomes were client-reported receipt of assessment, advice and referral for tobacco smoking, harmful alcohol consumption, inadequate fruit and/or vegetable consumption and inadequate physical activity and for the four behaviours combined. Logistic regression analyses examined change in client-reported receipt of care. There was an increase in assessment for all risks combined following the intervention (18 to 29 %; OR 3.55, p = 0.002: n = 805 at baseline, 982 at follow-up). No significant change in assessment, advice or referral for each individual risk was found. The intervention had a limited effect on increasing the provision of preventive care. Further research is required to determine how to increase the provision of preventive care in community mental health services. Australian and New Zealand Clinical Trials Registry ACTRN12613000693729.
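
    The reported change in assessment from 18% to 29% corresponds to a crude odds ratio of about 1.86; the study's OR of 3.55 is adjusted for covariates, which is why the two differ. A sketch of the crude calculation from the two prevalences:

```python
def odds_ratio(p1, p2):
    """Crude odds ratio for a change in prevalence from p1 to p2:
    [p2 / (1 - p2)] / [p1 / (1 - p1)]."""
    return (p2 / (1 - p2)) / (p1 / (1 - p1))

# Assessment of all four risks combined rose from 18% to 29%
print(round(odds_ratio(0.18, 0.29), 2))
```

    The gap between the crude and adjusted figures is a reminder that logistic-regression ORs from clustered designs like this one cannot be read straight off the raw percentages.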

  4. Effectiveness of a multi-strategy intervention in increasing the implementation of vegetable and fruit breaks by Australian primary schools: a non-randomized controlled trial.

    Science.gov (United States)

    Nathan, Nicole; Wolfenden, Luke; Bell, Andrew C; Wyse, Rebecca; Morgan, Philip J; Butler, Michelle; Sutherland, Rachel; Milat, Andrew J; Hector, Debra; Wiggers, John

    2012-08-13

    Limited evidence exists describing the effectiveness of strategies in facilitating the implementation of vegetable and fruit programs by schools on a population-wide basis. The aim of this study was to examine the effectiveness of a multi-strategy intervention in increasing the population-wide implementation of vegetable and fruit breaks by primary schools and to determine if intervention effectiveness varied by school characteristics. A quasi-experimental study was conducted in primary schools in the state of New South Wales, Australia. All primary schools in one region of the state (n = 422) received a multi-strategy intervention. A random sample of schools (n = 406) in the remainder of the state served as comparison schools. The multi-strategy intervention to increase vegetable and fruit breaks involved the development and provision of: program consensus and leadership; staff training; program materials; incentives; follow-up support; and implementation feedback. Comparison schools had access to routine information-based Government support. Data to assess the prevalence of vegetable and fruit breaks were collected by telephone from Principals of the intervention and comparison schools at baseline (2006-2007) and 11 to 15 months following the commencement of the intervention (2009-2010). GEE analysis was used to examine the change in the prevalence of vegetable and fruit breaks in intervention schools compared to comparison schools. At follow-up, the prevalence of vegetable and fruit breaks increased significantly in both intervention (50.3% to 82.0%) and comparison schools. A multi-strategy intervention can significantly increase the implementation of vegetable and fruit breaks by a large number of Australian primary schools.

  5. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR)

    Directory of Open Access Journals (Sweden)

    Bochaton Audrey

    2007-06-01

    Background: Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods: We used a two-stage selection procedure with a first, non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters which, as a whole, would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application: We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion: This paper describes the conceptual reasoning behind the construction of the survey sample.

  6. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    Science.gov (United States)

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be
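    The first-stage idea described above, deliberately choosing a set of clusters whose covariate profile matches the population, can be sketched as a small optimization. Everything below is a hypothetical illustration: the cluster names, covariate values and population targets are fabricated, not taken from the Vientiane survey.

```python
import itertools
import random

random.seed(0)

# Fabricated sampling frame: 12 clusters, each with known determinants
# (literacy rate and share of one nationality, in the spirit of the survey).
clusters = {
    f"c{i}": {"literacy": random.uniform(0.4, 0.95),
              "nationality_a": random.uniform(0.1, 0.9)}
    for i in range(12)
}
# Assumed city-wide targets for the same determinants.
population = {"literacy": 0.70, "nationality_a": 0.45}

def distance(combo):
    """Mean absolute gap between the combo's average profile and the targets."""
    gaps = [abs(sum(clusters[c][k] for c in combo) / len(combo) - v)
            for k, v in population.items()]
    return sum(gaps) / len(gaps)

# Score every 4-cluster combination and keep the one closest to the population.
best = min(itertools.combinations(clusters, 4), key=distance)
print(sorted(best), round(distance(best), 4))
```

    With only a dozen clusters an exhaustive search is feasible; for larger frames a greedy or simulated-annealing selection would serve the same purpose.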

  7. 7 CFR 201.42 - Small containers.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Small containers. 201.42 Section 201.42 Agriculture... REGULATIONS Sampling in the Administration of the Act § 201.42 Small containers. In sampling seed in small containers that it is not practical to sample as required in § 201.41, a portion of one unopened container or...

  8. Development of an evaluation method for fracture mechanical tests on small samples based on a cohesive zone model; Entwicklung einer Auswertemethode fuer bruchmechanische Versuche an kleinen Proben auf der Basis eines Kohaesivzonenmodells

    Energy Technology Data Exchange (ETDEWEB)

    Mahler, Michael

    2016-07-01

    The safety and reliability of fourth-generation nuclear power plants is an important issue. It rests on a reliable design of the components, for which, among others, fracture-mechanical material properties are required. The irradiation present in the power plants significantly affects the material properties, which therefore need to be determined on irradiated material. Often only small amounts of irradiated material are available for characterization. In that case it is not possible to manufacture the sufficiently large specimens required for fracture-mechanical testing in agreement with the standards, and small specimens must be used. From this follows the idea of this study, in which fracture toughness is predicted with the developed method based on tests of small specimens. For this purpose, the fracture process including crack growth is described with a continuum-mechanical approach using the finite element method and a cohesive zone model. The experiments on small specimens are used for parameter identification of the cohesive zone model. Its two parameters are determined by tensile tests on notched specimens (cohesive stress) and by parameter fitting to the fracture behavior of small specimens (cohesive energy). To account for the different triaxialities of the specimens, the cohesive stress is used as a function of triaxiality. After parameter identification, a large specimen can be simulated with the cohesive zone parameters derived from small specimens. The predicted fracture toughness of this large specimen fulfills the size requirements of the standards (ASTM E1820 or ASTM E399), in contrast to the small specimen. The method can be used for ductile and brittle material behavior and was validated in this work. In summary, it offers the possibility to determine fracture toughness indirectly from small-specimen testing; its main advantage is the small specimen volume required, which massively reduces the amount of irradiated material needed.

  9. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  10. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
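    Three of the probability designs listed in this overview can be illustrated in a few lines. This is a minimal generic sketch, not code from the article; the sampling frame of 100 patient IDs split across two clinics is fabricated for illustration.

```python
import random

random.seed(42)
population = list(range(1, 101))  # hypothetical frame of 100 patient IDs

# Simple random sampling: each element has an equal, independent chance.
srs = random.sample(population, 10)

# Systematic sampling: every k-th element after a random start.
k = len(population) // 10          # sampling interval
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: draw within predefined strata (e.g. two clinics).
strata = {"clinic_a": population[:60], "clinic_b": population[60:]}
stratified = [unit for group in strata.values()
              for unit in random.sample(group, 5)]

print(len(srs), len(systematic), len(stratified))  # 10 10 10
```

    Cluster and multi-stage sampling follow the same pattern, with whole groups (e.g. hospitals) drawn first and individuals drawn within them.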

  11. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  12. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We also demonstrate how our sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that, on the whole, our sampling technique compares favourably with other methods, especially in the case of exploratory research.

  13. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  14. Environmental sampling

    International Nuclear Information System (INIS)

    Puckett, J.M.

    1998-01-01

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation

  15. Comparative evaluation of serum, FTA filter-dried blood and oral fluid as sample material for PRRSV diagnostics by RT-qPCR in a small-scale experimental study.

    Science.gov (United States)

    Steinrigl, Adolf; Revilla-Fernández, Sandra; Wodak, Eveline; Schmoll, Friedrich; Sattler, Tatjana

    2014-01-01

    Recently, research into alternative sample materials, such as oral fluid or filter-dried blood, has been intensified in order to facilitate cost-effective and animal-friendly sampling of individuals or groups of pigs for diagnostic purposes. The objective of this study was to compare the sensitivity of porcine reproductive and respiratory syndrome virus (PRRSV) RNA detection by reverse transcription quantitative real-time PCR (RT-qPCR) in serum, FTA filter-dried blood and oral fluid sampled from individual pigs. Ten PRRSV-negative pigs were injected with an EU-type PRRSV live vaccine. Blood and oral fluid samples were taken from each pig before, and 4, 7, 14 and 21 days after vaccination. All samples were then analyzed by PRRSV RT-qPCR. In serum, eight of ten pigs tested RT-qPCR positive at different time points post infection. Absolute quantification showed low serum PRRSV-RNA loads in most samples. In comparison to serum, sensitivity of PRRSV-RNA detection was strongly reduced in matched FTA filter-dried blood and in oral fluid from the same pigs. These results indicate that with low PRRSV-RNA loads the diagnostic sensitivity of PRRSV-RNA detection by RT-qPCR achieved with serum is currently unmatched by either FTA filter-dried blood or oral fluid.

  16. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly as arising in the mathematical geosciences. Although the book often refers to original contributions, the authors have made them accessible to (graduate) students and scientists not only from mathematics but also from the geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic and geophysical branches, as well as in other scientific fields such as neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  17. Complex Systems Analysis of Cell Cycling Models in Carcinogenesis:II. Cell Genome and Interactome, Neoplastic Non-random Transformation Models in Topoi with Lukasiewicz-Logic and MV Algebras

    CERN Document Server

    Baianu, I C

    2004-01-01

    Carcinogenesis is a complex process that involves dynamically inter-connected modular sub-networks which evolve under the influence of micro-environmentally induced perturbations, in non-random, pseudo-Markov chain processes. An appropriate n-stage model of carcinogenesis therefore involves n-valued logic treatments of nonlinear dynamic transformations of complex functional genomes and cell interactomes. Lukasiewicz algebraic logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous Boolean or "fuzzy" logic models of genetic activities in vivo. (arXiv preprint q-bio.OT/0406045, submitted 24 Jun 2004, revised 2 Jul 2004; 23 pages, 1 figure.)

  18. Impact of a Multifaceted and Clinically Integrated Training Program in Evidence-Based Practice on Knowledge, Skills, Beliefs and Behaviour among Clinical Instructors in Physiotherapy: A Non-Randomized Controlled Study.

    Science.gov (United States)

    Olsen, Nina Rydland; Bradley, Peter; Espehaug, Birgitte; Nortvedt, Monica Wammen; Lygren, Hildegunn; Frisk, Bente; Bjordal, Jan Magnus

    2015-01-01

    Physiotherapists practicing at clinical placement sites who are assigned the role of clinical instructors (CIs) are responsible for supervising physiotherapy students. For CIs to role-model evidence-based practice (EBP), they need EBP competence. The aim of this study was to assess the short- and long-term impact of a six-month multifaceted and clinically integrated training program in EBP on the knowledge, skills, beliefs and behaviour of CIs supervising physiotherapy students. We invited 37 CIs to participate in this non-randomized controlled study. Three self-administered questionnaires were used pre- and post-intervention, and at six-month follow-up: 1) the Adapted Fresno Test (AFT), 2) the EBP Belief Scale and 3) the EBP Implementation Scale. The analysis approach was linear regression modeling using Generalized Estimating Equations. In total, 29 CIs agreed to participate in the study: 14 were invited to participate in the intervention group and 15 were invited to participate in the control group. One in the intervention group and five in the control group were lost to follow-up. At follow-up, the group difference was statistically significant for the AFT (mean difference = 37, 95% CI 15.9 to 58.1).

  19. EFFICACY AND SAFETY OF INHALED NITRIC OXIDE USE IN PATIENTS WITH PULMONARY HYPERTENSION AND CHRONIC HEART FAILURE OF III-IV FUNCTIONAL CLASS WHO SURVIVED LEFT VENTRICULAR MYOCARDIAL INFARCTION: RESULTS OF A NON-RANDOMIZED COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    P. N. Pavlenko

    2011-01-01

    Aim: To assess the influence of inhaled nitric oxide (NO) on the course of pulmonary hypertension (PH) in patients with chronic heart failure (CHF) of functional class (FC) III-IV who survived left ventricular myocardial infarction (MI). Materials and methods: A comparative non-randomized study included 45 patients (26 men and 19 women) aged 51 to 98 years with PH of FC III-IV who survived left ventricular myocardial infarction. Patients underwent physical examination and Doppler echocardiography; the FC of PH was assessed with the six-minute walking test, and the degree of dyspnea was determined by the Borg scale. Results: Symptoms of CHF and PH regressed in both groups of patients against the background of the therapy. Conclusion: The use of inhaled NO is a safe therapy and helps to reduce mean pulmonary arterial pressure (PAPmean) in patients with PH and CHF of FC III-IV who survived myocardial infarction. The method of inhaled NO is simple and easy to use.

  20. EFFICACY AND SAFETY OF INHALED NITRIC OXIDE USE IN PATIENTS WITH PULMONARY HYPERTENSION AND CHRONIC HEART FAILURE OF III-IV FUNCTIONAL CLASS WHO SURVIVED LEFT VENTRICULAR MYOCARDIAL INFARCTION: RESULTS OF A NON-RANDOMIZED COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    P. N. Pavlenko

    2014-07-01

    Aim: To assess the influence of inhaled nitric oxide (NO) on the course of pulmonary hypertension (PH) in patients with chronic heart failure (CHF) of functional class (FC) III-IV who survived left ventricular myocardial infarction (MI). Materials and methods: A comparative non-randomized study included 45 patients (26 men and 19 women) aged 51 to 98 years with PH of FC III-IV who survived left ventricular myocardial infarction. Patients underwent physical examination and Doppler echocardiography; the FC of PH was assessed with the six-minute walking test, and the degree of dyspnea was determined by the Borg scale. Results: Symptoms of CHF and PH regressed in both groups of patients against the background of the therapy. Conclusion: The use of inhaled NO is a safe therapy and helps to reduce mean pulmonary arterial pressure (PAPmean) in patients with PH and CHF of FC III-IV who survived myocardial infarction. The method of inhaled NO is simple and easy to use.

  1. RANDOMNESS of Numbers DEFINITION(QUERY:WHAT? V HOW?) ONLY Via MAXWELL-BOLTZMANN CLASSICAL-Statistics(MBCS) Hot-Plasma VS. Digits-Clumping Log-Law NON-Randomness Inversion ONLY BOSE-EINSTEIN QUANTUM-Statistics(BEQS) .

    Science.gov (United States)

    Siegel, Z.; Siegel, Edward Carl-Ludwig

    2011-03-01

    RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!

  2. Relationship of nocturnal concentrations of melatonin, gamma-aminobutyric acid and total antioxidants in peripheral blood with insomnia after stroke: study protocol for a prospective non-randomized controlled trial

    Institute of Scientific and Technical Information of China (English)

    Wei Zhang; Fang Li; Tong Zhang

    2017-01-01

    Melatonin and gamma-aminobutyric acid (GABA) have been shown to regulate sleep. The nocturnal concentrations of melatonin, GABA and total antioxidants may relate to insomnia in stroke patients. In this prospective single-center non-randomized controlled clinical trial performed in the China Rehabilitation Research Center, we analyzed the relationship of nocturnal concentrations of melatonin, GABA and total antioxidants with insomnia after stroke. Patients undergoing rehabilitation after stroke were recruited and assigned to the insomnia group or non-insomnia group. Simultaneously, persons without stroke or insomnia served as normal controls. Each group contained 25 cases. The primary outcome was nocturnal concentrations of melatonin, GABA and total antioxidants in peripheral blood. The secondary outcomes were Pittsburgh Sleep Quality Index, Insomnia Severity Index, Epworth Sleepiness Scale, Fatigue Severity Scale, Morningness-Eveningness Questionnaire (Chinese version), and National Institutes of Health Stroke Scale. The nocturnal concentrations of melatonin, GABA and total antioxidants were lower in the insomnia group than in the non-insomnia group. The severity of stroke was higher in the insomnia group than in the non-insomnia group. Correlation analysis demonstrated that the nocturnal concentrations of melatonin and GABA were associated with insomnia after stroke. This trial was registered at ClinicalTrials.gov, identifier: NCT03202121.

  3. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the sampler, the sample bottle size, and the jet characteristics of the sampler. The fluidic sampler should be supplied with fluid at a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate.
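    The abstract quotes the same flow rates in US gallons per minute and cubic centimetres per minute. A quick conversion check (1 US gallon = 3785.41 cc) shows the two ranges agree to within the rounding used in the report:

```python
# Convert the quoted gpm range to cc/min (1 US gallon = 3785.41 cc).
GAL_TO_CC = 3785.41

for gpm in (0.02, 0.05):
    print(f"{gpm} gpm = {gpm * GAL_TO_CC:.0f} cc/min")
# 0.02 gpm = 76 cc/min, 0.05 gpm = 189 cc/min (report rounds to 77--192)
```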

  4. The perception of small crime

    NARCIS (Netherlands)

    Douhou, S.; Magnus, J.R.; van Soest, A.H.O.

    2011-01-01

    In this paper we measure perceptions of incorrect behavior or ‘small crime’, based on a questionnaire administered to a large representative sample from the Dutch population. In the questionnaire we ask the respondents to rate the severity or justifiability of a number of small crimes. We present

  5. Small talk

    Directory of Open Access Journals (Sweden)

    Ryszard Przybylski

    2016-12-01

    The poem Small talk conjures up a communicative situation in which the main character, a newcomer from Poland, answers conventional questions about his country. Bearing in mind that the poem is set during a military dictatorship, superficial interest in his homeland may trigger a feeling of impatience. This is at least the impression formed if we adopt the perspective defined within the romantic tradition, and when taking into account the conventional poetry of martial law in Poland. Nevertheless, Barańczak retains an ironic distance towards such communicative situations and, as a consequence, does not create poetry that meets most readers' expectations. His poetic imperative for verbal art to be the expression of mistrust remains valid.

  6. Small Composers

    DEFF Research Database (Denmark)

    Holgersen, Sven-Erik; Bruun, Peter; Tjagvad, Mette

    2018-01-01

    The present chapter discusses roles and responsibilities of the collaborating partners in a creative music workshop called Small Composers. The aim is to be attentive to a number of potential alterations implicated by the collaborating partners' different backgrounds. The following questions guided the study: What expectations do the class teacher and the professional musicians have of the creative practice, i.e. of the collaboration and of the musical outcome? To what extent do the collaborating partners share a common understanding of the aim, content and method of the workshop? How do the roles and responsibilities of the collaborating partners become visible through the practice? How do the professional identities of the teacher and the musicians become visible, and what are the implications for the workshop as a musical community of practice?

  7. Venus Surface Sampling and Analysis

    Data.gov (United States)

    National Aeronautics and Space Administration — This effort is developing the technology to transfer particulate samples from a Venus drill (being developed by Honeybee Robotics in a Phase 2 Small Business...

  8. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth-incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites, as well as their total inventories.
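    The inventory comparison described here (an eroded or depositional site versus a reference site) reduces to summing activity over depth increments. A hedged sketch follows; the increment activities, bulk density and retention fraction are fabricated for illustration, not measured values:

```python
# Each tuple: (increment thickness in m, 137Cs activity in Bq/kg), fabricated.
increments = [(0.01, 12.0), (0.01, 10.5), (0.01, 7.0), (0.02, 3.0)]
bulk_density = 1300.0  # dry bulk density in kg/m^3, assumed uniform

def inventory(incs, density):
    """Total areal inventory: Bq/kg * kg/m^3 * m, summed over increments -> Bq/m^2."""
    return sum(activity * density * thickness for thickness, activity in incs)

reference = inventory(increments, bulk_density)
eroded = 0.8 * reference            # a site retaining 80% of the reference
loss_pct = 100.0 * (1.0 - eroded / reference)
print(round(reference, 1), round(loss_pct, 1))  # 461.5 Bq/m^2, 20.0% loss
```

    Conversion models then translate the percentage inventory loss or gain into a soil redistribution rate; the finer the depth increments, the better the profile shape is captured for tracers like 210Pb and 7Be.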

  9. Sample Reuse in Statistical Remodeling.

    Science.gov (United States)

    1987-08-01

    ... as the jackknife and bootstrap, is an expansion of the functional T(Fn), or of its distribution function, or both. Frangos and Schucany (1987a) used ... accelerated bootstrap. In the same report, Frangos and Schucany demonstrated the small-sample superiority of that approach over proposals that take higher-order terms of an Edgeworth expansion into account. In a second report, Frangos and Schucany (1987b) examined the small-sample performance of ...
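    The resampling idea behind the jackknife and bootstrap mentioned in this record can be shown in a few lines. This is a generic textbook sketch with fabricated data, not the accelerated or Edgeworth-corrected estimators the report studies:

```python
import random
import statistics

random.seed(1)
sample = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 4.4, 5.8]  # fabricated small sample

# Bootstrap: resample with replacement, recompute the statistic each time.
boot_means = [statistics.mean(random.choices(sample, k=len(sample)))
              for _ in range(2000)]
boot_se = statistics.stdev(boot_means)  # bootstrap standard error of the mean

# Jackknife: recompute the statistic leaving one observation out at a time.
jack_means = [statistics.mean(sample[:i] + sample[i + 1:])
              for i in range(len(sample))]

print(round(statistics.mean(sample), 2), round(boot_se, 3))
```

    With n = 8 the bootstrap standard error is itself noisy, which is precisely the small-sample regime where the higher-order (Edgeworth-based) corrections discussed in the report matter.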

  10. A 4-year non-randomized comparative phase-IV study of early rheumatoid arthritis: integrative anthroposophic medicine for patients with preference against DMARDs versus conventional therapy including DMARDs for patients without preference

    Directory of Open Access Journals (Sweden)

    Hamre HJ

    2018-03-01

    Full Text Available Harald J Hamre,1 Van N Pham,2 Christian Kern,3 Rolf Rau,4 Jörn Klasen,3 Ute Schendel,5 Lars Gerlach,6 Attyla Drabik,2 Ludger Simon6,† 1Institute for Applied Epistemology and Medical Methodology at the Witten/Herdecke University, Freiburg, Germany; 2Institute of Statistics in Medicine, Universitätsklinikum Düsseldorf, Düsseldorf, Germany; 3Department of Integrative Medicine, Asklepios Westklinikum, Hamburg, Germany; 4Department of Rheumatology, Evangelisches Fachkrankenhaus Ratingen, Ratingen, Germany; 5Department of Rheumatology, m&i-Fachklinik Bad Pyrmont, Bad Pyrmont, Germany; 6Department of Internal Medicine and Gastroenterology, Filderklinik, Filderstadt, Germany †Dr Ludger Simon passed away on June 10, 2016 Background: While disease-modifying antirheumatic drugs (DMARDs) are a mainstay of therapy for rheumatoid arthritis (RA), some patients with early RA refuse DMARDs. In anthroposophic medicine (AM), a treatment strategy for early RA without DMARDs has been developed. Preliminary data suggest that RA symptoms and inflammatory markers can be reduced under AM, without DMARDs. Patients and methods: Two hundred and fifty-one self-selected patients aged 16–70 years, starting treatment for RA of <3 years duration, without prior DMARD therapy, participated in a prospective, non-randomized, comparative Phase IV study. C-patients were treated in clinics offering conventional therapy including DMARDs, while A-patients had chosen treatment in anthroposophic clinics, without DMARDs. Both groups received corticosteroids and nonsteroidal anti-inflammatory drugs (NSAIDs). Primary outcomes were intensity of RA symptoms measured by self-rating on visual analog scales, C-reactive protein, radiological progression, study withdrawals, serious adverse events (SAEs), and adverse drug reactions in months 0–48. Results: The groups were similar in most baseline characteristics, while A-patients had longer disease duration (mean 15.1 vs 10.8 months, p<0

  11. Mindfulness Training for Health Profession Students-The Effect of Mindfulness Training on Psychological Well-Being, Learning and Clinical Performance of Health Professional Students: A Systematic Review of Randomized and Non-randomized Controlled Trials.

    Science.gov (United States)

    McConville, Janet; McAleer, Rachael; Hahne, Andrew

    High levels of stress have been identified in medical students and increasingly in other health profession student population groups. As stress can affect psychological well-being and interfere with learning and clinical performance, there is a clear argument for universities to include health professional student well-being as an outcome in core curriculum. Mindfulness training is a potential construct to manage stress and enhance academic success. The aims of this systematic review were to assess the effectiveness of mindfulness training in medical and other health professional student population groups and to compare the effectiveness of the different mindfulness-based programs. A literature search was completed using The Cochrane library, Medline, Cinahl, Embase, Psychinfo, and ERIC (proquest) electronic databases from inception to June 2016. Randomized and non-randomized controlled trials were included. Of the potential 5355 articles, 19 met the inclusion criteria. Studies focused on medical (n = 10), nursing (n = 4), social work (n = 1), psychology (n = 1), and medical plus other health (n = 3) students. Interventions were based on mindfulness. The 19 studies included 1815 participants. Meta-analysis was performed evaluating the effect of mindfulness training on mindfulness, anxiety, depression, stress, mood, self-efficacy, and empathy. The effect of mindfulness on academic performance was discussed. Mindfulness-based interventions decrease stress, anxiety, and depression and improve mindfulness, mood, self-efficacy, and empathy in health profession students. Due to the range of presentation options, mindfulness training can be relatively easily adapted and integrated into health professional training programs. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Impact of a Multifaceted and Clinically Integrated Training Program in Evidence-Based Practice on Knowledge, Skills, Beliefs and Behaviour among Clinical Instructors in Physiotherapy: A Non-Randomized Controlled Study.

    Directory of Open Access Journals (Sweden)

    Nina Rydland Olsen

    Full Text Available Physiotherapists practicing at clinical placement sites assigned the role as clinical instructors (CIs) are responsible for supervising physiotherapy students. For CIs to role model evidence-based practice (EBP), they need EBP competence. The aim of this study was to assess the short- and long-term impact of a six-month multifaceted and clinically integrated training program in EBP on the knowledge, skills, beliefs and behaviour of CIs supervising physiotherapy students. We invited 37 CIs to participate in this non-randomized controlled study. Three self-administered questionnaires were used pre- and post-intervention, and at six-month follow-up: 1) the Adapted Fresno test (AFT), 2) the EBP Belief Scale and 3) the EBP Implementation Scale. The analysis approach was linear regression modeling using Generalized Estimating Equations. In total, 29 CIs agreed to participate in the study: 14 were invited to participate in the intervention group and 15 were invited to participate in the control group. One in the intervention group and five in the control group were lost to follow-up. At follow-up, the group difference was statistically significant for the AFT (mean difference = 37, 95% CI 15.9-58.1, p < 0.001) and the EBP Beliefs scale (mean difference = 8.1, 95% CI 3.1-13.2, p = 0.002), but not for the EBP Implementation scale (mean difference = 1.8, 95% CI -4.5-8.1, p = 0.574). Comparing measurements over time, we found a statistically significant increase in mean scores related to all outcome measures for the intervention group only. A multifaceted and clinically integrated training program in EBP was successful in improving EBP knowledge, skills and beliefs among CIs. Future studies need to ensure long-term EBP behaviour change, in addition to assessing CIs' abilities to apply EBP knowledge and skills when supervising students.

  13. Immunogenicity and safety of an adjuvanted herpes zoster subunit candidate vaccine in adults ≥ 50 years of age with a prior history of herpes zoster: A phase III, non-randomized, open-label clinical trial.

    Science.gov (United States)

    Godeaux, Olivier; Kovac, Martina; Shu, Daniel; Grupping, Katrijn; Campora, Laura; Douha, Martine; Heineman, Thomas C; Lal, Himal

    2017-05-04

    This phase III, non-randomized, open-label, multi-center study (NCT01827839) evaluated the immunogenicity and safety of an adjuvanted recombinant subunit herpes zoster (HZ) vaccine (HZ/su) in adults aged ≥ 50 y with prior physician-documented history of HZ. Participants (stratified by age: 50-59, 60-69 and ≥ 70 y) received 2 doses of HZ/su 2 months apart and were followed-up for another 12 months. Anti-glycoprotein E (gE) antibodies were measured by enzyme-linked immunosorbent assay before vaccination and 1 month after the second dose (Month 3). Solicited local and general adverse events (AEs) were recorded for 7 d and unsolicited AEs for 30 d after each vaccination. Serious AEs were recorded until study end. The primary immunogenicity objective was met if the lower limit of the 95% confidence interval (CI) of the vaccine response rate (VRR), defined as a 4-fold increase in anti-gE over baseline, at Month 3 was ≥ 60%. 96 participants (32/age group) were enrolled. The primary immunogenicity objective was met, as the VRR at Month 3 was 90.2% (95% CI: 81.7-95.7). Geometric mean anti-gE antibody concentrations at Month 3 were similar across age groups. 77.9% and 71.6% of participants reported local and general solicited AEs, respectively. The most frequent solicited AEs were pain at injection site, fatigue, headache, myalgia and shivering. The HZ/su vaccine was immunogenic in adults aged ≥ 50 y with a physician-documented history of HZ, and no safety concerns were identified.
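The primary-objective rule in this record (lower 95% CI limit of the vaccine response rate must be ≥ 60%) can be checked with a standard binomial interval. A hedged sketch using the Wilson score interval; the responder counts below are hypothetical, since the abstract reports only the resulting VRR of 90.2%:

```python
import math

def wilson_ci(successes, n, z=1.959964):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Hypothetical counts: 83 vaccine responders out of 92 evaluable participants
lo, hi = wilson_ci(83, 92)
# The primary immunogenicity objective is met if the CI lower limit is >= 0.60
objective_met = lo >= 0.60
```

Note that the study's published CI (81.7-95.7) may come from a different (e.g. exact Clopper-Pearson) method; the Wilson interval is used here only to illustrate the decision rule.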

  14. A non-randomized study in consecutive patients with postcholecystectomy refractory biliary leaks who were managed endoscopically with the use of multiple plastic stents or fully covered self-expandable metal stents (with videos).

    Science.gov (United States)

    Canena, Jorge; Liberato, Manuel; Meireles, Liliane; Marques, Inês; Romão, Carlos; Coutinho, António Pereira; Neves, Beatriz Costa; Veiga, Pedro Mota

    2015-07-01

    Endoscopic management of postcholecystectomy biliary leaks is widely accepted as the treatment of choice. However, refractory biliary leaks after a combination of biliary sphincterotomy and the placement of a large-bore (10F) plastic stent can occur, and the optimal rescue endotherapy for this situation is unclear. To compare the clinical effectiveness of the use of a fully covered self-expandable metal stent (FCSEMS) with the placement of multiple plastic stents (MPS) for the treatment of postcholecystectomy refractory biliary leaks. Prospective study. Two tertiary-care referral academic centers and one general district hospital. Forty consecutive patients with refractory biliary leaks who underwent endoscopic management. Temporary placement of MPS (n = 20) or FCSEMSs (n = 20). Clinical outcomes of endotherapy as well as the technical success, adverse events, need for reinterventions, and prognostic factors for clinical success. Endotherapy was possible in all patients. After endotherapy, closure of the leak was accomplished in 13 patients (65%) who received MPS and in 20 patients (100%) who received FCSEMSs (P = .004). The Kaplan-Meier (log-rank) leak-free survival analysis showed a statistically significant difference between the 2 patient populations (χ(2) [1] = 8.30; P stents (P = .024), a plastic stent diameter <20F (P = .006), and a high-grade biliary leak (P = .015) were shown to be significant predictors of treatment failure with MPS. The 7 patients in whom placement of MPS failed were retreated with FCSEMSs, resulting in closure of the leaks in all cases. Non-randomized design. In our series, the results of the temporary placement of FCSEMSs for postcholecystectomy refractory biliary leaks were superior to those from the use of MPS. A randomized study is needed to confirm our results before further recommendations. Copyright © 2015 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  15. Systematic lymphadenectomy versus sampling of ipsilateral mediastinal lymph-nodes during lobectomy for non-small-cell lung cancer: a systematic review of randomized trials and a meta-analysis.

    Science.gov (United States)

    Mokhles, Sahar; Macbeth, Fergus; Treasure, Tom; Younes, Riad N; Rintoul, Robert C; Fiorentino, Francesca; Bogers, Ad J J C; Takkenberg, Johanna J M

    2017-06-01

    To re-examine the evidence for recommendations for complete dissection versus sampling of ipsilateral mediastinal lymph nodes during lobectomy for cancer. We searched for randomized trials of systematic mediastinal lymphadenectomy versus mediastinal sampling. We performed a textual analysis of the authors' own starting assumptions and conclusion. We analysed the trial designs and risk of bias. We extracted data on early mortality, perioperative complications, overall survival, local recurrence and distant recurrence for meta-analysis. We found five randomized controlled trials recruiting 1980 patients spanning 1989-2007. The expressed starting position in 3/5 studies was a conviction that systematic dissection was effective. Long-term survival was better with lymphadenectomy compared with sampling (Hazard Ratio 0.78; 95% CI 0.69-0.89) as was perioperative survival (Odds Ratio 0.59; 95% CI 0.25-1.36, non-significant). But there was an overall high risk of bias and a lack of intention to treat analysis. There were higher rates (non-significant) of perioperative complications including bleeding, chylothorax and recurrent nerve palsy with lymphadenectomy. The high risk of bias in these trials makes the overall conclusion insecure. The finding of clinically important surgically related morbidities but lower perioperative mortality with lymphadenectomy seems inconsistent. The multiple variables in patients, cancers and available treatments suggest that large pragmatic multicentre trials, testing currently available strategies, are the best way to find out which are more effective. The number of patients affected with lung cancer makes trials feasible. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
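Survival meta-analyses like the one summarized above typically pool log hazard ratios by fixed-effect inverse-variance weighting, recovering each trial's standard error from its reported 95% CI. A sketch with hypothetical per-trial values (not the five trials' actual data):

```python
import math

Z95 = 1.959964  # standard normal quantile for a 95% CI

def pool_log_hr(trials):
    """Fixed-effect inverse-variance pooling of (HR, ci_lo, ci_hi) tuples.
    SE of each log HR is recovered as (ln hi - ln lo) / (2 * 1.96)."""
    num = den = 0.0
    for hr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * Z95)
        w = 1.0 / se ** 2
        num += w * math.log(hr)
        den += w
    mean = num / den
    half = Z95 / math.sqrt(den)
    return math.exp(mean), (math.exp(mean - half), math.exp(mean + half))

# Hypothetical trial-level hazard ratios favouring lymphadenectomy
pooled, (lo, hi) = pool_log_hr([(0.75, 0.60, 0.94),
                                (0.85, 0.65, 1.11),
                                (0.72, 0.55, 0.94)])
```

As the review stresses, such a pooled estimate is only as trustworthy as the underlying trials; a high risk of bias propagates straight into the summary hazard ratio.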

  16. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available Respondent-driven sampling (RDS) is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods), and also of presentation for interview if offered a coupon, by age and socioeconomic status group. Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19-29%), but had little effect for sexual activity or HIV status. Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
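The weighting scheme described above, each participant weighted by the inverse of the joint probability of being offered a coupon and then presenting for interview, can be sketched in a few lines. All probabilities and group labels below are hypothetical illustrations, not the study's estimates:

```python
def weighted_proportions(participants, p_offer, p_present):
    """Estimate population composition from an RDS sample by weighting each
    participant by 1 / (P(offered a coupon) * P(presents if offered)),
    both indexed by the participant's group (e.g. an age or SES stratum)."""
    totals = {}
    for group in participants:
        w = 1.0 / (p_offer[group] * p_present[group])
        totals[group] = totals.get(group, 0.0) + w
    s = sum(totals.values())
    return {g: t / s for g, t in totals.items()}

# Hypothetical scenario mirroring the paper's finding: young men are
# rarely offered coupons, so they are under-represented in the raw sample.
p_offer = {"young": 0.2, "old": 0.5}
p_present = {"young": 0.8, "old": 0.8}
sample = ["young"] * 20 + ["old"] * 80  # raw sample is only 20% young
est = weighted_proportions(sample, p_offer, p_present)
# est["young"] rises well above 0.20 once the lower coupon-offer
# probability for young men is corrected for
```

The novelty in the record is the second factor: existing RDS estimators weight only by the offer probability, while the proposed weights also correct for non-random presentation for interview.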

  17. Quantitative measurements of small isotopic samples in gaseous mixtures by utilization of some nuclear properties; Etude des possibilites de mesures de faibles quantites de gaz radioactifs dans un melange en utilisant simultanement plusieurs proprietes nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Maragnon, J; Delperie, C

    1967-12-01

    The aim is to define the characteristics of a measurement system allowing the analysis of traces of radioactive rare gases in gas mixtures of varying composition. To distinguish the radioactive isotopes from each other, at activity ratios that can reach 10{sup 6}, the chosen method exploits several nuclear properties simultaneously: gamma-ray energy, beta-particle energy and the lifetime of excited states. The choice of a plastic scintillator as beta detector satisfies these measurement requirements because of its short de-excitation time; it offers the further advantage of serving as a reservoir for the sample without any destruction or modification of the sample. The study was based on mixtures containing Kr-85; the analysis of the other rare gases follows immediately from the adopted principle. (author)

  18. Preparation of small uranium hexafluoride samples in view of mass spectrometry analysis; Preparation de petits echantillons d'hexafluorure d'uranium en vue d'analyse spectrometrique de masse

    Energy Technology Data Exchange (ETDEWEB)

    Severin, Michel

    1958-07-01

    We have studied the preparation of uranium hexafluoride for the determination of the isotopic ratio {sup 235}U/{sup 238}U by mass spectrometry. The UF{sub 6} had to be produced from an amount of raw material (metallic uranium or oxide) not exceeding 0.1 g. Our method has a high yield (the rate of transformation was studied) and gives samples whose content of impurities (HF and SiF{sub 4}) is low enough to enable correct isotopic measurements. The most satisfactory method uses cobalt trifluoride as the fluorinating agent; it is now in routine use in the mass spectrometry laboratories. (author)

  19. Small Business Development Center

    Data.gov (United States)

    Small Business Administration — Small Business Development Centers (SBDCs) provide assistance to small businesses and aspiring entrepreneurs throughout the United States and its territories. SBDCs...

  20. Evaluating the Effectiveness of an Antimicrobial Stewardship Program on Reducing the Incidence Rate of Healthcare-Associated Clostridium difficile Infection: A Non-Randomized, Stepped Wedge, Single-Site, Observational Study.

    Science.gov (United States)

    DiDiodato, Giulio; McArthur, Leslie

    2016-01-01

    The incidence rate of healthcare-associated Clostridium difficile infection (HA-CDI) is estimated at 1 in 100 patients. Antibiotic exposure is the most consistently reported risk factor for HA-CDI. Strategies to reduce the risk of HA-CDI have focused on reducing antibiotic utilization. Prospective audit and feedback is a commonly used antimicrobial stewardship intervention (ASi). The impact of this ASi on risk of HA-CDI is equivocal. This study examines the effectiveness of a prospective audit and feedback ASi on reducing the risk of HA-CDI. Single-site, 339-bed community hospital in Barrie, Ontario, Canada. Primary outcome is HA-CDI incidence rate. The daily prospective audit and feedback ASi is the exposure variable. The ASi was implemented across 6 wards in a non-randomized, stepped wedge design. Criteria for the ASi: any intravenous antibiotic use for ≥ 48 hrs, any oral fluoroquinolone or oral second-generation cephalosporin use for ≥ 48 hrs, or any antimicrobial use for ≥ 5 days. HA-CDI cases and model covariates were aggregated by ward, year and month starting September 2008 and ending February 2016. Multi-level mixed-effect negative binomial regression analysis was used to model the primary outcome, with intercept and slope coefficients for ward-level random effects estimated. Other covariates tested for inclusion in the final model were derived from previously published risk factors. Deviance residuals were used to assess the model's goodness-of-fit. The dataset included 486 observation periods, of which 350 were control periods and 136 were intervention periods. After accounting for all other model covariates, the estimated overall ASi incidence rate ratio (IRR) was 0.48 (95% CI 0.30, 0.79). The ASi effect was independent of antimicrobial utilization. The ASi did not seem to reduce the risk of Clostridium difficile infection on the surgery wards (IRR 0.87, 95% CI 0.45, 1.69) compared to the medicine wards (IRR 0.42, 95% CI 0.28, 0.63). The ward-level burden of Clostridium
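The study's headline number is an incidence rate ratio from a covariate-adjusted mixed-effect negative binomial model, but the basic quantity is easy to illustrate. A sketch of a crude (unadjusted) IRR with a log-scale 95% CI, using hypothetical aggregate counts rather than the study's data:

```python
import math

def irr_ci(cases_exposed, time_exposed, cases_control, time_control):
    """Crude incidence rate ratio with a 95% CI on the log scale.
    (No covariate or ward-level adjustment, unlike the paper's model.)"""
    irr = (cases_exposed / time_exposed) / (cases_control / time_control)
    se = math.sqrt(1 / cases_exposed + 1 / cases_control)
    lo = math.exp(math.log(irr) - 1.959964 * se)
    hi = math.exp(math.log(irr) + 1.959964 * se)
    return irr, lo, hi

# Hypothetical ward-period aggregates (observation periods as exposure time)
irr, lo, hi = irr_ci(cases_exposed=30, time_exposed=136,
                     cases_control=160, time_control=350)
```

An IRR below 1 with a CI excluding 1 would indicate that HA-CDI incidence fell during intervention periods; the published adjusted estimate (0.48, 95% CI 0.30-0.79) has exactly this form.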

  1. Evaluating the Effectiveness of an Antimicrobial Stewardship Program on Reducing the Incidence Rate of Healthcare-Associated Clostridium difficile Infection: A Non-Randomized, Stepped Wedge, Single-Site, Observational Study.

    Directory of Open Access Journals (Sweden)

    Giulio DiDiodato

    Full Text Available The incidence rate of healthcare-associated Clostridium difficile infection (HA-CDI) is estimated at 1 in 100 patients. Antibiotic exposure is the most consistently reported risk factor for HA-CDI. Strategies to reduce the risk of HA-CDI have focused on reducing antibiotic utilization. Prospective audit and feedback is a commonly used antimicrobial stewardship intervention (ASi). The impact of this ASi on risk of HA-CDI is equivocal. This study examines the effectiveness of a prospective audit and feedback ASi on reducing the risk of HA-CDI. Single-site, 339-bed community hospital in Barrie, Ontario, Canada. Primary outcome is HA-CDI incidence rate. The daily prospective audit and feedback ASi is the exposure variable. The ASi was implemented across 6 wards in a non-randomized, stepped wedge design. Criteria for the ASi: any intravenous antibiotic use for ≥ 48 hrs, any oral fluoroquinolone or oral second-generation cephalosporin use for ≥ 48 hrs, or any antimicrobial use for ≥ 5 days. HA-CDI cases and model covariates were aggregated by ward, year and month starting September 2008 and ending February 2016. Multi-level mixed-effect negative binomial regression analysis was used to model the primary outcome, with intercept and slope coefficients for ward-level random effects estimated. Other covariates tested for inclusion in the final model were derived from previously published risk factors. Deviance residuals were used to assess the model's goodness-of-fit. The dataset included 486 observation periods, of which 350 were control periods and 136 were intervention periods. After accounting for all other model covariates, the estimated overall ASi incidence rate ratio (IRR) was 0.48 (95% CI 0.30, 0.79). The ASi effect was independent of antimicrobial utilization. The ASi did not seem to reduce the risk of Clostridium difficile infection on the surgery wards (IRR 0.87, 95% CI 0.45, 1.69) compared to the medicine wards (IRR 0.42, 95% CI 0.28, 0.63). The ward-level burden of

  2. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    The sampling method has been paid much attention in the field of complex network in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small part of vertices with high node degree can possess the most structure information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...

  3. The relative performance of bivariate causality tests in small samples

    NARCIS (Netherlands)

    Bult, J..R.; Leeflang, P.S.H.; Wittink, D.R.

    1997-01-01

    Causality tests have been applied to establish directional effects and to reduce the set of potential predictors. For the latter type of application only bivariate tests can be used. In this study we compare bivariate causality tests. Although the problem addressed is general and could benefit

  4. Data Quality Tools for Data Warehousing - A Small Sample Survey

    National Research Council Canada - National Science Library

    Neely, M

    1998-01-01

    It is estimated that as high as 75% of the effort spent on building a data warehouse can be attributed to back-end issues, such as readying the data and transporting it into the data warehouse (Atre, 1998...

  5. Power in Bayesian Mediation Analysis for Small Sample Research

    NARCIS (Netherlands)

    Miočević, M.; MacKinnon, David; Levy, Roy

    2017-01-01

    Bayesian methods have the potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This article compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product,

  6. Preparation of calcium-separated isotope targets using small samples

    International Nuclear Information System (INIS)

    Thomas, G.E.

    1975-01-01

    Targets are routinely evaporated using a few-milligram quantities of separated isotopes of calcium with reducing agents. The source-to-target distance is 3.0 cm, with the substrate, if necessary, as thin as 15 μg/cm² carbon or 100 μg/cm² of gold. A tantalum closed boat, heat shield, and special collimator system are used.

  7. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    1 Department of Decision Sciences and MIS, Concordia University, Montréal,. Québec ... method by application to household income survey data, comparing it against the usual lognormal ...... pensions, superannuation and annuities and other.

  8. Small Sample Robust Testing for Normality against Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2012-01-01

    Roč. 41, č. 7 (2012), s. 1167-1194 ISSN 0361-0918 Grant - others:Aktion(CZ-AT) 51p7, 54p21, 50p14, 54p13 Institutional research plan: CEZ:AV0Z10300504 Keywords : consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.295, year: 2012

  9. Measuring Thermal Conductivity of a Small Insulation Sample

    Science.gov (United States)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    A multiple-throat venturi system has been invented for measuring laminar flow of air or other gas at low speed (1 to 30 cm/s) in a duct while preserving the laminar nature of the flow and keeping the velocity profile across the duct as nearly flat as possible. While means for measuring flows at higher speeds are well established, heretofore, there have been no reliable means for making consistent, accurate measurements in this speed range. In the original application for which this system was invented, the duct leads into the test section of a low-speed wind tunnel wherein uniform, low-speed, laminar flow is required for scientific experiments. The system could also be used to monitor a slow flow of gas in an industrial process like chemical vapor deposition. In the original application, the multiple-throat venturi system is mounted at the inlet end of the duct having a rectangular cross section of 19 by 14 cm, just upstream of an assembly of inlet screens and flow straighteners that help to suppress undesired flow fluctuations (see Figure 1). The basic venturi measurement principle is well established: One measures the difference in pressure between (1) a point just outside the inlet, where the pressure is highest and the kinetic energy lowest; and (2) the narrowest part (the throat) of the venturi passage, where the kinetic energy is highest and the pressure is lowest. Then by use of Bernoulli's equation for the relationship between pressure and kinetic energy, the volumetric flow speed in the duct can be calculated from the pressure difference and the inlet and throat widths. The design of this system represents a compromise among length, pressure recovery, uniformity of flow, and complexity of assembly. Traditionally, venturis are used to measure faster flows in narrower cross sections, with longer upstream and downstream passages to maintain accuracy. The dimensions of the passages of the present venturi system are sized to provide a readily measurable pressure drop. Multiple throats are used to minimize the length needed to recover internal energy and enable the velocity profile to recover to near flatness.
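The Bernoulli-plus-continuity calculation described in the record reduces to one line of algebra. A minimal sketch; the duct dimensions match the 19 by 14 cm cross section quoted above, but the contraction ratio, air density and pressure drop are hypothetical illustrative values:

```python
import math

def duct_speed(delta_p_pa, rho_kg_m3, area_duct_m2, area_throat_m2):
    """Duct flow speed from a venturi pressure drop, via Bernoulli's
    equation dp = 0.5*rho*(v_t**2 - v_d**2) and continuity A_d*v_d = A_t*v_t."""
    ratio = area_duct_m2 / area_throat_m2
    return math.sqrt(2 * delta_p_pa / (rho_kg_m3 * (ratio ** 2 - 1)))

# 19 cm x 14 cm duct; assume a 4:1 contraction and a 0.05 Pa measured drop
a_duct = 0.19 * 0.14
v = duct_speed(delta_p_pa=0.05, rho_kg_m3=1.2,
               area_duct_m2=a_duct, area_throat_m2=a_duct / 4.0)
# v lands in the 1-30 cm/s range (in m/s) that the instrument targets
```

The tiny pressure drops implied at these speeds are exactly why the design multiplies throats: each contraction raises the measurable signal without the long recovery passage a single deep venturi would need.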

  10. Directional emission of single photons from small atomic samples

    DEFF Research Database (Denmark)

    Miroshnychenko, Yevhen; V. Poulsen, Uffe; Mølmer, Klaus

    2013-01-01

    We provide a formalism to describe deterministic emission of single photons with tailored spatial and temporal profiles from a regular array of multi-level atoms. We assume that a single collective excitation is initially shared by all the atoms in a metastable atomic state, and that this state i...... is coupled by a classical laser field to an optically excited state which rapidly decays to the ground atomic state. Our model accounts for the different field polarization components via re-absorption and emission of light by the Zeeman manifold of optically excited states.......We provide a formalism to describe deterministic emission of single photons with tailored spatial and temporal profiles from a regular array of multi-level atoms. We assume that a single collective excitation is initially shared by all the atoms in a metastable atomic state, and that this state...

  11. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection, was taken from theses.com and content-analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387
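The "multiples of ten" finding is a classic digit-preference test: under a null of no round-number bias, roughly 10% of sample sizes would be multiples of ten by chance. A sketch of a one-sided normal-approximation test; the counts below are hypothetical, as the abstract does not report how many of the 560 studies used round numbers:

```python
import math

def multiples_of_ten_excess(sizes, p0=0.1):
    """One-sided z test of whether the share of sample sizes that are
    multiples of ten exceeds the proportion p0 expected by chance."""
    n = len(sizes)
    k = sum(1 for s in sizes if s % 10 == 0)
    phat = k / n
    z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
    return phat, z

# Hypothetical illustration: 120 of 560 theses report a multiple of ten
sizes = [10] * 120 + [31] * 440
phat, z = multiples_of_ten_excess(sizes)
# z well above 1.64 would indicate significant round-number preference
```

A significant excess of round numbers is what the authors read as a pre-meditated choice of sample size, rather than one driven by saturation.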

  12. Lung cancer - small cell

    Science.gov (United States)

    Cancer - lung - small cell; Small cell lung cancer; SCLC ... About 15% of all lung cancer cases are SCLC. Small cell lung cancer is slightly more common in men than women. Almost all cases of SCLC are ...

  13. Small millets, big potential

    International Development Research Centre (IDRC) Digital Library (Canada)

    consumption of small millets, mainly due to limited productivity, high ... for effective integration of small millets in the ... replicated in other cities. ... to micro-, small- and medium-entrepreneurs producing millet-based ... and Activities Network,.

  14. Sustainable Mars Sample Return

    Science.gov (United States)

    Alston, Christie; Hancock, Sean; Laub, Joshua; Perry, Christopher; Ash, Robert

    2011-01-01

    The proposed Mars sample return mission will be completed using natural Martian resources for the majority of its operations. The system uses the following technologies: In-Situ Propellant Production (ISPP), a methane-oxygen propelled Mars Ascent Vehicle (MAV), a carbon dioxide powered hopper, and a hydrogen fueled balloon system (large balloons and small weather balloons). The ISPP system will produce the hydrogen, methane, and oxygen using a Sabatier reactor, a water electrolysis cell, water extracted from the Martian surface, and carbon dioxide extracted from the Martian atmosphere. Indigenous hydrogen will fuel the balloon systems and locally-derived methane and oxygen will fuel the MAV for the return of a 50 kg sample to Earth. The ISPP system will have a production cycle of 800 days and the estimated overall mission length is 1355 days from Earth departure to return to low Earth orbit. Combining these advanced technologies will enable the proposed sample return mission to be executed with reduced initial launch mass and thus be more cost efficient. The successful completion of this mission will serve as the next step in the advancement of Mars exploration technology.

  15. Treatability study sample exemption: update

    International Nuclear Information System (INIS)

    1997-01-01

    This document is a RCRA Information Brief intended to update the information in the 1991 Small-Scale Treatability Study Information Brief, and to address questions about the waste and treatability study sample exemptions that have arisen since References 3 and 5 were published

  16. BRANDING IN SMALL BUSINESS

    OpenAIRE

    Catalin Mihail BARBU; Radu Florin OGARCA; Mihai Razvan Constantin BARBU

    2010-01-01

    In this paper we analyzed the branding in small business. Using a desk research on Internet and the press we have identified the practices small businesses use to enhance their brand and the brand dynamics in small business. Our main contribution is that we tried to figure out the strategy of branding in small business. This need further to be investigated in order to understand how branding works in small business and to better capture the role of branding in small business.

  17. A Bayesian Method for Weighted Sampling

    OpenAIRE

    Lo, Albert Y.

    1993-01-01

    Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...
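A generic (Rubin-style) Bayesian bootstrap, to which the Bayesian bootstrap clone is closely related, draws Dirichlet(1, …, 1) weights over the observations and recomputes the statistic under each weighting. The sketch below illustrates that general idea for a posterior over the mean; it is not Lo's exact BBC construction, and the data are made up:

```python
import random

def bayesian_bootstrap_means(data, n_draws=2000, seed=42):
    """Posterior draws for the mean under Rubin's Bayesian bootstrap:
    weights ~ Dirichlet(1, ..., 1), realized as normalized Exp(1) draws."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        w = [rng.expovariate(1.0) for _ in data]
        total = sum(w)
        draws.append(sum(wi * xi for wi, xi in zip(w, data)) / total)
    return draws

data = [4.1, 5.0, 5.2, 6.3, 7.8, 5.9, 4.4, 6.1]
draws = bayesian_bootstrap_means(data)
posterior_mean = sum(draws) / len(draws)  # close to the sample mean, 5.6
```

Each draw is a convex combination of the data, so the posterior for the mean is supported between the sample minimum and maximum.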

  18. Small Business Size Standards

    Data.gov (United States)

    Small Business Administration — Certain government programs, such as SBA loan programs and contracting opportunities, are reserved for small business concerns. In order to qualify, businesses must...

  19. Not Too Small To Care: Small Businesses and Child Care. National Advisory Panel Exchange #2.

    Science.gov (United States)

    Eichman, Caroline; Reisman, Barbara

    This report of the Child Care Action Committee's National Advisory Panel profiles 29 small businesses employing under 250 workers in 15 states which offer child care benefits to their employees. These businesses do not constitute a representative sample of small businesses. A series of factors for small businesses to consider when planning a child…

  20. Present status of NMCC and sample preparation method for bio-samples

    International Nuclear Information System (INIS)

    Futatsugawa, S.; Hatakeyama, S.; Saitou, S.; Sera, K.

    1993-01-01

    In NMCC (Nishina Memorial Cyclotron Center) we are doing research on PET (Positron Emission Computed Tomography) in nuclear medicine and on PIXE (Particle Induced X-ray Emission) analysis, using a compactly designed small cyclotron. The NMCC facilities have been opened to researchers of other institutions since April 1993. The present status of NMCC is described. Bio-samples (medical samples, plants, animals and environmental samples) have mainly been analyzed by PIXE in NMCC. Small amounts of bio-samples for PIXE are decomposed quickly and easily in a sealed PTFE (polytetrafluoroethylene) vessel with a microwave oven. This sample preparation method for bio-samples is also described. (author)

  1. NIF small optics laser damage test specifications

    International Nuclear Information System (INIS)

    Sheehan, L

    1999-01-01

    The Laser Damage Group is currently conducting tests on small optics samples supplied for initial evaluation of potential NIF suppliers. This document is meant to define the specification of laser-induced damage for small optics and the test methods used to collect the data. A rating system which will be applied for vendor selection is presented

  2. Small angle neutron scattering

    Directory of Open Access Journals (Sweden)

    Cousin Fabrice

    2015-01-01

    Full Text Available Small Angle Neutron Scattering (SANS) is a technique that probes the 3-D structure of materials on a typical size range from ∼ 1 nm up to ∼ a few 100 nm, the information obtained being statistically averaged over a sample whose volume is ∼ 1 cm3. This very rich technique enables a full structural characterization of a given object of nanometric dimensions (radius of gyration, shape, volume or mass, fractal dimension, specific area…) through the determination of the form factor, as well as the determination of the way objects are organized within a continuous medium, and therefore a description of the interactions between them, through the determination of the structure factor. The specific properties of neutrons (the possibility of tuning the scattering intensity by isotopic substitution, sensitivity to magnetism, negligible absorption, low energy of the incident neutrons) make it particularly interesting in the fields of soft matter, biophysics, magnetic materials and metallurgy. In particular, the contrast variation methods allow information to be extracted that cannot be obtained by any other experimental technique. This course is divided in two parts. The first one is devoted to the description of the principle of SANS: basics (formalism, coherent scattering/incoherent scattering, notion of elementary scatterer), form factor analysis (I(q→0), Guinier regime, intermediate regime, Porod regime, polydisperse systems), structure factor analysis (2nd virial coefficient, integral equations, characterization of aggregates), and contrast variation methods (how to create contrast in an homogeneous system, matching in ternary systems, extrapolation to zero concentration, Zero Averaged Contrast). It is illustrated by some representative examples. The second one describes the experimental aspects of SANS to guide users in future experiments: description of the SANS spectrometer, resolution of the spectrometer, optimization of
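The Guinier regime mentioned above yields the radius of gyration from the low-q limit, where I(q) ≈ I(0)·exp(−q²Rg²/3), so a straight-line fit of ln I against q² has slope −Rg²/3. A minimal sketch on synthetic, noise-free data (all numbers are illustrative):

```python
import math

def guinier_fit(q, intensity):
    """Estimate the radius of gyration Rg from the Guinier regime,
    where ln I(q) ~ ln I(0) - (Rg**2 / 3) * q**2."""
    x = [qi**2 for qi in q]
    y = [math.log(ii) for ii in intensity]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx)**2 for xi in x))
    return math.sqrt(-3 * slope)

# Synthetic data for a particle with Rg = 5 nm (q in 1/nm),
# keeping q*Rg < 1 so the Guinier approximation is valid
rg_true = 5.0
q = [0.01 * k for k in range(1, 20)]
I = [100 * math.exp(-(qi * rg_true)**2 / 3) for qi in q]
print(guinier_fit(q, I))  # recovers ~5.0
```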

  3. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Abstract; Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; A.2 Appendix to Chapter 2; A. More on Equal Probability Sampling; A. Horvitz-Thompson Estimator; A. Sufficiency; A. Likelihood; A. Non-Existence Theorem. More Intricacies: Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
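The Horvitz-Thompson estimator listed in the contents above estimates a population total from an unequal-probability sample by weighting each sampled value with the inverse of its inclusion probability. A minimal sketch with made-up numbers:

```python
def horvitz_thompson_total(sample_values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total:
    sum of y_i / pi_i over the sampled units."""
    return sum(y / pi for y, pi in zip(sample_values, inclusion_probs))

# Two sampled units with known, unequal inclusion probabilities
# (values and probabilities are illustrative)
values = [10.0, 40.0]   # y-values of the sampled units
probs = [0.5, 0.8]      # their inclusion probabilities
print(horvitz_thompson_total(values, probs))  # 10/0.5 + 40/0.8 = 70.0
```

Because each unit contributes y_i/π_i exactly when it is drawn (probability π_i), the estimator is unbiased for the population total under any design with known, positive inclusion probabilities.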

  4. Poor Man's Asteroid Sample Return Missions

    Science.gov (United States)

    Landis, R. R.; Graham, L. D.

    2018-02-01

    A cislunar platform at a Near-Rectilinear [Halo] Orbit in the vicinity of the Moon could provide an opportunity for a small NEA sample return mission at relatively low cost. There are a couple of dynamical groups of potential small (∼1 m) target objects.

  5. Small angle spectrometers: Summary

    International Nuclear Information System (INIS)

    Courant, E.; Foley, K.J.; Schlein, P.E.

    1986-01-01

    Aspects of experiments at small angles at the Superconducting Super Collider are considered. Topics summarized include a small angle spectrometer, a high contingency spectrometer, dipole and toroid spectrometers, and magnet choices

  6. Small Community Training & Education

    Science.gov (United States)

    Education, training and professional resources to help operators of small systems and small communities implement the 1996 Amendments to the Safe Drinking Water Act (SDWA). • EPA Environmental Education Center

  7. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
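The variance inflation caused by placement errors is easy to reproduce by simulation. A rough one-dimensional sketch (the integrand, jitter model, and all parameters are illustrative, not those of the paper): jitter each grid point with Gaussian error, wrap back into the unit interval, and compare the variance of the resulting estimator with the error-free case:

```python
import math, random

def systematic_estimate(f, spacing, jitter_sd, rng):
    """One systematic sample on [0, 1): random start, equally spaced
    points, each perturbed by an independent Gaussian placement error
    (wrapped back into [0, 1))."""
    n = int(round(1.0 / spacing))
    start = rng.random() * spacing
    points = [(start + k * spacing + rng.gauss(0.0, jitter_sd)) % 1.0
              for k in range(n)]
    return spacing * sum(f(p) for p in points)

rng = random.Random(1)
f = lambda x: 1.0 + math.sin(2 * math.pi * x)  # true integral over [0,1) is 1

def estimator_variance(jitter_sd, reps=4000, spacing=0.1):
    est = [systematic_estimate(f, spacing, jitter_sd, rng) for _ in range(reps)]
    m = sum(est) / reps
    return sum((e - m) ** 2 for e in est) / reps

# With no jitter the 10-point grid integrates this smooth periodic
# function essentially exactly; placement errors inflate the variance.
print(estimator_variance(0.0), estimator_variance(0.02))
```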

  8. Evaluation of respondent-driven sampling.

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required
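A common RDS inference method of the kind evaluated here is the Volz-Heckathorn (RDS-II) estimator, which re-weights respondents by the inverse of their reported network degree, since link-tracing over-samples well-connected people. A sketch with illustrative data (not the study's):

```python
def rds_ii_proportion(degrees, has_trait):
    """Volz-Heckathorn (RDS-II) estimator: a sample proportion
    re-weighted by inverse reported network degree."""
    weights = [1.0 / d for d in degrees]
    total = sum(weights)
    return sum(w for w, t in zip(weights, has_trait) if t) / total

# Illustrative recruits: reported degree and a binary trait
degrees = [10, 2, 5, 2, 20, 4]
has_trait = [True, False, True, False, True, False]

naive = sum(has_trait) / len(has_trait)            # unweighted: 0.5
adjusted = rds_ii_proportion(degrees, has_trait)   # down-weights high-degree recruits
```

Here the trait carriers happen to have high degree, so the adjusted estimate (0.21875) is well below the naive sample proportion — the kind of correction that, per the study above, does not always remove bias in practice.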

  10. Estimation of Poverty in Small Areas

    Directory of Open Access Journals (Sweden)

    Agne Bikauskaite

    2014-12-01

    Full Text Available Qualitative techniques of poverty estimation are needed to better implement, monitor and determine national areas where support is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates within strata deteriorates when the sample size is small (i.e. the precision decreases as the standard deviation increases). In these cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.

  11. Diversity of human small intestinal Streptococcus and Veillonella populations

    NARCIS (Netherlands)

    van den Bogert, Bartholomeus; Erkus, Oylum; Boekhorst, Jos; de Goffau, Marcus; Smid, Eddy J.; Zoetendal, Erwin G.; Kleerebezem, Michiel

    Molecular and cultivation approaches were employed to study the phylogenetic richness and temporal dynamics of Streptococcus and Veillonella populations in the small intestine. Microbial profiling of human small intestinal samples collected from four ileostomy subjects at four time points displayed

  12. Small Business Commitment | NREL

    Science.gov (United States)

    Central to NREL's mission is our commitment to small business through a comprehensive and mature outreach program that combines proven techniques with the latest technology and best business practices.

  13. Small hepatocellular carcinoma versus small cavernous hemangioma

    International Nuclear Information System (INIS)

    Choi, B.I.; Park, H.W.; Kim, S.H.; Han, M.C.; Kim, C.W.

    1989-01-01

    To determine the optimal pulse sequence for detection and differential diagnosis of small hepatocellular carcinomas and cavernous hemangiomas less than 5 cm in diameter, the authors have analyzed spin-echo (SE) images of 15 small hepatocellular carcinomas and 31 small cavernous hemangiomas obtained at 2.0 T. Pulse sequences used included repetition times (TRs) of 500 and 2,000 msec and echo times (TEs) of 30,60,90,120,150, and 180 msec. Mean tumor-liver contrast-to-noise ratios on the SE 2,000/60 (TR msec/TE msec) sequence were 23.90 ± 16.33 and 62.10 ± 25.94 for small hepatocellular carcinomas and hemangiomas, respectively, and were significantly greater than for all other pulse sequences. Mean tumor-liver signal intensity ratios on the SE 2,000/150 sequence were 2.34 ± 1.72 and 6.04 ± 2.72 for small hepatocellular carcinomas and hemangiomas, respectively, and were significantly greater than for all other pulse sequences in hemangiomas

  14. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2011-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue to digital

  15. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2010-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue to digital

  16. Experimental technique of small angle neutron scattering

    International Nuclear Information System (INIS)

    Xia Qingzhong; Chen Bo

    2006-03-01

    The main parts of a Small Angle Neutron Scattering (SANS) spectrometer, their functions and parameters are introduced from an experimental point of view. Detailed information is also given for the SANS spectrometer 'Membrana-2'. Based on practical experiments, the fundamental requirements and working conditions for SANS experiments, including sample preparation, detector calibration, standard sample selection and preliminary data processing, are described. (authors)

  17. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
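The kappa statistic mentioned above corrects raw agreement between a model's predictions and observations for the agreement expected by chance. A minimal sketch of Cohen's kappa for binary presence/absence labels:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement between two raters (or a model
    and the truth) corrected for chance agreement."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative presence (1) / absence (0) predictions vs. observations
model = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(model, truth))  # 0.5
```

One source of the controversy noted above is that kappa depends on trait prevalence, so it can differ between models for reasons unrelated to model quality.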

  18. Rapid Sampling from Sealed Containers

    International Nuclear Information System (INIS)

    Johnston, R.G.; Garcia, A.R.E.; Martinez, R.K.; Baca, E.T.

    1999-01-01

    The authors have developed several different types of tools for sampling from sealed containers. These tools allow the user to rapidly drill into a closed container, extract a sample of its contents (gas, liquid, or free-flowing powder), and permanently reseal the point of entry. This is accomplished without exposing the user or the environment to the container contents, even while drilling. The entire process is completed in less than 15 seconds for a 55 gallon drum. Almost any kind of container can be sampled (regardless of the materials) with wall thicknesses up to 1.3 cm and internal pressures up to 8 atm. Samples can be taken from the top, sides, or bottom of a container. The sampling tools are inexpensive, small, and easy to use. They work with any battery-powered hand drill. This allows considerable safety, speed, flexibility, and maneuverability. The tools also permit the user to rapidly attach plumbing, a pressure relief valve, alarms, or other instrumentation to a container. Possible applications include drum venting, liquid transfer, container flushing, waste characterization, monitoring, sampling for archival or quality control purposes, emergency sampling by rapid response teams, counter-terrorism, non-proliferation and treaty verification, and use by law enforcement personnel during drug or environmental raids

  19. Quantum Metropolis sampling.

    Science.gov (United States)

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
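The classical Metropolis algorithm that the quantum version generalizes fits in a few lines: propose a symmetric random move and accept it with probability min(1, p(x′)/p(x)). A sketch sampling a standard normal as a toy target (parameters are illustrative):

```python
import math, random

def metropolis_sample(log_prob, x0, n_steps, step=1.0, seed=0):
    """Classical Metropolis algorithm with a symmetric uniform proposal:
    accept a move with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        log_alpha = log_prob(proposal) - log_prob(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, known only up to normalization
samples = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0, n_steps=50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(var, 3))  # near 0 and 1
```

Only the ratio p(x′)/p(x) is ever needed, which is why the method works for unnormalized distributions; the quantum version described above plays the analogous trick for eigenstates of a Hamiltonian.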

  20. Small cell glioblastoma or small cell carcinoma

    DEFF Research Database (Denmark)

    Hilbrandt, Christine; Sathyadas, Sathya; Dahlrot, Rikke H

    2013-01-01

    was admitted to the hospital with left-sided loss of motor function. A MRI revealed a 6 cm tumor in the right temporoparietal area. The histology was consistent with both glioblastoma multiforme (GBM) and small cell lung carcinoma (SCLC) but IHC was suggestive of a SCLC metastasis. PET-CT revealed...

  1. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small set of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulated networks (scale-free, random, and small-world) and on two real networks. The experimental results show that the two proposed sampling methods recover the true network structure characteristics, as reflected by the clustering coefficient, Bonacich centrality and average path length, much better than existing methods, especially when the sampling rate is low.
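The core idea — preferentially expanding toward high-degree nodes — can be sketched as a degree-biased variant of snowball sampling. This is an illustration of the idea on a toy graph, not the authors' exact algorithms:

```python
def snowball_high_degree(adj, seed, budget, fanout=3):
    """Snowball sampling that expands the highest-degree unvisited
    neighbors first, so hubs enter the sample early."""
    visited, frontier = set(), [seed]
    while frontier and len(visited) < budget:
        node = frontier.pop(0)
        if node in visited:
            continue
        visited.add(node)
        # Prefer neighbors with the most connections
        nbrs = sorted(adj[node], key=lambda v: len(adj[v]), reverse=True)
        frontier.extend(n for n in nbrs[:fanout] if n not in visited)
    return visited

# Toy graph: node 0 is a hub, nodes 4-6 form a small chain off it
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0],
       4: [0, 5], 5: [0, 4, 6], 6: [5]}
print(snowball_high_degree(adj, seed=1, budget=4))
```

Starting from a peripheral node, the walk reaches the hub in one step and then expands its best-connected neighbors first, which is what makes such samples informative at low sampling rates.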

  2. On the Sampling

    OpenAIRE

    Güleda Doğan

    2017-01-01

    This editorial is on statistical sampling, which is one of the two most common reasons for editorial rejection from our journal, Turkish Librarianship. The stages of quantitative research, the stage at which sampling takes place, the importance of sampling for research, deciding on sample size, and sampling methods are summarised briefly.

  3. Information sampling behavior with explicit sampling costs

    Science.gov (United States)

    Juni, Mordechai Z.; Gureckis, Todd M.; Maloney, Laurence T.

    2015-01-01

    The decision to gather information should take into account both the value of information and its accrual costs in time, energy and money. Here we explore how people balance the monetary costs and benefits of gathering additional information in a perceptual-motor estimation task. Participants were rewarded for touching a hidden circular target on a touch-screen display. The target’s center coincided with the mean of a circular Gaussian distribution from which participants could sample repeatedly. Each “cue” — sampled one at a time — was plotted as a dot on the display. Participants had to repeatedly decide, after sampling each cue, whether to stop sampling and attempt to touch the hidden target or continue sampling. Each additional cue increased the participants’ probability of successfully touching the hidden target but reduced their potential reward. Two experimental conditions differed in the initial reward associated with touching the hidden target and the fixed cost per cue. For each condition we computed the optimal number of cues that participants should sample, before taking action, to maximize expected gain. Contrary to recent claims that people gather less information than they objectively should before taking action, we found that participants over-sampled in one experimental condition, and did not significantly under- or over-sample in the other. Additionally, while the ideal observer model ignores the current sample dispersion, we found that participants used it to decide whether to stop sampling and take action or continue sampling, a possible consequence of imperfect learning of the underlying population dispersion across trials. PMID:27429991
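The optimal number of cues can be computed directly for this task: for a circular Gaussian with per-axis standard deviation σ, the distance of the mean of n cues from the target center is Rayleigh distributed, so it lands within radius r with probability 1 − exp(−r²n/(2σ²)), while each cue reduces the reward by a fixed cost. A sketch with illustrative parameters (not those of the paper):

```python
import math

def expected_gain(n, reward, cost, sigma, radius):
    """Expected gain after n cues: hit probability for the mean of n
    draws from a circular Gaussian (sd sigma per axis) landing within
    `radius`, times the reward left after paying n sampling costs."""
    p_hit = 1.0 - math.exp(-radius**2 * n / (2 * sigma**2))
    return (reward - n * cost) * p_hit

# Illustrative parameters: find the gain-maximizing number of cues
reward, cost, sigma, radius = 100.0, 2.0, 30.0, 10.0
gains = {n: expected_gain(n, reward, cost, sigma, radius) for n in range(1, 41)}
best_n = max(gains, key=gains.get)
print(best_n, round(gains[best_n], 2))
```

An "over-sampler" in the experiment's sense would keep drawing cues past this argmax, paying more in costs than the extra hit probability is worth.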

  4. Small but super

    International Nuclear Information System (INIS)

    Donald, R.L.

    1994-01-01

    This paper compares the advantages and disadvantages of large and small gas utility companies. It discusses construction, gaining markets, technology advances, pricing, and customer service. The paper includes discussions from four chairmen of small utility companies, who describe their perceived position among the larger companies. It also describes the methods small companies use to unite and voice their opinions on issues of state and national significance

  5. Small Business Procurement Event

    Science.gov (United States)

    2014-08-13

    NDIA 27th Navy Gold Coast Small Business Procurement Event, 12-13 Aug 2014, San Diego, CA. Performing organization: Department of the Navy, Office of Small Business Programs. Approved for public release; distribution unlimited.

  6. Minijets at small x

    International Nuclear Information System (INIS)

    Landshoff, P.V.

    1994-01-01

    Nonperturbative pomeron exchange at high energy includes minijet production. Minijets are jets whose transverse momentum is so small that they are difficult, or even impossible, to detect experimentally. At moderate Q² it is responsible for the small-x behaviour of νW₂. Hence minijet production should be a feature of deep inelastic scattering at small x. (author). 9 refs., 7 figs

  7. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
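The key fact about the sampling distribution of the mean — that its standard error shrinks as σ/√n — is easy to demonstrate by simulation, which is one classroom-friendly way to build the understanding described above:

```python
import random, statistics

def sampling_distribution_of_mean(population, n, reps, rng):
    """Draw many samples of size n and collect the sample means; the
    standard deviation of these means approximates sigma / sqrt(n)."""
    return [statistics.fmean(rng.sample(population, n)) for _ in range(reps)]

rng = random.Random(7)
# Synthetic population with mean 50 and standard deviation 10
population = [rng.gauss(50, 10) for _ in range(100_000)]

for n in (4, 16, 64):
    means = sampling_distribution_of_mean(population, n, reps=2000, rng=rng)
    print(n, round(statistics.stdev(means), 2))  # ~ 10/sqrt(n): 5, 2.5, 1.25
```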

  8. Canadian small wind market

    International Nuclear Information System (INIS)

    Moorhouse, E.

    2010-01-01

    This PowerPoint presentation discussed initiatives and strategies adopted by the Canadian Wind Energy Association (CanWEA) to support the development of Canada's small wind market. The general public has shown a significant interest in small wind projects of 300 kW. Studies have demonstrated that familiarity and comfort with small wind projects can help to ensure the successful implementation of larger wind projects. Small wind markets include residential, farming and commercial, and remote community applications. The results of CanWEA market survey show that the small wind market grew by 78 percent in 2008 over 2007, and again in 2009 by 32 percent over 2008. The average turbine size is 1 kW. A total of 11,000 turbines were purchased in 2007 and 2008. Global small wind market growth increased by 110 percent in 2008, and the average turbine size was 2.4 kW. Eighty-seven percent of the turbines made by Canadian mid-size wind turbine manufacturers are exported, and there is now a significant risk that Canada will lose its competitive advantage in small wind manufacturing as financial incentives have not been implemented. American and Canadian-based small wind manufacturers were listed, and small wind policies were reviewed. The presentation concluded with a set of recommendations for future incentives, educational programs and legislation. tabs., figs.

  9. Small Intestine Disorders

    Science.gov (United States)

    ... disease Crohn's disease Infections Intestinal cancer Intestinal obstruction Irritable bowel syndrome Ulcers, such as peptic ulcer Treatment of disorders of the small intestine depends on the cause.

  10. Radioactivity in environmental samples

    International Nuclear Information System (INIS)

    Fornaro, Laura

    2001-01-01

    The objective of this practical work is to familiarize the student with radioactivity measurements in environmental samples. The chosen samples were a natural potassium salt, a uranium or thorium salt, and a sample of drinking water

  11. DNA Sampling Hook

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The DNA Sampling Hook is a significant improvement on a method of obtaining a tissue sample from a live fish in situ from an aquatic environment. A tissue sample...

  12. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  13. Network and adaptive sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Combining the two statistical techniques of network sampling and adaptive sampling, this book illustrates the advantages of using them in tandem to effectively capture sparsely located elements in unknown pockets. It shows how network sampling is a reliable guide in capturing inaccessible entities through linked auxiliaries. The text also explores how adaptive sampling is strengthened in information content through subsidiary sampling with devices to mitigate unmanageable expanding sample sizes. Empirical data illustrates the applicability of both methods.
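    The adaptive idea described above — expand the sample around any unit that turns out to be "hot" — can be sketched in a few lines. This is an illustrative adaptive cluster sampling sketch on a 1-D grid, not code from the book; the unit layout, threshold, and neighbour rule are assumptions made for the example.

```python
def adaptive_cluster_sample(counts, initial, threshold):
    """Adaptive cluster sampling on a 1-D grid of unit counts.

    Starting from an initial sample of units, whenever a sampled unit's
    count meets the threshold its neighbours are added too, repeating
    until no newly added unit qualifies.
    """
    sampled = set(initial)
    frontier = [u for u in sampled if counts[u] >= threshold]
    while frontier:
        u = frontier.pop()
        for v in (u - 1, u + 1):  # left/right neighbours on the grid
            if 0 <= v < len(counts) and v not in sampled:
                sampled.add(v)
                if counts[v] >= threshold:
                    frontier.append(v)
    return sorted(sampled)

# A sparse population clustered in one unknown "pocket" (units 10-13).
counts = [0] * 20
counts[10:14] = [5, 8, 6, 4]

# The initial sample happens to hit the pocket's edge at unit 10;
# the adaptive step then pulls in the rest of the cluster.
print(adaptive_cluster_sample(counts, initial=[2, 7, 10, 16], threshold=1))
# → [2, 7, 9, 10, 11, 12, 13, 14, 16]
```

    Note how a single initial hit at unit 10 recovers the whole pocket — exactly the "capturing sparsely located elements in unknown pockets" advantage the abstract describes, at the cost of a sample size that expands beyond the initial draw.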

  14. Small angle neutron scattering and small angle X-ray scattering ...

    Indian Academy of Sciences (India)

    Abstract. The morphology of carbon nanofoam samples comprising platinum nanoparticles dispersed in the matrix was characterized by small angle neutron scattering (SANS) and small angle X-ray scattering (SAXS) techniques. Results show that the structure of pores of the carbon matrix exhibits a mass (pore) fractal nature ...
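    For a mass fractal, the scattering intensity in the fractal regime follows a power law, I(q) ∝ q^(−D) with 1 < D < 3, so the fractal dimension can be read directly off the slope of a log-log plot. A minimal sketch with synthetic data (the prefactor, q-range, and D = 2.5 are assumptions for illustration, not values from the paper):

```python
import math

# Synthetic power-law scattering from a mass fractal with dimension D = 2.5:
# I(q) = A * q**(-D) in the fractal regime.
D_true = 2.5
qs = [0.01 * 1.3**k for k in range(20)]
intensity = [3.0 * q**(-D_true) for q in qs]

# Least-squares slope of log I versus log q recovers -D.
xs = [math.log(q) for q in qs]
ys = [math.log(i) for i in intensity]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
D_fit = -slope
print(round(D_fit, 3))  # → 2.5
```

    Real SANS/SAXS data would of course carry noise and deviate from the power law outside the fractal regime, so the fit window must be chosen where the log-log plot is actually linear.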

  15. Loss of sexual reproduction and dwarfing in a small metazoan.

    Directory of Open Access Journals (Sweden)

    Claus-Peter Stelzer

    2010-09-01

    Asexuality has major theoretical advantages over sexual reproduction, yet newly formed asexual lineages rarely endure. The success, or failure, of such lineages is affected by their mechanism of origin, because it determines their initial genetic makeup and variability. Most previously described mechanisms imply that asexual lineages are randomly frozen subsamples of a sexual population. We found that transitions to obligate parthenogenesis (OP) in the rotifer Brachionus calyciflorus, a small freshwater invertebrate which normally reproduces by cyclical parthenogenesis, were controlled by simple Mendelian inheritance. Pedigree analysis suggested that obligate parthenogens were homozygous for a recessive allele, which caused an inability to respond to the chemical signals that normally induce sexual reproduction in this species. Alternative mechanisms, such as ploidy changes, could be ruled out on the basis of flow cytometric measurements and genetic marker analysis. Interestingly, obligate parthenogens were also dwarfs (approximately 50% smaller than cyclical parthenogens), indicating pleiotropy or linkage with genes that strongly affect body size. We found no adverse effects of OP on survival or fecundity. This mechanism of inheritance implies that genes causing OP may evolve within sexual populations and remain undetected in the heterozygous state long before they become frequent enough to actually cause a transition to asexual reproduction. In this process, genetic variation at other loci might become linked to OP genes, leading to non-random associations between asexuality and other phenotypic traits.
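    The last point — a recessive OP allele hiding in heterozygotes — is easy to quantify with standard Hardy-Weinberg genotype frequencies. The allele frequency q = 0.05 below is an illustrative assumption, not a value from the study:

```python
def genotype_frequencies(q):
    """Hardy-Weinberg genotype frequencies for a recessive allele at frequency q."""
    p = 1.0 - q
    return {
        "homozygous_dominant": p * p,    # normal cyclical parthenogens
        "heterozygous_carrier": 2 * p * q,  # carriers, phenotypically normal
        "homozygous_recessive": q * q,   # obligate parthenogens
    }

freqs = genotype_frequencies(0.05)
ratio = freqs["heterozygous_carrier"] / freqs["homozygous_recessive"]
print(round(ratio))  # → 38
```

    At q = 0.05, carriers outnumber obligate parthenogens roughly 38 to 1, which is why such an allele can persist and spread essentially unnoticed before homozygotes appear in appreciable numbers.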

  16. Small School Reform

    Directory of Open Access Journals (Sweden)

    Carroll E. Bronson

    2013-05-01

    This qualitative ethnographic case study explored the evolution of a public urban high school in its 3rd year of small school reform. The study focused on how the high school proceeded from its initial concept, moving to a small school program, and emerging as a new small high school. Data collection included interviews, observations, and document review to develop a case study of one small high school sharing a multiplex building. The first key finding, “Too Many Pieces, Not Enough Glue,” revealed that the school had too many new programs starting at once and they lacked a clear understanding of their concept and vision for their new small school, training on the Montessori philosophies, teaching and learning in small schools, and how to operate within a teacher-cooperative model. The second key finding, “A Continuous Struggle,” revealed that the shared building space presented problems for teachers and students. District policies remain unchanged, resulting in staff and students resorting to activist approaches to get things done. These findings offer small school reform leaders suggestions for developing and sustaining a small school culture and cohesion despite the pressures to revert back to top-down, comprehensive high school norms.

  17. Sensitive Small Area Photometer

    Science.gov (United States)

    Levenson, M. D.

    1970-01-01

    Describes a simple photometer capable of measuring small light intensities over small areas. The inexpensive, easy-to- construct instrument is intended for use in a student laboratory to measure the light intensities in a diffraction experiment from single or multiple slits. Typical experimental results are presented along with the theoretical…

  18. Industrial Education. "Small Engines".

    Science.gov (United States)

    Parma City School District, OH.

    Part of a series of curriculum guides dealing with industrial education in junior high schools, this guide provides the student with information and manipulative experiences on small gasoline engines. Included are sections on shop adjustment, safety, small engines, internal combustion, engine construction, four stroke engines, two stroke engines,…

  19. Small States in Europe

    DEFF Research Database (Denmark)

    This book offers an accessible, coherent and informative analysis of contemporary and future foreign policy challenges facing small states in Europe.

  20. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times as likely to be sampled as slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and this may affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
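    The core effect — a trait correlated with catchability biases the sample even when sampling is "random" — can be seen with a deterministic expected-value sketch. The growth rates and the linear catchability model below are illustrative assumptions, not the paper's data:

```python
# Equal densities of slow-, intermediate- and fast-growing fish,
# mirroring the seeded-lake design; catchability rises with growth
# rate so that fast fish are twice as catchable as slow fish.
growth = [1.0] * 100 + [2.0] * 100 + [3.0] * 100
catchability = [0.10 + 0.05 * (g - 1.0) for g in growth]  # 0.10, 0.15, 0.20

pop_mean = sum(growth) / len(growth)
# Expected trait mean of the catch: weight each fish by its capture probability.
catch_mean = sum(c * g for c, g in zip(catchability, growth)) / sum(catchability)

print(pop_mean, round(catch_mean, 2))  # → 2.0 2.22
```

    The catch overestimates mean growth rate by about 11% even though every individual had a nonzero chance of capture — no amount of careful randomization of the trapping protocol removes a bias that lives in the animals' own behavior.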