WorldWideScience

Sample records for included small sample

  1. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
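
    As a hedged illustration of the kind of correction discussed above (the distribution, sample size, and cutoff below are assumptions, not the authors' setup): the one-term Edgeworth expansion for the standardized sample mean, P(T ≤ x) ≈ Φ(x) − γ(x² − 1)φ(x)/(6√n) with γ the population skewness, can be compared against the plain normal approximation and a Monte Carlo estimate of the true tail probability.

      # Hypothetical sketch (not the authors' code): how far a small-sample tail
      # probability can drift from the normal approximation, and how a one-term
      # Edgeworth correction narrows the gap for a skewed population.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n, reps, z = 10, 200_000, 2.326          # nominal one-sided level ~ 0.01

      # Skewed population: Exponential(1) has mean 1, sd 1, skewness 2.
      mu, sigma, gamma = 1.0, 1.0, 2.0
      samples = rng.exponential(scale=1.0, size=(reps, n))
      t = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma

      mc_tail = np.mean(t > z)                 # "true" tail probability by simulation
      normal_tail = 1.0 - norm.cdf(z)          # declared nominal level
      edgeworth_tail = normal_tail + gamma * (z**2 - 1) * norm.pdf(z) / (6 * np.sqrt(n))

      print(f"Monte Carlo P(T > z): {mc_tail:.4f}")    # well above the nominal 0.01 here
      print(f"Normal approximation: {normal_tail:.4f}")
      print(f"Edgeworth, one term:  {edgeworth_tail:.4f}")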

  2. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
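
    The "neater representations" themselves are not quoted in this record; as a hedged illustration of the kind of identity involved, the sample variance can always be written as the sum of squared pairwise differences divided by n(n − 1), which for n = 3 or n = 4 reduces the calculation to integer arithmetic divided by 6 or 12.

      # A minimal sketch (not necessarily the authors' representation): sample
      # variance via the sum of squared pairwise differences, easy to do by hand
      # for integer observations when n is 3 or 4.
      from itertools import combinations
      import statistics

      def variance_from_pairs(xs):
          """Sample variance as sum_{i<j} (x_i - x_j)^2 / (n(n-1))."""
          n = len(xs)
          return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

      data = [3, 7, 8]                       # n = 3: divide the pairwise sum by 6
      print(variance_from_pairs(data))       # (16 + 25 + 1) / 6 = 7.0
      print(statistics.variance(data))       # agrees with the usual (n - 1) formula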

  3. Small sample whole-genome amplification

    Science.gov (United States)

    Hara, Christine; Nguyen, Christine; Wheeler, Elizabeth; Sorensen, Karen; Arroyo, Erin; Vrankovich, Greg; Christian, Allen

    2005-11-01

    Many challenges arise when trying to amplify and analyze human samples collected in the field due to limitations in sample quantity and contamination of the starting material. Tests such as DNA fingerprinting and mitochondrial typing require a certain sample size and are carried out in large volume reactions; in cases where insufficient sample is present, whole genome amplification (WGA) can be used. WGA allows very small quantities of DNA to be amplified in a way that enables subsequent DNA-based tests to be performed. A limiting step to WGA is sample preparation. To minimize the necessary sample size, we have developed two modifications of WGA: the first allows for an increase in amplified product from small, nanoscale, purified samples with the use of carrier DNA while the second is a single-step method for cleaning and amplifying samples all in one column. Conventional DNA cleanup involves binding the DNA to silica, washing away impurities, and then releasing the DNA for subsequent testing. We have eliminated losses associated with incomplete sample release, thereby decreasing the required amount of starting template for DNA testing. Both techniques address the limitations of sample size by providing ample copies of genomic samples. Carrier DNA, included in our WGA reactions, can be used when amplifying samples with the standard purification method, or can be used in conjunction with our single-step DNA purification technique to potentially further decrease the amount of starting sample necessary for future forensic DNA-based assays.

  4. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    Full Text Available The branch of social research that uses small samples to produce micro data today faces operational difficulties because of the privacy law. The privacy code is an important and necessary law, because it guarantees the rights of Italian citizens, as already happens in other countries of the world. However, it does not seem appropriate to further limit the data-production possibilities of the national research centres, possibilities that are moreover already compromised by insufficient funds, a problem that is becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples for activities of direct interest and importance to citizens, such as the opinion-poll data collections carried out by the research centres of the Italian CNR and by some universities.

  5. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique was developed for constructing adequate mathematical models from small passive samples under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be obtained.

  6. Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.

    Science.gov (United States)

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek, whenever possible, to explore their implications in more general problems that appear in research with small samples but concern all areas of prevention research. This special section comprises two parts. The first aims to provide input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of their broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.

  7. Small-sample-worth perturbation methods

    International Nuclear Information System (INIS)

    1985-01-01

    It has been assumed that the perturbed region, R_p, is large enough so that: (1) even without a great deal of biasing there is a substantial probability that an average source-neutron will enter it; and (2) once having entered, the neutron is likely to make several collisions in R_p during its lifetime. Unfortunately, neither assumption is valid for the typical configurations one encounters in small-sample-worth experiments. In such experiments one measures the reactivity change which is induced when a very small void in a critical assembly is filled with a sample of some test material. Only a minute fraction of the fission-source neutrons ever gets into the sample and, of those neutrons that do, most emerge uncollided. Monte Carlo small-sample-worth perturbation computations are described

  8. Gaseous radiocarbon measurements of small samples

    International Nuclear Information System (INIS)

    Ruff, M.; Szidat, S.; Gaeggeler, H.W.; Suter, M.; Synal, H.-A.; Wacker, L.

    2010-01-01

    Radiocarbon dating by means of accelerator mass spectrometry (AMS) is a well-established method for samples containing carbon in the milligram range. However, the measurement of small samples containing less than 50 μg carbon often fails. It is difficult to graphitise these samples and the preparation is prone to contamination. To avoid graphitisation, a solution can be the direct measurement of carbon dioxide. The MICADAS, the smallest accelerator for radiocarbon dating in Zurich, is equipped with a hybrid Cs sputter ion source. It allows the measurement of both graphite targets and gaseous CO2 samples without any rebuilding. This work presents experiences dealing with small samples containing 1-40 μg carbon. 500 unknown samples from different environmental research fields have been measured so far. Most of the samples were measured with the gas ion source. These data are compared with earlier measurements of small graphite samples. The performance of the two different techniques is discussed and the main contributions to the blank are determined. An analysis of blank and standard data measured over the years allowed a quantification of the contamination, which was found to be of the order of 55 ng and 750 ng carbon (50 pMC) for the gaseous and the graphite samples, respectively. For quality control, a number of certified standards were measured using the gas ion source to demonstrate the reliability of the data.

  9. A Geology Sampling System for Small Bodies

    Science.gov (United States)

    Naids, Adam J.; Hood, Anthony D.; Abell, Paul; Graff, Trevor; Buffington, Jesse

    2016-01-01

    Human exploration of microgravity bodies is being investigated as a precursor to a Mars surface mission. Asteroids, comets, dwarf planets, and the moons of Mars all fall into this microgravity category and some are being discussed as potential mission targets. Obtaining geological samples for return to Earth will be a major objective for any mission to a small body. Currently, the knowledge base for geology sampling in microgravity is in its infancy. Humans interacting with non-engineered surfaces in a microgravity environment pose unique challenges. In preparation for such missions, a team at the NASA Johnson Space Center has been working to gain experience on how to safely obtain numerous sample types in such an environment. This paper describes the types of samples the science community is interested in, highlights notable prototype work, and discusses an integrated geology sampling solution.

  10. Accelerator mass spectrometry of small biological samples.

    Science.gov (United States)

    Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran

    2008-12-01

    Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of micrograms. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A 12C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.

  11. Transportable high sensitivity small sample radiometric calorimeter

    International Nuclear Information System (INIS)

    Wetzel, J.R.; Biddle, R.S.; Cordova, B.S.; Sampson, T.E.; Dye, H.R.; McDow, J.G.

    1998-01-01

    This presentation describes a new small-sample, high-sensitivity, transportable radiometric calorimeter that can be operated in different modes, contains an electrical calibration method, and can be used to develop secondary standards. Data taken from preliminary tests will be presented to indicate the precision and accuracy of the instrument. The calorimeter and temperature-controlled bath at present require only a 30-in. by 20-in. tabletop area. The calorimeter is operated from a laptop computer system using a unique measurement module capable of monitoring all necessary calorimeter signals. The calorimeter can be operated in the normal calorimeter equilibration mode or as a comparison instrument, using twin chambers and an external electrical calibration method. The sample chamber is 0.75 in. (1.9 cm) in diameter by 2.5 in. (6.35 cm) long. This size will accommodate most 238Pu heat standards manufactured in the past. The power range runs from 0.001 W to <20 W; the high end is limited only by sample size

  12. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However, this approach does not allow an assessment of the force multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments, it is expensive, both in terms of time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and resampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
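
    A hedged sketch of the kind of Monte Carlo power comparison described above (the paper's actual designs and distributions are not given in this record, so the shift, sample sizes, and replication counts below are illustrative assumptions; the Wald-Wolfowitz runs test is omitted for brevity):

      # Estimate power of the Mann-Whitney U test and of a simple permutation
      # (resampling) test of the mean difference for two small samples that
      # differ by a location shift.
      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(1)
      n, shift, alpha, reps, n_perm = 8, 1.0, 0.05, 1000, 499

      def perm_pvalue(x, y):
          """Two-sided permutation p-value for the difference in means."""
          observed = abs(x.mean() - y.mean())
          pooled = np.concatenate([x, y])
          hits = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              if abs(pooled[:n].mean() - pooled[n:].mean()) >= observed:
                  hits += 1
          return (hits + 1) / (n_perm + 1)

      rej_mw = rej_perm = 0
      for _ in range(reps):
          x = rng.normal(0.0, 1.0, n)
          y = rng.normal(shift, 1.0, n)
          rej_mw += mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
          rej_perm += perm_pvalue(x, y) < alpha

      print(f"Mann-Whitney power: {rej_mw / reps:.3f}")
      print(f"Permutation power:  {rej_perm / reps:.3f}")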

  13. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  14. Advances in SPECT Instrumentation (Including Small Animal Scanners). Chapter 4

    International Nuclear Information System (INIS)

    Di Domenico, G.; Zavattini, G.

    2009-01-01

    Major efforts have been devoted to the development of the positron emission tomography (PET) imaging modality over the last few decades. Recently, a new surge of interest in single photon emission computed tomography (SPECT) technology has occurred, particularly after the introduction of the hybrid SPECT-CT imaging system. This has led to a flourishing of investigations in new types of detectors and collimators, and to more accurate refinement of reconstruction algorithms. Along with SPECT-CT, new, fast gamma cameras have been developed for dedicated cardiac imaging. The existing gap between PET and SPECT in sensitivity and spatial resolution is progressively decreasing, and this trend is particularly apparent in the field of small animal imaging, where the most important advances have been reported in SPECT tomographs. An outline of the basic features of SPECT technology, and of recent developments in SPECT instrumentation for both clinical applications and basic biological research on animal models, is described. (author)

  15. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    International Nuclear Information System (INIS)

    Kathy Bennett; Sherri Sherwood; Rhonda Robinson

    2006-01-01

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  16. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Kathy; Sherwood, Sherri; Robinson, Rhonda

    2006-08-15

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  17. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation with small samples, and the Bayes Bootstrap method is commonly used in engineering practice. The Bayes Bootstrap method, however, has its own limitations. In this article an improvement to the Bayes Bootstrap method is given: the improved method enlarges the sample by numerical simulation without changing the circumstances represented by the original small sample, and it can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to specific small-sample problems, and the effectiveness and practicability of the improved Bootstrap method are demonstrated.
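
    For context only, and not as a reproduction of the improved method described above: the sketch below shows a plain Bayesian bootstrap (Dirichlet-weighted) percentile interval for the mean of a small sample, the kind of baseline such improvements start from; the data values are invented.

      # Standard Bayesian bootstrap interval for the mean of a small sample.
      import numpy as np

      rng = np.random.default_rng(2)
      sample = np.array([9.8, 10.4, 9.6, 10.9, 10.1])   # hypothetical n = 5 sample
      draws = 10_000

      weights = rng.dirichlet(np.ones(sample.size), size=draws)  # flat Dirichlet weights
      posterior_means = weights @ sample                         # one weighted mean per draw

      lo, hi = np.percentile(posterior_means, [2.5, 97.5])
      print(f"95% Bayesian-bootstrap interval for the mean: ({lo:.2f}, {hi:.2f})")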

  18. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    Science.gov (United States)

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small-volume samples. Pulsed-dc-ESI utilizes a constant high voltage to remotely induce the generation of single-polarity pulsed electrospray. This method significantly boosts sample economy, so that several minutes of MS signal duration can be obtained from a sample of merely picoliter volume. The elongated MS signal duration enables us to collect abundant MS(2) information on components of interest in a small-volume sample for systematic analysis. The method has been successfully applied to single-cell metabolomics analysis. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples.

  19. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  20. Development of electric discharge equipment for small specimen sampling

    International Nuclear Information System (INIS)

    Okamoto, Koji; Kitagawa, Hideaki; Kusumoto, Junichi; Kanaya, Akihiro; Kobayashi, Toshimi

    2009-01-01

    We have developed on-site electric discharge sampling equipment that can effectively take samples, such as small specimens, from the surface portion of plant components. Compared with conventional sampling equipment, our equipment can take samples that are thinner in depth and larger in area. In addition, the effect on the plant component can be kept to a minimum, and the thermally affected zone produced in the material by the electric discharge is small enough to be neglected. Our equipment is therefore well suited to taking samples for various tests such as residual life evaluation.

  1. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  2. A thermostat for precise measurements of thermoresistance of small samples

    International Nuclear Information System (INIS)

    Rusinowski, Z.; Slowinski, B.; Winiewski, R.

    1996-01-01

    In this work, a simple experimental set-up is described in which special attention is paid to the important problem of thermal stability in thermoresistance measurements of small manganin samples

  3. Including Below Detection Limit Samples in Post Decommissioning Soil Sample Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Hwan; Yim, Man Sung [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    To meet the required standards, the site owner has to show that the soil at the facility has been sufficiently cleaned up. To do this, one must know the contamination of the soil at the site prior to clean-up. This involves sampling that soil to identify the degree of contamination. However, there is a technical difficulty in determining how much decontamination should be done. The problem arises when measured samples are below the detection limit. Regulatory guidelines for site reuse after decommissioning are commonly challenged because the majority of the activity in the soil is at or below the limit of detection. Using additional statistical analyses of contaminated soil after decommissioning is expected to have the following advantages: a better and more reliable probabilistic exposure assessment, better economics (lower project costs) and improved communication with the public. This research will develop an approach that defines an acceptable method for demonstrating compliance of decommissioned NPP sites and validates that compliance. Soil samples from NPPs often contain censored data. Conventional methods for dealing with censored data sets are statistically biased and limited in their usefulness.

  4. The Impact of Including Below Detection Limit Samples in Post Decommissioning Soil Sample Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Hwan; Yim, Man-Sung [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    To meet the required standards, the site owner has to show that the soil at the facility has been sufficiently cleaned up. To do this, one must know the contamination of the soil at the site prior to clean-up. This involves sampling that soil to identify the degree of contamination. However, there is a technical difficulty in determining how much decontamination should be done. The problem arises when measured samples are below the detection limit. Regulatory guidelines for site reuse after decommissioning are commonly challenged because the majority of the activity in the soil is at or below the limit of detection. Using additional statistical analyses of contaminated soil after decommissioning is expected to have the following advantages: a better and more reliable probabilistic exposure assessment, better economics (lower project costs) and improved communication with the public. This research will develop an approach that defines an acceptable method for demonstrating compliance of decommissioned NPP sites and validates that compliance. Soil samples from NPPs often contain censored data. Conventional methods for dealing with censored data sets are statistically biased and limited in their usefulness. In this research, additional methods are applied to real data from a monazite manufacturing factory.
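
    A hedged sketch of one possible "additional statistical analysis" for such censored data (not necessarily the authors' method): maximum-likelihood fitting of a lognormal model that treats below-detection-limit readings as left-censored, compared with the common detection-limit/2 substitution; the distribution, detection limit, and units are assumptions for illustration.

      # Censored-data MLE vs. DL/2 substitution for left-censored activity data.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      true_mu, true_sigma, dl = -1.0, 1.0, 0.5           # lognormal parameters, detection limit
      activity = rng.lognormal(true_mu, true_sigma, 40)  # simulated soil activities
      detected = activity >= dl                          # readings below dl are censored

      def neg_loglik(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)
          z = (np.log(activity[detected]) - mu) / sigma
          ll = np.sum(norm.logpdf(z) - np.log(sigma * activity[detected]))  # detects: lognormal pdf
          ll += np.sum(~detected) * norm.logcdf((np.log(dl) - mu) / sigma)  # non-detects: P(X < dl)
          return -ll

      fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
      mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])

      substituted = np.where(detected, activity, dl / 2.0)   # naive DL/2 fill-in
      print("censored-MLE mean:     ", np.exp(mu_hat + sigma_hat**2 / 2))
      print("DL/2-substitution mean:", substituted.mean())
      print("true mean:             ", np.exp(true_mu + true_sigma**2 / 2))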

  5. Radioenzymatic assay for trimethoprim in very small serum samples.

    OpenAIRE

    Yogev, R; Melick, C; Tan-Pong, L

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved.

  6. Radioenzymatic assay for trimethoprim in very small serum samples

    International Nuclear Information System (INIS)

    Yogev, R.; Melick, C.; Tan-Pong, L.

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved

  7. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO 2 -powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  8. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  9. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed

  10. Advances in Small Remotely Piloted Aircraft Communications and Remote Sensing in Maritime Environments including the Arctic

    Science.gov (United States)

    McGillivary, P. A.; Borges de Sousa, J.; Wackowski, S.; Walker, G.

    2011-12-01

    highlight use in the arctic of two different small remotely piloted aircraft (ScanEagle and RAVEN) for remote sensing of ice and ocean conditions as well as surveys of marine mammals. Finally, we explain how these can be used in future networked environments with DTN support not only for the collection of ocean and ice data for maritime domain awareness, but also for monitoring oil spill dynamics in high latitude environments, including spills in and under sea ice. The networked operation of heterogeneous air and ocean vehicle systems using DTN communications methods can provide unprecedented levels of spatial-temporal sampling resolution important to improving arctic remote sensing and maritime domain awareness capabilities.

  11. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. Generalized estimating equations (GEE) provide a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Radionuclides in small mammals of the Saskatchewan prairie, including implications for the boreal forest and Arctic tundra

    International Nuclear Information System (INIS)

    Thomas, P.A.

    1995-01-01

    The focus of the study reported was to collect and examine baseline data on radionuclides in small prairie mammal food chains and to assess the feasibility of using small mammals as radionuclide monitors in terrestrial ecosystems, in anticipation of possible future nuclear developments in northern Saskatchewan and the Northwest Territories. The study report begins with a literature review that summarizes existing data on radionuclides in small mammals, their food, the ambient environment in Canadian terrestrial ecosystems, principles of terrestrial radioecology, soil and vegetation studies, and food chain studies. It then describes a field study conducted to investigate small mammal food chains at three southwestern Saskatchewan prairie sites. Activities included collection and analysis of water, soil, grains, and foliage samples; trapping of small mammals such as mice and voles, and analysis of gastrointestinal tract samples; and determination of food chain transfer of selected radionuclides from soil to plants and to small mammals. Recommendations are made for future analyses and monitoring of small mammals. Appendices include information on radiochemical methods, soil/vegetation studies and small mammal studies conducted at northern Saskatchewan mine sites, and analyses of variance

  13. Multi-element analysis of small biological samples

    International Nuclear Information System (INIS)

    Rokita, E.; Cafmeyer, J.; Maenhaut, W.

    1983-01-01

    A method combining PIXE and INAA was developed to determine the elemental composition of small biological samples. The method needs virtually no sample preparation and less than 1 mg is sufficient for the analysis. The method was used for determining up to 18 elements in leaves taken from Cracow Herbaceous. The factors which influence the elemental composition of leaves and the possible use of leaves as an environmental pollution indicator are discussed

  14. Conversion of Small Algal Oil Sample to JP-8

    Science.gov (United States)

    2012-01-01

    [Slide text fragments] Hydroprocessing of algal oil to SPK in a small-scale UOP lab plant: down-flow trickle-bed configuration capable of retaining 25 cc of catalyst bed. The catalytic deoxygenation stage of the ... content, which, combined with the sample's acidity, is a challenge to reactor metallurgy. Nonetheless, an attempt was made to convert this sample to

  15. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
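
    A much-simplified sketch of the sampled-data idea (homogeneous first-order agents only, not the heterogeneous protocol analyzed in the paper): each agent updates its state using neighbour samples that arrive one sampling period late, and for a connected undirected graph with a sufficiently small sampling period the states still converge to the average of the initial values; the graph, sampling period, and initial states are invented.

      # Sampled-data consensus with a one-period delay for first-order agents.
      import numpy as np

      # Path graph on 4 agents: Laplacian L = D - A.
      A = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      L = np.diag(A.sum(axis=1)) - A

      h = 0.2                                   # sampling period (assumed small enough)
      x_prev = x = np.array([4.0, -1.0, 0.5, 2.5])
      for _ in range(200):
          x, x_prev = x - h * L @ x_prev, x     # control input uses the delayed sample
      print("final states:", np.round(x, 4))    # all close to the initial average 1.5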

  16. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  17. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  18. Systematic studies of small scintillators for new sampling calorimeter

    Indian Academy of Sciences (India)

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to produce better position resolution for the international linear collider (ILC) experiment. As part of this R & D study, small plastic scintillators of different sizes, thickness and wrapping reflectors are ...

  19. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  20. Automated Sampling and Extraction of Krypton from Small Air Samples for Kr-85 Measurement Using Atom Trap Trace Analysis

    International Nuclear Information System (INIS)

    Hebel, S.; Hands, J.; Goering, F.; Kirchner, G.; Purtschert, R.

    2015-01-01

    Atom-Trap-Trace-Analysis (ATTA) provides the capability of measuring the Krypton-85 concentration in microlitre amounts of krypton extracted from air samples of about 1 litre. This sample size is sufficiently small to allow for a range of applications, including on-site spot sampling and continuous sampling over periods of several hours. All samples can be easily handled and transported to an off-site laboratory for ATTA measurement, or stored and analyzed on demand. Bayesian sampling methodologies can be applied by blending samples for bulk measurement and performing in-depth analysis as required. Prerequisite for measurement is the extraction of a pure krypton fraction from the sample. This paper introduces an extraction unit able to isolate the krypton in small ambient air samples with high speed, high efficiency and in a fully automated manner using a combination of cryogenic distillation and gas chromatography. Air samples are collected using an automated smart sampler developed in-house to achieve a constant sampling rate over adjustable time periods ranging from 5 minutes to 3 hours per sample. The smart sampler can be deployed in the field and operate on battery for one week to take up to 60 air samples. This high flexibility of sampling and the fast, robust sample preparation are a valuable tool for research and the application of Kr-85 measurements to novel Safeguards procedures. (author)

  1. Testing of Small Graphite Samples for Nuclear Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Julie Chapman

    2010-11-01

    Accurately determining the mechanical properties of small irradiated samples is crucial to predicting the behavior of the overall irradiated graphite components within a Very High Temperature Reactor. The sample size allowed in a material test reactor, however, is limited, and this poses some difficulties with respect to mechanical testing. In the case of graphite with a larger grain size, a small sample may exhibit characteristics not representative of the bulk material, leading to inaccuracies in the data. A study to determine a potential size effect on the tensile strength was pursued under the Next Generation Nuclear Plant program. The study focused first on optimizing the tensile testing procedure identified in the American Society for Testing and Materials (ASTM) Standard C 781-08. Once the testing procedure was verified, a size effect was assessed by gradually reducing the diameter of the specimens. By monitoring the material response, a size effect was successfully identified.

  2. Exploratory Factor Analysis With Small Samples and Missing Data.

    Science.gov (United States)

    McNeish, Daniel

    2017-01-01

    Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small size requirements for EFA models; however, these models have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.

  3. Radiocarbon measurements of small gaseous samples at CologneAMS

    Science.gov (United States)

    Stolz, A.; Dewald, A.; Altenkirch, R.; Herb, S.; Heinze, S.; Schiffer, M.; Feuerstein, C.; Müller-Gatermann, C.; Wotte, A.; Rethemeyer, J.; Dunai, T.

    2017-09-01

    A second SO-110 B (Arnold et al., 2010) ion source was installed at the 6 MV CologneAMS for the measurement of gaseous samples. For the gas supply a dedicated device from Ionplus AG was connected to the ion source. Special effort was devoted to determine optimized operation parameters for the ion source, which give a high carbon current output and a high 14C- yield. The latter is essential in cases when only small samples are available. Additionally a modified immersion lens and modified target pieces were tested and the target position was optimized.

  4. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
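
    A rough sketch of a conventional MUSIC-type indicator built from the multi-static response matrix (not the multi-dimensional combinatorial sampling refinement proposed in the paper): the signal subspace is taken from the leading singular vectors, and the indicator peaks where the background Green's function vector lies in that subspace. The array geometry, wavenumber, and scatterer strengths below are invented, and the number of scatterers is assumed known.

      # Conventional MUSIC indicator for point scatterers (Born approximation).
      import numpy as np

      k = 2 * np.pi                                          # wavenumber (wavelength = 1)
      array_pts = np.column_stack([np.linspace(-5, 5, 21),   # 21-element line array
                                   np.zeros(21),
                                   np.full(21, -10.0)])
      scatterers = np.array([[1.0, 0.0, 0.0],
                             [-2.0, 0.0, 1.0]])              # two small scatterers

      def green(src, obs):
          """Free-space 3-D Green's function between a point and the array."""
          d = np.linalg.norm(obs - src, axis=-1)
          return np.exp(1j * k * d) / (4 * np.pi * d)

      # Multi-static response matrix K = G diag(tau) G^T.
      G = np.stack([green(s, array_pts) for s in scatterers], axis=1)
      K = G @ np.diag([1.0, 0.7]) @ G.T

      # Signal subspace from the leading singular vectors (rank = number of scatterers).
      U, s, _ = np.linalg.svd(K)
      U_sig = U[:, :len(scatterers)]

      def music_indicator(z):
          g = green(np.asarray(z, dtype=float), array_pts)
          residual = g - U_sig @ (U_sig.conj().T @ g)        # noise-subspace component
          return 1.0 / np.linalg.norm(residual)

      print(music_indicator([1.0, 0.0, 0.0]))                # large: on a scatterer
      print(music_indicator([3.0, 0.0, 2.0]))                # small: away from scatterers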

  5. Phytophthora have distinct endogenous small RNA populations that include short interfering and microRNAs.

    Directory of Open Access Journals (Sweden)

    Noah Fahlgren

    Full Text Available In eukaryotes, RNA silencing pathways utilize 20-30-nucleotide small RNAs to regulate gene expression, specify and maintain chromatin structure, and repress viruses and mobile genetic elements. RNA silencing was likely present in the common ancestor of modern eukaryotes, but most research has focused on plant and animal RNA silencing systems. Phytophthora species belong to a phylogenetically distinct group of economically important plant pathogens that cause billions of dollars in yield losses annually as well as ecologically devastating outbreaks. We analyzed the small RNA-generating components of the genomes of P. infestans, P. sojae and P. ramorum using bioinformatics, genetic, phylogenetic and high-throughput sequencing-based methods. Each species produces two distinct populations of small RNAs that are predominantly 21- or 25-nucleotides long. The 25-nucleotide small RNAs were primarily derived from loci encoding transposable elements and we propose that these small RNAs define a pathway of short-interfering RNAs that silence repetitive genetic elements. The 21-nucleotide small RNAs were primarily derived from inverted repeats, including a novel microRNA family that is conserved among the three species, and several gene families, including Crinkler effectors and type III fibronectins. The Phytophthora microRNA is predicted to target a family of amino acid/auxin permeases, and we propose that 21-nucleotide small RNAs function at the post-transcriptional level. The functional significance of microRNA-guided regulation of amino acid/auxin permeases and the association of 21-nucleotide small RNAs with Crinkler effectors remains unclear, but this work provides a framework for testing the role of small RNAs in Phytophthora biology and pathogenesis in future work.

  6. Phytophthora have distinct endogenous small RNA populations that include short interfering and microRNAs.

    Science.gov (United States)

    Fahlgren, Noah; Bollmann, Stephanie R; Kasschau, Kristin D; Cuperus, Josh T; Press, Caroline M; Sullivan, Christopher M; Chapman, Elisabeth J; Hoyer, J Steen; Gilbert, Kerrigan B; Grünwald, Niklaus J; Carrington, James C

    2013-01-01

    In eukaryotes, RNA silencing pathways utilize 20-30-nucleotide small RNAs to regulate gene expression, specify and maintain chromatin structure, and repress viruses and mobile genetic elements. RNA silencing was likely present in the common ancestor of modern eukaryotes, but most research has focused on plant and animal RNA silencing systems. Phytophthora species belong to a phylogenetically distinct group of economically important plant pathogens that cause billions of dollars in yield losses annually as well as ecologically devastating outbreaks. We analyzed the small RNA-generating components of the genomes of P. infestans, P. sojae and P. ramorum using bioinformatics, genetic, phylogenetic and high-throughput sequencing-based methods. Each species produces two distinct populations of small RNAs that are predominantly 21- or 25-nucleotides long. The 25-nucleotide small RNAs were primarily derived from loci encoding transposable elements and we propose that these small RNAs define a pathway of short-interfering RNAs that silence repetitive genetic elements. The 21-nucleotide small RNAs were primarily derived from inverted repeats, including a novel microRNA family that is conserved among the three species, and several gene families, including Crinkler effectors and type III fibronectins. The Phytophthora microRNA is predicted to target a family of amino acid/auxin permeases, and we propose that 21-nucleotide small RNAs function at the post-transcriptional level. The functional significance of microRNA-guided regulation of amino acid/auxin permeases and the association of 21-nucleotide small RNAs with Crinkler effectors remains unclear, but this work provides a framework for testing the role of small RNAs in Phytophthora biology and pathogenesis in future work.

  7. Phytophthora Have Distinct Endogenous Small RNA Populations That Include Short Interfering and microRNAs

    Science.gov (United States)

    Fahlgren, Noah; Bollmann, Stephanie R.; Kasschau, Kristin D.; Cuperus, Josh T.; Press, Caroline M.; Sullivan, Christopher M.; Chapman, Elisabeth J.; Hoyer, J. Steen; Gilbert, Kerrigan B.; Grünwald, Niklaus J.; Carrington, James C.

    2013-01-01

    In eukaryotes, RNA silencing pathways utilize 20-30-nucleotide small RNAs to regulate gene expression, specify and maintain chromatin structure, and repress viruses and mobile genetic elements. RNA silencing was likely present in the common ancestor of modern eukaryotes, but most research has focused on plant and animal RNA silencing systems. Phytophthora species belong to a phylogenetically distinct group of economically important plant pathogens that cause billions of dollars in yield losses annually as well as ecologically devastating outbreaks. We analyzed the small RNA-generating components of the genomes of P. infestans, P. sojae and P. ramorum using bioinformatics, genetic, phylogenetic and high-throughput sequencing-based methods. Each species produces two distinct populations of small RNAs that are predominantly 21- or 25-nucleotides long. The 25-nucleotide small RNAs were primarily derived from loci encoding transposable elements and we propose that these small RNAs define a pathway of short-interfering RNAs that silence repetitive genetic elements. The 21-nucleotide small RNAs were primarily derived from inverted repeats, including a novel microRNA family that is conserved among the three species, and several gene families, including Crinkler effectors and type III fibronectins. The Phytophthora microRNA is predicted to target a family of amino acid/auxin permeases, and we propose that 21-nucleotide small RNAs function at the post-transcriptional level. The functional significance of microRNA-guided regulation of amino acid/auxin permeases and the association of 21-nucleotide small RNAs with Crinkler effectors remains unclear, but this work provides a framework for testing the role of small RNAs in Phytophthora biology and pathogenesis in future work. PMID:24204767

  8. Local heterogeneity effects on small-sample worths

    International Nuclear Information System (INIS)

    Schaefer, R.W.

    1986-01-01

    One of the parameters usually measured in a fast reactor critical assembly is the reactivity associated with inserting a small sample of a material into the core (sample worth). Local heterogeneities introduced by the worth measurement techniques can have a significant effect on the sample worth. Unfortunately, the capability is lacking to model some of the heterogeneity effects associated with the experimental technique traditionally used at ANL (the radial tube technique). It has been suggested that these effects could account for a large portion of what remains of the longstanding central worth discrepancy. The purpose of this paper is to describe a large body of experimental data - most of which has never been reported - that shows the effect of radial tube-related local heterogeneities

  9. Research of pneumatic control transmission system for small irradiation samples

    International Nuclear Information System (INIS)

    Bai Zhongxiong; Zhang Haibing; Rong Ru; Zhang Tao

    2008-01-01

    In order to reduce the absorbed dose to the operator, pneumatic control has been adopted to realize the rapid transmission of small irradiation samples. The on/off state of the pneumatic circuit and the transmission directions of the rapid transmission system are controlled by the electrical control part. The main program initializes the system, detects the position of the manual/automatic change-over switch, and calls the corresponding subprogram to achieve automatic or manual operation. The automatic subprogram carries out automatic sample transmission; the manual subprogram handles deflation and the back-and-forth movement of the irradiation samples. This paper introduces the implementation of the system in detail, in terms of both hardware and software design. (authors)

  10. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  11. Use of the small gas proportional counters for the carbon-14 measurement of very small samples

    International Nuclear Information System (INIS)

    Sayre, E.V.; Harbottle, G.; Stoenner, R.W.; Otlet, R.L.; Evans, G.V.

    1981-01-01

    Two recent developments are considered: the first is the mass-spectrometric separation of 14C and 12C ions, followed by counting of the 14C, while the second is the extension of conventional proportional counter operation, using CO2 as the counting gas, to very small counters and samples. Although the second method is slow (months of counting time are required for 10 mg of carbon), it does not require operator intervention and many samples may be counted simultaneously. It also costs only a fraction of the capital expense of an accelerator installation. The development, construction and operation of suitable small counters are described, and results of three actual dating studies involving milligram-scale carbon samples will be given. None of these could have been carried out if conventional, gram-sized samples had been needed. New installations based on the use of these counters are under construction or in the planning stages; these are located at Brookhaven Laboratory, the National Bureau of Standards (USA) and Harwell (UK). The Harwell installation, which is in an advanced stage of construction, will be described in outline. The main significance of the small-counter method is that, although it will not suffice to measure the smallest (much less than 10 mg) or oldest samples, it will permit existing radiocarbon laboratories to extend their capability considerably in the direction of smaller samples, at modest expense

  12. Selection Component Analysis of Natural Polymorphisms using Population Samples Including Mother-Offspring Combinations, II

    DEFF Research Database (Denmark)

    Jarmer, Hanne Østergaard; Christiansen, Freddy Bugge

    1981-01-01

    Population samples including mother-offspring combinations provide information on the selection components: zygotic selection, sexual selection, gametic selection and fecundity selection, on the mating pattern, and on the deviation from linkage equilibrium among the loci studied. The theory...

  13. Thermal neutron absorption cross section of small samples

    International Nuclear Information System (INIS)

    Nghiep, T.D.; Vinh, T.T.; Son, N.N.; Vuong, T.V.; Hung, N.T.

    1989-01-01

    A modified steady-state method for determining the macroscopic thermal neutron absorption cross section of small samples (500 cm³ in volume) is described. The method uses a moderating block of paraffin, a Pu-Be neutron source emitting 1.1×10⁶ n·s⁻¹, an SNM-14 counter and ordinary counting equipment. Cross sections in the interval from 2.6 to 1.3×10⁴ (in units of 10⁻³ cm² g⁻¹) were measured. The experimental data are described by calculation formulae. 7 refs.; 4 figs

  14. Data Stewardship in the Ocean Sciences Needs to Include Physical Samples

    Science.gov (United States)

    Carter, M.; Lehnert, K.

    2016-02-01

    Across the Ocean Sciences, research involves the collection and study of samples collected above, at, and below the seafloor, including but not limited to rocks, sediments, fluids, gases, and living organisms. Many domains in the Earth Sciences have recently expressed the need for better discovery, access, and sharing of scientific samples and collections (EarthCube End-User Domain workshops, 2012 and 2013, http://earthcube.org/info/about/end-user-workshops), as has the US government (OSTP Memo, March 2014). iSamples (Internet of Samples in the Earth Sciences) is a Research Coordination Network within the EarthCube program that aims to advance the use of innovative cyberinfrastructure to support and advance the utility of physical samples and sample collections for science and ensure reproducibility of sample-based data and research results. iSamples strives to build, grow, and foster a new community of practice, in which domain scientists, curators of sample repositories and collections, computer and information scientists, software developers and technology innovators engage in and collaborate on defining, articulating, and addressing the needs and challenges of physical samples as a critical component of digital data infrastructure. A primary goal of iSamples is to deliver a community-endorsed set of best practices and standards for the registration, description, identification, and citation of physical specimens and define an actionable plan for implementation. iSamples conducted a broad community survey about sample sharing and has created 5 different working groups to address the different challenges of developing the internet of samples - from metadata schemas and unique identifiers to an architecture for a shared cyberinfrastructure to manage collections, to digitization of existing collections, to education, and ultimately to establishing the physical infrastructure that will ensure preservation and access of the physical samples. Repositories that curate

  15. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to assume or know the probability distribution that generated the original sample. In this work we used a set of soybean yield data and physical and chemical soil properties, comprising a small number of samples, to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, to construct the confidence intervals of the parameters and to identify the points that had great influence on the estimated parameters. (Author)
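
    As a concrete illustration of the non-parametric bootstrap for a small regression sample, the sketch below resamples observation pairs with replacement and forms percentile confidence intervals for multiple linear regression coefficients. The data are synthetic; the soybean yield and soil-property variables of the record are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 20                                            # small sample, as in the record's setting
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])    # intercept + 2 covariates
    beta_true = np.array([1.0, 0.8, -0.5])
    y = X @ beta_true + rng.normal(scale=0.3, size=n)

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    B = 2000
    boot = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, size=n)              # resample cases with replacement
        boot[b] = ols(X[idx], y[idx])

    ci = np.percentile(boot, [2.5, 97.5], axis=0)     # percentile bootstrap intervals
    print("OLS estimates    :", ols(X, y).round(3))
    print("95% bootstrap CIs:", ci.T.round(3))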

  16. Measurement of phthalates in small samples of mammalian tissue

    International Nuclear Information System (INIS)

    Acott, P.D.; Murphy, M.G.; Ogborn, M.R.; Crocker, J.F.S.

    1987-01-01

    Di-(2-ethylhexyl)-phthalate (DEHP) is a phthalic acid ester that is used as a plasticizer in polyvinyl chloride products, many of which have widespread medical application. DEHP has been shown to be leached from products used for storage and delivery of blood transfusions during procedures such as plasmapheresis, hemodialysis and open heart surgery. Results of studies in this laboratory have suggested that there is an association between the absorption and deposition of DEHP (and/or related chemicals) in the kidney and the acquired renal cystic disease (ACD) frequently seen in patients who have undergone prolonged dialysis treatment. In order to determine the relationship between the two, it has been necessary to establish a method for extracting and accurately quantitating minute amounts of these chemicals in small tissue samples. The authors have now established such a method using kidneys from normal rats and from a rat model for ACD

  17. Accelerator mass spectrometry of ultra-small samples with applications in the biosciences

    International Nuclear Information System (INIS)

    Salehpour, Mehran; Håkansson, Karl; Possnert, Göran

    2013-01-01

    An overview is presented covering the biological accelerator mass spectrometry activities at Uppsala University. The research utilizes the Uppsala University Tandem laboratory facilities, including a 5 MV Pelletron tandem accelerator and two stable isotope ratio mass spectrometers. In addition, a dedicated sample preparation laboratory for biological samples with natural activity is in use, as well as another laboratory specifically for 14C-labeled samples. A variety of ongoing projects are described and presented. Examples are: (1) ultra-small sample AMS: we routinely analyze samples with masses in the 5–10 μg C range, and data are presented regarding the sample preparation method; (2) bomb-peak biological dating of ultra-small samples: a long-term project is presented in which purified and cell-specific DNA from various parts of the human body, including the heart and the brain, is analyzed with the aim of extracting regeneration rates of the various human cells; (3) biological dating of various human biopsies, including atherosclerosis-related plaques: the average build-up time of surgically removed human carotid plaques has been measured and correlated with various data, including the level of insulin in the blood; and (4) in addition to standard microdosing-type measurements using small pharmaceutical drugs, pre-clinical pharmacokinetic data from a macromolecular drug candidate are discussed.

  18. Accelerator mass spectrometry of ultra-small samples with applications in the biosciences

    Energy Technology Data Exchange (ETDEWEB)

    Salehpour, Mehran, E-mail: mehran.salehpour@physics.uu.se [Department of Physics and Astronomy, Ion Physics, PO Box 516, SE-751 20 Uppsala (Sweden); Håkansson, Karl; Possnert, Göran [Department of Physics and Astronomy, Ion Physics, PO Box 516, SE-751 20 Uppsala (Sweden)

    2013-01-15

    An overview is presented covering the biological accelerator mass spectrometry activities at Uppsala University. The research utilizes the Uppsala University Tandem laboratory facilities, including a 5 MV Pelletron tandem accelerator and two stable isotope ratio mass spectrometers. In addition, a dedicated sample preparation laboratory for biological samples with natural activity is in use, as well as another laboratory specifically for 14C-labeled samples. A variety of ongoing projects are described and presented. Examples are: (1) ultra-small sample AMS: we routinely analyze samples with masses in the 5–10 μg C range, and data are presented regarding the sample preparation method; (2) bomb-peak biological dating of ultra-small samples: a long-term project is presented in which purified and cell-specific DNA from various parts of the human body, including the heart and the brain, is analyzed with the aim of extracting regeneration rates of the various human cells; (3) biological dating of various human biopsies, including atherosclerosis-related plaques: the average build-up time of surgically removed human carotid plaques has been measured and correlated with various data, including the level of insulin in the blood; and (4) in addition to standard microdosing-type measurements using small pharmaceutical drugs, pre-clinical pharmacokinetic data from a macromolecular drug candidate are discussed.

  19. Evaluation of energy deposition by 153Sm in small samples

    International Nuclear Information System (INIS)

    Cury, M.I.C.; Siqueira, P.T.D.; Yoriyaz, H.; Coelho, P.R.P.; Da Silva, M.A.; Okazaki, K.

    2002-01-01

    Aim: This work presents evaluations of the dose absorbed by 'in vitro' blood cultures when mixed with 153Sm solutions of different concentrations. Although 153Sm is used as a radiopharmaceutical mainly for its beta emission, which is short-range radiation, it also emits gamma radiation, which has a longer penetration range. It is therefore difficult to determine the absorbed dose in small samples, where the infinite-medium approximation is no longer valid. Materials and Methods: MCNP-4C (Monte Carlo N-Particle transport code) has been used to perform the evaluations. It is not a deterministic code that calculates the value of a specific quantity by solving the physical equations involved in the problem, but rather a virtual experiment in which the events related to the problem are simulated and the quantities of interest are tallied. MCNP also stands out for its ability to specify almost any problem geometry. These features, among others, make MCNP a time-consuming code, however. The simulated problem consists of a cylindrical plastic tube with a 1.5 cm internal diameter and 0.1 cm wall thickness, ending in a 2.0 cm high conical bottom, so that the represented sample has a volume of 4.0 ml (1 ml of blood and 3 ml of culture medium). To evaluate the energy deposited in the blood culture per 153Sm decay, the problem has been divided into 3 steps to account for the beta emissions (which have a continuous spectrum), the gammas, and the conversion and Auger electrons. Each emission contribution was then weighted and summed to give the final value. Besides this 'fragmentation' of the radiation, simulations were performed for many different amounts of 153Sm solution added to the sample, covering a range from 1 μl to 0.5 ml. Results: The average energy per disintegration of 153Sm is 331 keV [1]; gammas account for 63 keV, and the beta, conversion and Auger electrons account for 268 keV. The simulations performed showed an average energy deposition of 260 keV

  20. Cerebral Small Vessel Disease: Cognition, Mood, Daily Functioning, and Imaging Findings from a Small Pilot Sample

    Directory of Open Access Journals (Sweden)

    John G. Baker

    2012-04-01

    Cerebral small vessel disease, a leading cause of cognitive decline, is considered a relatively homogeneous disease process, and it can co-occur with Alzheimer’s disease. Clinical reports of magnetic resonance imaging (MRI)/computed tomography and single photon emission computed tomography (SPECT) imaging and neuropsychological testing for a small pilot sample of 14 patients are presented to illustrate disease characteristics through findings from structural and functional imaging and cognitive assessment. Participants showed some decreases in executive functioning, attention, processing speed, and memory retrieval, consistent with previous literature. An older subgroup showed lower age-corrected scores at a single time point compared to younger participants. Performance on a computer-administered cognitive measure showed a slight overall decline over a period of 8–28 months. For a case study with mild neuropsychological findings, the MRI report was normal while the SPECT report identified perfusion abnormalities. Future research can test whether advances in imaging analysis allow for identification of cerebral small vessel disease before changes are detected in cognition.

  1. Supermarket revolution in Asia and emerging development strategies to include small farmers.

    Science.gov (United States)

    Reardon, Thomas; Timmer, C Peter; Minten, Bart

    2012-07-31

    A "supermarket revolution" has occurred in developing countries in the past 2 decades. We focus on three specific issues that reflect the impact of this revolution, particularly in Asia: continuity in transformation, innovation in transformation, and unique development strategies. First, the record shows that the rapid growth observed in the early 2000s in China, Indonesia, Malaysia, and Thailand has continued, and the "newcomers"--India and Vietnam--have grown even faster. Although foreign direct investment has been important, the roles of domestic conglomerates and even state investment have been significant and unique. Second, Asia's supermarket revolution has exhibited unique pathways of retail diffusion and procurement system change. There has been "precocious" penetration of rural towns by rural supermarkets and rural business hubs, emergence of penetration of fresh produce retail that took much longer to initiate in other regions, and emergence of Asian retail developing-country multinational chains. In procurement, a symbiosis between modern retail and the emerging and consolidating modern food processing and logistics sectors has arisen. Third, several approaches are being tried to link small farmers to supermarkets. Some are unique to Asia, for example assembling into a "hub" or "platform" or "park" the various companies and services that link farmers to modern markets. Other approaches relatively new to Asia are found elsewhere, especially in Latin America, including "bringing modern markets to farmers" by establishing collection centers and multipronged collection cum service provision arrangements, and forming market cooperatives and farmer companies to help small farmers access supermarkets.

  2. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
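
    The variance-reduction idea, sampling the two initial points jointly rather than independently when the integrand is largest where the two trajectories stay close, can be demonstrated on a toy integral. The Gaussian integrand, the target distribution and the proposal correlation below are assumptions chosen for illustration; they do not reproduce the SC-IVR calculation of the record.

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    rng = np.random.default_rng(2)
    N, rho = 20000, 0.7                    # sample size and proposal correlation (assumed)

    def f(x1, x2):
        # Toy integrand peaked where the two "paths" remain close together.
        return np.exp(-0.5 * (x1 - x2) ** 2)

    # Conventional scheme: the two initial points are sampled independently.
    x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
    est_indep = f(x1, x2)

    # Correlated proposal for the pair, reweighted back to the target p(x1)p(x2).
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=N)
    logw = (norm.logpdf(z[:, 0]) + norm.logpdf(z[:, 1])
            - multivariate_normal.logpdf(z, mean=[0.0, 0.0], cov=cov))
    est_corr = np.exp(logw) * f(z[:, 0], z[:, 1])

    print("exact value          :", 1 / np.sqrt(3))
    print("independent sampling :", est_indep.mean(), "+/-", est_indep.std() / np.sqrt(N))
    print("correlated sampling  :", est_corr.mean(), "+/-", est_corr.std() / np.sqrt(N))

    Both estimators are unbiased; the correlated proposal concentrates samples where the integrand contributes, which is the same reason path-correlated importance sampling improves convergence in the SC-IVR setting.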

  3. Systematic studies of small scintillators for new sampling calorimeter

    International Nuclear Information System (INIS)

    Jacosalem, E.P.; Sanchez, A.L.C.; Bacala, A.M.; Iba, S.; Nakajima, N.; Ono, H.; Miyata, H.

    2007-01-01

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to provide better position resolution for the international linear collider (ILC) experiment. As part of this R and D study, small plastic scintillators of different sizes, thicknesses and wrapping reflectors are studied systematically. The scintillation light due to beta rays from a collimated 90Sr source is collected from the scintillator by a wavelength-shifting (WLS) fiber and converted into electrical signals at the PMT. The wrapping that gives the best light yield is determined by comparing the measured pulse height of each 10 x 40 x 2 mm strip scintillator covered with 3M reflective mirror film, teflon, white paint, black tape, gold, aluminum and white paint+teflon. The dependence of pulse height on position, length and thickness is measured for the 3M reflective mirror film and teflon wrapped scintillators. Results show that the 3M radiant mirror film-wrapped scintillator has the greatest light yield, with an average of 9.2 photoelectrons. It is observed that the light yield increases slightly with scintillator length, but increases by about 100% when the WLS fiber diameter is increased from 1.0 mm to 1.6 mm. The position dependence measurement along the strip scintillator showed the uniformity of light transmission from the sensor to the PMT. A dip across the strip is observed which is 40% of the maximum pulse height. The pulse height of the block-type scintillator, on the other hand, is found to be almost proportional to scintillator thickness. (author)

  4. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  5. Resolution of fine biological structure including small narcomedusae across a front in the Southern California Bight

    Science.gov (United States)

    McClatchie, Sam; Cowen, Robert; Nieto, Karen; Greer, Adam; Luo, Jessica Y.; Guigand, Cedric; Demer, David; Griffith, David; Rudnick, Daniel

    2012-04-01

    We sampled a front detected by SST gradient, ocean color imagery, and a Spray glider south of San Nicolas Island in the Southern California Bight between 14 and 18 October 2010. We sampled the front with an unusually extensive array of instrumentation, including the Continuous Underway Fish Egg Sampler (CUFES), the undulating In Situ Ichthyoplankton Imaging System (ISIIS) (fitted with temperature, salinity, oxygen, and fluorescence sensors), multifrequency acoustics, a surface pelagic trawl, a bongo net, and a neuston net. We found higher fluorescence and greater cladoceran, decapod, and euphausiid densities in the front, indicating increased primary and secondary production. Mesopelagic fish were most abundant in oceanic waters to the west of the front, market squid were abundant in the front associated with higher krill and decapod densities, and jack mackerel were most common in the front and on the shoreward side of the front. Egg densities peaked to either side of the front, consistent with both offshore (for oceanic squid and mesopelagic fish) and shelf origins (for white croaker and California halibut). We discovered unusually high concentrations of predatory narcomedusae in the surface layer of the frontal zone. Potential ichthyoplankton predators were more abundant either in the front (decapods, euphausiids, and squid) or shoreward of the front (medusae, chaetognaths, and jack mackerel). For pelagic fish like sardine, which can thrive in less productive waters, the safest place to spawn would be offshore because there are fewer potential predators.

  6. Polymerase chain reaction system using magnetic beads for analyzing a sample that includes nucleic acid

    Science.gov (United States)

    Nasarabadi, Shanavaz [Livermore, CA

    2011-01-11

    A polymerase chain reaction system for analyzing a sample containing nucleic acid includes providing magnetic beads and providing a flow channel having a polymerase chain reaction chamber, a pre-polymerase chain reaction magnet position adjacent to the chamber, and a post-polymerase chain reaction magnet position adjacent to the chamber. The nucleic acid is bound to the magnetic beads. The magnetic beads with the nucleic acid flow to the pre-polymerase chain reaction magnet position in the flow channel. The magnetic beads and the nucleic acid are washed with ethanol. The nucleic acid in the polymerase chain reaction chamber is amplified. The magnetic beads and the nucleic acid are separated into a waste stream containing the magnetic beads and a post-polymerase chain reaction mix containing the nucleic acid. The reaction mix containing the nucleic acid flows to an analysis unit in the channel for analysis.

  7. Rural and small-town attitudes about alcohol use during pregnancy: a community and provider sample.

    Science.gov (United States)

    Logan, T K; Walker, Robert; Nagle, Laura; Lewis, Jimmie; Wiesenhahn, Donna

    2003-01-01

    While there has been considerable research on prenatal alcohol use, there have been limited studies focused on women in rural and small-town environments. This 2-part study examines gender differences in attitudes and perceived barriers to intervention in a large community sample of persons living in rural and small-town environments in Kentucky (n = 3,346). The study also examines rural/small-town prenatal service providers' perceptions of barriers to assessment and intervention with pregnant substance abusers (n = 138). Surveys were administered to a convenience sample of employees and customers from 16 rural and small-town community outlets. There were 1,503 males (45%) and 1,843 females (55%), ranging in age from under 18 years old to over 66 years old. Surveys also were mailed to prenatal providers in county health departments of the 13-county study area, with 138 of 149 responding. Overall, results of the community sample suggest that neither males nor females were knowledgeable about the harmful effects of alcohol use during pregnancy. Results also indicate substantial gender differences in alcohol attitudes, knowledge, and perceived barriers. Further, prenatal care providers identified several barriers to assessment and treatment of pregnant women with alcohol use problems in rural and small-town communities, including lack of knowledge and comfort with assessment as well as a lack of available and accessible treatment for referrals.

  8. Collateral Information for Equating in Small Samples: A Preliminary Investigation

    Science.gov (United States)

    Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles

    2011-01-01

    This article describes a preliminary investigation of an empirical Bayes (EB) procedure for using collateral information to improve equating of scores on test forms taken by small numbers of examinees. Resampling studies were done on two different forms of the same test. In each study, EB and non-EB versions of two equating methods--chained linear…

  9. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    Science.gov (United States)

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  10. Dosimetry in CBCT with different protocols: emphasis on small FOVs including exams for TMJ

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, Helena Aguiar Ribeiro; Nascimento, Eduarda Helena Leandro; Freitas, Deborah Queiroz, E-mail: eduarda.hln@gmail.com [Universidade de Campinas (UNICAMP), Piracicaba, SP (Brazil). Departmento de Diagnose Oral; Andrade, Marcos Ely Almeida [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departmento de Energia Nuclear; Frazão, Marco Antonio Gomes [Faculdade de Odontologia de Recife (FOR), Recife, PE (Brazil). Divisao de Radiologia Oral; Ramos-Perez, Flavia Maria Moraes [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departmento de Clinica e Odontologica Preventiva

    2017-07-15

    This study aimed to estimate the absorbed dose in cone beam computed tomography (CBCT) exams according to different exposure parameters and the size and position of the field of view (FOV). In addition, the absorbed dose from two smaller-FOV scans was compared with that from a larger-FOV scan for evaluation of the temporomandibular joint (TMJ), as it is a bilateral structure. CBCT scans were obtained on an OP300 Maxio unit, varying the scanning mode (standard, high and endo) as well as the size (5 x 5, 6 x 8 and 8 x 15 cm) and positioning of the FOV. With a small FOV, different areas were scanned (maxilla or mandible, anterior or posterior, and TMJ). Absorbed doses were determined using thermoluminescent dosimeters on the skin surface of sensitive organs of an anthropomorphic phantom. The endo mode showed the highest dose, followed by the high and standard modes, in all FOV positions. With a small FOV, doses were higher in the posterior region, especially in the mandible. Dose reduction occurred when small FOVs were used, but it was not proportional to the reduction in FOV size. For the TMJ, the dose in a single acquisition with a large FOV was greater than that of two acquisitions with a small FOV, but lower than that of two acquisitions with a medium FOV (6 x 8 cm). In conclusion, scanning mode, FOV size and FOV position have a great influence on the absorbed dose. A small FOV decreases the dose, but there is no linear relation between FOV size and dose. For bilateral exams of the TMJ, double acquisition with small FOVs produces a decrease in absorbed dose relative to a large FOV. (author)

  11. Safety evaluation of small samples for isotope production

    International Nuclear Information System (INIS)

    Sharma, Archana; Singh, Tej; Varde, P.V.

    2015-09-01

    Radioactive isotopes are widely used in basic and applied science and engineering, most notably as environmental and industrial tracers and for medical imaging procedures. Production of radioisotopes constitutes an important activity of the Indian nuclear program. Since its initial criticality, the DHRUVA reactor has facilitated the regular supply of most of the radioisotopes required in the country for application in the fields of medicine, industry and agriculture. In-pile irradiation of samples requires a prior estimation of the sample reactivity load, heating rate, activity developed and shielding thickness required for post-irradiation handling. This report is an attempt to highlight the contributions of the DHRUVA reactor, as well as to explain in detail the methodologies used in the safety evaluation of in-pile irradiation samples. (author)

  12. A high-efficiency neutron coincidence counter for small samples

    International Nuclear Information System (INIS)

    Miller, M.C.; Menlove, H.O.; Russo, P.A.

    1991-01-01

    The inventory sample coincidence counter (INVS) has been modified to enhance its performance. The new design is suitable for use with a glove-box sample well (in-line application) as well as for use in the standard at-line mode. The counter has been redesigned to count more efficiently and to be less sensitive to variations in sample position. These factors lead to a higher degree of precision and accuracy in a given counting period and allow for the practical use of the INVS counter with gamma-ray isotopics to obtain a plutonium assay independent of operator declarations and time-consuming chemical analysis. A calculation study was performed using the Los Alamos transport code MCNP to optimize the design parameters. 5 refs., 7 figs., 8 tabs

  13. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in the reliability definition complies with a normal distribution, the conjugate prior of its distribution parameters (μ, h) is a normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior for (μ, h). The desired estimates are obtained from either the prior or the posterior, which is formed by combining the prior and the sampling data. Computing methods are described and examples are presented as demonstrations
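
    For a normal model with unknown mean and precision, the normal-gamma conjugate update mentioned above has a closed form. The sketch below applies the standard update equations to a synthetic small sample and then estimates a reliability-type quantity, here taken (purely as an illustrative assumption) to be the probability that the variable exceeds a threshold, by drawing from the posterior. The prior constants are arbitrary and are not those of the paper.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    x = rng.normal(loc=5.0, scale=1.0, size=8)        # small synthetic sample
    n, xbar, ss = x.size, x.mean(), ((x - x.mean()) ** 2).sum()

    # Normal-gamma prior: mu | h ~ N(mu0, 1/(kappa0*h)), h ~ Gamma(alpha0, rate beta0).
    mu0, kappa0, alpha0, beta0 = 4.0, 1.0, 2.0, 2.0   # assumed prior constants

    # Standard conjugate update.
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)

    # Posterior draws of (mu, h) and an illustrative reliability estimate P(X > threshold).
    threshold, draws = 3.5, 20000
    h = rng.gamma(shape=alpha_n, scale=1.0 / beta_n, size=draws)
    mu = rng.normal(mu_n, np.sqrt(1.0 / (kappa_n * h)))
    reliability = norm.sf(threshold, loc=mu, scale=np.sqrt(1.0 / h)).mean()

    print(f"posterior parameters: mu_n={mu_n:.3f}, kappa_n={kappa_n}, alpha_n={alpha_n}, beta_n={beta_n:.3f}")
    print(f"posterior estimate of P(X > {threshold}) = {reliability:.4f}")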

  14. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...

  15. Mars ascent propulsion options for small sample return vehicles

    International Nuclear Information System (INIS)

    Whitehead, J. C.

    1997-01-01

    An unprecedented combination of high propellant fraction and small size is required for affordable-scale Mars return, regardless of the number of stages, or whether Mars orbit rendezvous or in-situ propellant options are used. Conventional space propulsion technology is too heavy, even without structure or other stage subsystems. The application of launch vehicle design principles to the development of new hardware on a tiny scale is therefore suggested. Miniature pump-fed rocket engines fed by low pressure tanks can help to meet this challenge. New concepts for engine cycles using piston pumps are described, and development issues are outlined

  16. Advanced path sampling of the kinetic network of small proteins

    NARCIS (Netherlands)

    Du, W.

    2014-01-01

    This thesis is focused on developing advanced path sampling simulation methods to study protein folding and unfolding, and to build kinetic equilibrium networks describing these processes. In Chapter 1 the basic knowledge of protein structure and folding theories were introduced and a brief overview

  17. Small sample approach, and statistical and epidemiological aspects

    NARCIS (Netherlands)

    Offringa, Martin; van der Lee, Hanneke

    2011-01-01

    In this chapter, the design of pharmacokinetic studies and phase III trials in children is discussed. Classical approaches and relatively novel approaches, which may be more useful in the context of drug research in children, are discussed. The burden of repeated blood sampling in pediatric

  18. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small-scale DSTs to support scale-up estimates of full-scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small-scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DSTs behave similarly and that full-scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing in the small-scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  19. Toward Understanding Tip Leakage Flows in Small Compressor Cores Including Stator Leakage Flow

    Science.gov (United States)

    Berdanier, Reid A.; Key, Nicole L.

    2017-01-01

    The focus of this work was to provide additional data to supplement the work reported in NASA/CR-2015-218868 (Berdanier and Key, 2015b). The aim of that project was to characterize the fundamental flow physics and the overall performance effects due to increased rotor tip clearance heights in axial compressors. Data have been collected in the three-stage axial research compressor at Purdue University with a specific focus on analyzing the multistage effects resulting from the tip leakage flow. Three separate rotor tip clearances were studied with nominal tip clearance gaps of 1.5 percent, 3.0 percent, and 4.0 percent based on a constant annulus height. Overall compressor performance was previously investigated at four corrected speedlines (100 percent, 90 percent, 80 percent, and 68 percent) for each of the three tip clearance configurations. This study extends the previously published results to include detailed steady and time-resolved pressure data at two loading conditions, nominal loading (NL) and high loading (HL), on the 100 percent corrected speedline for the intermediate clearance level (3.0 percent). Steady detailed radial traverses of total pressure at the exit of each stator row are supported by flow visualization techniques to identify regions of flow recirculation and separation. Furthermore, detailed radial traverses of time-resolved total pressures at the exit of each rotor row have been measured with a fast-response pressure probe. These data were combined with existing three-component velocity measurements to identify a novel technique for calculating blockage in a multistage compressor. Time-resolved static pressure measurements have been collected over the rotor tips for all rotors with each of the three tip clearance configurations for up to five loading conditions along the 100 percent corrected speedline using fast-response piezoresistive pressure sensors. These time-resolved static pressure measurements reveal new knowledge about the

  20. Standard Format for Chromatographic-polarimetric System small samples assessment

    International Nuclear Information System (INIS)

    Naranjo, S.; Fajer, V.; Fonfria, C.; Patinno, R.

    2012-01-01

    The treatment of samples containing optically active substances, to be evaluated as part of the quality control of raw material entering an industrial process and during the modifications applied to it to obtain the desired final composition, is still an unsolved problem for many industries. That is the case for the sugarcane industry. The problem is sometimes compounded because the samples to be evaluated are no larger than one milliliter. Reduced gel beds in G-10 and G-50 chromatographic columns, with an inner diameter of 16 mm instead of 25 mm and with bed heights adjustable to requirements by means of sliding stoppers to increase analytical power, were evaluated with glucose and sucrose standards in concentrations from 1 to 10 g/dL, using aliquots of 1 ml without undesirable dilutions that could affect either detection or the chromatographic profile. Assays with seaweed extracts gave good results, which are shown. The advantage of determining the concentration of a separated substance from the height of its peak, and the resulting savings in time and reagents, are established. The expanded uncertainty of samples in both systems is compared. Several programs for data acquisition, storage and processing are also presented. (Author)

  1. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how best to address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions, with varying levels of multicollinearity, that are as good as biased regression techniques such as PCR and PLS.
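
    A small Monte Carlo in the spirit of the one described above can be written with numpy alone. The sketch compares OLS with principal component regression (PCR, implemented via the SVD and keeping one component) when two explanatory variables are strongly collinear; the sample size, correlation and error variance are illustrative assumptions rather than the study's settings, and VIF screening and PLS are omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(4)
    n, rho, sigma, reps = 15, 0.95, 1.0, 2000         # small sample, strong collinearity (assumed)
    beta_true = np.array([1.0, 1.0])

    def simulate_once():
        # Two highly correlated explanatory variables.
        z = rng.standard_normal(n)
        X = np.column_stack([z, rho * z + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)])
        y = X @ beta_true + rng.normal(scale=sigma, size=n)

        b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

        # PCR: regress y on the leading principal component, then map back to the X scale.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        score = U[:, 0] * s[0]
        b_pcr = Vt[0] * (score @ y / (score @ score))
        return b_ols, b_pcr

    ols_est, pcr_est = (np.array(v) for v in zip(*(simulate_once() for _ in range(reps))))
    for name, est in [("OLS", ols_est), ("PCR", pcr_est)]:
        mse = ((est - beta_true) ** 2).sum(axis=1).mean()
        print(f"{name}: mean estimate = {est.mean(axis=0).round(3)}, coefficient MSE = {mse:.3f}")

    In this setting PCR trades a small bias for a large reduction in variance, which is exactly the behaviour that makes biased regression attractive when multicollinearity is severe and parameter estimates matter.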

  2. 40 CFR 142.307 - What terms and conditions must be included in a small system variance?

    Science.gov (United States)

    2010-07-01

    ... that may affect proper and effective operation and maintenance of the technology; (2) Monitoring... effective installation, operation and maintenance of the applicable small system variance technology in... health, which may include: (i) Public education requirements; and (ii) Source water protection...

  3. Suitability of small diagnostic peripheral-blood samples for cell-therapy studies.

    Science.gov (United States)

    Stephanou, Coralea; Papasavva, Panayiota; Zachariou, Myria; Patsali, Petros; Epitropou, Marilena; Ladas, Petros; Al-Abdulla, Ruba; Christou, Soteroulla; Antoniou, Michael N; Lederer, Carsten W; Kleanthous, Marina

    2017-02-01

    Primary hematopoietic stem and progenitor cells (HSPCs) are key components of cell-based therapies for blood disorders and are thus the authentic substrate for related research. We propose that ubiquitous small-volume diagnostic samples represent a readily available and as yet untapped resource of primary patient-derived cells for cell- and gene-therapy studies. In the present study we compare isolation and storage methods for HSPCs from normal and thalassemic small-volume blood samples, considering genotype, density-gradient versus lysis-based cell isolation and cryostorage media with different serum contents. Downstream analyses include viability, recovery, differentiation in semi-solid media and performance in liquid cultures and viral transductions. We demonstrate that HSPCs isolated either by ammonium-chloride potassium (ACK)-based lysis or by gradient isolation are suitable for functional analyses in clonogenic assays, high-level HSPC expansion and efficient lentiviral transduction. For cryostorage of cells, gradient isolation is superior to ACK lysis, and cryostorage in freezing media containing 50% fetal bovine serum demonstrated good results across all tested criteria. For assays on freshly isolated cells, ACK lysis performed similar to, and for thalassemic samples better than, gradient isolation, at a fraction of the cost and hands-on time. All isolation and storage methods show considerable variation within sample groups, but this is particularly acute for density gradient isolation of thalassemic samples. This study demonstrates the suitability of small-volume blood samples for storage and preclinical studies, opening up the research field of HSPC and gene therapy to any blood diagnostic laboratory with corresponding bioethics approval for experimental use of surplus material. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  4. Sampled data CT system including analog filter and compensating digital filter

    International Nuclear Information System (INIS)

    Glover, G. H.; DallaPiazza, D. G.; Pelc, N. J.

    1985-01-01

    A CT scanner in which the amount of x-ray information acquired per unit time is substantially increased by using a continuous-on x-ray source and a sampled data system with the detector. An analog filter is used in the sampling system for band limiting the detector signal below the highest frequency of interest, but is a practically realizable filter and is therefore non-ideal. A digital filter is applied to the detector data after digitization to compensate for the characteristics of the analog filter, and to provide an overall filter characteristic more nearly like the ideal
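
    The analog-plus-compensating-digital-filter arrangement can be sketched as follows: model the non-ideal analog band-limiting filter, then design an FIR filter whose passband gain approximates the inverse of the analog response, so that the cascade is closer to the ideal. The Butterworth model, sampling rate, cutoff and filter length below are assumptions for illustration only, not the design disclosed in the patent.

    import numpy as np
    from scipy import signal

    fs = 2000.0                       # sampling rate in Hz (assumed)
    f_cut = 400.0                     # analog band-limiting cutoff in Hz (assumed)

    # Non-ideal analog anti-aliasing filter: 2nd-order Butterworth low-pass.
    b_a, a_a = signal.butter(2, 2 * np.pi * f_cut, btype="low", analog=True)

    # Analog magnitude response evaluated on a digital frequency grid up to Nyquist.
    freqs = np.linspace(0.0, fs / 2, 513)
    _, h_analog = signal.freqs(b_a, a_a, worN=2 * np.pi * freqs)

    # Compensating FIR filter: passband gain ~ 1/|H_analog|, zero above the cutoff.
    gain = np.where(freqs <= f_cut, 1.0 / np.abs(h_analog), 0.0)
    fir = signal.firwin2(101, freqs / (fs / 2), gain)

    # Combined response of the analog filter followed by the digital compensation.
    _, h_fir = signal.freqz(fir, worN=2 * np.pi * freqs / fs)
    combined = np.abs(h_analog) * np.abs(h_fir)
    passband = combined[freqs <= f_cut]
    print("cascade passband magnitude ranges from",
          passband.min().round(3), "to", passband.max().round(3))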

  5. Improvement of 137Cs analysis in small volume seawater samples using the Ogoya underground facility

    International Nuclear Information System (INIS)

    Hirose, K.; Komura, K.; Kanazawa University, Ishikawa; Aoyama, M.; Igarashi, Y.

    2008-01-01

    137Cs in seawater is one of the most powerful tracers of water motion. Large volumes of samples have traditionally been required for the determination of 137Cs in seawater. This paper describes improvements to the separation and purification processes for 137Cs in seawater, which include purification of 137Cs using hexachloroplatinic acid in addition to ammonium phosphomolybdate (AMP) precipitation. As a result, we succeeded in determining 137Cs in seawater with a smaller sample volume of 10 liters by using ultra-low-background gamma spectrometry in the Ogoya underground facility. The 137Cs detection limit was about 0.1 mBq (counting time: 10⁶ s). This method is applied to determine 137Cs in small samples of South Pacific deep waters. (author)

  6. Method of extruding and packaging a thin sample of reactive material including forming the extrusion die

    International Nuclear Information System (INIS)

    Lewandowski, E.F.; Peterson, L.L.

    1985-01-01

    This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby the oil can flow through the holes and slot to flush the material eroded away as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of solid highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon or the like as in a glovebox. The invention further teaches a method of stamping out sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high purity reactive material that can have a long shelf life

  7. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    evaluation of ODT shows that it can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction to solve problems related to the use of uncertain and unreliable negative samples and those related to the great demand in computational resources. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the others. The test results show that the power consumption variability caused by the key components as a
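
    With the efficiency definition used above (average compute rate divided by average power draw), per-server comparisons reduce to a few lines of code once the benchmark throughput and measured power are in hand. The numbers below are placeholders, not measurements from the demonstration, and the server names are hypothetical.

    # Efficiency = average compute rate / average power, per the definition above.
    servers = {                              # placeholder throughput and power values
        "server_A": {"ops_per_s": 1.20e6, "watts": 310.0},
        "server_B": {"ops_per_s": 1.15e6, "watts": 290.0},
        "server_C": {"ops_per_s": 1.22e6, "watts": 325.0},
    }

    ranked = sorted(servers.items(),
                    key=lambda kv: kv[1]["ops_per_s"] / kv[1]["watts"],
                    reverse=True)
    for name, m in ranked:
        eff = m["ops_per_s"] / m["watts"]    # computations per joule
        print(f"{name}: {eff:,.0f} operations per joule")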

  9. Nano-Scale Sample Acquisition Systems for Small Class Exploration Spacecraft

    Science.gov (United States)

    Paulsen, G.

    2015-12-01

    The paradigm for space exploration is changing. Large and expensive missions are very rare and the space community is turning to smaller, lighter, and less expensive missions that could still perform great exploration. These missions are also within reach of commercial companies such as the Google Lunar X Prize teams that develop small scale lunar missions. Recent commercial endeavors such as "Planet Labs inc." and Sky Box Imaging, inc. show that there are new benefits and business models associated with miniaturization of space hardware. The Nano-Scale Sample Acquisition System includes NanoDrill for capture of small rock cores and PlanetVac for capture of surface regolith. These two systems are part of the ongoing effort to develop "Micro Sampling" systems for deployment by the small spacecraft with limited payload capacities. The ideal applications include prospecting missions to the Moon and Asteroids. The MicroDrill is a rotary-percussive coring drill that captures cores 7 mm in diameter and up to 2 cm long. The drill weighs less than 1 kg and can capture a core from a 40 MPa strength rock within a few minutes, with less than 10 Watt power and less than 10 Newton of preload. The PlanetVac is a pneumatic based regolith acquisition system that can capture surface sample in touch-and-go maneuver. These sampling systems were integrated within the footpads of commercial quadcopter for testing. As such, they could also be used by geologists on Earth to explore difficult to get to locations.

  10. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimation for a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in the test analysis. The result shows that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.
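
    The record lists the GBM outputs (estimated value, interval, uncertainty, error and reliability) but not the underlying equations. Purely as a simplified illustration of combining a grey GM(1,1) one-step prediction with bootstrap resampling, the sketch below fits GM(1,1) to a small series of synthetic vibration amplitudes and uses a residual bootstrap to attach an interval to the prediction. It should be read as a generic grey-plus-bootstrap sketch, not as the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(5)

    def gm11_predict(x0, steps=1):
        # GM(1,1) grey model: fit on the accumulated series and extrapolate.
        n = x0.size
        x1 = np.cumsum(x0)
        z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
        B = np.column_stack([-z1, np.ones(n - 1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(n + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
        return x0_hat[:n], x0_hat[n:]                     # fitted series, forecasts

    # Small sample of positive vibration amplitudes, e.g. band RMS values (synthetic).
    x0 = np.array([2.1, 2.4, 2.2, 2.8, 2.6, 3.0, 2.9])

    fitted, forecast = gm11_predict(x0)
    residuals = x0 - fitted

    # Residual bootstrap: rebuild pseudo-series, refit GM(1,1), collect forecasts.
    boot = np.array([
        gm11_predict(np.clip(fitted + rng.choice(residuals, size=x0.size), 1e-6, None))[1][0]
        for _ in range(2000)
    ])

    print("estimated value (next step):", forecast[0].round(3))
    print("estimated 95% interval     :", np.percentile(boot, [2.5, 97.5]).round(3))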

  11. Information required from States, including 'small quantities protocol' status, under the Protocol Additional to Safeguards Agreements

    International Nuclear Information System (INIS)

    Tuley, N.

    1999-01-01

    The Model, or Additional, Protocol to the Model Safeguards Agreement, INFCIRC/153, contains, inter alia, provisions for expanded declarations from Member States to the IAEA. These provisions include earlier design information declarations and information on fuel cycle activities, such as mining and milling, that were not previously part of safeguards. The session discusses the extent of the expanded declarations and provides examples of the forms that will be used to provide the information to the Agency. (author)

  12. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in
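
    The sample degree correlation itself is not defined in this record, so the sketch below should be read only as a generic illustration of computing a degree-based statistic from many small sampled groups of neurons: it repeatedly draws small sets of nodes from a toy directed network and correlates each node's in- and out-degree within the sampled subnetwork. The network model, sample size and statistic are assumptions, not the paper's SDC.

    import numpy as np

    rng = np.random.default_rng(6)

    # Toy directed Erdos-Renyi network standing in for a cortical microcircuit.
    N, p, sample_size, n_samples = 400, 0.1, 8, 3000
    A = (rng.random((N, N)) < p).astype(int)
    np.fill_diagonal(A, 0)

    corrs = []
    for _ in range(n_samples):
        nodes = rng.choice(N, size=sample_size, replace=False)
        sub = A[np.ix_(nodes, nodes)]                 # connectivity among the sampled neurons
        k_in, k_out = sub.sum(axis=0), sub.sum(axis=1)
        if k_in.std() > 0 and k_out.std() > 0:        # skip degenerate samples
            corrs.append(np.corrcoef(k_in, k_out)[0, 1])

    print("mean within-sample in/out degree correlation:", np.round(np.mean(corrs), 3))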

  13. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ± 2 ° . This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  14. MHD model including small-scale perturbations in a plasma with temperature variations

    International Nuclear Information System (INIS)

    Kuvshinov, B.N.; Mikhailovskii, A.B.

    1996-01-01

    The possibility is studied of using a hydrodynamic model to describe a magnetized plasma with density and temperature variations on scales that are arbitrary with respect to the ion Larmor radius. It is shown that the inertial component of the transverse ion thermal flux should be taken into account. This component is found from the collisionless kinetic equation. It can also be obtained from the equations of the Grad type. A set of two-dimensional hydrodynamic equations for ions is obtained with this component taken into account. These equations are used to derive model hydrodynamic expressions for the density and temperature variations. It is shown that, for large-scale perturbations (when the wavelengths are longer than the ion Larmor radius), the expressions derived coincide with the corresponding kinetic expressions and, for perturbations on sub-Larmor scales (when the wavelengths are shorter than the Larmor radius), they agree qualitatively. Hydrodynamic dispersion relations are derived for several types of drift waves with arbitrary wavenumbers. The range of applicability of the MHD model is determined from a comparison of these dispersion relations with the kinetic ones. It is noted that, on the basis of results obtained, drift effects can be included in numerical MHD codes for studying plasma instabilities in high-temperature regimes in tokamaks

  15. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ~20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.

  16. Mechanical characteristics of historic mortars from tests on small-sample non-standard specimens

    Czech Academy of Sciences Publication Activity Database

    Drdácký, Miloš; Slížková, Zuzana

    2008-01-01

    Vol. 17, No. 1 (2008), pp. 20-29. ISSN 1407-7353. R&D Projects: GA ČR(CZ) GA103/06/1609. Institutional research plan: CEZ:AV0Z20710524. Keywords: small-sample non-standard testing * lime * historic mortar. Subject RIV: AL - Art, Architecture, Cultural Heritage

  17. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Jesse; Townson, Paul; Vanatta, Matt [EnergySolutions, Engineering and Technology Group, Richland, WA, 99354 (United States)

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009 DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity and batch transfer performance data at two different scales. These data will be used to examine the baseline capacity for a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and to provide scale-up information to predict full-scale operational performance. This information will in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The Small Scale Mixing Demonstration (SSMD) platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make-up facilities. All tank internals have been modeled, including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank and collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales. This work involved a combination of unique instruments to understand the three-dimensional distribution of solids using a combination of Coriolis meter measurements, in situ chord length distribution

  18. TableSim--A program for analysis of small-sample categorical data.

    Science.gov (United States)

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.
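
    TableSim itself is a Fortran 90 program; as a language-neutral illustration of the same idea -- exact rather than asymptotic P-values when cell counts are small -- the hedged SciPy sketch below compares Fisher's exact test with the chi-square approximation on a hypothetical sparse 2x2 table.

```python
from scipy.stats import chi2_contingency, fisher_exact

# A sparse 2x2 table where the chi-square approximation is questionable.
table = [[2, 7],
         [8, 3]]

odds_ratio, p_exact = fisher_exact(table)              # exact P-value
chi2, p_asymptotic, dof, _ = chi2_contingency(table)   # large-sample approximation

print(f"Fisher exact P = {p_exact:.4f}")
print(f"Chi-square   P = {p_asymptotic:.4f} (asymptotic, dubious at these counts)")
```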

  19. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  20. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    Methods for interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the mean intervals are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
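
    A minimal sketch of three of the interval estimators named above (classical, percentile bootstrap, and Bayesian bootstrap via Dirichlet weights) on a hypothetical sample of size 5; the data values and the number of replications are assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = np.array([4.1, 5.3, 3.8, 4.9, 5.6])   # hypothetical small sample, n = 5
n, B = len(x), 10000

# Classical interval: Student's t with n-1 degrees of freedom.
t_lo, t_hi = stats.t.interval(0.95, df=n - 1, loc=x.mean(), scale=stats.sem(x))

# Percentile bootstrap: resample with replacement, take empirical quantiles.
boot_means = np.array([rng.choice(x, size=n, replace=True).mean() for _ in range(B)])
b_lo, b_hi = np.percentile(boot_means, [2.5, 97.5])

# Bayesian bootstrap: Dirichlet(1, ..., 1) weights instead of multinomial counts.
w = rng.dirichlet(np.ones(n), size=B)
bb_lo, bb_hi = np.percentile(w @ x, [2.5, 97.5])

print(f"classical t       : ({t_lo:.2f}, {t_hi:.2f})")
print(f"bootstrap         : ({b_lo:.2f}, {b_hi:.2f})")
print(f"Bayesian bootstrap: ({bb_lo:.2f}, {bb_hi:.2f})")
```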

  1. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
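
    The study applies the Firth correction within Cox models; as a simplified, hedged sketch of the underlying idea only, the code below maximizes a logistic log-likelihood plus the Jeffreys-prior penalty 0.5*log|I(beta)| for a hypothetical 25-patient subtrial. The data, the logistic model and the sample size are illustrative assumptions and do not reproduce the authors' survival-endpoint setting.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical small biomarker subtrial: 25 patients, intercept + treatment
# indicator, binary response; plain maximum likelihood is biased at this size.
n = 25
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
true_beta = np.array([-0.5, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

def neg_penalized_loglik(beta, X, y):
    """Negative Firth-penalized logistic log-likelihood:
    -(loglik + 0.5 * log|X' W X|), with W = diag(p(1-p))."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    info = X.T @ (X * (p * (1 - p))[:, None])    # Fisher information
    penalty = 0.5 * np.linalg.slogdet(info)[1]
    return -(loglik + penalty)

fit = minimize(neg_penalized_loglik, x0=np.zeros(2), args=(X, y), method="BFGS")
print("Firth-penalized estimates:", fit.x)
```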

  2. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out either by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  3. The Accuracy of Inference in Small Samples of Dynamic Panel Data Models

    NARCIS (Netherlands)

    Bun, M.J.G.; Kiviet, J.F.

    2001-01-01

    Through Monte Carlo experiments, the small-sample behavior of various inference techniques for dynamic panel data models is examined when both the time-series and cross-section dimensions of the data set are small. The LSDV technique and corrected versions of it are compared with IV and GMM

  4. Rules of attraction: The role of bait in small mammal sampling at ...

    African Journals Online (AJOL)

    Baits or lures are commonly used for surveying small mammal communities, not only because they attract large numbers of these animals, but also because they provide sustenance for trapped individuals. In this study we used Sherman live traps with five bait treatments to sample small mammal populations at three ...

  5. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  6. Integrating sphere based reflectance measurements for small-area semiconductor samples

    Science.gov (United States)

    Saylan, S.; Howells, C. T.; Dahlem, M. S.

    2018-05-01

    This article describes a method that enables reflectance spectroscopy of small semiconductor samples using an integrating sphere, without the use of additional optical elements. We employed an inexpensive sample holder to measure the reflectance of different samples through 2-, 3-, and 4.5-mm-diameter apertures and applied a mathematical formulation to remove the bias from the measured spectra caused by illumination of the holder. Using the proposed method, the reflectance of samples fabricated using expensive or rare materials and/or low-throughput processes can be measured. It can also be incorporated to infer the internal quantum efficiency of small-area, research-level solar cells. Moreover, small samples that reflect light at large angles and develop scattering may also be measured reliably, by virtue of an integrating sphere insensitive to directionalities.

  7. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
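
    A minimal NumPy sketch of the multinomial-weighting idea for the simplest case, the sample mean (the paper treats general sample-moment statistics such as Pearson's correlation); the sample size and number of replications below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=30)                    # observed sample, n = 30
n, B = x.size, 5000

# Multinomial formulation: each bootstrap replication is a vector of counts
# summing to n; dividing by n gives resampling weights.
counts = rng.multinomial(n, np.full(n, 1.0 / n), size=B)   # shape (B, n)
W = counts / n

# All B bootstrap means from a single matrix-vector product -- no explicit
# loop over resampled data sets.
boot_means = W @ x

print("bootstrap SE of the mean:", boot_means.std(ddof=1))
print("theory, s/sqrt(n)       :", x.std(ddof=1) / np.sqrt(n))
```

    Because the whole vector of replications comes from one matrix product, the per-replication loop disappears, which is the source of the speed-up reported for the vectorized implementation above.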

  8. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction makes significant practical sense to stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR prediction approach with rolling mechanism is proposed. In the modeling procedure, a new developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window, for the next step ahead forecasting, rolls on by adding the most recent derived prediction result while deleting the first value of the former used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
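
    A hedged sketch of one reading of the rolling mechanism described above: an AR model with intercept is refit by least squares on the current data window, the one-step-ahead prediction is appended, and the oldest value is dropped. The window length, AR order and the toy settlement-like series are assumptions.

```python
import numpy as np

def fit_ar(window, p):
    """Least-squares fit of an AR(p) model with intercept on a 1-D window."""
    Y = window[p:]
    X = np.column_stack([np.ones(len(Y))] +
                        [window[p - k:-k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

def rolling_ar_forecast(series, window_size=12, p=2, horizon=5):
    """One-step-ahead AR forecasts with a rolling window: after each step the
    newest prediction is appended and the oldest window value is dropped."""
    window = list(series[-window_size:])
    preds = []
    for _ in range(horizon):
        coef = fit_ar(np.asarray(window), p)
        lags = window[-p:][::-1]                 # most recent value first
        pred = coef[0] + np.dot(coef[1:], lags)
        preds.append(pred)
        window = window[1:] + [pred]             # roll the window forward
    return preds

# Hypothetical settlement-like series: slow trend plus noise.
rng = np.random.default_rng(3)
series = np.cumsum(0.5 + 0.1 * rng.normal(size=30))
print(rolling_ar_forecast(series))
```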

  9. Small Sample Properties of the Wilcoxon Signed Rank Test with Discontinuous and Dependent Observations

    OpenAIRE

    Nadine Chlass; Jens J. Krueger

    2007-01-01

    This Monte-Carlo study investigates sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample-dependence, between-sample dependence, and the presence of ties. Our results show that both assumption violations induce severe size distortions and entail power losses. Surprisingly, these consequences do vary substantially with other properties the data may display. Results provided are particularly relevant for experimental set...

  10. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide]

    Science.gov (United States)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  11. Sampling versus systematic full lymphatic dissection in surgical treatment of non-small cell lung cancer.

    Science.gov (United States)

    Koulaxouzidis, Georgios; Karagkiouzis, Grigorios; Konstantinou, Marios; Gkiozos, Ioannis; Syrigos, Konstantinos

    2013-04-22

    The extent of mediastinal lymph node assessment during surgery for non-small cell cancer remains controversial. Different techniques are used, ranging from simple visual inspection of the unopened mediastinum to an extended bilateral lymph node dissection. Furthermore, different terms are used to define these techniques. Sampling is the removal of one or more lymph nodes under the guidance of pre-operative findings. Systematic (full) nodal dissection is the removal of all mediastinal tissue containing the lymph nodes systematically within anatomical landmarks. A Medline search was conducted to identify articles in the English language that addressed the role of mediastinal lymph node resection in the treatment of non-small cell lung cancer. Opinions as to the reasons for favoring full lymphatic dissection include complete resection, improved nodal staging and better local control due to resection of undetected micrometastasis. Arguments against routine full lymphatic dissection are increased morbidity, increase in operative time, and lack of evidence of improved survival. For complete resection of non-small cell lung cancer, many authors recommend a systematic nodal dissection as the standard approach during surgery, and suggest that this provides both adequate nodal staging and guarantees complete resection. Whether extending the lymph node dissection influences survival or recurrence rate is still not known. There are valid arguments in favor in terms not only of an improved local control but also of an improved long-term survival. However, the impact of lymph node dissection on long-term survival should be further assessed by large-scale multicenter randomized trials.

  12. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
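
    A hedged sketch of several of the reference-limit estimators compared above (min-max, mean +/- 2 SD on native and Box-Cox-transformed values, and nonparametric percentiles) applied to a hypothetical skewed sample of n = 27; the simulated values are placeholders, not the canine creatinine data of the study.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(11)
x = rng.lognormal(mean=4.4, sigma=0.25, size=27)   # hypothetical skewed analyte values

# Min-max of the observed values.
print("min-max             :", x.min().round(1), x.max().round(1))

# Mean +/- 2 SD on the native scale (only sensible for roughly Gaussian data).
m, s = x.mean(), x.std(ddof=1)
print("native mean +/- 2 SD:", round(m - 2 * s, 1), round(m + 2 * s, 1))

# Mean +/- 2 SD after Box-Cox transformation, back-transformed to native scale.
xt, lam = stats.boxcox(x)
mt, st = xt.mean(), xt.std(ddof=1)
print("Box-Cox mean +/- 2SD:", inv_boxcox(mt - 2 * st, lam).round(1),
      inv_boxcox(mt + 2 * st, lam).round(1))

# Nonparametric 2.5th / 97.5th percentiles (the reference method for large n).
print("2.5/97.5 percentiles:", np.percentile(x, [2.5, 97.5]).round(1))
```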

  13. Method to make accurate concentration and isotopic measurements for small gas samples

    Science.gov (United States)

    Palmer, M. R.; Wahl, E.; Cunningham, K. L.

    2013-12-01

    Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water head space experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow Cavity Ring-Down (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of 40 ml samples or smaller. However, the system, called the Small Sample Isotope Module (SSIM), does dilute the sample during delivery with inert carrier gas, which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module which we call 'double injection.' Two portions of the 40 ml sample (20 ml each) are introduced to the analyzer; the first injection flushes out the diluting gas and the second injection is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and that same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.

  14. Persistent Bovine Viral Diarrhea Virus infection in domestic and wild small ruminants and camelids including the mountain goat (Oreamnos americanus)

    Directory of Open Access Journals (Sweden)

    Danielle Darracq Nelson

    2016-01-01

    Full Text Available Bovine viral diarrhea virus (BVDV) is a Pestivirus best known for causing a variety of disease syndromes in cattle, including gastrointestinal disease, reproductive insufficiency, immunosuppression, mucosal disease, and hemorrhagic syndrome. The virus can be spread by transiently infected individuals and by persistently infected animals that may be asymptomatic while shedding large amounts of virus throughout their lifetime. BVDV has been reported in over 40 domestic and free-ranging species, and persistent infection has been described in eight of those species: white-tailed deer, mule deer, eland, mousedeer, mountain goats, alpacas, sheep, and domestic swine. This paper reviews the various aspects of BVDV transmission, disease syndromes, diagnosis, control, and prevention, as well as examines BVDV infection in domestic and wild small ruminants and camelids including mountain goats (Oreamnos americanus).

  15. AAMQS: A non-linear QCD analysis of new HERA data at small-x including heavy quarks

    International Nuclear Information System (INIS)

    Albacete, Javier L.; Armesto, Nestor; Salgado, Carlos A.; Milhano, Jose Guilherme; Quiroga Arias, Paloma

    2011-01-01

    We present a global analysis of available data on inclusive structure functions and reduced cross sections measured in electron-proton scattering at small values of Bjorken-x, x<0.01, including the latest data from HERA on reduced cross sections. Our approach relies on the dipole formulation of DIS together with the use of the non-linear running coupling Balitsky-Kovchegov equation for the description of the small-x dynamics. We improve our previous studies by including the heavy quark (charm and beauty) contribution to the reduced cross sections, and also by considering a variable flavor scheme for the running of the coupling. We obtain a good description of the data, with the fit parameters remaining stable with respect to our previous analyses where only light quarks were considered. The inclusion of the heavy quark contributions resulted in a good description of available experimental data for the charm component of the structure function and reduced cross section provided the initial transverse distribution of heavy quarks was allowed to differ from (more specifically, to have a smaller radius than) that of the light flavors. (orig.)

  16. AAMQS: A non-linear QCD analysis of new HERA data at small-x including heavy quarks

    Energy Technology Data Exchange (ETDEWEB)

    Albacete, Javier L. [CEA/Saclay, URA 2306, Unite de Recherche Associee au CNRS, Institut de Physique Theorique, Gif-sur-Yvette cedex (France); Armesto, Nestor; Salgado, Carlos A. [Universidade de Santiago de Compostela, Departamento de Fisica de Particulas and IGFAE, Santiago de Compostela (Spain); Milhano, Jose Guilherme [Instituto Superior Tecnico (IST), Universidade Tecnica de Lisboa, CENTRA, Lisboa (Portugal); Theory Unit, CERN, Physics Department, Geneve 23 (Switzerland); Quiroga Arias, Paloma [UPMC Univ. Paris 6 and CNRS UMR7589, LPTHE, Paris (France)

    2011-07-15

    We present a global analysis of available data on inclusive structure functions and reduced cross sections measured in electron-proton scattering at small values of Bjorken-x, x<0.01, including the latest data from HERA on reduced cross sections. Our approach relies on the dipole formulation of DIS together with the use of the non-linear running coupling Balitsky-Kovchegov equation for the description of the small-x dynamics. We improve our previous studies by including the heavy quark (charm and beauty) contribution to the reduced cross sections, and also by considering a variable flavor scheme for the running of the coupling. We obtain a good description of the data, with the fit parameters remaining stable with respect to our previous analyses where only light quarks were considered. The inclusion of the heavy quark contributions resulted in a good description of available experimental data for the charm component of the structure function and reduced cross section provided the initial transverse distribution of heavy quarks was allowed to differ from (more specifically, to have a smaller radius than) that of the light flavors. (orig.)

  17. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
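
    A hedged sketch of the Monte Carlo setup only: 6-point rating data are simulated from a binormal model and the AUC is re-estimated over many runs for each sample size. The thresholds and the signal-noise separation are placeholder assumptions, and the simple Wilcoxon (trapezoidal) AUC estimate is used here rather than the fitted ROC curves of the original study, so the bias reported above is not reproduced; the sketch mainly shows how the variability of the estimate grows as the sample size shrinks.

```python
import numpy as np

rng = np.random.default_rng(5)
THRESHOLDS = np.linspace(-1.5, 2.5, 5)      # cuts defining the 6 rating categories

def empirical_auc(signal, noise):
    """Wilcoxon/Mann-Whitney estimate of the area under the ROC curve,
    counting ties as 1/2."""
    diff = signal[:, None] - noise[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

def simulate(sample_size, d_prime=1.0, n_runs=2000):
    """Simulate 6-point ratings for signal and noise cases and return the
    AUC estimates across Monte Carlo runs."""
    aucs = []
    for _ in range(n_runs):
        noise = np.digitize(rng.normal(0.0, 1.0, sample_size), THRESHOLDS)
        signal = np.digitize(rng.normal(d_prime, 1.0, sample_size), THRESHOLDS)
        aucs.append(empirical_auc(signal, noise))
    return np.array(aucs)

for n in (15, 25, 50, 100):
    a = simulate(n)
    print(f"n = {n:3d}: mean AUC = {a.mean():.3f}, SD = {a.std(ddof=1):.3f}")
```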

  18. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  19. Bayesian estimation of P(X > x) from a small sample of Gaussian data

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2017-01-01

    The classical statistical uncertainty problem of estimation of upper tail probabilities on the basis of a small sample of observations of a Gaussian random variable is considered. Predictive posterior estimation is discussed, adopting the standard statistical model with diffuse priors of the two...

  20. Sensitivity study of micro four-point probe measurements on small samples

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Hansen, Torben Mikael

    2010-01-01

    probes than near the outer ones. The sensitive area is defined for infinite film, circular, square, and rectangular test pads, and convergent sensitivities are observed for small samples. The simulations show that the Hall sheet resistance RH in micro Hall measurements with position error suppression...

  1. A scanning tunneling microscope capable of imaging specified micron-scale small samples.

    Science.gov (United States)

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has its one end fixed to the sliding piece and its opposite end holding the tip pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand-cut tip made from 0.1 mm thin Pt/Ir wire to an isolated individual 32.5 × 32.5 μm² graphite flake. Good atomic resolution images and high quality tunneling current spectra for that specified tiny flake are obtained in ambient conditions with high repeatability within one month, showing high and long-term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show an outstanding tip-mount-related resonant frequency (low frequency), which further confirms the stability of the STM structure.

  2. A scanning tunneling microscope capable of imaging specified micron-scale small samples

    Science.gov (United States)

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has its one end fixed to the sliding piece and its opposite end holding the tip pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand-cut tip made from 0.1 mm thin Pt/Ir wire to an isolated individual 32.5 × 32.5 μm² graphite flake. Good atomic resolution images and high quality tunneling current spectra for that specified tiny flake are obtained in ambient conditions with high repeatability within one month, showing high and long-term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show an outstanding tip-mount-related resonant frequency (low frequency), which further confirms the stability of the STM structure.

  3. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    Science.gov (United States)

    Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050

  4. Time domain contact model for tyre/road interaction including nonlinear contact stiffness due to small-scale roughness

    Science.gov (United States)

    Andersson, P. B. U.; Kropp, W.

    2008-11-01

    Rolling resistance, traction, wear, excitation of vibrations, and noise generation are all attributes to consider in optimisation of the interaction between automotive tyres and wearing courses of roads. The key to understand and describe the interaction is to include a wide range of length scales in the description of the contact geometry. This means including scales on the order of micrometres that have been neglected in previous tyre/road interaction models. A time domain contact model for the tyre/road interaction that includes interfacial details is presented. The contact geometry is discretised into multiple elements forming pairs of matching points. The dynamic response of the tyre is calculated by convolving the contact forces with pre-calculated Green's functions. The smaller-length scales are included by using constitutive interfacial relations, i.e. by using nonlinear contact springs, for each pair of contact elements. The method is presented for normal (out-of-plane) contact and a method for assessing the stiffness of the nonlinear springs based on detailed geometry and elastic data of the tread is suggested. The governing equations of the nonlinear contact problem are solved with the Newton-Raphson iterative scheme. Relations between force, indentation, and contact stiffness are calculated for a single tread block in contact with a road surface. The calculated results have the same character as results from measurements found in literature. Comparison to traditional contact formulations shows that the effect of the small-scale roughness is large; the contact stiffness is only up to half of the stiffness that would result if contact is made over the whole element directly to the bulk of the tread. It is concluded that the suggested contact formulation is a suitable model to include more details of the contact interface. Further, the presented result for the tread block in contact with the road is a suitable input for a global tyre/road interaction model
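
    A hedged sketch of the numerical core described above: independent nonlinear contact springs solved with Newton-Raphson for the rigid-body approach that balances a prescribed normal load. The Hertz-like force law k*max(approach - gap, 0)^1.5, the spring constant and the random gap heights are placeholder assumptions, not the roughness-derived constitutive relation of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Element-wise gaps between tread and road (hypothetical roughness heights, m).
gaps = np.abs(rng.normal(0.0, 2e-4, size=200))
k = 1.0e9            # placeholder contact-spring constant per element (N/m^1.5)
F_target = 300.0     # prescribed normal load on the tread block (N)

def residual(delta):
    """Force imbalance for a rigid-body approach delta: sum of the
    Hertz-like element forces minus the target load."""
    pen = np.clip(delta - gaps, 0.0, None)
    return (k * pen**1.5).sum() - F_target

def derivative(delta):
    pen = np.clip(delta - gaps, 0.0, None)
    return (1.5 * k * np.sqrt(pen)).sum()

delta = gaps.max()                     # start with every element just touching
for _ in range(50):                    # Newton-Raphson iterations
    r = residual(delta)
    if abs(r) < 1e-6 * F_target:
        break
    delta -= r / derivative(delta)

contact_fraction = np.mean(delta > gaps)
print(f"approach = {delta * 1e6:.2f} um, elements in contact = {contact_fraction:.0%}")
```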

  5. System for sampling liquids in small jugs obturated by screwed taps

    International Nuclear Information System (INIS)

    Besnier, J.

    1995-01-01

    This invention describes a machine which automatically samples liquids in small jugs obturated by screwed taps. The device can be situated in an isolated room in order to work with radioactive liquids. The machine can be divided into three main parts: a module to catch the jug, in order to grip and hold it; a module to open and close it; and a module to take the sample. The latter draws the liquid with a suction device and puts it in a container so that the sample can be analysed. (TEC)

  6. Determination of Organic Pollutants in Small Samples of Groundwaters by Liquid-Liquid Extraction and Capillary Gas Chromatography

    DEFF Research Database (Denmark)

    Harrison, I.; Leader, R.U.; Higgo, J.J.W.

    1994-01-01

    A method is presented for the determination of 22 organic compounds in polluted groundwaters. The method includes liquid-liquid extraction of the base/neutral organics from small, alkaline groundwater samples, followed by derivatisation and liquid-liquid extraction of phenolic compounds after neutralisation. The extracts were analysed by capillary gas chromatography. Dual detection by flame ionisation and electron capture was used to reduce analysis time.

  7. Respondent-driven sampling and the recruitment of people with small injecting networks.

    Science.gov (United States)

    Paquette, Dana; Bryant, Joanne; de Wit, John

    2012-05-01

    Respondent-driven sampling (RDS) is a form of chain-referral sampling, similar to snowball sampling, which was developed to reach hidden populations such as people who inject drugs (PWID). RDS is said to reach members of a hidden population that may not be accessible through other sampling methods. However, less attention has been paid as to whether there are segments of the population that are more likely to be missed by RDS. This study examined the ability of RDS to capture people with small injecting networks. A study of PWID, using RDS, was conducted in 2009 in Sydney, Australia. The size of participants' injecting networks was examined by recruitment chain and wave. Participants' injecting network characteristics were compared to those of participants from a separate pharmacy-based study. A logistic regression analysis was conducted to examine the characteristics independently associated with having small injecting networks, using the combined RDS and pharmacy-based samples. In comparison with the pharmacy-recruited participants, RDS participants were almost 80% less likely to have small injecting networks, after adjusting for other variables. RDS participants were also more likely to have their injecting networks form a larger proportion of those in their social networks, and to have acquaintances as part of their injecting networks. Compared to those with larger injecting networks, individuals with small injecting networks were equally likely to engage in receptive sharing of injecting equipment, but less likely to have had contact with prevention services. These findings suggest that those with small injecting networks are an important group to recruit, and that RDS is less likely to capture these individuals.

  8. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
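
    A hedged sketch of the first study's setup: binary code profiles for two latent participant groups are simulated, then recovered with hierarchical clustering (Jaccard distance) and k-means, and recovery is scored with the adjusted Rand index. The group sizes, code probabilities and linkage choice are assumptions; latent class analysis, also examined in the paper, is omitted here.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(8)

# Hypothetical coded-interview data: 50 participants, 12 binary codes,
# two latent profiles that differ in which codes tend to be present.
n_per_group, n_codes = 25, 12
p_a = np.r_[np.full(6, 0.8), np.full(6, 0.2)]   # group A code probabilities
p_b = np.r_[np.full(6, 0.2), np.full(6, 0.8)]   # group B code probabilities
X = np.vstack([rng.random((n_per_group, n_codes)) < p_a,
               rng.random((n_per_group, n_codes)) < p_b]).astype(int)
truth = np.repeat([0, 1], n_per_group)

# Hierarchical clustering on Jaccard distances between binary profiles.
Z = linkage(X, method="average", metric="jaccard")
hier_labels = fcluster(Z, t=2, criterion="maxclust")

# K-means treats the 0/1 codes as numeric features.
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print("hierarchical ARI:", adjusted_rand_score(truth, hier_labels))
print("k-means ARI     :", adjusted_rand_score(truth, km_labels))
```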

  9. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  10. EDXRF applied to the chemical element determination of small invertebrate samples

    International Nuclear Information System (INIS)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de

    2015-01-01

    Energy Dispersive X-Ray Fluorescence (EDXRF) is a fast and easy-to-operate analytical technique, although it demands reliable analytical curves owing to the intrinsic matrix dependence and interference during analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented and a group of chemical elements could be determined in diverse biological matrices depending on the chemical element concentration. Particularly for invertebrates, EDXRF presents some advantages associated with the possibility of analyzing small samples, in which a collimator can be used to direct the incident X-rays onto a small surface of the analyzed sample. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analyzed. After sampling by using pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered by XRF polyethylene. Analyses were performed at a pressure lower than 30 Pa, varying voltage and electric current according to the chemical element to be analyzed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry after acid treatment of the samples (a mixture of nitric acid and hydrogen peroxide). Compared to the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the E_n numbers were in the range of -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (less than 100 mg), with the advantage of preserving the samples. (author)

  11. EDXRF applied to the chemical element determination of small invertebrate samples

    Energy Technology Data Exchange (ETDEWEB)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de, E-mail: marcelo_rlm@hotmail.com, E-mail: marianasantos_ufpe@hotmail.com, E-mail: rebecanuclear@gmail.com, E-mail: thomasmarques@live.com.pt, E-mail: ejfranca@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2015-07-01

    Energy Dispersive X-Ray Fluorescence (EDXRF) is a fast and easy-to-operate analytical technique, although it demands reliable analytical curves owing to the intrinsic matrix dependence and interference during analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented and a group of chemical elements could be determined in diverse biological matrices depending on the chemical element concentration. Particularly for invertebrates, EDXRF presents some advantages associated with the possibility of analyzing small samples, in which a collimator can be used to direct the incident X-rays onto a small surface of the analyzed sample. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analyzed. After sampling by using pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered by XRF polyethylene. Analyses were performed at a pressure lower than 30 Pa, varying voltage and electric current according to the chemical element to be analyzed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry after acid treatment of the samples (a mixture of nitric acid and hydrogen peroxide). Compared to the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the E_n numbers were in the range of -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (less than 100 mg), with the advantage of preserving the samples. (author)

  12. Filter Bank Regularized Common Spatial Pattern Ensemble for Small Sample Motor Imagery Classification.

    Science.gov (United States)

    Park, Sang-Hoon; Lee, David; Lee, Sang-Goog

    2018-02-01

    For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS) because CSP depends on sample-based covariance. Since the active frequency range differs between subjects, it is also inconvenient to set the frequency range separately for each subject. In this paper, we propose a filter-bank-based feature extraction method to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided using a filter bank. Second, regularized CSP (R-CSP) is applied to the divided EEG. Third, features are selected according to mutual information using an individual-feature selection algorithm. Fourth, parameter sets are selected for the ensemble. Finally, classification is performed using an ensemble based on the selected features. The Brain-Computer Interface (BCI) Competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP, respectively. Compared with the filter bank R-CSP with selected parameters, which is a parameter-selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
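
    A hedged sketch of the core CSP computation only, not the full filter-bank R-CSP ensemble described above: spatial filters are obtained from the generalized eigendecomposition of the two class-average covariance matrices, with a simple shrinkage toward the identity standing in for the regularization. The array shapes, number of filters and shrinkage weight are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6, reg=0.05):
    """CSP via the generalized eigenvalue problem Ca w = lambda (Ca + Cb) w,
    with shrinkage of each class covariance toward a scaled identity.

    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        C = np.mean([np.cov(t) for t in trials], axis=0)
        return (1 - reg) * C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    eigvals, eigvecs = eigh(Ca, Ca + Cb)               # ascending eigenvalues
    order = np.argsort(eigvals)
    pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return eigvecs[:, pick]                            # (n_channels, n_filters)

def log_variance_features(trials, W):
    """Log-variance of spatially filtered trials -- the usual CSP features."""
    feats = []
    for t in trials:
        v = (W.T @ t).var(axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)

# Toy usage with random 'EEG' (20 trials per class, 22 channels, 250 samples).
rng = np.random.default_rng(4)
a = rng.normal(size=(20, 22, 250))
b = rng.normal(size=(20, 22, 250))
W = csp_filters(a, b)
print(log_variance_features(a, W).shape)   # (20, 6)
```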

  13. A systematic review of studies on the faecal microbiota in anorexia nervosa: future research may need to include microbiota from the small intestine.

    Science.gov (United States)

    Schwensen, Hanna Ferløv; Kan, Carol; Treasure, Janet; Høiby, Niels; Sjögren, Magnus

    2018-03-14

    Anorexia nervosa (AN) is a poorly understood and often chronic condition. Deviations in the gut microbiota have been reported to influence the gut-brain axis in other disorders. Therefore, if present in AN, they may impact symptoms and illness progression. A review of the gut microbiota studies in AN is presented. A literature search on PubMed yielded 27 articles; 14 were selected and, based on relevance, 9 articles were included. The findings were interpreted in the larger context of preclinical research and clinical observations. Eight of the nine included studies analysed microbiota from faeces samples, while the last analysed a protein in plasma produced by the gut. Two studies were longitudinal and included an intervention (i.e., weight restoration), five were cross-sectional, one was a case report, and the last was a case series consisting of three cases. Deviations in abundance, diversity, and microbial composition of the faecal microbiota in AN were found. There are currently only a few studies on the gut microbiota in AN, all done on faeces samples, and not all describe the microbiota at the species level extensively. The archaeon Methanobrevibacter smithii was increased in participants with a BMI study and specifically in AN patients in three studies. Methanobrevibacter smithii may, if detected, be a benchmark biomarker for future studies. We propose that microbiota samples could also be collected from the small intestine, where a major exchange of nutrients takes place and where the microbiota may have a biological impact on AN.

  14. Biota dose assessment of small mammals sampled near uranium mines in northern Arizona

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Minter, K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kuhne, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kubilius, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2018-01-09

    In 2015, the U.S. Geological Survey (USGS) collected approximately 50 small mammal carcasses from Northern Arizona uranium mines and other background locations. Based on the highest gross alpha results, 11 small mammal samples were selected for radioisotopic analyses. None of the background samples had significant gross alpha results. The 11 small mammals were identified relative to the three 'indicator' mines located south of Fredonia, AZ on the Kanab Plateau (Kanab North Mine, Pinenut Mine, and Arizona 1 Mine) (Figure 1-1), which are operated by Energy Fuels Resources Inc. (EFRI). EFRI annually reports soil analysis for uranium and radium-226 using Arizona Department of Environmental Quality (ADEQ)-approved Standard Operating Procedures for Soil Sampling (EFRI 2016a, 2016b, 2017). In combination with the USGS small mammal radioisotopic tissue analyses, a biota dose assessment was completed by Savannah River National Laboratory (SRNL) using the RESidual RADioactivity-BIOTA (RESRAD-BIOTA, V. 1.8) dose assessment tool provided by the Argonne National Laboratory (ANL 2017).

  15. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Science.gov (United States)

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  16. Statistical issues in reporting quality data: small samples and casemix variation.

    Science.gov (United States)

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
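    As a hedged illustration of the 'shrinkage' estimation mentioned above (a generic empirical-Bayes composite estimate, not the estimator of any particular reporting system), the sketch below pulls each unit's observed mean toward the grand mean with a weight equal to its estimated reliability; the variance components, unit values, and function name are illustrative assumptions.

```python
import numpy as np

def shrink_unit_means(unit_means, unit_ns, within_var, between_var):
    """Composite (shrinkage) estimates: weight each unit's own mean by its
    reliability = between / (between + within/n), and the grand mean by the
    remainder. A generic empirical-Bayes sketch, not a published estimator."""
    unit_means = np.asarray(unit_means, float)
    unit_ns = np.asarray(unit_ns, float)
    grand_mean = np.average(unit_means, weights=unit_ns)
    reliability = between_var / (between_var + within_var / unit_ns)
    return reliability * unit_means + (1 - reliability) * grand_mean, reliability

# Illustrative numbers: three units with very different sample sizes
means = [0.62, 0.80, 0.74]
ns    = [12, 400, 60]
shrunk, rel = shrink_unit_means(means, ns, within_var=0.20, between_var=0.004)
for m, n, s, r in zip(means, ns, shrunk, rel):
    print(f"n={n:3d}  observed={m:.2f}  reliability={r:.2f}  shrunken={s:.2f}")
```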

  17. Microdochium nivale and Microdochium majus in seed samples of Danish small grain cereals

    DEFF Research Database (Denmark)

    Nielsen, L. K.; Justesen, A. F.; Jensen, J. D.

    2013-01-01

    Microdochium nivale and Microdochium majus are two of the fungal species found in the Fusarium Head Blight (FHB) complex infecting small grain cereals. Quantitative real-time PCR assays were designed to separate the two Microdochium species based on the translation elongation factor 1a gene (TEF-1a......) and used to analyse a total of 374 seed samples of wheat, barley, triticale, rye and oat sampled from farmers’ fields across Denmark from 2003 to 2007. Both fungal species were detected in the five cereal species, but M. majus showed a higher prevalence compared to M. nivale in most years in all cereal...... species except rye, in which M. nivale represented a larger proportion of the biomass and was more prevalent than M. majus in some samples. Historical samples of wheat and barley from 1957 to 2000 similarly showed a strong prevalence of M. majus over M. nivale, indicating that M. majus has been the main...

  18. Basic distribution free identification tests for small size samples of environmental data

    International Nuclear Information System (INIS)

    Federico, A.G.; Musmeci, F.

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain few data points, and the assumption of normal distributions is often not realistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on a massive use of CPU resources. The paper reviews the problem and introduces the feasibility of two non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data [it
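    A minimal sketch of the full-resampling idea for two small samples, resting on the equiprobability property that, under the hypothesis of a common population, every relabelling of the pooled observations is equally likely (this is a generic exact permutation test, not the specific program described in the paper; the statistic and data are illustrative assumptions).

```python
import itertools
import numpy as np

def exact_permutation_test(x, y):
    """Exact two-sample permutation test on the absolute difference of means.
    Enumerates every relabelling of the pooled data, which is feasible only
    because the samples are small."""
    pooled = np.concatenate([x, y])
    n_x = len(x)
    observed = abs(np.mean(x) - np.mean(y))
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n_x):
        mask = np.zeros(len(pooled), dtype=bool)
        mask[list(idx)] = True
        diff = abs(pooled[mask].mean() - pooled[~mask].mean())
        count += diff >= observed
        total += 1
    return count / total   # exact p-value

# Illustrative small environmental-style samples (made-up numbers)
a = np.array([1.2, 0.9, 1.5, 1.1, 1.3])
b = np.array([1.8, 2.1, 1.6, 1.9])
print("exact permutation p-value:", exact_permutation_test(a, b))
```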

  19. Quantifying predictability through information theory: small sample estimation in a non-Gaussian framework

    International Nuclear Information System (INIS)

    Haven, Kyle; Majda, Andrew; Abramov, Rafail

    2005-01-01

    Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was produced by a given prediction strategy. The sample utility is the minimum predictability, with a statistical level of confidence, which is implied by the data. Two practical algorithms for measuring such a sample utility are developed here. The first technique is based on the statistical method of null-hypothesis testing, while the second is based upon a central limit theorem for the relative entropy of moment-based probability densities. These techniques are tested on known probability densities with parameterized bimodality and skewness, and then applied to the Lorenz '96 model, a recently developed 'toy' climate model with chaotic dynamics mimicking the atmosphere. The results show a detection of non-Gaussian tendencies of prediction densities at small ensemble sizes with between 50 and 100 members, with a 95% confidence level
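    As a concrete illustration of a 'relative entropy of moment-based probability densities' (a generic Gaussian example, not the paper's sample-utility algorithms), the sketch below computes the relative entropy implied by the first two sample moments of a small ensemble relative to a long climatological record; the data and sizes are illustrative assumptions.

```python
import numpy as np

def gaussian_relative_entropy(mu_p, var_p, mu_q, var_q):
    """Relative entropy (KL divergence) D(p||q) between two 1-D Gaussians."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)

rng = np.random.default_rng(1)
climatology = rng.normal(0.0, 1.0, size=100_000)   # long "historical" record
ensemble = rng.normal(0.6, 0.8, size=75)           # small prediction ensemble

d = gaussian_relative_entropy(ensemble.mean(), ensemble.var(ddof=1),
                              climatology.mean(), climatology.var(ddof=1))
print(f"moment-based relative entropy of the ensemble vs. climatology: {d:.3f} nats")
```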

  20. Taking sputum samples from small children with cystic fibrosis: a matter of cooperation

    DEFF Research Database (Denmark)

    Pehn, Mette; Bregnballe, Vibeke

    2014-01-01

    Objectives: An important part of disease control in the Danish guidelines for the care of patients with cystic fibrosis (CF) is a monthly sputum sample obtained by tracheal suction. Coping with this unpleasant procedure in small children depends heavily on support from parents and nurses. The objective...... of this study was to develop a tool to help parents and children cope with tracheal suctioning. Methods: Three short videos showing how nurses perform tracheal suctioning to get a sputum sample from small children with cystic fibrosis were made. The videos were shown to and discussed with parents...... and children to help them identify their own challenges in coping with the procedure. The study was carried out in the outpatient clinic at the CF centre, Aarhus University Hospital. Results: The videos are a useful tool to convince the parents, nurses and children from the age of about four years...

  1. Auto-validating von Neumann rejection sampling from small phylogenetic tree spaces

    Directory of Open Access Journals (Sweden)

    York Thomas

    2009-01-01

    Full Text Available Abstract Background In phylogenetic inference one is interested in obtaining samples from the posterior distribution over the tree space on the basis of some observed DNA sequence data. One of the simplest sampling methods is the rejection sampler due to von Neumann. Here we introduce an auto-validating version of the rejection sampler, via interval analysis, to rigorously draw samples from posterior distributions over small phylogenetic tree spaces. Results The posterior samples from the auto-validating sampler are used to rigorously (i) estimate posterior probabilities for different rooted topologies based on mitochondrial DNA from human, chimpanzee and gorilla, (ii) conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites from three primates and (iii) obtain a posterior estimate of the human–Neanderthal divergence time. Conclusion This solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over rooted and unrooted small tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
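    For context, a minimal sketch of plain von Neumann rejection sampling from an unnormalised density, without the interval-analysis auto-validation that is the paper's contribution; the target density, proposal, and envelope constant are illustrative assumptions.

```python
import numpy as np

def rejection_sample(target_unnorm, proposal_sample, proposal_pdf, envelope_c, n, rng):
    """Draw n samples from a density proportional to target_unnorm using
    von Neumann rejection: accept x ~ proposal when
    u * envelope_c * proposal_pdf(x) <= target_unnorm(x), with u ~ U(0,1).
    envelope_c must satisfy target_unnorm(x) <= envelope_c * proposal_pdf(x)."""
    out = []
    while len(out) < n:
        x = proposal_sample(rng)
        u = rng.uniform()
        if u * envelope_c * proposal_pdf(x) <= target_unnorm(x):
            out.append(x)
    return np.array(out)

# Illustrative target: unnormalised Beta(3, 2) density on (0, 1)
target = lambda x: x**2 * (1 - x)
rng = np.random.default_rng(0)
samples = rejection_sample(target,
                           proposal_sample=lambda r: r.uniform(),   # U(0,1) proposal
                           proposal_pdf=lambda x: 1.0,
                           envelope_c=0.2,                          # max of target is about 0.148
                           n=5000, rng=rng)
print("sample mean (exact Beta(3,2) mean = 0.6):", samples.mean())
```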

  2. Mass amplifying probe for sensitive fluorescence anisotropy detection of small molecules in complex biological samples.

    Science.gov (United States)

    Cui, Liang; Zou, Yuan; Lin, Ninghang; Zhu, Zhi; Jenkins, Gareth; Yang, Chaoyong James

    2012-07-03

    Fluorescence anisotropy (FA) is a reliable and excellent choice for fluorescence sensing. One of the key factors influencing the FA value for any molecule is the molar mass of the molecule being measured. As a result, the FA method with functional nucleic acid aptamers has been limited to macromolecules such as proteins and is generally not applicable for the analysis of small molecules because their molecular masses are too small to produce observable FA value changes. We report here a molecular mass amplifying strategy to construct anisotropy aptamer probes for small molecules. The probe is designed in such a way that only when a target molecule binds to the probe does it activate its binding ability to an anisotropy amplifier (a high molecular mass molecule such as a protein), thus significantly increasing the molecular mass and FA value of the probe/target complex. Specifically, a mass amplifying probe (MAP) consists of a targeting aptamer domain against a target molecule and a molecular mass amplifying aptamer domain for the amplifier protein. The probe is initially rendered inactive by a small blocking strand partially complementary to both the target aptamer and the amplifier protein aptamer, so that the mass amplifying aptamer domain does not bind to the amplifier protein unless the probe has been activated by the target. In this way, we prepared two probes, each consisting of a target aptamer (against ATP or cocaine, respectively), a thrombin aptamer (as the mass amplifier), and a fluorophore. Both probes worked well against their corresponding small molecule targets, and the detection limits for ATP and cocaine were 0.5 μM and 0.8 μM, respectively. More importantly, because FA is less affected by environmental interferences, ATP in cell media and cocaine in urine were directly detected without any tedious sample pretreatment. Our results established that our molecular mass amplifying strategy can be used to design aptamer probes for rapid, sensitive, and selective

  3. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests, whereas others have found nonparametric tests to be too conservative and less powerful and have thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitations. We used a pooled-resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except Cauchy and extreme variable lognormal distributions; in such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
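    A hedged sketch of a pooled-resampling bootstrap t-test for two small independent samples (an illustration of the general idea, not necessarily the exact resampling scheme of the paper): under the null hypothesis both groups are resampled from the pooled data, and the observed t statistic is compared with the bootstrap distribution. The data and function name are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=0):
    """Two-sided bootstrap t-test with pooled resampling: resample both groups
    (with replacement) from the pooled data, which imposes the null hypothesis
    of a common distribution, and compare the observed Welch t statistic with
    the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs = stats.ttest_ind(x, y, equal_var=False).statistic
    exceed = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        t_b = stats.ttest_ind(bx, by, equal_var=False).statistic
        exceed += abs(t_b) >= abs(t_obs)
    return (exceed + 1) / (n_boot + 1)   # add-one correction avoids a zero p-value

# Illustrative small, skewed samples
x = np.array([2.1, 3.4, 2.8, 5.9, 2.5])
y = np.array([4.8, 6.1, 5.2, 7.9, 5.5, 6.3])
print("pooled bootstrap p-value:", pooled_bootstrap_t_test(x, y))
```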

  4. Report of the advisory group meeting on elemental analysis of extremely small samples

    International Nuclear Information System (INIS)

    2002-01-01

    This publication contains a summary of the discussions held at the meeting, with a brief description and comparative characteristics of the most common nuclear analytical techniques used for the analysis of very small samples, as well as the conclusions of the meeting. Some aspects of reference materials and quality control are also discussed. The publication also contains the individual contributions made by the participants; each of these papers has been provided with an abstract and indexed separately

  5. Enrichment and determination of small amounts of 90Sr/90Y in water samples

    International Nuclear Information System (INIS)

    Mundschenk, H.

    1979-01-01

    Small amounts of 90Sr/90Y can be concentrated from large volumes of surface water (100 l) by precipitation of the phosphates, using bentonite as an adsorber matrix. In the case of samples containing little or no suspended matter (tap water, ground water, sea water), the daughter 90Y can be extracted directly using filter beds impregnated with HDEHP. The applicability of both techniques is demonstrated under realistic conditions. (orig.) [de

  6. A simple technique for measuring the superconducting critical temperature of small (≥10 μg) samples

    International Nuclear Information System (INIS)

    Pereira, R.F.R.; Meyer, E.; Silveira, M.F. da.

    1983-01-01

    A simple technique for measuring the superconducting critical temperature of small (≥10 μg) samples is described. The apparatus is built in the form of a probe, which can be introduced directly into a liquid He storage dewar and permits the determination of the critical temperature, with an imprecision of ±0.05 K above 4.2 K, in about 10 minutes. (Author) [pt

  7. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2018-04-24

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. The approach further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, which avoids the need for expensive isotopically labeled mixtures and allows the study of samples taken from the field without modification. The process also permits sampling in vivo, enabling real-time ambient studies of microbial communities.

  8. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2016-03-29

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. The approach further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, which avoids the need for expensive isotopically labeled mixtures and allows the study of samples taken from the field without modification. The method also permits sampling in vivo, enabling real-time ambient studies of microbial communities.

  9. Transcriptome landscape of Lactococcus lactis reveals many novel RNAs including a small regulatory RNA involved in carbon uptake and metabolism.

    Science.gov (United States)

    van der Meulen, Sjoerd B; de Jong, Anne; Kok, Jan

    2016-01-01

    RNA sequencing has revolutionized genome-wide transcriptome analyses, and the identification of non-coding regulatory RNAs in bacteria has thus increased concurrently. Here we reveal the transcriptome map of the lactic acid bacterial paradigm Lactococcus lactis MG1363 by employing differential RNA sequencing (dRNA-seq) and a combination of manual and automated transcriptome mining. This resulted in a high-resolution genome annotation of L. lactis and the identification of 60 cis-encoded antisense RNAs (asRNAs), 186 trans-encoded putative regulatory RNAs (sRNAs) and 134 novel small ORFs. Based on the putative targets of asRNAs, a novel classification is proposed. Several transcription factor DNA binding motifs were identified in the promoter sequences of (a)sRNAs, providing insight into the interplay between lactococcal regulatory RNAs and transcription factors. The presence and lengths of 14 putative sRNAs were experimentally confirmed by differential Northern hybridization, including the abundant 6S RNA, which is differentially expressed depending on the available carbon source. For another sRNA, LLMGnc_147, functional analysis revealed that it is involved in carbon uptake and metabolism. L. lactis contains 13% leaderless mRNAs (lmRNAs) that, from an analysis of overrepresentation in GO classes, seem predominantly involved in nucleotide metabolism and DNA/RNA binding. Moreover, an A-rich sequence motif immediately following the start codon was uncovered, which could provide novel insight into the translation of lmRNAs. Altogether, this first experimental genome-wide assessment of the transcriptome landscape of L. lactis and subsequent sRNA studies provide an extensive basis for the investigation of regulatory RNAs in L. lactis and related lactococcal species.

  10. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
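    For context, the core replica-exchange step that both REMC and the proposed MO-REMC/HMO-REMC variants build on is the temperature-swap acceptance test sketched below (a generic parallel-tempering move, not RosettaLigand's implementation; the energies, temperatures, and pairing scheme are illustrative assumptions).

```python
import math
import random

def attempt_swap(energies, betas, i, j, rng=random):
    """Standard replica-exchange (parallel tempering) swap between replicas i and j:
    accept with probability min(1, exp((beta_i - beta_j) * (E_i - E_j))).
    The MO-REMC variants described above additionally use Pareto-front
    information to choose which replicas to pair for exchange."""
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if delta >= 0 or rng.random() < math.exp(delta):
        energies[i], energies[j] = energies[j], energies[i]   # exchange configurations
        return True
    return False

# Illustrative ladder of four replicas (arbitrary energy units)
random.seed(0)
energies = [-120.0, -112.5, -104.0, -95.0]
betas = [1.0 / t for t in (1.0, 1.5, 2.2, 3.3)]   # inverse temperatures
for i in range(len(energies) - 1):
    print(f"swap {i}<->{i+1}:", attempt_swap(energies, betas, i, i + 1))
```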

  11. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity

    Science.gov (United States)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of that of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc in directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.

  12. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
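    A generic sketch of Wald's SPRT decision rule that such an approach builds on (illustrative simple-vs-simple Bernoulli hypotheses, not the paper's TDT-based likelihoods): sampling continues until the log-likelihood ratio crosses one of two boundaries, and otherwise the SNP stays in the 'keep sampling' group. The error rates, hypotheses, and data below are illustrative assumptions.

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli data.
    H0: success probability = p0 vs H1: success probability = p1 (p1 > p0).
    Returns 'accept H1', 'accept H0', or 'keep sampling' (the third group
    mentioned above), using boundaries A = (1-beta)/alpha and B = beta/(1-alpha)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return f"accept H1 after {n} observations"
        if llr <= lower:
            return f"accept H0 after {n} observations"
    return "keep sampling"   # not enough evidence either way

# Illustrative 0/1 transmission-style data (made up)
data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(sprt(data, p0=0.5, p1=0.7))
```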

  13. Determination of phosphorus in small amounts of protein samples by ICP-MS.

    Science.gov (United States)

    Becker, J Sabine; Boulyga, Sergei F; Pickhardt, Carola; Becker, J; Buddrus, Stefan; Przybylski, Michael

    2003-02-01

    Inductively coupled plasma mass spectrometry (ICP-MS) is used for phosphorus determination in protein samples. A small amount of solid protein sample (down to 1 μg) or digest (1–10 μL of protein solution) was denatured in nitric acid and hydrogen peroxide by closed-microvessel microwave digestion. Phosphorus determination was performed with an optimized analytical method using a double-focusing sector field inductively coupled plasma mass spectrometer (ICP-SFMS) and quadrupole-based ICP-MS (ICP-QMS). For quality control of the phosphorus determination, a certified reference material (CRM), single cell proteins (BCR 273) with a high phosphorus content of 26.8 ± 0.4 mg g(-1), was analyzed. For studies on phosphorus determination in proteins while reducing the sample amount as far as possible, the homogeneity of CRM BCR 273 was investigated. Relative standard deviation and measurement accuracy in ICP-QMS were within 2%, 3.5%, 11% and 12% when using CRM BCR 273 sample weights of 40 mg, 5 mg, 1 mg and 0.3 mg, respectively. The lowest possible sample weight for an accurate phosphorus analysis in protein samples by ICP-MS is discussed. The analytical method developed was applied to the analysis of homogeneous protein samples in very low amounts [1–100 μg of solid protein sample, e.g. β-casein, or down to 1 μL of protein or digest in solution (e.g., tau protein)]. A further reduction of the diluted protein solution volume was achieved by the application of flow injection in ICP-SFMS, which is discussed with reference to real protein digests after protein separation using 2D gel electrophoresis. The detection limits for phosphorus in biological samples were determined by ICP-SFMS down to the ng g(-1) level. The present work discusses the figures of merit for the determination of phosphorus in a small amount of protein sample with ICP-SFMS in comparison to ICP-QMS.

  14. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to a probability distribution, tested with a χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.

  15. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    Directory of Open Access Journals (Sweden)

    L. Renbaum-Wolff

    2013-01-01

    Full Text Available Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10⁻³ and 10³ Pascal seconds (Pa s) in samples that cover a range of chemical properties and with real-time relative humidity and temperature control; hence, the technique should be well-suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.

  16. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  17. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    Science.gov (United States)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large set of current samples. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement paradigm and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with that of three other calibration methods for evaluation. Main results. The PDA method effectively closed the gap between historical and current data and made it possible to exploit large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application
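    A loose sketch of one way principal components can bridge a large historical data set and an ultra-small current calibration set before decoder fitting (an illustration of the general idea only, not the PDA algorithm of the paper; the dimensionalities, the ridge decoder, and the simulated drift are all illustrative assumptions).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulated "historical" neural features and 2-D kinematics (illustrative only)
n_hist, n_curr, n_units = 2000, 5, 96
W_true = rng.normal(size=(n_units, 2))
X_hist = rng.normal(size=(n_hist, n_units))
y_hist = X_hist @ W_true + 0.1 * rng.normal(size=(n_hist, 2))

# "Current" session: same tuning, but with an offset drift in the recordings
drift = rng.normal(scale=0.5, size=n_units)
X_curr = rng.normal(size=(n_curr, n_units)) + drift
y_curr = (X_curr - drift) @ W_true + 0.1 * rng.normal(size=(n_curr, 2))

# Project both data sets into the principal subspace of the historical data,
# after centring the current set with its own (small-sample) mean, so that the
# large historical set can be reused when refitting the decoder.
pca = PCA(n_components=10).fit(X_hist - X_hist.mean(axis=0))
Z_hist = pca.transform(X_hist - X_hist.mean(axis=0))
Z_curr = pca.transform(X_curr - X_curr.mean(axis=0))

decoder = Ridge(alpha=1.0).fit(np.vstack([Z_hist, Z_curr]),
                               np.vstack([y_hist, y_curr]))
print("decoded output for one current trial:", decoder.predict(Z_curr[:1]))
```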

  18. Is a 'convenience' sample useful for estimating immunization coverage in a small population?

    Science.gov (United States)

    Weir, Jean E; Jones, Carrie

    2008-01-01

    Rapid survey methodologies are widely used for assessing immunization coverage in developing countries, approximating true stratified random sampling. Non-random ('convenience') sampling is not considered appropriate for estimating immunization coverage rates but has the advantages of low cost and expediency. We assessed the validity of a convenience sample of children presenting to a travelling clinic by comparing the coverage rate in the convenience sample to the true coverage established by surveying each child in three villages in rural Papua New Guinea. The rate of DTF immunization coverage as estimated by the convenience sample was within 10% of the true coverage when the proportion of children in the sample was two-thirds or when only children over the age of one year were counted, but differed by 11% when the sample included only 53% of the children and when all eligible children were included. The convenience sample may be sufficiently accurate for reporting purposes and is useful for identifying areas of low coverage.

  19. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    International Nuclear Information System (INIS)

    Jannot, Yves; Godefroy, Justine; Degiovanni, Alain; Grigorova-Moutiers, Veneta

    2017-01-01

    Hot plate methods under steady-state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of a 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. The latter method can be used only if the thickness/width ratio of the sample is sufficiently low, while the guarded hot plate method requires large-width samples (typical cross section of 0.6 × 0.6 m²); thus, neither method can be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centred hot plate method by applying the 1D heat flux model. It is shown that these optimal values do not depend on the size or the thermal conductivity of the samples (in the range 0.015–0.2 W m−1 K−1), but only on Ta. The experimental results obtained validate the method for several reference samples for values of the thickness/width ratio up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross section, down to 0.045 × 0.045 m². (paper)

  20. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    Science.gov (United States)

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
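    A minimal sketch of the kind of Beta-Binomial calculation such a framework rests on; the Beta(1, 1) prior, the 5% threshold, and the counts are illustrative assumptions, not the paper's recommended settings.

```python
from scipy import stats

def ae_incidence_summary(n_events, n_patients, threshold=0.05,
                         confidence=0.90, a_prior=1.0, b_prior=1.0):
    """With a Beta(a, b) prior and x AEs in n patients, the posterior for the
    incidence proportion p is Beta(a + x, b + n - x). Report (1) the posterior
    probability that p exceeds the threshold and (2) an upper bound on p that
    holds with the stated confidence level."""
    post = stats.beta(a_prior + n_events, b_prior + n_patients - n_events)
    prob_above = post.sf(threshold)          # P(p > threshold | data)
    upper_bound = post.ppf(confidence)       # one-sided upper credible bound
    return prob_above, upper_bound

prob, ub = ae_incidence_summary(n_events=1, n_patients=30)
print(f"P(p > 5%) = {prob:.2f}; 90% upper bound on p = {ub:.3f}")
```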

  1. Tools for Inspecting and Sampling Waste in Underground Radioactive Storage Tanks with Small Access Riser Openings

    International Nuclear Information System (INIS)

    Nance, T.A.

    1998-01-01

    Underground storage tanks with 2 inch to 3 inch diameter access ports at the Department of Energy's Savannah River Site have been used to store radioactive solvents and sludge. In order to close these tanks, their contents first need to be quantified in terms of volume and chemical and radioactive characteristics. To provide information on the volume of waste contained within the tanks, a small remote inspection system was needed. This inspection system was designed to provide lighting and pan-and-tilt capability in an inexpensive package with zoom and color video. The system also needed to be operated inside a plastic tent built over the access port to contain any contamination exiting the port. It had to be built to travel into the small port opening, through the riser pipe, into the tank's evacuated space, and back out of the riser pipe and access port with no possibility of being caught and blocking the access riser. Long thin plates were found in many access riser pipes that blocked the inspection system from penetrating into the tank interiors. Retrieval tools were developed to clear these plates from the risers, and sampling devices were developed to collect solvent samples while providing safe containment for them. This paper discusses the inspection system, the tools for clearing access pipes, and the solvent sampling tools developed to evaluate the contents of the underground solvent storage tanks

  2. Simultaneous small-sample comparisons in longitudinal or multi-endpoint trials using multiple marginal models

    DEFF Research Database (Denmark)

    Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A

    2018-01-01

    Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels...

  3. Small Sample Reactivity Measurements in the RRR/SEG Facility: Reanalysis using TRIPOLI-4

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, Andrew [Idaho National Lab. (INL), Idaho Falls, ID (United States); Palmiotti, Guiseppe [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This work involved reanalyzing the RRR/SEG integral experiments performed at the Rossendorf facility in Germany throughout the 1970s and 80s. These small sample reactivity worth measurements were carried out using the pile oscillator technique for many different fission products, structural materials, and standards. The coupled fast-thermal system was designed such that the measurements would provide insight into elemental data, specifically the competing effects between neutron capture and scatter. Comparing the measured to calculated reactivity values can then provide adjustment criteria to ultimately improve nuclear data for fast reactor designs. Due to the extremely small reactivity effects measured (typically less than 1 pcm) and the specific heterogeneity of the core, the tool chosen for this analysis was TRIPOLI-4. This code allows for high fidelity 3-dimensional geometric modeling, and the most recent, unreleased version, is capable of exact perturbation theory.

  4. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5–50), inter-eye correlations (0–0.75) and effect sizes (0–0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test using generalized estimating equations (GEE), and the F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (on the average of the two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of the two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
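    A simplified sketch of the kind of simulation described above for design 1 (two eyes of each subject in different comparison groups): correlated eye measurements are generated with no true group effect, and the empirical type I error of the paired t-test is recorded. The sample size, correlation, and number of simulations are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def type1_error_paired_t(n_subjects=10, rho=0.5, n_sim=5000, alpha=0.05, seed=0):
    """Monte Carlo type I error of the paired t-test for design 1, simulating
    inter-eye correlation rho and no true group effect."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    rejections = 0
    for _ in range(n_sim):
        eyes = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)
        p = stats.ttest_rel(eyes[:, 0], eyes[:, 1]).pvalue
        rejections += p < alpha
    return rejections / n_sim

print("empirical type I error of the paired t-test:", type1_error_paired_t())
```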

  5. Auxiliary variables in multiple imputation in regression with missing X: a warning against including too many in small sample research

    Directory of Open Access Journals (Sweden)

    Hardt Jochen

    2012-12-01

    Full Text Available Abstract Background Multiple imputation is becoming increasingly popular. Theoretical considerations as well as simulation studies have shown that the inclusion of auxiliary variables is generally of benefit. Methods A simulation study of a linear regression with a response Y and two predictors X1 and X2 was performed on data with n = 50, 100 and 200 using complete cases or multiple imputation with 0, 10, 20, 40 and 80 auxiliary variables. Mechanisms of missingness were either 100% MCAR or 50% MAR + 50% MCAR. Auxiliary variables had low (r = .10) vs. moderate (r = .50) correlations with X’s and Y. Results The inclusion of auxiliary variables can improve a multiple imputation model. However, inclusion of too many variables leads to downward bias of regression coefficients and decreases precision. When the correlations are low, inclusion of auxiliary variables is not useful. Conclusion More research on auxiliary variables in multiple imputation should be performed. A preliminary rule of thumb could be that the ratio of variables to cases with complete data should not go below 1 : 3.

  6. Static, Mixed-Array Total Evaporation for Improved Quantitation of Plutonium Minor Isotopes in Small Samples

    Science.gov (United States)

    Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; Spencer, Khalil J.

    2016-06-01

    Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low abundance isotope ratios. Here, static, mixed-array total evaporation techniques are examined as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.

  7. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. On-chip acoustophoretic isolation of microflora including S. typhimurium from raw chicken, beef and blood samples.

    Science.gov (United States)

    Ngamsom, Bongkot; Lopez-Martinez, Maria J; Raymond, Jean-Claude; Broyer, Patrick; Patel, Pradip; Pamme, Nicole

    2016-04-01

    Pathogen analysis in food samples routinely involves lengthy growth-based pre-enrichment and selective enrichment of food matrices to increase the ratio of pathogen to background flora. Similarly, for blood culture analysis, pathogens must be isolated and enriched from a large excess of blood cells to allow further analysis. Conventional techniques of centrifugation and filtration are cumbersome, suffer from low sample throughput, are not readily amenable to automation and carry a risk of damaging biological samples. We report on-chip acoustophoresis as a pre-analytical technique for the resolution of total microbial flora from food and blood samples. The resulting 'clarified' sample is expected to increase the performance of downstream systems for the specific detection of the pathogens. A microfluidic chip with three inlets, a central separation channel and three outlets was utilized. Samples were introduced through the side inlets, and buffer solution through the central inlet. Upon ultrasound actuation, large debris particles (10-100 μm) from meat samples were continuously partitioned into the central buffer channel, leaving the 'clarified' outer sample streams containing both, the pathogenic cells and the background flora (ca. 1 μm) to be collected over a 30 min operation cycle before further analysis. The system was successfully tested with Salmonella typhimurium-spiked (ca. 10(3)CFU mL(-1)) samples of chicken and minced beef, demonstrating a high level of the pathogen recovery (60-90%). When applied to S. typhimurium contaminated blood samples (10(7)CFU mL(-1)), acoustophoresis resulted in a high depletion (99.8%) of the red blood cells (RBC) which partitioned in the buffer stream, whilst sufficient numbers of the viable S. typhimurium remained in the outer channels for further analysis. These results indicate that the technology may provide a generic approach for pre-analytical sample preparation prior to integrated and automated downstream detection of

  9. Use of aspiration method for collecting brain samples for rabies diagnosis in small wild animals.

    Science.gov (United States)

    Iamamoto, K; Quadros, J; Queiroz, L H

    2011-02-01

    In developing countries such as Brazil, where canine rabies is still a considerable problem, samples from wildlife species are infrequently collected and submitted for screening for rabies. A collaborative study was established involving environmental biologists and veterinarians for rabies epidemiological research in a specific ecological area located at the Sao Paulo State, Brazil. The wild animals' brains are required to be collected without skull damage because the skull's measurements are important in the identification of the captured animal species. For this purpose, samples from bats and small mammals were collected using an aspiration method by inserting a plastic pipette into the brain through the magnum foramen. While there is a progressive increase in the use of the plastic pipette technique in various studies undertaken, it is also appreciated that this method could foster collaborative research between wildlife scientists and rabies epidemiologists thus improving rabies surveillance. © 2009 Blackwell Verlag GmbH.

  10. Investigation of Phase Transition-Based Tethered Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Scharf, Daniel; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and possible return to Earth. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  11. Modeling and Testing of Phase Transition-Based Deployable Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Keim, Jason; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and return. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  12. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Vivek [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cai, Guowei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gribok, Andrei V. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mahadevan, Sankaran [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements, producing high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The implemented model in the GRIZZLY code is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how the ingress of sodium and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples, and the results are summarized.

  13. Predicting Antitumor Activity of Peptides by Consensus of Regression Models Trained on a Small Data Sample

    Directory of Open Access Journals (Sweden)

    Ivanka Jerić

    2011-11-01

    Full Text Available Predicting the antitumor activity of compounds using regression models trained on a small number of compounds with measured biological activity is an ill-posed inverse problem; yet, it occurs very often within the academic community. To counteract, to some extent, the overfitting problems caused by small training data, we propose to use a consensus of six regression models for predicting the biological activity of a virtual library of compounds. The QSAR descriptors of 22 compounds related to the opioid growth factor (OGF, Tyr-Gly-Gly-Phe-Met) with known antitumor activity were used to train the regression models: a feed-forward artificial neural network, k-nearest neighbors, sparseness-constrained linear regression, and linear and nonlinear (with polynomial and Gaussian kernels) support vector machines. The regression models were applied to a virtual library of 429 compounds, which resulted in six lists of candidate compounds ranked by predicted antitumor activity. The highly ranked candidate compounds were synthesized, characterized and tested for antiproliferative activity. Some of the prepared peptides showed more pronounced activity compared with the native OGF; however, they were less active than highly ranked compounds selected previously by the radial basis function support vector machine (RBF SVM) regression model. The ill-posedness of the related inverse problem causes unstable behavior of the trained regression models on test data. These results point to the high complexity of prediction based on regression models trained on a small data sample.
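    A compact sketch of the consensus idea, i.e. averaging the predictions of several regression models trained on the same small data set and ranking library compounds by the consensus score; the models, descriptors, and data below are placeholders, not the six models or QSAR descriptors of the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.normal(size=(22, 8))            # 22 "compounds", 8 descriptors
y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + 0.1 * rng.normal(size=22)
X_library = rng.normal(size=(429, 8))         # virtual library to rank

models = [LinearRegression(), Lasso(alpha=0.1),
          KNeighborsRegressor(n_neighbors=3),
          SVR(kernel="linear"), SVR(kernel="rbf")]

# Consensus prediction: average of the individual model predictions
preds = np.column_stack([m.fit(X_train, y_train).predict(X_library) for m in models])
consensus = preds.mean(axis=1)
top10 = np.argsort(consensus)[::-1][:10]      # highest predicted activity first
print("top-ranked library compounds:", top10)
```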

  14. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.

  15. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-01-01

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
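    A hedged sketch of a two-sample diagonal Hotelling-type statistic with variance shrinkage, illustrating the general idea of replacing the full covariance matrix by shrunken per-feature variances; the shrinkage target, the chi-square reference distribution, and the data are simplifying assumptions rather than the approximate null distributions derived in the paper.

```python
import numpy as np
from scipy import stats

def shrunken_diagonal_hotelling(X, Y, lam=0.5):
    """Two-sample diagonal Hotelling-type test for 'large p, small n' data.
    Pooled per-feature variances are shrunk toward their median (a simple
    illustrative target); the statistic sums standardised squared mean
    differences and is referred to a chi-square with p degrees of freedom,
    which is only a rough reference for this sketch."""
    n1, p = X.shape
    n2, _ = Y.shape
    diff = X.mean(axis=0) - Y.mean(axis=0)
    pooled_var = ((n1 - 1) * X.var(axis=0, ddof=1) +
                  (n2 - 1) * Y.var(axis=0, ddof=1)) / (n1 + n2 - 2)
    shrunk_var = (1 - lam) * pooled_var + lam * np.median(pooled_var)
    T = np.sum(diff**2 / (shrunk_var * (1.0 / n1 + 1.0 / n2)))
    return T, stats.chi2.sf(T, df=p)

# Illustrative data: 200 features, 8 + 8 samples, first 10 features shifted
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 200))
Y = rng.normal(size=(8, 200)); Y[:, :10] += 1.0
print(shrunken_diagonal_hotelling(X, Y))
```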

  16. Evaluation applications of instrument calibration research findings in psychology for very small samples

    Science.gov (United States)

    Fisher, W. P., Jr.; Petry, P.

    2016-11-01

    Many published research studies document item calibration invariance across samples using Rasch's probabilistic models for measurement. A new approach to outcomes evaluation for very small samples was employed for two workshop series focused on stress reduction and joyful living, conducted for health system employees and caregivers since 2012. Rasch-calibrated self-report instruments measuring depression, anxiety and stress, and the joyful living effects of mindfulness behaviors were identified in peer-reviewed journal articles. Items from one instrument were modified for use with a US population, other items were simplified, and some new items were written. Participants provided ratings of their depression, anxiety and stress, and the effects of their mindfulness behaviors before and after each workshop series. The numbers of participants providing both pre- and post-workshop data were low (16 and 14). Analysis of these small data sets produces results showing that, with some exceptions, the item hierarchies defining the constructs retained the same invariant profiles they had exhibited in the published research (correlations, not disattenuated, ranging from 0.85 to 0.96). In addition, comparisons of the pre- and post-workshop measures for the three constructs showed substantively and statistically significant changes. Implications for program evaluation comparisons, quality improvement efforts, and the organization of communications concerning outcomes in clinical fields are explored.
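
    The invariance claim above reduces to comparing the locally estimated item calibrations with the published ones. A minimal Python check is shown below; the numbers are invented stand-ins for the actual logit calibrations.

        import numpy as np

        published = np.array([-1.2, -0.6, -0.1, 0.3, 0.9, 1.5])   # published item difficulties (logits), hypothetical
        local = np.array([-1.0, -0.7, 0.0, 0.4, 0.8, 1.6])        # difficulties re-estimated from the small sample, hypothetical
        r = np.corrcoef(published, local)[0, 1]
        print(f"calibration invariance (Pearson r): {r:.2f}")     # the study reported correlations of 0.85 to 0.96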

  17. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    Science.gov (United States)

    Hong, Xinguo; Hao, Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.

  18. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    International Nuclear Information System (INIS)

    Hong Xinguo; Hao Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.

  19. The use of secondary ion mass spectrometry in forensic analyses of ultra-small samples

    Science.gov (United States)

    Cliff, John

    2010-05-01

    It is becoming increasingly important in forensic science to perform chemical and isotopic analyses on very small sample sizes. Moreover, in some instances the signature of interest may be incorporated in a vast background, making analyses impossible by bulk methods. Recent advances in instrumentation make secondary ion mass spectrometry (SIMS) a powerful tool to apply to these problems. As an introduction, we present three types of forensic analyses in which SIMS may be useful. The causal organism of anthrax (Bacillus anthracis) chelates Ca and other metals during spore formation. Thus, the spores contain a trace element signature related to the growth medium that produced the organisms. Although other techniques have been shown to be useful in analyzing these signatures, the sample size requirements are generally relatively large. We have shown that time-of-flight SIMS (TOF-SIMS), combined with multivariate analysis, can clearly separate Bacillus sp. cultures prepared in different growth media using analytical spot sizes containing approximately one nanogram of spores. An important emerging field in forensic analysis is that of provenance of fecal pollution. The strategy of choice for these analyses, developing host-specific nucleic acid probes, has met with considerable difficulty due to lack of specificity of the probes. One potentially fruitful strategy is to combine in situ nucleic acid probing with high-precision isotopic analyses. Bulk analyses of human and bovine fecal bacteria, for example, indicate a relative difference in δ13C content of about 4 per mil. We have shown that sample sizes of several nanograms can be analyzed with the IMS 1280 with precisions capable of separating two per mil differences in δ13C. The NanoSIMS 50 is capable of much better spatial resolution than the IMS 1280, albeit at a cost of analytical precision. Nevertheless we have documented precision capable of separating five per mil differences in δ13C using analytical spots containing

  20. Vertical Sampling Scales for Atmospheric Boundary Layer Measurements from Small Unmanned Aircraft Systems (sUAS)

    Directory of Open Access Journals (Sweden)

    Benjamin L. Hemingway

    2017-09-01

    Full Text Available The lowest portion of the Earth’s atmosphere, known as the atmospheric boundary layer (ABL), plays an important role in the formation of weather events. Simple meteorological measurements collected from within the ABL, such as temperature, pressure, humidity, and wind velocity, are key to understanding the exchange of energy within this region, but conventional surveillance techniques such as towers, radar, weather balloons, and satellites do not provide adequate spatial and/or temporal coverage for monitoring weather events. Small unmanned aircraft, or aerial, systems (sUAS) provide a versatile, dynamic platform for atmospheric sensing that can provide higher spatio-temporal sampling frequencies than are available through most satellite sensing methods. They are also able to sense portions of the atmosphere that cannot be measured from ground-based radar, weather stations, or weather balloons and have the potential to fill gaps in atmospheric sampling. However, research on the vertical sampling scales for collecting atmospheric measurements from sUAS and the variabilities of these scales across atmospheric phenomena (e.g., temperature and humidity) is needed. The objective of this study is to use variogram analysis, a common geostatistical technique, to determine optimal spatial sampling scales for two atmospheric variables (temperature and relative humidity) captured from sUAS. Results show that vertical sampling scales of approximately 3 m for temperature and 1.5–2 m for relative humidity were sufficient to capture the spatial structure of these phenomena under the conditions tested. Future work is needed to model these scales across the entire ABL as well as under variable conditions.
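
    The variogram analysis mentioned above amounts to computing half the mean squared difference between observations as a function of their vertical separation. The Python sketch below builds an empirical semivariogram for a synthetic temperature profile; the altitudes, measurements and lag bins are placeholders, not the study's flight data.

        import numpy as np

        def empirical_variogram(z, values, lag_edges):
            """Return gamma(h) for each lag bin; z = altitudes (m), values = measurements."""
            dz = np.abs(z[:, None] - z[None, :])
            dv2 = (values[:, None] - values[None, :]) ** 2
            gamma = []
            for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
                mask = (dz > lo) & (dz <= hi)                 # pairs whose separation falls in this bin
                gamma.append(0.5 * dv2[mask].mean() if mask.any() else np.nan)
            return np.array(gamma)

        z = np.linspace(0, 120, 121)                          # 1 m spaced altitudes
        temp = 20 - 0.0098 * z + np.random.default_rng(2).normal(0, 0.1, z.size)
        lag_edges = np.arange(0, 31, 1.0)                     # 1 m lag bins up to 30 m
        print(np.round(empirical_variogram(z, temp, lag_edges), 4))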

  1. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    Environmental scientific research requires a detector that has sensitivity low enough to reveal the presence of any contaminant in the sample at a reasonable counting time. Canberra developed the germanium detector geometry called Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low capacitance germanium well detector manufactured using small anode technology capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well, and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over the existing coaxial and Well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0-2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance the detection sensitivity resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra Mathematical efficiency calibration method (In situ Object Calibration Software or ISOCS, and Laboratory Source-less Calibration Software or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable

  2. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    Environmental scientific research requires a detector that has sensitivity low enough to reveal the presence of any contaminant in the sample at a reasonable counting time. Canberra developed the germanium detector geometry called Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low capacitance germanium well detector manufactured using small anode technology capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well, and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over the existing coaxial and Well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0-2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance the detection sensitivity resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra Mathematical efficiency calibration method (In situ Object Calibration Software or ISOCS, and Laboratory Source-less Calibration Software or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable

  3. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    Science.gov (United States)

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples, while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load, while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, herewith decreasing the measurement time, and, thus, improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art with sufficient accuracy according to the most recent ISO standards and reduce measurement time significantly compared to state-of-the-art methods.
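
    Step 1 above relies on iterative mode seeking over pixel intensities. The Python snippet below is a bare-bones one-dimensional mean-shift mode seeker on simulated grayscale values; it only illustrates the general idea and is not the paper's specific variants, sparse approximation, or convergence analysis.

        import numpy as np

        def mean_shift_modes(intensities, bandwidth=10.0, tol=1e-3, max_iter=100):
            """Shift every intensity toward the Gaussian-weighted mean of its neighbours until convergence."""
            points = intensities.astype(float).copy()
            for _ in range(max_iter):
                w = np.exp(-0.5 * ((points[:, None] - intensities[None, :]) / bandwidth) ** 2)
                shifted = (w * intensities[None, :]).sum(axis=1) / w.sum(axis=1)
                converged = np.max(np.abs(shifted - points)) < tol
                points = shifted
                if converged:
                    break
            return np.unique(np.round(points))          # approximate modes (may contain near-duplicates)

        rng = np.random.default_rng(3)
        pixels = np.concatenate([rng.normal(60, 3, 500),    # background intensities (synthetic)
                                 rng.normal(140, 5, 300)])  # reacted region of interest (synthetic)
        print(mean_shift_modes(pixels))                     # roughly two modes, near 60 and 140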

  4. Perspectives of an acoustic–electrostatic/electrodynamic hybrid levitator for small fluid and solid samples

    International Nuclear Information System (INIS)

    Lierke, E G; Holitzner, L

    2008-01-01

    The feasibility of an acoustic–electrostatic hybrid levitator for small fluid and solid samples is evaluated. A proposed design and its theoretical assessment are based on the optional implementation of simple hardware components (ring electrodes) and standard laboratory equipment into typical commercial ultrasonic standing wave levitators. These levitators allow precise electrical charging of drops during syringe- or ink-jet-type deployment. The homogeneous electric 'Millikan field' between the grounded ultrasonic transducer and the electrically charged reflector provides an axial compensation of the sample weight in an indifferent equilibrium, which can be balanced by using commercial optical position sensors in combination with standard electronic PID position control. Radial electrostatic repulsion forces between the charged sample and concentric ring electrodes of the same polarity provide stable positioning at the centre of the levitator. The levitator can be used in a pure acoustic or electrostatic mode or in a hybrid combination of both subsystems. Analytical evaluations of the radial–axial force profiles are verified with detailed numerical finite element calculations under consideration of alternative boundary conditions. The simple hardware modification with implemented double-ring electrodes in ac/dc operation is also feasible for an electrodynamic/acoustic hybrid levitator.

  5. Sensitive power compensated scanning calorimeter for analysis of phase transformations in small samples

    International Nuclear Information System (INIS)

    Lopeandia, A.F.; Cerdo, Ll.; Clavaguera-Mora, M.T.; Arana, Leonel R.; Jensen, K.F.; Munoz, F.J.; Rodriguez-Viejo, J.

    2005-01-01

    We have designed and developed a sensitive scanning calorimeter for use with microgram or submicrogram, thin film, or powder samples. Semiconductor processing techniques are used to fabricate membrane based microreactors with a small heat capacity of the addenda, 120 nJ/K at room temperature. At heating rates below 10 K/s the heat released or absorbed by the sample during a given transformation is compensated through a resistive Pt heater by a digital controller so that the calorimeter works as a power compensated device. Its use and dynamic sensitivity is demonstrated by analyzing the melting behavior of thin films of indium and high density polyethylene. Melting enthalpies in the range of 40-250 μJ for sample masses on the order of 1.5 μg have been measured with accuracy better than 5% at heating rates ∼0.2 K/s. The signal-to-noise ratio, limited by the electronic setup, is 200 nW

  6. Automated microfluidic sample-preparation platform for high-throughput structural investigation of proteins by small-angle X-ray scattering

    DEFF Research Database (Denmark)

    Lafleur, Josiane P.; Snakenborg, Detlef; Nielsen, Søren Skou

    2011-01-01

    A new microfluidic sample-preparation system is presented for the structural investigation of proteins using small-angle X-ray scattering (SAXS) at synchrotrons. The system includes hardware and software features for precise fluidic control, sample mixing by diffusion, automated X-ray exposure control, UV absorbance measurements and automated data analysis. As little as 15 μl of sample is required to perform a complete analysis cycle, including sample mixing, SAXS measurement, continuous UV absorbance measurements, and cleaning of the channels and X-ray cell with buffer. The complete analysis...

  7. Comprehensive processing of high-throughput small RNA sequencing data including quality checking, normalization, and differential expression analysis using the UEA sRNA Workbench.

    Science.gov (United States)

    Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent

    2017-06-01

    Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on an H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. © 2017 Beckers et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
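
    As a toy illustration of the normalization and differential-expression steps such a pipeline automates, the Python snippet below converts raw sRNA counts to reads per million and computes log2 fold changes. It is not the UEA sRNA Workbench implementation, which adds quality checks, several normalization options and proper differential-expression statistics; the counts are invented.

        import numpy as np

        counts = np.array([[150, 170],     # rows = sRNAs, columns = samples (toy numbers)
                           [ 20,  90],
                           [500, 480]], dtype=float)
        library_sizes = counts.sum(axis=0)
        rpm = counts / library_sizes * 1e6                # reads-per-million normalization
        offset = 1.0                                      # avoids taking the log of zero
        log2fc = np.log2((rpm[:, 1] + offset) / (rpm[:, 0] + offset))
        print(np.round(log2fc, 2))                        # candidate differentially expressed sRNAs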

  8. The small sample uncertainty aspect in relation to bullwhip effect measurement

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2009-01-01

    The bullwhip effect as a concept has been known for almost half a century, starting with the Forrester effect. The bullwhip effect is observed in many supply chains, and it is generally accepted as a potential malice. Despite this fact, the bullwhip effect still seems to be first and foremost a conceptual phenomenon. This paper intends primarily to investigate why this might be so and thereby examine the various aspects, possibilities and obstacles that must be taken into account when considering the potential practical use and measurement of the bullwhip effect in order to actually get the supply chain under control. The paper puts special emphasis on the unavoidable small-sample uncertainty aspects relating to the measurement or estimation of the bullwhip effect.
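
    A common operational measure of the bullwhip effect is the ratio of the order variance to the demand variance. The Python sketch below computes that ratio for a short synthetic series and attaches a bootstrap interval to show how wide the small-sample uncertainty can be; the series and the resampling scheme are illustrative assumptions, not the paper's analysis.

        import numpy as np

        rng = np.random.default_rng(5)
        demand = rng.normal(100, 10, 24)            # two years of monthly demand (synthetic)
        orders = demand + rng.normal(0, 15, 24)     # amplified ordering decisions (synthetic)

        def bullwhip(d, o):
            return np.var(o, ddof=1) / np.var(d, ddof=1)

        boot = []
        for _ in range(2000):
            idx = rng.integers(0, demand.size, demand.size)   # resample months with replacement
            boot.append(bullwhip(demand[idx], orders[idx]))
        print(bullwhip(demand, orders), np.percentile(boot, [2.5, 97.5]))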

  9. Determination of 35S-aminoacyl-transfer ribonucleic acid specific radioactivity in small tissue samples

    International Nuclear Information System (INIS)

    Samarel, A.M.; Ogunro, E.A.; Ferguson, A.G.; Lesch, M.

    1981-01-01

    Rate determination of protein synthesis utilizing tracer amino acid incorporation requires accurate assessment of the specific radioactivity of the labeled precursor aminoacyl-tRNA pool. Previously published methods presumably useful for the measurement of any aminoacyl-tRNA were unsuccessful when applied to [35S]methionine, due to the unique chemical properties of this amino acid. Herein we describe modifications of these methods necessary for the measurement of 35S-aminoacyl-tRNA specific radioactivity from small tissue samples incubated in the presence of [35S]methionine. The use of [35S]methionine of high specific radioactivity enables analysis of the methionyl-tRNA from less than 100 mg of tissue. Conditions for optimal recovery of 35S-labeled dansyl-amino acid derivatives are presented and possible applications of this method are discussed

  10. Determination of 35S-aminoacyl-transfer ribonucleic acid specific radioactivity in small tissue samples

    Energy Technology Data Exchange (ETDEWEB)

    Samarel, A.M.; Ogunro, E.A.; Ferguson, A.G.; Lesch, M.

    1981-11-15

    Rate determination of protein synthesis utilizing tracer amino acid incorporation requires accurate assessment of the specific radioactivity of the labeled precursor aminoacyl-tRNA pool. Previously published methods presumably useful for the measurement of any aminoacyl-tRNA were unsuccessful when applied to [35S]methionine, due to the unique chemical properties of this amino acid. Herein we describe modifications of these methods necessary for the measurement of 35S-aminoacyl-tRNA specific radioactivity from small tissue samples incubated in the presence of [35S]methionine. The use of [35S]methionine of high specific radioactivity enables analysis of the methionyl-tRNA from less than 100 mg of tissue. Conditions for optimal recovery of 35S-labeled dansyl-amino acid derivatives are presented and possible applications of this method are discussed.

  11. Basic distribution free identification tests for small size samples of environmental data

    Energy Technology Data Exchange (ETDEWEB)

    Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and often the assumption of normal distributions is not realistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on a massive use of CPU resources. The paper reviews the problem and introduces two feasible non-parametric approaches based on the intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
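
    The two approaches described above can be illustrated in a few lines of Python: a Monte Carlo permutation test on the difference in means (an exhaustive version would enumerate every relabelling) and a simple bootstrap analogue. The data are invented and this is not the program referred to in the report.

        import numpy as np

        rng = np.random.default_rng(6)
        a = np.array([5.1, 4.8, 5.6, 5.0, 4.9])     # first small sample (invented)
        b = np.array([5.9, 6.1, 5.7, 6.3])          # second small sample (invented)
        obs = a.mean() - b.mean()
        pooled = np.concatenate([a, b])

        perm_stats, boot_stats = [], []
        for _ in range(10000):
            perm = rng.permutation(pooled)                     # relabel under the null hypothesis
            perm_stats.append(perm[:a.size].mean() - perm[a.size:].mean())
            boot_stats.append(rng.choice(pooled, a.size).mean()
                              - rng.choice(pooled, b.size).mean())
        p_perm = np.mean(np.abs(perm_stats) >= abs(obs))
        p_boot = np.mean(np.abs(boot_stats) >= abs(obs))
        print(p_perm, p_boot)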

  12. Small population size of Pribilof Rock Sandpipers confirmed through distance-sampling surveys in Alaska

    Science.gov (United States)

    Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.

    2012-01-01

    The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.

  13. Acceleration of small, light projectiles (including hydrogen isotopes) to high speeds using a two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Gouge, M.J.; Milora, S.L.

    1989-01-01

    Small, light projectiles have been accelerated to high speeds using a two-stage light gas gun at Oak Ridge National Laboratory. With 35-mg plastic projectiles (4 mm in diameter), speeds of up to 4.5 km/s have been recorded. The ''pipe gun'' technique for freezing hydrogen isotopes in situ in the gun barrel has been used to accelerate deuterium pellets (nominal diameter of 4 mm) to velocities of up to 2.85 km/s. The primary application of this technology is for plasma fueling of fusion devices via pellet injection of hydrogen isotopes. Conventional pellet injectors are limited to pellet speeds in the range 1-2 km/s. Higher velocities are desirable for plasma fueling applications, and the two-stage pneumatic technique offers performance in a higher velocity regime. However, experimental results indicate that the use of sabots to encase the cryogenic pellets and protect them for the high peak pressures will be required to reliably attain intact pellets at speeds of ∼3 km/s or greater. In some limited tests, lithium hydride pellets were accelerated to speeds of up to 4.2 km/s. Also, repetitive operation of the two-stage gun (four plastic pellets fired at ∼0.5 Hz) was demonstrated for the first time in preliminary tests. The equipment and operation are described, and experimental results and some comparisons with a theoretical model are presented. 17 refs., 6 figs., 2 tabs

  14. Simplifying sample pretreatment: application of dried blood spot (DBS) method to blood samples, including postmortem, for UHPLC-MS/MS analysis of drugs of abuse.

    Science.gov (United States)

    Odoardi, Sara; Anzillotti, Luca; Strano-Rossi, Sabina

    2014-10-01

    The complexity of biological matrices, such as blood, requires the development of suitably selective and reliable sample pretreatment procedures prior to their instrumental analysis. A method has been developed for the analysis of drugs of abuse and their metabolites from different chemical classes (opiates, methadone, fentanyl and analogues, cocaine, amphetamines and amphetamine-like substances, ketamine, LSD) in human blood using dried blood spots (DBS) and subsequent UHPLC-MS/MS analysis. DBS extraction required only 100 μL of sample, to which the internal standards were added; three droplets (30 μL each) of this solution were then spotted on the card, left to dry for 1 h, punched and extracted with methanol with 0.1% of formic acid. The supernatant was evaporated and the residue was then reconstituted in 100 μL of water with 0.1% of formic acid and injected into the UHPLC-MS/MS system. The method was validated considering the following parameters: LOD and LOQ, linearity, precision, accuracy, matrix effect and dilution integrity. LODs were 0.05-1 ng/mL and LOQs were 0.2-2 ng/mL. The method showed satisfactory linearity for all substances, with determination coefficients always higher than 0.99. Intra- and inter-day precision, accuracy, matrix effect and dilution integrity were acceptable for all the studied substances. The addition of internal standards before DBS extraction and the deposition of a fixed volume of blood on the filter cards ensured accurate quantification of the analytes. The validated method was then applied to authentic postmortem blood samples. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Accumulation of small heat shock proteins, including mitochondrial HSP22, induced by oxidative stress and adaptive response in tomato cells

    International Nuclear Information System (INIS)

    Banzet, N.; Richaud, C.; Deveaux, Y.; Kazmaier, M.; Gagnon, J.; Triantaphylides, C.

    1998-01-01

    Changes in gene expression induced by application of H2O2, O2·−-generating agents (methyl viologen, digitonin) and gamma irradiation to tomato suspension cultures were investigated and compared to the well-described heat shock response. Two-dimensional gel protein mapping analyses gave the first indication that at least small heat shock proteins (smHSP) accumulated in response to application of H2O2 and gamma irradiation, but not to O2·−-generating agents. While some proteins seemed to be induced specifically by each treatment, only part of the heat shock response was observed. On the basis of Northern hybridization experiments performed with four heterologous cDNAs, corresponding to classes I-IV of pea smHSP, it could be concluded that significant amounts of class I and II smHSP mRNA are induced by H2O2 and by irradiation. Taken together, these results demonstrate that in plants some HSP genes are inducible by oxidative stresses, as in micro-organisms and other eukaryotic cells. HSP22, the main stress protein that accumulates following H2O2 action or gamma irradiation, was also purified. Sequence homology of amino-terminal and internal sequences, and immunoreactivity with a Chenopodium rubrum mitochondrial smHSP antibody, indicated that the protein belongs to the recently discovered class of plant mitochondrial smHSP. Heat shock or a mild H2O2 pretreatment was also shown to lead to plant cell protection against oxidative injury. Therefore, the synthesis of these stress proteins can be considered as an adaptive mechanism in which mitochondrial protection could be essential.

  16. Weighted piecewise LDA for solving the small sample size problem in face verification.

    Science.gov (United States)

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
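
    The "small sample size" problem described above arises because the within-class scatter matrix becomes singular when training samples are scarce relative to the feature dimension. The Python snippet below shows the standard workaround of shrinkage-regularized LDA on simulated matching-score vectors; it is generic LDA, not the weighted piecewise variant proposed in the paper, and the data are invented.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(7)
        genuine = rng.normal(0.2, 0.1, size=(15, 40))     # few genuine-pair matching-score vectors (synthetic)
        impostor = rng.normal(0.6, 0.1, size=(15, 40))    # few impostor-pair matching-score vectors (synthetic)
        X = np.vstack([genuine, impostor])
        y = np.array([0] * 15 + [1] * 15)

        # The 'lsqr' solver with automatic shrinkage keeps the within-class covariance
        # usable even though the sample count is small relative to the dimensionality.
        lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
        print(lda.score(X, y))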

  17. A new CF-IRMS system for quantifying stable isotopes of carbon monoxide from ice cores and small air samples

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2010-10-01

    Full Text Available We present a new analysis technique for stable isotope ratios (δ13C and δ18O) of atmospheric carbon monoxide (CO) from ice core samples. The technique is an online cryogenic vacuum extraction followed by continuous-flow isotope ratio mass spectrometry (CF-IRMS); it can also be used with small air samples. The CO extraction system includes two multi-loop cryogenic cleanup traps, a chemical oxidant for oxidation to CO2, a cryogenic collection trap, a cryofocusing unit, gas chromatography purification, and subsequent injection into a Finnigan Delta Plus IRMS. Analytical precision of 0.2‰ (±1σ) for δ13C and 0.6‰ (±1σ) for δ18O can be obtained for 100 mL (STP) air samples with CO mixing ratios ranging from 60 ppbv to 140 ppbv (~268–625 pmol CO). Six South Pole ice core samples from depths ranging from 133 m to 177 m were processed for CO isotope analysis after wet extraction. To our knowledge, this is the first measurement of stable isotopes of CO in ice core air.
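
    Values such as δ13C are delta-notation ratios relative to a reference standard. The short Python function below performs that conversion; the VPDB 13C/12C ratio used is the commonly quoted approximate value, and the sample ratio is invented for illustration.

        R_VPDB = 0.0111802                      # approximate 13C/12C ratio of the VPDB standard

        def delta13C(r_sample, r_standard=R_VPDB):
            """Return delta 13C in per mil for a measured 13C/12C ratio."""
            return (r_sample / r_standard - 1.0) * 1000.0

        print(round(delta13C(0.0108795), 1))    # an isotopically depleted sample, about -27 per mil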

  18. Biomechanical analysis of a salt-modified polyvinyl alcohol hydrogel for knee meniscus applications, including comparison with human donor samples.

    Science.gov (United States)

    Hayes, Jennifer C; Curley, Colin; Tierney, Paul; Kennedy, James E

    2016-03-01

    The primary objective of this research was the biomechanical analysis of a salt-modified polyvinyl alcohol hydrogel, in order to assess its potential for use as an artificial meniscal implant. Aqueous polyvinyl alcohol (PVA) was treated with a sodium sulphate (Na2SO4) solution to precipitate out the polyvinyl alcohol resulting in a pliable hydrogel. The freeze-thaw process, a strictly physical method of crosslinking, was employed to crosslink the hydrogel. Development of a meniscal shaped mould and sample housing unit allowed the production of meniscal shaped hydrogels for direct comparison to human meniscal tissue. Results obtained show that compressive responses were slightly higher in PVA/Na2SO4 menisci, displaying maximum compressive loads of 2472 N, 2482 N and 2476 N for samples having undergone 1, 3 and 5 freeze-thaw cycles, respectively. When compared to the human meniscal tissue tested under the same conditions, an average maximum load of 2467.5 N was observed. This suggests that the PVA/Na2SO4 menisci are mechanically comparable to the human meniscus. Biocompatibility analysis of PVA/Na2SO4 hydrogels revealed no acute cytotoxicity. The work described herein has innovative potential in load bearing applications, specifically as an alternative to meniscectomy as replacement of critically damaged meniscal tissue in the knee joint where repair is not viable. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Preparing and measuring ultra-small radiocarbon samples with the ARTEMIS AMS facility in Saclay, France

    Energy Technology Data Exchange (ETDEWEB)

    Delque-Kolic, E., E-mail: emmanuelle.delque-kolic@cea.fr [LMC14, CEA Saclay, Batiment 450 Porte 4E, 91191 Gif sur Yvette (France); Comby-Zerbino, C.; Ferkane, S.; Moreau, C.; Dumoulin, J.P.; Caffy, I.; Souprayen, C.; Quiles, A.; Bavay, D.; Hain, S.; Setti, V. [LMC14, CEA Saclay, Batiment 450 Porte 4E, 91191 Gif sur Yvette (France)

    2013-01-15

    The ARTEMIS facility in Saclay France measures, on average, 4500 samples a year for French organizations working in an array of fields, including environmental sciences, archeology and hydrology. In response to an increasing demand for the isolation of specific soil compounds and organic water fractions, we were motivated to evaluate our ability to reduce microgram samples using our standard graphitization lines and to measure the graphite thus obtained with our 3MV NEC Pelletron AMS. Our reduction facility consists of two fully automated graphitization lines. Each line has 12 reduction reactors with a reduction volume of 18 ml for the first line and 12 ml for the second. Under routine conditions, we determined that we could reduce the samples down to 10 μg of carbon, even if the graphitization yield is consequently affected by the lower sample mass. Our results when testing different Fe/C ratios suggest that an amount of 1.5 mg of Fe powder was ideal (instead of lower amounts of catalyst) to prevent the sample from deteriorating too quickly under the Cs+ beam, and to facilitate pressing procedures. Several sets of microsamples produced from HOxI standard, international references and backgrounds were measured. When measuring 14C-free wood charcoal and HOxI samples we determined that our modern and dead blanks, due to the various preparation steps, were of 1.1 ± 0.8 and 0.2 ± 0.1 μg, respectively. The results presented here were obtained for IAEA-C1, 14C-free wood, IAEA-C6, IAEA-C2 and FIRI C.

  20. Preparing and measuring ultra-small radiocarbon samples with the ARTEMIS AMS facility in Saclay, France

    International Nuclear Information System (INIS)

    Delqué-Količ, E.; Comby-Zerbino, C.; Ferkane, S.; Moreau, C.; Dumoulin, J.P.; Caffy, I.; Souprayen, C.; Quilès, A.; Bavay, D.; Hain, S.; Setti, V.

    2013-01-01

    The ARTEMIS facility in Saclay France measures, on average, 4500 samples a year for French organizations working in an array of fields, including environmental sciences, archeology and hydrology. In response to an increasing demand for the isolation of specific soil compounds and organic water fractions, we were motivated to evaluate our ability to reduce microgram samples using our standard graphitization lines and to measure the graphite thus obtained with our 3MV NEC Pelletron AMS. Our reduction facility consists of two fully automated graphitization lines. Each line has 12 reduction reactors with a reduction volume of 18 ml for the first line and 12 ml for the second. Under routine conditions, we determined that we could reduce the samples down to 10 μg of carbon, even if the graphitization yield is consequently affected by the lower sample mass. Our results when testing different Fe/C ratios suggest that an amount of 1.5 mg of Fe powder was ideal (instead of lower amounts of catalyst) to prevent the sample from deteriorating too quickly under the Cs+ beam, and to facilitate pressing procedures. Several sets of microsamples produced from HOxI standard, international references and backgrounds were measured. When measuring 14C-free wood charcoal and HOxI samples we determined that our modern and dead blanks, due to the various preparation steps, were of 1.1 ± 0.8 and 0.2 ± 0.1 μg, respectively. The results presented here were obtained for IAEA-C1, 14C-free wood, IAEA-C6, IAEA-C2 and FIRI C.

  1. Identification of multiple mRNA and DNA sequences from small tissue samples isolated by laser-assisted microdissection.

    Science.gov (United States)

    Bernsen, M R; Dijkman, H B; de Vries, E; Figdor, C G; Ruiter, D J; Adema, G J; van Muijen, G N

    1998-10-01

    Molecular analysis of small tissue samples has become increasingly important in biomedical studies. Using a laser dissection microscope and modified nucleic acid isolation protocols, we demonstrate that multiple mRNA as well as DNA sequences can be identified from a single-cell sample. In addition, we show that the specificity of procurement of tissue samples is not compromised by smear contamination resulting from scraping of the microtome knife during sectioning of lesions. The procedures described herein thus allow for efficient RT-PCR or PCR analysis of multiple nucleic acid sequences from small tissue samples obtained by laser-assisted microdissection.

  2. Including Online-Recruited Seeds: A Respondent-Driven Sample of Men Who Have Sex With Men.

    Science.gov (United States)

    Lachowsky, Nathan John; Lal, Allan; Forrest, Jamie I; Card, Kiffer George; Cui, Zishan; Sereda, Paul; Rich, Ashleigh; Raymond, Henry Fisher; Roth, Eric A; Moore, David M; Hogg, Robert S

    2016-03-15

    Technology has changed the way men who have sex with men (MSM) seek sex and socialize, which may impact the implementation of respondent-driven sampling (RDS) among this population. Initial participants (also known as seeds) are a critical consideration in RDS because they begin the recruitment chains. However, little information is available on how online-recruited seeds may affect RDS implementation. The objectives of this study were to compare (1) online-recruited versus offline-recruited seeds and (2) subsequent recruitment chains of online-recruited versus offline-recruited seeds. Between 2012 and 2014, we recruited MSM using RDS in Vancouver, Canada. RDS weights were used with logistic regression to address each objective. A total of 119 seeds were used, 85 of whom were online-recruited seeds, to recruit an additional 600 MSM. Compared with offline-recruited seeds, online-recruited seeds were less likely to be HIV-positive (OR 0.34, 95% CI 0.13-0.88), to have attended a gay community group (AOR 0.33, 95% CI 0.12-0.90), and to feel gay community involvement was "very important" (AOR 0.16, 95% CI 0.03-0.93). Online-recruited seeds were more likely to ask a sexual partner's HIV status always versus online (AOR 4.29, 95% CI 1.53-12.05). Further, compared with recruitment chains started by offline-recruited seeds, recruits from chains started by online-recruited seeds (283/600, 47.2%) were less likely to be HIV-positive (AOR 0.25, 95% CI 0.16-0.40), to report "versatile" versus "bottom" sexual position preference (AOR 0.56, 95% CI 0.35-0.88), and to be in a relationship lasting >1 year (AOR 1.65, 95% CI 1.06-2.56). Recruits of online seeds were more likely to be out as gay for longer (eg, 11-21 vs 1-4 years, AOR 2.22, 95% CI 1.27-3.88) and have fewer Facebook friends (eg, 201-500 vs >500, AOR 1.69, 95% CI 1.02-2.80). Online-recruited seeds were more prevalent, recruited fewer participants, but were different from those recruited offline. This may therefore
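
    The comparisons above come down to weighted logistic regressions of recruit characteristics on seed type. The Python sketch below fits one such model with simulated data and placeholder weights; real analyses derive the weights from an RDS estimator (e.g., RDS-II), and the outcome, covariate and effect size here are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(8)
        n = 600
        online_chain = rng.integers(0, 2, n)                 # 1 = recruited in a chain started by an online seed
        hiv_positive = rng.binomial(1, np.where(online_chain == 1, 0.10, 0.25))
        rds_weights = rng.uniform(0.5, 2.0, n)               # placeholder weights standing in for RDS weights

        # A large C effectively switches off regularization, so the coefficient is a plain weighted estimate.
        clf = LogisticRegression(C=1e6).fit(online_chain.reshape(-1, 1),
                                            hiv_positive, sample_weight=rds_weights)
        print(np.exp(clf.coef_[0, 0]))                       # weighted odds ratio for online-seed chains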

  3. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

    Science.gov (United States)

    2010-07-01

    Appendix A to Subpart F of 40 CFR Part 90 (Control of Emissions from Nonroad Spark-Ignition Engines at or Below 19 Kilowatts): Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines.

  4. Evaluating the biological potential in samples returned from planetary satellites and small solar system bodies: framework for decision making

    National Research Council Canada - National Science Library

    National Research Council Staff; Space Studies Board; Division on Engineering and Physical Sciences; National Research Council; National Academy of Sciences

    Evaluating the Biological Potential in Samples Returned from Planetary Satellites and Small Solar System Bodies: Framework for Decision Making. Task Group on Sample Return from Small Solar System Bodies, Space Studies Board, Commission on Physical Sciences, Mathematics, and Applications, National Research Council. National Academy Press, Washington, D.C., 1998.

  5. Radioisotopic method for the measurement of lipolysis in small samples of human adipose tissue

    International Nuclear Information System (INIS)

    Leibel, R.L.; Hirsch, J.; Berry, E.M.; Gruen, R.K.

    1984-01-01

    To facilitate the study of adrenoreceptor response in small needle biopsy samples of human subcutaneous adipose tissue, we developed a dual radioisotopic technique for measuring lipolysis rate. Aliquots (20-75 mg) of adipose tissue fragments were incubated in a buffered albumin medium containing [3H]palmitate and [14C]glucose, each of high specific activity. In neutral glycerides synthesized in this system, [14C]glucose is incorporated exclusively into the glyceride-glycerol moiety and 3H appears solely in the esterified fatty acid. Alpha-2 and beta-1 adrenoreceptor activation of tissue incubated in this system does not alter rates of 14C-labeled glyceride accumulation, but does produce a respective increase or decrease in the specific activity of fatty acids esterified into newly synthesized glycerides. This alteration in esterified fatty acid specific activity is reflected in the 14C:3H ratio in newly synthesized triglycerides extracted from the incubated adipose tissue. There is a high correlation (r = 0.90) between the 14C:3H ratio in triglycerides and the rate of lipolysis as reflected in glycerol release into the incubation medium. The degree of adrenoreceptor activation by various concentrations of lipolytic and anti-lipolytic substances can be assessed by comparing this ratio in stimulated tissue to that characterizing unstimulated tissue or the incubation medium. This technique permits the study of very small, unweighed tissue biopsy fragments, the only limitation on sensitivity being the specific activity of the medium glucose and palmitate. It is, therefore, useful for serial examinations of adipose tissue adrenoreceptor dose-response characteristics under a variety of clinical circumstances

  6. An Inset CT Specimen for Evaluating Fracture in Small Samples of Material

    Science.gov (United States)

    Yahyazadehfar, M.; Nazari, A.; Kruzic, J.J.; Quinn, G.D.; Arola, D.

    2013-01-01

    In evaluations on the fracture behavior of hard tissues and many biomaterials, the volume of material available to study is not always sufficient to apply a standard method of practice. In the present study an inset Compact Tension (inset CT) specimen is described, which uses a small cube of material (approximately 2×2×2 mm³) that is molded within a secondary material to form the compact tension geometry. A generalized equation describing the Mode I stress intensity was developed for the specimen using the solutions from a finite element model that was defined over permissible crack lengths, variations in specimen geometry, and a range in elastic properties of the inset and mold materials. A validation of the generalized equation was performed using estimates for the fracture toughness of a commercial dental composite via the “inset CT” specimen and the standard geometry defined by ASTM E399. Results showed that the average fracture toughness obtained from the new specimen (1.23 ± 0.02 MPa·m^0.5) was within 2% of that from the standard. Applications of the inset CT specimen are presented for experimental evaluations on the crack growth resistance of dental enamel and root dentin, including their fracture resistance curves. Potential errors in adopting this specimen are then discussed, including the effects of debonding between the inset and molding material on the estimated stress intensity distribution. Results of the investigation show that the inset CT specimen offers a viable approach for studying the fracture behavior of small volumes of structural materials. PMID:24268892

  7. Retrospective biodosimetry with small tooth enamel samples using K-Band and X-Band

    International Nuclear Information System (INIS)

    Gomez, Jorge A.; Kinoshita, Angela; Leonor, Sergio J.; Belmonte, Gustavo C.; Baffa, Oswaldo

    2011-01-01

    In an attempt to make the in vitro electron spin resonance (ESR) retrospective dosimetry of tooth enamel a less invasive method, experiments using X-Band and K-Band were performed, aiming to determine conditions that could be used in cases of accidental exposures. First, a small prism of enamel was removed and ground with an agate mortar and pestle until the particles reached a diameter of approximately less than 0.5 mm. This enamel extraction process resulted in a lower signal artifact compared with the direct enamel extraction performed with diamond burr abrasion. The manual grinding of the enamel does not lead to any induced ESR signal artifact, whereas the use of a diamond burr at low speed produces a signal artifact equivalent to the dosimetric signal induced by a dose of 500 mGy of gamma irradiation. A mass of 25 mg of enamel was removed from a sound molar tooth previously irradiated in vitro with a dose of 100 mGy. This amount of enamel was enough to detect the dosimetric signal in a standard X-Band spectrometer. However, using a K-Band spectrometer, sample masses between 5 and 10 mg were sufficient to obtain the same sensitivity. An overall evaluation of the uncertainties involved in the process in this and other dosimetric assessments performed at our laboratory indicates that it is possible at K-Band to estimate a 100 mGy dose with 25% accuracy. In addition, the use of K-Band also presented higher sensitivity and allowed the use of a smaller sample mass in comparison with X-Band. Finally, the restoration process performed on a tooth after extraction of the 25 mg of enamel is described. This was conducted by dental treatment using photopolymerizable resin, which enabled complete recovery of the tooth from the functional and aesthetic viewpoint, showing that this procedure can be minimally invasive.

  8. Retrospective biodosimetry with small tooth enamel samples using K-Band and X-Band

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Jorge A. [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Kinoshita, Angela [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Universidade Sagrado Coracao - USC, 17011-160 Bauru, Sao Paulo (Brazil); Leonor, Sergio J. [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Belmonte, Gustavo C. [Universidade Sagrado Coracao - USC, 17011-160 Bauru, Sao Paulo (Brazil); Baffa, Oswaldo, E-mail: baffa@usp.br [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil)

    2011-09-15

    In an attempt to make the in vitro electron spin resonance (ESR) retrospective dosimetry of tooth enamel a less invasive method, experiments using X-Band and K-Band were performed, aiming to determine conditions that could be used in cases of accidental exposures. First, a small prism of enamel was removed and ground with an agate mortar and pestle until the particles reached a diameter of approximately less than 0.5 mm. This enamel extraction process resulted in a lower signal artifact compared with the direct enamel extraction performed with diamond burr abrasion. The manual grinding of the enamel does not lead to any induced ESR signal artifact, whereas the use of a diamond burr at low speed produces a signal artifact equivalent to the dosimetric signal induced by a dose of 500 mGy of gamma irradiation. A mass of 25 mg of enamel was removed from a sound molar tooth previously irradiated in vitro with a dose of 100 mGy. This amount of enamel was enough to detect the dosimetric signal in a standard X-Band spectrometer. However, using a K-Band spectrometer, sample masses between 5 and 10 mg were sufficient to obtain the same sensitivity. An overall evaluation of the uncertainties involved in the process in this and other dosimetric assessments performed at our laboratory indicates that it is possible at K-Band to estimate a 100 mGy dose with 25% accuracy. In addition, the use of K-Band also presented higher sensitivity and allowed the use of a smaller sample mass in comparison with X-Band. Finally, the restoration process performed on a tooth after extraction of the 25 mg of enamel is described. This was conducted by dental treatment using photopolymerizable resin, which enabled complete recovery of the tooth from the functional and aesthetic viewpoint, showing that this procedure can be minimally invasive.

  9. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
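
    For reference, the classical discrete/discrete constrained least-squares filter that the paper extends can be written in a few lines of Python: the restored spectrum is conj(H)·G divided by (|H|² + γ|P|²), with P the Laplacian smoothness operator. The sketch below uses synthetic data and makes no attempt at the continuous/discrete/continuous formulation or the small-kernel spatial-domain implementation derived in the paper.

        import numpy as np

        def kernel_to_otf(kernel, shape):
            """Zero-pad a small kernel to `shape` and circularly shift its centre to the origin."""
            padded = np.zeros(shape)
            kh, kw = kernel.shape
            padded[:kh, :kw] = kernel
            padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
            return np.fft.fft2(padded)

        def cls_restore(blurred, psf, gamma=0.01):
            H = kernel_to_otf(psf, blurred.shape)
            P = kernel_to_otf(np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float),
                              blurred.shape)                     # Laplacian smoothness operator
            G = np.fft.fft2(blurred)
            F_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
            return np.real(np.fft.ifft2(F_hat))

        psf = np.ones((5, 5)) / 25.0                             # uniform blur kernel
        image = np.zeros((64, 64)); image[24:40, 24:40] = 1.0    # synthetic test scene
        blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * kernel_to_otf(psf, image.shape)))
        restored = cls_restore(blurred, psf, gamma=0.01)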

  10. Bootstrap-DEA analysis of BRICS’ energy efficiency based on small sample data

    International Nuclear Information System (INIS)

    Song, Ma-Lin; Zhang, Lin-Ling; Liu, Wei; Fisher, Ron

    2013-01-01

    Highlights: ► The BRICS’ economies have flourished, with increasing energy consumption. ► Analyses and comparisons of energy efficiency are conducted among the BRICS. ► As a whole, the BRICS show low but growing energy efficiency. ► The BRICS should adopt relevant energy policies based on their own conditions. - Abstract: As representatives of many emerging economies, the BRICS’ economies have developed greatly in recent years. Meanwhile, the proportion of world energy consumption accounted for by the BRICS has increased. Therefore, it is important to analyze and compare energy efficiency among them. This paper first utilizes a Super-SBM model to measure and calculate the energy efficiency of the BRICS, and then analyzes their present status and development trend. Further, the bootstrap is applied to correct the DEA estimates derived from the small sample, and finally the relationship between energy efficiency and carbon emissions is measured. Results show that the energy efficiency of the BRICS as a whole is low but increasing quickly. Also, the relationship between energy efficiency and carbon emissions varies from country to country because of their different energy structures. The governments of the BRICS should formulate relevant energy policies according to their own conditions
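
    For context, a data envelopment analysis (DEA) efficiency score is obtained by solving one small linear program per decision-making unit. The Python sketch below implements a basic input-oriented CCR model with scipy as a deliberately simplified stand-in for the Super-SBM model used in the paper; the input/output figures are invented, and a Bootstrap-DEA analysis would then resample the units and recompute these scores to correct small-sample bias.

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[3.0], [2.5], [4.0], [5.0], [3.5]])    # inputs (e.g., energy use), one row per unit (invented)
        Y = np.array([[2.0], [2.2], [2.5], [2.4], [3.0]])    # outputs (e.g., GDP), one row per unit (invented)
        n = X.shape[0]

        def ccr_efficiency(o):
            # variables: [theta, lambda_1, ..., lambda_n]; minimize theta
            c = np.r_[1.0, np.zeros(n)]
            A_in = np.hstack([-X[o].reshape(-1, 1), X.T])    # sum_j lambda_j*x_j - theta*x_o <= 0
            A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])   # -sum_j lambda_j*y_j <= -y_o
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
            bounds = [(None, None)] + [(0, None)] * n
            return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x[0]

        print([round(ccr_efficiency(o), 3) for o in range(n)])   # 1.0 marks the efficient frontier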

  11. Preferences for Depression Treatment Including Internet-Based Interventions: Results From a Large Sample of Primary Care Patients

    Directory of Open Access Journals (Sweden)

    Marie Dorow

    2018-05-01

    Full Text Available Background: To date, little is known about treatment preferences for depression concerning new media. This study aims to (1) investigate treatment preferences for depression, including internet-based interventions, and (2) examine subgroup differences concerning age, gender and severity of depression, as well as patient-related factors associated with treatment preferences. Methods: Data were derived from the baseline assessment of the @ktiv-trial. Depression treatment preferences were assessed from n = 641 primary care patients with mild to moderate depression regarding the following treatments: medication, psychotherapy, combined treatment, alternative treatment, talking to friends and family, exercise, self-help literature, and internet-based interventions. Depression severity was specified by GPs according to ICD-10 criteria. Ordinal logistic regression models were conducted to identify factors associated with treatment preferences. Results: Patients had a mean age of 43.9 years (SD = 13.8) and more than two thirds (68.6%) were female. About 43% of patients had mild depression while 57% were diagnosed with moderate depression. The majority of patients reported strong preferences for psychotherapy, talking to friends and family, and exercise. About one in five patients was very likely to consider internet-based interventions in case of depression. Younger patients expressed significantly stronger treatment preferences for psychotherapy and internet-based interventions than older patients. The most salient factors associated with treatment preferences were the patients' education and perceived self-efficacy. Conclusions: Patients with depression report individually different treatment preferences. Our results underline the importance of shared decision-making within primary care. Future studies should investigate treatment preferences for different types of internet-based interventions.
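
    Since the analysis relies on ordinal logistic regression of preference ratings, a minimal sketch of such a proportional-odds model is given below using statsmodels; the variable names, rating scale and simulated data are hypothetical and only stand in for the trial variables.

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        # Hypothetical data: preference rated 0-3 ("very unlikely" .. "very likely")
        rng = np.random.default_rng(1)
        n = 300
        df = pd.DataFrame({
            "age": rng.normal(44, 14, n),
            "self_efficacy": rng.normal(0, 1, n),
        })
        latent = -0.03 * df["age"] + 0.8 * df["self_efficacy"] + rng.logistic(size=n)
        codes = pd.cut(latent, bins=[-np.inf, -2, 0, 2, np.inf], labels=False).astype(int)
        df["preference"] = pd.Categorical(codes, categories=[0, 1, 2, 3], ordered=True)

        # Proportional-odds model: ordered logit of preference on age and self-efficacy
        mod = OrderedModel(df["preference"], df[["age", "self_efficacy"]], distr="logit")
        res = mod.fit(method="bfgs", disp=False)
        print(res.summary())            # coefficients and threshold estimates
        print(np.exp(res.params[:2]))   # odds ratios for age and self-efficacy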

  12. The use of commercially available PC-interface cards for elemental mapping in small samples using XRF

    International Nuclear Information System (INIS)

    Abu Bakar bin Ghazali; Hoyes Garnet

    1991-01-01

    This paper demonstrates the use of ADC and reed relay cards to scan a small sample and acquire X-ray fluorescence data. The result shows the distribution of an element, such as zinc, in the sample by means of colours signifying the concentration.

  13. Success and failure rates of tumor genotyping techniques in routine pathological samples with non-small-cell lung cancer.

    Science.gov (United States)

    Vanderlaan, Paul A; Yamaguchi, Norihiro; Folch, Erik; Boucher, David H; Kent, Michael S; Gangadharan, Sidharta P; Majid, Adnan; Goldstein, Michael A; Huberman, Mark S; Kocher, Olivier N; Costa, Daniel B

    2014-04-01

    Identification of some somatic molecular alterations in non-small-cell lung cancer (NSCLC) has become evidence-based practice. The success and failure rate of using commercially available tumor genotyping techniques in routine day-to-day NSCLC pathology samples is not well described. We sought to evaluate the success and failure rate of EGFR mutation, KRAS mutation, and ALK FISH in a cohort of lung cancers subjected to routine clinical tumor genotyping. Clinicopathologic data and tumor genotyping success and failure rates were retrospectively compiled and analyzed from 381 patient-tumor samples. From these 381 patients with lung cancer, the mean age was 65 years, 61.2% were women, 75.9% were white, 27.8% were never smokers, 73.8% had advanced NSCLC and 86.1% had adenocarcinoma histology. The tumor tissue was obtained from surgical specimens in 48.8%, core needle biopsies in 17.9%, and as cell blocks from aspirates or fluid in 33.3% of cases. Anatomic sites for tissue collection included lung (49.3%), lymph nodes (22.3%), pleura (11.8%), bone (6.0%), and brain (6.0%), among others. The overall success rate for EGFR mutation analysis was 94.2%, for KRAS mutation 91.6% and for ALK FISH 91.6%. The highest failure rates were observed when the tissue was obtained from image-guided percutaneous transthoracic core-needle biopsies (31.8%, 27.3%, and 35.3% for EGFR, KRAS, and ALK tests, respectively) and bone specimens (23.1%, 15.4%, and 23.1%, respectively). In specimens obtained from bone, the failure rates were significantly higher for biopsies than resection specimens (40% vs. 0%, p=0.024 for EGFR) and for decalcified compared to non-decalcified samples (60% vs. 5.5%, p=0.021 for EGFR). Tumor genotyping techniques are feasible in most samples, apart from small image-guided percutaneous transthoracic core-needle biopsies and bone samples from core biopsies with decalcification, and therefore expansion of routine tumor genotyping into the care of patients with NSCLC may not require special

  14. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs, however, unravels the closure properties of this approach, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
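
    As an illustration of the interval calculus described above, the sketch below implements AND, OR and NOT on probability intervals using the classical Fréchet bounds, which assume nothing about the dependence between events; the interval endpoints are invented for the example and the level-wise confidence-stack machinery is not shown.

        def frechet_and(a, b):
            """AND of two probability intervals a=(lo,hi), b=(lo,hi), unknown dependence."""
            return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

        def frechet_or(a, b):
            """OR of two probability intervals with unknown dependence."""
            return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

        def p_not(a):
            """NOT of a probability interval."""
            return (1.0 - a[1], 1.0 - a[0])

        # Illustrative subevent estimates (e.g. from small-sample confidence intervals)
        F = (0.10, 0.30)
        G = (0.20, 0.40)
        H = (0.05, 0.15)

        # Bounds on P(E) for E = (F AND G) OR (NOT H)
        E = frechet_or(frechet_and(F, G), p_not(H))
        print(E)   # best-possible bounds without dependence information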

  15. Quantum superposition of the state discrete spectrum of mathematical correlation molecule for small samples of biometric data

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-06-01

    Full Text Available Introduction: The study aims to decrease the number of errors in calculating the correlation coefficient in small test samples. Materials and Methods: We used a simulation tool for the distribution functions of the density of values of the correlation coefficient in small samples. A method for quantizing the data allows obtaining a discrete spectrum of states of one variety of the correlation functional. This allows us to consider the proposed structure as a mathematical correlation molecule, described by an analogue of the continuous quantum Schrödinger equation. Results: On small samples, the chi-squared Pearson molecule enhances the power of the classical chi-squared test by up to 20 times. The mathematical correlation molecule described in the article has similar properties. In the future it should allow reducing the calculation errors of classical correlation coefficients in small samples. Discussion and Conclusions: The authors suggest that there are infinitely many mathematical molecules similar in their properties to actual physical molecules. Schrödinger equations are not unique; their analogues can be constructed for each mathematical molecule. A mathematical synthesis of molecules can be expected for a large number of known statistical tests and statistical moments. All this should make it possible to reduce calculation errors due to quantum effects that occur in small test samples.

  16. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Full Text Available Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce reasonable results it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (a Single Machine Infinite Bus and the IEEE 16-machine, 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated by LHS for the small signal stability application produce the same results as the IDEAL values from a sample size of 100 upwards. This shows that about 100 random-variable samples generated using the LHS method are enough to produce reasonable results for practical purposes in small signal stability applications. It is also revealed that LHS has the smallest variance when the experiment is repeated 100 times, which signifies the robustness of LHS over SRS. An LHS sample of size 100 produces the same result as the conventional method with a sample size of 50,000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
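
    To make the SRS/LHS comparison concrete, the sketch below estimates the mean of a cheap stand-in function of two uncertain parameters with both sampling schemes and compares the spread of the estimates over repeated runs; the function, parameter ranges and sample sizes are illustrative, not the eigenvalue analysis of the paper.

        import numpy as np
        from scipy.stats import qmc

        def model(x):
            # Stand-in for an expensive small-signal stability computation
            # (e.g. a damping-ratio evaluation); here just a smooth test function.
            return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        def estimate(sampler, n, rng):
            if sampler == "srs":
                u = rng.random((n, 2))
            else:  # Latin Hypercube Sampling
                u = qmc.LatinHypercube(d=2, seed=rng.integers(1 << 31)).random(n)
            x = qmc.scale(u, [0.0, -1.0], [1.0, 1.0])   # map to parameter ranges
            return model(x).mean()

        rng = np.random.default_rng(42)
        reps = 100
        for method in ("srs", "lhs"):
            est = [estimate(method, 100, rng) for _ in range(reps)]
            print(method, "mean =", np.mean(est), "std =", np.std(est))
        # LHS typically shows a markedly smaller spread for the same sample size.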

  17. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    Directory of Open Access Journals (Sweden)

    Richard C. Zangar

    2004-01-01

    Full Text Available Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples, such as nipple aspirate fluid (NAF) or early-stage tumors, are inherently small. Other samples such as serum are collected in larger volumes, but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  18. Targeted histology sampling from atypical small acinar proliferation area detected by repeat transrectal prostate biopsy

    Directory of Open Access Journals (Sweden)

    A. V. Karman

    2017-01-01

    Full Text Available Objective: to define the approach to the management of patients with a detected ASAP area. Materials and methods: In the period from 2012 through 2015, 494 patients with a previously negative biopsy and remaining suspicion of prostate cancer (PCa) were examined. The patients underwent repeat 24-core multifocal prostate biopsy with additional tissue samples taken from suspicious areas detected by multiparametric magnetic resonance imaging and transrectal ultrasound. An isolated ASAP area was found in 127 (25.7 %) of the 494 examined men. All of them were offered repeat targeted transrectal biopsy of this area. Targeted transrectal ultrasound-guided biopsy of the ASAP area was performed in 56 (44.1 %) of the 127 patients, 53 of them being included in the final analysis. Results: PCa was diagnosed in 14 (26.4 %) of the 53 patients, their mean age being 64.4 ± 6.9 years. The average level of prostate-specific antigen (PSA) in PCa patients was 6.8 ± 3.0 ng/ml, in those with benign lesions – 9.3 ± 6.5 ng/ml; the percentage ratio of free/total PSA with PCa was 16.2 ± 7.8 %, with benign lesions – 23.3 ± 7.7 %; PSA density in PCa patients was 0.14 ± 0.07 ng/ml/cm³, in those with benign lesions – 0.15 ± 0.12 ng/ml/cm³. Therefore, when an ASAP area is detected in repeat prostate biopsy samples, it is advisable to perform targeted extended biopsy of this area.

  19. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    Science.gov (United States)

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the appropriate frequency of sampling in small water distribution systems, we compared two sampling programs based on different assumptions about the statistical distribution of coliform counts. The two programs were used to monitor the water distribution system of a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, while the negative binomial distribution assumption implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for four samples. However, the hypothesis of homogeneity was rejected, and the error probability beta was higher with 4 samples (beta = 0.24) than with 21 (beta = 0.05). For this small network, determining the sample size according to the heterogeneity hypothesis strengthens the statement that the water is drinkable, compared with the homogeneity assumption.

  20. High-speed imaging upgrade for a standard sample scanning atomic force microscope using small cantilevers

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Jonathan D.; Nievergelt, Adrian; Erickson, Blake W.; Yang, Chen; Dukic, Maja; Fantner, Georg E., E-mail: georg.fantner@epfl.ch [Ecole Polytechnique Fédérale de Lausanne, Lausanne (Switzerland)

    2014-09-15

    We present an atomic force microscope (AFM) head for optical beam deflection on small cantilevers. Our AFM head is designed to be small in size, easily integrated into a commercial AFM system, and has a modular architecture facilitating exchange of the optical and electronic assemblies. We present two different designs for both the optical beam deflection and the electronic readout systems, and evaluate their performance. Using small cantilevers with our AFM head on an otherwise unmodified commercial AFM system, we are able to take tapping mode images approximately 5–10 times faster compared to the same AFM system using large cantilevers. By using additional scanner turnaround resonance compensation and a controller designed for high-speed AFM imaging, we show tapping mode imaging of lipid bilayers at line scan rates of 100–500 Hz for scan areas of several micrometers in size.

  1. Importance of including small-scale tile drain discharge in the calibration of a coupled groundwater-surface water catchment model

    DEFF Research Database (Denmark)

    Hansen, Anne Lausten; Refsgaard, Jens Christian; Christensen, Britt Stenhøj Baun

    2013-01-01

    the catchment. In this study, a coupled groundwater-surface water model based on the MIKE SHE code was developed for the 4.7 km² Lillebæk catchment in Denmark, where tile drain flow is a major contributor to the stream discharge. The catchment model was calibrated in several steps by incrementally including the observation data into the calibration to see the effect on model performance of including diverse data types, especially tile drain discharge. For the Lillebæk catchment, measurements of hydraulic head, daily stream discharge, and daily tile drain discharge from five small (1–4 ha) drainage areas exist. The results showed that including tile drain data in the calibration of the catchment model improved its general performance for hydraulic heads and stream discharges. However, the model failed to correctly describe the local-scale dynamics of the tile drain discharges, and, furthermore, including the drain...

  2. A TIMS-based method for the high precision measurements of the three-isotope potassium composition of small samples

    DEFF Research Database (Denmark)

    Wielandt, Daniel Kim Peel; Bizzarro, Martin

    2011-01-01

    A novel thermal ionization mass spectrometry (TIMS) method for the three-isotope analysis of K has been developed, and ion chromatographic methods for the separation of K have been adapted for the processing of small samples. The precise measurement of K-isotopes is challenged by the presence of ...

  3. Adiponectin levels measured in dried blood spot samples from neonates born small and appropriate for gestational age

    DEFF Research Database (Denmark)

    Klamer, A; Skogstrand, Kristin; Hougaard, D M

    2007-01-01

    Adiponectin levels measured in neonatal dried blood spot samples (DBSS) might be affected by both prematurity and being born small for gestational age (SGA). The aim of the study was to measure adiponectin levels in routinely collected neonatal DBSS taken on day 5 (range 3-12) postnatal from...

  4. Classification of natural formations based on their optical characteristics using small volumes of samples

    Science.gov (United States)

    Abramovich, N. S.; Kovalev, A. A.; Plyuta, V. Y.

    1986-02-01

    A computer algorithm has been developed to classify the spectral bands of natural scenes on Earth according to their optical characteristics. The algorithm is written in FORTRAN-IV and can be used in spectral data processing programs requiring small data loads. The spectral classifications of several different types of green vegetation canopies are given to illustrate the effectiveness of the algorithm.

  5. Thermal neutron absorption cross-section for small samples (experiments in cylindrical geometry)

    International Nuclear Information System (INIS)

    Czubek, J.A.; Drozdowicz, K.; Igielski, A.; Krynicka-Drozdowicz, E.; Woznicka, U.

    1982-01-01

    Measurement results for the thermal neutron macroscopic absorption cross-section Σ_a obtained with the cylindrical sample-moderator system are presented. Experiments for liquid (water solutions of H₃BO₃) and solid (crushed basalt) samples are reported. Solid samples were saturated with the H₃BO₃ "poisoning" solution. The accuracy obtained for the determination of the absorption cross-section of the solid material was σ(Σ_ma) = (1.2–2.2) c.u. in the case when porosity was measured with an accuracy of σ(φ) = 0.001–0.002. The dispersion of the Σ_ma data obtained for basalts (taken from different quarries) was higher than the accuracy of the measurement. All experimental data for the fundamental decay constants λ₀, together with full information about the samples, are given. (author)

  6. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of the small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
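
    For concreteness, the sketch below computes the non-parametric (Mann-Whitney) AUC estimate together with a simple percentile bootstrap interval, which is only one of the many CI constructions such comparisons cover; the marker values are simulated small samples.

        import numpy as np

        def auc_mw(x, y):
            """Mann-Whitney estimate of the AUC: P(X < Y) + 0.5 * P(X = Y),
            where x are marker values for non-diseased and y for diseased subjects."""
            x, y = np.asarray(x), np.asarray(y)
            greater = (y[:, None] > x[None, :]).sum()
            ties = (y[:, None] == x[None, :]).sum()
            return (greater + 0.5 * ties) / (len(x) * len(y))

        rng = np.random.default_rng(7)
        x = rng.normal(0.0, 1.0, 15)   # small non-diseased sample
        y = rng.normal(1.0, 1.0, 15)   # small diseased sample

        point = auc_mw(x, y)

        # Percentile bootstrap, resampling within each group
        boot = np.array([
            auc_mw(rng.choice(x, len(x), replace=True),
                   rng.choice(y, len(y), replace=True))
            for _ in range(2000)
        ])
        ci = np.percentile(boot, [2.5, 97.5])
        print(f"AUC = {point:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")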

  7. Neutron Activation Analysis of Archaeological Pottery Samples of Large Size, Including Pieces of Low Symmetry Shape: How to Get Accurate Analytical Results in a Practical Way

    International Nuclear Information System (INIS)

    Bedregal, P.S.; Montoya, E.H.; Mendoza, P.; Ubillús, M.; Baltuano, O.; Hernández, Y.; Gago, J.; Cohen, I.M.

    2018-01-01

    The feasibility of the instrumental neutron activation analysis of entire pieces of archaeological pottery, using low thermal neutron fluxes, is examined, and a new approach for the non-destructive analysis of entire pottery objects by INAA, using the conventional relative method, is described. The proposed method relies on the preparation of a comparison standard, which is a nominally identical replicate of the original object to be studied. INAA of small samples taken from that replicate allows determining its composition for the elements to be analyzed. Then the intact sample and intact standard are irradiated together in a neutron beam from a nuclear reactor, using a suitable turntable facility and monitored by neutron flux monitors. Finally, after proper decay times, the induced activities in sample, standard and flux monitors are successively measured by high-resolution gamma spectroscopy, using a high-efficiency germanium detector. In this way, several complicating effects such as geometrical efficiency, neutron self-shielding and gamma ray attenuation are avoided, and complicated mathematical corrections are not needed. A potential advantage of the method is that it can be fully validated. Quantitative experiments using 7–13 hours of irradiation of pairs of 750 gram replicates, at low neutron fluxes of 3.9 × 10⁶ n cm⁻² s⁻¹, followed by 100,000 to 200,000 seconds of counting in front of a 70% relative efficiency HPGe detector, led to recoveries between 90% and 110% for Sc and La. Another experiment, using pairs of replicates of small solid mud anthropomorphic objects (weighing about 100 grams each), irradiated for 8 hours at a neutron flux of 10⁹ n cm⁻² s⁻¹, led to recoveries between 90% and 110% for As, Ba, Ce, Co, Cr, Cs, Eu, Fe, Hf, La, Lu, Rb, Sb, Sc, Sm, Ta, Tb, Th, Yb and U, showing that the proposed method is suitable for LSNAA of entire pottery or mud archaeological objects. (author)

  8. Conditional estimation of local pooled dispersion parameter in small-sample RNA-Seq data improves differential expression test.

    Science.gov (United States)

    Gim, Jungsoo; Won, Sungho; Park, Taesung

    2016-10-01

    High throughput sequencing technology in transcriptomics studies contributes to the understanding of gene regulation mechanisms and their cellular function, but also increases the need for accurate statistical methods to assess quantitative differences between experiments. Many methods have been developed to account for the specifics of count data: non-normality, a dependence of the variance on the mean, and small sample size. Among these issues, the small number of samples in typical experiments remains a challenge. Here we present a method for differential analysis of count data, using conditional estimation of local pooled dispersion parameters. A comprehensive evaluation of our proposed method for differential gene expression analysis, using both simulated and real data sets, shows that the proposed method is more powerful than other existing methods while controlling the false discovery rate. By introducing conditional estimation of local pooled dispersion parameters, we successfully overcome the limitation of low power and enable a powerful differential expression test with a small number of samples.
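
    The key idea, sharing dispersion information across genes when each gene has only a few replicates, can be sketched roughly as below: per-gene method-of-moments dispersions are replaced by a local median over genes with similar mean expression. This is only a simplified stand-in under simulated negative binomial counts, not the authors' conditional estimator or test.

        import numpy as np

        rng = np.random.default_rng(3)
        n_genes, n_samples = 2000, 3          # small-sample scenario

        # Simulate NB counts with per-gene mean and dispersion (alpha)
        mu = rng.lognormal(3, 1, n_genes)
        alpha = 0.1 + 1.0 / np.sqrt(mu)       # dispersion shrinking with mean
        counts = rng.negative_binomial(n=1 / alpha[:, None],
                                       p=1 / (1 + alpha[:, None] * mu[:, None]),
                                       size=(n_genes, n_samples))

        # Per-gene method-of-moments dispersion: var = mu + alpha * mu^2
        m = counts.mean(axis=1)
        v = counts.var(axis=1, ddof=1)
        alpha_hat = np.clip((v - m) / np.maximum(m, 1e-8) ** 2, 0, None)

        # Local pooling: replace each gene's noisy estimate by the median of the
        # estimates of genes with similar mean expression (sliding window in
        # mean-ranked order).
        order = np.argsort(m)
        window = 101
        pooled = np.empty(n_genes)
        for i, g in enumerate(order):
            lo, hi = max(0, i - window // 2), min(n_genes, i + window // 2 + 1)
            pooled[g] = np.median(alpha_hat[order[lo:hi]])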

  9. Sensitive determination of iodine species, including organo-iodine, for freshwater and seawater samples using high performance liquid chromatography and spectrophotometric detection

    International Nuclear Information System (INIS)

    Schwehr, Kathleen A.; Santschi, Peter H.

    2003-01-01

    In order to more effectively use iodine isotope ratios, ¹²⁹I/¹²⁷I, as hydrological and geochemical tracers in aquatic systems, a new high performance liquid chromatography (HPLC) method was developed for the determination of iodine speciation. The dissolved iodine species that dominate natural water systems are iodide, iodate, and organic iodine. Using this new method, iodide was determined directly by combining anion exchange chromatography and spectrophotometry. Iodate and the total of organic iodine species are determined as iodide, with minimal sample preparation compared to existing methods. The method has been applied to quantitatively determine iodide; iodate, as the difference between total inorganic iodide after reduction of the sample by NaHSO₃ and iodide; and organic iodine, as the difference between total iodide (after organic decomposition by dehydrohalogenation and reduction by NaHSO₃) and total inorganic iodide. Analytical accuracy was tested: (1) against certified reference material, SRM 1549, powdered milk (NIST); (2) through the method of standard additions; and (3) by comparison to values of environmental waters measured independently by inductively coupled plasma mass spectrometry (ICP-MS). The method has been successfully applied to measure the concentrations of iodine species in rain, surface and ground water, estuarine and seawater samples. The detection limit was ∼1 nM (0.2 ppb), with less than 3% relative standard deviation (R.S.D.) for samples determined by standard additions to an iodide solution of 20 nM in 0.1 M NaCl. This technique is one of the few methods sensitive enough to accurately quantify stable iodine species at nanomolar concentrations in aquatic systems across a range of matrices, and to quantitatively measure organic iodine. Additionally, this method makes use of a very dilute mobile phase, and may be applied to small sample volumes without pre-column concentration or post-column reactions.

  10. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

    We describe and validate a hybrid image and blood sampling (HIBS) method to derive the input function for quantification of microPET mouse data. The HIBS algorithm derives the peak of the input function from the image, which is corrected for recovery, while the tail is derived from 5 to 6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50 mCi bolus administration of ¹⁸FDG. In total, 21 blood samples (blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session to compare against the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted in comparison to the HIBS-derived curve due to temporal limitations and delay in blood sampling, which affected the rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results and is thus a substitute for bsPTAC.
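
    The Bezier-based join between the image-derived peak and the blood-sampled tail can be sketched as below with a single cubic Bezier segment; the times, activities and control-point offsets are invented for illustration and do not reproduce the authors' fitting procedure or recovery correction.

        import numpy as np

        def cubic_bezier(p0, p1, p2, p3, n=50):
            """Evaluate a cubic Bezier curve defined by control points p0..p3 (2D)."""
            t = np.linspace(0.0, 1.0, n)[:, None]
            return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                    + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

        # Illustrative data: image-derived early curve (min, kBq/mL) and late blood samples
        t_img = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
        a_img = np.array([0.0, 40.0, 120.0, 90.0, 60.0])    # recovery-corrected peak
        t_blood = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
        a_blood = np.array([25.0, 18.0, 12.0, 8.0, 6.0])

        # Join rightmost image point to leftmost blood sample with a Bezier segment;
        # the inner control points pull the bridge toward a smooth decay.
        p0 = np.array([t_img[-1], a_img[-1]])
        p3 = np.array([t_blood[0], a_blood[0]])
        p1 = p0 + np.array([1.0, -15.0])
        p2 = p3 - np.array([1.0, -5.0])
        bridge = cubic_bezier(p0, p1, p2, p3)

        # Full hybrid input function: image peak + Bezier bridge + blood tail
        t_full = np.concatenate([t_img, bridge[:, 0], t_blood])
        a_full = np.concatenate([a_img, bridge[:, 1], a_blood])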

  11. Small sample analysis using sputter atomization/resonance ionization mass spectrometry

    International Nuclear Information System (INIS)

    Christie, W.H.; Goeringer, D.E.

    1986-01-01

    We have used secondary ion mass spectrometry (SIMS) to investigate the emission of ions via argon sputtering from U metal, UO₂, and U₃O₈ samples. We have also used laser resonance ionization techniques to study argon-sputtered neutral atoms and molecules emitted from these same samples. For the case of U metal, a significant enhancement in detection sensitivity for U is obtained via SA/RIMS. For U in the fully oxidized form (U₃O₈), SA/RIMS offers no improvement in U detection sensitivity over conventional SIMS when sputtering with argon. 9 refs., 1 fig., 2 tabs

  12. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  13. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
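
    The abstract does not spell out how the tailored 0-1 knapsack step works; the sketch below is only a generic 0-1 knapsack dynamic program that could, under an assumed scoring, pick base classifiers that maximize a value (e.g. accuracy plus a diversity bonus) subject to an integer budget. The scoring, costs and budget are hypothetical, and the authors' tailored variant is not reproduced.

        def knapsack_select(values, costs, budget):
            """Generic 0-1 knapsack DP: pick items maximizing total value subject to
            a total integer cost <= budget. Returns the chosen item indices."""
            n = len(values)
            dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                for b in range(budget + 1):
                    dp[i][b] = dp[i - 1][b]
                    if costs[i - 1] <= b:
                        cand = dp[i - 1][b - costs[i - 1]] + values[i - 1]
                        if cand > dp[i][b]:
                            dp[i][b] = cand
            # Backtrack to recover the selected classifiers
            chosen, b = [], budget
            for i in range(n, 0, -1):
                if dp[i][b] != dp[i - 1][b]:
                    chosen.append(i - 1)
                    b -= costs[i - 1]
            return sorted(chosen)

        # Hypothetical base classifiers: value = accuracy + diversity bonus, cost = redundancy
        values = [0.71, 0.68, 0.75, 0.66, 0.73]
        costs = [3, 2, 4, 1, 3]          # integer "redundancy" scores
        print(knapsack_select(values, costs, budget=7))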

  14. Small-kernel, constrained least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  15. The Dirichlet-Multinomial model for multivariate randomized response data and small samples

    NARCIS (Netherlands)

    Avetisyan, Marianna; Fox, Gerardus J.A.

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The

  16. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    Science.gov (United States)

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  17. In situ sampling of small volumes of soil solution using modified micro-suction cups

    NARCIS (Netherlands)

    Shen, Jianbo; Hoffland, E.

    2007-01-01

    Two modified designs of micro-pore-water samplers were tested for their capacity to collect unbiased soil solution samples containing zinc and citrate. The samplers had either ceramic or polyethersulfone (PES) suction cups. Laboratory tests of the micro-samplers were conducted using (a) standard

  18. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistic or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four distribution models of counts (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitats, and sample characteristics. The performance of the models was closely related to sample statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, either based on the most suitable distribution assumption or on an asymptotic, distribution-free (Student's) method. Student's method generally corresponded to narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.
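
    A model comparison of this kind can be sketched with statsmodels by fitting intercept-only Poisson, negative binomial and zero-inflated variants to a count vector and comparing BIC values; the counts below are simulated, and a convergence safeguard is included because zero-inflated fits can fail on small samples.

        import numpy as np
        from statsmodels.discrete.discrete_model import Poisson, NegativeBinomialP
        from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                      ZeroInflatedNegativeBinomialP)

        rng = np.random.default_rng(5)
        # Simulated overdispersed, zero-heavy counts (e.g. fish per electrofishing point)
        counts = rng.negative_binomial(n=0.5, p=0.2, size=40)

        exog = np.ones((len(counts), 1))    # intercept-only models
        models = {
            "Poisson": Poisson(counts, exog),
            "NegBin": NegativeBinomialP(counts, exog),
            "ZIP": ZeroInflatedPoisson(counts, exog),
            "ZINB": ZeroInflatedNegativeBinomialP(counts, exog),
        }
        for name, mod in models.items():
            try:
                res = mod.fit(method="bfgs", maxiter=500, disp=False)
                print(f"{name:7s} BIC = {res.bic:.1f}")
            except Exception as exc:        # small samples can make some fits fail
                print(f"{name:7s} did not converge: {exc}")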

  19. Calculation code of heterogeneity effects for analysis of small sample reactivity worth

    International Nuclear Information System (INIS)

    Okajima, Shigeaki; Mukaiyama, Takehiko; Maeda, Akio.

    1988-03-01

    The discrepancy between experimental and calculated central reactivity worths has been one of the most significant issues in the analysis of fast reactor critical experiments. Two effects have been pointed out as possible causes of the discrepancy that should be taken into account in the calculation: one is the local heterogeneity effect associated with the measurement geometry, and the other is the heterogeneity effect on the distribution of the intracell adjoint flux. In order to evaluate these effects in the analysis of FCA actinide sample reactivity worths, a calculation code based on the collision probability method was developed. The code can handle the sample size effect, which is one of the local heterogeneity effects, and also the intracell adjoint heterogeneity effect. (author)

  20. Gravimetric and volumetric approaches adapted for hydrogen sorption measurements with in situ conditioning on small sorbent samples

    International Nuclear Information System (INIS)

    Poirier, E.; Chahine, R.; Tessier, A.; Bose, T.K.

    2005-01-01

    We present high sensitivity (0 to 1 bar, 295 K) gravimetric and volumetric hydrogen sorption measurement systems adapted for in situ sample conditioning at high temperature and high vacuum. These systems are designed especially for experiments on sorbents available only in small masses (mg) and requiring thorough degassing prior to sorption measurements. An uncertainty analysis based on instrumental specifications and hydrogen absorption measurements on palladium is presented. The gravimetric and volumetric systems yield cross-checkable results within about 0.05 wt % on samples weighing from (3 to 25) mg. Hydrogen storage capacities of single-walled carbon nanotubes measured at 1 bar and 295 K with both systems are presented.

  1. A summary of methods of predicting reliability life of nuclear equipment with small samples

    International Nuclear Information System (INIS)

    Liao Weixian

    2000-03-01

    Some nuclear equipment is manufactured in small batches, e.g., 1-3 sets. Its service life may be very difficult to determine experimentally for economic and technical reasons. A method combining theoretical analysis with material tests to predict the life of equipment is put forward, based on the fact that equipment consists of parts or elements made of different materials. The whole life of an equipment part consists of the crack forming life (i.e., the fatigue life or the damage accumulation life) and the crack extension life. Methods of predicting machine life are systematically summarized, with emphasis on those that use theoretical analysis to substitute for large-scale prototype experiments. Meanwhile, methods and steps of predicting reliability life are described, taking into consideration the randomness of various variables and parameters in engineering. Finally, the latest advances and trends in machine life prediction are discussed.

  2. Density-viscosity product of small-volume ionic liquid samples using quartz crystal impedance analysis.

    Science.gov (United States)

    McHale, Glen; Hardacre, Chris; Ge, Rile; Doy, Nicola; Allen, Ray W K; MacInnes, Jordan M; Bown, Mark R; Newton, Michael I

    2008-08-01

    Quartz crystal impedance analysis has been developed as a technique to assess whether room-temperature ionic liquids are Newtonian fluids and as a small-volume method for determining the values of their viscosity-density product, ρη. Changes in the impedance spectrum of a 5-MHz fundamental frequency quartz crystal induced by a water-miscible room-temperature ionic liquid, 1-butyl-3-methylimidazolium trifluoromethylsulfonate ([C4mim][OTf]), were measured. From coupled frequency shift and bandwidth changes as the concentration was varied from 0 to 100% ionic liquid, it was determined that this liquid provided a Newtonian response. A second, water-immiscible ionic liquid, 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [C4mim][NTf2], with concentration varied using methanol, was tested and also found to provide a Newtonian response. In both cases, the values of the square root of the viscosity-density product deduced from the small-volume quartz crystal technique were consistent with those measured using a viscometer and density meter. The third harmonic of the crystal was found to provide the closest agreement between the two measurement methods; the pure ionic liquids had the largest difference, of approximately 10%. In addition, 18 pure ionic liquids were tested, and for 11 of these good-quality frequency shift and bandwidth data were obtained; these all had a Newtonian response. The frequency shift of the third harmonic was found to vary linearly with the square root of the viscosity-density product of the pure ionic liquids up to a value of √(ρη) of approximately 18 kg m^-2 s^-1/2, but with a slope 10% smaller than that predicted by the Kanazawa and Gordon equation. It is envisaged that the quartz crystal technique could be used in a high-throughput microfluidic system for characterizing ionic liquids.
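
    The Kanazawa and Gordon relation mentioned at the end links the liquid-loading frequency shift to √(ρη); the sketch below simply inverts that relation for a 5 MHz crystal. The quartz constants are standard literature values and the frequency shift is illustrative, not a measurement from the paper.

        import math

        # Quartz material constants
        RHO_Q = 2648.0        # density of quartz, kg/m^3
        MU_Q = 2.947e10       # shear modulus of AT-cut quartz, Pa

        def sqrt_rho_eta_from_shift(delta_f, f0):
            """Invert the Kanazawa-Gordon relation
                delta_f = -f0**1.5 * sqrt(rho*eta / (pi * RHO_Q * MU_Q))
            to recover sqrt(rho*eta) in kg m^-2 s^-1/2 from a liquid-loading
            frequency shift delta_f (Hz) of a crystal with fundamental f0 (Hz)."""
            return abs(delta_f) * math.sqrt(math.pi * RHO_Q * MU_Q) / f0 ** 1.5

        # Illustrative example: a 5 MHz crystal showing a 2 kHz shift under liquid loading
        f0 = 5.0e6
        delta_f = -2000.0
        print(sqrt_rho_eta_from_shift(delta_f, f0))   # sqrt(density * viscosity)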

  3. Interstitial water studies on small core samples, Deep Sea Drilling Project, Leg 5

    Science.gov (United States)

    Manheim, F. T.; Chan, K.M.; Sayles, F.L.

    1970-01-01

    Leg 5 samples fall into two categories with respect to interstitial water composition: 1) rapidly deposited terrigenous or appreciably terrigenous deposits, such as in Hole 35 (western Escanaba trough, off Cape Mendocino, California); and, 2) slowly deposited pelagic clays and biogenic muds and oozes. Interstitial waters in the former show modest to slight variations in chloride and sodium, but drastic changes in non-conservative ions such as magnesium and sulfate. The pelagic deposits show only relatively minor changes in both conservative and non-conservative pore fluid constituents. As was pointed out in earlier Leg Reports, it is believed that much of the variation in chloride in pore fluids within individual holes is attributable to the manipulation of samples on board ship and in the laboratory. On the other hand, the scatter in sodium is due in part to analytical error (on the order of 2 to 3 per cent, in terms of a standard deviation), and it probably accounts for most of the discrepancies in total anion and cation balance. All constituents reported here, with the exception of bulk water content, were analyzed on water samples which were sealed in plastic tubes aboard ship and were subsequently opened and divided into weighed aliquots in the laboratory. Analytical methods follow the atomic absorption, wet chemical and emission spectrochemical techniques briefly summarized in previous reports, e.g. Manheim et al., 1969, and Chan and Manheim, 1970. The authors acknowledge assistance from W. Sunda, D. Kerr, C. Lawson and H. Richards, and thank D. Spencer, P. Brewer and E. Degens for allowing the use of equipment and laboratory facilities.

  4. exTAS - next-generation TAS for small samples and extreme conditions

    International Nuclear Information System (INIS)

    Kulda, J.; Hiess, A.

    2011-01-01

    The currently used implementation of horizontally and vertically focusing optics in three-axis spectrometers (TAS) permits efficient studies of excitations in sub-cm³-sized single crystals. With the present proposal we wish to stimulate a further paradigm shift into the domain of mm³-sized samples. exTAS combines highly focused mm-sized focal spots, boosting the sensitivity limits, with a spectrometer layout down-scaled to a table-top size to provide high flexibility in optimizing acceptance angles and to achieve sub-millimeter positioning accuracy. (authors)

  5. Sophistication of 14C measurement at JAEA-AMS-MUTSU. Attempt on a small quantity of sample

    International Nuclear Information System (INIS)

    Tanaka, Takayuki; Kabuto, Shoji; Kinoshita, Naoki; Yamamoto, Nobuo

    2010-01-01

    In investigations of substance dynamics using molecular weight and chemical fractionation, the use of ¹⁴C measurement by accelerator mass spectrometry (AMS) has started. As a result of such fractionation, the sample amounts available for AMS measurement have become smaller, and we expect this trend toward small sample quantities to accelerate steadily in the future. As the ¹⁴C measurement by the AMS established at the Mutsu office currently requires about 2 mg of sample, our AMS lags behind the others in this trend. We therefore tried to reduce the sample amount needed for ¹⁴C measurement by our AMS. In this study, we modified the shape of the target-piece in which the sample is packed and which is routinely needed for radiocarbon measurement by our AMS. Moreover, we improved the apparatus used to pack the sample. As a result of these improvements, we showed that it is possible to measure ¹⁴C with our AMS using a sample amount of about 0.5 mg. (author)

  6. Spatial variation of contaminant elements of roadside dust samples from Budapest (Hungary) and Seoul (Republic of Korea), including Pt, Pd and Ir.

    Science.gov (United States)

    Sager, Manfred; Chon, Hyo-Taek; Marton, Laszlo

    2015-02-01

    Roadside dusts were studied to explain the spatial variation and present levels of contaminant elements, including Pt, Pd and Ir, in the urban environment in and around Budapest (Hungary) and Seoul (Republic of Korea). The samples were collected from six sites of high traffic volume in the Seoul metropolitan city and from two control sites within the suburbs of Seoul, for comparison. Similarly, road dust samples were obtained twice from traffic focal points in Budapest, from the large bridges across the River Danube, from Margitsziget (an island in the Danube in the northern part of Budapest, used for recreation), as well as from main roads (not highways) outside Budapest. The samples were analysed for contaminant elements by ICP-AES and for Pt, Pd and Ir by ICP-MS. The highest Pt, Pd and Ir levels in road dusts were found on major roads with high traffic volume; correlations with other contaminant elements were low, however. This indicates that automobile catalytic converters are an important source. To summarize the multi-element results, the pollution index, contamination index and geo-accumulation index were calculated. Finally, the obtained data were compared with total concentrations encountered in dust samples from Madrid, Oslo, Tokyo and Muscat (Oman). Dust samples from Seoul reached top-level concentrations for Cd-Zn-As-Co-Cr-Cu-Mo-Ni-Sn. Only Pb was rather low, because unleaded gasoline became compulsory in 1993. Concentrations in Budapest dust samples were lower than in those from Seoul, except for Pb and Mg. Compared with Madrid as another continental site, Budapest was higher in Co-V-Zn. Dust from Oslo, which is not so large a city, contained more Mn-Na-Sr than dust from the other towns, but less of the other metals.

  7. A method for multiple sequential analyses of macrophage functions using a small single cell sample

    Directory of Open Access Journals (Sweden)

    F.R.F. Nascimento

    2003-09-01

    Full Text Available Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II molecules (MHC II). Multiple microassays have been developed to measure these parameters. Usually each assay requires 2-5 × 10⁵ cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as few as ≤2 × 10⁵ macrophages per well, we determined sequentially the oxidative burst (H₂O₂), nitric oxide production and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H₂O₂ release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.

  8. Aspects of working with manipulators and small samples in an αβγ-box

    International Nuclear Information System (INIS)

    Zubler, Robert; Bertsch, Johannes; Heimgartner, Peter

    2007-01-01

    The Laboratory for Materials Behaviour, operator of the Hotlab and part of the Paul Scherrer Institute (PSI), studies corrosion and mechanical phenomena of irradiated fuel rod cladding materials. To improve the options for mechanical tests, a heavily shielded (αβγ) universal electro-mechanical testing machine has been installed. The machine is equipped with an 800 °C furnace. The furnace chamber is part of the inner α-box and can be flushed with inert gas. The specimen can be observed by camera during the tests. The foreseen active specimens are very small and cannot be handled by hand. Before starting active tests, tools and installations had to be improved and a lot of manipulator practice had to be completed. For the operational permit, given by the authorities (Swiss Federal Nuclear Safety Inspectorate, HSK), many safety data concerning furnace cooling, air pressure and γ-shielding had to be collected. Up to now various inactive tests have been performed. Besides the operational and safety features, results of inactive mechanical tests and tests for active commissioning are presented. (authors)

  9. Critical assessment of the performance of electronic moisture analyzers for small amounts of environmental samples and biological reference materials.

    Science.gov (United States)

    Krachler, M

    2001-12-01

    Two electronic moisture analyzers were critically evaluated with regard to their suitability for determining moisture in small amounts of environmental matrices such as leaves, needles, soil, peat, sediments, and sewage sludge, as well as various biological reference materials. To this end, several homogeneous bulk materials were prepared and subsequently employed for the development and optimization of all analytical procedures. The key features of the moisture analyzers included a halogen or ceramic heater and an integrated balance with a resolution of 0.1 mg, which is an essential prerequisite for obtaining precise results. Oven drying of the bulk materials in a conventional oven at 105 °C until constant mass served as the reference method. A heating temperature of 65 °C was found to provide accurate and precise results for almost all matrices investigated. To further improve the accuracy and precision, other critical parameters such as handling of sample pans, standby temperature, and measurement delay were optimized. Because of its ponderous heating behavior, the performance of the ceramic radiator was inferior to that of the halogen heater, which produced moisture results comparable to those obtained by oven drying. The developed drying procedures were successfully applied to the fast moisture analysis (1.4-6.3 min) of certified biological reference materials of similar provenance to the investigated bulk materials. Moisture results for 200 mg aliquots ranged from 1.4 to 7.8%, and good agreement was obtained between the recommended drying procedure for the reference materials and the electronic moisture analyzers, with absolute uncertainties of 0.1% and 0.2-0.3%, respectively.

  10. Development of an evaluation method for fracture mechanical tests on small samples based on a cohesive zone model

    International Nuclear Information System (INIS)

    Mahler, Michael

    2016-01-01

    The safety and reliability of fourth-generation nuclear power plants is an important issue. It relies on a sound assessment of the components, for which, among other things, fracture mechanical material properties are required. Irradiation in the power plants significantly affects the material properties, which therefore need to be determined on irradiated material. Often only small amounts of irradiated material are available for characterization. In that case it is not possible to manufacture sufficiently large specimens, which are necessary for fracture mechanical testing in agreement with the standard; small specimens must be used. This motivates the idea of this study, in which the fracture toughness is predicted with the developed method based on tests of small specimens. For this purpose, the fracture process including crack growth is described with a continuum mechanical approach using the finite element method and the cohesive zone model. The experiments on small specimens are used for parameter identification of the cohesive zone model. The two parameters of the cohesive zone model are determined by tensile tests on notched specimens (cohesive stress) and by parameter fitting to the fracture behavior of small specimens (cohesive energy). To account for the different triaxialities of the specimens, the cohesive stress is used as a function of the triaxiality. After parameter identification, a large specimen can be simulated with the cohesive zone parameters derived from small specimens. The predicted fracture toughness of this large specimen fulfills the size requirements of the standards (ASTM E1820 or ASTM E399), in contrast to the small specimen. This method can be used for ductile and brittle material behavior and was validated in this work. In summary, this method offers the possibility to determine the fracture toughness indirectly based on small specimen testing. Its main advantage is the low required specimen volume. Thereby massively
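
    The two cohesive zone parameters named above, cohesive stress and cohesive energy, fully define a simple traction-separation law. The sketch below shows a bilinear law parameterized by these two quantities; the numbers are illustrative, and real cohesive zone implementations in FE codes additionally handle damage evolution, unloading and triaxiality dependence.

        import numpy as np

        def bilinear_traction(delta, sigma_c, gamma_c, delta_0_ratio=0.01):
            """Bilinear traction-separation law.

            sigma_c : cohesive stress (peak traction), e.g. MPa
            gamma_c : cohesive energy (area under the curve), e.g. N/mm
            delta_0_ratio : fraction of the final opening at which the peak occurs
            """
            delta_f = 2.0 * gamma_c / sigma_c          # opening at which traction drops to 0
            delta_0 = delta_0_ratio * delta_f          # opening at peak traction
            t = np.where(delta <= delta_0,
                         sigma_c * delta / delta_0,                          # linear rise
                         sigma_c * (delta_f - delta) / (delta_f - delta_0))  # softening
            return np.clip(t, 0.0, None)

        # Illustrative parameters: 1200 MPa cohesive stress, 60 N/mm cohesive energy
        delta = np.linspace(0.0, 0.12, 200)            # separation in mm
        traction = bilinear_traction(delta, sigma_c=1200.0, gamma_c=60.0)
        # Check: the integrated area recovers the cohesive energy (~60 N/mm)
        area = (0.5 * (traction[:-1] + traction[1:]) * np.diff(delta)).sum()
        print(area)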

  11. On the possibility of study the surface structure of small bio-objects, including fragments of nucleotide chains, by means of electron interference

    Energy Technology Data Exchange (ETDEWEB)

    Namiot, V.A., E-mail: vnamiot@gmail.co [Institute of Nuclear Physics, Moscow State University, Vorobyovy Gory, 119992 Moscow (Russian Federation)

    2009-07-20

    We propose a new method to study the surface of small bio-objects, including macromolecules and their complexes. The method is based on the interference of low-energy electrons. Theoretically, this type of interference may make it possible to construct a hologram of a biological object, but, unlike an optical hologram, with a spatial resolution of the order of interatomic distances. The method provides the possibility of constructing a series of such holograms at various electron energies. In theory, such information would be enough to identify the types of molecular groups existing on the surface of the studied object. This method could also be used for 'fast reading' of nucleotide chains. It has been shown how to deposit a long linear molecule as a straight line on a substrate before carrying out such 'reading'.

  12. Liquid-chromatographic analysis for cyclosporine with use of a microbore column and small sample volume.

    Science.gov (United States)

    Annesley, T; Matz, K; Balogh, L; Clayton, L; Giacherio, D

    1986-07-01

    This liquid-chromatographic assay requires 0.2 to 0.5 mL of whole blood, avoids the use of diethyl ether, and consumes only 10 to 20% of the solvents used in prior methods. Sample preparation involves an acidic extraction with methyl-t-butyl ether, performed in a 13 X 100 mm disposable glass tube, then a short second extraction of the organic phase with sodium hydroxide. After evaporation of the methyl-t-butyl ether, chromatography is performed on an "Astec" 2.0-mm (i.d.) octyl column. We compared results by this procedure with those by use of earlier larger-scale extractions and their respective 4.6-mm (i.d.) columns; analytical recoveries of cyclosporins A and D were comparable with previous findings and results for patients' specimens were equivalent, but the microbore columns provided greatly increased resolution and sensitivity.

  13. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user, but in many cases some steps will be carried out by a biologist designing the survey and others by a quantitative assistant; the manual may essentially be passed back and forth between these users. The SLL process has primarily been used to survey birds, and we refer to birds as the subjects of the counts. The process, however, could be used to count any objects.
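
    A generic sketch of the selection step described above (partition into plots, assign strata, draw a random sample per stratum); this is not the SLL ArcGIS/Excel workflow, and the strata names and sample sizes are made up.

```python
import random
from collections import defaultdict

def select_plots(plot_strata, n_per_stratum, seed=1):
    """Randomly select up to n_per_stratum plots within each stratum."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for plot_id, stratum in plot_strata.items():
        by_stratum[stratum].append(plot_id)
    return {s: sorted(rng.sample(sorted(ids), min(n_per_stratum, len(ids))))
            for s, ids in by_stratum.items()}

# Hypothetical landscape: 100 plots assigned to three habitat strata.
rng = random.Random(0)
plots = {i: rng.choice(["wetland", "shrub", "forest"]) for i in range(100)}
print(select_plots(plots, n_per_stratum=5))
```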

  14. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    Science.gov (United States)

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them into a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different splits between training and validation sets were tested with two loss functions. Six statistical methods were compared. We assessed performance by evaluating R2 values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. Slight discrepancies arose between the two loss functions investigated, and also between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients, with a difference of around 10 percentage points between the lowest and highest rates. The number of mutations retained in the different learners also varied from 1 to 41. Conclusions. The more recent Super Learner methodology combining the predictions of many learners provided good performance on our small dataset.
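
    A compact sketch of the two Super Learner ideas mentioned above: cross-validated (level-one) predictions from several candidate learners, selection of the single best learner (discrete Super Learner), and a weighted combination fitted by non-negative least squares. The learners, the squared-error loss, and the synthetic data are placeholders, not the six methods or the loss functions used in the Jaguar analysis.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

# Placeholder candidate learners and synthetic "genotype -> response" data.
learners = [LinearRegression(), Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=3, random_state=0)]
X, y = make_regression(n_samples=80, n_features=20, noise=10.0, random_state=0)

# Cross-validated (level-one) predictions for each learner.
Z = np.zeros((len(y), len(learners)))
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    for j, model in enumerate(learners):
        Z[test, j] = model.fit(X[train], y[train]).predict(X[test])

cv_mse = ((Z - y[:, None]) ** 2).mean(axis=0)
print("discrete Super Learner picks learner index", int(cv_mse.argmin()))

# Weighted Super Learner: non-negative weights on the learners, normalised to sum to 1.
w, _ = nnls(Z, y)
w /= w.sum()
print("ensemble weights:", np.round(w, 2))
```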

  15. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
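
    The point of the two-algorithm design is easiest to see as a branch on the sample size. The sketch below mirrors that idea in Python, using scipy's exact small-sample computation as a stand-in for Birnbaum's table and its asymptotic mode for the Smirnov series; it is not a port of the KSTEST FORTRAN code.

```python
import numpy as np
from scipy import stats

def ks_uniform_pvalue(sample):
    """One-sample Kolmogorov-Smirnov test against U(0, 1), switching method on sample size.

    n > 80  -> asymptotic (Smirnov-type) approximation
    n <= 80 -> exact small-sample computation (stand-in for Birnbaum's table)
    """
    method = "asymp" if len(sample) > 80 else "exact"
    return stats.kstest(sample, "uniform", method=method).pvalue

rng = np.random.default_rng(0)
print(ks_uniform_pvalue(rng.uniform(size=30)))    # small-sample branch
print(ks_uniform_pvalue(rng.uniform(size=300)))   # asymptotic branch
```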

  16. Precise Th/U-dating of small and heavily coated samples of deep sea corals

    Science.gov (United States)

    Lomitschka, Michael; Mangini, Augusto

    1999-07-01

    Marine carbonate skeletons like deep-sea corals are frequently coated with iron and manganese oxides/hydroxides, which adsorb additional thorium and uranium from the sea water. A new cleaning procedure has been developed to reduce this contamination. In this additional cleaning step, a solution of Na2EDTA (Na2H2EDTA) and ascorbic acid is used whose composition is optimised especially for samples of about 20 mg. It was first tested on aliquots of a reef-building coral which had been artificially contaminated with powdered ferromanganese nodule. Applied to heavily contaminated deep-sea corals (Scleractinia), it reduced excess 230Th by another order of magnitude beyond the usual cleaning procedures. The measurement of at least three fractions of different contamination, together with an additional standard correction for contaminated carbonates, results in Th/U ages corrected for the authigenic component. Good agreement between Th/U and 14C ages can be achieved even for extremely coated corals.
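
    For context on what a "Th/U age" is, the sketch below solves the standard closed-system 230Th age equation numerically. This is the textbook relation, not the authors' correction scheme for the authigenic component, and the decay constants and example activity ratios are approximate, assumed values.

```python
import numpy as np
from scipy.optimize import brentq

# Decay constants from half-lives of roughly 75.58 kyr (230Th) and 245.6 kyr (234U).
LAMBDA_230 = np.log(2) / 75_584.0
LAMBDA_234 = np.log(2) / 245_620.0

def th_u_age(th230_u238_activity, d234U_measured):
    """Closed-system 230Th age (years) from the standard age equation.

    d234U_measured is the measured delta-234U in permil; values are illustrative.
    """
    def f(t):
        term_u234 = (d234U_measured / 1000.0) * (LAMBDA_230 / (LAMBDA_230 - LAMBDA_234)) \
                    * (1.0 - np.exp(-(LAMBDA_230 - LAMBDA_234) * t))
        return 1.0 - np.exp(-LAMBDA_230 * t) + term_u234 - th230_u238_activity
    return brentq(f, 1.0, 600_000.0)          # bracket the root between 1 a and 600 ka

print(f"age ~ {th_u_age(0.10, 145.0):,.0f} a")   # hypothetical deep-sea coral values
```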

  17. Detection of seizures from small samples using nonlinear dynamic system theory.

    Science.gov (United States)

    Yaylali, I; Koçak, H; Jayakar, P

    1996-07-01

    The electroencephalogram (EEG), like many other biological phenomena, is quite likely governed by nonlinear dynamics. Certain characteristics of the underlying dynamics have recently been quantified by computing the correlation dimensions (D2) of EEG time series data. In this paper, D2 of the unbiased autocovariance function of the scalp EEG data was used to detect electrographic seizure activity. Digital EEG data were acquired at a sampling rate of 200 Hz per channel and organized in continuous frames (duration 2.56 s, 512 data points). To increase the reliability of D2 computations with short duration data, raw EEG data were initially simplified using unbiased autocovariance analysis to highlight the periodic activity that is present during seizures. The D2 computation was then performed from the unbiased autocovariance function of each channel using the Grassberger-Procaccia method with Theiler's box-assisted correlation algorithm. Even with short duration data, this preprocessing proved to be computationally robust and displayed no significant sensitivity to implementation details such as the choices of embedding dimension and box size. The system successfully identified various types of seizures in clinical studies.
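
    A stripped-down sketch of the preprocessing and D2 ingredients described above: the unbiased autocovariance of a 512-point frame, followed by a brute-force Grassberger-Procaccia correlation sum on a delay embedding. Theiler's box-assisted algorithm, the embedding parameters, and the slope fit that yields D2 are simplified or omitted, so this only illustrates the pipeline, not the published implementation.

```python
import numpy as np

def unbiased_autocovariance(x, max_lag):
    """Unbiased autocovariance of one EEG frame (mean removed, divided by n - lag)."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) / (n - k) for k in range(max_lag + 1)])

def correlation_sum(series, r, dim=5, delay=2):
    """Grassberger-Procaccia correlation sum C(r) on a delay embedding (brute force)."""
    m = len(series) - (dim - 1) * delay
    emb = np.stack([series[i * delay : i * delay + m] for i in range(dim)], axis=1)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    off_diagonal = ~np.eye(m, dtype=bool)
    return float((dists[off_diagonal] < r).mean())

# Synthetic 2.56 s frame (512 points): a periodic component plus noise.
frame = np.sin(np.linspace(0, 40 * np.pi, 512)) + 0.1 * np.random.default_rng(0).normal(size=512)
acov = unbiased_autocovariance(frame, max_lag=128)
print(correlation_sum(acov, r=0.2))  # C(r) over a range of r gives the D2 slope estimate
```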

  18. Including screening in van der Waals corrected density functional theory calculations: The case of atoms and small molecules physisorbed on graphene

    Energy Technology Data Exchange (ETDEWEB)

    Silvestrelli, Pier Luigi; Ambrosetti, Alberto [Dipartimento di Fisica e Astronomia, Università di Padova, via Marzolo 8, I–35131 Padova, Italy and DEMOCRITOS National Simulation Center of the Italian Istituto Officina dei Materiali (IOM) of the Italian National Research Council (CNR), Trieste (Italy)

    2014-03-28

    The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X = Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.

  19. Split Hopkinson Resonant Bar Test for Sonic-Frequency Acoustic Velocity and Attenuation Measurements of Small, Isotropic Geologic Samples

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S.

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to a problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.

  20. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.f [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency in the evaluation of flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10^-3 g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights about the fluxes of pesticides in surface water of a vineyard catchment, notably during flood events.
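
    The sampling-frequency effect reported here is easy to reproduce with synthetic numbers: a load computed from a dense concentration-discharge record is compared with one based on weekly grab samples. The discharge and concentration series below are invented solely to illustrate why infrequent grab sampling misses flood-driven loads.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(0, 24 * 30)                       # one month, hourly steps
discharge = 5.0 + 40.0 * (rng.random(hours.size) < 0.01) * rng.random(hours.size)  # L/s, rare flood spikes
conc = 0.05 + 2.0 * (discharge > 10)                # ug/L, pesticide mobilised mainly during floods

# "True" load from the hourly record: concentration x discharge x time step, summed.
load_ug = np.sum(conc * discharge * 3600.0)

# Grab sampling: one sample per week at a fixed hour, scaled by the total discharge volume.
grab_idx = hours[::24 * 7]
grab_mean_conc = conc[grab_idx].mean()
grab_load_ug = grab_mean_conc * np.sum(discharge * 3600.0)

print(f"hourly-record load {load_ug/1e6:.2f} g, grab-sample estimate {grab_load_ug/1e6:.2f} g")
```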

  1. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency in the evaluation of flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10^-3 g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights about the fluxes of pesticides in surface water of a vineyard catchment, notably during flood events.

  2. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Full Text Available Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model was recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  3. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    Science.gov (United States)

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model was recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.
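
    The empirical Type I error and power quoted above are, in a simulation study, simply rejection rates counted over replicates. The sketch below shows that bookkeeping step only; the MIMIC/structural-equation fitting that produces the per-replicate p-values is not reproduced, and the two p-value distributions are synthetic stand-ins.

```python
import numpy as np

def empirical_rate(p_values, alpha=0.05):
    """Proportion of simulation replicates whose test rejects at level alpha.

    Applied to replicates generated without DIF this is the empirical Type I
    error; applied to replicates generated with DIF it is the empirical power.
    """
    return float((np.asarray(p_values) < alpha).mean())

rng = np.random.default_rng(0)
p_null = rng.uniform(size=1000)         # stand-in for p-values from no-DIF replicates
p_dif = rng.beta(0.3, 1.0, size=1000)   # stand-in for p-values from uniform-DIF replicates
print(f"Type I error ~ {empirical_rate(p_null):.3f}, power ~ {empirical_rate(p_dif):.3f}")
```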

  4. A Novel Theory For The Origin And Evolution Of Stars And Planets, Including Earth, Which Asks, 'Was The Earth Once A Small Bright Star?'

    Science.gov (United States)

    Cimorelli, S. A.; Samuels, C.

    2001-12-01

    Improved prediction methods for earthquakes and volcanic activity will naturally follow from our theory, based on new concepts of the earth's interior composition, state and activity. In this paper we present a novel hypothesis for the formation and evolution of galaxies, stars (including black holes (BHs), neutron stars, giant, mid-size, dwarf, dying and dead stars), planets (including earth), and moons. Present-day phenomena will be used to substantiate the validity of this hypothesis. Every `body' is a multiple type of star, generated from modified pieces called particle proliferators, of a dislodged/expanded BH (of category 2 (c-2)) which explodes due to a collision with another expanded BH (or explodes on its own). This includes the sun, and the planet earth, which is a type of dead star. Thus, if we remove layers of the earth, starting with the crust, we will find evidence of each preceding star formation, from brown to blue, and the remains of the particle proliferator as the innermost core is reached. We show that the hypothesis is consistent with both the available astronomical data regarding stellar evolution and planetary formation, as well as the evolution of the earth itself, by considerations of the available geophysical data. Where data are not available, reasonably simple experiments are suggested to demonstrate further the consistency and viability of the hypothesis. Theories are presented to help define and explain phenomena such as how two (or more) c-2 BHs expand and collide to form a small `big bang' (it is postulated that there was a small big bang to form each galaxy, similar to the big bang from a category 1 BH(s) that may have formed our universe; the Great Attractors would be massive c-2 BHs and act on galaxy clusters similar to the massive c-3 BHs at the center of galaxies acting on stars). This in turn afforded the material/matter to form all the galactic bodies, including the dark matter inside the galaxies that we catalogue as

  5. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments

    Directory of Open Access Journals (Sweden)

    Wim Bras

    2014-11-01

    Full Text Available Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  6. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments.

    Science.gov (United States)

    Bras, Wim; Koizumi, Satoshi; Terrill, Nicholas J

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  7. A compact time-of-flight SANS instrument optimised for measurements of small sample volumes at the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Kynde, Søren, E-mail: kynde@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark); Hewitt Klenø, Kaspar [Niels Bohr Institute, University of Copenhagen (Denmark); Nagy, Gergely [SINQ, Paul Scherrer Institute (Switzerland); Mortensen, Kell; Lefmann, Kim [Niels Bohr Institute, University of Copenhagen (Denmark); Kohlbrecher, Joachim, E-mail: Joachim.kohlbrecher@psi.ch [SINQ, Paul Scherrer Institute (Switzerland); Arleth, Lise, E-mail: arleth@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark)

    2014-11-11

    The high flux at European Spallation Source (ESS) will allow for performing experiments with relatively small beam-sizes while maintaining a high intensity of the incoming beam. The pulsed nature of the source makes the facility optimal for time-of-flight small-angle neutron scattering (ToF-SANS). We find that a relatively compact SANS instrument becomes the optimal choice in order to obtain the widest possible q-range in a single setting and the best possible exploitation of the neutrons in each pulse and hence obtaining the highest possible flux at the sample position. The instrument proposed in the present article is optimised for performing fast measurements of small scattering volumes, typically down to 2×2×2 mm³, while covering a broad q-range from about 0.005 1/Å to 0.5 1/Å in a single instrument setting. This q-range corresponds to that available at a typical good BioSAXS instrument and is relevant for a wide set of biomacromolecular samples. A central advantage of covering the whole q-range in a single setting is that each sample has to be loaded only once. This makes it convenient to use the fully automated high-throughput flow-through sample changers commonly applied at modern synchrotron BioSAXS-facilities. The central drawback of choosing a very compact instrument is that the resolution in terms of δλ/λ obtained with the short wavelength neutrons becomes worse than what is usually the standard at state-of-the-art SANS instruments. Our McStas based simulations of the instrument performance for a set of characteristic biomacromolecular samples show that the resulting smearing effects still have relatively minor effects on the obtained data and can be compensated for in the data analysis. However, in cases where a better resolution is required in combination with the large simultaneous q-range characteristic of the instrument, we show that this can be obtained by inserting a set of choppers.
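
    As a quick orientation to the quoted q-range, the sketch below evaluates q = (4π/λ)·sin(2θ/2) for an assumed wavelength band and angular coverage. The band and angles are illustrative numbers chosen to land near the quoted 0.005-0.5 1/Å window; they are not the instrument's design parameters.

```python
import numpy as np

def q_inv_angstrom(wavelength_A, two_theta_deg):
    """Momentum transfer q = (4*pi / lambda) * sin(two_theta / 2), in 1/Angstrom."""
    return 4.0 * np.pi / wavelength_A * np.sin(np.deg2rad(two_theta_deg) / 2.0)

# Assumed 3-10 Angstrom wavelength band and ~0.4-15 degree angular coverage (illustrative only).
q_min = q_inv_angstrom(wavelength_A=10.0, two_theta_deg=0.4)   # longest wavelength, smallest angle
q_max = q_inv_angstrom(wavelength_A=3.0, two_theta_deg=15.0)   # shortest wavelength, largest angle
print(f"single-setting q-range ~ {q_min:.4f} to {q_max:.2f} 1/Angstrom")
```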

  8. Synaptic vesicles contain small ribonucleic acids (sRNAs) including transfer RNA fragments (trfRNA) and microRNAs (miRNA).

    Science.gov (United States)

    Li, Huinan; Wu, Cheng; Aramayo, Rodolfo; Sachs, Matthew S; Harlow, Mark L

    2015-10-08

    Synaptic vesicles (SVs) are neuronal presynaptic organelles that load and release neurotransmitter at chemical synapses. In addition to classic neurotransmitters, we have found that synaptic vesicles isolated from the electric organ of Torpedo californica, a model cholinergic synapse, contain small ribonucleic acids (sRNAs), primarily the 5' ends of transfer RNAs (tRNAs) termed tRNA fragments (trfRNAs). To test the evolutionary conservation of SV sRNAs we examined isolated SVs from the mouse central nervous system (CNS). We found abundant levels of sRNAs in mouse SVs, including trfRNAs and micro RNAs (miRNAs) known to be involved in transcriptional and translational regulation. This discovery suggests that, in addition to inducing changes in local dendritic excitability through the release of neurotransmitters, SVs may, through the release of specific trfRNAs and miRNAs, directly regulate local protein synthesis. We believe these findings have broad implications for the study of chemical synaptic transmission.

  9. The DSM-5 Dimensional Anxiety Scales in a Dutch non-clinical sample: psychometric properties including the adult separation anxiety disorder scale.

    Science.gov (United States)

    Möller, Eline L; Bögels, Susan M

    2016-09-01

    With DSM-5, the American Psychiatric Association encourages complementing categorical diagnoses with dimensional severity ratings. We therefore examined the psychometric properties of the DSM-5 Dimensional Anxiety Scales, a set of brief dimensional scales that are consistent in content and structure and assess DSM-5-based core features of anxiety disorders. Participants (285 males, 255 females) completed the DSM-5 Dimensional Anxiety Scales for social anxiety disorder, generalized anxiety disorder, specific phobia, agoraphobia, and panic disorder that were included in previous studies on the scales, and also for separation anxiety disorder, which is included in the DSM-5 chapter on anxiety disorders. Moreover, they completed the Screen for Child Anxiety Related Emotional Disorders Adult version (SCARED-A). The DSM-5 Dimensional Anxiety Scales demonstrated high internal consistency, and the scales correlated significantly and substantially with corresponding SCARED-A subscales, supporting convergent validity. Separation anxiety appeared present among adults, supporting the DSM-5 recognition of separation anxiety as an anxiety disorder across the life span. To conclude, the DSM-5 Dimensional Anxiety Scales are a valuable tool to screen for specific adult anxiety disorders, including separation anxiety. Research in more diverse and clinical samples with anxiety disorders is needed. © 2016 The Authors International Journal of Methods in Psychiatric Research Published by John Wiley & Sons Ltd.

  10. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using Mplus

    Science.gov (United States)

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  11. Forecasting elections with mere recognition from small, lousy samples: A comparison of collective recognition, wisdom of crowds, and representative polls

    Directory of Open Access Journals (Sweden)

    Wolfgang Gaissmeier

    2011-02-01

    Full Text Available We investigated the extent to which the human capacity for recognition helps to forecast political elections: We compared naive recognition-based election forecasts computed from convenience samples of citizens' recognition of party names to (i) standard polling forecasts computed from representative samples of citizens' voting intentions, and to (ii) simple---and typically very accurate---wisdom-of-crowds-forecasts computed from the same convenience samples of citizens' aggregated hunches about election results. Results from four major German elections show that mere recognition of party names forecast the parties' electoral success fairly well. Recognition-based forecasts were most competitive with the other models when forecasting the smaller parties' success and for small sample sizes. However, wisdom-of-crowds-forecasts outperformed recognition-based forecasts in most cases. It seems that wisdom-of-crowds-forecasts are able to draw on the benefits of recognition while at the same time avoiding its downsides, such as lack of discrimination among very famous parties or recognition caused by factors unrelated to electoral success. Yet it seems that a simple extension of the recognition-based forecasts---asking people what proportion of the population would recognize a party instead of whether they themselves recognize it---is also able to eliminate these downsides.
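
    A toy sketch of the two convenience-sample forecasts compared above: collective recognition rates normalised into vote-share forecasts versus a wisdom-of-crowds average of respondents' guessed results. The respondents, parties, and numbers are entirely hypothetical.

```python
import numpy as np

parties = ["A", "B", "C", "D"]

# Hypothetical convenience sample: whether each respondent recognises each party (1/0)...
recognition = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
])
# ...and each respondent's hunch about the parties' vote shares (%).
hunches = np.array([
    [42, 30, 18, 10],
    [45, 35, 12,  8],
    [40, 28, 22, 10],
    [38, 32, 20, 10],
    [44, 30, 16, 10],
])

recognition_rate = recognition.mean(axis=0)
recog_forecast = recognition_rate / recognition_rate.sum() * 100   # collective recognition forecast
crowd_forecast = hunches.mean(axis=0)                              # wisdom-of-crowds forecast

for p, r, c in zip(parties, recog_forecast, crowd_forecast):
    print(f"party {p}: recognition-based {r:5.1f}%, wisdom-of-crowds {c:5.1f}%")
```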

  12. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    Science.gov (United States)

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie

    2018-01-01

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km2, 4.50 km2, and 1.87 km2, respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content. PMID:29652811

  13. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    Directory of Open Access Journals (Sweden)

    Zhenming Zhang

    2018-04-01

    Full Text Available Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km2, 4.50 km2, and 1.87 km2, respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.

  14. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds.

    Science.gov (United States)

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei

    2018-04-13

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
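
    The scale comparison described above can be mimicked by thinning a fine grid of rock-exposure values to coarser grids and recomputing summary statistics. The field below is synthetic (a random gamma surface), so only the mechanics of the comparison, not the Houzhai River Basin numbers, are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
fine = rng.gamma(shape=2.0, scale=10.0, size=(60, 60))    # synthetic rock exposure (%) on a 150 m grid

for step, label in [(1, "150 m"), (2, "300 m"), (3, "450 m"), (4, "600 m"), (5, "750 m"), (6, "900 m")]:
    sub = fine[::step, ::step]                            # keep every step-th grid node in each direction
    cv = sub.std(ddof=1) / sub.mean()                     # coefficient of variability
    print(f"{label:>6} grid: n={sub.size:4d}  mean={sub.mean():5.1f}  max={sub.max():5.1f}  CV={cv:.2f}")
```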

  15. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event; such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to become more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145-54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only very few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
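
    A deliberately simplified sketch of the two ingredients named above, a Kriging (Gaussian-process) surrogate and an importance-sampling density centred near the design point, applied to a toy linear limit state in standard normal space. The active-learning loop, the learning function, and the FORM step that make AK-IS what it is are omitted, and the surrogate settings, design point, and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(u):                        # toy performance function in standard normal space (g < 0 = failure)
    return 5.5 - u[:, 0] - u[:, 1]

rng = np.random.default_rng(0)

# 1) Small design of experiments + Kriging surrogate of g (stands in for the expensive model).
doe = rng.normal(size=(30, 2)) * 2.5
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), normalize_y=True).fit(doe, g(doe))

# 2) Importance-sampling density centred at an approximate design point; for this toy g the
#    point u* = (2.75, 2.75) is known analytically, whereas AK-IS would take it from FORM.
u_star = np.array([2.75, 2.75])
samples = rng.normal(size=(20_000, 2)) + u_star

# 3) Weighted failure indicator using surrogate predictions only.
indicator = gp.predict(samples) < 0.0
weights = norm.pdf(samples).prod(axis=1) / norm.pdf(samples - u_star).prod(axis=1)
pf = np.mean(indicator * weights)
print(f"estimated failure probability ~ {pf:.2e} "
      f"(exact for this toy case: {norm.cdf(-5.5 / np.sqrt(2)):.2e})")
```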

  16. Application of inductively coupled plasma mass spectrometry for multielement analysis in small sample amounts of thyroid tissue from Chernobyl area

    International Nuclear Information System (INIS)

    Becker, J.S.; Dietze, H.J.; Boulyga, S.F.; Bazhanova, N.N.; Kanash, N.V.; Malenchenko, A.F.

    2000-01-01

    As a result of the Chernobyl nuclear power plant accident in 1986, thyroid pathologies occurred among children in some regions of Belarus. Besides the irradiation of children's thyroids by radioactive iodine and caesium nuclides, toxic elements from fallout are a direct risk to health. Inductively coupled plasma quadrupole-based mass spectrometry (ICP-MS) and instrumental neutron activation analysis (INAA) were used for multielement determination in small amounts (1-10 mg) of human thyroid tissue samples. The accuracy of the applied analytical technique for small biological sample amounts was checked using the NIST standard reference material oyster tissue (SRM 1566b). Almost all essential elements, as well as a number of toxic elements such as Cd, Pb, Hg and U, were determined in a multitude of human thyroid tissues by quadrupole-based ICP-MS using micronebulization. In general, thyroid tissue affected by pathology is characterized by a higher calcium content. Some other elements, among them Sr, Zn, Fe, Mn, V, As, Cr, Ni, Pb, U, Ba and Sb, were also accumulated in such tissue. The results obtained will be used as initial material for further specific studies of the role of particular elements in thyroid pathology development.

  17. Simultaneous analysis of perfluoroalkyl and polyfluoroalkyl substances including ultrashort-chain C2 and C3 compounds in rain and river water samples by ultra performance convergence chromatography.

    Science.gov (United States)

    Yeung, Leo W Y; Stadey, Christopher; Mabury, Scott A

    2017-11-03

    An analytical method using ultra performance convergence chromatography (UPC2) coupled to a tandem mass spectrometer operated in negative electrospray mode was developed to measure perfluoroalkyl and polyfluoroalkyl substances (PFASs), including the ultrashort-chain PFASs (C2-C3). Compared to the existing liquid chromatography tandem mass spectrometry method using an ion exchange column, the new method has a lower detection limit (0.4 pg trifluoroacetate (TFA) on-column), narrower peak width (3-6 s), and a shorter run time (8 min). Using the same method, different classes of PFASs (e.g., perfluoroalkyl sulfonates (PFSAs) and perfluorinated carboxylates (PFCAs), perfluorinated phosphonates (PFPAs) and phosphinates (PFPiAs), polyfluoroalkyl phosphate diesters (diPAPs)) can be measured in a single analysis. Rain (n=2) and river water (n=2) samples collected in Toronto, ON, were used for method validation and application. Results showed that short-chain PFASs (C2-C7 PFCAs and C4 PFSA) contributed over 80% of the detectable PFASs in rain samples, and the C2-C3 PFASs alone accounted for over 40% of the total. Reports on environmental levels of these ultrashort-chain PFASs are relatively scarce. The relatively large contribution of these ultrashort-chain PFASs to the total PFASs indicates the need to include the measurement of short-chain PFASs, especially C2 and C3 PFASs, in environmental monitoring. The sources of TFA and other short-chain PFASs in the environment are not entirely clear. The newly developed analytical method may help further investigation of the sources and environmental levels of these ultrashort-chain PFASs. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    Science.gov (United States)

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering only the IP samples with two negative binomial distributions, QNB is based on four independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken into account. In addition, unlike the DRME approach, which relies on the input control samples only for estimating the background, QNB uses a more robust estimator for gene expression by combining information from both input and IP samples, which could largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, PAR-CLIP, RIP-Seq, etc.

  19. Antibiotic Resistance in Animal and Environmental Samples Associated with Small-Scale Poultry Farming in Northwestern Ecuador.

    Science.gov (United States)

    Braykov, Nikolay P; Eisenberg, Joseph N S; Grossman, Marissa; Zhang, Lixin; Vasco, Karla; Cevallos, William; Muñoz, Diana; Acevedo, Andrés; Moser, Kara A; Marrs, Carl F; Foxman, Betsy; Trostle, James; Trueba, Gabriel; Levy, Karen

    2016-01-01

    The effects of animal agriculture on the spread of antibiotic resistance (AR) are cross-cutting and thus require a multidisciplinary perspective. Here we use ecological, epidemiological, and ethnographic methods to examine populations of Escherichia coli circulating in the production poultry farming environment versus the domestic environment in rural Ecuador, where small-scale poultry production employing nontherapeutic antibiotics is increasingly common. We sampled 262 "production birds" (commercially raised broiler chickens and laying hens) and 455 "household birds" (raised for domestic use), as well as household and coop environmental samples, from 17 villages between 2010 and 2013. We analyzed data on zones of inhibition from Kirby-Bauer tests, rather than established clinical breakpoints for AR, to distinguish between populations of organisms. We saw significantly higher levels of AR in bacteria from production versus household birds; resistance to amoxicillin-clavulanate, cephalothin, cefotaxime, or gentamicin was found in 52.8% of production bird isolates and 16% of household ones. A strain jointly resistant to the 4 drugs was exclusive to a subset of isolates from production birds (7.6%) and coop surfaces (6.5%) and was associated with a particular purchase site. The prevalence of AR in production birds declined with bird age. We compared antibiotic resistance (AR) in E. coli isolates from small-scale poultry production environments versus domestic environments in rural Ecuador, where such backyard poultry operations have become established over the past decade. Our previous research in the region suggests that introduction of AR bacteria through travel and commerce may be an important source of AR in villages of this region. This report extends the prior analysis by examining small-scale production chicken farming as a potential source of resistant strains. Our results suggest that AR strains associated with poultry production likely originate from sources outside the study

  20. Mutational status of synchronous and metachronous tumor samples in patients with metastatic non-small-cell lung cancer

    International Nuclear Information System (INIS)

    Quéré, Gilles; Descourt, Renaud; Robinet, Gilles; Autret, Sandrine; Raguenes, Odile; Fercot, Brigitte; Alemany, Pierre; Uguen, Arnaud; Férec, Claude; Quintin-Roué, Isabelle; Le Gac, Gérald

    2016-01-01

    Despite reported discordance between the mutational status of primary lung cancers and their metastases, metastatic sites are rarely biopsied and targeted therapy is guided by genetic biomarkers detected in the primary tumor. This situation is mostly explained by the apparent stability of EGFR-activating mutations. Given the dramatic increase in the range of candidate drugs and high rates of drug resistance, rebiopsy or liquid biopsy may become widespread. The purpose of this study was to test genetic biomarkers used in clinical practice (EGFR, ALK) and candidate biomarkers identified by the French National Cancer Institute (KRAS, BRAF, PIK3CA, HER2) in patients with metastatic non-small-cell lung cancer for whom two tumor samples were available. A retrospective study identified 88 tumor samples, collected synchronously or metachronously, from the same or two different sites, in 44 patients. Mutation analysis used SNaPshot (EGFR, KRAS, BRAF missense mutations), pyrosequencing (EGFR and PIK3CA missense mutations), sizing assays (EGFR and HER2 indels), and IHC and/or FISH (ALK rearrangements). About half the patients (52%) harbored at least one mutation. Five patients had an activating mutation of EGFR in both the primary tumor and the metastasis. The T790M resistance mutation was detected in metastases in 3 patients with acquired resistance to EGFR tyrosine kinase inhibitors. FISH showed discordance in ALK status between a small biopsy sample and the surgical specimen. KRAS mutations were observed in 36% of samples, six patients (14%) having discordant genotypes; all discordances concerned sampling from different sites. Two patients (5%) showed PIK3CA mutations. One metastasis harbored both PIK3CA and KRAS mutations, while the synchronously sampled primary tumor was mutation free. No mutations were detected in BRAF and HER2. This study highlighted noteworthy intra-individual discordance in KRAS mutational status, whereas EGFR status was stable. Intratumoral

  1. Identification of potential small molecule allosteric modulator sites on IL-1R1 ectodomain using accelerated conformational sampling method.

    Directory of Open Access Journals (Sweden)

    Chao-Yie Yang

    Full Text Available The interleukin-1 receptor (IL-1R) is the founding member of the interleukin 1 receptor family, which activates the innate immune response by binding to cytokines. Reports have shown that dysregulation of cytokine production leads to aberrant immune cell activation, which contributes to auto-inflammatory disorders and diseases. Current therapeutic strategies focus on utilizing antibodies or chimeric cytokine biologics. The large protein-protein interaction interface between cytokine receptor and cytokine poses a challenge in identifying binding sites for small molecule inhibitor development. Based on the significant conformational change of the IL-1R type 1 (IL-1R1) ectodomain upon binding to different ligands observed in crystal structures, we hypothesized that transient small molecule binding sites may exist when IL-1R1 undergoes conformational transitions and may thus be suitable for inhibitor development. Here, we employed accelerated molecular dynamics (MD) simulation to efficiently sample the conformational space of the IL-1R1 ectodomain. Representative IL-1R1 ectodomain conformations determined from hierarchical cluster analysis were analyzed with the SiteMap program, which led to the identification of small molecule binding sites at the protein-protein interaction interface and allosteric modulator locations. Cosolvent mapping analysis using phenol as the probe molecule further confirms the allosteric modulator site as a binding hotspot. The eight highest-ranked fragment molecules identified from in silico screening at the modulator site were evaluated by MD simulations. Four of them restricted the IL-1R1 dynamical motion to the inactive conformational space. The strategy from this study, subject to in vitro experimental validation, can be useful to identify small molecule compounds targeting the allosteric modulator sites of IL-1R and preventing IL-1R from binding to cytokines by trapping IL-1R in inactive conformations.

  2. Living conditions, including life style, in primary-care patients with nonacute, nonspecific spinal pain compared with a population-based sample: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Odd Lindell

    2010-11-01

    Full Text Available Odd Lindell, Sven-Erik Johansson, Lars-Erik Strender (Center for Family and Community Medicine, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Huddinge, Sweden). Background: Nonspecific spinal pain (NSP), comprising back and/or neck pain, is one of the leading disorders behind long-term sick-listing, including disability pensions. Early interventions to prevent long-term sick-listing require the identification of patients at risk. The aim of this study was to compare living conditions associated with long-term sick-listing for NSP in patients with nonacute NSP with a nonpatient population-based sample. Nonacute NSP is pain that leads to full-time sick-listing for more than 3 weeks. Methods: One hundred and twenty-five patients with nonacute NSP, 2000-2004, were included in a randomized controlled trial in Stockholm County with the objective of comparing cognitive-behavioral rehabilitation with traditional primary care. For these patients, a cross-sectional study was carried out with baseline data. Living conditions were compared between the patients and 338 nonpatients by logistic regression. The conditions from the univariate analyses were included in a multivariate analysis. The nonsignificant variables were excluded sequentially to yield a model comprising only the significant factors (P < 0.05). The results are shown as odds ratios (OR) with 95% confidence intervals (CI). Results: In the univariate analyses, 13 of the 18 living conditions had higher odds for the patients, dominated by physical work strains and indication of alcohol over-consumption, OR 14.8 (95% CI 3.2–67.6). Five conditions qualified for the multivariate model: high physical workload, OR 13.7 (CI 5.9–32.2); hectic work tempo, OR 8.4 (CI 2.5–28.3); blue-collar job, OR 4.5 (CI 1.8–11.4); obesity, OR 3.5 (CI 1.2–10.2); and low education, OR 2.7 (CI 1.1–6.8). Conclusions: As most of the living conditions have previously been
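
    A generic sketch of the analysis style described above: logistic regression comparing patients with nonpatients, sequential exclusion of nonsignificant living conditions, and reporting of odds ratios with 95% confidence intervals. The data, variable names, and effect sizes below are fabricated for illustration and do not reproduce the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "high_physical_workload": rng.integers(0, 2, n),   # hypothetical binary living conditions
    "hectic_work_tempo": rng.integers(0, 2, n),
    "obesity": rng.integers(0, 2, n),
    "low_education": rng.integers(0, 2, n),
})
logit = -1.5 + 1.3 * df["high_physical_workload"] + 0.9 * df["obesity"]   # synthetic effects
df["patient"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))             # 1 = NSP patient, 0 = nonpatient

def fit(predictors):
    return sm.Logit(df["patient"], sm.add_constant(df[predictors])).fit(disp=False)

# Sequential exclusion: drop the least significant condition until the rest are significant.
keep = ["high_physical_workload", "hectic_work_tempo", "obesity", "low_education"]
while len(keep) > 1:
    pvals = fit(keep).pvalues.drop("const")
    if pvals.max() < 0.05:
        break
    keep.remove(pvals.idxmax())

res = fit(keep)
table = np.exp(pd.concat([res.params, res.conf_int()], axis=1)).drop("const")
table.columns = ["OR", "2.5%", "97.5%"]
print(table.round(2))
```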

  3. Analytical Method for Carbon and Oxygen Isotope of Small Carbonate Samples with the GasBench Ⅱ-IRMS Device

    Directory of Open Access Journals (Sweden)

    LIANG Cui-cui

    2015-01-01

    Full Text Available An analytical method for measuring the carbon and oxygen isotopic compositions of trace amounts of carbonate (>15 μg) was established using a Delta V Advantage isotope ratio MS coupled with a GasBench Ⅱ. Trace amounts (5-50 μg) of a carbonate standard sample (IAEA-CO-1) were measured by GasBench Ⅱ with 12 mL and 3.7 mL vials. When samples weighing less than 40 μg were acidified in 12 mL vials, most standard deviations of the δ13C and δ18O were more than 0.1‰, which did not meet the requirements for high-precision measurement. When samples weighing more than 15 μg were acidified in 3.7 mL vials, standard deviations for the δ13C and δ18O were 0.01‰-0.07‰ and 0.01‰-0.08‰, respectively, which satisfied high-precision measurement. Therefore, with small 3.7 mL vials used to increase the concentration of carbon dioxide in the headspace, carbonate samples as small as 15 μg can be analyzed routinely by GasBench Ⅱ continuous-flow IRMS. Meanwhile, the linear relationship between sample weight and peak area was strong (R² > 0.9932) and can be used to determine the carbon content of carbonate samples.
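
    The weight-versus-peak-area relation mentioned at the end is an ordinary linear calibration; the sketch below fits one by least squares and inverts it for an unknown aliquot. The weights and peak areas are invented numbers, not the published calibration data.

```python
import numpy as np

# Hypothetical calibration: carbonate standard weights (ug) vs. CO2 peak areas (arbitrary units).
weight_ug = np.array([15, 20, 25, 30, 35, 40, 45, 50], dtype=float)
peak_area = np.array([3.1, 4.0, 5.2, 6.1, 7.0, 8.2, 9.1, 10.2])

slope, intercept = np.polyfit(weight_ug, peak_area, deg=1)
r2 = np.corrcoef(weight_ug, peak_area)[0, 1] ** 2
print(f"area ~ {slope:.3f} * weight + {intercept:.3f}   (R^2 = {r2:.4f})")

# Inverting the fit estimates the effective carbonate weight of an unknown aliquot.
unknown_area = 6.6
print(f"estimated carbonate weight ~ {(unknown_area - intercept) / slope:.1f} ug")
```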

  4. The effect of albedo neutrons on the neutron multiplication of small plutonium oxide samples in a PNCC chamber

    CERN Document Server

    Bourva, L C A; Weaver, D R

    2002-01-01

    This paper describes how to evaluate the effect of neutrons reflected from parts of a passive neutron coincidence chamber on the neutron leakage self-multiplication, M_L, of a fissile sample. It is shown that albedo neutrons contribute, in the case of small plutonium bearing samples, to a significant part of M_L, and that their effect has to be taken into account in the relationship between the measured coincidence count rates and the 240Pu effective mass of the sample. A simple one-interaction model has been used to write the balance of neutron gains and losses in the material when exposed to the re-entrant neutron flux. The energy and intensity profiles of the re-entrant flux have been parameterised using Monte Carlo MCNP calculations. This technique has been implemented for the On Site Laboratory neutron/gamma counter within the existing MEPL 1.0 code for the determination of the neutron leakage self-multiplication. Benchmark tests of the resulting MEPL 2.0 code with MC...

  5. Soot on snow in Iceland: First results on black carbon and organic carbon in Iceland 2016 snow and ice samples, including the glacier Solheimajökull

    Science.gov (United States)

    Meinander, Outi; Dagsson-Waldhauserova, Pavla; Gritsevich, Maria; Aurela, Minna; Arnalds, Olafur; Dragosics, Monika; Virkkula, Aki; Svensson, Jonas; Peltoniemi, Jouni; Kontu, Anna; Kivekäs, Niku; Leppäranta, Matti; de Leeuw, Gerrit; Laaksonen, Ari; Lihavainen, Heikki; Arslan, Ali N.; Paatero, Jussi

    2017-04-01

    New results on black carbon (BC) and organic carbon (OC) on snow and ice in Iceland in 2016 will be presented in connection with our earlier results on BC and OC on Arctic seasonal snow surfaces, and in connection with our 2013 and 2016 experiments on the effects of light absorbing impurities, including Icelandic dust, on snow albedo, melt and density. Our sampling included the glacier Solheimajökull in Iceland. The mass balance of this glacier is negative and it has shrunk by 900 meters from its southwestern corner during the last 20 years. Icelandic snow and ice samples were not expected to contain high concentrations of BC, as power generation from domestic renewable hydro and geothermal sources covers 80% of the total energy consumption in Iceland. Our BC results on filters analyzed with a Thermal/Optical Carbon Aerosol Analyzer (OC/EC) confirm this assumption. Other potential soot sources in Iceland include agricultural burning, industry (aluminum and ferroalloy production and the fishing industry), open burning, residential heating and transport (shipping, road traffic, aviation). In contrast to the low BC, we have found high concentrations of organic carbon in our Iceland 2016 samples. Some of the possible reasons for this will be discussed in this presentation. Earlier, we have measured and reported unexpectedly low snow albedo values of Arctic seasonally melting snow in Sodankylä, north of the Arctic Circle. Our low albedo results for melting snow have been confirmed by three independent data sets. We have explained these low values to be due to: (i) large snow grain sizes up to 3 mm in diameter (seasonally melting snow); (ii) meltwater surrounding the grains and increasing the effective grain size; (iii) absorption caused by impurities in the snow, with concentrations of elemental carbon (black carbon) in snow of 87 ppb and organic carbon of 2894 ppb. The high concentrations of carbon were due to air masses originating from the Kola Peninsula, Russia

  6. Simultaneous extraction and clean-up of polychlorinated biphenyls and their metabolites from small tissue samples using pressurized liquid extraction

    Science.gov (United States)

    Kania-Korwel, Izabela; Zhao, Hongxia; Norstrom, Karin; Li, Xueshu; Hornbuckle, Keri C.; Lehmler, Hans-Joachim

    2008-01-01

    A pressurized liquid extraction-based method for the simultaneous extraction and in situ clean-up of polychlorinated biphenyls (PCBs), hydroxylated (OH)-PCBs and methylsulfonyl (MeSO2)-PCBs from small (< 0.5 gram) tissue samples was developed and validated. Extraction of a laboratory reference material with hexane:dichloromethane:methanol (48:43:9, v/v) and Florisil as fat retainer allowed an efficient recovery of PCBs (78–112%; RSD: 13–37%), OH-PCBs (46±2%; RSD: 4%) and MeSO2-PCBs (89±21%; RSD: 24%). Comparable results were obtained with an established analysis method for PCBs, OH-PCBs and MeSO2-PCBs. PMID:19019378

  7. CA II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. III. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF 14 CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Parisi, M. C.; Clariá, J. J.; Marcionni, N. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba, CP 5000 (Argentina); Geisler, D.; Villanova, S. [Departamento de Astronomía, Universidad de Concepción Casilla 160-C, Concepción (Chile); Sarajedini, A. [Department of Astronomy, University of Florida P.O. Box 112055, Gainesville, FL 32611 (United States); Grocholski, A. J., E-mail: celeste@oac.uncor.edu, E-mail: claria@oac.uncor.edu, E-mail: nmarcionni@oac.uncor.edu, E-mail: dgeisler@astro-udec.cl, E-mail: svillanova@astro-udec.cl, E-mail: ata@astro.ufl.edu, E-mail: grocholski@phys.lsu.edu [Department of Physics and Astronomy, Louisiana State University 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803-4001 (United States)

    2015-05-15

    We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca II lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km s⁻¹, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca II triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.
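    The bimodality claim can be probed with a simple mixture-model comparison. The sketch below fits one- and two-component Gaussian mixtures to a metallicity sample and compares them by BIC; the [Fe/H] values are invented placeholders, and this is only one of several ways to assess bimodality, not the authors' statistical procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical cluster metallicities (dex); replace with the real [Fe/H] sample.
rng = np.random.default_rng(1)
feh = np.concatenate([rng.normal(-1.1, 0.08, 18), rng.normal(-0.8, 0.08, 18)])
X = feh.reshape(-1, 1)

for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"{k} component(s): BIC = {gmm.bic(X):.1f}, means = {gmm.means_.ravel().round(2)}")
# A substantially lower BIC for k = 2 supports (but does not prove) a bimodal distribution.
```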

  8. Reliability and Construct Validity of the Psychopathic Personality Inventory-Revised in a Swedish Non-Criminal Sample - A Multimethod Approach including Psychophysiological Correlates of Empathy for Pain.

    Directory of Open Access Journals (Sweden)

    Karolina Sörman

    Full Text Available Cross-cultural investigation of psychopathy measures is important for clarifying the nomological network surrounding the psychopathy construct. The Psychopathic Personality Inventory-Revised (PPI-R) is one of the most extensively researched self-report measures of psychopathic traits in adults. To date, however, it has been examined primarily in North American criminal or student samples. To address this gap in the literature, we examined the PPI-R's reliability, construct validity and factor structure in non-criminal individuals (N = 227) in Sweden, using a multimethod approach including psychophysiological correlates of empathy for pain. PPI-R construct validity was investigated in subgroups of participants by exploring its degree of overlap with (i) the Psychopathy Checklist: Screening Version (PCL:SV), (ii) self-rated empathy and behavioral and physiological responses in an experiment on empathy for pain, and (iii) additional self-report measures of alexithymia and trait anxiety. The PPI-R total score was significantly associated with PCL:SV total and factor scores. The PPI-R Coldheartedness scale demonstrated significant negative associations with all empathy subscales and with rated unpleasantness and skin conductance responses in the empathy experiment. The PPI-R higher order Self-Centered Impulsivity and Fearless Dominance dimensions were associated with trait anxiety in opposite directions (positively and negatively, respectively). Overall, the results demonstrated solid reliability (test-retest and internal consistency) and promising but somewhat mixed construct validity for the Swedish translation of the PPI-R.

  9. MaxEnt’s parameter configuration and small samples: are we paying attention to recommendations? A systematic review

    Directory of Open Access Journals (Sweden)

    Narkis S. Morales

    2017-03-01

    Full Text Available Environmental niche modeling (ENM) is commonly used to develop probabilistic maps of species distribution. Among available ENM techniques, MaxEnt has become one of the most popular tools for modeling species distribution, with hundreds of peer-reviewed articles published each year. MaxEnt’s popularity is mainly due to the use of a graphical interface and automatic parameter configuration capabilities. However, recent studies have shown that using the default automatic configuration may not always be appropriate because it can produce non-optimal models, particularly when dealing with a small number of species presence points. Thus, the recommendation is to evaluate the best potential combination of parameters (feature classes and regularization multiplier) to select the most appropriate model. In this work we reviewed 244 articles published between 2013 and 2015 to assess whether researchers are following recommendations to avoid using the default parameter configuration when dealing with small sample sizes, or if they are using MaxEnt as a “black box tool.” Our results show that in only 16% of analyzed articles authors evaluated the best feature classes, in 6.9% the best regularization multipliers, and in a meager 3.7% both parameters simultaneously before producing the definitive distribution model. We analyzed 20 articles to quantify the potential differences in resulting outputs when using software default parameters instead of the alternative best model. Results from our analysis reveal important differences between the use of default parameters and the best model approach, especially in the total area identified as suitable for the assessed species and the specific areas that are identified as suitable by both modelling approaches. These results are worrying, because publications are potentially reporting over-complex or over-simplistic models that can undermine the applicability of their results. Of particular importance
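    The tuning the authors recommend (trying combinations of feature classes and regularization multipliers rather than accepting defaults) can be emulated with any penalized model. The sketch below uses scikit-learn's LogisticRegression on presence/background labels as a rough stand-in for MaxEnt, treating the polynomial feature degree as the analogue of feature classes and the inverse regularization strength C as the analogue of the regularization multiplier, scored by cross-validated AUC; the data and the stand-in itself are illustrative assumptions, not the MaxEnt workflow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical presence (1) / background (0) points with two environmental covariates.
rng = np.random.default_rng(2)
X_env = rng.normal(size=(120, 2))
y = (X_env[:, 0] + 0.5 * X_env[:, 1] ** 2 + rng.normal(scale=0.5, size=120) > 0).astype(int)

# "Feature classes" emulated by the polynomial degree; "regularization multiplier" by 1/C.
results = []
for degree in (1, 2):                    # linear vs linear+quadratic features
    feats = PolynomialFeatures(degree, include_bias=False).fit_transform(X_env)
    for c in (0.1, 0.5, 1.0, 2.0, 5.0):  # grid of regularization strengths
        model = LogisticRegression(C=c, max_iter=1000)
        auc = cross_val_score(model, feats, y, cv=5, scoring="roc_auc").mean()
        results.append((degree, c, auc))

best = max(results, key=lambda r: r[2])
print("best setting (degree, C, CV AUC):", best)
```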

  10. A method for analysing small samples of floral pollen for free and protein-bound amino acids.

    Science.gov (United States)

    Stabler, Daniel; Power, Eileen F; Borland, Anne M; Barnes, Jeremy D; Wright, Geraldine A

    2018-02-01

    Pollen provides floral visitors with essential nutrients including proteins, lipids, vitamins and minerals. As an important nutrient resource for pollinators, including honeybees and bumblebees, pollen quality is of growing interest in assessing the nutrition available to foraging bees. To date, quantifying the protein-bound amino acids in pollen has been difficult and methods rely on large amounts of pollen, typically more than 1 g. More usual is to estimate a crude protein value based on the nitrogen content of pollen; however, such methods provide no information on the distribution of essential and non-essential amino acids constituting the proteins. Here, we describe a method of microwave-assisted acid hydrolysis using low amounts of pollen that allows exploration of amino acid composition, quantified using ultra high performance liquid chromatography (UHPLC), and a back calculation to estimate the crude protein content of pollen. Reliable analysis of protein-bound and free amino acids, as well as an estimation of crude protein concentration, was obtained from pollen samples as small as 1 mg. Greater variation in both protein-bound and free amino acids was found in smaller pollen samples; to account for this, we suggest a correction factor to apply to specific sample sizes of pollen in order to estimate total crude protein content. The method described in this paper will allow researchers to explore the composition of amino acids in pollen and will aid research assessing the nutrition available to pollinating animals. This method will be particularly useful in assaying the pollen of wild plants, from which it is difficult to obtain large sample weights.
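    The back-calculation from summed amino acids to crude protein can be written out in a couple of lines. The sketch below sums hydrolysed amino acid concentrations and applies a sample-size correction factor; the numbers, the factor, and the names are placeholders, not values from the paper.

```python
# Hypothetical UHPLC results: protein-bound amino acid concentrations in a 1 mg
# pollen sample, expressed in micrograms of amino acid per mg of pollen.
bound_amino_acids_ug_per_mg = {
    "proline": 28.0, "glutamate": 21.5, "aspartate": 15.2,
    "leucine": 12.8, "lysine": 10.4,  # ... remaining amino acids omitted
}

sample_mass_mg = 1.0
# Illustrative correction factor for small sample sizes (the paper proposes
# size-specific factors; 1.1 here is a made-up placeholder).
correction_factor = 1.1

crude_protein_ug_per_mg = correction_factor * sum(bound_amino_acids_ug_per_mg.values())
crude_protein_percent = crude_protein_ug_per_mg / 1000.0 * 100.0  # fraction of pollen mass

print(f"estimated crude protein: {crude_protein_ug_per_mg:.1f} ug/mg "
      f"({crude_protein_percent:.1f}% of pollen dry mass)")
```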

  11. High-throughput analysis using non-depletive SPME: challenges and applications to the determination of free and total concentrations in small sample volumes.

    Science.gov (United States)

    Boyacı, Ezel; Bojko, Barbara; Reyes-Garcés, Nathaly; Poole, Justen J; Gómez-Ríos, Germán Augusto; Teixeira, Alexandre; Nicol, Beate; Pawliszyn, Janusz

    2018-01-18

    In vitro high-throughput non-depletive quantitation of chemicals in biofluids is of growing interest in many areas. Some of the challenges facing researchers include the limited volume of biofluids, rapid and high-throughput sampling requirements, and the lack of reliable methods. Coupled to the above, growing interest in the monitoring of kinetics and dynamics of miniaturized biosystems has spurred the demand for development of novel and revolutionary methodologies for analysis of biofluids. The applicability of solid-phase microextraction (SPME) is investigated as a potential technology to fulfill the aforementioned requirements. As analytes with sufficient diversity in their physicochemical features, nicotine, N,N-Diethyl-meta-toluamide, and diclofenac were selected as test compounds for the study. The objective was to develop methodologies that would allow repeated non-depletive sampling from 96-well plates, using 100 µL of sample. Initially, thin film-SPME was investigated. Results revealed substantial depletion and consequent disruption in the system. Therefore, new ultra-thin coated fibers were developed. The applicability of this device to the described sampling scenario was tested by determining the protein binding of the analytes. Results showed good agreement with rapid equilibrium dialysis. The presented method allows high-throughput analysis using small volumes, enabling fast reliable free and total concentration determinations without disruption of system equilibrium.
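    Non-depletive SPME gives the free concentration directly, and protein binding then follows from the free/total ratio. The sketch below shows that arithmetic with made-up values; the calibration constant, concentrations, and variable names are assumptions for illustration only.

```python
# Hypothetical measurements for one compound in a 100 uL plasma sample.
# Non-depletive SPME with an externally calibrated fiber constant fc, where
# amount extracted n = fc * C_free (fc determined from buffer standards).
fiber_constant_uL = 0.8        # assumed effective extraction "volume" (uL)
amount_extracted_ng = 1.6      # assumed ng extracted by the ultra-thin coated fiber
total_conc_ng_per_uL = 5.0     # total concentration measured independently

free_conc_ng_per_uL = amount_extracted_ng / fiber_constant_uL
fraction_unbound = free_conc_ng_per_uL / total_conc_ng_per_uL
plasma_protein_binding_percent = 100.0 * (1.0 - fraction_unbound)

print(f"free concentration: {free_conc_ng_per_uL:.2f} ng/uL")
print(f"plasma protein binding: {plasma_protein_binding_percent:.1f}%")
```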

  12. Ca II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. I. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF CLUSTERS

    International Nuclear Information System (INIS)

    Parisi, M. C.; Claria, J. J.; Grocholski, A. J.; Geisler, D.; Sarajedini, A.

    2009-01-01

    We have obtained near-infrared spectra covering the Ca II triplet lines for a large number of stars associated with 16 Small Magellanic Cloud (SMC) clusters using the VLT + FORS2. These data compose the largest available sample of SMC clusters with spectroscopically derived abundances and velocities. Our clusters span a wide range of ages and provide good areal coverage of the galaxy. Cluster members are selected using a combination of their positions relative to the cluster center as well as their location in the color-magnitude diagram, abundances, and radial velocities (RVs). We determine mean cluster velocities to typically 2.7 km s⁻¹ and metallicities to 0.05 dex (random errors), from an average of 6.4 members per cluster. By combining our clusters with previously published results, we compile a sample of 25 clusters on a homogeneous metallicity scale and with relatively small metallicity errors, and thereby investigate the metallicity distribution, metallicity gradient, and age-metallicity relation (AMR) of the SMC cluster system. For all 25 clusters in our expanded sample, the mean metallicity [Fe/H] = -0.96 with σ = 0.19. The metallicity distribution may possibly be bimodal, with peaks at ∼-0.9 dex and -1.15 dex. Similar to the Large Magellanic Cloud (LMC), the SMC cluster system gives no indication of a radial metallicity gradient. However, intermediate age SMC clusters are both significantly more metal-poor and have a larger metallicity spread than their LMC counterparts. Our AMR shows evidence for three phases: a very early (>11 Gyr) phase in which the metallicity reached ∼-1.2 dex, a long intermediate phase from ∼10 to 3 Gyr in which the metallicity only slightly increased, and a final phase from 3 to 1 Gyr ago in which the rate of enrichment was substantially faster. We find good overall agreement with the model of Pagel and Tautvaisiene, which assumes a burst of star formation at 4 Gyr. Finally, we find that the mean RV of the cluster system

  13. Hybrid [¹⁸F]-FDG PET/MRI including non-Gaussian diffusion-weighted imaging (DWI): Preliminary results in non-small cell lung cancer (NSCLC)

    Energy Technology Data Exchange (ETDEWEB)

    Heusch, Philipp [Univ Dusseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, D-40225 Dusseldorf (Germany); Univ Duisburg-Essen, Medical Faculty, Department of Diagnostic and Interventional Radiology and Neuroradiology, D-45147 Essen (Germany); Köhler, Jens [Univ Duisburg-Essen, Medical Faculty, Department of Medical Oncology, D-45147 Essen (Germany); Wittsack, Hans-Joerg [Univ Dusseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, D-40225 Dusseldorf (Germany); Heusner, Till A., E-mail: Heusner@med.uni-duesseldorf.de [Univ Dusseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, D-40225 Dusseldorf (Germany); Buchbender, Christian [Univ Dusseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, D-40225 Dusseldorf (Germany); Poeppel, Thorsten D. [Univ Duisburg-Essen, Medical Faculty, Department of Nuclear Medicine, D-45147 Essen (Germany); Nensa, Felix; Wetter, Axel [Univ Duisburg-Essen, Medical Faculty, Department of Diagnostic and Interventional Radiology and Neuroradiology, D-45147 Essen (Germany); Gauler, Thomas [Univ Duisburg-Essen, Medical Faculty, Department of Medical Oncology, D-45147 Essen (Germany); Hartung, Verena [Univ Duisburg-Essen, Medical Faculty, Department of Nuclear Medicine, D-45147 Essen (Germany); Lanzman, Rotem S. [Univ Dusseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, D-40225 Dusseldorf (Germany)

    2013-11-01

    Purpose: To assess the feasibility of non-Gaussian DWI as part of an FDG-PET/MRI protocol in patients with histologically proven non-small cell lung cancer. Material and methods: 15 consecutive patients with histologically proven NSCLC (mean age 61 ± 11 years) were included in this study and underwent whole-body FDG-PET/MRI following whole-body FDG-PET/CT. As part of the whole-body FDG-PET/MRI protocol, an EPI sequence with 5 b-values (0, 100, 500, 1000 and 2000 s/mm²) was acquired for DWI of the thorax during free breathing. Volume of interest (VOI) measurements were performed to determine the maximum and mean standardized uptake values (SUVmax; SUVmean). A region of interest (ROI) was manually drawn around the tumor on b = 0 images and then transferred to the corresponding parameter maps to assess ADCmono, Dapp and Kapp. To assess the goodness of the mathematical fit, R² was calculated for the monoexponential and non-Gaussian analyses. Spearman's correlation coefficients were calculated to compare SUV values and diffusion coefficients. A Student's t-test was performed to compare the monoexponential and non-Gaussian diffusion fitting (R²). Results: T staging was equal between FDG-PET/CT and FDG-PET/MRI in 12 of 15 patients. For NSCLC, mean ADCmono was 2.11 ± 1.24 × 10⁻³ mm²/s, Dapp was 2.46 ± 1.29 × 10⁻³ mm²/s and mean Kapp was 0.70 ± 0.21. The non-Gaussian diffusion analysis (R² = 0.98) provided a significantly better mathematical fit to the DWI signal decay than the monoexponential analysis (R² = 0.96) (p < 0.001). SUVmax and SUVmean of NSCLC were 13.5 ± 7.6 and 7.9 ± 4.3 for FDG-PET/MRI. ADCmono as well as Dapp exhibited a significant inverse correlation with SUVmax (ADCmono: R = −0.67; p < 0.01; Dapp: R = −0.69; p < 0.01) as well as with SUVmean assessed by FDG-PET/MRI (ADCmono: R
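    The two signal models compared here are easy to fit side by side. The sketch below fits a monoexponential ADC model and a non-Gaussian (kurtosis) model to a synthetic b-value decay with scipy's curve_fit and compares their R²; the signal values are simulated, and the parameterisation shown is the standard kurtosis form, not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 100, 500, 1000, 2000], dtype=float)  # s/mm^2, as in the protocol

def mono(b, s0, adc):
    # Monoexponential model: S(b) = S0 * exp(-b * ADC)
    return s0 * np.exp(-b * adc)

def kurtosis(b, s0, d_app, k_app):
    # Non-Gaussian (diffusion kurtosis) model: S(b) = S0 * exp(-b*D + b^2*D^2*K/6)
    return s0 * np.exp(-b * d_app + (b ** 2) * (d_app ** 2) * k_app / 6.0)

# Simulated tumor signal with a little noise (illustrative values only).
rng = np.random.default_rng(3)
signal = kurtosis(b, 1000.0, 2.0e-3, 0.7) * (1 + rng.normal(scale=0.01, size=b.size))

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

p_mono, _ = curve_fit(mono, b, signal, p0=[signal[0], 1e-3])
p_kurt, _ = curve_fit(kurtosis, b, signal, p0=[signal[0], 1e-3, 0.5])

print("ADC_mono =", p_mono[1], " R^2 =", r_squared(signal, mono(b, *p_mono)))
print("D_app, K_app =", p_kurt[1:], " R^2 =", r_squared(signal, kurtosis(b, *p_kurt)))
```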

  14. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points we use capture-mark-recapture methods both to allow us to account for missing individuals, and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years and size-dependent at early life stages and size independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time replicated datasets with small sample sizes and missing individuals without any loss of sample size, and including explanatory covariates.

  15. ¹³C-METHYL FORMATE: OBSERVATIONS OF A SAMPLE OF HIGH-MASS STAR-FORMING REGIONS INCLUDING ORION-KL AND SPECTROSCOPIC CHARACTERIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Favre, Cécile; Bergin, Edwin A.; Crockett, Nathan R.; Neill, Justin L. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Carvajal, Miguel [Dpto. Física Aplicada, Unidad Asociada CSIC, Facultad de Ciencias Experimentales, Universidad de Huelva, E-21071 Huelva (Spain); Field, David [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK-8000 Aarhus C (Denmark); Jørgensen, Jes K.; Bisschop, Suzanne E. [Centre for Star and Planet Formation, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen Ø (Denmark); Brouillet, Nathalie; Despois, Didier; Baudry, Alain [Univ. Bordeaux, LAB, UMR 5804, F-33270, Floirac (France); Kleiner, Isabelle [Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA), CNRS, UMR 7583, Université de Paris-Est et Paris Diderot, 61, Av. du Général de Gaulle, F-94010 Créteil Cedex (France); Margulès, Laurent; Huet, Thérèse R.; Demaison, Jean, E-mail: cfavre@umich.edu, E-mail: miguel.carvajal@dfa.uhu.es [Laboratoire de Physique des Lasers, Atomes et Molécules, UMR CNRS 8523, Université Lille I, F-59655 Villeneuve d' Ascq Cedex (France)

    2015-01-01

    We have surveyed a sample of massive star-forming regions located over a range of distances from the Galactic center for methyl formate, HCOOCH₃, and its isotopologues H¹³COOCH₃ and HCOO¹³CH₃. The observations were carried out with the APEX telescope in the frequency range 283.4-287.4 GHz. Based on the APEX observations, we report tentative detections of the ¹³C-methyl formate isotopologue HCOO¹³CH₃ toward the following four massive star-forming regions: Sgr B2(N-LMH), NGC 6334 IRS 1, W51 e2, and G19.61-0.23. In addition, we have used the 1 mm ALMA science verification observations of Orion-KL and confirm the detection of the ¹³C-methyl formate species in Orion-KL and image its spatial distribution. Our analysis shows that the ¹²C/¹³C isotope ratios in methyl formate toward the Orion-KL Compact Ridge and Hot Core-SW components (68.4 ± 10.1 and 71.4 ± 7.8, respectively) are, for both ¹³C-methyl formate isotopologues, commensurate with the average ¹²C/¹³C ratio of CO derived toward Orion-KL. Likewise, regarding the other sources, our results are consistent with the ¹²C/¹³C ratio in CO. We also report the spectroscopic characterization, which includes a complete partition function, of the complex H¹³COOCH₃ and HCOO¹³CH₃ species. New spectroscopic data for both isotopomers H¹³COOCH₃ and HCOO¹³CH₃, presented in this study, have made it possible to measure this fundamentally important isotope ratio in a large organic molecule for the first time.

  16. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version]

    Science.gov (United States)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
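    The core idea (infer collected mass from how the spacecraft responds to known thruster forces) reduces, in its simplest form, to a least-squares fit of F = m·a. The sketch below is a toy version of that idea with simulated thrust and accelerometer data; it is not the G-SAMPLE estimator itself, which additionally models the boom dynamics, sensor noise statistics, and thruster profile errors.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated scenario: spacecraft dry mass is known, total mass = dry + sample.
dry_mass_kg = 500.0
true_sample_mass_kg = 1.0

# Known commanded thrust profile (N) and the resulting measured acceleration (m/s^2),
# corrupted by a small thrust-magnitude error and accelerometer noise.
thrust = np.abs(rng.normal(5.0, 1.0, size=200))
true_total_mass = dry_mass_kg + true_sample_mass_kg
accel = thrust * (1 + rng.normal(scale=0.002, size=thrust.size)) / true_total_mass
accel += rng.normal(scale=1e-5, size=thrust.size)

# Least-squares estimate of total mass from F = m * a  =>  m = sum(F*a)/sum(a*a).
total_mass_hat = np.sum(thrust * accel) / np.sum(accel ** 2)
sample_mass_hat = total_mass_hat - dry_mass_kg

print(f"estimated sample mass: {sample_mass_hat*1000:.0f} g "
      f"(true value {true_sample_mass_kg*1000:.0f} g)")
```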

  17. Determination of water-extractable nonstructural carbohydrates, including inulin, in grass samples with high-performance anion exchange chromatography and pulsed amperometric detection.

    Science.gov (United States)

    Raessler, Michael; Wissuwa, Bianka; Breul, Alexander; Unger, Wolfgang; Grimm, Torsten

    2008-09-10

    The exact and reliable determination of carbohydrates in plant samples of different origin is of great importance with respect to plant physiology. Additionally, the identification and quantification of carbohydrates are necessary for the evaluation of the impact of these compounds on the biogeochemistry of carbon. To attain this goal, it is necessary to analyze a great number of samples with both high sensitivity and selectivity within a limited time frame. This paper presents a rugged and easy method that allows the isocratic chromatographic determination of 12 carbohydrates and sugar alcohols from one sample within 30 min. The method was successfully applied to a variety of plant materials with particular emphasis on perennial ryegrass samples of the species Lolium perenne. The method was easily extended to the analysis of the polysaccharide inulin after its acidic hydrolysis into the corresponding monomers without the need for substantial change of chromatographic conditions or even the use of enzymes. It therefore offers a fundamental advantage for the analysis of the complex mixture of nonstructural carbohydrates often found in plant samples.

  18. Determination of degree of RBC agglutination for blood typing using a small quantity of blood sample in a microfluidic system.

    Science.gov (United States)

    Chang, Yaw-Jen; Ho, Ching-Yuan; Zhou, Xin-Miao; Yen, Hsiu-Rong

    2018-04-15

    Blood typing assay is a critical test to ensure the serological compatibility of a donor and an intended recipient prior to a blood transfusion. This paper presents a microfluidic blood typing system using a small quantity of blood sample to determine the degree of agglutination of red blood cell (RBC). Two measuring methods were proposed: impedimetric measurement and electroanalytical measurement. The charge transfer resistance in the impedimetric measurement and the power parameter in the electroanalytical measurement were used for the analysis of agglutination level. From the experimental results, both measuring methods provide quantitative results, and the parameters are linearly and monotonically related to the degree of RBC agglutination. However, the electroanalytical measurement is more reliable than the impedimetric technique because the impedimetric measurement may suffer from many influencing factors, such as chip conditions. Five levels from non-agglutination (level 0) to strong agglutination (level 4+) can be discriminated in this study, conforming to the clinical requirement to prevent any risks in transfusion. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Probability estimation of rare extreme events in the case of small samples: Technique and examples of analysis of earthquake catalogs

    Science.gov (United States)

    Pisarenko, V. F.; Rodkin, M. V.; Rukavishnikova, T. A.

    2017-11-01

    The most general approach to studying the recurrence law in the area of the rare largest events is associated with the use of limit law theorems of the theory of extreme values. In this paper, we use the Generalized Pareto Distribution (GPD). The unknown GPD parameters are typically determined by the method of maximum likelihood (ML). However, the ML estimation is only optimal in the case of fairly large samples (>200-300), whereas in many practically important cases there are only dozens of large events. It is shown that in the case of a small number of events, the highest accuracy when using the GPD is provided by the method of quantiles (MQ). In order to illustrate the obtained methodical results, we have formed compiled data sets characterizing the tails of the distributions for typical subduction zones, regions of intracontinental seismicity, and zones of midoceanic (MO) ridges. This approach paves the way for designing a new method for seismic risk assessment. Here, instead of the unstable characteristic, the uppermost possible magnitude M_max, it is recommended to use the quantiles of the distribution of random maxima for a future time interval. The results of calculating such quantiles are presented.
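    For readers who want to try the comparison, the sketch below fits a Generalized Pareto Distribution to a small sample of exceedances both by maximum likelihood (scipy's built-in fit) and by a simple two-quantile matching scheme; the sample is simulated, the chosen quantile levels are arbitrary, and this is a generic method-of-quantiles variant rather than the authors' exact estimator.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
# Small simulated sample of exceedances over a threshold (e.g. large-event magnitudes).
data = stats.genpareto.rvs(c=0.3, scale=1.0, size=40, random_state=rng)

# --- Maximum likelihood (ML) fit, threshold fixed at 0 ---
xi_ml, _, sigma_ml = stats.genpareto.fit(data, floc=0)

# --- Method of quantiles (MQ): match two empirical quantiles to the GPD quantile
#     function Q(p) = (sigma/xi) * ((1-p)^(-xi) - 1) for xi != 0 ---
p1, p2 = 0.5, 0.9
q1, q2 = np.quantile(data, [p1, p2])

def quantile_ratio(xi):
    # Q(p2)/Q(p1) depends on xi only; expm1 keeps it stable for small xi.
    if abs(xi) < 1e-12:
        return np.log(1 - p2) / np.log(1 - p1)
    return np.expm1(-xi * np.log(1 - p2)) / np.expm1(-xi * np.log(1 - p1))

res = optimize.minimize_scalar(lambda xi: (quantile_ratio(xi) - q2 / q1) ** 2,
                               bounds=(-0.9, 2.0), method="bounded")
xi_mq = res.x
sigma_mq = (q1 * xi_mq / np.expm1(-xi_mq * np.log(1 - p1))
            if abs(xi_mq) > 1e-12 else -q1 / np.log(1 - p1))

print(f"ML: xi = {xi_ml:.3f}, sigma = {sigma_ml:.3f}")
print(f"MQ: xi = {xi_mq:.3f}, sigma = {sigma_mq:.3f}")
```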

  20. Detection of Bartonella henselae DNA in clinical samples including peripheral blood of immune competent and immune compromised patients by three nested amplifications

    Directory of Open Access Journals (Sweden)

    Karina Hatamoto Kawasato

    2013-02-01

    Full Text Available Bacteria of the genus Bartonella are emerging pathogens detected in lymph node biopsies and aspirates, probably because of the increased concentration of bacteria in these samples. Twenty-three samples from 18 patients with clinical, laboratory and/or epidemiological data suggesting bartonellosis were subjected to three nested amplifications targeting a fragment of the 60-kDa heat shock protein (HSP), the internal transcribed spacer 16S-23S rRNA (ITS) and the cell division protein (FtsZ) gene of Bartonella henselae, in order to improve detection in clinical samples. In the first amplification, 01, 04 and 05 samples were positive by HSP (4.3%), FtsZ (17.4%) and ITS (21.7%), respectively. After the second round, six positive samples were identified by nested-HSP (26%), eight by nested-ITS (34.8%) and 18 by nested-FtsZ (78.2%), corresponding to 10 peripheral blood samples, five lymph node biopsies, two skin biopsies and one lymph node aspirate. The nested-FtsZ was more sensitive than nested-HSP and nested-ITS (p < 0.0001), enabling the detection of Bartonella henselae DNA in 15 of 18 patients (83.3%). In this study, three nested-PCRs that should be specific for Bartonella henselae amplification were developed, but only the nested-FtsZ did not amplify DNA from Bartonella quintana. We conclude that nested amplifications increased detection of B. henselae DNA, and that the nested-FtsZ was the most sensitive and the only one specific to B. henselae in different biological samples. As all samples detected by nested-HSP and nested-ITS were also detected by nested-FtsZ, we infer that in our series infections were caused by Bartonella henselae. The high number of positive blood samples draws attention to the use of this biological material in the investigation of bartonellosis, regardless of the immune status of patients. This fact is important in the case of critically ill patients and young children, to avoid more invasive procedures such as lymph node biopsies and aspirates.

  1. Comparison of the solid-phase extraction efficiency of a bounded and an included cyclodextrin-silica microporous composite for polycyclic aromatic hydrocarbons determination in water samples.

    Science.gov (United States)

    Mauri-Aucejo, Adela; Amorós, Pedro; Moragues, Alaina; Guillem, Carmen; Belenguer-Sapiña, Carolina

    2016-08-15

    Solid-phase extraction is one of the most important techniques for sample purification and concentration. A wide variety of solid phases have been used for sample preparation over time. In this work, the efficiency of a new kind of solid-phase extraction adsorbent, a microporous material made from modified cyclodextrin bound to a silica network, is evaluated through an analytical method which combines solid-phase extraction with high-performance liquid chromatography to determine polycyclic aromatic hydrocarbons in water samples. Several parameters that affect analyte recovery, such as the amount of solid phase, the nature and volume of the eluent, and the sample volume and concentration, have been evaluated. The experimental results indicate that the material possesses adsorption ability for the tested polycyclic aromatic hydrocarbons. Under the optimum conditions, the quantification limits of the method were in the range of 0.09-2.4 μg L⁻¹ and fine linear correlations between peak height and concentration were found in the range of 1.3-70 μg L⁻¹. The method has good repeatability and reproducibility, with coefficients of variation under 8%. Based on these results, this material may represent an alternative for trace analysis of polycyclic aromatic hydrocarbons in water through solid-phase extraction. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Modular design of processing and storage facilities for small volumes of low and intermediate level radioactive waste including disused sealed sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-06-15

    A number of IAEA Member States generate relatively small quantities of radioactive waste and/or disused sealed sources in research or in the application of nuclear techniques in medicine and industry. This publication presents a modular approach to the design of waste processing and storage facilities to address the needs of such Member States with a cost effective and flexible solution that allows easy adjustment to changing needs in terms of capacity and variety of waste streams. The key feature of the publication is the provision of practical guidance to enable the users to determine their waste processing and storage requirements, specify those requirements to allow the procurement of the appropriate processing and storage modules and to install and eventually operate those modules.

  3. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests; however, their specificity was poor at sample size n = 30. Applying methods that include a Box-Cox transformation to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
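    A stripped-down version of this simulation fits in a short script. The sketch below draws repeated samples of n = 30 from a Gaussian and a lognormal population and tallies how often the Shapiro-Wilk and D'Agostino-Pearson tests reject normality, with "positive" here meaning the test flags non-normality (the study's own definitions may differ); population parameters, repetition count, and the 0.05 level are arbitrary choices, not the study's exact design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, reps, alpha = 30, 100, 0.05

def rejection_rate(sampler, test):
    """Fraction of simulated samples for which the normality test rejects (p < alpha)."""
    return np.mean([test(sampler())[1] < alpha for _ in range(reps)])

gaussian = lambda: rng.normal(100, 10, n)
lognormal = lambda: rng.lognormal(mean=4.6, sigma=0.4, size=n)

for name, test in [("Shapiro-Wilk", stats.shapiro), ("D'Agostino-Pearson", stats.normaltest)]:
    fp = rejection_rate(gaussian, test)    # false positives on truly Gaussian samples
    tp = rejection_rate(lognormal, test)   # true positives on lognormal samples
    print(f"{name}: specificity = {1 - fp:.2f}, sensitivity = {tp:.2f}")
```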

  4. Correlation between Asian Dust and Specific Radioactivities of Fission Products Included in Airborne Samples in Tokushima, Shikoku Island, Japan, Due to the Fukushima Nuclear Accident

    Energy Technology Data Exchange (ETDEWEB)

    Sakama, M., E-mail: minorusakama@tokushima-u.ac.jp [Department of Radiological Science, Division of Biomedical Information Sciences, Institute of Health Biosciences, The University of Tokushima, Tokushima 770-8509 (Japan); Nagano, Y. [Department of Radiological Science, Division of Biomedical Information Sciences, Institute of Health Biosciences, The University of Tokushima, Tokushima 770-8509 (Japan); Kitade, T. [Department of Laboratory, M and S Instruments Inc., Osaka 532-0005 (Japan); Shikino, O. [Department of Inorganic Analysis, PerkinElmer Japan Co. Ltd., Yokohama 240-0005 (Japan); Nakayama, S. [Department of Nuclear Science, Institute of Socio-Arts and Sciences, The University of Tokushima, Tokushima 770-8502 (Japan)

    2014-06-15

    Radioactive fission product ¹³¹I released from the Fukushima Daiichi Nuclear Power Plants (FD-NPP) was first detected on March 23, 2011 in an airborne aerosol sample collected at Tokushima, Shikoku Island, located in western Japan. Two other radioactive fission products, ¹³⁴Cs and ¹³⁷Cs, were also observed in a sample collected from April 2 to 4, 2011. The maximum specific radioactivities observed in this work were about 2.5 to 3.5 mBq·m⁻³ in an airborne aerosol sample collected on April 6. During the course of the continuous monitoring, we also made our first concurrent observation of seasonal Asian Dust and the fission products associated with the FD-NPP accident from May 2 to 5, 2011. We found that the specific radioactivities of ¹³⁴Cs and ¹³⁷Cs decreased drastically only during the period of Asian Dust. It was also found that this trend was very similar to the variation of the atmospheric elemental concentration (ng·m⁻³) of stable cesium (¹³³Cs) quantified by elemental analyses using our developed ICP-DRC-MS instrument.

  5. Correlation between Asian Dust and Specific Radioactivities of Fission Products Included in Airborne Samples in Tokushima, Shikoku Island, Japan, Due to the Fukushima Nuclear Accident

    International Nuclear Information System (INIS)

    Sakama, M.; Nagano, Y.; Kitade, T.; Shikino, O.; Nakayama, S.

    2014-01-01

    Radioactive fission product ¹³¹I released from the Fukushima Daiichi Nuclear Power Plants (FD-NPP) was first detected on March 23, 2011 in an airborne aerosol sample collected at Tokushima, Shikoku Island, located in western Japan. Two other radioactive fission products, ¹³⁴Cs and ¹³⁷Cs, were also observed in a sample collected from April 2 to 4, 2011. The maximum specific radioactivities observed in this work were about 2.5 to 3.5 mBq·m⁻³ in an airborne aerosol sample collected on April 6. During the course of the continuous monitoring, we also made our first concurrent observation of seasonal Asian Dust and the fission products associated with the FD-NPP accident from May 2 to 5, 2011. We found that the specific radioactivities of ¹³⁴Cs and ¹³⁷Cs decreased drastically only during the period of Asian Dust. It was also found that this trend was very similar to the variation of the atmospheric elemental concentration (ng·m⁻³) of stable cesium (¹³³Cs) quantified by elemental analyses using our developed ICP-DRC-MS instrument.

  6. Prognostic role of platelet to lymphocyte ratio in non-small cell lung cancers: A meta-analysis including 3,720 patients.

    Science.gov (United States)

    Zhao, Qing-Tao; Yuan, Zheng; Zhang, Hua; Zhang, Xiao-Peng; Wang, Hui-En; Wang, Zhi-Kang; Duan, Guo-Chen

    2016-07-01

    Platelet to lymphocyte ratio (PLR) was recently reported as a useful index in predicting the prognosis of lung cancer. However, the prognostic role of PLR in lung cancer remains controversial. The aim of this study was to evaluate the association between PLR and clinical outcome of lung cancer patients through a meta-analysis. Relevant literature was retrieved from the PubMed, Ovid, Cochrane Library and Web of Science databases. Meta-analysis was performed using hazard ratios (HR) and 95% confidence intervals (CIs) as effect measures. A total of 5,314 patients from 13 studies were finally enrolled in the meta-analysis. The summary results showed that elevated PLR predicted poorer overall survival (OS) (HR: 1.526, 95%CI: 1.268-1.836). Subgroup analysis revealed that increased PLR was also associated with poor OS in NSCLC treated by surgical resection (HR: 1.884, 95%CI: 1.308-2.714) and in studies using a PLR cut-off value of 160 (HR: 1.842, 95%CI: 1.523-2.228), but not in patients with small cell lung cancer (SCLC). This meta-analysis result suggested that elevated PLR might be a predictive factor of poor prognosis for NSCLC patients. © 2016 UICC.
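    The pooling behind such summary hazard ratios is a standard inverse-variance calculation on the log scale. The sketch below pools a few hypothetical (HR, 95% CI) pairs with a fixed-effect model; the numbers are placeholders, and the actual meta-analysis would also assess heterogeneity and possibly use a random-effects model.

```python
import numpy as np

# Hypothetical per-study results: (hazard ratio, lower 95% CI, upper 95% CI).
studies = [
    (1.40, 1.05, 1.87),
    (1.80, 1.20, 2.70),
    (1.30, 0.95, 1.78),
    (1.65, 1.15, 2.37),
]

log_hr = np.array([np.log(hr) for hr, lo, hi in studies])
# Standard error recovered from the CI width on the log scale: (ln(hi) - ln(lo)) / (2 * 1.96)
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for hr, lo, hi in studies])

weights = 1.0 / se ** 2                      # fixed-effect inverse-variance weights
pooled_log_hr = np.sum(weights * log_hr) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

hr = np.exp(pooled_log_hr)
ci = np.exp(pooled_log_hr + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled HR = {hr:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```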

  7. Updates on the treatment of gout, including a review of updated treatment guidelines and use of small molecule therapies for difficult-to-treat gout and gout flares.

    Science.gov (United States)

    Soskind, Rose; Abazia, Daniel T; Bridgeman, Mary Barna

    2017-08-01

    Gout is a rheumatologic condition associated with elevated serum uric acid levels and deposition of monosodium urate crystals in joints and soft tissues. Areas covered: In this article, we describe the role of currently available drug therapies for managing acute gout flares and for reducing serum urate levels. Further, we explore the role of novel small-molecule therapies and biologic agents in the treatment of refractory or severe gout symptoms. A literature search of MEDLINE and MEDLINE In-Process & Other Non-Indexed Citations Databases (1996-June 2017) was conducted utilizing the key words 'gout', 'interleukin-1 inhibitors', 'acute gout', 'gout treatment', 'urate lowering therapies', 'hyperuricemia', 'colchicine', 'pegloticase', 'lesinurad', 'xanthine oxidase', 'xanthine oxidase inhibitors', 'allopurinol', 'febuxostat', 'uricosurics', 'probenecid', and 'benzbromarone'. All published articles regarding therapeutic management of gout and hyperuricemia were evaluated. References of selected articles, data from poster presentations, and abstract publications were additionally reviewed. Expert opinion: Numerous therapies are currently available for managing acute gout flares and for lowering serum urate levels; advances in the understanding of the pathophysiology of this disorder have led to the emergence of targeted therapies and novel biologic preparations currently in development, which may improve the clinical management of severe or refractory cases of disease that fail to respond to traditional therapies.

  8. MDMA-assisted psychotherapy using low doses in a small sample of women with chronic posttraumatic stress disorder.

    Science.gov (United States)

    Bouso, José Carlos; Doblin, Rick; Farré, Magí; Alcázar, Miguel Angel; Gómez-Jarabo, Gregorio

    2008-09-01

    The purpose of this study was to investigate the safety of different doses of MDMA-assisted psychotherapy administered in a psychotherapeutic setting to women with chronic PTSD secondary to a sexual assault, and also to obtain preliminary data regarding efficacy. Although this study was originally planned to include 29 subjects, political pressures led to the closing of the study before it could be finished, at which time only six subjects had been treated. Preliminary results from those six subjects are presented here. We found that low doses of MDMA (between 50 and 75 mg) were both psychologically and physiologically safe for all the subjects. Future studies in larger samples and using larger doses are needed in order to further clarify the safety and efficacy of MDMA in the clinical setting in subjects with PTSD.

  9. Data release for intermediate-density hydrogeochemical and stream sediment sampling in the Vallecito Creek Special Study Area, Colorado, including concentrations of uranium and forty-six additional elements

    International Nuclear Information System (INIS)

    Warren, R.G.

    1981-04-01

    A sediment sample and two water samples were collected at each location, about a kilometer apart, from small tributary streams within the area. One of the two water samples collected at each location was filtered in the field and the other was not. Both samples were acidified to a pH of < 1; field data and uranium concentrations are listed first for the filtered sample (sample type = 07) and then for the unfiltered sample (sample type = 27) for each location in Appendix I-A. Uranium concentrations are higher in unfiltered samples than in filtered samples for most locations. Measured uranium concentrations in control standards analyzed with the water samples are listed in Appendix II. All sediments were air dried and the fraction finer than 100 mesh was separated and analyzed for uranium and forty-six additional elements. Field data and analytical results for each sediment sample are listed in Appendix I-B. Analytical procedures for both water and sediment samples are briefly described in Appendix III. Most bedrock units within the sampled area are of Precambrian age. Three Precambrian units are known or potential hosts for uranium deposits; the Trimble granite is associated with the recently discovered Florida Mountain vein deposit, the Uncompahgre formation hosts a vein-type occurrence in Elk Park near the contact with the Irving formation, and the Vallecito conglomerate has received some attention as a possible host for a quartz pebble conglomerate deposit. Nearly all sediment samples collected downslope from exposures of Trimble granite (geologic unit symbol "T" in Appendix I) contain unusually high uranium concentrations. High uranium concentrations in sediment also occur for an individual sample location that has a geologic setting similar to the Elk Park occurrence and for a sample associated with the Vallecito conglomerate.

  10. Altering Practices to Include Bimodal-bilingual (ASL-Spoken English) Programming at a Small School for the Deaf in Canada.

    Science.gov (United States)

    Priestley, Karen; Enns, Charlotte; Arbuckle, Shauna

    2018-01-01

    Bimodal-bilingual programs are emerging as one way to meet broader needs and provide expanded language, educational and social-emotional opportunities for students who are deaf and hard of hearing (Marschark, M., Tang, G. & Knoors, H. (Eds). (2014). Bilingualism and bilingual Deaf education. New York, NY: Oxford University Press; Paludneviciene & Harris, R. (2011). Impact of cochlear implants on the deaf community. In Paludneviciene, R. & Leigh, I. (Eds.), Cochlear implants evolving perspectives (pp. 3-19). Washington, DC: Gallaudet University Press). However, there is limited research on students' spoken language development, signed language growth, academic outcomes or the social-emotional factors associated with these programs (Marschark, M., Tang, G. & Knoors, H. (Eds). (2014). Bilingualism and bilingual Deaf education. New York, NY: Oxford University Press; Nussbaum, D & Scott, S. (2011). The cochlear implant education center: Perspectives on effective educational practices. In Paludneviciene, R. & Leigh, I. (Eds.) Cochlear implants evolving perspectives (pp. 175-205). Washington, DC: Gallaudet University Press; Spencer, P. & Marschark, M. (Eds.) (2010). Evidence-based practice in educating deaf and hard-of-hearing students. New York, NY: Oxford University Press). The purpose of this case study was to look at formal and informal student outcomes as well as staff and parent perceptions during the first 3 years of implementing a bimodal-bilingual (ASL and spoken English) program within an ASL milieu at a small school for the deaf. Speech and language assessment results for five students were analyzed over a 3-year period and indicated that the students made significant positive gains in all areas, although results were variable. Staff and parent

  11. The challenge of NSCLC diagnosis and predictive analysis on small samples. Practical approach of a working group

    DEFF Research Database (Denmark)

    Thunnissen, Erik; Kerr, Keith M; Herth, Felix J F

    2012-01-01

    Until recently, the division of pulmonary carcinomas into small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) was adequate for therapy selection. Due to the emergence of new treatment options subtyping of NSCLC and predictive testing have become mandatory. A practical approach to...

  12. The development of small, cabled, real-time video based observation systems for near shore coastal marine science including three examples and lessons learned

    Science.gov (United States)

    Hatcher, Gerry; Okuda, Craig

    2016-01-01

    The effects of climate change on the near shore coastal environment including ocean acidification, accelerated erosion, destruction of coral reefs, and damage to marine habitat have highlighted the need for improved equipment to study, monitor, and evaluate these changes [1]. This is especially true where areas of study are remote, large, or beyond depths easily accessible to divers. To this end, we have developed three examples of low cost and easily deployable real-time ocean observation platforms. We followed a scalable design approach adding complexity and capability as familiarity and experience were gained with system components saving both time and money by reducing design mistakes. The purpose of this paper is to provide information for the researcher, technician, or engineer who finds themselves in need of creating or acquiring similar platforms.

  13. Incidence of isolated nodal failure in non-small cell lung cancer patients included in a prospective study of the value of PET–CT

    International Nuclear Information System (INIS)

    Kolodziejczyk, Milena; Bujko, Krzysztof; Michalski, Wojciech; Kepka, Lucyna

    2012-01-01

    Purpose: Elective nodal irradiation (ENI) is not recommended in PET–CT-based radiotherapy for NSCLC despite a low level of evidence to support such guidelines. The aim of this investigation is to find out whether omitting ENI is safe. Materials and methods: Sixty-seven patients treated within a frame of a previously published prospective trial of the value of PET–CT were included in the analysis. Seventeen (25%) patients received ENI due to higher initial nodal involvement and in the remaining 50 patients (75%) with N0-N1 or single N2 disease ENI was omitted. Isolated nodal failure (INF) was recorded if relapse occurred in the initially uninvolved regional lymph node without previous or simultaneous local recurrence regardless of the status of distant metastases. Results: With a median follow-up of 32 months, the estimated 3-year overall survival was 42%, local progression-free interval was 55%, and distant metastases-free interval was 62%. Three patients developed INF; all had ENI omitted from treatment, giving a final result of three INFs in 50 (6%) patients treated without ENI. In this group of patients, the 3-year cause-specific cumulative incidence of INF was 6.4% (95% confidence interval: 0–17%). Conclusions: The omission of ENI appears to be not as safe as suggested by current recommendations.

  14. Incidence of isolated nodal failure in non-small cell lung cancer patients included in a prospective study of the value of PET-CT.

    Science.gov (United States)

    Kolodziejczyk, Milena; Bujko, Krzysztof; Michalski, Wojciech; Kepka, Lucyna

    2012-07-01

    Elective nodal irradiation (ENI) is not recommended in PET-CT-based radiotherapy for NSCLC despite a low level of evidence to support such guidelines. The aim of this investigation is to find out whether omitting ENI is safe. Sixty-seven patients treated within a frame of a previously published prospective trial of the value of PET-CT were included in the analysis. Seventeen (25%) patients received ENI due to higher initial nodal involvement and in the remaining 50 patients (75%) with N0-N1 or single N2 disease ENI was omitted. Isolated nodal failure (INF) was recorded if relapse occurred in the initially uninvolved regional lymph node without previous or simultaneous local recurrence regardless of the status of distant metastases. With a median follow-up of 32 months, the estimated 3-year overall survival was 42%, local progression-free interval was 55%, and distant metastases-free interval was 62%. Three patients developed INF; all had ENI omitted from treatment, giving a final result of three INFs in 50 (6%) patients treated without ENI. In this group of patients, the 3-year cause-specific cumulative incidence of INF was 6.4% (95% confidence interval: 0-17%). The omission of ENI appears to be not as safe as suggested by current recommendations. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. Effects of Sample Impurities on the Analysis of MS2 Bacteriophage by Small-Angle Neutron Scattering

    National Research Council Canada - National Science Library

    Elashvili, Ilya; Wick, Charles H; Kuzmanovic, Deborah A; Krueger, Susan; O'Connell, Catherine

    2005-01-01

    .... Small molecular weight impurities distort the resolution and sharpness of contrast variation peaks in structural data obtained by SANS of the bacteriophage MS2...

  16. Sorption of Sr, Co and Zn on illite: Batch experiments and modelling including Co in-diffusion measurements on compacted samples

    Science.gov (United States)

    Montoya, V.; Baeyens, B.; Glaus, M. A.; Kupcik, T.; Marques Fernandes, M.; Van Laer, L.; Bruggeman, C.; Maes, N.; Schäfer, T.

    2018-02-01

    Experimental investigations on the uptake of divalent cations (Sr, Co and Zn) onto illite (Illite du Puy, Le-Puy-en-Velay, France) were carried out by three different international research groups (Institute for Nuclear Waste Disposal, KIT (Germany), Group Waste & Disposal, SCK-CEN, (Belgium) and Laboratory for Waste Management, PSI (Switzerland)) in the framework of the European FP7 CatClay project. The dependence of solid-liquid distribution ratios (Rd values) on pH at trace metal conditions (sorption edges) and on the metal ion concentration (sorption isotherms) was determined in dilute suspensions of homo-ionic Na-illite (Na-IdP) under controlled N2 atmosphere. The experimental results were modelled using the 2 Site Protolysis Non Electrostatic Surface Complexation and Cation Exchange (2SPNE SC/CE) sorption model. The sorption of Sr depends strongly on ionic strength, while a rather weak pH dependence is observed in a pH range between 3 and 11. The data were modelled with cation exchange reactions, taking into account competition with H, K, Ca, Mg and Al, and surface complexation on weak amphoteric edge sites at higher pH values. The sorption of Co on Na-IdP, however, is strongly pH dependent. Cation exchange on the planar sites and surface complexation on strong and weak amphoteric edge sites were used to describe the Co sorption data. Rd values for Co derived from in-diffusion measurements on compacted Na-IdP samples (bulk-dry density of 1700 kg m⁻³) between pH 5.0 and 9.0 are in good agreement with the batch sorption data. The equivalence of both approaches to measure sorption was thus confirmed for the present test system. In addition, the results highlight the importance of both major and minor surface species for the diffusive transport behaviour of strongly sorbing metal cations. While surface complexes at the edge sites determine largely the Rd value, the diffusive flux may be governed by those species bound to the planar sites, even at low fractional
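    For reference, the batch distribution ratio reported throughout this record is computed from the depletion of the metal in solution. The sketch below shows that calculation with made-up numbers; the variable names and values are illustrative only.

```python
import math

# Batch sorption distribution ratio: Rd = (C_init - C_eq) / C_eq * V / m
c_init = 1.0e-6   # initial metal concentration in solution (mol/L), assumed
c_eq = 2.0e-8     # equilibrium concentration after contact with illite (mol/L), assumed
volume_l = 0.02   # solution volume (L), assumed
mass_kg = 5.0e-5  # dry mass of Na-illite (kg), assumed

rd = (c_init - c_eq) / c_eq * volume_l / mass_kg  # L/kg
print(f"Rd = {rd:.2e} L/kg (log10 Rd = {math.log10(rd):.2f})")
```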

  17. Analysis and comparison of fish growth from small samples of length-at-age data: Detection of sexual dimorphism in Eurasian perch as an example

    NARCIS (Netherlands)

    Mooij, WM; Van Rooij, JM; Wijnhoven, S

    A relatively simple approach is presented for statistical analysis and comparison of fish growth patterns inferred from size-at-age data. It can be used for any growth model and small sample sizes. Bootstrapping is used to generate confidence regions for the model parameters and for size and growth
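
    The bootstrap step sketched in this record can be illustrated with a short Python example. The growth model, the length-at-age values and the starting parameters below are hypothetical placeholders (a von Bertalanffy curve is assumed), not data from the study:

      import numpy as np
      from scipy.optimize import curve_fit

      def von_bertalanffy(age, L_inf, K, t0):
          # Standard von Bertalanffy growth function: length as a function of age.
          return L_inf * (1.0 - np.exp(-K * (age - t0)))

      rng = np.random.default_rng(0)
      # Hypothetical small length-at-age sample (age in years, length in mm).
      age = np.array([1, 1, 2, 2, 3, 3, 4, 5, 6, 7], dtype=float)
      length = np.array([62, 70, 105, 112, 140, 151, 170, 192, 205, 215], dtype=float)

      boot_params = []
      while len(boot_params) < 2000:
          idx = rng.integers(0, age.size, age.size)        # resample pairs with replacement
          try:
              p, _ = curve_fit(von_bertalanffy, age[idx], length[idx],
                               p0=[250.0, 0.3, 0.0], maxfev=10000)
              boot_params.append(p)
          except RuntimeError:
              continue                                     # skip resamples that fail to converge

      boot_params = np.array(boot_params)
      lo, hi = np.percentile(boot_params, [2.5, 97.5], axis=0)
      for name, low, high in zip(["L_inf", "K", "t0"], lo, hi):
          print(f"{name}: 95% bootstrap confidence interval [{low:.2f}, {high:.2f}]")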

  18. Modernizing Agrifood Markets : Including Small Producers in ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Observers fear that new patterns of supply and marketing are biased in ... these concerns in the following countries: China, India, Indonesia, Mexico, Poland, ... prevent gender-based violence, and make digital platforms work for inclusive ...

  19. Investigation of the Effect of Small Hardening Spots Created on the Sample Surface by Laser Complex with Solid-State Laser

    Science.gov (United States)

    Nozdrina, O.; Zykov, I.; Melnikov, A.; Tsipilev, V.; Turanov, S.

    2018-03-01

    This paper describes the results of an investigation of the effect of small hardening spots (about 1 mm) created on the surface of a sample by a laser complex with a solid-state laser. The melted area of the steel sample does not exceed 5%. The change in steel microhardness in the region subjected to laser treatment is studied, and the dependence of sample deformation on tension is presented. As a result, changes in the yield plateau and plastic properties were detected. The flow lines were tracked in a series of speckle photographs, showing how a millimetre-scale surface inhomogeneity can influence the deformation and strength properties of steel.

  20. Approaches for cytogenetic and molecular analyses of small flow-sorted cell populations from childhood leukemia bone marrow samples

    DEFF Research Database (Denmark)

    Obro, Nina Friesgaard; Madsen, Hans O.; Ryder, Lars Peter

    2011-01-01

    defined cell populations with subsequent analyses of leukemia-associated cytogenetic and molecular marker. The approaches described here optimize the use of the same tube of unfixed, antibody-stained BM cells for flow-sorting of small cell populations and subsequent exploratory FISH and PCR-based analyses....

  1. A novel device for batch-wise isolation of α-cellulose from small-amount wholewood samples

    OpenAIRE

    T. Wieloch; Gerhard Helle; Ingo Heinrich; Michael Voigt; P. Schyma

    2011-01-01

    A novel device for the chemical isolation of α-cellulose from wholewood material of tree rings was designed by the Potsdam Dendro Laboratory. It allows the simultaneous treatment of up to several hundred micro samples. Key features are the batch-wise exchange of the chemical solutions, the reusability of all major parts and the easy and unambiguous labelling of each individual sample. Compared to classical methods, labour intensity and running costs are significantly reduced.

  2. Measurement of large asymptotic reactor periods (from about 10³ to 4×10⁴ s) to determine reactivity effects of small samples

    International Nuclear Information System (INIS)

    Grinevich, F.A.; Evchuk, A.I.; Klimentov, V.B.; Tyzh, A.V.; Churkin, Yu.I.; Yaroshevich, O.I.

    1977-01-01

    All investigation programs on fast reactor physics include measurements of low reactivity values of (1-0.01)×10⁻⁵ ΔK/K. An application of the pile oscillator technique for this purpose requires a special critical assembly for installation of the oscillator. Thus it is of interest to develop relatively simple methods; in particular, one such method is the asymptotic period method, which is widely used for low reactivity measurements. The description of the method and the equipment developed for low reactivity measurements based on measurements of the steady-state reactor period is presented. The equipment has been tested on the BTS-2 fast-thermal critical assembly. Measurement results on the reactivity effects of small samples in the fast zone centre are given. It is shown that the application of the method of measuring long steady-state periods, together with the developed and tested equipment, enables a reactivity of (1 ± 0.02)×10⁻⁵ ΔK/K to be determined at a critical assembly power of 5 to 10 W. The disadvantage of the method presented is the time lost in reaching the steady-state period, which results in greater sensitivity of the method to reactivity drifts

  3. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Science.gov (United States)

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies force the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that are able to deal with small sample volumes, as the trial-related blood loss is very restricted in children. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects. The latter hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances, and experiences in solid-phase extraction are presented using, as an example, the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972
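
    As an aside, the recovery and matrix-effect figures quoted above follow from simple peak-area ratios; a small Python sketch using the commonly cited post-extraction-spike definitions (the peak areas are invented, and the exact formulas used in this validation may differ):

      # Hypothetical peak areas for one concentration level.
      area_neat_standard = 1.00e6          # analyte in pure solvent
      area_post_extraction_spike = 0.92e6  # blank matrix extracted, then spiked
      area_pre_extraction_spike = 0.85e6   # matrix spiked before extraction

      matrix_effect_pct = 100.0 * area_post_extraction_spike / area_neat_standard
      recovery_pct = 100.0 * area_pre_extraction_spike / area_post_extraction_spike
      process_efficiency_pct = 100.0 * area_pre_extraction_spike / area_neat_standard

      print(f"Matrix effect:       {matrix_effect_pct:.1f}%  (100% = no suppression or enhancement)")
      print(f"Extraction recovery: {recovery_pct:.1f}%")
      print(f"Process efficiency:  {process_efficiency_pct:.1f}%")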

  4. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Directory of Open Access Journals (Sweden)

    Bjoern B. Burckhardt

    2015-01-01

    Full Text Available In the USA and Europe, medicines agencies force the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that are able to deal with small sample volumes, as the trial-related blood loss is very restricted in children. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects. The latter hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances, and experiences in solid-phase extraction are presented using, as an example, the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers.

  5. Analysis of Reflectance and Transmittance Measurements on Absorbing and Scattering Small Samples Using a Modified Integrating Sphere Setup

    DEFF Research Database (Denmark)

    Jernshøj, Kit Drescher; Hassing, Søren

    2009-01-01

    The purpose of the article is to analyse reflectance and transmittance measurements on small scattering and absorbing samples. Small samples, such as green leaves, pose a particular experimental challenge when the sample beam has a larger cross-section than the object to be measured. The experimental errors introduced...

  6. Multi-actinide analysis with AMS for ultra-trace determination and small sample sizes: advantages and drawbacks

    Energy Technology Data Exchange (ETDEWEB)

    Quinto, Francesca; Lagos, Markus; Plaschke, Markus; Schaefer, Thorsten; Geckeis, Horst [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology (Germany); Steier, Peter; Golser, Robin [VERA Laboratory, Faculty of Physics, University of Vienna (Austria)

    2016-07-01

    With the abundance sensitivities of AMS for U-236, Np-237 and Pu-239 relative to U-238 at levels lower than 10⁻¹⁵, a simultaneous determination of several actinides without previous chemical separation from each other is possible. The actinides are extracted from the matrix elements via an iron hydroxide co-precipitation and the nuclides sequentially measured from the same sputter target. This simplified method allows for the use of non-isotopic tracers and consequently the determination of Np-237 and Am-243, for which isotopic tracers with the degree of purity required by ultra-trace mass-spectrometric analysis are not available. With detection limits of circa 10⁴ atoms in a sample, 10⁸ atoms are determined with circa 1% relative uncertainty due to counting statistics. This allows for an unprecedented reduction of the sample size down to 100 ml of natural water. However, the use of non-isotopic tracers introduces a dominating uncertainty of up to 30% related to the reproducibility of the results. The advantages and drawbacks of the novel method will be presented with the aid of recent results from the CFM Project at the Grimsel Test Site and from the investigation of global fallout in environmental samples.
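
    The quoted ~1% relative uncertainty for 10⁸ atoms in a sample is consistent with Poisson counting statistics once an overall detection efficiency is assumed; the efficiency value in the illustration below is an assumption, not a figure from the record:

      \[
        \sigma_{\mathrm{rel}} = \frac{1}{\sqrt{N_{\mathrm{counts}}}},
        \qquad
        N_{\mathrm{counts}} = \varepsilon\, N_{\mathrm{atoms}}
        \approx 10^{-4} \times 10^{8} = 10^{4}
        \quad\Rightarrow\quad
        \sigma_{\mathrm{rel}} \approx 1\%.
      \]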

  7. SampleCNN: End-to-End Deep Convolutional Neural Networks Using Very Small Filters for Music Classification

    Directory of Open Access Journals (Sweden)

    Jongpil Lee

    2018-01-01

    Full Text Available Convolutional Neural Networks (CNN have been applied to diverse machine learning tasks for different modalities of raw data in an end-to-end fashion. In the audio domain, a raw waveform-based approach has been explored to directly learn hierarchical characteristics of audio. However, the majority of previous studies have limited their model capacity by taking a frame-level structure similar to short-time Fourier transforms. We previously proposed a CNN architecture which learns representations using sample-level filters beyond typical frame-level input representations. The architecture showed comparable performance to the spectrogram-based CNN model in music auto-tagging. In this paper, we extend the previous work in three ways. First, considering the sample-level model requires much longer training time, we progressively downsample the input signals and examine how it affects the performance. Second, we extend the model using multi-level and multi-scale feature aggregation technique and subsequently conduct transfer learning for several music classification tasks. Finally, we visualize filters learned by the sample-level CNN in each layer to identify hierarchically learned features and show that they are sensitive to log-scaled frequency.
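
    A minimal sketch of a sample-level 1-D convolutional front end of the kind described, written in Python with PyTorch; the layer widths, depths and input length are illustrative and are not the published SampleCNN configuration:

      import torch
      import torch.nn as nn

      class SampleLevelBlock(nn.Module):
          # Convolution over raw audio with a very small filter (length 3), then pooling.
          def __init__(self, in_ch, out_ch):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                  nn.BatchNorm1d(out_ch),
                  nn.ReLU(),
                  nn.MaxPool1d(3),
              )

          def forward(self, x):
              return self.net(x)

      class TinySampleCNN(nn.Module):
          def __init__(self, n_classes=10):
              super().__init__()
              self.stem = nn.Conv1d(1, 32, kernel_size=3, stride=3)   # sample-level stem, no STFT frames
              self.blocks = nn.Sequential(
                  SampleLevelBlock(32, 32),
                  SampleLevelBlock(32, 64),
                  SampleLevelBlock(64, 64),
              )
              self.head = nn.Linear(64, n_classes)

          def forward(self, x):                 # x: (batch, 1, n_samples) raw waveform
              h = self.blocks(self.stem(x))
              h = h.mean(dim=-1)                # global average pooling over time
              return self.head(h)

      if __name__ == "__main__":
          wav = torch.randn(2, 1, 59049)        # roughly 2.7 s of audio at 22.05 kHz
          print(TinySampleCNN()(wav).shape)     # torch.Size([2, 10])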

  8. A Simple Method for Automated Solid Phase Extraction of Water Samples for Immunological Analysis of Small Pollutants.

    Science.gov (United States)

    Heub, Sarah; Tscharner, Noe; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2016-01-01

    A new method for solid phase extraction (SPE) of environmental water samples is proposed. The developed prototype is cost-efficient and user-friendly, and enables rapid, automated and simple SPE. The pre-concentrated solution is compatible with analysis by immunoassay, with a low organic solvent content. A method is described for the extraction and pre-concentration of the natural hormone 17β-estradiol in 100 ml water samples. Reverse-phase SPE is performed with octadecyl-silica sorbent and elution is done with 200 µl of methanol 50% v/v. The eluent is diluted by adding deionized water to lower the amount of methanol. After preparing the SPE column manually, the overall procedure is performed automatically within 1 hr. At the end of the process, the estradiol concentration is measured by using a commercial enzyme-linked immunosorbent assay (ELISA). 100-fold pre-concentration is achieved and the methanol content is only 10% v/v. Full recoveries of the molecule are achieved with 1 ng/L spiked de-ionized and synthetic sea water samples.

  9. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries, yet there is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6% of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95% confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, with a fault defined as the estimated production of glutamate falling outside the 95% confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when the fermentation conditions returned to normal. The proposed approach used only a small sample set from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
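
    A conceptual Python sketch of the fault-flagging logic described above, with an ordinary polynomial fit standing in for the GAM and invented fermentation data; a GAM library would replace the polynomial smoother in practice:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical "normal" fermentation data: time (h) vs glutamate concentration (g/L).
      t_normal = np.linspace(0, 40, 80)
      y_normal = 0.05 * t_normal**2 + rng.normal(0.0, 0.5, t_normal.size)

      def fit_predict(t_fit, y_fit, t_new, degree=2):
          # Stand-in smoother for the GAM: ordinary polynomial least squares.
          return np.polyval(np.polyfit(t_fit, y_fit, degree), t_new)

      # Bootstrap an approximate 95% band for normal behaviour on a time grid.
      t_grid = np.linspace(0, 40, 200)
      boot_curves = []
      for _ in range(1000):
          idx = rng.integers(0, t_normal.size, t_normal.size)
          curve = fit_predict(t_normal[idx], y_normal[idx], t_grid)
          resid = y_normal[idx] - fit_predict(t_normal[idx], y_normal[idx], t_normal[idx])
          boot_curves.append(curve + rng.choice(resid, size=t_grid.size))  # crude prediction band
      lower, upper = np.percentile(np.array(boot_curves), [2.5, 97.5], axis=0)

      def is_fault(t_obs, y_obs):
          # Flag a fault when an online observation leaves the 95% band of normal runs.
          return not (np.interp(t_obs, t_grid, lower) <= y_obs <= np.interp(t_obs, t_grid, upper))

      print(is_fault(30.0, 45.0))   # close to the normal trend at t = 30 h -> expected False
      print(is_fault(30.0, 25.0))   # far below the normal trend            -> expected True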

  10. A small, lightweight multipollutant sensor system for ground-mobile and aerial emission sampling from open area sources

    Science.gov (United States)

    Characterizing highly dynamic, transient, and vertically lofted emissions from open area sources poses unique measurement challenges. This study developed and applied a multipollutant sensor and integrated sampler system for use on mobile applications including tethered balloons ...

  11. Detection of Small Numbers of Campylobacter jejuni and Campylobacter coli Cells in Environmental Water, Sewage, and Food Samples by a Seminested PCR Assay

    Science.gov (United States)

    Waage, Astrid S.; Vardund, Traute; Lund, Vidar; Kapperud, Georg

    1999-01-01

    A rapid and sensitive assay was developed for detection of small numbers of Campylobacter jejuni and Campylobacter coli cells in environmental water, sewage, and food samples. Water and sewage samples were filtered, and the filters were enriched overnight in a nonselective medium. The enrichment cultures were prepared for PCR by a rapid and simple procedure consisting of centrifugation, proteinase K treatment, and boiling. A seminested PCR based on specific amplification of the intergenic sequence between the two Campylobacter flagellin genes, flaA and flaB, was performed, and the PCR products were visualized by agarose gel electrophoresis. The assay allowed us to detect 3 to 15 CFU of C. jejuni per 100 ml in water samples containing a background flora consisting of up to 8,700 heterotrophic organisms per ml and 10,000 CFU of coliform bacteria per 100 ml. Dilution of the enriched cultures 1:10 with sterile broth prior to the PCR was sometimes necessary to obtain positive results. The assay was also conducted with food samples analyzed with or without overnight enrichment. As few as ≤3 CFU per g of food could be detected with samples subjected to overnight enrichment, while variable results were obtained for samples analyzed without prior enrichment. This rapid and sensitive nested PCR assay provides a useful tool for specific detection of C. jejuni or C. coli in drinking water, as well as environmental water, sewage, and food samples containing high levels of background organisms. PMID:10103261

  12. Sample types applied for molecular diagnosis of therapeutic management of advanced non-small cell lung cancer in the precision medicine.

    Science.gov (United States)

    Han, Yanxi; Li, Jinming

    2017-10-26

    In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to the disparity of sampling, preparation and preservation procedures, the heterogeneity of inter- or intratumors, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them can individually be "one size fits all". Therefore, in this review, we summarized the strengths and weaknesses of the different sample types that are widely used in clinical practice, offered solutions to reduce the negative impact of the samples and proposed an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and formulate individually tailored treatment plans for NSCLC patients in advanced stages.

  13. A comparison of turtle sampling methods in a small lake in Standing Stone State Park, Overton County, Tennessee

    Science.gov (United States)

    Weber, A.; Layzer, James B.

    2011-01-01

    We used basking traps and hoop nets to sample turtles in Standing Stone Lake at 2-week intervals from May to November 2006. In alternate weeks, we conducted visual basking surveys. We collected and observed four species of turtles: spiny softshell (Apalone spinifera), northern map turtle (Graptemys geographica), pond slider (Trachemys scripta), and snapping turtle (Chelydra serpentina). Relative abundances varied greatly among sampling methods. To varying degrees, all methods were species selective. Population estimates from mark and recaptures of three species, basking counts, and hoop net catches indicated that pond sliders were the most abundant species, but northern map turtles were 8× more abundant than pond sliders in basking trap catches. We saw relatively few snapping turtles basking even though population estimates indicated they were the second most abundant species. Populations of all species were dominated by adult individuals. Sex ratios of three species differed significantly from 1:1. Visual surveys were the most efficient method for determining the presence of species, but capture methods were necessary to obtain size and sex data.

  14. Criticality Safety Evaluation for Small Sample Preparation and Non-Destructive Assay (NDA) Operations in Wing 7 Basement of the CMR Facility

    Energy Technology Data Exchange (ETDEWEB)

    Kunkle, Paige Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zhang, Ning [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-02

    Nuclear Criticality Safety (NCS) has reviewed the fissionable material small sample preparation and NDA operations in Wing 7 Basement of the CMR Facility. This is a Level-1 evaluation conducted in accordance with NCS-AP-004 [Reference 1], formerly NCS-GUIDE-01, and the guidance set forth on use of the Standard Criticality Safety Requirements (SCSRs) [Reference 2]. As stated in Reference 2, the criticality safety evaluation consists of both the SCSR CSED and the SCSR Application CSED. The SCSR CSED is a Level-3 CSED [Reference 3]. This Level-1 CSED is the SCSR Application CSED. This SCSR Application (Level-1) evaluation does not derive controls, it simply applies controls derived from the SCSR CSED (Level-3) for the application of operations conducted here. The controls derived in the SCSR CSED (Level-3) were evaluated via the process described in Section 6.6.5 of SD-130 (also reproduced in Section 4.3.5 of NCS-AP-004 [Reference 1]) and were determined to not meet the requirements for consideration of elevation into the safety basis documentation for CMR. According to the guidance set forth on use of the SCSRs [Reference 2], the SCSR CSED (Level-3) is also applicable to the CMR Facility because the process and the normal and credible abnormal conditions in question are bounded by those that are described in the SCSR CSED. The controls derived in the SCSR CSED include allowances for solid materials and solution operations. Based on the operations conducted at this location, there are less-than-accountable (LTA) amounts of 233U. Based on the evaluation documented herein, the normal and credible abnormal conditions that might arise during the execution of this process will remain subcritical with the following recommended controls.

  15. Mediastinal lymph node dissection versus mediastinal lymph node sampling for early stage non-small cell lung cancer: a systematic review and meta-analysis.

    Science.gov (United States)

    Huang, Xiongfeng; Wang, Jianmin; Chen, Qiao; Jiang, Jielin

    2014-01-01

    This systematic review and meta-analysis aimed to evaluate the overall survival, local recurrence, distant metastasis, and complications of mediastinal lymph node dissection (MLND) versus mediastinal lymph node sampling (MLNS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients. A systematic search of published literature was conducted using the main databases (MEDLINE, PubMed, EMBASE, and Cochrane databases) to identify relevant randomized controlled trials that compared MLND vs. MLNS in NSCLC patients. The methodological quality of the included randomized controlled trials was assessed according to the criteria from the Cochrane Handbook for Systematic Reviews of Interventions (Version 5.1.0). Meta-analysis was performed using The Cochrane Collaboration's Review Manager 5.3. The results of the meta-analysis were expressed as hazard ratio (HR) or risk ratio (RR), with their corresponding 95% confidence interval (CI). We included results reported from six randomized controlled trials, with a total of 1,791 patients included in the primary meta-analysis. Compared with MLNS in NSCLC patients, MLND showed no statistically significant difference in overall survival (HR = 0.77, 95% CI 0.55 to 1.08; P = 0.13). In addition, the results indicated that the local recurrence rate (RR = 0.93, 95% CI 0.68 to 1.28; P = 0.67), distant metastasis rate (RR = 0.88, 95% CI 0.74 to 1.04; P = 0.15), and total complications rate (RR = 1.10, 95% CI 0.67 to 1.79; P = 0.72) were similar, with no significant difference found between the two groups. Results for overall survival, local recurrence rate, and distant metastasis rate were similar between MLND and MLNS in early stage NSCLC patients. There was no evidence that MLND increased complications compared with MLNS. Whether or not MLND is superior to MLNS for stage II-IIIA disease remains to be determined.
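
    The pooled hazard ratios reported above rest on inverse-variance weighting; a small Python sketch of fixed-effect pooling with invented per-trial results (real analyses would use Review Manager or an equivalent, and would also consider a random-effects model):

      import math

      # Hypothetical per-trial results: hazard ratio with its 95% confidence interval.
      trials = [
          ("trial A", 0.85, 0.60, 1.20),
          ("trial B", 0.70, 0.45, 1.10),
          ("trial C", 0.95, 0.70, 1.30),
      ]

      sum_w = 0.0
      sum_w_log_hr = 0.0
      for name, hr, lo, hi in trials:
          se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the 95% CI
          w = 1.0 / se**2                                   # inverse-variance weight
          sum_w += w
          sum_w_log_hr += w * math.log(hr)

      pooled_log_hr = sum_w_log_hr / sum_w
      pooled_se = math.sqrt(1.0 / sum_w)
      print(f"Pooled HR (fixed effect) = {math.exp(pooled_log_hr):.2f} "
            f"(95% CI {math.exp(pooled_log_hr - 1.96 * pooled_se):.2f} "
            f"to {math.exp(pooled_log_hr + 1.96 * pooled_se):.2f})")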

  16. A rheo-optical apparatus for real time kinetic studies on shear-induced alignment of self-assembled soft matter with small sample volumes

    Science.gov (United States)

    Laiho, Ari; Ikkala, Olli

    2007-01-01

    In soft materials, self-assembled nanoscale structures can allow new functionalities but a general problem is to align such local structures aiming at monodomain overall order. In order to achieve shear alignment in a controlled manner, a novel type of rheo-optical apparatus has here been developed that allows small sample volumes and in situ monitoring of the alignment process during the shear. Both the amplitude and orientation angles of low level linear birefringence and dichroism are measured while the sample is subjected to large amplitude oscillatory shear flow. The apparatus is based on a commercial rheometer where we have constructed a flow cell that consists of two quartz teeth. The lower tooth can be set in oscillatory motion whereas the upper one is connected to the force transducers of the rheometer. A custom made cylindrical oven allows the operation of the flow cell at elevated temperatures up to 200 °C. Only a small sample volume is needed (from 9 to 25 mm3), which makes the apparatus suitable especially for studying new materials which are usually obtainable only in small quantities. Using this apparatus the flow alignment kinetics of a lamellar polystyrene-b-polyisoprene diblock copolymer is studied during shear under two different conditions which lead to parallel and perpendicular alignment of the lamellae. The open device geometry allows even combined optical/x-ray in situ characterization of the alignment process by combining small-angle x-ray scattering using concepts shown by Polushkin et al. [Macromolecules 36, 1421 (2003)].

  17. Triacylglycerol Analysis in Human Milk and Other Mammalian Species: Small-Scale Sample Preparation, Characterization, and Statistical Classification Using HPLC-ELSD Profiles.

    Science.gov (United States)

    Ten-Doménech, Isabel; Beltrán-Iturat, Eduardo; Herrero-Martínez, José Manuel; Sancho-Llopis, Juan Vicente; Simó-Alfonso, Ernesto Francisco

    2015-06-24

    In this work, a method is described for the separation of triacylglycerols (TAGs) present in human milk and in milk from other mammalian species by reversed-phase high-performance liquid chromatography, using a core-shell particle-packed column with UV and evaporative light-scattering detectors. Under optimal conditions, a mobile phase containing acetonitrile/n-pentanol at 10 °C gave an excellent resolution among more than 50 TAG peaks. A small-scale method for fat extraction from these milks (of particular interest for human milk samples) using minimal amounts of sample and reagents was also developed. The proposed extraction protocol and the traditional method were compared, giving similar results with respect to the total fat and relative TAG contents. Finally, a statistical study based on linear discriminant analysis of the TAG composition of different types of milk (human, cow, sheep, and goat) was carried out to differentiate the samples according to their mammalian origin.
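
    The classification step described above can be sketched with scikit-learn's linear discriminant analysis; the feature matrix below is random placeholder data standing in for relative TAG peak areas, not real measurements:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)

      # Placeholder data: 40 milk samples x 20 relative TAG peak areas, four species.
      n_per_class, n_peaks = 10, 20
      species = np.repeat(["human", "cow", "sheep", "goat"], n_per_class)
      X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_peaks))
                     for i in range(4)])             # class-dependent means, purely for the demo

      lda = LinearDiscriminantAnalysis()
      print("Cross-validated accuracy:", cross_val_score(lda, X, species, cv=5).mean())

      lda.fit(X, species)
      projected = lda.transform(X)                   # at most (n_classes - 1) discriminant axes
      print("Discriminant space shape:", projected.shape)   # (40, 3)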

  18. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Science.gov (United States)

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
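
    In the spirit of the model described, the local reciprocal-space map in each voxel can be written as a band-limited spherical-harmonic expansion; the generic form below is illustrative and not necessarily the paper's exact parametrization:

      \[
        \hat{R}(\mathbf{r}, q, \theta, \varphi)
        = \sum_{\ell=0}^{\ell_{\max}} \sum_{m=-\ell}^{\ell}
          a_{\ell m}(\mathbf{r}, q)\, Y_{\ell}^{m}(\theta, \varphi),
      \]

    where the coefficients a_{ℓm} in every voxel r and momentum-transfer bin q are adjusted so that the projections of the model reproduce the measured scattering patterns.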

  19. Ochratoxin A in raisins and currants: basic extraction procedure used in two small marketing surveys of the occurrence and control of the heterogeneity of the toxins in samples.

    Science.gov (United States)

    Möller, T E; Nyberg, M

    2003-11-01

    A basic extraction procedure for the analysis of ochratoxin A (OTA) in currants and raisins is described, as well as the occurrence of OTA and a check on the heterogeneity of the toxin in samples bought for two small marketing surveys, 1999/2000 and 2001/02. Most samples in the surveys were divided into two subsamples that were individually prepared as slurries and analysed separately. The limit of quantification for the method was estimated as 0.1 µg kg⁻¹ and recoveries of 85, 90 and 115% were achieved in recovery experiments at 10, 5 and 0.1 µg kg⁻¹, respectively. Of all 118 subsamples analysed in the surveys, 96 (84%) contained ochratoxin A at levels above the quantification level and five samples (4%) contained more than the European Community legislative limit of 10 µg kg⁻¹. The OTA concentrations found in the first survey were in the range … . Big differences were often observed between individual subsamples of the original sample, which indicates a widely heterogeneous distribution of the toxin. Data from the repeatability test as well as recovery experiments from the same slurries showed that preparation of slurries as described here seemed to give a homogeneous and representative sample. The extraction with the basic sodium bicarbonate-methanol mixture used in the surveys gave similar or somewhat higher OTA values on some samples tested in a comparison with a weak phosphoric acid water-methanol extraction mixture.

  20. Oxygen consumption during mineralization of organic compounds in water samples from a small sub-tropical reservoir (Brazil

    Directory of Open Access Journals (Sweden)

    Cunha-Santino Marcela Bianchessi da

    2003-01-01

    Full Text Available Assays were carried out to evaluate the oxygen consumption resulting from mineralization of different organic compounds: glucose, sucrose, starch, tannic acid, lysine and glycine. The compounds were added to 1 l of water sample from Monjolinho Reservoir. Dissolved oxygen and dissolved organic carbon were monitored during 20 days and the results were fitted to a first order kinetics model. During the 20 days of experiments, the oxygen consumption varied from 4.5 mg l-1 (tannic acid) to 71.5 mg l-1 (glucose). The highest deoxygenation rate (kD) was observed for mineralization of tannic acid (0.321 day-1), followed by glycine, starch, lysine, sucrose and glucose (0.1004, 0.0504, 0.0486, 0.0251 and 0.0158 day-1, respectively). From theoretical calculations and oxygen and carbon concentrations we obtained the stoichiometry of the mineralization processes. Stoichiometric values varied from 0.17 (tannic acid) to 2.55 (sucrose).
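
    A brief Python sketch of fitting the first-order kinetics model mentioned above to oxygen-consumption data; the time series is invented and only the notation kD follows the abstract:

      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, oc_max, k_d):
          # Accumulated oxygen consumption under first-order mineralization kinetics.
          return oc_max * (1.0 - np.exp(-k_d * t))

      # Hypothetical time series: days vs accumulated O2 consumption (mg l-1).
      t = np.array([0, 1, 2, 4, 6, 8, 10, 14, 20], dtype=float)
      oc = np.array([0.0, 1.1, 2.0, 3.6, 4.8, 5.7, 6.3, 7.1, 7.6])

      (oc_max, k_d), _ = curve_fit(first_order, t, oc, p0=[8.0, 0.1])
      print(f"OC_max = {oc_max:.2f} mg l-1, kD = {k_d:.3f} day-1")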

  1. Dynamics of glucagon secretion in mice and rats revealed using a validated sandwich ELISA for small sample volumes

    DEFF Research Database (Denmark)

    Albrechtsen, Nicolai Jacob Wewer; Kuhre, Rune Ehrenreich; Windeløv, Johanne Agerlin

    2016-01-01

    Glucagon is a metabolically important hormone, but many aspects of its physiology remain obscure, because glucagon secretion is difficult to measure in mice and rats due to methodological inadequacies. Here, we introduce and validate a low-volume, enzyme-linked immunosorbent glucagon assay...... according to current analytical guidelines, including tests of sensitivity, specificity, and accuracy, and compare it, using the Bland-Altman algorithm and size-exclusion chromatography, with three other widely cited assays. After demonstrating adequate performance of the assay, we measured glucagon...... and returning to basal levels at 6 min (mice) and 12 min (rats). d-Mannitol (osmotic control) was without effect. Ketamine/xylazine anesthesia in mice strongly attenuated (P assay. In conclusion, dynamic analysis...

  2. Small polaron hopping conduction in samples of ceramic La1.4Sr1.6Mn2O7.06

    International Nuclear Information System (INIS)

    Nakatsugawa, H.; Iguchi, E.; Jung, W.H.; Munakata, F.

    1999-01-01

    The ceramic sample of La1.4Sr1.6Mn2O7.06 exhibits the metal-insulator transition and a negative magnetoresistance in the vicinity of the Curie temperature (T_C ≈ 100 K). The dc magnetic susceptibility between 100 K and 280 K is nearly constant and decreases gradually with increasing temperature above 280 K. The measurements of dc resistivity and the thermoelectric power indicate that small polaron hopping conduction takes place at T > 280 K. The spin ordering due to the two-dimensional d(x²-y²) state occurring at T > 280 K is directly related to the hopping conduction above 280 K, although the spin ordering due to the one-dimensional d(3z²-r²) state takes place at T > T_C. The two-dimensional d(x²-y²) state extending within the MnO2 sheets starts to narrow and leads to the carrier localisation at 280 K. The effective number of holes in this sample estimated from the thermoelectric power is considerably smaller than the nominal value. This indicates that the small polaron hopping conduction takes place predominantly within the in-plane MnO2 sheets. A discussion is given of the experimental results of the ceramic sample of La2/3Ca1/3MnO2.98. Copyright (1999) CSIRO Australia
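
    For context, small-polaron hopping analyses of this kind are usually based on the adiabatic hopping expression for the resistivity and the Heikes formula for the thermoelectric power; the textbook forms are given below and may differ in detail from the fits used in the paper:

      \[
        \rho(T) = A\,T\,\exp\!\left(\frac{W_H}{k_B T}\right),
        \qquad
        S = \frac{k_B}{e}\,\ln\!\left(\frac{1-c}{c}\right),
      \]

    where W_H is the polaron hopping energy and c is the fraction of sites occupied by carriers; an activation energy deduced from ρ(T) that exceeds the one deduced from S(T) is the usual signature of small-polaron hopping.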

  3. Information in small neuronal ensemble activity in the hippocampal CA1 during delayed non-matching to sample performance in rats

    Directory of Open Access Journals (Sweden)

    Takahashi Susumu

    2009-09-01

    Full Text Available Abstract Background The matrix-like organization of the hippocampus, with its several inputs and outputs, has given rise to several theories related to hippocampal information processing. Single-cell electrophysiological studies and studies of lesions or genetically altered animals using recognition memory tasks such as delayed non-matching-to-sample (DNMS) tasks support the theories. However, a complete understanding of hippocampal function necessitates knowledge of the encoding of information by multiple neurons in a single trial. The role of neuronal ensembles in the hippocampal CA1 for a DNMS task was assessed quantitatively in this study using multi-neuronal recordings and an artificial neural network classifier as a decoder. Results The activity of small neuronal ensembles (6-18 cells) over brief time intervals (2-50 ms) contains accurate information specifically related to the matching/non-matching of continuously presented stimuli (stimulus comparison). The accuracy of the combination of neurons pooled over all the ensembles was markedly lower than that of the ensembles over all examined time intervals. Conclusion The results show that the spatiotemporal patterns of spiking activity among cells in the small neuronal ensemble contain much information that is specifically useful for the stimulus comparison. Small neuronal networks in the hippocampal CA1 might therefore act as a comparator during recognition memory tasks.

  4. An Improved Metabolism Grey Model for Predicting Small Samples with a Singular Datum and Its Application to Sulfur Dioxide Emissions in China

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2016-01-01

    Full Text Available This study proposes an improved metabolism grey model [IMGM(1,1)] to predict small samples with a singular datum, which is a common phenomenon in daily economic data. This new model combines the fitting advantage of the conventional GM(1,1) for small samples and the additional advantages of the MGM(1,1) for new real-time data, while overcoming the limitations of both the conventional GM(1,1) and the MGM(1,1), whose predicted results are vulnerable to any singular datum. Thus, this model can be classified as an improved grey prediction model. Its improvements are illustrated through a case study of sulfur dioxide emissions in China from 2007 to 2013 with a singular datum in 2011. Some features of this model are presented based on the error analysis in the case study. Results suggest that if action is not taken immediately, sulfur dioxide emissions in 2016 will surpass the standard level required by the Twelfth Five-Year Plan proposed by the China State Council.
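
    A compact Python sketch of the conventional GM(1,1) forecasting step on which the improved model builds; this is the textbook grey model, not the authors' IMGM(1,1) extension, and the input series is illustrative:

      import numpy as np

      def gm11_forecast(x0, n_ahead=3):
          """Fit a conventional GM(1,1) grey model to a short positive series x0
          and forecast n_ahead further values."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                            # accumulated generating operation (AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])                 # mean sequence of consecutive AGO values
          B = np.column_stack([-z1, np.ones_like(z1)])
          Y = x0[1:]
          a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # developing coefficient a, grey input b

          k = np.arange(x0.size + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.diff(x1_hat, prepend=x1_hat[0])   # inverse AGO
          x0_hat[0] = x0[0]
          return x0_hat[x0.size:]

      # Illustrative short annual series (arbitrary emission units).
      series = [22.2, 21.8, 21.2, 20.6, 20.2, 19.7]
      print(gm11_forecast(series, n_ahead=3))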

  5. Comparison of Time-of-flight and Multicollector ICP Mass Spectrometers for Measuring Actinides in Small Samples using single shot Laser Ablation

    International Nuclear Information System (INIS)

    R.S. Houk; D.B. Aeschliman; S.J. Bajic; D. Baldwin

    2005-01-01

    The objective of these experiments is to evaluate the performance of two types of ICP-MS device for measurement of actinide isotopes by laser ablation (LA) ICP-MS. The key advantage of ICP-MS compared to monitoring of radioactive decay is that the element need not decay during the measurement time. Hence ICP-MS is much faster for long-lived radionuclides. The LA process yields a transient signal. When spatially resolved analysis is required for small samples, the laser ablation sample pulse lasts only ∼10 seconds. It is difficult to measure signals at several isotopes with analyzers that are scanned for such a short sample transient. In this work, a time-of-flight (TOF) ICP-MS device, the GBC Optimass 8000 (Figure 1) is one instrument used. Strictly speaking, ions at different m/z values are not measured simultaneously in TOF. However, they are measured in very rapid sequence with little or no compromise between the number of m/z values monitored and the performance. Ions can be measured throughout the m/z range in single sample transients by TOF. The other ICP-MS instrument used is a magnetic sector multicollector MS, the NU Plasma 1700 (Figure 2). Up to 8 adjacent m/z values can be monitored at one setting of the magnetic field and accelerating voltage. Three of these m/z values can be measured with an electron multiplier. This device is usually used for high precision isotope ratio measurements with the Faraday cup detectors. The electron multipliers have much higher sensitivity. In our experience with the scanning magnetic sector instrument in Ames, these devices have the highest sensitivity and lowest background of any ICP-MS device. The ability to monitor several ions simultaneously, or nearly so, should make these devices valuable for the intended application: measurement of actinide isotopes at low concentrations in very small samples for nonproliferation purposes. The primary sample analyzed was an urban dust pellet reference material, NIST 1648. The

  6. Big news in small samples

    NARCIS (Netherlands)

    P.C. Schotman (Peter); S. Straetmans; C.G. de Vries (Casper)

    1997-01-01

    textabstractUnivariate time series regressions of the forex return on the forward premium generate mostly negative slope coefficients. Simple and refined panel estimation techniques yield slope estimates that are much closer to unity. We explain the two apparently opposing results by allowing for

  7. Small Boat Bottomfish Sampling Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fishing operations that focus on targeting bottomfish (mostly juvenile opakapaka) that are independent of a larger research vessel, i.e. the Oscar Elton Sette.

  8. Use of a 137Cs re-sampling technique to investigate temporal changes in soil erosion and sediment mobilisation for a small forested catchment in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus

    2014-01-01

    Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly ¹³⁷Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954–1998 with that for the period 1999–2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the ¹³⁷Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure

  9. Miniaturizing 3D assay for high-throughput drug and genetic screens for small patient-derived tumor samples (Conference Presentation)

    Science.gov (United States)

    Rotem, Asaf; Garraway, Levi; Su, Mei-Ju; Basu, Anindita; Regev, Aviv; Struhl, Kevin

    2017-02-01

    Three-dimensional growth conditions reflect the natural environment of cancer cells, and it is crucial that drug screens be performed under such conditions. We developed a 3D assay for cellular transformation that involves growth in low attachment (GILA) conditions and is strongly correlated with the 50-year-old benchmark assay, soft agar. Using GILA, we performed high-throughput screens for drugs and genes that selectively inhibit or increase transformation, but not proliferation. This phenotypic approach is complementary to our genetic approach, which utilizes single-cell RNA-sequencing of a patient sample to identify putative oncogenes that confer sensitivity to drugs designed to specifically inhibit the identified oncoprotein. Currently, we are dealing with a major challenge in our field: the limited number of cells that can be extracted from a biopsy. Small patient-derived samples are hard to test in traditional multiwell plates, and it is helpful to miniaturize the culture area and the experimental system. We designed a microfluidic device suitable for a limited number of cells and perform the assay using image analysis. We aim to test drugs on tumor cells outside the patient's body and recommend the ideal treatment, tailored to the individual. This device will help to minimize biopsy-sampling volumes and minimize interventions in the patient's tumor.

  10. Context matters: volunteer bias, small sample size, and the value of comparison groups in the assessment of research-based undergraduate introductory biology lab courses.

    Science.gov (United States)

    Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.

  11. Context Matters: Volunteer Bias, Small Sample Size, and the Value of Comparison Groups in the Assessment of Research-Based Undergraduate Introductory Biology Lab Courses

    Directory of Open Access Journals (Sweden)

    Sara E. Brownell

    2013-08-01

    Full Text Available The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.

  12. A technique of evaluating most probable stochastic variables from a small number of samples and their accuracies and degrees of confidence

    Energy Technology Data Exchange (ETDEWEB)

    Katoh, K [Ibaraki Pref. Univ. Health Sci., (Japan)

    1997-12-31

    A problem of estimating stochastic characteristics of a population from a small number of samples is solved as an inverse problem, from the viewpoint of information theory and with Bayesian statistics. For both the Poisson process and the Bernoulli process, the most probable values of the characteristics of the mother population, together with their accuracies and degrees of confidence, are successfully obtained. Mathematical expressions are given for the general case, where a limited amount of information and/or knowledge of the stochastic characteristics is available, and for a special case where no a priori information or knowledge is available. Mathematical properties of the solutions obtained and their practical application to radiation measurement problems are also discussed.
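
    As a concrete illustration of the Bayesian reasoning described, for the Poisson-process case with a flat prior (a generic textbook form, not necessarily the exact expressions of the paper):

      \[
        p(\lambda \mid n) \;\propto\; p(n \mid \lambda)\, p(\lambda)
        \;\propto\; \lambda^{n} e^{-\lambda t},
        \qquad
        \hat{\lambda}_{\mathrm{MAP}} = \frac{n}{t},
        \qquad
        \frac{\sigma_{\lambda}}{\hat{\lambda}} \sim \frac{1}{\sqrt{n}},
      \]

    so the most probable rate follows directly from the observed count n in measuring time t, and the relative width of the posterior, i.e. the attainable accuracy, degrades as the count becomes small.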

  13. Decomposition and forecasting analysis of China's energy efficiency: An application of three-dimensional decomposition and small-sample hybrid models

    International Nuclear Information System (INIS)

    Meng, Ming; Shang, Wei; Zhao, Xiaoli; Niu, Dongxiao; Li, Wei

    2015-01-01

    The coordinated actions of the central and the provincial governments are important in improving China's energy efficiency. This paper uses a three-dimensional decomposition model to measure the contribution of each province in improving the country's energy efficiency and a small-sample hybrid model to forecast this contribution. Empirical analysis draws the following conclusions which are useful for the central government to adjust its provincial energy-related policies. (a) There are two important areas for the Chinese government to improve its energy efficiency: adjusting the provincial economic structure and controlling the number of the small-scale private industrial enterprises; (b) Except for a few outliers, the energy efficiency growth rates of the northern provinces are higher than those of the southern provinces; provinces with high growth rates tend to converge geographically; (c) With regard to the energy sustainable development level, Beijing, Tianjin, Jiangxi, and Shaanxi are the best performers and Heilongjiang, Shanxi, Shanghai, and Guizhou are the worst performers; (d) By 2020, China's energy efficiency may reach 24.75 thousand yuan per ton of standard coal; as well as (e) Three development scenarios are designed to forecast China's energy consumption in 2012–2020. - Highlights: • Decomposition and forecasting models are used to analyze China's energy efficiency. • China should focus on the small industrial enterprises and local protectionism. • The energy sustainable development level of each province is evaluated. • Geographic distribution characteristics of energy efficiency changes are revealed. • Future energy efficiency and energy consumption are forecasted

  14. A semi-nested real-time PCR method to detect low chimerism percentage in small quantity of hematopoietic stem cell transplant DNA samples.

    Science.gov (United States)

    Aloisio, Michelangelo; Bortot, Barbara; Gandin, Ilaria; Severini, Giovanni Maria; Athanasakis, Emmanouil

    2017-02-01

    Chimerism status evaluation of post-allogeneic hematopoietic stem cell transplantation samples is essential to predict post-transplant relapse. The most commonly used technique capable of detecting small increments of chimerism is quantitative real-time PCR. Although this method is already used in several laboratories, previously described protocols often lack sensitivity and the amount of the DNA required for each chimerism analysis is too high. In the present study, we compared a novel semi-nested allele-specific real-time PCR (sNAS-qPCR) protocol with our in-house standard allele-specific real-time PCR (gAS-qPCR) protocol. We selected two genetic markers and analyzed technical parameters (slope, y-intercept, R2, and standard deviation) useful to determine the performances of the two protocols. The sNAS-qPCR protocol showed better sensitivity and precision. Moreover, the sNAS-qPCR protocol requires, as input, only 10 ng of DNA, which is at least 10-fold less than the gAS-qPCR protocols described in the literature. Finally, the proposed sNAS-qPCR protocol could prove very useful for performing chimerism analysis with a small amount of DNA, as in the case of blood cell subsets.

  15. A rapid procedure for the determination of thorium, uranium, cadmium and molybdenum in small sediment samples by inductively coupled plasma-mass spectrometry: application in Chesapeake Bay

    International Nuclear Information System (INIS)

    Zheng, Y.; Weinman, B.; Cronin, T.; Fleisher, M.Q.; Anderson, R.F.

    2003-01-01

    This paper describes a rapid procedure that allows precise analysis of Mo, Cd, U and Th in sediment samples as small as 10 mg by using a novel approach that utilizes a 'pseudo' isotope dilution for Th and conventional isotope dilution for Mo, Cd and U by ICP-MS. Long-term reproducibility of the method is between 2.5 and 5% with an advantage of rapid analysis on a single digestion of sediment sample and the potential of adding other elements of interest if so desired. Application of this method to two piston cores collected near the mouth of the Patuxent River in Chesapeake Bay showed that the accumulation of authigenic Mo and Cd varied in response to the changing bottom water redox conditions, with anoxia showing consistent oscillations throughout both pre-industrial and industrial times. Accumulation of authigenic U shows consistent oscillations as well, without any apparent increase in productivity related to anoxic trends. Degrees of Mo and Cd enrichment also inversely correlate to halophilic microfaunal assemblages already established as paleoclimate proxies within the bay indicating that bottom water anoxia is driven in part by the amount of freshwater discharge that the area receives

  16. Thermal transfer and apparent-dose distributions in poorly bleached mortar samples: results from single grains and small aliquots of quartz

    International Nuclear Information System (INIS)

    Jain, M.; Thomsen, K.J.; Boetter-Jensen, L.; Urray, A.S.

    2004-01-01

    In the assessment of doses received from a nuclear accident, considerable attention has been paid to retrospective dosimetry using the optically stimulated luminescence (OSL) of heated materials such as bricks and tiles. Quartz extracted from these artefacts was heated during manufacture; this process releases all the prior trapped charge and simultaneously sensitises the quartz. Unfortunately, unheated materials such as mortar and concrete are more common in industrial sites and particularly in nuclear installations. These materials are usually exposed to daylight during quarrying and construction, but in general this exposure is insufficient to completely empty (bleach) any geological trapped charge. This leads to a distribution of apparent doses in the sample at the time of construction, with only some (if any) grains exposed to sufficient light to be considered well bleached for OSL dosimetry. The challenge in using such materials as retrospective dosemeters is in identifying these well-bleached grains when an accident dose has been superimposed on the original dose distribution. We investigate here, using OSL, the background dose in three different mortar samples: render, whitewash and inner wall plaster from a building built in 1964. These samples are found to be both poorly bleached and weakly sensitive (only 0.3% of grains giving a detectable dose response). We study thermal transfer in single grains of quartz, investigate the grain-size dependence of bleaching in the size range 90-300 μm and compare the dose distributions obtained from small aliquots and single-grain procedures. A comparison of three different methods, viz. (a) first 5%, (b) probability plot and (c) comparison of internal and external uncertainties, is made for equivalent dose estimation. The results have implications for accident dosimetry, archaeological studies and dating of poorly bleached sediments.

  17. Ultra-trace plutonium determination in small volume seawater by sector field inductively coupled plasma mass spectrometry with application to Fukushima seawater samples.

    Science.gov (United States)

    Bu, Wenting; Zheng, Jian; Guo, Qiuju; Aono, Tatsuo; Tagami, Keiko; Uchida, Shigeo; Tazoe, Hirofumi; Yamada, Masatoshi

    2014-04-11

    Long-term monitoring of Pu isotopes in seawater is required for assessing Pu contamination in the marine environment from the Fukushima Dai-ichi Nuclear Power Plant accident. In this study, we established an accurate and precise analytical method based on anion-exchange chromatography and SF-ICP-MS. This method was able to determine Pu isotopes in seawater samples with small volumes (20-60 L). The U decontamination factor was 3×10⁷ to 1×10⁸, which provided sufficient removal of interfering U from the seawater samples. The estimated limits of detection for ²³⁹Pu and ²⁴⁰Pu were 0.11 fg mL⁻¹ and 0.08 fg mL⁻¹, respectively, which corresponded to 0.01 mBq m⁻³ for ²³⁹Pu and 0.03 mBq m⁻³ for ²⁴⁰Pu when a 20 L volume of seawater was measured. We achieved good precision (2.9%) and accuracy (0.8%) for measurement of the ²⁴⁰Pu/²³⁹Pu atom ratio in the standard Pu solution with a ²³⁹Pu concentration of 11 fg mL⁻¹ and a ²⁴⁰Pu concentration of 2.7 fg mL⁻¹. Seawater reference materials were used for the method validation and both the ²³⁹⁺²⁴⁰Pu activities and ²⁴⁰Pu/²³⁹Pu atom ratios agreed well with the expected values. Surface and bottom seawater samples collected off Fukushima in the western North Pacific since March 2011 were analyzed. Our results suggested that there was no significant variation of the Pu distribution in seawater in the investigated areas compared to the distribution before the accident. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Spectroelectrochemical Sensing Based on Multimode Selectivity simultaneously Achievable in a Single Device. 11. Design and Evaluation of a Small Portable Sensor for the Determination of Ferrocyanide in Hanford Waste Samples

    International Nuclear Information System (INIS)

    Stegemiller, Michael L.; Heineman, William R.; Seliskar, Carl J.; Ridgway, Thomas H.; Bryan, Samuel A.; Hubler, Timothy L.; Sell, Richard L.

    2003-01-01

    Spectroelectrochemical sensing based on multimode selectivity simultaneously achievable in a single device. 11. Design and evaluation of a small portable sensor for the determination of ferrocyanide in Hanford waste samples

  19. Analytic device including nanostructures

    KAUST Repository

    Di Fabrizio, Enzo M.; Fratalocchi, Andrea; Totero Gongora, Juan Sebastian; Coluccio, Maria Laura; Candeloro, Patrizio; Cuda, Gianni

    2015-01-01

    A device for detecting an analyte in a sample comprising: an array including a plurality of pixels, each pixel including a nanochain comprising: a first nanostructure, a second nanostructure, and a third nanostructure, wherein size of the first nanostructure is larger than that of the second nanostructure, and size of the second nanostructure is larger than that of the third nanostructure, and wherein the first nanostructure, the second nanostructure, and the third nanostructure are positioned on a substrate such that when the nanochain is excited by an energy, an optical field between the second nanostructure and the third nanostructure is stronger than an optical field between the first nanostructure and the second nanostructure, wherein the array is configured to receive a sample; and a detector arranged to collect spectral data from a plurality of pixels of the array.

  20. Performance of next-generation sequencing on small tumor specimens and/or low tumor content samples using a commercially available platform.

    Directory of Open Access Journals (Sweden)

    Scott Morris

    Full Text Available Next-generation sequencing (NGS) tests are usually performed on relatively small core biopsy or fine needle aspiration (FNA) samples. Data are limited on what amount of tumor by volume or minimum number of FNA passes is needed to yield sufficient material for running NGS. We sought to identify the amount of tumor needed for running the PCDx NGS platform. 2,723 consecutive tumor tissues of all cancer types were queried and reviewed for inclusion. Information on tumor volume, success of performing NGS, and results of NGS were compiled. Assessment of sequence analysis, mutation calling and sensitivity, quality control, drug associations, and data aggregation and analysis were performed. 6.4% of samples were rejected from all testing due to insufficient tumor quantity. The number of genes with insufficient sensitivity to make definitive mutation calls increased as the percentage of tumor decreased, reaching statistical significance below 5% tumor content. The number of drug associations also decreased with a lower percentage of tumor, but this difference only became significant between 1-3%. The number of drug associations did decrease with smaller tissue size, as expected. Neither specimen size nor percentage of tumor affected the ability to pass mRNA quality control. A tumor area of 10 mm2 provides a good margin of error for specimens to yield adequate drug association results. Specimen suitability remains a major obstacle to clinical NGS testing. We determined that PCR-based library creation methods allow the use of smaller specimens, and those with a lower percentage of tumor cells, to be run on the PCDx NGS platform.

  1. Flavoring Chemicals in E-Cigarettes: Diacetyl, 2,3-Pentanedione, and Acetoin in a Sample of 51 Products, Including Fruit-, Candy-, and Cocktail-Flavored E-Cigarettes.

    Science.gov (United States)

    Allen, Joseph G; Flanigan, Skye S; LeBlanc, Mallory; Vallarino, Jose; MacNaughton, Piers; Stewart, James H; Christiani, David C

    2016-06-01

    There are > 7,000 e-cigarette flavors currently marketed. Flavoring chemicals gained notoriety in the early 2000s when inhalation exposure of the flavoring chemical diacetyl was found to be associated with a disease that became known as "popcorn lung." There has been limited research on flavoring chemicals in e-cigarettes. We aimed to determine if the flavoring chemical diacetyl and two other high-priority flavoring chemicals, 2,3-pentanedione and acetoin, are present in a convenience sample of flavored e-cigarettes. We selected 51 types of flavored e-cigarettes sold by leading e-cigarette brands and flavors we deemed were appealing to youth. E-cigarette contents were fully discharged and the air stream was captured and analyzed for total mass of diacetyl, 2,3-pentanedione, and acetoin, according to OSHA method 1012. At least one flavoring chemical was detected in 47 of 51 unique flavors tested. Diacetyl was detected above the laboratory limit of detection in 39 of the 51 flavors tested, ranging from below the limit of quantification to 239 μg/e-cigarette. 2,3-Pentanedione and acetoin were detected in 23 and 46 of the 51 flavors tested at concentrations up to 64 and 529 μg/e-cigarette, respectively. Because of the associations between diacetyl and bronchiolitis obliterans and other severe respiratory diseases observed in workers, urgent action is recommended to further evaluate this potentially widespread exposure via flavored e-cigarettes. Allen JG, Flanigan SS, LeBlanc M, Vallarino J, MacNaughton P, Stewart JH, Christiani DC. 2016. Flavoring chemicals in e-cigarettes: diacetyl, 2,3-pentanedione, and acetoin in a sample of 51 products, including fruit-, candy-, and cocktail-flavored e-cigarettes. Environ Health Perspect 124:733-739; http://dx.doi.org/10.1289/ehp.1510185.

  2. Body Mass Index, family lifestyle, physical activity and eating behavior on a sample of primary school students in a small town of Western Sicily

    Directory of Open Access Journals (Sweden)

    Enza Sidoti

    2009-09-01

    Full Text Available

    Background: Obesity is currently a discernible issue in prosperous western society and is dramatically increasing in children and adolescents. Many studies indicate that obesity in childhood may become a chronic disease in adulthood and, particularly, that those who are severely overweight have an increased risk of death from cardiovascular disease. Understanding the determinants of lifestyle and behavior in a person's youth and making attempts to change children's habits is considered a key strategy in the primary prevention of obesity. This study aims to find a correlation between Body Mass Index (BMI), physical activity and eating behavior, and to identify risks, protective factors and possible directions for interventions on incorrect nutrition/physical activity and intra-familiar lifestyles in a sample of young adolescents in a small town of Western Sicily.

    Methods: The research surveyed the entire population of the last three curricular years of two Primary Schools in a town of western Sicily (n=294). The instrument used for the survey was a questionnaire containing 20 different items with multiple-choice answers. Personal information, physical activity and eating behaviors were collected for both parents and students in order to cross-reference students' and parents' characteristics. Data were coded and statistical analyses were computed with Statistica and Openstat software.

    Results: The data obtained demonstrated a relevant percentage (18%) of obese children. The prevalence of overweight was high as well (23%), and many in this area (12%) were at risk since they were at the limits of the lower class. A significant association was found between the percentage of students classified as having an elevated BMI and a sedentary habit and/or incorrect eating behavior. Among the overweight and obese children a direct statistical association was also shown between the weight of their

  3. Nonlinearity and thresholds in dose-response relationships for carcinogenicity due to sampling variation, logarithmic dose scaling, or small differences in individual susceptibility

    International Nuclear Information System (INIS)

    Lutz, W.K.; Gaylor, D.W.; Conolly, R.B.; Lutz, R.W.

    2005-01-01

    Nonlinear and threshold-like shapes of dose-response curves are often observed in tests for carcinogenicity. Here, we present three examples where an apparent threshold is spurious and can be misleading for low-dose extrapolation and human cancer risk assessment. Case 1: For experiments that are not replicated, such as rodent bioassays for carcinogenicity, random variation can lead to misinterpretation of the result. This situation was simulated by 20 random binomial samplings of 50 animals per group, assuming a true linear dose response from 5% to 25% tumor incidence at arbitrary dose levels 0, 0.5, 1, 2, and 4. Linearity was suggested by only 8 of the 20 simulations. Four simulations did not reveal the carcinogenicity at all. Three exhibited thresholds, and two showed a nonmonotonic behavior with a decrease at low dose, followed by a significant increase at high dose ('hormesis'). Case 2: Logarithmic representation of the dose axis transforms a straight line into a sublinear (up-bent) curve, which can be misinterpreted to indicate a threshold. This is most pronounced if the dose scale includes a wide low-dose range. Linear regression of net tumor incidences and intersection with the dose axis results in an apparent threshold, even with an underlying true linear dose-incidence relationship. Case 3: Nonlinear shapes of dose-cancer incidence curves are rarely seen with epidemiological data in humans. The discrepancy with data in rodents may in part be explained by a wider span of individual susceptibilities for tumor induction in humans due to more diverse genetic background and modulation by co-carcinogenic lifestyle factors. Linear extrapolation of a human cancer risk could therefore be appropriate even if animal bioassays show nonlinearity.
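
    Case 1 is a simple Monte Carlo exercise that is easy to reproduce. The sketch below, a minimal illustration rather than the authors' original code, draws 20 binomial samples of 50 animals per dose group from the stated true linear dose-response (5% to 25% incidence at doses 0, 0.5, 1, 2 and 4) so that apparent thresholds and non-monotonic patterns can be seen directly in the simulated incidences.

```python
import numpy as np

rng = np.random.default_rng(1)

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
true_incidence = 0.05 + 0.05 * doses        # linear: 5% at dose 0 up to 25% at dose 4
n_animals = 50
n_experiments = 20

for i in range(n_experiments):
    tumours = rng.binomial(n_animals, true_incidence)   # tumour counts per dose group
    observed = tumours / n_animals                      # observed incidence per group
    # inspect each simulated curve for apparent thresholds or 'hormesis'-like dips
    print(f"experiment {i + 1:2d}: " + " ".join(f"{p:.2f}" for p in observed))
```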

  4. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  5. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    Science.gov (United States)

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
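
    The core computation behind the proposed model — regularized regression through the singular value decomposition of the LiDAR predictor matrix — can be sketched compactly. The code below is a minimal frequentist (ridge-style) analogue on assumed toy data, not the authors' full Bayesian formulation or their field-plot selection strategy.

```python
import numpy as np

def ridge_via_svd(X, y, alpha=1.0):
    """Regularized least squares computed through the SVD of the predictor matrix.

    Damping the contribution of small singular values is what tames the
    multicollinearity and overfitting described in the abstract.
    """
    Xc = X - X.mean(axis=0)                 # centre predictors
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    d = s / (s**2 + alpha)                  # ridge filter factors on singular values
    beta = Vt.T @ (d * (U.T @ yc))
    intercept = y.mean() - X.mean(axis=0) @ beta
    return intercept, beta

# toy example: 50 field plots, 30 highly correlated LiDAR metrics
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 5))
X = np.repeat(base, 6, axis=1) + 0.01 * rng.normal(size=(50, 30))  # correlated predictors
y = base @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(scale=0.3, size=50)
b0, beta = ridge_via_svd(X, y, alpha=10.0)
print("intercept:", round(b0, 3), " first coefficients:", np.round(beta[:5], 3))
```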

  6. The Budget Impact of Including Necitumumab on the Formulary for First-Line Treatment of Metastatic Squamous Non-Small Cell Lung Cancer: U.S. Commercial Payer and Medicare Perspectives.

    Science.gov (United States)

    Bly, Christopher A; Molife, Cliff; Brown, Jacqueline; Tawney, Mahesh K; Carter, Gebra Cuyun; Cinfio, Frank N; Klein, Robert W

    2018-06-01

    Necitumumab (Neci) was the first biologic approved by the FDA for use in combination with gemcitabine and cisplatin (Neci + GCis) in first-line treatment of metastatic squamous non-small cell lung cancer (msqNSCLC). The potential financial impact on a health plan of adding Neci + GCis to drug formularies may be important to value-based decision makers in the United States, given ever-tightening budget constraints. To estimate the budget impact of introducing Neci + GCis for first-line treatment of msqNSCLC from U.S. commercial and Medicare payer perspectives. The budget impact model estimates the costs of msqNSCLC before and after adoption of Neci + GCis in hypothetical U.S. commercial and Medicare health plans over a 3-year time horizon. The eligible patient population was estimated from U.S. epidemiology statistics. Clinical data were obtained from randomized clinical trials, U.S. prescribing information, and clinical guidelines. Market share projections were based on market research data. Cost data were obtained from online sources and published literature. The incremental aggregate annual health plan, per-patient-per-year (PPPY), and per-member-per-month (PMPM) costs were estimated in 2015 U.S. dollars. One-way sensitivity analyses were conducted to assess the effect of model parameters on results. In a hypothetical 1,000,000-member commercial health plan with an estimated population of 30 msqNSCLC patients receiving first-line chemotherapy, the introduction of Neci + GCis at an initial market share of approximately 5% had an overall year 1 incremental budget impact of $88,394 ($3,177 PPPY, $0.007 PMPM), representing a 2.9% cost increase and reaching $304,079 ($10,397 PPPY, $0.025 PMPM) or a 7.4% cost increase at a market share of 14.7% in year 3. This increase in total costs was largely attributable to Neci drug costs and, in part, due to longer survival and treatment duration for patients treated with Neci+GCis. Overall, treatment costs increased by $81
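
    The per-member-per-month and per-patient-per-year figures follow from simple arithmetic on the incremental plan cost. The sketch below reproduces the year-1 PMPM from the stated plan size; the treated-patient denominator used for PPPY is an assumption (roughly implied by the quoted figures), since it is not stated exactly in the abstract.

```python
# Year-1 incremental plan cost and plan size as quoted in the abstract.
incremental_cost_year1 = 88_394      # USD
plan_members = 1_000_000

pmpm = incremental_cost_year1 / (plan_members * 12)
print(f"PMPM: ${pmpm:.3f}")          # ~ $0.007 per member per month

# Per-patient-per-year spreads the same cost over the treated patients;
# the exact denominator is not stated, so 28 here is an assumption chosen
# to roughly reproduce the quoted $3,177 PPPY.
treated_patients = 28
print(f"PPPY: ${incremental_cost_year1 / treated_patients:,.0f}")
```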

  7. Stratospheric Air Sub-sampler (SAS) and its application to analysis of Delta O-17(CO2) from small air samples collected with an AirCore

    NARCIS (Netherlands)

    Mrozek, Dorota Janina; van der Veen, Carina; Hofmann, Magdalena E. G.; Chen, Huilin; Kivi, Rigel; Heikkinen, Pauli; Rockmann, Thomas

    2016-01-01

    We present the set-up and a scientific application of the Stratospheric Air Sub-sampler (SAS), a device to collect and to store the vertical profile of air collected with an AirCore (Karion et al., 2010) in numerous sub-samples for later analysis in the laboratory. The SAS described here is a 20m

  8. Extreme-temperature lab on a chip for optogalvanic spectroscopy of ultra small samples - key components and a first integration attempt

    International Nuclear Information System (INIS)

    Berglund, Martin; Khaji, Zahra; Persson, Anders; Sturesson, Peter; Breivik, Johan Söderberg; Thornell, Greger; Klintberg, Lena

    2016-01-01

    This is a short summary of the authors’ recent R and D on valves, combustors, plasma sources, and pressure and temperature sensors, realized in high-temperature co-fired ceramics, and an account for the first attempt to monolithically integrate them to form a lab on a chip for sample administration, preparation and analysis, as a stage in optogalvanic spectroscopy. (paper)

  9. Using Web2.0 social network technology for sampling framework identification and respondent recruitment: experiences with a small-scale experiment

    NARCIS (Netherlands)

    Grigolon, A.B.; Kemperman, A.D.A.M.; Timmermans, H.J.P.

    2011-01-01

    In this paper, we report the results of a small–scale experiment to explore the potential of using social network technology for respondent recruitment. Of particular interest are the following questions (i) can social media be used for the identification of sampling frames, (ii) what response rates

  10. Stevens Pond: A postglacial pollen diagram from a small Typha Swamp in Northwestern Minnesota, interpreted from pollen indicators and surface samples

    NARCIS (Netherlands)

    Janssen, C.R.

    1967-01-01

    The pollen assemblages of a core in the conifer-hardwood formation in northwestern Minnesota are compared with the floristics of the recent vegetation in the region. Percentage levels of the main tree components have been compared first with those from recent surface samples taken at the same short

  11. Development of a standard data base for FBR core nuclear design (XIII). Analysis of small sample reactivity experiments at ZPPR-9

    International Nuclear Information System (INIS)

    Sato, Wakaei; Fukushima, Manabu; Ishikawa, Makoto

    2000-09-01

    A comprehensive study to evaluate and accumulate the abundant results of fast reactor physics is now in progress at the O-arai Engineering Center to improve analytical methods and the prediction accuracy of nuclear design for large fast breeder cores such as future commercial FBRs. The present report summarizes the analytical results of the sample reactivity experiments at the ZPPR-9 core, which had not yet been evaluated with the latest analytical method. The intention of the work is to extend and further generalize the standard data base for FBR core nuclear design. The analysis of the sample reactivity experiments (samples: PU-30, U-6, DU-6, SS-1 and B-1) at the ZPPR-9 core in the JUPITER series, with the latest nuclear data library JENDL-3.2 and the analytical method established by the JUPITER analysis, can be summarized as follows: The region-averaged final C/E values generally agreed with unity within 5% in the inner core region. However, the C/E values of every sample showed a radial space-dependency, increasing from the center to the core edge; the discrepancy was largest for B-1, at 10%. Next, the influence of the present analytical results for the ZPPR-9 sample reactivity on the cross-section adjustment was evaluated. The reference case was a unified cross-section set, ADJ98, based on the recent JUPITER analysis. In conclusion, the present analytical results have sufficient physical consistency with the other JUPITER data and qualify as part of the standard data base for FBR nuclear design. (author)

  12. A high sensitivity SQUID-method for the measurement of magnetic susceptibility of small samples in the temperature range 1.5 K-40 K and application on small palladium particles

    International Nuclear Information System (INIS)

    Tu Nguyen Quang.

    1979-01-01

    In this paper a method is developed for magnetic susceptibility measurements which is superior to the common methods. The method is based on the SQUID principle (Superconducting Quantum Interference Device), using the tunnel effect of a superconducting point contact and magnetic flux quantization for measuring electric and magnetic quantities. With this refined method, susceptibility changes of very small palladium particles with respect to the bulk could be detected in the temperature range 1.5 K-40 K. In addition, susceptibility differences between particle distributions with different mean diameters (81 Angstroem and 65 Angstroem) have been measured for the first time. A quantitative comparison of the measurements with theoretical results shows satisfactory agreement. (orig./WBU) [de

  13. Accuracy assessment of digital surface models based on a small format action camera in a North-East Hungarian sample area

    Directory of Open Access Journals (Sweden)

    Barkóczi Norbert

    2017-01-01

    Full Text Available The use of small-format digital action cameras has increased in the past few years in various applications, due to their low cost, flexibility and reliability. These small cameras can be mounted on several devices, such as unmanned aerial vehicles (UAV), to create 3D models with photogrammetric techniques. Whether creating or receiving this kind of database, one of the most important questions will always be how accurate these systems are and what accuracy can be achieved. We gathered the overlapping images, created point clouds, and then generated 21 different digital surface models (DSM). The models differed in the number of images used and in the flight height. We repeated the flights three times to compare the same models with each other. In addition, we measured 129 reference points with RTK-GPS to compare the height differences with the cell values extracted from each DSM. The results showed that a higher flight height gives lower errors, and that the optimal air base distance is one fourth of the flying height in both cases. The lowest median was 0.08 meter, for the 180 meter flight, 50 meter air base distance model. Raising the number of images does not increase the overall accuracy. The connection between the amount of error and the distance from the nearest GCP is not linear in every case.
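
    The reported accuracy figures (e.g. a lowest median of 0.08 m) come from differencing DSM cell values against the RTK-GPS check points. A minimal sketch of that comparison is given below, assuming the DSM heights have already been sampled at the 129 reference-point locations; the data shown are simulated placeholders.

```python
import numpy as np

def height_error_stats(gps_heights, dsm_heights):
    """Per-point height differences between RTK-GPS check points and a DSM."""
    diff = np.asarray(dsm_heights) - np.asarray(gps_heights)
    return {
        "median_abs": float(np.median(np.abs(diff))),
        "mean":       float(np.mean(diff)),
        "rmse":       float(np.sqrt(np.mean(diff ** 2))),
    }

# toy data standing in for the 129 reference points of one flight/model
rng = np.random.default_rng(42)
gps = rng.uniform(100.0, 120.0, size=129)
dsm = gps + rng.normal(0.02, 0.1, size=129)   # simulated DSM bias and noise
print(height_error_stats(gps, dsm))
```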

  14. Method of determining coking temperature of coke. [Experimental method of determining final coking temperature using a small sample and calibration graph

    Energy Technology Data Exchange (ETDEWEB)

    Mel' nichuk, A.Yu.; Bondarenko, A.K.; Fialkov, B.S.; Khegay, L.U.; Khvan, L.A.; Muzyzhuk, V.D.; Zakharov, A.G.; Zelenskiy, V.P.

    1985-01-01

    The coking temperature of coke should be determined from the magnitude of the ionization current of the medium during heating (3°/min) of a coke sample (2 g, fraction < 0.2 mm) in an oxidation medium with air supply (1 l/min). The coking temperature is determined from the maximum magnitude of the current using a calibration graph constructed during analysis of coke samples obtained with different final coking temperatures. The discrepancy between the established coking temperature and that determined by the proposed method is 8-19°, compared with 26-43° for the method based on the electrical resistance of coke. In addition to higher accuracy, this method reduces the time required for the analysis.
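
    The method amounts to reading the final coking temperature off a calibration graph at the observed peak ionization current. A minimal sketch of that lookup follows; the calibration points are invented placeholders, not data from the original work.

```python
import numpy as np

# Hypothetical calibration graph: peak ionization current (arbitrary units)
# measured for reference cokes of known final coking temperature (deg C).
calib_current = np.array([12.0, 18.0, 25.0, 33.0, 42.0])
calib_temperature = np.array([900.0, 950.0, 1000.0, 1050.0, 1100.0])

def coking_temperature(peak_current):
    """Interpolate the final coking temperature from the observed peak current."""
    return float(np.interp(peak_current, calib_current, calib_temperature))

print(coking_temperature(28.5))   # -> a temperature between 1000 and 1050 deg C
```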

  15. Outsourcing cytological samples to a referral laboratory for EGFR testing in non-small cell lung cancer: does theory meet practice?

    Science.gov (United States)

    Vigliar, E; Malapelle, U; Bellevicine, C; de Luca, C; Troncone, G

    2015-10-01

    Guidelines from the College of American Pathologists (CAP), the International Association for the Study of Lung Cancer (IASLC) and the Association for Molecular Pathology (AMP) consider cytology suitable for testing epidermal growth factor receptor (EGFR) mutations in lung adenocarcinoma. The guidelines recommend that cytopathologists first discuss the possibility of testing squamous cell carcinomas (SqCC) in multidisciplinary meetings. Second, cell blocks should be analysed rather than smear preparations and, third, specimens should be sent to external molecular laboratories within three working days of receiving requests. This study monitored how these recommendations are met in practice. Our laboratory received 596 requests from cytologists from 13 different institutions. For each case, the cytological diagnosis, cytopreparation type, and time between the request and sample mailing were compared with the recommendations. Of the 596 samples, 32 (5.4%) had been reported as SqCC. Three of these (9.4%) showed EGFR mutation. Cytological slides, either ThinPrep™ (51.2%) or direct smears (43.2%), were more frequently received than cell blocks (5.7%). The mean time between the oncologist's request and specimen dispatch was 5.8 working days. The occurrence of mutations in samples reported as SqCC was higher than expected. This calls into question the reliability of the original diagnosis and reinforces the recommendation to evaluate the opportunity for testing non-adenocarcinoma cytology on a case-by-case basis. In spite of CAP/IASLC/AMP recommendations, cell blocks were underutilized for EGFR testing, but cytological slides were suitable for DNA analyses. Significant efforts are needed to avoid delays in outsourcing cytological samples for EGFR testing. © 2014 John Wiley & Sons Ltd.

  16. Device including a contact detector

    DEFF Research Database (Denmark)

    2011-01-01

    The present invention relates to a probe for determining an electrical property of an area of a surface of a test sample, the probe is intended to be in a specific orientation relative to the test sample. The probe may comprise a supporting body defining a first surface. A plurality of cantilever arms (12) may extend from the supporting body in co-planar relationship with the first surface. The plurality of cantilever arms (12) may extend substantially parallel to each other and each of the plurality of cantilever arms (12) may include an electrical conductive tip for contacting the area of the test sample by movement of the probe relative to the surface of the test sample into the specific orientation. The probe may further comprise a contact detector (14) extending from the supporting body arranged so as to contact the surface of the test sample prior to any one of the plurality...

  17. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or a sample where preparation is no more complex than dissolution of the sample in a given solvent. The latter process alone can remove insoluble materials, which is especially helpful with samples in complex matrices, provided other interactions do not affect extraction. Here, it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration step. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times enrichment is necessary, that is, the component of interest is present in a very large volume or mass of material. It needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  18. Determination of proguanil and metabolites in small sample volumes of whole blood stored on filter paper by high-performance liquid chromatography.

    Science.gov (United States)

    Kolawole, J A; Taylor, R B; Moody, R R

    1995-12-01

    A method is reported for the determination of proguanil and its two metabolites cycloguanil and 4-chlorophenylbiguanide in whole blood and plasma samples obtained by thumbprick and stored dry on filter paper. The sample preparation involves liquid extraction from the filter paper and subsequent solid-phase extraction using C8 Bond-Elut cartridges. Separation and quantification are by a previously reported ion-pairing high-performance liquid chromatographic system with ODS Hypersil as stationary phase and a 50:50 acetonitrile-pH 2 phosphate buffer mobile phase containing 200 mM sodium dodecylsulphate as ion-pairing agent. The analytical characteristics of the method are reported. Representative concentrations are shown as a function of time for a human subject after ingestion of a single 200-mg dose of proguanil hydrochloride. Typical ranges of concentration detected by the proposed method in human subjects were proguanil 12-900 ng/ml, cycloguanil 16-44 ng/ml and 4-chlorophenylbiguanide 1.5-10 ng/ml in whole blood.

  19. Combining land use information and small stream sampling with PCR-based methods for better characterization of diffuse sources of human fecal pollution.

    Science.gov (United States)

    Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C

    2011-07-01

    Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters, however these methods do not discern between human and other animal sources of fecal pollution making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data can be useful for identifying diffuse source of human fecal pollution in receiving waters. To test this assertion, this study monitored nine headwatersheds over a two-year period potentially impacted by faulty septic systems and leaky sanitary sewer lines. Human fecal pollution was measured using three different human-associated qPCR methods and a positive significant correlation was seen between abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities suggesting septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.

  20. Development of an evaluation method for fracture mechanical tests on small samples based on a cohesive zone model; Entwicklung einer Auswertemethode fuer bruchmechanische Versuche an kleinen Proben auf der Basis eines Kohaesivzonenmodells

    Energy Technology Data Exchange (ETDEWEB)

    Mahler, Michael

    2016-07-01

    The safety and reliability of nuclear power plants of the fourth generation is an important issue. It is based on a reliable design assessment of the components, for which, among other data, fracture mechanical material properties are required. The irradiation present in the power plants significantly affects the material properties, which therefore need to be determined on irradiated material. Often only small amounts of irradiated material are available for characterization. In that case it is not possible to manufacture sufficiently large specimens, which are necessary for fracture mechanical testing in agreement with the standard. Small specimens must be used. This motivates the approach of this study, in which the fracture toughness is predicted with the developed method based on tests of small specimens. For this purpose, the fracture process including crack growth is described with a continuum mechanical approach using the finite element method and the cohesive zone model. The experiments on small specimens are used for parameter identification of the cohesive zone model. The two parameters of the cohesive zone model are determined by tensile tests on notched specimens (cohesive stress) and by fitting the parameters to the fracture behavior of small specimens (cohesive energy). To account for the different triaxialities of the specimens, the cohesive stress is used as a function of triaxiality. After parameter identification, a large specimen can be simulated with the cohesive zone parameters derived from small specimens. The predicted fracture toughness of this large specimen fulfills the size requirements of the standard (ASTM E1820 or ASTM E399), in contrast to the small specimen. This method can be used for ductile and brittle material behavior and was validated in this work. In summary, this method offers the possibility to determine the fracture toughness indirectly based on small specimen testing. Its main advantage is the small required specimen volume. Thereby massively
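
    The two cohesive-zone parameters mentioned above are linked through the traction-separation law: the cohesive energy is the area under the traction-separation curve, whose peak is the cohesive stress. The sketch below uses a simple bilinear law as one common choice; the thesis may use a different law shape and a triaxiality-dependent cohesive stress, so the form and values here are illustrative only.

```python
import numpy as np

def bilinear_traction(separation, sigma_c, energy_c, delta_0_ratio=0.01):
    """Traction for a bilinear cohesive law defined by cohesive stress and energy.

    The area under the curve equals energy_c and the peak equals sigma_c.
    """
    delta_f = 2.0 * energy_c / sigma_c            # separation at complete failure
    delta_0 = delta_0_ratio * delta_f             # separation at peak traction
    d = np.asarray(separation, float)
    rising = sigma_c * d / delta_0
    softening = sigma_c * (delta_f - d) / (delta_f - delta_0)
    return np.where(d <= delta_0, rising, np.clip(softening, 0.0, None))

# e.g. cohesive stress 1500 MPa, cohesive energy 60 N/mm (illustrative values only)
d = np.linspace(0.0, 0.09, 10)                    # separations in mm
print(np.round(bilinear_traction(d, 1500.0, 60.0), 1))
```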

  1. Response to ``Comment on `Small field behavior of critical current in Y1Ba2Cu3O7 sintered samples' ''

    Science.gov (United States)

    Paternò, G.; Alvani, C.; Casadio, S.; Gambardella, U.; Maritato, L.

    1989-05-01

    In our response we would like to point out that the fitting of the data was done to account for the shift of the maximum of the magnetic field dependence of the critical current. This shift, on the order of 1 Gauss or less, is generally observed in all our data and is attributable to the residual external field. Since we used a crude junction model, the self-field effects were not included. (AIP)

  2. A new set-up for simultaneous high-precision measurements of CO2, δ13C-CO2 and δ18O-CO2 on small ice core samples

    Science.gov (United States)

    Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas

    2016-08-01

    Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement
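
    The replicate precision quoted (2.0 ppm for CO2, 0.11 ‰ for δ13C) is expressed as a pooled standard deviation over groups of replicate ice samples. The sketch below computes that statistic with the generic pooled-variance formula; it is not the authors' processing code and the replicate values are invented.

```python
import numpy as np

def pooled_standard_deviation(replicate_groups):
    """Pooled SD over several groups of replicate measurements of the same sample."""
    ss, dof = 0.0, 0
    for group in replicate_groups:
        g = np.asarray(group, float)
        ss += np.sum((g - g.mean()) ** 2)   # within-group sum of squares
        dof += len(g) - 1                   # within-group degrees of freedom
    return np.sqrt(ss / dof)

# toy CO2 replicates (ppm) from three ice depths
print(pooled_standard_deviation([[278.1, 280.4], [265.0, 263.2, 264.1], [301.5, 299.0]]))
```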

  3. Assessment of real-time PCR method for detection of EGFR mutation using both supernatant and cell pellet of malignant pleural effusion samples from non-small-cell lung cancer patients.

    Science.gov (United States)

    Shin, Saeam; Kim, Juwon; Kim, Yoonjung; Cho, Sun-Mi; Lee, Kyung-A

    2017-10-26

    EGFR mutation is an emerging biomarker for treatment selection in non-small-cell lung cancer (NSCLC) patients. However, optimal mutation detection is hindered by complications associated with the biopsy procedure, tumor heterogeneity and limited sensitivity of test methodology. In this study, we evaluated the diagnostic utility of real-time PCR using malignant pleural effusion samples. A total of 77 pleural fluid samples from 77 NSCLC patients were tested using the cobas EGFR mutation test (Roche Molecular Systems). Pleural fluid was centrifuged, and separated cell pellets and supernatants were tested in parallel. Results were compared with Sanger sequencing and/or peptide nucleic acid (PNA)-mediated PCR clamping of matched tumor tissue or pleural fluid samples. All samples showed valid real-time PCR results in one or more DNA samples extracted from cell pellets and supernatants. Compared with other molecular methods, the sensitivity of real-time PCR method was 100%. Concordance rate of real-time PCR and Sanger sequencing plus PNA-mediated PCR clamping was 98.7%. We have confirmed that real-time PCR using pleural fluid had a high concordance rate compared to conventional methods, with no failed samples. Our data demonstrated that the parallel real-time PCR testing using supernatant and cell pellet could offer reliable and robust surrogate strategy when tissue is not available.

  4. Experimental technique of small angle neutron scattering

    International Nuclear Information System (INIS)

    Xia Qingzhong; Chen Bo

    2006-03-01

    The main parts of a Small Angle Neutron Scattering (SANS) spectrometer, their function and their parameters are introduced from an experimental point of view. Detailed information is also given for the SANS spectrometer 'Membrana-2'. Based on practical experiments, the fundamental requirements and working conditions for SANS experiments, including sample preparation, detector calibration, standard sample selection and preliminary data processing, are described. (authors)

  5. Thermal transfer and apparent-dose distributions in poorly bleached mortar samples: Results from single grains and small aliquots of quartz

    DEFF Research Database (Denmark)

    Jain, M.; Thomsen, Kristina Jørkov; Bøtter-Jensen, L.

    2004-01-01

    In the assessment of doses received from a nuclear accident, considerable attention has been paid to retrospective dosimetry using the optically stimulated luminescence (OSL) of heated materials such as bricks and tiles. Quartz extracted from these artefacts was heated during manufacture; this process releases all the prior trapped charge and simultaneously sensitises the quartz. Unfortunately unheated materials such as mortar and concrete are more common in industrial sites and particularly in nuclear installations. These materials are usually exposed to daylight during quarrying and construction, but in general this exposure is insufficient to completely empty (bleach) any geological trapped charge. This leads to a distribution of apparent doses in the sample at the time of construction, with only some (if any) grains exposed to sufficient light to be considered well bleached for OSL dosimetry. The challenge in using such materials as retrospective dosemeters is in identifying these well-bleached grains when an accident dose has been superimposed on the original dose distribution. We investigate here, using OSL, the background dose in three different mortar samples: render, whitewash...

  6. New seismograph includes filters

    Energy Technology Data Exchange (ETDEWEB)

    1979-11-02

    The new Nimbus ES-1210 multichannel signal enhancement seismograph from EG and G Geometrics has recently been redesigned to include multimode signal filters on each amplifier. The ES-1210F is a shallow exploration seismograph for near-subsurface exploration such as depth-to-bedrock determination, geological hazard location, mineral exploration, and landslide investigations.

  7. RARE DECAYS INCLUDING PENGUINS

    Energy Technology Data Exchange (ETDEWEB)

    Eigen, G

    2003-12-04

    The authors present a preliminary measurement of the exclusive charmless semileptonic B decays, B → ρℓν, and the extraction of the CKM parameter V_ub. In a data sample of 55 × 10^6 BB̄ events they measure a branching fraction of B(B → ρℓν) = (3.39 ± 0.44(stat) ± 0.52(sys) ± 0.60(th)) × 10^-4, yielding |V_ub| = (3.69 ± 0.23(stat) ± 0.27(sys) +0.40/-0.59(th)) × 10^-3. Next, they report on a preliminary study of the radiative penguin modes B → Kℓ+ℓ- and B → K*ℓ+ℓ-. In a data sample of 84 × 10^6 BB̄ events they observe a significant signal (4.4σ) in B → Kℓ+ℓ-, yielding a branching fraction of B(B → Kℓ+ℓ-) = (0.78 +0.24/-0.20(stat) +0.11/-0.18(sys)) × 10^-6. In B → K*ℓ+ℓ- the observed yield is not yet significant (2.8σ), yielding an upper limit on the branching fraction of B(B → K*ℓ+ℓ-) < 3.0 × 10^-6 at 90% confidence level. Finally, they summarize preliminary results of searches for B → ρ(ω)γ, B+ → K+ ν ν̄ and B0 → ℓ+ℓ-.

  8. Saskatchewan resources. [including uranium

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    The production of chemicals and minerals for the chemical industry in Saskatchewan are featured, with some discussion of resource taxation. The commodities mentioned include potash, fatty amines, uranium, heavy oil, sodium sulfate, chlorine, sodium hydroxide, sodium chlorate and bentonite. Following the successful outcome of the Cluff Lake inquiry, the uranium industry is booming. Some developments and production figures for Gulf Minerals, Amok, Cenex and Eldorado are mentioned.

  9. Continuous-flow liquid microjunction surface sampling probe connected on-line with high-performance liquid chromatography/mass spectrometry for spatially resolved analysis of small molecules and proteins.

    Science.gov (United States)

    Van Berkel, Gary J; Kertesz, Vilmos

    2013-06-30

    A continuous-flow liquid microjunction surface sampling probe extracts soluble material from surfaces for direct ionization and detection by mass spectrometry. Demonstrated here is the on-line coupling of such a probe with high-performance liquid chromatography/mass spectrometry (HPLC/MS) enabling extraction, separation and detection of small molecules and proteins from surfaces in a spatially resolved (~0.5 mm diameter spots) manner. A continuous-flow liquid microjunction surface sampling probe was connected to a six-port, two-position valve for extract collection and injection to an HPLC column. A QTRAP® 5500 hybrid triple quadrupole linear ion trap equipped with a Turbo V™ ion source operated in positive electrospray ionization (ESI) mode was used for all experiments. The system operation was tested with the extraction, separation and detection of propranolol and associated metabolites from drug dosed tissues, caffeine from a coffee bean, cocaine from paper currency, and proteins from dried sheep blood spots on paper. Confirmed in the tissue were the parent drug and two different hydroxypropranolol glucuronides. The mass spectrometric response for these compounds from different locations in the liver showed an increase with increasing extraction time (5, 20 and 40 s). For on-line separation and detection/identification of extracted proteins from dried sheep blood spots, two major protein peaks dominated the chromatogram and could be correlated with the expected masses for the hemoglobin α and β chains. Spatially resolved sampling, separation, and detection of small molecules and proteins from surfaces can be accomplished using a continuous-flow liquid microjunction surface sampling probe coupled on-line with HPLC/MS detection. Published in 2013. This article is a U.S. Government work and is in the public domain in the USA.

  10. Gender Segregation in Small Firms

    OpenAIRE

    Kenneth R Troske; William J Carrington

    1992-01-01

    This paper studies interfirm gender segregation in a unique sample of small employers. We focus on small firms because previous research on interfirm segregation has studied only large firms and because it is easier to link the demographic characteristics of employers and employees in small firms. This latter feature permits an assessment of the role of employer discrimination in creating gender segregation. Our first finding is that interfirm segregation is prevalent among small employers. I...

  11. Being Included and Excluded

    DEFF Research Database (Denmark)

    Korzenevica, Marina

    2016-01-01

    Following the civil war of 1996–2006, there was a dramatic increase in the labor mobility of young men and the inclusion of young women in formal education, which led to the transformation of the political landscape of rural Nepal. Mobility and schooling represent a level of prestige that rural...... politics. It analyzes how formal education and mobility either challenge or reinforce traditional gendered norms which dictate a lowly position for young married women in the household and their absence from community politics. The article concludes that women are simultaneously excluded and included from...... community politics. On the one hand, their mobility and decision-making powers decrease with the increase in the labor mobility of men and their newly gained education is politically devalued when compared to the informal education that men gain through mobility, but on the other hand, schooling strengthens...

  12. Small angle spectrometers: Summary

    International Nuclear Information System (INIS)

    Courant, E.; Foley, K.J.; Schlein, P.E.

    1986-01-01

    Aspects of experiments at small angles at the Superconducting Super Collider are considered. Topics summarized include a small angle spectrometer, a high contingency spectrometer, dipole and toroid spectrometers, and magnet choices

  13. EGFR T790M mutation testing of non-small cell lung cancer tissue and blood samples artificially spiked with circulating cell-free tumor DNA: results of a round robin trial.

    Science.gov (United States)

    Fassunke, Jana; Ihle, Michaela Angelika; Lenze, Dido; Lehmann, Annika; Hummel, Michael; Vollbrecht, Claudia; Penzel, Roland; Volckmar, Anna-Lena; Stenzinger, Albrecht; Endris, Volker; Jung, Andreas; Lehmann, Ulrich; Zeugner, Silke; Baretton, Gustavo; Kreipe, Hans; Schirmacher, Peter; Kirchner, Thomas; Dietel, Manfred; Büttner, Reinhard; Merkelbach-Bruse, Sabine

    2017-10-01

    The European Commission (EC) recently approved osimertinib for the treatment of adult patients with locally advanced or metastatic non-small-cell lung cancer (NSCLC) harboring EGFR T790M mutations. Besides tissue-based testing, blood samples containing cell-free circulating tumor DNA (ctDNA) can be used to interrogate T790M status. Herein, we describe the conditions and results of a round robin trial (RRT) for T790M mutation testing in NSCLC tissue specimens and peripheral blood samples spiked with cell line DNA mimicking tumor-derived ctDNA. The underlying objectives of this two-stage external quality assessment (EQA) approach were (a) to evaluate the accuracy of T790M mutation testing across multiple centers and (b) to investigate whether liquid biopsy-based testing for T790M mutations in spiked blood samples is feasible in routine diagnostics. Based on a successfully completed internal phase I RRT, an open RRT for EGFR T790M mutation testing in tumor tissue and blood samples was initiated. In total, 48 pathology centers participated in the EQA. Of these, 47 (97.9%) centers submitted their analyses within the pre-defined time frame, and 44 (tissue) and 40 (plasma), respectively, successfully passed the test. The overall success rates in the RRT phase II were 91.7% (tissue) and 83.3% (blood), respectively. Thirty-eight of 48 participants (79.2%) successfully passed both parts of the RRT. The RRT for blood-based EGFR testing initiated in Germany is, to the best of our knowledge, the first of its kind in Europe. In summary, our results demonstrate that blood-based genotyping for EGFR resistance mutations can be successfully integrated into routine molecular diagnostics, complementing the array of molecular methods already available at pathology centers in Germany.

  14. Evaluation of respondent-driven sampling.

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required
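
    The headline coverage figure — only 50%-74% of the bootstrap 95% confidence intervals included the true population proportion — is a straightforward check once per-variable intervals and the known total-population values are in hand. A minimal sketch of that calculation follows, with purely illustrative numbers.

```python
def ci_coverage(intervals, true_values):
    """Fraction of confidence intervals that contain the known population value."""
    hits = sum(lo <= truth <= hi for (lo, hi), truth in zip(intervals, true_values))
    return hits / len(true_values)

# toy example: three variables with RDS 95% CIs and census ("total population") proportions
rds_cis = [(0.20, 0.35), (0.40, 0.55), (0.10, 0.18)]
census_props = [0.30, 0.60, 0.15]
print(ci_coverage(rds_cis, census_props))   # -> 0.667
```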

  15. Evaluation of Respondent-Driven Sampling

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience-sampling

  16. Enabling optical metrology on small 5×5μm2 in-cell targets to support flexible sampling and higher order overlay and CD control for advanced logic devices nodes

    Science.gov (United States)

    Salerno, Antonio; de la Fuente, Isabel; Hsu, Zack; Tai, Alan; Chang, Hammer; McNamara, Elliott; Cramer, Hugo; Li, Daoping

    2018-03-01

    In next-generation Logic devices, overlay control requirements shrink to the sub-2.5 nm level for on-product overlay. Historically, on-product overlay has been defined by the overlay capability of after-develop in-scribe targets. However, due to their design and dimensions, the after-develop metrology targets are not completely representative of the final overlay of the device. In addition, they are confined to the scribe-lane area, which limits the sampling possibilities. To address these two issues, metrology on structures matching the device structure, and which can be sampled with high density across the device, is required. Conventional after-etch CDSEM techniques on logic devices present difficulties in discerning the layers of interest and potential destructive charging effects, and they are limited by long measurement times[1] [2] [3]. Altogether, these limit the sampling densities, making CDSEM less attractive for control applications. Optical metrology can overcome most of these limitations. Such measurement, however, does require repetitive structures. This requirement is not fulfilled by logic devices, as the features vary in pitch and CD over the exposure field. The solution is to use small targets, with a maximum pad size of 5×5 μm2, which can easily be placed in the logic cell area. These targets share the process and architecture of the device features of interest, but with a modified design that replicates the device layout as closely as possible, allowing in-device metrology for both CD and overlay. This solution enables measuring closer to the actual product feature location and, not being limited to scribe-lanes, it opens the possibility of higher-density sampling schemes across the field. In summary, these targets become the facilitator of in-device metrology (IDM), enabling the measurement of both in-device overlay and the CD parameters of interest, and can deliver accurate, high-throughput, dense, after-etch measurements for Logic

  17. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  18. Small Data

    NARCIS (Netherlands)

    S. Pemberton (Steven)

    2014-01-01

    The term “Open Data” often goes hand in hand with the term “Big Data”, where large data sets get released allowing for analysis, but the Cinderella of the Open Data ball is Small Data, small amounts of data, nonetheless possibly essential, that are too small to be put in some database or

  19. KONTAMINASI MERKURI PADA SAMPEL LINGKUNGAN DAN FAKTOR RISIKO PADA MASYARAKAT DARI KEGIATAN PENAMBANGAN EMAS SKALA KECIL KRUENG SABEE PROVINSI ACEH (Mercury Contamination in the Environmental Samples and Risk Factors in Inhabitants of the Small Scale Gold

    Directory of Open Access Journals (Sweden)

    Sofia Sofia

    2016-09-01

    Full Text Available ABSTRAK Small-scale gold mining activities using the amalgamation technique can introduce mercury (Hg) into the environment and humans. A study of Hg contamination in drinking water, fish and human head hair, and of risk factors in humans, was conducted in the Krueng Sabee area, Aceh Province. Sampling and testing of Hg-containing samples followed SNI, EPA and WHO procedures. A cross-sectional survey design was applied in four villages with 72 randomly selected respondents. Interviews were conducted using a structured questionnaire to obtain information on health risk factors. Hg concentrations were measured by Cold Vapor Atomic Absorption Spectrophotometry for water and fish samples and by Inductively Coupled Plasma Mass Spectrometry for head hair samples. Data were analysed using analysis of variance, independent-samples t-tests and one-sample t-tests. A prediction model was produced using multiple linear regression analysis. The results show Hg concentrations of 0.24 ± 0.25 µg/L in well water samples; in fish samples, 149.46 ± 2.00 µg/g for Rastrellinger kanagurta, 58.6 ± 3.01 µg/g for Selaroides sp. and 46.3 ± 2.98 µg/g for Euthynnus affinis; and in head hair from 11.2 ± 4.02 µg/g to 48.3 ± 22.29 µg/g. The risk factors influencing respondents' Hg concentrations were working status in Krueng Sabee, location, length of residence, status as a mine worker and duration of use of amalgam burners. Together these risk factors accounted for 45.8% of the Hg accumulation in respondents' head hair.   ABSTRACT Small-scale gold mining activities with the amalgamation process can contribute to the entry of mercury (Hg) into the environment and humans. Research on Hg contamination in drinking water, fish, human head hair, and risk factors has been conducted in the area of Krueng Sabee, Aceh Province. Methods of samples collection and Hg concentrations testing conducted

  20. 'They say Islam has a solution for everything, so why are there no guidelines for this?' Ethical dilemmas associated with the births and deaths of infants with fatal abnormalities from a small sample of Pakistani Muslim couples in Britain.

    Science.gov (United States)

    Shaw, Alison

    2012-11-01

    This paper presents ethical dilemmas concerning the termination of pregnancy, the management of childbirth, and the withdrawal of life-support from infants in special care, for a small sample of British Pakistani Muslim parents of babies diagnosed with fatal abnormalities. Case studies illustrating these dilemmas are taken from a qualitative study of 66 families of Pakistani origin referred to a genetics clinic in Southern England. The paper shows how parents negotiated between the authoritative knowledge of their doctors, religious experts, and senior family members in response to the ethical dilemmas they faced. There was little knowledge or open discussion of the view that Islam permits the termination of pregnancy for serious or fatal abnormality within 120 days and there was considerable disquiet over the idea of ending a pregnancy. For some parents, whether their newborn baby would draw breath was a main worry, with implications for the baby's Muslim identity and for the recognition of loss the parents would receive from family and community. This concern sometimes conflicted with doctors' concerns to minimize risk to future pregnancies by not performing a Caesarean delivery if a baby is sure to die. The paper also identifies parents' concerns and feelings of wrong-doing regarding the withdrawal of artificial life-support from infants with multiple abnormalities. The conclusion considers some of the implications of these observations for the counselling and support of Muslim parents following the pre- or neo-natal diagnosis of fatal abnormalities in their children. © 2011 Blackwell Publishing Ltd.

  1. THE ORGANIZATION OF MANAGEMENT ACCOUNTING AT SMALL ENTERPRISES IN UKRAINE

    OpenAIRE

    Nadiya Khocha

    2017-01-01

    The purpose of the research is to study the organization of managerial accounting in Ukrainian small enterprises. Methodology. The survey of management accounting was conducted through interviews with the manager/chief accountant/financial director of small enterprises, or by sending questionnaires to these persons via e-mail. The study sample includes fifty-five small enterprises of the Lviv region in different types of activities and forms of ownership. Results. Analysis of theoreti...

  2. Small Data

    OpenAIRE

    Pemberton, Steven

    2014-01-01

    The term “Open Data” often goes hand in hand with the term “Big Data”, where large data sets get released allowing for analysis, but the Cinderella of the Open Data ball is Small Data, small amounts of data, nonetheless possibly essential, that are too small to be put in some database or online dataset to be put to use. RDFa is a technology that allows Cinderella to go to the ball.

  3. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner-radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  4. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
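
    The Horvitz–Thompson estimator mentioned above can be sketched in a few lines. This is a generic illustration, not code from the paper; the function name, the example values and the assumption that every sampled node's inclusion probability is known are illustrative assumptions only.

        # Horvitz-Thompson estimate of a population total from a (graph) sample,
        # assuming the inclusion probability pi_i of every sampled node is known.
        def horvitz_thompson_total(y, pi):
            """y: observed values on sampled nodes; pi: their inclusion probabilities."""
            if len(y) != len(pi):
                raise ValueError("y and pi must have the same length")
            return sum(y_i / p_i for y_i, p_i in zip(y, pi))

        # Three sampled nodes with unequal inclusion probabilities (illustrative values).
        print(horvitz_thompson_total([4.0, 7.0, 2.0], [0.2, 0.5, 0.1]))  # 54.0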

  5. Systematic lymphadenectomy versus sampling of ipsilateral mediastinal lymph-nodes during lobectomy for non-small-cell lung cancer: a systematic review of randomized trials and a meta-analysis.

    Science.gov (United States)

    Mokhles, Sahar; Macbeth, Fergus; Treasure, Tom; Younes, Riad N; Rintoul, Robert C; Fiorentino, Francesca; Bogers, Ad J J C; Takkenberg, Johanna J M

    2017-06-01

    To re-examine the evidence for recommendations for complete dissection versus sampling of ipsilateral mediastinal lymph nodes during lobectomy for cancer. We searched for randomized trials of systematic mediastinal lymphadenectomy versus mediastinal sampling. We performed a textual analysis of the authors' own starting assumptions and conclusion. We analysed the trial designs and risk of bias. We extracted data on early mortality, perioperative complications, overall survival, local recurrence and distant recurrence for meta-analysis. We found five randomized controlled trials recruiting 1980 patients spanning 1989-2007. The expressed starting position in 3/5 studies was a conviction that systematic dissection was effective. Long-term survival was better with lymphadenectomy compared with sampling (Hazard Ratio 0.78; 95% CI 0.69-0.89) as was perioperative survival (Odds Ratio 0.59; 95% CI 0.25-1.36, non-significant). But there was an overall high risk of bias and a lack of intention to treat analysis. There were higher rates (non-significant) of perioperative complications including bleeding, chylothorax and recurrent nerve palsy with lymphadenectomy. The high risk of bias in these trials makes the overall conclusion insecure. The finding of clinically important surgically related morbidities but lower perioperative mortality with lymphadenectomy seems inconsistent. The multiple variables in patients, cancers and available treatments suggest that large pragmatic multicentre trials, testing currently available strategies, are the best way to find out which are more effective. The number of patients affected with lung cancer makes trials feasible. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
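
    To make the pooled hazard ratio concrete, the sketch below shows a fixed-effect (inverse-variance) pooling of per-trial hazard ratios on the log scale. The trial values are illustrative placeholders, not the data extracted in this review, and the function name is an assumption.

        import math

        def pool_hazard_ratios(trials):
            """trials: list of (HR, lower95, upper95); returns pooled HR and its 95% CI."""
            num = den = 0.0
            for hr, lo, hi in trials:
                log_hr = math.log(hr)
                se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE back-calculated from the CI
                w = 1.0 / se ** 2                                # inverse-variance weight
                num += w * log_hr
                den += w
            pooled, se_pooled = num / den, math.sqrt(1.0 / den)
            ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
            return math.exp(pooled), ci

        # Illustrative trials only (not the five RCTs analysed in the meta-analysis).
        print(pool_hazard_ratios([(0.75, 0.60, 0.94), (0.82, 0.65, 1.03), (0.78, 0.62, 0.98)]))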

  6. Small but super

    International Nuclear Information System (INIS)

    Donald, R.L.

    1994-01-01

    This paper compares the advantages and disadvantages of large and small gas utility companies. It discusses areas of construction, gaining markets, technology advances, pricing, and customer service. The paper includes discussions from four chairmen of small utility companies, who describe their perceived position among the larger companies. It also describes methods that small companies use to unite and voice their opinions on issues of state and national significance

  7. Minijets at small x

    International Nuclear Information System (INIS)

    Landshoff, P.V.

    1994-01-01

    Nonperturbative pomeron exchange at high energy includes minijet production. Minijets are jets whose transverse momentum is so small that they are difficult, or even impossible, to detect experimentally. At moderate Q² it is responsible for the small-x behaviour of νW₂. Hence minijet production should be a feature of deep inelastic scattering at small x. (author). 9 refs., 7 figs

  8. Rapid Sampling from Sealed Containers

    International Nuclear Information System (INIS)

    Johnston, R.G.; Garcia, A.R.E.; Martinez, R.K.; Baca, E.T.

    1999-01-01

    The authors have developed several different types of tools for sampling from sealed containers. These tools allow the user to rapidly drill into a closed container, extract a sample of its contents (gas, liquid, or free-flowing powder), and permanently reseal the point of entry. This is accomplished without exposing the user or the environment to the container contents, even while drilling. The entire process is completed in less than 15 seconds for a 55 gallon drum. Almost any kind of container can be sampled (regardless of the materials) with wall thicknesses up to 1.3 cm and internal pressures up to 8 atm. Samples can be taken from the top, sides, or bottom of a container. The sampling tools are inexpensive, small, and easy to use. They work with any battery-powered hand drill. This allows considerable safety, speed, flexibility, and maneuverability. The tools also permit the user to rapidly attach plumbing, a pressure relief valve, alarms, or other instrumentation to a container. Possible applications include drum venting, liquid transfer, container flushing, waste characterization, monitoring, sampling for archival or quality control purposes, emergency sampling by rapid response teams, counter-terrorism, non-proliferation and treaty verification, and use by law enforcement personnel during drug or environmental raids

  9. Patient Safety Outcomes in Small Urban and Small Rural Hospitals

    Science.gov (United States)

    Vartak, Smruti; Ward, Marcia M.; Vaughn, Thomas E.

    2010-01-01

    Purpose: To assess patient safety outcomes in small urban and small rural hospitals and to examine the relationship of hospital and patient factors to patient safety outcomes. Methods: The Nationwide Inpatient Sample and American Hospital Association annual survey data were used for analyses. To increase comparability, the study sample was…

  10. Industrial Education. "Small Engines".

    Science.gov (United States)

    Parma City School District, OH.

    Part of a series of curriculum guides dealing with industrial education in junior high schools, this guide provides the student with information and manipulative experiences on small gasoline engines. Included are sections on shop adjustment, safety, small engines, internal combustion, engine construction, four stroke engines, two stroke engines,…

  11. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
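
    The balancing idea can be illustrated with a crude rejection scheme: draw candidate samples and keep one whose covariate mean reproduces the known population mean within a tolerance. This is only a sketch of the principle, not the cube method or any algorithm from the paper; the function name, tolerance and data are illustrative assumptions.

        import random
        import statistics

        def balanced_sample(covariate, n, tol=0.05, max_tries=10000, seed=0):
            """Pick n units whose covariate mean lies within a relative tolerance of the population mean."""
            rng = random.Random(seed)
            target = statistics.mean(covariate)
            units = list(range(len(covariate)))
            for _ in range(max_tries):
                s = rng.sample(units, n)
                if abs(statistics.mean(covariate[i] for i in s) - target) <= tol * abs(target):
                    return s
            raise RuntimeError("no balanced sample found; relax tol or increase max_tries")

        # A known auxiliary variable for 12 population units (illustrative values).
        x = [2.1, 3.4, 5.0, 1.2, 4.8, 3.3, 2.9, 5.5, 4.1, 3.0, 2.2, 4.4]
        print(balanced_sample(x, n=4))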

  12. Small hydro

    International Nuclear Information System (INIS)

    Bennett, K.; Tung, T.

    1995-01-01

    A small hydro plant in Canada is defined as any project between 1 MW and 15 MW but the international standard is 10 MW. The global market for small hydro development was considered good. There are some 1000 to 2000 MW of generating capacity being added each year. In Canada, growth potential is considered small, primarily in remote areas, but significant growth is anticipated in Eastern Europe, Africa and Asia. Canada with its expertise in engineering, manufacturing and development is considered to have a good chance to take advantage of these growing markets

  13. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
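
    A toy illustration of the ensemble-sampling idea for a Bernoulli bandit: each of M ensemble members maintains its own estimate trained on an online-bootstrapped data stream, one member is drawn uniformly at each round, and the action greedy with respect to that member is played. This is a sketch of the concept only, not the authors' neural-network implementation; all names and constants are assumptions.

        import random

        def ensemble_sampling_bandit(true_means, horizon=5000, n_members=10, seed=1):
            """K-armed Bernoulli bandit driven by a simple ensemble-sampling agent."""
            rng = random.Random(seed)
            k = len(true_means)
            # Each member keeps its own perturbed (successes, pulls) counts per arm.
            succ = [[1.0] * k for _ in range(n_members)]
            pulls = [[2.0] * k for _ in range(n_members)]
            total_reward = 0.0
            for _ in range(horizon):
                m = rng.randrange(n_members)                  # sample one ensemble member uniformly
                est = [succ[m][a] / pulls[m][a] for a in range(k)]
                arm = max(range(k), key=lambda a: est[a])     # act greedily w.r.t. that member
                reward = 1.0 if rng.random() < true_means[arm] else 0.0
                total_reward += reward
                for j in range(n_members):                    # online bootstrap: each member sees
                    if rng.random() < 0.5:                    # the observation with probability 1/2
                        succ[j][arm] += reward
                        pulls[j][arm] += 1.0
            return total_reward

        print(ensemble_sampling_bandit([0.3, 0.5, 0.7]))  # cumulative reward over the horizon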

  14. Genetic Sample Inventory

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected primarily from the U.S. east coast. The collection includes samples from field programs,...

  15. Genetic Sample Inventory - NRDA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected in the North-Central Gulf of Mexico from 2010-2015. The collection includes samples from...

  16. 7 CFR 201.42 - Small containers.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Small containers. 201.42 Section 201.42 Agriculture... REGULATIONS Sampling in the Administration of the Act § 201.42 Small containers. In sampling seed in small containers that it is not practical to sample as required in § 201.41, a portion of one unopened container or...

  17. Post-Decontamination Vapor Sampling and Analytical Test Methods

    Science.gov (United States)

    2015-08-12

    is decontaminated that could pose an exposure hazard to unprotected personnel. The chemical contaminants may include chemical warfare agents (CWAs... decontamination process. Chemical contaminants can include chemical warfare agents (CWAs) or their simulants, nontraditional agents (NTAs), toxic industrial...a range of test articles from coupons, panels, and small fielded equipment items. 15. SUBJECT TERMS Vapor hazard; vapor sampling; chemical warfare

  18. Canadian small wind market

    International Nuclear Information System (INIS)

    Moorhouse, E.

    2010-01-01

    This PowerPoint presentation discussed initiatives and strategies adopted by the Canadian Wind Energy Association (CanWEA) to support the development of Canada's small wind market. The general public has shown a significant interest in small wind projects of 300 kW. Studies have demonstrated that familiarity and comfort with small wind projects can help to ensure the successful implementation of larger wind projects. Small wind markets include residential, farming and commercial, and remote community applications. The results of CanWEA market survey show that the small wind market grew by 78 percent in 2008 over 2007, and again in 2009 by 32 percent over 2008. The average turbine size is 1 kW. A total of 11,000 turbines were purchased in 2007 and 2008. Global small wind market growth increased by 110 percent in 2008, and the average turbine size was 2.4 kW. Eighty-seven percent of the turbines made by Canadian mid-size wind turbine manufacturers are exported, and there is now a significant risk that Canada will lose its competitive advantage in small wind manufacturing as financial incentives have not been implemented. American and Canadian-based small wind manufacturers were listed, and small wind policies were reviewed. The presentation concluded with a set of recommendations for future incentives, educational programs and legislation. tabs., figs.

  19. Developing Water Sampling Standards

    Science.gov (United States)

    Environmental Science and Technology, 1974

    1974-01-01

    Participants in the D-19 symposium on aquatic sampling and measurement for water pollution assessment were informed that determining the extent of waste water stream pollution is not a cut and dry procedure. Topics discussed include field sampling, representative sampling from storm sewers, suggested sampler features and application of improved…

  20. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with the techniques and strategies of soil sampling. Keeping in mind the well-defined objectives of a sampling campaign, the aim was to highlight the most important aspect, the representativeness of samples, as a function of the available resources. Particular emphasis was given to the techniques, and especially to a description of the many types of samplers in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  1. Small x physics

    International Nuclear Information System (INIS)

    Kwiecinski, J.

    1993-01-01

    The QCD expectations concerning the small x limit of parton distributions, where x is the Bjorken scaling variable, are reviewed. This includes discussion of the evolution equations in the small x region, the Lipatov equation which sums the leading powers of ln(1/x), and the shadowing effects. Phenomenological implications of the theoretical expectations for deep inelastic lepton-hadron scattering in the small x region, which will be accessible at the HERA ep collider, are described. We give predictions for the structure functions F₂ and F_L and discuss specific processes sensitive to small x physics such as heavy quark production, deep inelastic diffraction and jet production in deep inelastic lepton scattering. A brief review of nuclear shadowing in inelastic lepton-nucleus scattering at small x is also presented. (author). 86 refs, 29 figs
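
    As background for the leading powers of ln(1/x) summed by the Lipatov (BFKL) equation, the familiar leading-logarithmic result for the small-x growth of the gluon density is often quoted in the form below; this is a standard textbook expression included for orientation, not a formula taken from the report.

        xg(x,Q^2) \sim x^{-\lambda}, \qquad
        \lambda_{\mathrm{LL}} = \frac{4 N_c \ln 2}{\pi}\,\alpha_s \approx 0.5
        \quad \text{for } \alpha_s \approx 0.2 .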

  2. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our...... sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that —on the whole— our sampling technique compares favourably with other methods, especially in the case of exploratory research....

  3. Sampling of ore

    International Nuclear Information System (INIS)

    Boehme, R.C.; Nicholas, B.L.

    1987-01-01

    This invention relates to a method of, and an apparatus for, ore sampling. The method includes the steps of periodically removing a sample of the output material of a sorting machine, weighing each sample so that each is of the same weight, measuring a characteristic such as the radioactivity, magnetivity or the like of each sample, subjecting at least an equal portion of each sample to chemical analysis to determine the mineral content of the sample, and comparing the characteristic measurement with the mineral content of the chemically analysed portion of the sample to determine the characteristic/mineral ratio of the sample. The apparatus includes an ore sample collector, a deflector for deflecting a sample of ore particles from the output of an ore sorter into the collector, and means for moving the deflector from a first position, in which it is clear of the particle path from the sorter, to a second position, in which it is in the particle path, at predetermined time intervals and for predetermined time periods to deflect the sample particles into the collector. The apparatus conveniently includes an ore crusher for comminuting the sample particles, a sample hopper, means for weighing the hopper, a detector in the hopper for measuring a characteristic such as radioactivity, magnetivity or the like of particles in the hopper, a discharge outlet from the hopper, and means for feeding the particles from the collector to the crusher and then to the hopper
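
    The calibration step described in the claim, relating the quickly measured characteristic (radioactivity, magnetivity or the like) to the chemically assayed mineral content of each equal-weight sample, can be sketched as follows; the readings and the helper name are illustrative assumptions, not values from the patent.

        def characteristic_mineral_ratios(samples):
            """samples: list of (characteristic_reading, assayed_mineral_content) per equal-weight sample.
            Returns the per-sample ratio used to check the sorter's calibration."""
            return [reading / content for reading, content in samples if content > 0]

        # Illustrative readings from three equal-weight samples of sorter output.
        print(characteristic_mineral_ratios([(1200.0, 0.8), (950.0, 0.6), (1500.0, 1.1)]))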

  4. Superfund Site Information - Site Sampling Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset includes Superfund site-specific sampling information including location of samples, types of samples, and analytical chemistry characteristics of...

  5. Small - Display Cartography

    DEFF Research Database (Denmark)

    Nissen, Flemming; Hvas, Anders; Münster-Swendsen, Jørgen

    Service Communication and finally, Part IV: Concluding remarks and topics for further research on small-display cartography. Part II includes a separate Appendix D consisting of a cartographic design specification. Part III includes a separate Appendix C consisting of a schema specification, a separate...

  6. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  7. UNDERSTANDING SMALL BUSINESS SCAMS

    OpenAIRE

    MICHAEL T. SCHAPER; PAUL WEBER

    2012-01-01

    This paper provides an overview of the current state of knowledge about small business scams. A scam is a form of dishonest action, based upon an invitation to participate in an activity. Victims are encouraged, misled or induced to voluntarily interact with the perpetrator, and ultimately to willingly surrender money, information or other valuable resources. Common forms of scams directed towards small business include phishing, false business valuations and sales, fake overpayments, f...

  8. Environmental sampling

    International Nuclear Information System (INIS)

    Puckett, J.M.

    1998-01-01

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation

  9. Small hepatocellular carcinoma versus small cavernous hemangioma

    International Nuclear Information System (INIS)

    Choi, B.I.; Park, H.W.; Kim, S.H.; Han, M.C.; Kim, C.W.

    1989-01-01

    To determine the optimal pulse sequence for detection and differential diagnosis of small hepatocellular carcinomas and cavernous hemangiomas less than 5 cm in diameter, the authors have analyzed spin-echo (SE) images of 15 small hepatocellular carcinomas and 31 small cavernous hemangiomas obtained at 2.0 T. Pulse sequences used included repetition times (TRs) of 500 and 2,000 msec and echo times (TEs) of 30, 60, 90, 120, 150, and 180 msec. Mean tumor-liver contrast-to-noise ratios on the SE 2,000/60 (TR msec/TE msec) sequence were 23.90 ± 16.33 and 62.10 ± 25.94 for small hepatocellular carcinomas and hemangiomas, respectively, and were significantly greater than for all other pulse sequences. Mean tumor-liver signal intensity ratios on the SE 2,000/150 sequence were 2.34 ± 1.72 and 6.04 ± 2.72 for small hepatocellular carcinomas and hemangiomas, respectively, and were significantly greater than for all other pulse sequences in hemangiomas
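
    For reference, the two figures of merit quoted above are conventionally computed as shown below; these are the usual definitions, and the signal values are illustrative rather than the study's measurements.

        def contrast_to_noise(tumor_signal, liver_signal, noise_sd):
            """Tumor-liver contrast-to-noise ratio (CNR)."""
            return (tumor_signal - liver_signal) / noise_sd

        def signal_intensity_ratio(tumor_signal, liver_signal):
            """Tumor-liver signal intensity ratio (SIR)."""
            return tumor_signal / liver_signal

        # Illustrative region-of-interest values for a single acquisition.
        print(contrast_to_noise(520.0, 210.0, 5.0), signal_intensity_ratio(520.0, 210.0))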

  10. Small School Reform

    Directory of Open Access Journals (Sweden)

    Carroll E. Bronson

    2013-05-01

    Full Text Available This qualitative ethnographic case study explored the evolution of a public urban high school in its 3rd year of small school reform. The study focused on how the high school proceeded from its initial concept, moving to a small school program, and emerging as a new small high school. Data collection included interviews, observations, and document review to develop a case study of one small high school sharing a multiplex building. The first key finding, “Too Many Pieces, Not Enough Glue,” revealed that the school had too many new programs starting at once and they lacked a clear understanding of their concept and vision for their new small school, training on the Montessori philosophies, teaching and learning in small schools, and how to operate within a teacher-cooperative model. The second key finding, “A Continuous Struggle,” revealed that the shared building space presented problems for teachers and students. District policies remain unchanged, resulting in staff and students resorting to activist approaches to get things done. These findings offer small school reform leaders suggestions for developing and sustaining a small school culture and cohesion despite the pressures to revert back to top-down, comprehensive high school norms.

  11. Comparative evaluation of serum, FTA filter-dried blood and oral fluid as sample material for PRRSV diagnostics by RT-qPCR in a small-scale experimental study.

    Science.gov (United States)

    Steinrigl, Adolf; Revilla-Fernández, Sandra; Wodak, Eveline; Schmoll, Friedrich; Sattler, Tatjana

    2014-01-01

    Recently, research into alternative sample materials, such as oral fluid or filter-dried blood, has been intensified, in order to facilitate cost-effective and animal-friendly sampling of individuals or groups of pigs for diagnostic purposes. The objective of this study was to compare the sensitivity of porcine reproductive and respiratory syndrome virus (PRRSV) RNA detection by reverse transcription quantitative real-time PCR (RT-qPCR) in serum, FTA filter-dried blood and oral fluid sampled from individual pigs. Ten PRRSV-negative pigs were injected with an EU-type PRRSV live vaccine. Blood and oral fluid samples were taken from each pig before, and 4, 7, 14 and 21 days after vaccination. All samples were then analyzed by PRRSV RT-qPCR. In serum, eight of ten pigs tested RT-qPCR positive at different time points post infection. Absolute quantification showed low serum PRRSV-RNA loads in most samples. In comparison to serum, the sensitivity of PRRSV-RNA detection was strongly reduced in matched FTA filter-dried blood and in oral fluid from the same pigs. These results indicate that with low PRRSV-RNA loads the diagnostic sensitivity of PRRSV-RNA detection by RT-qPCR achieved with serum is currently unmatched by either FTA filter-dried blood or oral fluid.

  12. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly as arising in the mathematical geosciences. Although the book often refers to original contributions, the authors have made them accessible to (graduate) students and scientists not only from mathematics but also from the geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic and geophysical as well as other scientific branches such as neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  13. The RECONS 10 Parsec Sample

    Science.gov (United States)

    Henry, Todd; Dieterich, Sergio; Finch, C.; Ianna, P. A.; Jao, W.-C.; Riedel, Adric; Subasavage, John; Winters, J.; RECONS Team

    2018-01-01

    The sample of stars, brown dwarfs, and exoplanets known within 10 parsecs of our Solar System as of January 1, 2017 is presented. The current census is comprised of 416 objects made up of 371 stars (including the Sun and white dwarfs) and 45 brown dwarfs. The stars are known to be orbited by 43 planets (eight in our Solar System and 35 exoplanets). There are 309 systems within 10 pc, including 275 with stellar primaries and 34 systems containing only brown dwarfs.Via a long-term astrometric effort at CTIO, the RECONS (REsearch Consortium On Nearby Stars, www.recons.org) team has added 44 stellar systems to the sample, accounting for one of every seven systems known within 10 pc. Overall, the 278 red dwarfs clearly dominate the sample, accounting for 75% of all stars known within 10 pc. The completeness of the sample is assessed, indicating that a few red, brown, and white dwarfs within 10 pc may be discovered, both as primaries and secondaries, although we estimate that 90% of the stellar systems have been identified. The evolution of the 10 pc sample over the past century is outlined to illustrate our growing knowledge of the solar neighborhood.The luminosity and mass functions for stars within 10 pc are described. In contrast to many studies, once all known close multiples are resolved into individual components, the true mass function rises to the end of the stellar main sequence, followed by a precipitous drop in the number of brown dwarfs, which are outnumbered 8.2 to 1 by stars. Of the 275 stellar primaries in the sample, 182 (66%) are single, 75 (27%) have at least one stellar companion, only 8 (3%) have a brown dwarf companion, and 19 (7%) systems are known to harbor planets. Searches for brown dwarf companions to stars in this sample have been quite rigorous, so the brown dwarf companion rate is unlikely to rise significantly. In contrast, searches for exoplanets, particularly terrestrial planets, have been limited. Thus, overall the solar neighborhood is

  14. Quality evaluation of processed clay soil samples.

    Science.gov (United States)

    Steiner-Asiedu, Matilda; Harrison, Obed Akwaa; Vuvor, Frederick; Tano-Debrah, Kwaku

    2016-01-01

    This study assessed the microbial quality of clay samples sold on two of the major Ghanaian markets. The study was a cross-sectional assessment of processed clay and of the effects it has on the nutrition of consumers in the political capital of Ghana. The items examined were processed clay soil samples. Staphylococcus spp. and fecal coliforms, including Klebsiella, Escherichia, Shigella and Enterobacter spp., were isolated from the clay samples. Samples from the Kaneshie market in Accra recorded the highest total viable count, 6.5 Log cfu/g, and staphylococcal count, 5.8 Log cfu/g. For fecal coliforms, Madina market samples had the highest count, 6.5 Log cfu/g, and also recorded the highest levels of yeast and mould. For Koforidua, the total viable count was highest in the samples from the Zongo market, 6.3 Log cfu/g. Central market samples had the highest count of fecal coliforms, 4.6 Log cfu/g, and of yeasts and moulds, 6.5 Log cfu/g. The "Small" market recorded the highest staphylococcal count, 6.2 Log cfu/g. The water activity of the clay samples was low, ranging between 0.65±0.01 and 0.66±0.00 for samples collected from Koforidua and Accra respectively. The clay samples were found to contain Klebsiella spp., Escherichia, Enterobacter, Shigella spp., Staphylococcus spp., yeast and mould. These have health implications when consumed.

  15. (including travel dates) Proposed itinerary

    Indian Academy of Sciences (India)

    Ashok

    31 July to 22 August 2012 (including travel dates). Proposed itinerary: Arrival in Bangalore on 1 August. 1-5 August: Bangalore, Karnataka. Suggested institutions: Indian Institute of Science, Bangalore. St Johns Medical College & Hospital, Bangalore. Jawaharlal Nehru Centre, Bangalore. 6-8 August: Chennai, TN.

  16. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system that is normally used at ICPP. The volume of the sample taken with a fluidic sampler is dependent on the motive pressure to the fluidic sampler, the sample bottle size and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid having a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate
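
    For readers more used to metric units, the gpm figures convert via 1 US gallon = 3785.41 cc; the quick check below (a standard conversion factor, with an assumed function name) is close to the 77--192 cc/min range quoted above.

        GAL_TO_CC = 3785.41  # cubic centimetres per US gallon

        def gpm_to_cc_per_min(gpm):
            return gpm * GAL_TO_CC

        print(gpm_to_cc_per_min(0.02), gpm_to_cc_per_min(0.05))  # ~75.7 and ~189.3 cc/min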

  17. Some factors including radiation affecting the productivity of proteinase enzymes by Mucor lamprosporus

    International Nuclear Information System (INIS)

    El-Kabbany, H.M.I.

    1996-01-01

    At the present time, great attention has been focused on the production of milk-clotting enzymes from microbial sources for use as rennin substitutes, due to the increasing demand for rennin for cheese making and the prohibition of the slaughter of small calves. The present investigation included the isolation and identification of rennin-like enzyme fungal producers from different Egyptian food and soil samples. Different factors, including gamma radiation, affecting the capability of the selected isolate to produce the enzyme were also included. Special attention has also been given to studying the effect of different purification methods on the produced enzyme. The properties of the purified enzyme were also investigated

  18. Theory including future not excluded

    DEFF Research Database (Denmark)

    Nagao, K.; Nielsen, H.B.

    2013-01-01

    We study a complex action theory (CAT) whose path runs over not only the past but also the future. We show that, if we regard a matrix element defined in terms of the future state at time T and the past state at time T_A as an expectation value in the CAT, then we are allowed to have the Heisenberg equation, Ehrenfest's theorem, and the conserved probability current density. In addition, we show that the expectation value at the present time t of a future-included theory, for large T - t and large t - T_A, corresponds to that of a future-not-included theory with a proper inner product for large t - T_A. Hence, the CAT...
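
    Schematically, the matrix element referred to above has the normalized form below, where |A(t)⟩ is evolved forward from the past state fixed at T_A and ⟨B(t)| backward from the future state fixed at T; the notation is a reconstruction of the usual future-included construction and should be read as an assumption, not a quotation from the paper.

        \langle \hat{\mathcal{O}} \rangle^{BA}
          \;=\; \frac{\langle B(t)\,|\,\hat{\mathcal{O}}\,|\,A(t) \rangle}
                     {\langle B(t)\,|\,A(t) \rangle}.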

  19. [Small renal mass].

    Science.gov (United States)

    Prokofiev, D; Kreutzer, N; Kress, A; Wissing, F; Pfeifer, H; Stolzenburg, J-U; Dietel, A; Schwalenberg, T; Do, M; Truß, M C

    2012-10-01

    The frequent application of ultrasound and radiological imaging for non-urological indications in recent years has resulted in an increase in the diagnosis of small renal masses. The treatment options for patients with a small renal mass include active surveillance, surgery (both open and minimally invasive) as well as ablative techniques. As there is a risk of metastatic spread even in small renal masses, surgical extirpation remains the treatment of choice in most patients. Ablative procedures, such as cryoablation and radiofrequency ablation, are appropriate for elderly and multimorbid patients who require active treatment of a small renal mass. Active surveillance is an alternative for high-risk patients. Meticulous patient selection by the urologist and patient preference will determine the choice of treatment option in the future.

  20. The perception of small crime

    NARCIS (Netherlands)

    Douhou, S.; Magnus, J.R.; van Soest, A.H.O.

    2011-01-01

    In this paper we measure perceptions of incorrect behavior or ‘small crime’, based on a questionnaire administered to a large representative sample from the Dutch population. In the questionnaire we ask the respondents to rate the severity or justifiability of a number of small crimes. We present

  1. On sampling social networking services

    OpenAIRE

    Wang, Baiyang

    2012-01-01

    This article aims at summarizing the existing methods for sampling social networking services and proposing a faster confidence interval for related sampling methods. It also includes comparisons of common network sampling techniques.

  2. Teaching Small Business Ownership and Management

    Science.gov (United States)

    Leach, James A.

    1977-01-01

    Topics discussed include integrating small business ownership with existing programs; establishing awareness, exploration, and orientation activities; and preparation for small business ownership. A curriculum guide developed for teaching small business ownership and management is also described. (TA)

  3. Small talk

    Directory of Open Access Journals (Sweden)

    Ryszard Przybylski

    2016-12-01

    Full Text Available The poem Small talk conjures up a communicative situation in which the main character, a newcomer from Poland, answers conventional questions related to their country. Bearing in mind the fact that this poem is set during a military dictatorship, superficial interest in his homeland may trigger a feeling of impatience. This is at least the impression formed if we adopt the perspective defined within the romantic tradition, and when taking into account the conventional poetry of martial law in Poland. Nevertheless, Barańczak retains an ironic distance towards such communicative situations and, as a consequence, does not create poetry that meets most readersʼ expectations. His poetic imperative for verbal art to be the expression of mistrust remains valid.

  4. Small Composers

    DEFF Research Database (Denmark)

    Holgersen, Sven-Erik; Bruun, Peter; Tjagvad, Mette

    2018-01-01

    The present chapter discusses roles and responsibilities of the collaborating partners in a creative music workshop called Small Composers. The aim is to be attentive to a number of potential alterations implicated by the collaborating partners' different backgrounds. The following questions guided the study: What expectations do the class teacher and the professional musicians have to the creative practice, i.e. to the collaboration and to the musical outcome? To which extent do the collaborating partners share a common understanding of the aim, content and method of the workshop? How do the roles and responsibilities of the collaborating partners become visible through the practice? How do the professional identities of the teacher and the musicians become visible and what are the implications for the workshop as a musical community of practice?

  5. Venus Surface Sampling and Analysis

    Data.gov (United States)

    National Aeronautics and Space Administration — This effort is developing the technology to transfer particulate samples from a Venus drill (being developed by Honeybee Robotics in a Phase 2 Small Business...

  6. Ethno veterinary practices of small ruminant livestock farmers in ...

    African Journals Online (AJOL)

    Data were collected from a total of 400 ruminant livestock farmers selected from Oyo, Ogun, Lagos, Ondo and Edo States of Nigeria using a multi-stage sampling technique. The data collected include the specific attributes of small ruminant livestock farmers in the area, ethno-veterinary practices of farmers in the treatment of ...

  7. An Accounting System for Solid Waste Management in Small Communities.

    Science.gov (United States)

    Zausner, Eric R.

    This pamphlet provides a guide to the type and quantity of information to be collected for effective solid waste management in small communities. It is directed at municipal or private personnel involved in the operation and ownership of management facilities. Sample activity reports are included for reference. (CS)

  8. Neoclassical transport including collisional nonlinearity.

    Science.gov (United States)

    Candy, J; Belli, E A

    2011-06-10

    In the standard δf theory of neoclassical transport, the zeroth-order (Maxwellian) solution is obtained analytically via the solution of a nonlinear equation. The first-order correction δf is subsequently computed as the solution of a linear, inhomogeneous equation that includes the linearized Fokker-Planck collision operator. This equation admits analytic solutions only in extreme asymptotic limits (banana, plateau, Pfirsch-Schlüter), and so must be solved numerically for realistic plasma parameters. Recently, numerical codes have appeared which attempt to compute the total distribution f more accurately than in the standard ordering by retaining some nonlinear terms related to finite-orbit width, while simultaneously reusing some form of the linearized collision operator. In this work we show that higher-order corrections to the distribution function may be unphysical if collisional nonlinearities are ignored.
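
    In schematic form, the standard ordering referred to above splits the distribution into a Maxwellian plus a small correction obeying a linear drift-kinetic equation with the linearized collision operator; the expression below is the generic textbook structure, written down for orientation rather than taken from the paper.

        f = F_M + \delta f, \qquad
        v_\parallel \nabla_\parallel\, \delta f \;-\; C_L[\delta f]
          \;=\; -\,\mathbf{v}_d \cdot \nabla F_M ,

    with v_d the magnetic drift velocity and C_L the linearized Fokker-Planck operator; the codes discussed in the abstract additionally retain selected finite-orbit-width terms while reusing a linearized collision operator.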

  9. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of ¹³⁷Cs and other fallout radionuclides, such as excess ²¹⁰Pb and ⁷Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of ¹³⁷Cs in erosion studies has been widely developed, while the application of fallout ²¹⁰Pb and ⁷Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of ¹³⁷Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using ¹³⁷Cs. However, fallout ²¹⁰Pb and ⁷Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories.
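
    The strong surface maxima of fallout ²¹⁰Pb and ⁷Be that decrease exponentially with depth are commonly parameterized, for undisturbed profiles, in the form below; this is a conventional expression added for orientation (the relaxation depth h₀ is site-specific), not a formula from the text.

        A(z) = A_0 \, e^{-z/h_0},

    where A(z) is the activity concentration at mass depth z, A₀ its surface value and h₀ the relaxation mass depth; sampling increments near the surface need to be fine compared with h₀.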

  10. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts involved in estimating the sample size.
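
    As a worked example of the quantities discussed above (two-sided alpha = 0.05, power = 0.80), the usual two-sample-means formula n = 2((z_alpha + z_beta) * sigma / delta)^2 can be evaluated as follows; the effect size and standard deviation plugged in are arbitrary illustrations, not values from the article.

        import math

        def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
            """n per group for a two-sample comparison of means (two-sided alpha=0.05, power=0.80)."""
            return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

        # Detecting a mean difference of 5 units when the SD is 10 units.
        print(n_per_group(delta=5.0, sigma=10.0))  # 63 subjects per group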

  11. Sample Reuse in Statistical Remodeling.

    Science.gov (United States)

    1987-08-01

    as the jackknife and bootstrap, is an expansion of the functional, T(F_n), or of its distribution function or both. Frangos and Schucany (1987a) used...accelerated bootstrap. In the same report Frangos and Schucany demonstrated the small sample superiority of that approach over the proposals that take...higher order terms of an Edgeworth expansion into account. In a second report Frangos and Schucany (1987b) examined the small sample performance of
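
    For orientation on the resampling idea these reports evaluate, a minimal nonparametric bootstrap estimating the standard error of a statistic from a small sample is sketched below; the data, resample count and function name are illustrative assumptions.

        import random
        import statistics

        def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=0):
            """Bootstrap standard error of `stat` for a small sample."""
            rng = random.Random(seed)
            reps = []
            for _ in range(n_boot):
                resample = [rng.choice(data) for _ in data]   # n draws with replacement
                reps.append(stat(resample))
            return statistics.stdev(reps)

        small_sample = [4.1, 5.3, 2.8, 6.0, 3.7, 4.9]
        print(bootstrap_se(small_sample))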

  12. Research using small tokamaks

    International Nuclear Information System (INIS)

    1993-01-01

    This document consists of a collection of papers presented at the IAEA Technical Committee Meeting on Research Using Small Tokamaks. It contains 22 papers on a wide variety of research aspects, including diagnostics, design, transport, equilibrium, stability, and confinement. Some of these papers are devoted to other concepts (stellarators, compact tori). Refs, figs and tabs

  13. Small Group Research

    Science.gov (United States)

    McGrath, Joseph E.

    1978-01-01

    Summarizes research on small group processes by giving a comprehensive account of the types of variables primarily studied in the laboratory. These include group structure, group composition, group size, and group relations. Considers effects of power, leadership, conformity to social norms, and role relationships. (Author/AV)

  14. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then Smart Phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites due to the fact that multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions, one being that the smallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop as the research material can only be populated by hand to obtain the unique data

  15. Phobos Sample Return: Next Approach

    Science.gov (United States)

    Zelenyi, Lev; Martynov, Maxim; Zakharov, Alexander; Korablev, Oleg; Ivanov, Alexey; Karabadzak, George

    possible scenario of the Boomerang mission includes the approach to Deimos prior to the landing on Phobos. The needed excess ΔV w.r.t. the simple scenario (elliptical orbit to near-Phobos orbit) amounts to 0.67 km s⁻¹ (1.6 vs 0.93 km s⁻¹). The Boomerang mission basically repeats the Phobos-SR (2011) architecture, where the transfer-orbiting spacecraft lands on the Phobos surface and a small return vehicle launches the return capsule to Earth. We consider the Boomerang mission as an important step in Mars exploration and a direct precursor of Mars Sample Return. The following elements of the Boomerang mission might be directly employed, or serve as prototypes for, Mars Sample Return in the future: the return vehicle, the Earth descent module, and the transfer-orbital spacecraft. We urge the development of this project for its high science value and recognize its elements as a potential national contribution to an international Mars Sample Return project. Galimov E.M., Phobos sample return mission: scientific substantiation, Solar System Res., v.44, No.1, pp. 5-14, 2010. Chappaz L., H.J. Melosh, M. Vaguero, and K.C. Howell, Material transfer from the surface of Mars to Phobos and Deimos, 43rd Lunar and Planetary Science Conference, paper 1422, 2012.

  16. Understanding small business engagement in workplace violence prevention programs.

    Science.gov (United States)

    Bruening, Rebecca A; Strazza, Karen; Nocera, Maryalice; Peek-Asa, Corinne; Casteel, Carri

    2015-01-01

    Worksite wellness, safety, and violence prevention programs have low penetration among small, independent businesses. This study examined barriers and strategies influencing small business participation in workplace violence prevention programs (WVPPs). A semistructured interview guide was used in 32 telephone interviews. The study took place at the University of North Carolina Injury Prevention Research Center. Participating were a purposive sample of 32 representatives of small business-serving organizations (e.g., business membership organizations, regulatory agencies, and economic development organizations) selected for their experience with small businesses. This study was designed to inform improved dissemination of Crime Free Business (CFB), a WVPP for small, independent retail businesses. Thematic qualitative data analysis was used to identify key barriers and strategies for promoting programs and services to small businesses. Three key factors that influence small business engagement emerged from the analysis: (1) small businesses' limited time and resources, (2) low salience of workplace violence, (3) influence of informal networks and source credibility. Identified strategies include designing low-cost and convenient programs, crafting effective messages, partnering with influential organizations and individuals, and conducting outreach through informal networks. Workplace violence prevention and public health practitioners may increase small business participation in programs by reducing time and resource demands, addressing small business concerns, enlisting support from influential individuals and groups, and emphasizing business benefits of participating in the program.

  17. Small finance banks: Challenges

    Directory of Open Access Journals (Sweden)

    Jayadev M

    2017-12-01

    Full Text Available A recent innovation in the Indian banking structure has been the formation of a new banking institution, the small finance bank (SFB). These banks are expected to advance financial inclusion by providing basic banking and credit services, through a differentiated banking model, to the larger population. In this context the new SFBs face multiple challenges in coming out with a new, differentiated business model. The challenges include building a low-cost liability portfolio, technology management, and balancing the regulatory compliances. This paper also presents the top-of-mind views of three senior executives of new small finance banks.

  18. Small Column Ion Exchange

    International Nuclear Information System (INIS)

    Huff, Thomas

    2010-01-01

    Small Column Ion Exchange (SCIX) leverages a suite of technologies developed by DOE across the complex to achieve lifecycle savings. The technologies are applicable to multiple sites, and early testing supported multiple sites; the balance of SRS SCIX testing supports SRS deployment. A formal Systems Engineering Evaluation (SEE) was performed and selected Small Column Ion Exchange columns containing Crystalline Silicotitanate (CST) in a 2-column lead/lag configuration. The SEE considered use of Spherical Resorcinol-Formaldehyde (sRF). Advantages of the approach at SRS include: (1) no new buildings, (2) a low volume of Cs waste in solid form compared to aqueous strip effluent, and (3) the availability of downstream processing facilities for immediate processing of spent resin.

  19. Brine Sampling and Evaluation Program

    International Nuclear Information System (INIS)

    Deal, D.E.; Case, J.B.; Deshler, R.M.; Drez, P.E.; Myers, J.; Tyburski, J.R.

    1987-12-01

    The Brine Sampling and Evaluation Program (BSEP) Phase II Report is an interim report which updates the data released in the BSEP Phase I Report. Direct measurements and observations of the brine that seeps into the WIPP repository excavations were continued through the period between August 1986 and July 1987. That data is included in Appendix A, which extends the observation period for some locations to approximately 900 days. Brine observations at 87 locations are presented in this report. Although WIPP underground workings are considered ''dry,'' small amounts of brine are present. Part of that brine migrates into the repository in response to pressure gradients at essentially isothermal conditions. The data presented in this report is a continuation of moisture content studies of the WIPP facility horizon that were initiated in 1982, as soon as underground drifts began to be excavated. Brine seepages are manifested by salt efflorescences, moist areas, and fluid accumulations in drillholes. 35 refs., 6 figs., 11 tabs

  20. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  1. Quantitative measurements of small isotopic samples in gaseous mixtures by utilization of some nuclear properties; Etude des possibilites de mesures de faibles quantites de gaz radioactifs dans un melange en utilisant simultanement plusieurs proprietes nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Maragnon, J; Delperie, C

    1967-12-01

    The question is to define the characteristics of a set of measurements allowing the analysis of traces of radioactive rare gases in gas mixtures of different composition. To distinguish the radioactive isotopes from each other, at activity ratios that can reach 10⁶, the chosen method uses several of their nuclear properties simultaneously: gamma radiation energy, beta particle energy, and lifetime of excited states. The choice of a plastic scintillator as the beta detector answers this measurement requirement satisfactorily because of the short de-excitation time of this detector. It offers the further advantage of serving as a reservoir for the sample without any destruction or modification of the sample. The study has been based on the mixture of Kr-85; the analysis of the other rare gases follows immediately from the adopted principle. (author)

  2. Zγ production at NNLO including anomalous couplings

    Science.gov (United States)

    Campbell, John M.; Neumann, Tobias; Williams, Ciaran

    2017-11-01

    In this paper we present a next-to-next-to-leading order (NNLO) QCD calculation of the processes pp → l⁺l⁻γ and pp → ν ν̄ γ that we have implemented in MCFM. Our calculation includes QCD corrections at NNLO both for the Standard Model (SM) and additionally in the presence of Zγγ and ZZγ anomalous couplings. We compare our implementation, obtained using the jettiness slicing approach, with a previous SM calculation and find broad agreement. Focusing on the sensitivity of our results to the slicing parameter, we show that using our setup we are able to compute NNLO cross sections with numerical uncertainties of about 0.1%, which is small compared to residual scale uncertainties of a few percent. We study potential improvements using two different jettiness definitions and the inclusion of power corrections. At √s = 13 TeV we present phenomenological results and consider Zγ as a background to H → Zγ production. We find that, with typical cuts, the inclusion of NNLO corrections represents a small effect and loosens the extracted limits on anomalous couplings by about 10%.

  3. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
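
    A quick numerical illustration of the fallacy described above can help; the effect size, per-group sample sizes, and seed in this sketch are our own choices, not from the article. As the groups grow, a trivially small standardized effect produces an ever smaller p-value while Cohen's d stays trivial.

    ```python
    import numpy as np
    from math import erfc, sqrt

    # Illustrative only: effect size, group sizes and seed are our own choices.
    rng = np.random.default_rng(1)
    d_true = 0.02                        # trivially small true standardized effect
    for n in (100, 10_000, 1_000_000):   # per-group sample size
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(d_true, 1.0, n)
        diff = b.mean() - a.mean()
        se = sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        z = diff / se                    # large-sample two-sample z statistic
        p = erfc(abs(z) / sqrt(2))       # two-sided p-value
        d_obs = diff / sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)  # observed Cohen's d
        print(f"n = {n:>9,}   Cohen's d = {d_obs:+.3f}   p = {p:.2e}")
    ```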

  4. An overview of possible High Temperature Gas-cooled Reactors - Gas Turbine (HTGR-GT) systems for the production of electricity and heat. Includes a technical assessment of the suitability for a small Dutch cogeneration plant; Een overzicht van mogelijke HTGR-GT systemen voor produktie van elektriciteit en warmte. Met technische beoordeling van geschiktheid voor een kleine Nederlandse W/K centrale

    Energy Technology Data Exchange (ETDEWEB)

    Kikstra, J.F

    1997-06-01

    There is a large number of different configurations for combining a closed cycle gas turbine (CCGT) system with a high-temperature gas-cooled reactor (HTGR). Based on the results of a literature survey, an overview of such configurations is presented and their suitability for a small cogeneration system (<60 MWt) to be used in the Netherlands is compared. Most cycles, however, can only be applied to large-scale energy production or supply heat at too low a temperature level. The direct, recuperated cycle is the only suitable cycle, since it is a simple system and shows acceptable electric and total efficiencies. Calculations were carried out for the co-production of hot water (75-125°C and 40-70°C) and of steam (10 bar, 220°C). By means of a static model and an optimizer, the feasible efficiencies for different heat demands are determined. The maximum electric efficiency is 42% for the co-production of hot water and 38% for the co-production of steam. 28 refs.

  5. Preparation of small uranium hexafluoride samples in view of mass spectrometry analysis; Preparation de petits echantillons d'hexafluorure d'uranium en vue d'analyse spectrometrique de masse

    Energy Technology Data Exchange (ETDEWEB)

    Severin, Michel

    1958-07-01

    We have studied the preparation of uranium hexafluoride for the determination of the isotopic ratio {sup 235}U/{sup 238}U by means of a mass spectrometer. The UF{sub 6} had to be produced from an amount of raw material (metallic uranium or oxide) not exceeding 0.1 g. Our method gives a high yield (we have studied the rate of transformation) and produces samples whose content of impurities (HF and SiF{sub 4}) is low enough to enable correct isotopic measurements. The method that gave the best results uses cobalt trifluoride as the fluorinating agent. It is now in routine use in the mass spectrometry laboratories. (author) [French] Nous avons etudie la preparation de l'hexafluorure d'uranium en vue de la determination au spectrometre de masse du rapport isotopique {sup 235}U/{sup 238}U. L'hexafluorure d'uranium devait etre produit a partir d'une quantite de matiere premiere (uranium metallique ou oxyde) ne devant pas exceder 0,1 g. Nous avons mis au point une methode de preparation presentant un rendement eleve (etude du taux de transformation) et donnant des echantillons dont le taux d'impuretes (HF et SiF{sub 4}) est suffisamment faible pour permettre des mesures isotopiques correctes. La methode ayant donne le plus de satisfaction utilise le trifluorure de cobalt comme agent fluorant. Ce procede est maintenant couramment employe dans les laboratoires de spectrometrie de masse. (auteur)

  6. Small Hydropower - The comeback of small hydropower stations

    International Nuclear Information System (INIS)

    Niederhaeusern, A.

    2008-01-01

    This issue of the 'Erneuerbare Energien' (renewable energies) magazine published by the Swiss Solar Energy Society takes a look at small hydropower projects in Switzerland. In a number of interviews and articles, various topics concerning small hydropower are dealt with. First, an interview with Bruno Guggisberg, previously responsible for small hydropower at the Swiss Federal Office of Energy, examines the potential of small hydro and the various political, technical and economic influences on such projects. Further articles provide an overview of the various types of small hydro schemes, including power generation using height differences in drinking-water and wastewater installations. As far as the components of small hydro schemes are concerned, various types of turbines and further system components that are needed are examined. A further article takes a look at the small hydro market and the market players involved. Ecological aspects and research activities are discussed in further articles. In a second interview, Martin Boelli, presently responsible for small hydropower at the Swiss Federal Office of Energy, discusses the unused potential for the use of hydropower in Switzerland. Examples of small-scale hydro schemes are examined and the support offered by the Small Hydropower Program is discussed. Finally, the question is asked whether the small hydro market in Switzerland is overheated as a result of promotion schemes such as the cost-covering remuneration for electricity from renewable energy sources.

  7. Small Wind Site Assessment Guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, Tim [Advanced Energy Systems LLC, Eugene, OR (United States)]; Preus, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2015-09-01

    Site assessment for small wind energy systems is one of the key factors in the successful installation, operation, and performance of a small wind turbine. A proper site assessment is a difficult process that includes wind resource assessment and the evaluation of site characteristics. These guidelines address many of the relevant parts of a site assessment with an emphasis on wind resource assessment, using methods other than on-site data collection and creating a small wind site assessment report.

  8. REECo activities and sample logistics in support of the Nevada Applied Ecology Group

    International Nuclear Information System (INIS)

    Wireman, D.L.; Rosenberry, C.E. Jr.

    1975-01-01

    Activities and sample logistics of Reynolds Electrical and Engineering Co., Inc. (REECo), in support of the Nevada Applied Ecology Group (NAEG), are discussed in this summary report. Activities include the collection, preparation, and shipment of samples of soils, vegetation, and small animals collected at Pu-contaminated areas of the Nevada Test Site and Tonopah Test Range. (CH)

  9. Sampling Operations on Big Data

    Science.gov (United States)

    2015-11-29

    Sampling Operations on Big Data — Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller, Lincoln... process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and... categories. These include edge sampling methods, where edges are selected by predetermined criteria, and snowball sampling methods, where algorithms start...

  10. Small Business Development Center

    Data.gov (United States)

    Small Business Administration — Small Business Development Centers (SBDCs) provide assistance to small businesses and aspiring entrepreneurs throughout the United States and its territories. SBDCs...

  11. Small white matter lesion detection in cerebral small vessel disease

    Science.gov (United States)

    Ghafoorian, Mohsen; Karssemeijer, Nico; van Uden, Inge; de Leeuw, Frank E.; Heskes, Tom; Marchiori, Elena; Platel, Bram

    2015-03-01

    Cerebral small vessel disease (SVD) is a common finding on magnetic resonance images of elderly people. White matter lesions (WML) are important markers not only for small vessel disease, but also for neurodegenerative diseases including multiple sclerosis, Alzheimer's disease and vascular dementia. Volumetric measurements such as the "total lesion load" have been studied and related to these diseases. With respect to SVD we conjecture that small lesions are important, as they have been observed to grow over time and they form the majority of lesions in number. To study these small lesions, they need to be annotated, which is a complex and time-consuming task. Existing (semi-)automatic methods have been aimed at volumetric measurements and large lesions, and are not suitable for the detection of small lesions. In this research we established a supervised voxel classification CAD system, optimized and trained to exclusively detect small WMLs. To achieve this, several preprocessing steps were taken, including a robust standardization of subject intensities to reduce inter-subject intensity variability as much as possible. A number of features found to identify small lesions well were calculated, including multimodal intensities, tissue probabilities, several features for accurate location description, a number of second-order derivative features, and a multi-scale annular filter for blobness detection. Only small lesions were used to learn the target concept via AdaBoost using random forests as its base classifiers. Finally, the results were evaluated using free-response receiver operating characteristic (FROC) analysis.
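
    A minimal sketch of the kind of supervised voxel classifier described above: AdaBoost with small random forests as base learners. The synthetic features stand in for the intensity, tissue-probability, location, derivative and blobness features of the paper, and scikit-learn ≥ 1.2 is assumed for the `estimator` keyword.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

    # Synthetic stand-in for the per-voxel feature matrix and lesion labels.
    rng = np.random.default_rng(0)
    n_voxels, n_features = 5000, 12
    X = rng.normal(size=(n_voxels, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_voxels) > 2.0).astype(int)

    clf = AdaBoostClassifier(
        # small random forests as base learners ("base_estimator" in older scikit-learn)
        estimator=RandomForestClassifier(n_estimators=10, max_depth=4),
        n_estimators=25,
    )
    clf.fit(X, y)
    lesion_likelihood = clf.predict_proba(X)[:, 1]  # per-voxel score for FROC-style evaluation
    print("voxels flagged:", int((lesion_likelihood > 0.5).sum()))
    ```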

  12. Urban Waters Small Grants 101

    Science.gov (United States)

    General information on Urban Waters Small Grants is provided in this document. Grantees are listed by themes, including Environmental Justice, Water Quality, Job Training and Creation, and Green Infrastructure.

  13. Gas-driven pump for ground-water samples

    Science.gov (United States)

    Signor, Donald C.

    1978-01-01

    Observation wells installed for artificial-recharge research and other wells used in different ground-water programs are frequently cased with small-diameter steel pipe. To obtain samples from these small-diameter wells in order to monitor water quality, and to calibrate solute-transport models, a small-diameter pump with unique operating characteristics is required that causes a minimum alteration of samples during field sampling. A small-diameter gas-driven pump was designed and built to obtain water samples from wells of two-inch diameter or larger. The pump is a double-piston type with the following characteristics: (1) The water sample is isolated from the operating gas, (2) no source of electricity is necessary, (3) operation is continuous, (4) use of compressed gas is efficient, and (5) operation is reliable over extended periods of time. Principles of operation, actual operation techniques, gas-use analyses and operating experience are described. Complete working drawings and a component list are included. Recent modifications and pump construction for high-pressure applications also are described. (Woodard-USGS)

  14. Alternating phase focussing including space charge

    International Nuclear Information System (INIS)

    Cheng, W.H.; Gluckstern, R.L.

    1992-01-01

    Longitudinal stability can be obtained in a non-relativistic drift tube accelerator by traversing each gap as the rf accelerating field rises. However, the rising accelerating field leads to a transverse defocusing force which is usually overcome by magnetic focussing inside the drift tubes. The radio frequency quadrupole is one way of providing simultaneous longitudinal and transverse focusing without the use of magnets. One can also avoid the use of magnets by traversing alternate gaps between drift tubes as the field is rising and falling, thus providing an alternation of focussing and defocusing forces in both the longitudinal and transverse directions. The stable longitudinal phase space area is quite small, but recent efforts suggest that alternating phase focussing (APF) may permit low velocity acceleration of currents in the 100-300 mA range. This paper presents a study of the parameter space and a test of crude analytic predictions by adapting the code PARMILA, which includes space charge, to APF. 6 refs., 3 figs.

  15. Langevin simulations of QCD, including fermions

    International Nuclear Information System (INIS)

    Kronfeld, A.S.

    1986-02-01

    We encounter critical slow down in updating when ξ/a → ∞ and in matrix inversion (needed to include fermions) when m{sub q}a → 0. A simulation that purports to solve QCD numerically will encounter these limits, so to face the challenge in the title of this workshop, we must cure the disease of critical slow down. Physically, this critical slow down is due to the reluctance of changes at short distances to propagate to large distances. Numerically, the stability of an algorithm at short wavelengths requires a (moderately) small step size; critical slow down occurs when the effective long wavelength step size becomes tiny. The remedy for this disease is an algorithm that propagates signals quickly throughout the system, i.e. one whose effective step size is not reduced for the long wavelength components of the fields. (Here the effective "step size" is essentially an inverse decorrelation time.) To do so one must resolve various wavelengths of the system and modify the dynamics (in CPU time) of the simulation so that all modes evolve at roughly the same rate. This can be achieved by introducing Fourier transforms. I show how to implement Fourier acceleration for Langevin updating and for conjugate gradient matrix inversion. The crucial feature of these algorithms that lends them to Fourier acceleration is that they update the lattice globally; hence the Fourier transforms are computed once per sweep rather than once per hit. (orig./HSI)
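
    The idea of a momentum-dependent effective step size is easiest to see in a toy setting. Below is a minimal sketch of Fourier-accelerated Langevin updating, written by us for a free scalar field on a 1-D periodic lattice rather than QCD: each momentum mode is stepped with its own ε(p), chosen so that all modes decorrelate at roughly the same rate.

    ```python
    import numpy as np

    # Toy model: free scalar field on a 1-D periodic lattice (not QCD).
    N, m, eps, n_sweeps, n_therm = 64, 0.5, 0.01, 4000, 500
    rng = np.random.default_rng(0)

    k = 2.0 * np.pi * np.fft.fftfreq(N)      # lattice momenta
    phat2 = 4.0 * np.sin(k / 2.0) ** 2       # lattice momentum squared
    # Momentum-dependent step size: long-wavelength modes get the largest step,
    # so every mode evolves at roughly the same rate.
    eps_k = eps * (phat2.max() + m**2) / (phat2 + m**2)

    phi = np.zeros(N)
    acc, n_meas = 0.0, 0
    for sweep in range(n_sweeps):
        # Drift force for the free action S = 1/2 sum_x phi(-laplacian + m^2)phi
        force = -(np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) + m**2 * phi
        eta = rng.standard_normal(N)         # real Gaussian noise, unit variance
        # One global update per sweep: FFT, step each mode with its own eps, FFT back
        phi_k = (np.fft.fft(phi) - eps_k * np.fft.fft(force)
                 + np.sqrt(2.0 * eps_k) * np.fft.fft(eta))
        phi = np.real(np.fft.ifft(phi_k))
        if sweep >= n_therm:                 # crude thermalization cut
            acc += np.mean(phi ** 2)
            n_meas += 1

    print("simulated <phi^2>:", acc / n_meas)
    print("exact     <phi^2>:", np.mean(1.0 / (phat2 + m**2)))  # free-field result
    ```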

  16. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small fraction of vertices with high node degree can carry most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...
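
    As a rough illustration of that premise (our own construction, not the paper's exact algorithms), the sketch below keeps only the highest-degree vertices of a scale-free graph and checks how much of the edge structure that small sample touches; the `networkx` package is assumed to be available.

    ```python
    import networkx as nx

    # Keep the top 5% of vertices by degree and see how much structure they touch.
    G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)   # scale-free test graph
    k = int(0.05 * G.number_of_nodes())
    top = sorted(G.nodes, key=lambda v: G.degree[v], reverse=True)[:k]
    H = G.subgraph(top)

    covered = sum(G.degree[v] for v in top)   # edge endpoints touched by the sample
    print("sampled vertices:", H.number_of_nodes())
    print("edges inside the sampled subgraph:", H.number_of_edges())
    print("fraction of edge endpoints covered: %.2f" % (covered / (2 * G.number_of_edges())))
    ```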

  17. New Generation Flask Sampling Technology Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Smith, James R. [AOS, Inc., Colorado Springs, CO (United States)]

    2017-11-09

    Scientists are turning their focus to the Arctic, site of one of the strongest climate change signals. A new generation of technologies is required to function within that harsh environment, chart evolution of its trace gases and provide new kinds of information for models of the atmosphere. Our response to the solicitation tracks how global atmospheric monitoring was launched more than a half century ago; namely, acquisition of discrete samples of air by flask and subsequent analysis in the laboratory. AOS is proposing to develop a new generation of flask sampling technology. It will enable the new Arctic programs to begin with objective high density sampling of the atmosphere by UAS. The Phase I program will build the prototype flask technology and show that it can acquire and store mol fractions of CH4 and CO2 and values of δ13C with good fidelity. A CAD model will be produced for the entire platform including a package with 100 flasks and the airframe with auto-pilot, electronic propulsion and ground-to-air communications. A mobile flask analysis station will be prototyped in Phase I and designed to final form in Phase II. It expends a very small sample per analysis and will interface directly to the flask package integrated permanently into the UAS fuselage. Commercial Applications and Other Benefits: • The New Generation Flask Sampling Technology able to provide a hundred or more samples of air per UAS mission. • A mobile analysis station expending far less sample than the existing ones and small enough to be stationed at the remote sites of Arctic operations. • A new form of validation for continuous trace gas observations from all platforms including the small UAS. • Further demonstration to potential customers of the AOS capabilities to invent, build, deploy and exploit entire platforms for observations of Earth’s atmosphere and ocean. Key Words: Flask Sampler, Mobile Analysis Station, Trace Gas, CO2, CH4, δC13, UAS, Baseline Airborne Observatory

  18. Small Wind Energy Systems

    DEFF Research Database (Denmark)

    Simões, Marcelo Godoy; Farret, Felix Alberto; Blaabjerg, Frede

    2017-01-01

    This chapter intends to serve as a brief guide for someone considering the use of wind energy for small power applications. It discusses how small wind energy systems act as the major energy source for residential or commercial applications, or how to make them part of a microgrid... as a distributed generator. In this way, sources and loads are connected in such a way as to behave as a renewable dispatch center. In this regard, non-critical loads might be curtailed or shed during times of energy shortfall or periods of high energy production costs. If such a wind energy system is connected... Several factors must be considered when selecting a generator for a wind power plant, including the capacity of the AC system, types of loads, availability of spare parts, voltage regulation, technical personnel and cost. If several loads are likely inductive, such as phase-controlled converters, motors and fluorescent lights...

  19. The relative performance of bivariate causality tests in small samples

    NARCIS (Netherlands)

    Bult, J..R.; Leeflang, P.S.H.; Wittink, D.R.

    1997-01-01

    Causality tests have been applied to establish directional effects and to reduce the set of potential predictors. For the latter type of application only bivariate tests can be used. In this study we compare bivariate causality tests. Although the problem addressed is general and could benefit

  20. Data Quality Tools for Data Warehousing - A Small Sample Survey

    National Research Council Canada - National Science Library

    Neely, M

    1998-01-01

    It is estimated that as high as 75% of the effort spent on building a data warehouse can be attributed to back-end issues, such as readying the data and transporting it into the data warehouse (Atre, 1998...

  1. Power in Bayesian Mediation Analysis for Small Sample Research

    NARCIS (Netherlands)

    Miočević, M.; MacKinnon, David; Levy, Roy

    2017-01-01

    Bayesian methods have the potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This article compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product,

  2. Preparation of calcium-separated isotope targets using small samples

    International Nuclear Information System (INIS)

    Thomas, G.E.

    1975-01-01

    Targets are routinely evaporated using a few milligram quantities of separated isotopes of calcium with reducing agents. The source to target distance is 3.0 cm with the substrate, if necessary, as thin as 15 μg/cm² carbon or 100 μg/cm² of gold. A tantalum closed boat, heat shield, and special collimator system are used.

  3. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    Department of Decision Sciences and MIS, Concordia University, Montréal, Québec ... method by application to household income survey data, comparing it against the usual lognormal ... pensions, superannuation and annuities and other.

  4. Small Sample Robust Testing for Normality against Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2012-01-01

    Vol. 41, No. 7 (2012), pp. 1167-1194 ISSN 0361-0918 Grant - others: Aktion(CZ-AT) 51p7, 54p21, 50p14, 54p13 Institutional research plan: CEZ:AV0Z10300504 Keywords: consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.295, year: 2012

  5. Measuring Thermal Conductivity of a Small Insulation Sample

    Science.gov (United States)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    A multiple-throat venturi system has been invented for measuring laminar flow of air or other gas at low speed (1 to 30 cm/s) in a duct while preserving the laminar nature of the flow and keeping the velocity profile across the duct as nearly flat as possible. While means for measuring flows at higher speeds are well established, heretofore, there have been no reliable means for making consistent, accurate measurements in this speed range. In the original application for which this system was invented, the duct leads into the test section of a low-speed wind tunnel wherein uniform, low-speed, laminar flow is required for scientific experiments. The system could also be used to monitor a slow flow of gas in an industrial process like chemical vapor deposition. In the original application, the multiple-throat venturi system is mounted at the inlet end of the duct having a rectangular cross section of 19 by 14 cm, just upstream of an assembly of inlet screens and flow straighteners that help to suppress undesired flow fluctuations (see Figure 1). The basic venturi measurement principle is well established: One measures the difference in pressure between (1) a point just outside the inlet, where the pressure is highest and the kinetic energy lowest; and (2) the narrowest part (the throat) of the venturi passage, where the kinetic energy is highest and the pressure is lowest. Then by use of Bernoulli's equation for the relationship between pressure and kinetic energy, the volumetric flow speed in the duct can be calculated from the pressure difference and the inlet and throat widths. The design of this system represents a compromise among length, pressure recovery, uniformity of flow, and complexity of assembly. Traditionally, venturis are used to measure faster flows in narrower cross sections, with longer upstream and downstream passages to maintain accuracy. The dimensions of the passages of the present venturi system are sized to provide a readily measurable pressure drop. Multiple throats are used to minimize the length needed to recover internal energy and enable the velocity profile to recover to near flatness.
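
    A worked example of the stated measurement principle, combining Bernoulli's equation with continuity. The 19 cm × 14 cm duct cross section is taken from the text; the inlet/throat areas and the pressure reading are assumed purely for illustration.

    ```python
    from math import sqrt

    rho = 1.2                          # air density, kg/m^3
    A_duct = 0.19 * 0.14               # duct cross section from the text, m^2
    A_inlet = A_duct                   # assumed: venturi inlet matches the duct
    A_throat = 0.25 * A_duct           # assumed throat area
    dp = 0.2                           # assumed inlet-to-throat pressure drop, Pa

    # Bernoulli + continuity: dp = 0.5*rho*(v_t^2 - v_in^2) and A_in*v_in = A_t*v_t
    v_throat = sqrt(2.0 * dp / (rho * (1.0 - (A_throat / A_inlet) ** 2)))
    Q = A_throat * v_throat            # volumetric flow rate, m^3/s
    v_duct = Q / A_duct                # mean flow speed in the duct
    print(f"throat speed = {v_throat:.2f} m/s, duct speed = {100 * v_duct:.1f} cm/s")
    ```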

  6. Directional emission of single photons from small atomic samples

    DEFF Research Database (Denmark)

    Miroshnychenko, Yevhen; V. Poulsen, Uffe; Mølmer, Klaus

    2013-01-01

    We provide a formalism to describe deterministic emission of single photons with tailored spatial and temporal profiles from a regular array of multi-level atoms. We assume that a single collective excitation is initially shared by all the atoms in a metastable atomic state, and that this state is coupled by a classical laser field to an optically excited state which rapidly decays to the ground atomic state. Our model accounts for the different field polarization components via re-absorption and emission of light by the Zeeman manifold of optically excited states.

  7. An automated synthesis-purification-sample-management platform for the accelerated generation of pharmaceutical candidates.

    Science.gov (United States)

    Sutherland, J David; Tu, Noah P; Nemcek, Thomas A; Searle, Philip A; Hochlowski, Jill E; Djuric, Stevan W; Pan, Jeffrey Y

    2014-04-01

    A flexible and integrated flow-chemistry-synthesis-purification compound-generation and sample-management platform has been developed to accelerate the production of small-molecule organic-compound drug candidates in pharmaceutical research. Central to the integrated system is a Mitsubishi robot, which hands off samples throughout the process to the next station, including synthesis and purification, sample dispensing for purity and quantification analysis, dry-down, and aliquot generation.

  8. Small ring testing of a creep resistant material

    International Nuclear Information System (INIS)

    Hyde, C.J.; Hyde, T.H.; Sun, W.; Nardone, S.; De Bruycker, E.

    2013-01-01

    Many components in conventional and nuclear power plants, aero-engines, chemical plants, etc., operate at temperatures which are high enough for creep to occur. These include steam pipes, pipe branches, gas and steam turbine blades, etc. The manufacture of such components may also require welds. In most cases, only nominal operating conditions (i.e. pressure, temperatures, system load, etc.) are known and hence precise life predictions for these components are not possible. Also, the proportion of life consumed will vary from position to position within a component. Hence, non-destructive techniques are adopted to assist in making decisions on whether to repair, continue operating or replace certain components. One such approach is to remove a small sample from the component and make small creep test specimens from it, which can be tested to give information on the remaining creep life of the component. When such a small sample cannot be removed from the operating component, e.g. in the case of small components, the component can be taken out of operation in order to make small creep test specimens, the results from which can then be used to assist with making decisions regarding similar or future components. This paper presents a small creep test specimen which can be used for the testing of particularly strong and creep-resistant materials, such as nickel-based superalloys.

  9. On the analysis of small particles

    International Nuclear Information System (INIS)

    Vis, R.D.

    2002-01-01

    The analysis of small, micrometer or even submicrometer sized, particles represents a challenging problem. The whole analytical procedure, including quality assurance and control, needs careful planning. Even the sampling itself is in many cases not trivial at all, and the question as to whether the sample is representative of the suite of particles one wants to measure is sometimes difficult to assess. The question of representativity is even more important if one performs single particle analysis. Only large numbers of such analyses will lead to meaningful and interpretable results. In this contribution a few aspects of the various steps in the analytical protocol will be described. The starting point is that it is the elemental composition of the particle that is of interest.

  10. Sample summary report for KOR1 pressure tube sample

    International Nuclear Information System (INIS)

    Lee, Hee Jong; Nam, Min Woo; Choi, Young Ha

    2006-01-01

    This summary report includes basically the following: - The FLAW CHARACTERIZATION TABLE of the KOR1 sample and supporting documentation. - The CROSS REFERENCE TABLES for each investigator, i.e. the SAMPLE INSPECTION TABLE that cross-references the FLAW CHARACTERIZATION TABLE. - Each Sample Inspection Report, included as an appendix.

  11. Creatures in the Classroom: Including Insects and Small Animals in Your Preschool Gardening Curriculum

    Science.gov (United States)

    Hachey, Alyse C.; Butler, Deanna

    2012-01-01

    When doing spring planting activities, what does a teacher do while waiting for the plants to grow? This waiting time is a golden opportunity to explore another side of gardening--the creatures that make it all possible. Insects are an integral part of everyday world, having existed for over 300 million years; they are the most common animal on…

  12. System for Packaging Planetary Samples for Return to Earth

    Science.gov (United States)

    Badescu, Mircea; Bar-Cohen, Yoseph; Backes, Paul G.; Sherrit, Stewart; Bao, Xiaoqi; Scott, James S.

    2010-01-01

    A system is proposed for packaging material samples on a remote planet (especially Mars) in sealed sample tubes in preparation for later return to Earth. The sample tubes (Figure 1) would comprise (1) tubes initially having open tops and closed bottoms; (2) small, bellows-like collapsible bodies inside the tubes at their bottoms; and (3) plugs to be eventually used to close the tops of the tubes. The top inner surface of each tube would be coated with solder. The side of each plug, which would fit snugly into a tube, would feature a solder-filled ring groove. The system would include equipment for storing, manipulating, filling, and sealing the tubes. The containerization system (see Figure 2) will be organized in stations and will include: the storage station, the loading station, and the heating station. These stations can be arranged in a circular or linear pattern to minimize the manipulator complexity, allowing for a compact and mass-efficient design. The manipulation of the sample tube between stations is done by a simple manipulator arm. The storage station contains the unloaded sample tubes and the plugs before sealing as well as the sealed sample tubes with samples after loading and sealing. The chambers at the storage station also allow for plug insertion into the sample tube. At the loading station the sample is poured or inserted into the sample tube and then the tube is topped off. At the heating station the plug is heated so the solder ring melts and seals the plug to the sample tube. The process is performed as follows: Each tube is filled or slightly overfilled with sample material and the excess sample material is wiped off the top. Then, the plug is inserted into the top section of the tube, packing the sample material against the collapsible bellows-like body and allowing the accommodation of the sample volume. The plug and the top of the tube are heated momentarily to melt the solder in order to seal the tube.

  13. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than nonprobability sampling techniques, because the results of the study can be generalized to the target population.
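
    A small sketch of the sample-size calculation these factors feed into, for the common case of estimating a proportion. The z-values are the standard ones; the population size, proportion, and margins below are illustrative choices of ours.

    ```python
    from math import ceil

    def sample_size_proportion(p=0.5, d=0.05, confidence=0.95, population=None):
        """n = z^2 * p * (1 - p) / d^2, with an optional finite-population correction."""
        z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
        n = z ** 2 * p * (1.0 - p) / d ** 2
        if population is not None:                  # finite-population correction
            n = n / (1.0 + (n - 1.0) / population)
        return ceil(n)

    # Halving the margin of error roughly quadruples the required sample size.
    print(sample_size_proportion(p=0.5, d=0.05, population=10_000))    # about 370
    print(sample_size_proportion(p=0.5, d=0.025, population=10_000))   # about 1333
    ```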

  14. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    Science.gov (United States)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian; et al.

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management
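
    The stratified-versus-random comparison can be mimicked with a toy experiment; all strata, sizes, yields, and sample sizes below are synthetic stand-ins, not the study's data. The sketch estimates a regional mean from a handful of sampled points and compares the spread of the two estimators.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Synthetic "region": three weather strata with different mean yields (t/ha).
    strata_means = np.array([5.5, 7.0, 8.5])
    strata_sizes = np.array([15000, 12000, 7078])    # grid points per stratum (assumed)
    yields = np.concatenate([rng.normal(m, 0.6, s) for m, s in zip(strata_means, strata_sizes)])
    labels = np.repeat(np.arange(3), strata_sizes)
    weights = strata_sizes / strata_sizes.sum()

    def random_estimate(n):
        return yields[rng.choice(yields.size, n, replace=False)].mean()

    def stratified_estimate(n):
        n_h = np.maximum(1, np.round(weights * n).astype(int))   # proportional allocation
        means = [yields[labels == h][rng.choice(strata_sizes[h], n_h[h], replace=False)].mean()
                 for h in range(3)]
        return float(np.dot(weights, means))

    print(f"true regional mean: {yields.mean():.3f}")
    for n in (10, 30, 100):
        se_rand = np.std([random_estimate(n) for _ in range(500)])
        se_strat = np.std([stratified_estimate(n) for _ in range(500)])
        print(f"n = {n:>3}   SE random = {se_rand:.3f}   SE stratified = {se_strat:.3f}")
    ```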

  15. An Integrated Biochemistry Laboratory, Including Molecular Modeling

    Science.gov (United States)

    Wolfson, Adele J.; Hall, Mona L.; Branham, Thomas R.

    1996-11-01

    for lysozyme activity and a colorimetric one for protein concentration. Familiarity with the assays is reinforced by an independently designed project to modify a variable in one of these assays. The assay for lysozyme activity is that of Shugar (6), based on hydrolysis of a cell-wall suspension from the bacterium Micrococcus lysodeikticus, a substrate that is particularly sensitive to lysozyme. As the cell walls are broken down by the enzyme, the turbidity of the sample decreases. This decrease can be conveniently measured by following the decrease in absorbance at a wavelength of 450 nm, using a spectrophotometer or other device for measuring light scattering. The Bradford method (7), a standard assay, is used to determine protein concentration. Using the data from both lysozyme activity assays and protein concentration assays, students can calculate the specific activity for commercial lysozyme and an egg-white solution. These calculations clearly demonstrate the increase in specific activity with increasing purity, since the purified (commercial) preparation has a specific activity approximately 20-fold higher than that of the crude egg-white solution. Lysozyme Purification by Ion-Exchange Chromatography (5 weeks) As suggested by Strang (8), students can design a rational purification of lysozyme using ion-exchange chromatography when presented with information on the isoelectric point of the enzyme and the properties of ion-exchange resins. One week is spent discussing protein purification and the relative advantages and disadvantages of different resins. Each group has a choice of anion-exchange (DEAE) or cation-exchange (CM) resins. Because lysozyme is positively charged below a pH of 11, it will not be adsorbed to an anion-exchange resin, but will be adsorbed to the cation-exchange resin. Therefore, for the cation-exchange protocols, there are further options for methods of collecting and eluting the desired protein. A purification table, including
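
    To make the specific-activity comparison concrete, here is a small illustrative calculation; the unit definition is the commonly used one, and all measured values are invented for the example.

    ```python
    # All measured values below are invented for the example.
    delta_A450_per_min = 0.030       # turbidity decrease of the Micrococcus suspension
    enzyme_volume_ml = 0.10          # volume of enzyme solution added to the assay
    # One unit is commonly defined as a 0.001 decrease in A450 per minute.
    units_per_ml = (delta_A450_per_min / 0.001) / enzyme_volume_ml

    protein_mg_per_ml = 0.015        # from the Bradford standard curve
    specific_activity = units_per_ml / protein_mg_per_ml   # units per mg protein
    print(f"specific activity = {specific_activity:,.0f} units/mg")
    # Repeating the calculation for the crude egg-white extract and the commercial
    # enzyme reproduces the roughly 20-fold difference described above.
    ```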

  16. Small radioisotope powered batteries

    International Nuclear Information System (INIS)

    Myatt, J.

    1975-06-01

    Various methods of converting the large amounts of energy stored in radioisotopes are described. These are based on:- (a) the Seebeck effect; (b) thermionic emission of electrons from a hot body; (c) the Stirling Cycle; and (d) radiovoltaic charge separation in 'p-n' junctions. Small generators in the range 0 to 100 W(e) developed using these effects are described and typical applications for each of these systems are given. These include data collection and transmission from remote sites, implantable medical devices, lighthouses, radio beacons, and space power supplies. (author)

  17. [Small vessel cerebrovascular disease].

    Science.gov (United States)

    Cardona Portela, P; Escrig Avellaneda, A

    2018-05-09

    Small vessel cerebrovascular disease is a spectrum of different conditions that includes lacunar infarction, alteration of the deep white matter, and microbleeds. Hypertension is the main risk factor, although atherothrombotic lesions may be present, particularly in large lacunar infarctions, along with other vascular risk factors. MRI findings are characteristic, and the lesions are true biomarkers that make it possible to weigh the contribution of risk factors and to define their prognostic value. Copyright © 2018 SEH-LELHA. Published by Elsevier España, S.L.U. All rights reserved.

  18. Ball assisted device for analytical surface sampling

    Science.gov (United States)

    ElNaggar, Mariam S; Van Berkel, Gary J; Covey, Thomas R

    2015-11-03

    A system for sampling a surface includes a sampling probe having a housing and a socket, and a rolling sampling sphere within the socket. The housing has a sampling fluid supply conduit and a sampling fluid exhaust conduit. The sampling fluid supply conduit supplies sampling fluid to the sampling sphere. The sampling fluid exhaust conduit has an inlet opening for receiving sampling fluid carried from the surface by the sampling sphere. A surface sampling probe and a method for sampling a surface are also disclosed.

  19. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.

  20. Patient identification in blood sampling.

    Science.gov (United States)

    Davidson, Anne; Bolton-Maggs, Paula

    The majority of adverse reports relating to blood transfusions result from human error, including misidentification of patients and incorrect labelling of samples. This article outlines best practice in blood sampling for transfusion (but is recommended for all pathology samples) and the role of patient empowerment in improving safety.