WorldWideScience

Sample records for stratified sampling based

  1. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    To explore classification rules based on data mining methodologies that can be used to define strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of the variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
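
    The two-step idea in this record (cluster providers, then learn interpretable stratification rules from the cluster labels) can be sketched as below. This is only an illustration with made-up provider features and hypothetical variable names; it is not the authors' code or data.

      # Minimal sketch: k-means clusters, then a shallow decision tree on the
      # cluster labels so that its splits read as stratification rules.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      # hypothetical provider features: inpatients per specialist, population density, beds
      X = rng.lognormal(mean=2.0, sigma=0.8, size=(500, 3))

      clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, clusters)
      print(export_text(tree, feature_names=["inpatients_per_specialist",
                                             "population_density", "beds"]))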

  2. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which took plant abundance as an auxiliary variable, was explored, and an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method (SOPA) proposed in this study had the minimal absolute error of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.

  3. Data splitting for artificial neural networks using SOM-based stratified sampling.

    Science.gov (United States)

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance, with good model performance and no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
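
    A rough sketch of the SOM-plus-Neyman idea described above is given below: map the data onto a small SOM grid, treat each unit as a stratum, and allocate the held-out sample in proportion to stratum size times stratum standard deviation. It assumes the third-party minisom package and invented data; the grid size and allocation details are illustrative, not the authors' settings.

      import numpy as np
      from minisom import MiniSom  # assumed third-party SOM implementation

      rng = np.random.default_rng(1)
      X = rng.normal(size=(1000, 4))
      y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=1000)

      som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
      som.train_random(X, 2000)
      # assign each record to its best-matching unit (the stratum)
      strata = np.array([np.ravel_multi_index(som.winner(x), (4, 4)) for x in X])

      n_test = 200
      labels, counts = np.unique(strata, return_counts=True)
      stds = np.array([y[strata == h].std() for h in labels])
      weights = counts * stds                          # Neyman-style allocation weights
      alloc = np.maximum(1, np.round(n_test * weights / weights.sum()).astype(int))
      test_idx = np.concatenate([
          rng.choice(np.where(strata == h)[0], size=min(a, c), replace=False)
          for h, a, c in zip(labels, alloc, counts)
      ])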

  4. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
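
    The model-based estimator described here can be sketched as follows: fit a nonparametric regression on the sampled units and add its predictions for the non-sampled units to the observed sample total. The k-nearest-neighbour smoother and the synthetic population below are stand-ins, not the estimator developed in the paper.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(2)
      N = 5000
      x = rng.uniform(0, 10, size=(N, 1))            # auxiliary variable, known for every unit
      y = 2.0 * np.sin(x[:, 0]) + 0.5 * x[:, 0] + rng.normal(scale=0.3, size=N)

      sample = rng.choice(N, size=300, replace=False)
      mask = np.zeros(N, dtype=bool)
      mask[sample] = True

      smoother = KNeighborsRegressor(n_neighbors=25).fit(x[mask], y[mask])
      total_hat = y[mask].sum() + smoother.predict(x[~mask]).sum()
      print(total_hat, y.sum())                      # model-based estimate vs. true total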

  5. Bayesian stratified sampling to assess corpus utility

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Scovel, C.; Thomas, T.; Hall, S.

    1998-12-01

    This paper describes a method for asking statistical questions about a large text corpus. The authors exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" They estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Bayesian analysis and stratified sampling are used to reduce the sampling uncertainty of the estimate from over 3,100 documents to fewer than 1,000. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
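
    The Bayesian stratified estimate of a corpus-wide proportion can be sketched as below: a Beta posterior per stratum, combined with the stratum weights by Monte Carlo. The stratum sizes, audit counts and flat priors are invented for illustration and are not the figures from the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      N_h = np.array([30000, 10000, 5820])    # documents per stratum (hypothetical)
      n_h = np.array([100, 60, 40])           # documents audited per stratum
      k_h = np.array([80, 30, 10])            # audited documents judged "real"

      W = N_h / N_h.sum()
      draws = np.column_stack([
          rng.beta(1 + k, 1 + n - k, size=20000) for n, k in zip(n_h, k_h)
      ])
      p_overall = draws @ W                   # posterior draws of the corpus-wide proportion
      print(p_overall.mean(), np.percentile(p_overall, [2.5, 97.5]))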

  6. Monte Carlo stratified source-sampling

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    1997-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test-problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.

  7. Distribution-Preserving Stratified Sampling for Learning Problems.

    Science.gov (United States)

    Cervellera, Cristiano; Maccio, Danilo

    2017-06-09

    The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
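
    A minimal sketch of the recursive-partition idea is given below: split the longest dimension of each cell at its median, recurse to a fixed depth, and sample from each leaf in proportion to its share of the data, so the subsample roughly preserves the input distribution. The depth limit and proportional allocation are simplifications of the paper's procedure, not its exact algorithm.

      import numpy as np

      def partition_sample(X, n_out, depth=4, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          leaves = [np.arange(len(X))]
          for _ in range(depth):                       # recursive binary partition
              new_leaves = []
              for idx in leaves:
                  if len(idx) < 2:
                      new_leaves.append(idx)
                      continue
                  dim = np.argmax(X[idx].max(axis=0) - X[idx].min(axis=0))
                  med = np.median(X[idx, dim])
                  left, right = idx[X[idx, dim] <= med], idx[X[idx, dim] > med]
                  new_leaves.extend([left, right] if len(left) and len(right) else [idx])
              leaves = new_leaves
          # allocate the output sample proportionally to leaf occupancy
          picks = [rng.choice(idx, size=min(max(1, round(n_out * len(idx) / len(X))), len(idx)),
                              replace=False) for idx in leaves]
          return np.concatenate(picks)

      X = np.random.default_rng(4).normal(size=(10000, 3))
      subset = X[partition_sample(X, n_out=500)]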

  8. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    International Nuclear Information System (INIS)

    Mohamed, A.

    1998-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.

  9. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species.

  10. Prototypic Features of Loneliness in a Stratified Sample of Adolescents

    Directory of Open Access Journals (Sweden)

    Mathias Lasgaard

    2009-06-01

    Dominant theoretical approaches in loneliness research emphasize the value of personality characteristics in explaining loneliness. The present study examines whether dysfunctional social strategies and attributions in lonely adolescents can be explained by personality characteristics. A questionnaire survey was conducted with 379 Danish Grade 8 students (M = 14.1 years, SD = 0.4) from 22 geographically stratified and randomly selected schools. Hierarchical linear regression analysis showed that network orientation, success expectation and avoidance in affiliative situations predicted loneliness independent of personality characteristics, demographics and social desirability. The study indicates that dysfunctional strategies and attributions in affiliative situations are directly related to loneliness in adolescence. These strategies and attributions may preclude lonely adolescents from guidance and intervention. Thus, professionals need to be knowledgeable about prototypic features of loneliness in addition to employing a pro-active approach when assisting adolescents who display prototypic features.

  11. Monitoring oil persistence on beaches: SCAT versus stratified random sampling designs

    International Nuclear Information System (INIS)

    Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.

    2003-01-01

    In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not offer estimates of contaminated beach area or stranded oil volumes. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually-evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters impact the statistical outcome. Power analysis based on the study results indicates that optimum power is derived when unnecessary stratification is avoided. It was emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling.
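
    For reference, the stratified estimators such a design relies on are the usual ones: per-stratum means scaled by stratum areas, with a design-based standard error. The strata, quadrat counts and oiled fractions below are invented numbers, not survey data.

      import numpy as np

      # stratum area (m^2) and observed oiled fraction per sampled quadrat
      strata = {
          "high_energy_beach": (120000, np.array([0.0, 0.1, 0.0, 0.3, 0.05])),
          "sheltered_beach":   (45000,  np.array([0.4, 0.2, 0.6, 0.1])),
      }

      total, var = 0.0, 0.0
      for area, obs in strata.values():
          total += area * obs.mean()                        # estimated oiled area in the stratum
          var += (area ** 2) * obs.var(ddof=1) / len(obs)   # ignoring the finite-population correction
      print(f"oiled area ~ {total:.0f} m^2, SE ~ {np.sqrt(var):.0f} m^2")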

  12. MUP, CEC-DES, STRADE. Codes for uncertainty propagation, experimental design and stratified random sampling techniques

    International Nuclear Information System (INIS)

    Amendola, A.; Astolfi, M.; Lisanti, B.

    1983-01-01

    The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems.
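
    The Latin hypercube idea behind STRADE can be sketched in a few lines: one draw per equal-probability interval in each input dimension, with the columns permuted independently. This is a generic sketch, not the STRADE code itself.

      import numpy as np

      def latin_hypercube(n_samples, n_dims, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          # one uniform draw inside each of n_samples equal-width bins, per dimension
          u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
          for j in range(n_dims):                      # decouple the dimensions
              u[:, j] = rng.permutation(u[:, j])
          return u

      design = latin_hypercube(10, 3)                  # 10 runs over 3 inputs scaled to [0, 1)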

  13. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index (COIN) and COIN integrals are derived. This is achieved by using partially uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in these regions. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary, which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using a factor of 5-10 times fewer sampling points than with uniformly distributed points.

  14. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    In practical utilization of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. An allocation of sample size becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real life situations, a linear cost function of the sample size nh is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
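
    As context for the allocation problem, the classical optimum allocation under a simple linear cost constraint (n_h proportional to N_h*S_h/sqrt(c_h)) is sketched below for a single characteristic. The paper's own formulation, with a nonlinear travel-cost term and several characteristics handled by goal programming, is not reproduced; all numbers are illustrative.

      import numpy as np

      N_h = np.array([4000, 2500, 1500])     # stratum sizes
      S_h = np.array([12.0, 20.0, 35.0])     # stratum standard deviations
      c_h = np.array([5.0, 8.0, 15.0])       # per-unit sampling cost in each stratum
      budget = 2000.0

      weights = N_h * S_h / np.sqrt(c_h)
      n_h = budget * weights / np.sum(weights * c_h)   # spends approximately the full budget
      n_h = np.maximum(1, np.round(n_h)).astype(int)
      print(n_h, (n_h * c_h).sum())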

  15. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Vol. 19, No. 30 (2012), pp. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords: Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  16. Stratifying patients with peripheral neuropathic pain based on sensory profiles

    DEFF Research Database (Denmark)

    Vollert, Jan; Maier, Christoph; Attal, Nadine

    2017-01-01

    In a recent cluster analysis, it has been shown that patients with peripheral neuropathic pain can be grouped into 3 sensory phenotypes based on quantitative sensory testing profiles, which are mainly characterized by either sensory loss, intact sensory function and mild thermal hyperalgesia and ... populations that need to be screened to reach a subpopulation large enough to conduct a phenotype-stratified study. The most common phenotype in diabetic polyneuropathy was sensory loss (83%), followed by mechanical hyperalgesia (75%) and thermal hyperalgesia (34%; note that percentages are overlapping ...

  17. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles of the density functions of two land-surface parameters (topographic wetness index and potential incoming solar radiation) derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that either had not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using

  18. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither the magnitude nor the direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimates are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs

  19. Interactive Fuzzy Goal Programming approach in multi-response stratified sample surveys

    Directory of Open Access Journals (Sweden)

    Gupta Neha

    2016-01-01

    In this paper, we applied an Interactive Fuzzy Goal Programming (IFGP) approach with linear, exponential and hyperbolic membership functions, which focuses on maximizing the minimum membership values, to determine the preferred compromise solution for the multi-response stratified surveys problem, formulated as a Multi-Objective Non-Linear Programming Problem (MONLPP); by linearizing the nonlinear objective functions at their individual optimum solutions, the problem is approximated by an Integer Linear Programming Problem (ILPP). A numerical example based on real data is given, and comparison with some existing allocations, viz. Cochran’s compromise allocation, Chatterjee’s compromise allocation and Khowaja’s compromise allocation, is made to demonstrate the utility of the approach.

  20. Gambling problems in the family – A stratified probability sample study of prevalence and reported consequences

    Directory of Open Access Journals (Sweden)

    Øren Anita

    2008-12-01

    Background Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Methods Men and women 16–74 years old randomly selected from the Norwegian national population database received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Results Overall, 2.0% of the study population was defined as CSOs. Young age, female gender, and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Conclusion Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members.

  1. Employment status, inflation and suicidal behaviour: an analysis of a stratified sample in Italy.

    Science.gov (United States)

    Solano, Paola; Pizzorno, Enrico; Gallina, Anna M; Mattei, Chiara; Gabrielli, Filippo; Kayman, Joshua

    2012-09-01

    There is abundant empirical evidence of an excess risk of suicide among the unemployed, although few studies have investigated the influence of economic downturns on suicidal behaviours in an employment status-stratified sample. We investigated how economic inflation affected suicidal behaviours according to employment status in Italy from 2001 to 2008. Data concerning economically active people were provided by the Italian Institute for Statistical Analysis and by the International Monetary Fund. The association between inflation and completed versus attempted suicide with respect to employment status was investigated in every year and quarter-year of the study time frame. We considered three occupational categories: the employed, the unemployed who were previously employed (PE) and the unemployed who had never worked (NE). The unemployed are at higher suicide risk than the employed. Among the PE, a significant association between inflation and attempted suicide was found, whereas no association was reported for completed suicides. No association with inflation was found for either completed or attempted suicides among the employed and the NE. Completed suicide in females is significantly associated with unemployment in every quarter-year. The reported vulnerability to suicidal behaviours among the PE as inflation rises underlines the need for effective support strategies for both genders in times of economic downturns.

  2. Stratifying empiric risk of schizophrenia among first degree relatives using multiple predictors in two independent Indian samples.

    Science.gov (United States)

    Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N

    2016-12-01

    Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. The aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history, and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128) with one sibling or offspring; (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more sibs/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). In the initial sample, there were 19 (14.8%) individuals in the high risk group, 75 (58.6%) in the moderate risk group and 34 (26.6%) in the above average risk group. In the validation sample, risks were distributed as: high (45%), moderate (38%) and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed stratification of risk may be easier and more realistic for family members. Copyright © 2016. Published by Elsevier B.V.

  3. Stratified Sampling to Define Levels of Petrographic Variation in Coal Beds: Examples from Indonesia and New Zealand

    Directory of Open Access Journals (Sweden)

    Tim A. Moore

    2016-01-01

    DOI: 10.17014/ijog.3.1.29-51. Stratified sampling of coal seams for petrographic analysis using block samples is a viable alternative to standard methods of channel sampling and particulate pellet mounts. Although petrographic analysis of particulate pellets is employed widely, it is both time-consuming and does not allow variation within sampling units to be assessed, an important measure in any study, whether for paleoenvironmental reconstruction or for obtaining estimates of industrial attributes. Also, samples taken as intact blocks provide additional information, such as texture and botanical affinity, that cannot be gained using particulate pellets. Stratified sampling can be employed on both ‘fine’- and ‘coarse’-grained coal units. Fine-grained coals are defined as those coal intervals that do not contain vitrain bands greater than approximately 1 mm in thickness (as measured perpendicular to bedding). In fine-grained coal seams, a reasonably sized block sample (with a polished surface area of ~3 cm²) can be taken that encapsulates the macroscopic variability. However, for coarse-grained coals (vitrain bands >1 mm), a different system has to be employed in order to accurately account for the larger particles. Macroscopic point counting of vitrain bands can accurately account for those particles >1 mm within a coal interval. This point counting method is conducted using something as simple as string on a coal face with marked intervals greater than the largest particle expected to be encountered (although new technologies are being developed to capture this type of information digitally). Comparative analyses of particulate pellets and blocks on the same interval show less than 6% variation between the two sample types when blocks are recalculated to include macroscopic counts of vitrain. Therefore, even in coarse-grained coals, stratified sampling can be used effectively and representatively.

  4. PREDOMINANTLY LOW METALLICITIES MEASURED IN A STRATIFIED SAMPLE OF LYMAN LIMIT SYSTEMS AT Z = 3.7

    Energy Technology Data Exchange (ETDEWEB)

    Glidden, Ana; Cooper, Thomas J.; Simcoe, Robert A. [Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139 (United States); Cooksey, Kathy L. [Department of Physics and Astronomy, University of Hawai‘i at Hilo, 200 West Kāwili Street, Hilo, HI 96720 (United States); O’Meara, John M., E-mail: aglidden@mit.edu, E-mail: tjcooper@mit.edu, E-mail: simcoe@space.mit.edu, E-mail: kcooksey@hawaii.edu, E-mail: jomeara@smcvt.edu [Department of Physics, Saint Michael’s College, One Winooski Park, Colchester, VT 05439 (United States)

    2016-12-20

    We measured metallicities for 33 z = 3.4–4.2 absorption line systems drawn from a sample of H I-selected Lyman limit systems (LLSs) identified in Sloan Digital Sky Survey (SDSS) quasar spectra and stratified based on metal line features. We obtained higher-resolution spectra with the Keck Echellette Spectrograph and Imager, selecting targets according to our stratification scheme in an effort to fully sample the LLS population metallicity distribution. We established a plausible range of H I column densities and measured column densities (or limits) for ions of carbon, silicon, and aluminum, finding ionization-corrected metallicities or upper limits. Interestingly, our ionization models were better constrained with enhanced α-to-aluminum abundances, with a median abundance ratio of [α/Al] = 0.3. Measured metallicities were generally low, ranging from [M/H] = −3 to −1.68, with even lower metallicities likely for some systems with upper limits. Using survival statistics to incorporate limits, we constructed the cumulative distribution function (CDF) for LLS metallicities. Recent models of galaxy evolution propose that galaxies replenish their gas from the low-metallicity intergalactic medium (IGM) via high-density H I “flows” and eject enriched interstellar gas via outflows. Thus, there has been some expectation that LLSs at the peak of cosmic star formation (z ≈ 3) might have a bimodal metallicity distribution. We modeled our CDF as a mix of two Gaussian distributions, one reflecting the metallicity of the IGM and the other representative of the interstellar medium of star-forming galaxies. This bimodal distribution yielded a poor fit. A single Gaussian distribution better represented the sample with a low mean metallicity of [M/H] ≈ −2.5.

  5. Discrete element method (DEM) simulations of stratified sampling during solid dosage form manufacturing.

    Science.gov (United States)

    Hancock, Bruno C; Ketterhagen, William R

    2011-10-14

    Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation, the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency relative standard deviation (RSD). The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help to guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

    Background Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of a child aged [...]. Results 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than

  7. Monitoring and identification of spatiotemporal landscape changes in multiple remote sensing images by using a stratified conditional Latin hypercube sampling approach and geostatistical simulation.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh

    2011-06-01

    This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples, respectively. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.

  8. Quantum image pseudocolor coding based on the density-stratified method

    Science.gov (United States)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It maps grayscale images to color images to make them more visually appealing or to highlight some parts of the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes the density values from gray to color in parallel according to the colormap. Firstly, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help to describe the scheme further. Finally, future work is discussed.

  9. Sampling high-altitude and stratified mating flights of red imported fire ant.

    Science.gov (United States)

    Fritz, Gary N; Fritz, Ann H; Vander Meer, Robert K

    2011-05-01

    With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens and males during mating flights at altitudinal intervals reaching as high as approximately 140 m. Our trapping system uses an electric winch and a 1.2-m spindle bolted to a swiveling platform. The winch dispenses up to 183 m of Kevlar-core nylon rope and the spindle stores 10 panels (0.9 by 4.6 m each) of nylon tulle impregnated with Tangle-Trap. The panels can be attached to the rope at various intervals and hoisted into the air by using a 3-m-diameter, helium-filled balloon. Raising or lowering all 10 panels takes approximately 15-20 min. This trap also should be useful for altitudinal sampling of other insects of medical importance.

  10. Improving Precision and Reducing Runtime of Microscopic Traffic Simulators through Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Khewal Bhupendra Kesur

    2013-01-01

    This paper examines the application of Latin Hypercube Sampling (LHS) and Antithetic Variables (AVs) to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AV allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined, one where stratification is applied to headways and routing decisions of individual vehicles and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AV, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators as fewer simulations are required to achieve a fixed level of precision with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
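
    The antithetic-variates idea referred to above can be illustrated with a toy model: driving paired replications with U and 1-U makes their outputs negatively correlated, so the averaged estimate has a smaller standard error at the same simulation cost. The quadratic "simulator" below is a stand-in, not a traffic model.

      import numpy as np

      rng = np.random.default_rng(5)

      def sim(u):
          # stand-in simulator: output grows with the sampled percentile
          return 10.0 + 30.0 * u ** 2

      plain = sim(rng.random(10000))                   # 10,000 independent runs
      u = rng.random(5000)
      antithetic = 0.5 * (sim(u) + sim(1.0 - u))       # 5,000 pairs, also 10,000 runs

      print(plain.std(ddof=1) / np.sqrt(plain.size),               # SE, independent sampling
            antithetic.std(ddof=1) / np.sqrt(antithetic.size))     # SE, antithetic pairs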

  11. A direct observation method for auditing large urban centers using stratified sampling, mobile GIS technology and virtual environments.

    Science.gov (United States)

    Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth

    2017-02-16

    With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to socioeconomic status (SES) and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently well and has the advantage of financial and temporal efficiency when auditing a large city.

  12. Application of a stratified random sampling technique to the estimation and minimization of respirable quartz exposure to underground miners

    International Nuclear Information System (INIS)

    Makepeace, C.E.; Horvath, F.J.; Stocker, H.

    1981-11-01

    The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information on the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of the collected data, one can determine the individual exposures and the mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and collective exposures.

  13. Development and enrolee satisfaction with basic medical insurance in China: A systematic review and stratified cluster sampling survey.

    Science.gov (United States)

    Jing, Limei; Chen, Ru; Jing, Lisa; Qiao, Yun; Lou, Jiquan; Xu, Jing; Wang, Junwei; Chen, Wen; Sun, Xiaoming

    2017-07-01

    Basic Medical Insurance (BMI) has changed remarkably over time in China because of health reforms that aim to achieve universal coverage and better health care through increased subsidies, reimbursement, and benefits. In this paper, we present the development of BMI, including financing and operation, with a systematic review. Meanwhile, Pudong New Area in Shanghai was chosen as a typical BMI sample for its coverage and management; a stratified cluster sampling survey together with an ordinary logistic regression model was used for the analysis. Enrolee satisfaction and the factors associated with enrolee satisfaction with BMI were analysed. We found that the re-enrolment rate superficially improved BMI coverage, which nearly achieved universal levels. However, BMI funds still faced the dual problems of fund deficits and undercompensation of the insured, and a long-term strategy is needed to realize the integration of BMI schemes with more homogeneous coverage and benefits. Moreover, Urban Resident Basic Medical Insurance participants reported a higher rate of dissatisfaction than other participants. The key predictors of the enrolees' satisfaction were awareness of the premium and compensation, affordability of out-of-pocket costs, and the proportion of reimbursement. These results highlight the importance of the Chinese government taking measures, such as strengthening BMI fund management, exploring mixed payment methods, and regulating sequential medical orders, to develop an integrated medical insurance system of universal coverage and vertical equity while simultaneously improving enrolee satisfaction. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Model-Based Prediction of Pulsed Eddy Current Testing Signals from Stratified Conductive Structures

    International Nuclear Information System (INIS)

    Zhang, Jian Hai; Song, Sung Jin; Kim, Woong Ji; Kim, Hak Joon; Chung, Jong Duk

    2011-01-01

    Excitation and propagation of the electromagnetic field of a cylindrical coil above an arbitrary number of conductive plates in pulsed eddy current testing (PECT) are very complex problems due to their complicated physical properties. In this paper, an analytical model of PECT is established by Fourier series based on the truncated region eigenfunction expansion (TREE) method for a single air-cored coil above stratified conductive structures (SCS) to investigate their integrity. From the presented expression for PECT, the coil impedance due to the SCS is calculated with an analytical approach using the generalized reflection coefficient in series form. Then the multilayered structures manufactured from non-ferromagnetic (STS301L) and ferromagnetic (SS400) materials are investigated with the developed PECT model. Good prediction by the analytical PECT model not only contributes to the development of an efficient solver but can also be applied to optimize the conditions of the experimental setup in PECT.

  15. Executive control resources and frequency of fatty food consumption: findings from an age-stratified community sample.

    Science.gov (United States)

    Hall, Peter A

    2012-03-01

    Fatty foods are regarded as highly appetitive, and self-control is often required to resist consumption. Executive control resources (ECRs) are potentially facilitative of self-control efforts, and therefore could predict success in the domain of dietary self-restraint. It is not currently known whether stronger ECRs facilitate resistance to fatty food consumption, and moreover, it is unknown whether such an effect would be stronger in some age groups than others. The purpose of the present study was to examine the association between ECRs and consumption of fatty foods among healthy community-dwelling adults across the adult life span. An age-stratified sample of individuals between 18 and 89 years of age attended two laboratory sessions. During the first session they completed two computer-administered tests of ECRs (Stroop and Go-NoGo) and a test of general cognitive function (Wechsler Abbreviated Scale of Intelligence); participants completed two consecutive 1-week recall measures to assess frequency of fatty and nonfatty food consumption. Regression analyses revealed that stronger ECRs were associated with lower frequency of fatty food consumption over the 2-week interval. This association was observed for both measures of ECR and a composite measure. The effect remained significant after adjustment for demographic variables (age, gender, socioeconomic status), general cognitive function, and body mass index. The observed effect of ECRs on fatty food consumption frequency was invariant across age group, and did not generalize to nonfatty food consumption. ECRs may be potentially important, though understudied, determinants of dietary behavior in adults across the life span.

  16. Prevalence and Risk Factors of Dengue Infection in Khanh Hoa Province, Viet Nam: A Stratified Cluster Sampling Survey.

    Science.gov (United States)

    Mai, Vien Quang; Mai, Trịnh Thị Xuan; Tam, Ngo Le Minh; Nghia, Le Trung; Komada, Kenichi; Murakami, Hitoshi

    2018-05-19

    Dengue is a clinically important arthropod-borne viral disease with increasing global incidence. Here we aimed to estimate the prevalence of dengue infections in Khanh Hoa Province, central Viet Nam, and to identify risk factors for infection. We performed a stratified cluster sampling survey including residents aged 3-60 years in Nha Trang City, Ninh Hoa District and Dien Khanh District, Khanh Hoa Province, in October 2011. Immunoglobulin G (IgG) and immunoglobulin M (IgM) against dengue were analyzed using a rapid test kit. Participants completed a questionnaire exploring clinical dengue incidence, socio-economic status, and individual behavior. A household checklist was used to examine environment, mosquito larvae presence, and exposure to public health interventions. IgG positivity was 20.5% (urban, 16.3%; rural, 23.0%), IgM positivity was 6.7% (urban, 6.4%; rural, 6.9%), and incidence of clinically compatible dengue during the prior 3 months was 2.8 per 1,000 persons (urban, 1.7; rural, 3.4). For IgG positivity, the adjusted odds ratio (AOR) was 2.68 (95% confidence interval [CI], 1.24-5.81) for mosquito larvae presence in water pooled in old tires and was 3.09 (95% CI, 1.75-5.46) for proximity to a densely inhabited area. For IgM positivity, the AOR was 3.06 (95% CI, 1.50-6.23) for proximity to a densely inhabited area. Our results indicated rural penetration of dengue infections. Control measures should target densely inhabited areas, and may include clean-up of discarded tires and water-collecting waste.

  17. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Science.gov (United States)

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
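
    The down-sampling being evaluated can be approximated with standard tools, as in the sketch below: either let each tree balance its own bootstrap, or explicitly down-sample the majority (absent) class before fitting. Data, class balance and forest settings are synthetic stand-ins, not the study's.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(6)
      X = rng.normal(size=(5000, 6))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 2.2).astype(int)   # rare "presence" class

      # option 1: balance each tree's bootstrap sample
      rf_bal = RandomForestClassifier(n_estimators=200, class_weight="balanced_subsample",
                                      random_state=0).fit(X, y)

      # option 2: explicit down-sampling of the majority class before fitting
      present, absent = np.where(y == 1)[0], np.where(y == 0)[0]
      keep = np.concatenate([present, rng.choice(absent, size=len(present), replace=False)])
      rf_down = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[keep], y[keep])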

  18. Stratified Entomological Sampling in Preparation for an Area-Wide Integrated Pest Management Program: The Example of Glossina palpalis gambiensis (Diptera: Glossinidae) in the Niayes of Senegal

    International Nuclear Information System (INIS)

    Bouyer, Jeremy; Seck, Momar Talla; Guerrini, Laure; Sall, Baba; Ndiaye, Elhadji Youssou; Vreysen, Marc J.B.

    2010-01-01

    The riverine tsetse species Glossina palpalis gambiensis Vanderplank 1949 (Diptera: Glossinidae) inhabits riparian forests along river systems in West Africa. The government of Senegal has embarked on a project to eliminate this tsetse species, and African animal trypanosomoses, from the Niayes area using an area-wide integrated pest management approach. A stratified entomological sampling strategy was therefore developed using spatial analytical tools and mathematical modeling. A preliminary phytosociological census identified eight types of suitable habitat, which could be discriminated from Landsat 7 ETM satellite images and were denominated wet areas. At the end of March 2009, 683 unbaited Vavoua traps had been deployed, and the observed infested area in the Niayes was 525 km². In the remaining area, a mathematical model was used to assess the risk that flies were present despite a sequence of zero catches. The analysis showed that this risk was above 0.05 in 19% of this area, which will be considered infested during the control operations. The remote sensing analysis that identified the wet areas allowed a restriction of the area to be surveyed to 4% of the total surface area (7,150 km²), whereas the mathematical model provided an efficient method to improve the accuracy and the robustness of the sampling protocol. The final size of the control area will be decided based on the entomological collection data. This entomological sampling procedure might be used for other vector or pest control scenarios. (Authors)

  19. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    Science.gov (United States)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

    Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under a monoplane x-ray, which is a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, first sampled from projected 3D gradients and second from the 2D image gradients, so as to recover 3D rigid-body rotations and out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, the 3D to monoplane 2D registration experiments were set up with a rather high range of initial mean target registration error, from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows a high potential for application in clinical image-guidance systems.
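
    As a rough illustration of the final in-plane translation step described above, the generic sketch below recovers an integer 2D shift between two same-sized images from the peak of their phase correlation. It is not the authors' implementation and assumes a pure cyclic translation between the inputs.

      import numpy as np

      def phase_correlation_shift(img_a: np.ndarray, img_b: np.ndarray):
          """Return the integer (row, col) shift d such that img_b is roughly np.roll(img_a, d, axis=(0, 1))."""
          cross = np.conj(np.fft.fft2(img_a)) * np.fft.fft2(img_b)
          cross /= np.abs(cross) + 1e-12            # keep only the phase information
          corr = np.fft.ifft2(cross).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Peaks past the midpoint wrap around to negative shifts.
          return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))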

  20. Stratified random sampling plans designed to assist in the determination of radon and radon daughter concentrations in underground uranium mine atmosphere

    International Nuclear Information System (INIS)

    Makepeace, C.E.

    1981-01-01

    Sampling strategies for the monitoring of deleterious agents present in uranium mine air in underground and surface mining areas are described. These methods are designed to prevent overexposure of the lining of the respiratory system of uranium miners to ionizing radiation from radon and radon daughters, and whole body overexposure to external gamma radiation. A detailed description is provided of stratified random sampling monitoring methodology for obtaining baseline data to be used as a reference for subsequent compliance assessment.

  1. Resource-stratified implementation of a community-based breast cancer management programme in Peru.

    Science.gov (United States)

    Duggan, Catherine; Dvaladze, Allison L; Tsu, Vivien; Jeronimo, Jose; Constant, Tara K Hayes; Romanoff, Anya; Scheel, John R; Patel, Shilpen; Gralow, Julie R; Anderson, Benjamin O

    2017-10-01

    Breast cancer incidence and mortality rates continue to rise in Peru, with related deaths projected to increase from 1208 in 2012, to 2054 in 2030. Despite improvements in national cancer control plans, various barriers to positive breast cancer outcomes remain. Multiorganisational stakeholder collaboration is needed for the development of functional, sustainable early diagnosis, treatment and supportive care programmes with the potential to achieve measurable outcomes. In 2011, PATH, the Peruvian Ministry of Health, the National Cancer Institute in Lima, and the Regional Cancer Institute in Trujillo collaborated to establish the Community-based Program for Breast Health, the aim of which was to improve breast health-care delivery in Peru. A four-step, resource-stratified implementation strategy was used to establish an effective community-based triage programme and a practical early diagnosis scheme within existing multilevel health-care infrastructure. The phased implementation model was initially developed by the Breast Cancer Initiative 2·5: a group of health and non-governmental organisations who collaborate to improve breast cancer outcomes. To date, the Community-based Program for Breast Health has successfully implemented steps 1, 2, and 3 of the Breast Cancer Initiative 2·5 model in Peru, with reports of increased awareness of breast cancer among women, improved capacity for early diagnosis among health workers, and the creation of stronger and more functional linkages between the primary levels (ie, local or community) and higher levels (ie, district, region, and national) of health care. The Community-based Program for Breast Health is a successful example of stakeholder and collaborator involvement-both internal and external to Peru-in the design and implementation of resource-appropriate interventions to increase breast health-care capacity in a middle-income Latin American country. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. The relationships between sixteen perfluorinated compound concentrations in blood serum and food, and other parameters, in the general population of South Korea with proportionate stratified sampling method.

    Science.gov (United States)

    Kim, Hee-Young; Kim, Seung-Kyu; Kang, Dong-Mug; Hwang, Yong-Sik; Oh, Jeong-Eun

    2014-02-01

    Serum samples were collected from volunteers of various ages and both genders using a proportionate stratified sampling method, to assess the exposure of the general population in Busan, South Korea to perfluorinated compounds (PFCs). Sixteen PFCs were investigated in serum samples from 306 adults (124 males and 182 females) and one day composite diet samples (breakfast, lunch, and dinner) from 20 of the serum donors, to investigate the relationship between food and serum PFC concentrations. Perfluorooctanoic acid and perfluorooctanesulfonic acid were the dominant PFCs in the serum samples, with mean concentrations of 8.4 and 13 ng/mL, respectively. Perfluorotridecanoic acid was the dominant PFC in the composite food samples. We confirmed from the relationships between questionnaire results and the PFC concentrations in the serum samples that food is one of the important contributing factors to human exposure to PFCs. However, there were no correlations between the PFC concentrations in the one day composite diet samples and the serum samples, because a one day composite diet sample is not necessarily representative of a person's long-term diet and because of the small number of samples taken. Copyright © 2013 Elsevier B.V. All rights reserved.
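
    The proportionate stratified sampling design named in the title allocates sample units to strata in proportion to stratum size. The sketch below shows only that allocation rule; the stratum labels and population counts are invented for illustration and are not the study's actual age and gender strata.

      def proportional_allocation(total_n: int, stratum_sizes: dict) -> dict:
          """Allocate total_n sample units to strata proportionally to stratum population size."""
          population = sum(stratum_sizes.values())
          return {name: round(total_n * size / population)
                  for name, size in stratum_sizes.items()}

      # Hypothetical example: 306 serum donors spread over four age strata.
      print(proportional_allocation(306, {"20-29": 90_000, "30-39": 110_000,
                                          "40-49": 95_000, "50-59": 70_000}))
      # -> {'20-29': 75, '30-39': 92, '40-49': 80, '50-59': 59}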

  3. Hand Grip Strength: age and gender stratified normative data in a population-based study

    Directory of Open Access Journals (Sweden)

    Taylor Anne W

    2011-04-01

    Full Text Available Abstract Background The North West Adelaide Health Study is a representative longitudinal cohort study of people originally aged 18 years and over. The aim of this study was to describe normative data for hand grip strength in a community-based Australian population. Secondary aims were to investigate the relationship between body mass index (BMI) and hand grip strength, and to compare Australian data with international hand grip strength norms. Methods The sample was randomly selected and recruited by telephone interview. Overall, 3206 (81% of those recruited) participants returned to the clinic during the second stage (2004-2006), which specifically focused on the collection of information relating to musculoskeletal conditions. Results Following the exclusion of 435 participants who had hand pain and/or arthritis, 1366 men and 1312 women provided hand grip strength measurements. The study population was relatively young, with 41.5% under 40 years, and their mean BMI was 28.1 kg/m2 (SD 5.5). Higher hand grip strength was weakly related to higher BMI in adults under the age of 30 and over the age of 70, but inversely related to higher BMI between these ages. Norms from this Australian sample were amongst the lowest of the internationally published hand grip strength norms, except those from underweight populations. Conclusions This population demonstrated higher BMI and lower grip strength in younger participants than much of the internationally published population data. The relationship between BMI and hand grip strength could not be fully explored as there were very few participants with BMI in the underweight range. The age- and gender-stratified grip strength values are lower in younger adults than those reported in the international literature.

  4. Design of dry sand soil stratified sampler

    Science.gov (United States)

    Li, Erkang; Chen, Wei; Feng, Xiao; Liao, Hongbo; Liang, Xiaodong

    2018-04-01

    This paper presents the design of a stratified sampler for dry sandy soil, which can be used for stratified sampling of loose sand under certain conditions. Our group designed the mechanical structure of a portable, single-person, dry sandy soil stratified sampler and set up a mathematical model for the sampler. This work lays the foundation for further development of the design research.

  5. Identification of Anisotropic Criteria for Stratified Soil Based on Triaxial Tests Results

    Science.gov (United States)

    Tankiewicz, Matylda; Kawa, Marek

    2017-09-01

    The paper presents a methodology for identifying anisotropic strength criteria based on triaxial test results. The considered material is varved clay, a sedimentary soil occurring in central Poland that is characterized by the so-called "layered microstructure". Strength was examined by standard triaxial tests; the results include the estimated peak strength obtained for a wide range of orientations and confining pressures. Two models were chosen as potentially adequate for describing the tested material, namely the Pariseau criterion and its combination with the Jaeger weakness-plane criterion. Material constants were obtained by fitting each model to the experimental results. The identification procedure is based on the least squares method: the optimal parameter values are searched for between specified bounds by sequentially decreasing the distance between evaluated points and reducing the length of the searched range. For both considered models the optimal parameters have been obtained. A comparison of theoretical and experimental results, as well as an assessment of the suitability of the selected criteria for the specified range of confining pressures, is presented.
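
    The bounded, range-shrinking least-squares search described above can be sketched generically as follows. The loss function, bounds, number of grid points and shrink factor are illustrative assumptions, not values from the paper.

      import numpy as np

      def shrinking_grid_search(loss, lower, upper, n_points=21, n_rounds=6, shrink=0.5):
          """Minimize a scalar loss(theta) over [lower, upper] by evaluating a grid,
          keeping the best point and shrinking the interval around it each round."""
          lo, hi = float(lower), float(upper)
          best = lo
          for _ in range(n_rounds):
              grid = np.linspace(lo, hi, n_points)
              best = min(grid, key=loss)
              half = shrink * (hi - lo) / 2.0
              lo, hi = max(lower, best - half), min(upper, best + half)
          return best

      # Toy example: a quadratic stands in for the sum of squared residuals between
      # measured peak strengths and the anisotropic criterion's predictions.
      print(shrinking_grid_search(lambda t: (t - 37.2) ** 2, lower=0.0, upper=90.0))  # close to 37.2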

  6. Assessing Performance of Three BIM-Based Views of Buildings for Communication and Management of Vertically Stratified Legal Interests

    Directory of Open Access Journals (Sweden)

    Behnam Atazadeh

    2017-07-01

    Full Text Available Multistorey buildings typically include stratified legal interests which provide entitlements to a community of owners to lawfully possess private properties and use communal and public properties. The spatial arrangements of these legal interests are often defined by multiplexing cognitively outlined spaces and physical elements of a building. In order to support 3D digital management and communication of legal arrangements of properties, a number of spatial data models have been recently developed in the Geographic Information Systems (GIS) and Building Information Modelling (BIM) domains. While some data models, such as CityGML, IndoorGML or IFC, provide a purely physical representation of the built environment, others, e.g., LADM, mainly rely on legal data elements to support a purely legal view of multistorey buildings. More recently, spatial data models integrating legal and physical notions of multistorey buildings have been proposed to overcome issues associated with purely legal models and purely physical ones. In previous investigations, it has been found that the 3D digital data environment of BIM has the flexibility to utilize either only physical elements or only legal spaces, or an integrated view of both legal spaces and physical elements to represent spatial arrangements of stratified legal interests. In this article, the performance of these three distinct BIM-based representations of legal interests defined inside multistorey buildings is assessed in the context of the Victorian jurisdiction of Australia. The assessment metrics are the number of objects and geometry batches, visualization speed in terms of frame rate, query time, modelling of legal boundaries, and visual communication of legal boundaries.

  7. Efficacy of the Smartphone-Based Glucose Management Application Stratified by User Satisfaction

    Directory of Open Access Journals (Sweden)

    Hun-Sung Kim

    2014-06-01

    Full Text Available Background We aimed to assess the efficacy of the smartphone-based health application for glucose control and patient satisfaction with the mobile network system used for glucose self-monitoring. Methods Thirty-five patients were provided with a smartphone device, and self-measured blood glucose data were automatically transferred to the medical staff through the smartphone application over the course of 12 weeks. The smartphone user group was divided into two subgroups (more satisfied group vs. less satisfied group) based on the results of questionnaire surveys regarding satisfaction, comfort, convenience, and functionality, as well as their willingness to use the smartphone application in the future. The control group was set up via a review of electronic medical records by group matching in terms of age, sex, doctor in charge, and glycated hemoglobin (HbA1c). Results Both the smartphone group and the control group showed a tendency towards a decrease in the HbA1c level after 3 months (7.7%±0.7% to 7.5%±0.7%, P=0.077). In the more satisfied group (n=27), the HbA1c level decreased from 7.7%±0.8% to 7.3%±0.6% (P=0.001), whereas in the less satisfied group (n=8), the HbA1c result increased from 7.7%±0.4% to 8.1%±0.5% (P=0.062), showing values much worse than that of the no-smartphone control group (from 7.7%±0.5% to 7.7%±0.7%, P=0.093). Conclusion In addition to medical feedback, device and network-related patient satisfaction play a crucial role in blood glucose management. Therefore, for the smartphone app-based blood glucose monitoring to be effective, it is essential to provide the patient with a well-functioning, high-quality tool capable of increasing patient satisfaction and willingness to use.

  8. Effect of nature of base cation on surface conductivity of H forms of stratified silicates

    International Nuclear Information System (INIS)

    Vasil'ev, N.G.; Ovcharenko, F.D.; Savkin, A.G.

    1976-01-01

    An interpretation is proposed for the curves of conductometric titration of dilute suspensions of natural silicates in hydrogen forms with solutions of alkalies and organic bases. The curves of conductometric titration are presented for suspensions of the H-form of montmorillonite with solutions of alkali metal hydroxides and with Ba(OH)2. A linear decrease in the electroconductivity of the system is observed when the H-mineral is neutralized with LiOH and NaOH solutions. If hydroxides of other metals are added to such a system, the titration curves have an anomalous character, which is especially pronounced when the H-mineral is titrated with RbOH and CsOH solutions. When these solutions are added to the suspension of the H-mineral, an additional amount of highly mobile H+ ions is formed, which increases the electroconductivity of the system. When all the exchange protons in the flat double layer are replaced by Rb+ or Cs+ ions, the electroconductivity decreases, which is related to neutralization of protons in the diffusion part of the layer.

  9. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis.

    Science.gov (United States)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D Gareth; Astley, Sue; Payne, Katherine

    2017-09-01

    To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1, risk 2, masking [supplemental screening for women with higher breast density], and masking and risk 1) compared with the current UK NBSP and no screening. The model assumed a lifetime horizon, the health service perspective to identify costs (£, 2015), and measured consequences in quality-adjusted life-years (QALYs). Multiple data sources were used: systematic reviews of effectiveness and utility, published studies reporting costs, and cohort studies embedded in existing NBSPs. Model parameter uncertainty was assessed using probabilistic sensitivity analysis and one-way sensitivity analysis. The base-case analysis, supported by probabilistic sensitivity analysis, suggested that the risk stratified NBSPs (risk 1 and risk-2) were relatively cost-effective when compared with the current UK NBSP, with incremental cost-effectiveness ratios of £16,689 per QALY and £23,924 per QALY, respectively. Stratified NBSP including masking approaches (supplemental screening for women with higher breast density) was not a cost-effective alternative, with incremental cost-effectiveness ratios of £212,947 per QALY (masking) and £75,254 per QALY (risk 1 and masking). When compared with no screening, all stratified NBSPs could be considered cost-effective. Key drivers of cost-effectiveness were discount rate, natural history model parameters, mammographic sensitivity, and biopsy rates for recalled cases. A key assumption was that the risk model used in the stratification process was perfectly calibrated to the population. This early model-based cost-effectiveness analysis provides indicative evidence for decision makers to understand the key drivers of costs and QALYs for exemplar stratified NBSP. Copyright
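
    For readers unfamiliar with the incremental cost-effectiveness ratios (ICERs) quoted above, the quantity is the difference in expected cost divided by the difference in expected QALYs between a candidate program and its comparator. The numbers in the toy example below are invented and are not outputs of the model.

      def icer(cost_new, qaly_new, cost_old, qaly_old):
          """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
          return (cost_new - cost_old) / (qaly_new - qaly_old)

      # Hypothetical example: a stratified program costing 150 GBP more per woman and
      # yielding 0.009 extra QALYs gives roughly 16,667 GBP/QALY, the same order of
      # magnitude as the risk-1 strategy reported above.
      print(icer(cost_new=1_150, qaly_new=15.009, cost_old=1_000, qaly_old=15.000))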

  10. Undersized description on motile gyrotactic micro-organisms individualities in MHD stratified water-based Newtonian nanofluid

    Science.gov (United States)

    Rehman, Khalil Ur; Malik, Aneeqa Ashfaq; Tahir, M.; Malik, M. Y.

    2018-03-01

    This paper summarizes the influence of the bio-convection Schmidt number, bio-convection Peclet number and micro-organism concentration difference parameter on the density of motile gyrotactic micro-organisms when they interact with a thermally stratified magneto-nanofluid flow past a vertical stretching surface. It is observed that the density of motile micro-organisms is a decreasing function of the bio-convection Schmidt and Peclet numbers. It is expected that the outcomes of the present analysis will serve as a helpful reference for upcoming developments regarding the characteristics of motile gyrotactic micro-organisms in boundary layer flows induced by stretching surfaces.

  11. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    Full Text Available Abstract Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on search for localization of specific compartments within the basic space, and search for presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as size of searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to

  12. The stratified Boycott effect

    Science.gov (United States)

    Peacock, Tom; Blanchette, Francois; Bush, John W. M.

    2005-04-01

    We present the results of an experimental investigation of the flows generated by monodisperse particles settling at low Reynolds number in a stably stratified ambient with an inclined sidewall. In this configuration, upwelling beneath the inclined wall associated with the Boycott effect is opposed by the ambient density stratification. The evolution of the system is determined by the relative magnitudes of the container depth, h, and the neutral buoyancy height, hn = c0(ρp-ρf)/|dρ/dz|, where c0 is the particle concentration, ρp the particle density, ρf the mean fluid density and dρ/dz the ambient density gradient. For weak stratification, h < hn, the Boycott layer transports dense fluid from the bottom to the top of the system; subsequently, the upper clear layer of dense saline fluid is mixed by convection. For sufficiently strong stratification, h > hn, layering occurs. The lowermost layer is created by clear fluid transported from the base to its neutral buoyancy height, and has a vertical extent hn; subsequently, smaller overlying layers develop. Within each layer, convection erodes the initially linear density gradient, generating a step-like density profile throughout the system that persists after all the particles have settled. Particles are transported across the discrete density jumps between layers by plumes of particle-laden fluid.
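
    To make the neutral buoyancy height concrete, the expression above can be evaluated directly. The numbers below are illustrative laboratory-scale values, not the parameters of the experiments reported in the abstract.

      def neutral_buoyancy_height(c0, rho_p, rho_f, drho_dz):
          """h_n = c0 * (rho_p - rho_f) / |d(rho)/dz|, in metres when SI units are used."""
          return c0 * (rho_p - rho_f) / abs(drho_dz)

      # Hypothetical example: 1% particle volume fraction, glass beads in salt-stratified water.
      print(neutral_buoyancy_height(c0=0.01, rho_p=2500.0, rho_f=1020.0, drho_dz=-50.0))
      # -> 0.296 m, so a 0.5 m deep container (h > h_n) would be expected to form layers.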

  13. Evaluation of a Stratified National Breast Screening Program in the United Kingdom : An Early Model-Based Cost-Effectiveness Analysis

    NARCIS (Netherlands)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D. Gareth R.; Astley, Sue; Payne, Katherine

    Objectives: To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. Methods: A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1,

  14. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis

    NARCIS (Netherlands)

    Gray, E.; Donten, A.; Karssemeijer, N.; Gils, C. van; Evans, D.G.; Astley, S.; Payne, K.

    2017-01-01

    OBJECTIVES: To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. METHODS: A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1,

  15. Evaluation of the health status of preschool children stratified based on the weight-length index (WLI).

    Science.gov (United States)

    Shin, Kyung-Ok; Chung, Keun-Hee; Park, Hyun-Suh

    2010-10-01

    This study was conducted to prepare basic materials and offer advice regarding dietary habits to prevent and cure childhood obesity by comparing and analyzing dietary habit, nutritional status, blood factors, and mineral contents of hair. All subjects were stratified by their weight-length index (WLI). According to the standard WLI values, 64.9% of children were within the normal value, 13.5% of children were underweight, and 21.6% of children were overweight and obese (WLI ≥ 110%). Overall, the score assessed dietary habit for all children was 21.32 ± 2.55 point (921 subjects), with 5.1% of children having excellent dietary habits and 3.1% having poor dietary habits. Additionally, 37.9% of underweight children, 37.6% of normal weight children, and 43.2% of overweight and obese children consumed higher amounts of protein than underweight children did (meat, fish, eggs, and soy products) (P < 0.05). Overweight and obese children consumed more fried foods than underweight or normal weight children (P < 0.05). Moreover, 38.0% of the children had hemoglobin levels of 12 g/dl, while 7.6% were anemic (11.1 g/dl). When a hematocrit level of 33% was taken as the standard, 11.0% of children were anemic. The plasma transferrin content was 263.76 ± 54.52 mg/dl in overweight and obese children. The mean values of Fe, Cu, Ca, Cr, Mn, Se, Na, K, Li, V, Co, and Mo were within the reference values, but the Zn concentrations of underweight, normal weight, and overweight and obese children were 67.97 ± 28.51 ppm, 70.09 ± 30.81 ppm, and 73.99 ± 30.36 ppm, respectively. The Zn concentration of overweight and obese children (73.99 ± 30.36 ppm) was lower than that of the standard value (180~220 ppm). Therefore, a nutritional education program and new guidance for dietary pattern should be developed to reduce the number of underweight and overweight and obese children.

  16. Prevalence and Risk Factors of Overweight and Obesity among Children Aged 6–59 Months in Cameroon: A Multistage, Stratified Cluster Sampling Nationwide Survey

    Science.gov (United States)

    Tchoubi, Sébastien; Sobngwi-Tambekou, Joëlle; Noubiap, Jean Jacques N.; Asangbeh, Serra Lem; Nkoum, Benjamin Alexandre; Sobngwi, Eugene

    2015-01-01

    Background Childhood obesity is one of the most serious public health challenges of the 21st century. This study aimed to determine the prevalence of, and risk factors for, overweight and obesity among children aged 6 months to 5 years in Cameroon in 2011. Methods Four thousand five hundred and eighteen children (2205 boys and 2313 girls) aged between 6 to 59 months were sampled in the 2011 Demographic Health Survey (DHS) database. Body Mass Index (BMI) z-scores based on the WHO 2006 reference population were chosen to estimate overweight (BMI z-score > 2) and obesity (BMI z-score > 3). Regression analyses were performed to investigate risk factors of overweight/obesity. Results The prevalence of overweight and obesity was 8% (1.7% for obesity alone). Boys were more affected by overweight than girls, with prevalences of 9.7% and 6.4% respectively. The highest prevalence of overweight was observed in the Grassfield area (including people living in the West and North-West regions) (15.3%). Factors that were independently associated with overweight and obesity included: having an overweight mother (adjusted odds ratio (aOR) = 1.51; 95% CI 1.15 to 1.97) or an obese mother (aOR = 2.19; 95% CI 1.55 to 3.07), compared to having a normal weight mother; high birth weight (aOR = 1.69; 95% CI 1.24 to 2.28) compared to normal birth weight; male gender (aOR = 1.56; 95% CI 1.24 to 1.95); low birth rank (aOR = 1.35; 95% CI 1.06 to 1.72); being aged between 13–24 months (aOR = 1.81; 95% CI 1.21 to 2.66) and 25–36 months (aOR = 2.79; 95% CI 1.93 to 4.13) compared to being aged 45 to 49 months; and living in the Grassfield area (aOR = 2.65; 95% CI 1.87 to 3.79) compared to living in the Forest area. Muslim religion appeared to be a protective factor (aOR = 0.67; 95% CI 0.46 to 0.95) compared to Christian religion. Conclusion This study underlines a high prevalence of early childhood overweight with significant disparities between ecological areas of Cameroon. Risk factors of overweight included high maternal BMI, high

  17. Prevalence and Risk Factors of Overweight and Obesity among Children Aged 6-59 Months in Cameroon: A Multistage, Stratified Cluster Sampling Nationwide Survey.

    Science.gov (United States)

    Tchoubi, Sébastien; Sobngwi-Tambekou, Joëlle; Noubiap, Jean Jacques N; Asangbeh, Serra Lem; Nkoum, Benjamin Alexandre; Sobngwi, Eugene

    2015-01-01

    Childhood obesity is one of the most serious public health challenges of the 21st century. This study aimed to determine the prevalence of, and risk factors for, overweight and obesity among children aged 6 months to 5 years in Cameroon in 2011. Four thousand five hundred and eighteen children (2205 boys and 2313 girls) aged between 6 to 59 months were sampled in the 2011 Demographic Health Survey (DHS) database. Body Mass Index (BMI) z-scores based on the WHO 2006 reference population were chosen to estimate overweight (BMI z-score > 2) and obesity (BMI z-score > 3). Regression analyses were performed to investigate risk factors of overweight/obesity. The prevalence of overweight and obesity was 8% (1.7% for obesity alone). Boys were more affected by overweight than girls, with prevalences of 9.7% and 6.4% respectively. The highest prevalence of overweight was observed in the Grassfield area (including people living in the West and North-West regions) (15.3%). Factors that were independently associated with overweight and obesity included: having an overweight mother (adjusted odds ratio (aOR) = 1.51; 95% CI 1.15 to 1.97) or an obese mother (aOR = 2.19; 95% CI 1.55 to 3.07), compared to having a normal weight mother; high birth weight (aOR = 1.69; 95% CI 1.24 to 2.28) compared to normal birth weight; male gender (aOR = 1.56; 95% CI 1.24 to 1.95); low birth rank (aOR = 1.35; 95% CI 1.06 to 1.72); being aged between 13-24 months (aOR = 1.81; 95% CI 1.21 to 2.66) and 25-36 months (aOR = 2.79; 95% CI 1.93 to 4.13) compared to being aged 45 to 49 months; and living in the Grassfield area (aOR = 2.65; 95% CI 1.87 to 3.79) compared to living in the Forest area. Muslim religion appeared to be a protective factor (aOR = 0.67; 95% CI 0.46 to 0.95) compared to Christian religion. This study underlines a high prevalence of early childhood overweight with significant disparities between ecological areas of Cameroon. Risk factors of overweight included high maternal BMI, high birth weight, male gender, low birth rank

  18. Silicon based ultrafast optical waveform sampling

    DEFF Research Database (Denmark)

    Ji, Hua; Galili, Michael; Pu, Minhao

    2010-01-01

    A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four-wave-mixing-based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to be 360 fs.

  19. Education-stratified base-rate information on discrepancy scores within and between the Wechsler Adult Intelligence Scale--Third Edition and the Wechsler Memory Scale--Third Edition.

    Science.gov (United States)

    Dori, Galit A; Chelune, Gordon J

    2004-06-01

    The Wechsler Adult Intelligence Scale--Third Edition (WAIS-III; D. Wechsler, 1997a) and the Wechsler Memory Scale--Third Edition (WMS-III; D. Wechsler, 1997b) are 2 of the most frequently used measures in psychology and neuropsychology. To facilitate the diagnostic use of these measures in the clinical decision-making process, this article provides information on education-stratified, directional prevalence rates (i.e., base rates) of discrepancy scores between the major index scores for the WAIS-III, the WMS-III, and between the WAIS-III and WMS-III. To illustrate how such base-rate data can be clinically used, this article reviews the relative risk (i.e., odds ratio) of empirically defined "rare" cognitive deficits in 2 of the clinical samples presented in the WAIS-III--WMS-III Technical Manual (The Psychological Corporation, 1997). ((c) 2004 APA, all rights reserved)

  20. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available Recent advances in mobile systems and sensor networks demand more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded systems design and battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the signal's local characteristics (which are usually never considered) to filter only the relevant signal parts, employing filters of the appropriate order. This idea leads to a drastic gain in computational efficiency, and hence a reduction in processing power, compared to the classical techniques.
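
    A minimal sketch of the level-crossing idea, emitting a sample only when the signal crosses one of a set of predefined amplitude levels instead of sampling at a fixed rate, is given below. The signal and levels are made up for illustration, and the sketch ignores the adaptive-rate filtering part of the paper.

      import numpy as np

      def level_crossing_sample(t, x, levels):
          """Return (times, values) at which the signal x(t) crosses any of the given levels."""
          times, values = [], []
          for i in range(1, len(x)):
              for lv in levels:
                  if (x[i - 1] - lv) * (x[i] - lv) < 0:      # sign change around the level
                      frac = (lv - x[i - 1]) / (x[i] - x[i - 1])
                      times.append(t[i - 1] + frac * (t[i] - t[i - 1]))
                      values.append(lv)
          return np.array(times), np.array(values)

      t = np.linspace(0.0, 1.0, 2000)
      x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)          # bursty test signal
      ts, vs = level_crossing_sample(t, x, levels=np.linspace(-1, 1, 9))
      print(len(ts), "level-crossing samples instead of", len(t), "uniform samples")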

  1. Stroke rehabilitation and risk of mortality: a population-based cohort study stratified by age and gender.

    Science.gov (United States)

    Hou, Wen-Hsuan; Ni, Cheng-Hua; Li, Chung-Yi; Tsai, Pei-Shan; Lin, Li-Fong; Shen, Hsiu-Nien

    2015-06-01

    To determine the survival of patients with stroke for up to 10 years after a first-time stroke and to investigate whether stroke rehabilitation within the first 3 months reduced long-term mortality in these patients. We used the medical claims data for a random sample of 1 million insured Taiwanese registered in the year 2000. A total of 7767 patients were admitted for a first-time stroke between 2000 and 2005; 1285 (16.7%) received rehabilitation within the first 3 months after stroke admission. The other 83.3% of patients served as a comparison cohort. A Cox proportional hazards model was used to estimate the relative risk of mortality in relation to the rehabilitation intervention. In all, 181 patients with rehabilitation and 1123 controls died, representing respective mortality rates of 25.0 and 32.7 per 1000 person-years. Rehabilitation was significantly associated with a lower risk of mortality (hazard ratio .68, 95% confidence interval .58-.79). Such a beneficial effect tended to be more obvious as the frequency of rehabilitation increased (P for trend). Stroke rehabilitation initiated in the first 3 months after a stroke admission may significantly reduce the risk of mortality for 10 years after the stroke. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  2. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000).

  3. Stratified Medicine and Reimbursement Issues

    Directory of Open Access Journals (Sweden)

    Hans-Joerg eFugel

    2012-10-01

    Full Text Available Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to strengthen the value proposition to pricing and reimbursement (P&R) authorities. However, the introduction of SM challenges current reimbursement schemes in many EU countries and the US, as different P&R policies have been adopted for drugs and diagnostics. Also, there is a lack of a consistent process for value assessment of more complex diagnostics in these markets. New, innovative approaches and more flexible P&R systems are needed to reflect the added value of diagnostic tests and to stimulate investments in new technologies. Yet the framework for access to diagnostic-based therapies still requires further development, setting the right incentives and appropriately aligning stakeholder interests while realizing long-term patient benefits. This article addresses the reimbursement challenges of SM approaches in several EU countries and the US, outlining some options to overcome existing reimbursement barriers for stratified medicine.

  4. Effectiveness of a two-step population-based osteoporosis screening program using FRAX: the randomized Risk-stratified Osteoporosis Strategy Evaluation (ROSE) study.

    Science.gov (United States)

    Rubin, K H; Rothmann, M J; Holmberg, T; Høiberg, M; Möller, S; Barkmann, R; Glüer, C C; Hermann, A P; Bech, M; Gram, J; Brixen, K

    2018-03-01

    The Risk-stratified Osteoporosis Strategy Evaluation (ROSE) study investigated the effectiveness of a two-step screening program for osteoporosis in women. We found no overall reduction in fractures from systematic screening compared to the current case-finding strategy. The group of moderate- to high-risk women, who accepted the invitation to DXA, seemed to benefit from the program. The purpose of the ROSE study was to investigate the effectiveness of a two-step population-based osteoporosis screening program using the Fracture Risk Assessment Tool (FRAX) derived from a self-administered questionnaire to select women for DXA scan. After the scanning, standard osteoporosis management according to Danish national guidelines was followed. Participants were randomized to either screening or control group, and randomization was stratified according to age and area of residence. Inclusion took place from February 2010 to November 2011. Participants received a self-administered questionnaire, and women in the screening group with a FRAX score ≥ 15% (major osteoporotic fractures) were invited to a DXA scan. Primary outcome was incident clinical fractures. Intention-to-treat analysis and two per-protocol analyses were performed. A total of 3416 fractures were observed during a median follow-up of 5 years. No significant differences were found in the intention-to-treat analyses with 34,229 women included aged 65-80 years. The per-protocol analyses showed a risk reduction in the group that underwent DXA scanning compared to women in the control group with a FRAX ≥ 15%, in regard to major osteoporotic fractures, hip fractures, and all fractures. The risk reduction was most pronounced for hip fractures (adjusted SHR 0.741, p = 0.007). Compared to an office-based case-finding strategy, the two-step systematic screening strategy had no overall effect on fracture incidence. The two-step strategy seemed, however, to be beneficial in the group of women who were

  5. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
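
    The multiple importance sampling combination referred to above is commonly implemented with the balance heuristic, which weights each strategy's samples by how likely every strategy was to generate them. The sketch below shows that generic weighting rule, not the authors' shader code; the sample counts and pdf values are invented.

      def balance_heuristic(n_and_pdf, which):
          """Balance-heuristic weight for a sample drawn by strategy `which`,
          given (sample_count, pdf_value) pairs for every strategy at that sample."""
          num = n_and_pdf[which][0] * n_and_pdf[which][1]
          den = sum(n * p for n, p in n_and_pdf)
          return num / den

      # Hypothetical example: 8 stratified samples (pdf 0.9) combined with 4 importance
      # samples (pdf 2.5) evaluated at the same point; the two weights sum to 1.
      print(balance_heuristic([(8, 0.9), (4, 2.5)], which=0),
            balance_heuristic([(8, 0.9), (4, 2.5)], which=1))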

  6. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data is available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks

  7. Nitrogen transformations in stratified aquatic microbial ecosystems

    DEFF Research Database (Denmark)

    Revsbech, N. P.; Risgaard-Petersen, N.; Schramm, A.

    2006-01-01

    Abstract  New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights about how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about n...

  8. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...
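
    The weighting idea described above can be sketched as a Horvitz-Thompson-style estimator in which each respondent is weighted by the inverse of an approximated selection probability. Below, the selection probability is assumed proportional to the respondent's number of in-links; this is one simple approximation for illustration, not necessarily the one proposed in the paper.

      def weighted_mean(values, in_links):
          """Inverse-probability-weighted mean, assuming selection probability
          proportional to each respondent's number of in-links."""
          weights = [1.0 / k for k in in_links]        # down-weight well-connected respondents
          return sum(w * v for w, v in zip(weights, values)) / sum(weights)

      # Hypothetical snowball sample: a measurement on 5 respondents with 1-8 in-links each.
      print(weighted_mean(values=[2.0, 3.5, 4.0, 1.0, 5.0], in_links=[1, 2, 8, 1, 4]))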

  9. Electromagnetic waves in stratified media

    CERN Document Server

    Wait, James R; Fock, V A; Wait, J R

    2013-01-01

    International Series of Monographs in Electromagnetic Waves, Volume 3: Electromagnetic Waves in Stratified Media provides information pertinent to electromagnetic waves in media whose properties differ in one particular direction. This book discusses the important feature of the waves that enables communications at global distances. Organized into 13 chapters, this volume begins with an overview of the general analysis for the electromagnetic response of a plane stratified medium comprising any number of parallel homogeneous layers. This text then explains the reflection of electromagne

  10. National-scale vegetation change across Britain; an analysis of sample-based surveillance data from the Countryside Surveys of 1990 and 1998

    NARCIS (Netherlands)

    Smart, S.M.; Clarke, R.T.; Poll, van de H.M.; Robertson, E.J.; Shield, E.R.; Bunce, R.G.H.; Maskell, L.C.

    2003-01-01

    Patterns of vegetation across Great Britain (GB) between 1990 and 1998 were quantified based on an analysis of plant species data from a total of 9596 fixed plots. Plots were established on a stratified random basis within 501 1-km sample squares located as part of the Countryside Survey of GB.

  11. Stratified medicine and reimbursement issues

    NARCIS (Netherlands)

    Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten

    2012-01-01

    Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to

  12. Nitrogen transformations in stratified aquatic microbial ecosystems

    DEFF Research Database (Denmark)

    Revsbech, Niels Peter; Risgaard-Petersen, N.; Schramm, Andreas

    2006-01-01

    Abstract New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights about how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about nitrogen fixation, nitrification, denitrification, and dissimilatory reduction of nitrate to ammonium, and about the microorganisms performing the processes, has been produced by use of these techniques. During the last decade the discovery of anammox bacteria and migrating, nitrate-accumulating bacteria performing dissimilatory reduction of nitrate to ammonium have given new dimensions to the understanding of nitrogen cycling in nature, and the occurrence of these organisms and processes in stratified microbial communities will be described in detail.

  13. Comparison of sampling strategies for object-based classification of urban vegetation from Very High Resolution satellite images

    Science.gov (United States)

    Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas

    2016-09-01

    Vegetation monitoring is becoming a major issue in the urban environment due to the services it provides, and it necessitates an accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they necessitate a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratified sampling is proposed. The efficiency of these strategies is evaluated on two medium-sized French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods, and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
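
    One common window-based active learning criterion, not necessarily the exact strategies compared in the paper, selects at each iteration the unlabeled windows on which the current classifier is most uncertain. A minimal sketch of that selection step:

      import numpy as np

      def select_most_uncertain(probabilities, n_select):
          """Pick the n_select unlabeled windows whose predicted class probabilities
          have the highest entropy (i.e. the classifier is least confident)."""
          entropy = -np.sum(probabilities * np.log(probabilities + 1e-12), axis=1)
          return np.argsort(entropy)[-n_select:]

      # probabilities: (n_windows, n_classes) array from the current classifier; the
      # returned indices are the windows to label next, several per iteration as in
      # the multi-window selection described above.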

  14. Template-Based Sampling of Anisotropic BRDFs

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Vávra, Radomír

    2014-01-01

    Roč. 33, č. 7 (2014), s. 91-99 ISSN 0167-7055. [Pacific Graphics 2014. Seoul, 08.10.2014-10.10.2014] R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S; GA ČR GAP103/11/0335 Institutional support: RVO:67985556 Keywords: BRDF database * material appearance * sampling * measurement Subject RIV: BD - Theory of Information Impact factor: 1.642, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/filip-0432894.pdf

  15. The Stratified Legitimacy of Abortions.

    Science.gov (United States)

    Kimport, Katrina; Weitz, Tracy A; Freedman, Lori

    2016-12-01

    Roe v. Wade was heralded as an end to unequal access to abortion care in the United States. However, today, despite being common and safe, abortion is performed only selectively in hospitals and private practices. Drawing on 61 interviews with obstetrician-gynecologists in these settings, we examine how they determine which abortions to perform. We find that they distinguish between more and less legitimate abortions, producing a narrative of stratified legitimacy that privileges abortions for intended pregnancies, when the fetus is unhealthy, and when women perform normative gendered sexuality, including distress about the abortion, guilt about failure to contracept, and desire for motherhood. This stratified legitimacy can perpetuate socially-inflected inequality of access and normative gendered sexuality. Additionally, we argue that the practice by physicians of distinguishing among abortions can legitimate legislative practices that regulate and restrict some kinds of abortion, further constraining abortion access. © American Sociological Association 2016.

  16. RADIAL STABILITY IN STRATIFIED STARS

    International Nuclear Information System (INIS)

    Pereira, Jonas P.; Rueda, Jorge A.

    2015-01-01

    We formulate within a generalized distributional approach the treatment of the stability against radial perturbations for both neutral and charged stratified stars in Newtonian and Einstein's gravity. We obtain from this approach the boundary conditions connecting any two phases within a star and underline its relevance for realistic models of compact stars with phase transitions, owing to the modification of the star's set of eigenmodes with respect to the continuous case

  17. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  18. Sample classroom activities based on climate science

    Science.gov (United States)

    Miler, T.

    2009-09-01

    We present several activities developed for middle school education based on climate science. The first activity was designed to teach about ocean acidification. A simple experiment can demonstrate that absorption of CO2 in water increases its acidity; a liquid pH indicator is suitable for the demonstration in a classroom. The second activity uses data containing coordinates of a hurricane position. Pupils draw the path of a hurricane eye in a tracking chart (a map of the Atlantic ocean), calculate the average speed of the hurricane, and investigate its direction and intensity development. The third activity uses pictures of the Arctic ocean in September, when ice extent is usually the lowest. Students measure the ice extent for several years using a square grid printed on a plastic foil, then plot a graph and discuss the results. All these activities can be used to improve natural science education and increase climate change literacy.

  19. Representativeness-based sampling network design for the State of Alaska

    Science.gov (United States)

    Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove

    2013-01-01

    Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...

  20. Free Falling in Stratified Fluids

    Science.gov (United States)

    Lam, Try; Vincent, Lionel; Kanso, Eva

    2017-11-01

    Leaves falling in air and discs falling in water are examples of unsteady descents due to complex interaction between gravitational and aerodynamic forces. Understanding these descent modes is relevant to many branches of engineering and science, from estimating the behavior of re-entry space vehicles to studying the biomechanics of seed dispersion. For regularly shaped objects falling in homogeneous fluids, the motion is relatively well understood. However, less is known about how density stratification of the fluid medium affects the falling behavior. Here, we experimentally investigate the descent of discs in both pure water and in stable linearly stratified fluids for Froude numbers Fr 1 and Reynolds numbers Re between 1000-2000. We found that stable stratification (1) enhances the radial dispersion of the disc at landing, (2) increases the descent time, (3) decreases the inclination (or nutation) angle, and (4) decreases the fluttering amplitude while falling. We conclude by commenting on how the corresponding information can be used as a predictive model for objects free falling in stratified fluids.

  1. Assessment of fracture risk: value of random population-based samples--the Geelong Osteoporosis Study.

    Science.gov (United States)

    Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A

    2001-01-01

    Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
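
    The T-score discussed above is conventionally the number of reference-population standard deviations separating an individual's BMD from the reference mean, which is why swapping the reference range shifts it. A minimal sketch with invented reference values:

      def t_score(bmd, reference_mean, reference_sd):
          """T-score: standard deviations between a measured BMD and the reference mean BMD."""
          return (bmd - reference_mean) / reference_sd

      # Hypothetical example: the same measured BMD scored against two different reference ranges.
      print(t_score(0.85, reference_mean=1.00, reference_sd=0.10))   # -1.5
      print(t_score(0.85, reference_mean=0.95, reference_sd=0.12))   # about -0.83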

  2. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, the algebra graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing heterogeneous multi-agent systems to asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
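
    The paper's protocol is specific to heterogeneous (mixed first- and second-order) agents with a small sampling delay; as a rough illustration of the general idea of sampled-data consensus only, the sketch below runs a plain first-order protocol on an assumed four-agent ring, with neighbour states exchanged once per sampling period.

    ```python
    import numpy as np

    # Undirected ring of four agents; adjacency matrix (assumed topology).
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    T = 0.1                                  # sampling period (stable for T < 2 / lambda_max)

    x = np.array([1.0, -2.0, 0.5, 3.0])      # initial agent states
    for _ in range(200):
        # Each agent updates using neighbour states sampled at the last sampling instant.
        x = x - T * (L @ x)

    print("States after 200 sampled updates:", np.round(x, 4))  # all approach the initial average
    ```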

  3. Information content of household-stratified epidemics

    Directory of Open Access Journals (Sweden)

    T.M. Kinyanjui

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs.

  4. Information content of household-stratified epidemics.

    Science.gov (United States)

    Kinyanjui, T M; Pellis, L; House, T

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
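
    The information criterion used in this design search is the Shannon entropy of the posterior: a lower-entropy (tighter) posterior means a more informative design. A minimal sketch of a histogram-based entropy estimate applied to posterior samples from two hypothetical designs (the samples, binning and parameter are illustrative, not the authors' computation):

    ```python
    import numpy as np

    def shannon_entropy(samples, bins=30):
        """Histogram-based differential entropy estimate (nats) of a 1-D posterior sample."""
        counts, edges = np.histogram(samples, bins=bins)
        widths = np.diff(edges)
        p = counts / counts.sum()
        nonzero = p > 0
        # Differential entropy estimate: -sum p_i * log(p_i / width_i)
        return -np.sum(p[nonzero] * np.log(p[nonzero] / widths[nonzero]))

    rng = np.random.default_rng(1)
    # Hypothetical posterior samples of a transmission-rate parameter under two study designs.
    posterior_design_a = rng.normal(0.30, 0.05, size=5000)   # tighter posterior
    posterior_design_b = rng.normal(0.30, 0.12, size=5000)   # more diffuse posterior

    for name, post in [("design A", posterior_design_a), ("design B", posterior_design_b)]:
        print(f"{name}: entropy ~ {shannon_entropy(post):.3f} nats")
    ```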

  5. Risk of community-acquired pneumonia in chronic obstructive pulmonary disease stratified by smoking status: a population-based cohort study in the United Kingdom

    Directory of Open Access Journals (Sweden)

    Braeken DCW

    2017-08-01

    Dionne CW Braeken,1–3 Gernot GU Rohde,2 Frits ME Franssen,1,2 Johanna HM Driessen,3–5 Tjeerd P van Staa,3,6 Patrick C Souverein,3 Emiel FM Wouters,1,2 Frank de Vries3,4,7 1Department of Research and Education, CIRO, Horn, 2Department of Respiratory Medicine, Maastricht University Medical Centre (MUMC+), Maastricht, 3Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute of Pharmaceutical Sciences, Utrecht, 4Department of Clinical Pharmacy and Toxicology, Maastricht University Medical Centre (MUMC+), Maastricht, 5Department of Epidemiology, Care and Public Health Research Institute (CAPHRI), Maastricht, the Netherlands; 6Department of Health eResearch, University of Manchester, Manchester, 7MRC Lifecourse Epidemiology Unit, Southampton General Hospital, Southampton, UK. Background: Smoking increases the risk of community-acquired pneumonia (CAP) and is associated with the development of COPD. Until now, it is unclear whether CAP in COPD is due to smoking-related effects, or due to COPD pathophysiology itself. Objective: To evaluate the association between COPD and CAP by smoking status. Methods: In total, 62,621 COPD and 191,654 control subjects, matched by year of birth, gender and primary care practice, were extracted from the Clinical Practice Research Datalink (2005–2014). Incidence rates (IRs) were estimated by dividing the total number of CAP cases by the cumulative person-time at risk. Time-varying Cox proportional hazard models were used to estimate the hazard ratios (HRs) for CAP in COPD patients versus controls. HRs of CAP by smoking status were calculated by stratified analyses in COPD patients versus controls and within both subgroups with never smoking as reference. Results: IRs of CAP in COPD patients (32.00/1,000 person-years) and controls (6.75/1,000 person-years) increased with age and female gender. The risk of CAP in COPD patients was higher than in controls (HR 4.51, 95% CI: 4.27–4.77). Current smoking

  6. A Story-Based Simulation for Teaching Sampling Distributions

    Science.gov (United States)

    Turner, Stephen; Dabney, Alan R.

    2015-01-01

    Statistical inference relies heavily on the concept of sampling distributions. However, sampling distributions are difficult to teach. We present a series of short animations that are story-based, with associated assessments. We hope that our contribution can be useful as a tool to teach sampling distributions in the introductory statistics…
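
    The idea the animations teach can also be demonstrated directly in code: draw many samples from a population, compute the statistic for each, and look at the spread of those statistics. A minimal sketch (the population, sample size and replicate count are arbitrary):

    ```python
    import random
    import statistics

    random.seed(42)
    population = [random.expovariate(1 / 50) for _ in range(100_000)]  # skewed population
    n = 30             # sample size
    replicates = 2000  # number of repeated samples

    sample_means = [statistics.fmean(random.sample(population, n)) for _ in range(replicates)]

    print("Population mean:        ", round(statistics.fmean(population), 2))
    print("Mean of sample means:   ", round(statistics.fmean(sample_means), 2))
    print("SD of sample means (SE):", round(statistics.stdev(sample_means), 2))
    print("Theory sigma/sqrt(n):   ", round(statistics.stdev(population) / n ** 0.5, 2))
    ```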

  7. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set
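
    For the simple-random-sampling baseline mentioned above, the X̄ chart places its control limits at the process mean plus or minus three standard errors of the subgroup mean. A minimal sketch with simulated in-control data (the ranked-set variants discussed in the article are not shown):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n = 10.0, 2.0, 5          # process mean, SD, subgroup size

    ucl = mu + 3 * sigma / np.sqrt(n)    # upper control limit
    lcl = mu - 3 * sigma / np.sqrt(n)    # lower control limit

    subgroups = rng.normal(mu, sigma, size=(25, n))   # 25 in-control subgroups
    xbar = subgroups.mean(axis=1)
    signals = np.flatnonzero((xbar > ucl) | (xbar < lcl))

    print(f"LCL = {lcl:.2f}, UCL = {ucl:.2f}")
    print("Out-of-control subgroups:", signals.tolist())
    ```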

  8. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
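
    One of the kernels examined above, sampling the photon path length from the exponential distribution, is commonly implemented by inverse-transform sampling: s = -ln(u)/Sigma_t for a uniform random number u and total macroscopic cross section Sigma_t. A minimal sketch (the cross-section value is arbitrary, and this is a generic illustration rather than the code evaluated in the study):

    ```python
    import math
    import random

    def sample_path_length(sigma_t, rng=random):
        """Inverse-transform sample of a free path length for total cross section sigma_t (1/cm)."""
        # 1 - random() lies in (0, 1], so the logarithm is always finite.
        return -math.log(1.0 - rng.random()) / sigma_t

    random.seed(7)
    sigma_t = 0.35  # hypothetical total macroscopic cross section, 1/cm
    paths = [sample_path_length(sigma_t) for _ in range(100_000)]

    mean_path = sum(paths) / len(paths)
    print(f"Sample mean free path: {mean_path:.3f} cm (theory: {1 / sigma_t:.3f} cm)")
    ```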

  9. Suppression of stratified explosive interactions

    Energy Technology Data Exchange (ETDEWEB)

    Meeks, M.K.; Shamoun, B.I.; Bonazza, R.; Corradini, M.L. [Wisconsin Univ., Madison, WI (United States). Dept. of Nuclear Engineering and Engineering Physics

    1998-01-01

    Stratified Fuel-Coolant Interaction (FCI) experiments with Refrigerant-134a and water were performed in a large-scale system. Air was uniformly injected into the coolant pool to establish a pre-existing void which could suppress the explosion. Two competing effects due to the variation of the air flow rate seem to influence the intensity of the explosion in this geometrical configuration. At low flow rates, although the injected air increases the void fraction, the concurrent agitation and mixing increases the intensity of the interaction. At higher flow rates, the increase in void fraction tends to attenuate the propagated pressure wave generated by the explosion. Experimental results show a complete suppression of the vapor explosion at high rates of air injection, corresponding to an average void fraction of larger than 30%. (author)

  10. PHOTOSPHERIC EMISSION FROM STRATIFIED JETS

    International Nuclear Information System (INIS)

    Ito, Hirotaka; Nagataki, Shigehiro; Ono, Masaomi; Lee, Shiu-Hang; Mao, Jirong; Yamada, Shoichi; Pe'er, Asaf; Mizuta, Akira; Harikae, Seiji

    2013-01-01

    We explore photospheric emissions from stratified two-component jets, wherein a highly relativistic spine outflow is surrounded by a wider and less relativistic sheath outflow. Thermal photons are injected in regions of high optical depth and propagated until the photons escape at the photosphere. Because of the presence of shear in velocity (Lorentz factor) at the boundary of the spine and sheath region, a fraction of the injected photons are accelerated using a Fermi-like acceleration mechanism such that a high-energy power-law tail is formed in the resultant spectrum. We show, in particular, that if a velocity shear with a considerable variance in the bulk Lorentz factor is present, the high-energy part of observed gamma-ray bursts (GRBs) photon spectrum can be explained by this photon acceleration mechanism. We also show that the accelerated photons might also account for the origin of the extra-hard power-law component above the bump of the thermal-like peak seen in some peculiar bursts (e.g., GRB 090510, 090902B, 090926A). We demonstrate that time-integrated spectra can also reproduce the low-energy spectrum of GRBs consistently using a multi-temperature effect when time evolution of the outflow is considered. Last, we show that the empirical E_p-L_p relation can be explained by differences in the outflow properties of individual sources.

  11. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  12. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory; Amato, Nancy M.

    2012-01-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5

  13. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening insignificant random variables and ranking significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from hypothesis testing, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables.
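
    The paper's own measures are defined formally, but the flavour of sampling-based screening can be illustrated with a simple CDF-based statistic in the same spirit: split the random sample on the median of each input and test whether the two conditional output distributions differ. The sketch below uses a two-sample Kolmogorov-Smirnov test on a hypothetical three-variable model; it is an illustration, not the authors' measure.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(3)
    n = 4000

    # Hypothetical model: y depends strongly on x1, weakly on x2, and not at all on x3.
    x = rng.normal(size=(n, 3))
    y = 3.0 * x[:, 0] + 0.4 * x[:, 1] + rng.normal(scale=0.5, size=n)

    for j in range(3):
        below = y[x[:, j] <= np.median(x[:, j])]
        above = y[x[:, j] > np.median(x[:, j])]
        res = ks_2samp(below, above)          # distance between conditional output CDFs
        verdict = "keep" if res.pvalue < 0.05 else "screen out"
        print(f"x{j + 1}: KS distance = {res.statistic:.3f}, p = {res.pvalue:.3g} -> {verdict}")
    ```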

  14. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented

  15. Improving the operational forecasting system of the stratified flow in Osaka Bay using an ensemble Kalman filter–based steady state Kalman filter

    NARCIS (Netherlands)

    El Serafy, G.Y.H.; Mynett, A.E.

    2008-01-01

    Numerical models of a water system are always based on assumptions and simplifications that may result in errors in the model's predictions. Such errors can be reduced through the use of data assimilation and thus can significantly improve the success rate of the predictions and operational

  16. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on an Hepatitis C virus dataset from Egypt. We show that the transmission times estimates are significantly different-the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  17. Alpha Matting with KL-Divergence Based Sparse Sampling.

    Science.gov (United States)

    Karacan, Levent; Erdem, Aykut; Erdem, Erkut

    2017-06-22

    In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.
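
    As a generic illustration of a KL-divergence-based dissimilarity between two samples (not the paper's exact feature set), one can fit a Gaussian to the features collected in each sample's neighbourhood and evaluate the symmetrised KL divergence in closed form. The patch features below are invented:

    ```python
    import numpy as np

    def gaussian_kl(mu0, cov0, mu1, cov1):
        """KL divergence KL(N0 || N1) between two multivariate Gaussians."""
        k = mu0.size
        inv1 = np.linalg.inv(cov1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    def sample_dissimilarity(feats_a, feats_b):
        """Symmetrised KL between Gaussians fitted to features around two samples."""
        mu_a, cov_a = feats_a.mean(axis=0), np.cov(feats_a, rowvar=False)
        mu_b, cov_b = feats_b.mean(axis=0), np.cov(feats_b, rowvar=False)
        return gaussian_kl(mu_a, cov_a, mu_b, cov_b) + gaussian_kl(mu_b, cov_b, mu_a, cov_a)

    rng = np.random.default_rng(5)
    # Hypothetical RGB features from small neighbourhoods around two candidate samples.
    patch_a = rng.normal([0.2, 0.3, 0.6], 0.05, size=(49, 3))
    patch_b = rng.normal([0.7, 0.5, 0.2], 0.05, size=(49, 3))
    print(f"Symmetrised KL dissimilarity: {sample_dissimilarity(patch_a, patch_b):.2f}")
    ```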

  18. The development of heterogeneous materials based on Ni and B4C powders using a cold spray and stratified selective laser melting technologies

    Science.gov (United States)

    Filippov, A. A.; Fomin, V. M.; Buzyurkin, A. E.; Kosarev, V. F.; Malikov, A. G.; Orishich, A. M.; Ryashin, N. S.

    2018-01-01

    The work is dedicated to the creation of new ceramic-composite materials based on boron carbide and nickel, using laser welding with the aim of subsequently obtaining three-dimensional objects. The promising route suggested by the authors combines two methods: cold spray technology and subsequent laser post-treatment. At this stage, the authors focused on the interaction of the laser with the material, independently of the development of multi-layer objects. The material investigated in this work was a metal-ceramic mixture based on boron carbide, which has high physical and mechanical characteristics, such as hardness, elastic modulus, and chemical resistance. Nickel powder was used as a binder, together with different types of boron carbide. The ceramic content varied from 30 to 70% by mass. Thin ceramic layers were obtained by the combined method, and cross-sections of different seams were studied. It was shown that the most promising layers for additive manufacturing could be obtained from cold spray coatings with ceramic concentrations of more than 50% by weight, treated with a defocused laser beam (thermal-conductive laser mode).

  19. Depression and anxiety symptoms are associated with white blood cell count and red cell distribution width: A sex-stratified analysis in a population-based study.

    Science.gov (United States)

    Shafiee, Mojtaba; Tayefi, Maryam; Hassanian, Seyed Mahdi; Ghaneifar, Zahra; Parizadeh, Mohammad Reza; Avan, Amir; Rahmani, Farzad; Khorasanchi, Zahra; Azarpajouh, Mahmoud Reza; Safarian, Hamideh; Moohebati, Mohsen; Heidari-Bakavoli, Alireza; Esmaeili, Habibolah; Nematy, Mohsen; Safarian, Mohammad; Ebrahimi, Mahmoud; Ferns, Gordon A; Mokhber, Naghmeh; Ghayour-Mobarhan, Majid

    2017-10-01

    Depression and anxiety are two common mood disorders that are both linked to systemic inflammation. Increased white blood cell (WBC) count and red cell distribution width (RDW) are associated with negative clinical outcomes in a wide variety of pathological conditions. WBC is a non-specific inflammatory marker and RDW is also strongly related to other inflammatory markers. Therefore, we proposed that there might be an association between these hematological inflammatory markers and depression/anxiety symptoms. The primary objective of this study was to examine the association between depression/anxiety symptoms and hematological inflammatory markers including WBC and RDW in a large population-based study. Symptoms of depression and anxiety and a complete blood count (CBC) were measured in 9274 participants (40% males and 60% females) aged 35-65 years, enrolled in a population-based cohort (MASHAD) study in north-eastern Iran. Symptoms of depression and anxiety were evaluated using the Beck Depression and Anxiety Inventories. The mean WBC count increased with increasing severity of symptoms of depression and anxiety among men. Male participants with severe depression had significantly higher values of RDW, and participants with severe anxiety symptoms also showed significantly higher RDW values. Our results suggest that higher depression and anxiety scores are associated with an enhanced inflammatory state, as assessed by higher hematological inflammatory markers including WBC and RDW, even after adjusting for potential confounders. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Epidemiology of Functional abdominal bloating and its impact on health related quality of life: male-female stratified propensity score analysis in a population based survey in mainland China.

    Science.gov (United States)

    Wu, Meijing; Zhao, Yanfang; Wang, Rui; Zheng, Wenxin; Guo, Xiaojing; Wu, Shunquan; Ma, Xiuqiang; He, Jia

    2014-01-01

    The epidemiology of Functional abdominal bloating (FAB) and its impact on health-related quality of life (HRQoL) in Chinese people remains unclear. Randomised, stratified, multi-stage sampling methodology was used to select a representative sample of the general population from five cities in China (n = 16,078). All respondents completed the modified Rome II questionnaire; 20% were asked to complete the 36-item Short Form (SF-36). The associated factors of FAB were analyzed. The effects of FAB on HRQoL were estimated with gender stratification using propensity score techniques in the 20% subsample. Overall, 643 individuals (4.00%) had FAB and it was more prevalent in males than in females (4.87% vs. 3.04%, P<0.001). Concerning HRQoL, FAB was found to be related to two domains, role limitation due to physical problems (P = 0.030) and bodily pain (P<0.001), in females, whereas in males there were significant differences in multiple domains between those with and without FAB. The prevalence of FAB in China was lower than previous reports. Males who had ever been diagnosed with dyspepsia and females who were in a poor self-reported health status were correlated with a higher prevalence of FAB. FAB affected only physical health in females, but impaired both physical and mental health in males.

  1. The first report of Japanese antimicrobial use measured by national database based on health insurance claims data (2011-2013): comparison with sales data, and trend analysis stratified by antimicrobial category and age group.

    Science.gov (United States)

    Yamasaki, Daisuke; Tanabe, Masaki; Muraki, Yuichi; Kato, Genta; Ohmagari, Norio; Yagi, Tetsuya

    2018-04-01

    Our objective was to evaluate the utility of the national database (NDB) based on health insurance claims data for antimicrobial use (AMU) surveillance in medical institutions in Japan. The population-weighted total AMU expressed as defined daily doses (DDDs) per 1000 inhabitants per day (DID) was measured by the NDB. The data were compared with our previous study, which measured AMU from sales data. Trend analysis of DID from 2011 to 2013 and subgroup analysis stratified by antimicrobial category and age group were performed. There was a significant linear correlation between the AMUs measured by the sales data and the NDB. Total oral AMU (expressed in DID) increased 1.04-fold, from 12.654 in 2011 to 13.202 in 2013, and total parenteral AMU increased 1.13-fold, from 0.734 to 0.829. The percentage of the oral form among total AMU was high, at more than 94%, during the study period. AMU in the children group (0-14 years) decreased from 2011 to 2013 regardless of dosage form, although the working age group (15-64 years) and elderly group (65 years and above) increased. Oral AMU in the working age group was approximately two-thirds of that in the other age groups. In contrast, parenteral AMU in the elderly group was extremely high compared to the other age groups. The trends of AMU stratified by antimicrobial category and age group were successfully measured using the NDB, which can be a tool to monitor outcome indices for the national action plan on antimicrobial resistance.
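
    The DID metric used throughout this surveillance work is the number of defined daily doses consumed per 1,000 inhabitants per day. A minimal sketch of the conversion; the drug list, DDD values and population below are invented for illustration (official DDDs come from the WHO ATC/DDD index):

    ```python
    # Consumption records: (antimicrobial, grams dispensed in the year, DDD in grams).
    # All figures below are hypothetical.
    records = [
        ("amoxicillin",     1_200_000, 1.5),
        ("clarithromycin",    300_000, 0.5),
        ("levofloxacin",      150_000, 0.5),
    ]
    population = 1_000_000   # covered inhabitants (hypothetical)
    days = 365

    total_ddd = sum(grams / ddd for _, grams, ddd in records)
    did = total_ddd / (population * days) * 1000
    print(f"Antimicrobial use: {did:.3f} DDDs per 1000 inhabitants per day")
    ```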

  2. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. Estimation of 75 pcm uncertainty on reactor k_eff was accomplished by using a sample of size 93 and computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)

  3. Classification of archaeologically stratified pumice by INAA

    International Nuclear Information System (INIS)

    Peltz, C.; Bichler, M.

    2001-01-01

    In the framework of the research program 'Synchronization of Civilization in the Eastern Mediterranean Region in the 2nd Millennium B.C.', instrumental neutron activation analysis (INAA) was used to determine 30 elements in pumice from archaeological excavations to reveal their specific volcanic origin. The widespread pumiceous products of several eruptions in the Aegean region were used as abrasive tools and were therefore popular trade objects. A remarkable quantity of pumice and pumiceous tephra (several km³) was produced by the 'Minoan eruption' of Thera (Santorini), which is assumed to have happened between 1450 and 1650 B.C. Thus the discovery of the primary fallout of 'Minoan' tephra in archaeologically stratified locations can be used as a relative time mark. Additionally, pumice lumps used as abrasives can serve for dating by first appearance. Essential to an identification of the primary volcanic source is the knowledge that pumices from the Aegean region can easily be distinguished by their trace element distribution patterns, as previous work has shown. The elements Al, Ba, Ca, Ce, Co, Cr, Cs, Dy, Eu, Fe, Hf, K, La, Lu, Mn, Na, Nd, Rb, Sb, Sc, Sm, Ta, Tb, Th, Ti, U, V, Yb, Zn and Zr were determined in 16 samples of pumice lumps from excavations in Tell-el-Dab'a and Tell-el-Herr (Egypt). Two irradiation cycles and five measurement runs were applied. A reliable identification of the samples is achieved by comparing these results to the database compiled in previous studies. (author)

  4. Stratified charge rotary engine combustion studies

    Science.gov (United States)

    Shock, H.; Hamady, F.; Somerton, C.; Stuecken, T.; Chouinard, E.; Rachal, T.; Kosterman, J.; Lambeth, M.; Olbrich, C.

    1989-07-01

    Analytical and experimental studies of the combustion process in a stratified charge rotary engine (SCRE) have been the subject of active research in recent years. Specifically, to meet the demand for more sophisticated products, a detailed understanding of the engine system of interest is warranted. With this in mind, the objective of this work is to develop an understanding of the controlling factors that affect the SCRE combustion process so that an efficient, power-dense rotary engine can be designed. The influence of the induction-exhaust systems and the rotor geometry is believed to have a significant effect on combustion chamber flow characteristics. In this report, emphasis is centered on Laser Doppler Velocimetry (LDV) measurements and on qualitative flow visualizations in the combustion chamber of the motored rotary engine assembly. This will provide a basic understanding of the flow process in the RCE and serve as a data base for verification of numerical simulations. Understanding fuel injection provisions is also important to the successful operation of the stratified charge rotary engine. Toward this end, flow visualizations depicting the development of high-speed, high-pressure fuel jets are described. Friction is an important consideration in an engine from the standpoint of lost work, durability and reliability. MSU Engine Research Laboratory efforts in assessing the frictional losses associated with the rotary engine are described. This includes work which describes losses in bearing, seal and auxiliary components. Finally, a computer controlled mapping system under development is described. This system can be used to map shapes such as combustion chambers, intake manifolds or turbine blades accurately.

  5. Soft magnetic properties of bulk amorphous Co-based samples

    International Nuclear Information System (INIS)

    Fuezer, J.; Bednarcik, J.; Kollar, P.

    2006-01-01

    Ball milling of melt-spun ribbons and subsequent compaction of the resulting powders in the supercooled liquid region were used to prepare disc-shaped bulk amorphous Co-based samples. Several bulk samples were prepared by hot compaction with subsequent heat treatment (500-575 °C). The influence of the consolidation temperature and follow-up heat treatment on the magnetic properties of the bulk samples was investigated. The final heat treatment leads to a decrease of the coercivity to values between 7.5 and 9 A/m. (Authors)

  6. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with the traditional test system based on a QDC, TDC and scaler, a test system based on waveform sampling is constructed for signal sampling of the 8-inch R5912 and the 20-inch R12860 Hamamatsu PMTs in different energy states from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, the data acquisition software based on LabVIEW is developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rise time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter and waveform sampling are discussed in a detailed comparison.
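
    The offline analysis reduces each sampled waveform to a few quantities: integrated charge, amplitude, signal width and rise time. The sketch below shows those reductions on a synthetic pulse in Python rather than LabVIEW; the sampling rate, load impedance and pulse shape are illustrative, not the hardware described above.

    ```python
    import numpy as np

    fs = 1e9                      # 1 GS/s sampling rate (illustrative)
    t = np.arange(0, 200e-9, 1 / fs)
    pulse = -0.05 * np.exp(-((t - 80e-9) / 8e-9) ** 2)           # synthetic negative pulse, volts
    waveform = pulse + np.random.default_rng(0).normal(0, 0.001, t.size)

    baseline = waveform[:30].mean()
    signal = baseline - waveform                                  # positive-going signal
    amplitude = signal.max()
    charge = signal.sum() / fs / 50.0                             # integral over a 50-ohm load, coulombs

    above10, above90 = signal > 0.1 * amplitude, signal > 0.9 * amplitude
    rise_time = (np.argmax(above90) - np.argmax(above10)) / fs    # 10%-90% rise time, seconds
    width = above10.sum() / fs                                    # time spent above the 10% threshold

    print(f"amplitude = {amplitude * 1e3:.1f} mV, charge = {charge * 1e12:.2f} pC, "
          f"rise = {rise_time * 1e9:.1f} ns, width = {width * 1e9:.1f} ns")
    ```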

  7. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
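
    Sample-size figures of this kind follow the standard two-arm formula n = 2·sigma²·(z_(1-alpha/2) + z_(1-beta))² / delta², where delta is the treatment-induced reduction in the mean rate of change. A minimal sketch (the atrophy-rate mean and SD are placeholders, not the ADNI estimates):

    ```python
    from statistics import NormalDist

    def n_per_arm(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
        """Subjects per arm to detect `reduction` x mean_change with a two-sided z-test."""
        z = NormalDist()
        delta = reduction * mean_change
        return 2 * (sd_change * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / delta) ** 2

    # Hypothetical 24-month atrophy rates (percent brain change): mean 2.0, SD 1.2.
    print(f"n per arm ~ {n_per_arm(2.0, 1.2):.0f}")
    ```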

  8. Improved mesh based photon sampling techniques for neutron activation analysis

    International Nuclear Information System (INIS)

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied. (authors)
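
    The alias method referenced above trades O(n) table construction for O(1) sampling: each draw needs one uniform index and one biased coin flip against the precomputed table. A minimal sketch of Vose's formulation (the example weights are arbitrary):

    ```python
    import random

    def build_alias_table(probs):
        """Vose's alias method: O(n) setup for O(1) sampling from a discrete distribution."""
        n = len(probs)
        scaled = [p * n for p in probs]
        prob, alias = [0.0] * n, [0] * n
        small = [i for i, p in enumerate(scaled) if p < 1.0]
        large = [i for i, p in enumerate(scaled) if p >= 1.0]
        while small and large:
            s, l = small.pop(), large.pop()
            prob[s], alias[s] = scaled[s], l
            scaled[l] -= 1.0 - scaled[s]
            (small if scaled[l] < 1.0 else large).append(l)
        for i in small + large:        # leftovers equal 1 up to rounding error
            prob[i] = 1.0
        return prob, alias

    def alias_sample(prob, alias, rng=random):
        i = rng.randrange(len(prob))                        # one uniform index
        return i if rng.random() < prob[i] else alias[i]    # one biased coin flip

    weights = [0.1, 0.2, 0.3, 0.4]
    prob, alias = build_alias_table(weights)
    random.seed(0)
    counts = [0, 0, 0, 0]
    for _ in range(100_000):
        counts[alias_sample(prob, alias)] += 1
    print([c / 100_000 for c in counts])   # should approximate the weights
    ```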

  9. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.

  10. Community-based survey versus sentinel site sampling in ...

    African Journals Online (AJOL)

    rural children. Implications for nutritional surveillance and the development of nutritional programmes. G. C. Solarsh, D. M. Sanders, C. A. Gibson, E. Gouws. A study of the anthropometric status of under-5-year-olds was conducted in the Nqutu district of Kwazulu by means of a representative community-based sample and.

  11. A sampling-based approach to probabilistic pursuit evasion

    KAUST Repository

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
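
    As a rough illustration of the basic PRM construction itself (not the pursuit-evasion augmentation studied here), the sketch below samples free configurations in a 2-D world with a single circular obstacle, connects nearby samples when the straight segment between them is collision-free, and searches the resulting graph; the obstacle, radii and connection threshold are arbitrary.

    ```python
    import math
    import random
    from heapq import heappush, heappop

    random.seed(2)
    OBSTACLE, RADIUS = (0.5, 0.5), 0.2          # circular obstacle (assumed environment)

    def free(p):
        return math.dist(p, OBSTACLE) > RADIUS

    def edge_free(p, q, steps=20):
        return all(free((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
                   for t in (i / steps for i in range(steps + 1)))

    # Roadmap construction: sample free configurations, connect close, collision-free pairs.
    nodes = [(0.05, 0.05), (0.95, 0.95)]        # start and goal
    nodes += [p for p in ((random.random(), random.random()) for _ in range(150)) if free(p)]
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if d < 0.2 and edge_free(nodes[i], nodes[j]):
                edges[i].append((j, d))
                edges[j].append((i, d))

    # Dijkstra search from node 0 (start) to node 1 (goal).
    dist, pq, seen = {0: 0.0}, [(0.0, 0)], set()
    while pq:
        d, u = heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == 1:
            break
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heappush(pq, (d + w, v))

    print("Path found," if 1 in dist else "No path,",
          "length ~", round(dist.get(1, float("nan")), 3))
    ```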

  12. Stability of Miscible Displacements Across Stratified Porous Media

    Energy Technology Data Exchange (ETDEWEB)

    Shariati, Maryam; Yortsos, Yanis C.

    2000-09-11

    This report studied macro-scale heterogeneity effects. Reflecting their importance, current simulation practices for flow and displacement in porous media are invariably based on heterogeneous permeability fields. The focus here was on a specific aspect of such problems, namely the stability of miscible displacements in stratified porous media, where the displacement is perpendicular to the direction of stratification.

  13. Reliability of impingement sampling designs: An example from the Indian Point station

    International Nuclear Information System (INIS)

    Mattson, M.T.; Waxman, J.B.; Watson, D.A.

    1988-01-01

    A 4-year data base (1976-1979) of daily fish impingement counts at the Indian Point electric power station on the Hudson River was used to compare the precision and reliability of three random-sampling designs: (1) simple random, (2) seasonally stratified, and (3) empirically stratified. The precision of daily impingement estimates improved logarithmically for each design as more days in the year were sampled. Simple random sampling was the least, and empirically stratified sampling was the most precise design, and the difference in precision between the two stratified designs was small. Computer-simulated sampling was used to estimate the reliability of the two stratified-random-sampling designs. A seasonally stratified sampling design was selected as the most appropriate reduced-sampling program for Indian Point station because: (1) reasonably precise and reliable impingement estimates were obtained using this design for all species combined and for eight common Hudson River fish by sampling only 30% of the days in a year (110 d); and (2) seasonal strata may be more precise and reliable than empirical strata if future changes in annual impingement patterns occur. The seasonally stratified design applied to the 1976-1983 Indian Point impingement data showed that selection of sampling dates based on daily species-specific impingement variability gave results that were more precise, but not more consistently reliable, than sampling allocations based on the variability of all fish species combined. 14 refs., 1 fig., 6 tabs
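
    The design comparison described above can be reproduced in miniature: simulate a year of daily counts with a strong seasonal peak, then compare the standard error of the estimated annual total under simple random sampling of days against seasonally stratified sampling with proportional allocation. The seasonal pattern and sample sizes below are invented for illustration, not the Indian Point data.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    days = np.arange(365)
    # Hypothetical daily impingement counts: a strong winter peak plus Poisson noise.
    expected = 40 + 300 * np.exp(-((days - 20) / 45.0) ** 2)
    counts = rng.poisson(expected)

    n, seasons = 110, np.array_split(days, 4)     # sample ~30% of days; 4 seasonal strata

    def estimate_total(sampled_days, weight):
        return weight * counts[sampled_days].sum()

    srs_totals, strat_totals = [], []
    for _ in range(2000):
        srs = rng.choice(days, size=n, replace=False)
        srs_totals.append(estimate_total(srs, 365 / n))
        total = 0.0
        for stratum in seasons:
            m = round(n * len(stratum) / 365)                 # proportional allocation
            picked = rng.choice(stratum, size=m, replace=False)
            total += estimate_total(picked, len(stratum) / m)
        strat_totals.append(total)

    print("True annual total:     ", counts.sum())
    print("SRS        SE of total:", round(np.std(srs_totals), 1))
    print("Stratified SE of total:", round(np.std(strat_totals), 1))
    ```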

  14. Epidemiology of Functional abdominal bloating and its impact on health related quality of life: male-female stratified propensity score analysis in a population based survey in mainland China.

    Directory of Open Access Journals (Sweden)

    Meijing Wu

    BACKGROUND: The epidemiology of Functional abdominal bloating (FAB) and its impact on health-related quality of life (HRQoL) in Chinese people remains unclear. METHODS: Randomised, stratified, multi-stage sampling methodology was used to select a representative sample of the general population from five cities in China (n = 16,078). All respondents completed the modified Rome II questionnaire; 20% were asked to complete the 36-item Short Form (SF-36). The associated factors of FAB were analyzed. The effects of FAB on HRQoL were estimated with gender stratification using propensity score techniques in the 20% subsample. RESULTS: Overall, 643 individuals (4.00%) had FAB and it was more prevalent in males than in females (4.87% vs. 3.04%, P<0.001). For males, self-reported history of dyspepsia was most strongly associated with FAB (OR = 2.78; 95% CI: 1.59, 4.72). However, the most strongly associated factor was self-reported health status for females (moderate health vs. good health: OR = 2.06, 95% CI: 1.07, 3.96, P = 0.030; poor health vs. good health: OR = 5.71, 95% CI: 2.06, 15.09). Concerning HRQoL, FAB was found to be related to two domains, role limitation due to physical problems (P = 0.030) and bodily pain (P<0.001), in females, while in males there were significant differences in multiple domains between those with and without FAB. CONCLUSION: The prevalence of FAB in China was lower than previous reports. Males who had ever been diagnosed with dyspepsia and females who were in a poor self-reported health status were correlated with a higher prevalence of FAB. FAB affected only physical health in females, but impaired both physical and mental health in males.

  15. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  16. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small subset of vertices with high node degree can carry most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...

  17. Race-Ethnicity, Poverty, Urban Stressors, and Telomere Length in a Detroit Community-based Sample.

    Science.gov (United States)

    Geronimus, Arline T; Pearson, Jay A; Linnenbringer, Erin; Schulz, Amy J; Reyes, Angela G; Epel, Elissa S; Lin, Jue; Blackburn, Elizabeth H

    2015-06-01

    Residents of distressed urban areas suffer early aging-related disease and excess mortality. Using a community-based participatory research approach in a collaboration between social researchers and cellular biologists, we collected a unique data set of 239 black, white, or Mexican adults from a stratified, multistage probability sample of three Detroit neighborhoods. We drew venous blood and measured telomere length (TL), an indicator of stress-mediated biological aging, linking respondents' TL to their community survey responses. We regressed TL on socioeconomic, psychosocial, neighborhood, and behavioral stressors, hypothesizing and finding an interaction between poverty and racial-ethnic group. Poor whites had shorter TL than nonpoor whites; poor and nonpoor blacks had equivalent TL; and poor Mexicans had longer TL than nonpoor Mexicans. Findings suggest unobserved heterogeneity bias is an important threat to the validity of estimates of TL differences by race-ethnicity. They point to health impacts of social identity as contingent, the products of structurally rooted biopsychosocial processes. © American Sociological Association 2015.

  18. Research on test of product based on spatial sampling criteria and variable step sampling mechanism

    Science.gov (United States)

    Li, Ruihong; Han, Yueping

    2014-09-01

    This paper presents an effective approach for online testing of the assembly structures inside products, using a multiple-views technique and an X-ray digital radiography system based on spatial sampling criteria and a variable-step sampling mechanism. Although several objects inside one product may need to be tested, for each object there is a maximal rotary step within which the smallest structural feature to be tested remains resolvable. In the offline learning process, the object is rotated by this step and imaged repeatedly until a complete cycle is finished, yielding an image sequence that contains the full structural information needed for recognition. The maximal rotary step is restricted by the smallest structural size and the inherent resolution of the imaging system. During the online inspection process, the program first finds the optimum solutions for all the different target parts in the standard sequence, i.e., their exact angles within one cycle. Because most features of the other targets in the product are larger than the smallest structure, the paper adopts a variable step-size sampling mechanism that rotates the product through different angular steps according to the different objects inside it and then performs matching. Experimental results show that the variable step-size method can greatly save time compared with the traditional fixed-step inspection method while the recognition accuracy is guaranteed.

  19. Prototypic Features of Loneliness in a Stratified Sample of Adolescents

    DEFF Research Database (Denmark)

    Lasgaard, Mathias; Elklit, Ask

    2009-01-01

    Dominant theoretical approaches in loneliness research emphasize the value of personality characteristics in explaining loneliness. The present study examines whether dysfunctional social strategies and attributions in lonely adolescents can be explained by personality characteristics...... guidance and intervention. Thus, professionals need to be knowledgeable about prototypic features of loneliness in addition to employing a pro-active approach when assisting adolescents who display prototypic features....

  20. Improved patient selection by stratified surgical intervention

    DEFF Research Database (Denmark)

    Wang, Miao; Bünger, Cody E; Li, Haisheng

    2015-01-01

    BACKGROUND CONTEXT: Choosing the best surgical treatment for patients with spinal metastases remains a significant challenge for spine surgeons. There is currently no gold standard for surgical treatments. The Aarhus Spinal Metastases Algorithm (ASMA) was established to help surgeons choose...... the most appropriate surgical intervention for patients with spinal metastases. PURPOSE: The purpose of this study was to evaluate the clinical outcome of stratified surgical interventions based on the ASMA, which combines life expectancy and the anatomical classification of patients with spinal metastases...... survival times in the five surgical groups determined by the ASMA were 2.1 (TS 0-4, TC 1-7), 5.1 (TS 5-8, TC 1-7), 12.1 (TS 9-11, TC 1-7 or TS 12-15, TC 7), 26.0 (TS 12-15, TC 4-6), and 36.0 (TS 12-15, TC 1-3) months. The 30-day mortality rate was 7.5%. Postoperative neurological function was maintained...

  1. Turbulent fluxes in stably stratified boundary layers

    International Nuclear Information System (INIS)

    L'vov, Victor S; Procaccia, Itamar; Rudenko, Oleksii

    2008-01-01

    We present here an extended version of an invited talk we gave at the international conference 'Turbulent Mixing and Beyond'. The dynamical and statistical description of stably stratified turbulent boundary layers is addressed, with the important example of the stable atmospheric boundary layer in mind. Traditional approaches to this problem, based on the profiles of mean quantities, velocity second-order correlations and dimensional estimates of the turbulent thermal flux, run into a well-known difficulty, predicting the suppression of turbulence at a small critical value of the Richardson number, in contradiction to observations. Phenomenological attempts to overcome this problem suffer from various theoretical inconsistencies. Here, we present an approach taking into full account all the second-order statistics, which allows us to respect the conservation of total mechanical energy. The analysis culminates in an analytic solution of the profiles of all mean quantities and all second-order correlations, removing the unphysical predictions of previous theories. We propose that the approach taken here is sufficient to describe the lower parts of the atmospheric boundary layer, as long as the Richardson number does not exceed an order of unity. For much higher Richardson numbers, the physics may change qualitatively, requiring careful consideration of the potential Kelvin-Helmholtz waves and their interaction with the vortical turbulence.

  2. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally-cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
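
    A minimal illustration of the surrogate idea, assuming a cheap 1-D toy "forward model" and a plain squared-exponential Gaussian Process written with NumPy (not the authors' implementation): the GP is trained on a handful of "expensive" forward evaluations and is then queried, instead of the forward model, inside a brute-force posterior scan.

    ```python
    import numpy as np

    def forward(m):                      # stand-in for an expensive forward model
        return np.sin(m) + 0.5 * m

    def gp_fit_predict(X, y, Xs, length=0.3, noise=1e-6):
        """Squared-exponential GP regression: posterior mean at test points Xs."""
        k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
        K = k(X, X) + noise * np.eye(len(X))
        return k(Xs, X) @ np.linalg.solve(K, y)

    # "Expensive" training evaluations of the forward model.
    X_train = np.linspace(-2, 2, 15)
    y_train = forward(X_train)

    # Observed datum; the likelihood is evaluated with the cheap surrogate only.
    d_obs, sigma = forward(0.7) + 0.05, 0.1
    m_grid = np.linspace(-2, 2, 2001)
    d_pred = gp_fit_predict(X_train, y_train, m_grid)
    log_post = -0.5 * ((d_obs - d_pred) / sigma) ** 2       # flat prior assumed

    best = m_grid[np.argmax(log_post)]
    print(f"Surrogate-based MAP estimate: m ~ {best:.3f} (true m = 0.7; offset reflects data noise)")
    ```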

  3. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, a lot of discriminative methods have been proposed to deal with this problem. For the discriminative methods, it is difficult to select the representative samples for the target template updating. In general, the holistic bounding boxes that contain tracked results are selected as the positive samples. However, when the objects are occluded, this simple strategy easily introduces the noises into the training data set and the target template and then leads the tracker to drift away from the target seriously. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least square method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on the 13 challenging sequences show that the proposed method is robust to the occlusion and achieves promising results.

  4. The RBANS Effort Index: base rates in geriatric samples.

    Science.gov (United States)

    Duff, Kevin; Spering, Cynthia C; O'Bryant, Sid E; Beglinger, Leigh J; Moser, David J; Bayless, John D; Culp, Kennith R; Mold, James W; Adams, Russell L; Scott, James G

    2011-01-01

    The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.

  5. Large eddy simulation of turbulent and stably-stratified flows

    International Nuclear Information System (INIS)

    Fallon, Benoit

    1994-01-01

    The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulations with a structure-function subgrid model, both in isothermal and stably-stratified configurations. Without stratification, the flow develops highly-distorted Kelvin-Helmholtz billows, undergoing helical pairing, with A-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more bi-dimensional. We show how, with increasing stratification, the shear layer growth is frozen by inhibition of the pairing process and then of the Kelvin-Helmholtz instabilities, and by the development of gravity waves or stable density interfaces. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation is demonstrated for industrial flows, by studying a complex stratified cavity. Temperature fluctuations are compared to experimental measurements. We also develop three-dimensional unsteady animations, in order to understand and visualize turbulent interactions. (author) [fr

  6. Ultrasonic-based membrane aided sample preparation of urine proteomes.

    Science.gov (United States)

    Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L

    2018-02-01

    A new ultrafast ultrasonic-based method for shotgun proteomics as well as label-free protein quantification in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes, and the proteins are then digested in-membrane using trypsin. The enzymatic digestion process is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine of healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. A total of 232 and 226 proteins are identified in the urine of males and females, respectively. Of these, 162 are common to both genders, whilst 70 are unique to males and 64 to females. Of the 162 common proteins, 13 are present at levels statistically different (p minimalism concept as outlined by Halls, as each stage of this analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Grain distinct stratified nanolayers in aluminium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Donatus, U., E-mail: uyimedonatus@yahoo.com [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Thompson, G.E.; Zhou, X.; Alias, J. [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Tsai, I.-L. [Oxford Instruments NanoAnalysis, HP12 2SE, High Wycombe (United Kingdom)

    2017-02-15

    The grains of aluminium alloys have stratified nanolayers which determine their mechanical and chemical responses. In this study, the nanolayers were revealed in the grains of AA6082 (T6 and T7 conditions), AA5083-O and AA2024-T3 alloys by etching the alloys in a solution comprising 20 g Cr{sub 2}O{sub 3} + 30 ml HPO{sub 3} in 1 L H{sub 2}O. Microstructural examination was conducted on selected grains of interest using scanning electron microscopy and the electron backscatter diffraction technique. It was observed that the nanolayers are orientation dependent and are parallel to the {100} planes. They have ordered and repeated tunnel squares that are flawed at the sides which are aligned in the <100> directions. These flawed tunnel squares dictate the tunnelling corrosion morphology as well as appearing to have an effect on the arrangement and sizes of the precipitation hardening particles. The inclination of the stratified nanolayers, their interspacing, and the groove sizes have a significant influence on the corrosion behaviour and a seeming influence on the strengthening mechanism of the investigated aluminium alloys. - Highlights: • Stratified nanolayers in aluminium alloy grains. • Relationship of the stratified nanolayers with grain orientation. • Influence of the inclinations of the stratified nanolayers on corrosion. • Influence of the nanolayers interspacing and groove sizes on hardness and corrosion.

  8. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model based estimation for finite population totals assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model, in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model assisted local polynomial regression model.

  9. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and the unification of classification, regression, and so forth. Though bearing such merits, existing ELM algorithms cannot efficiently handle the issue of missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms achieve more accurate and stable learning than other state-of-the-art ones, especially in the case of higher missing ratio.
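
    The record above concerns a sample-based ELM variant for missing data; the abstract does not spell out its update rules, but the baseline it extends is the standard extreme learning machine: random, fixed hidden-layer weights followed by a least-squares solve for the output weights. A minimal NumPy sketch of that baseline (plain ELM regression, not the authors' missing-data framework; the dimensions and toy data are assumptions):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Train a basic extreme learning machine regressor.

    Hidden-layer weights are drawn at random and never updated; only the
    output weights are fitted, via a pseudo-inverse (least-squares) solve.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a noisy 1-D function.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(1).normal(size=200)
W, b, beta = elm_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))   # small mean-squared error
```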

  10. Stratified charge rotary engine for general aviation

    Science.gov (United States)

    Mount, R. E.; Parente, A. M.; Hady, W. F.

    1986-01-01

    A development history, a current development status assessment, and a design feature and performance capabilities account are given for stratified-charge rotary engines applicable to aircraft propulsion. Such engines are capable of operating on Jet-A fuel with substantial cost savings, improved altitude capability, and lower fuel consumption by comparison with gas turbine powerplants. Attention is given to the current development program of a 400-hp engine scheduled for initial operations in early 1990. Stratified charge rotary engines are also applicable to ground power units, airborne APUs, shipboard generators, and vehicular engines.

  11. CdTe detector based PIXE mapping of geological samples

    Energy Technology Data Exchange (ETDEWEB)

    Chaves, P.C., E-mail: cchaves@ctn.ist.utl.pt [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Taborda, A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Oliveira, D.P.S. de [Laboratório Nacional de Energia e Geologia (LNEG), Apartado 7586, 2611-901 Alfragide (Portugal); Reis, M.A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal)

    2014-01-01

    A sample collected from a borehole drilled approximately 10 km ESE of Bragança, Trás-os-Montes, was analysed by standard and high energy PIXE at both CTN (previously ITN) PIXE setups. The sample is a fine-grained metapyroxenite grading to coarse-grained at the base, with disseminated sulphides and fine veinlets of pyrrhotite and pyrite. Matrix composition was obtained at the standard PIXE setup using a 1.25 MeV H{sup +} beam at three different spots. Medium and high Z elemental concentrations were then determined using the DT2fit and DT2simul codes (Reis et al., 2008, 2013 [1,2]), on the spectra obtained in the High Resolution and High Energy (HRHE)-PIXE setup (Chaves et al., 2013 [3]) by irradiation of the sample with a 3.8 MeV proton beam provided by the CTN 3 MV Tandetron accelerator. In this paper we present results, discuss detection limits of the method and the added value of the use of the CdTe detector in this context.

  12. Chemometric classification of casework arson samples based on gasoline content.

    Science.gov (United States)

    Sinkov, Nikolai A; Sandercock, P Mark L; Harynuk, James J

    2014-02-01

    Detection and identification of ignitable liquids (ILs) in arson debris is a critical part of arson investigations. The challenge of this task is due to the complex and unpredictable chemical nature of arson debris, which also contains pyrolysis products from the fire. ILs, most commonly gasoline, are complex chemical mixtures containing hundreds of compounds that will be consumed or otherwise weathered by the fire to varying extents depending on factors such as temperature, air flow, the surface on which the IL was placed, etc. While methods such as ASTM E-1618 are effective, data interpretation can be a costly bottleneck in the analytical process for some laboratories. In this study, we address this issue through the application of chemometric tools. Prior to the application of chemometric tools such as PLS-DA and SIMCA, issues of chromatographic alignment and variable selection need to be addressed. Here we use an alignment strategy based on a ladder consisting of perdeuterated n-alkanes. Variable selection and model optimization were automated using a hybrid backward elimination (BE) and forward selection (FS) approach guided by the cluster resolution (CR) metric. In this work, we demonstrate the automated construction, optimization, and application of chemometric tools to casework arson data. The resulting PLS-DA and SIMCA classification models, trained with 165 training set samples, provided classification of 55 validation set samples based on gasoline content with 100% specificity and sensitivity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Modelling of vapour explosion in stratified geometry

    International Nuclear Information System (INIS)

    Picchi, St.

    1999-01-01

    When a hot liquid comes into contact with a colder volatile liquid, an explosive vaporization, termed a vapour explosion, can occur under some conditions, and its consequences on neighbouring structures can be important. This explosion requires intimate mixing and fine fragmentation of the two liquids. In a stratified vapour explosion, these two liquids are initially superposed and separated by a vapour film. Triggering the explosion can induce a propagation along this film. A study of experimental results and existing models led to the following main points being retained: - the explosion propagation is due to a pressure wave propagating through the medium; - the mixing is due to the development of Kelvin-Helmholtz instabilities induced by the shear velocity between the two liquids behind the pressure wave. The presence of vapour in the volatile liquid explains the experimental propagation velocity and the velocity difference between the two fluids at the pressure-wave crossing. A first model was proposed by Brayer in 1994 to describe the fragmentation and mixing of the two fluids, but its results do not show explosion propagation. We have therefore built a new mixing-fragmentation model based on the atomization phenomenon that develops during the pressure-wave crossing. We have also taken into account the transient character of the heat transfer between fuel drops and the volatile liquid, and elaborated a model of transient heat transfer. These two models have been introduced into the multi-component thermal-hydraulic code MC3D. Calculation results show qualitative and quantitative agreement with experimental results and confirm the basic choices of the model. (author)

  14. Study on exchange flow under the unstably stratified field

    OpenAIRE

    文沢, 元雄

    2005-01-01

    This paper deals with the exchange flow under the unstably stratified field. The author developed an effective measurement system as well as a numerical analysis program. The system and the program are applied to the helium-air exchange flow in a rectangular channel with inclination. The following main features of the exchange flow were discussed based on the calculated results: (1) the time required for establishing a quasi-steady-state exchange flow; (2) the relationship between the inclination an...

  15. China's community-based strategy of universal preconception care in rural areas at a population level using a novel risk classification system for stratifying couples' preconception health status.

    Science.gov (United States)

    Zhou, Qiongjie; Zhang, Shikun; Wang, Qiaomei; Shen, Haiping; Tian, Weidong; Chen, Jingqi; Acharya, Ganesh; Li, Xiaotian

    2016-12-28

    Preconception care (PCC) is recommended for optimizing a woman's health prior to pregnancy to minimize the risk of adverse pregnancy and birth outcomes. We aimed to evaluate the impact of the strategy and a novel risk classification model of China's "National Preconception Health Care Project" (NPHCP) in identifying risk factors and stratifying couples' preconception health status. We performed a secondary analysis of data collected by NPHCP during April 2010 to December 2012 in 220 selected counties in China. All couples enrolled in the project accepted free preconception health examination, risk evaluation, health education and medical advice. Risk factors were categorized into five preconception risk classes based on their amenability to prevention and treatment: A-avoidable risk factors, B-benefiting from targeted medical intervention, C-controllable but requiring close monitoring and treatment during pregnancy, D-diagnosable prenatally but not modifiable preconceptionally, X-pregnancy not advisable. Information on each couple's socio-demographic and health status was recorded and further analyzed. Among the 2,142,849 couples who were enrolled to this study, the majority (92.36%) were from rural areas with low education levels (89.2% women and 88.3% men had education below university level). A total of 1,463,266 (68.29%) couples had one or more preconception risk factors mainly of category A, B and C, among which 46.25% were women and 51.92% were men. Category A risk factors were more common among men compared with women (38.13% versus 11.24%; P = 0.000). This project provided new insights into preconception health of Chinese couples of reproductive age. More than half of the male partners planning to father a child were exposed to risk factors during the preconception period, suggesting that an integrated approach to PCC including both women and men is justified. Stratification based on the new risk classification model demonstrated that a majority of the

  16. PSA-stratified detection rates for [68Ga]THP-PSMA, a novel probe for rapid kit-based 68Ga-labeling and PET imaging, in patients with biochemical recurrence after primary therapy for prostate cancer.

    Science.gov (United States)

    Derlin, Thorsten; Schmuck, Sebastian; Juhl, Cathleen; Zörgiebel, Johanna; Schneefeld, Sophie M; Walte, Almut C A; Hueper, Katja; von Klot, Christoph A; Henkenberens, Christoph; Christiansen, Hans; Thackeray, James T; Ross, Tobias L; Bengel, Frank M

    2018-06-01

    [68Ga]Tris(hydroxypyridinone)(THP)-PSMA is a novel radiopharmaceutical for one-step kit-based radiolabelling, based on direct chelation of 68Ga3+ at low concentration, room temperature and over a wide pH range, using direct elution from a 68Ge/68Ga generator. We evaluated the clinical detection rates of [68Ga]THP-PSMA PET/CT in patients with biochemically recurrent prostate cancer after prostatectomy. Consecutive patients (n=99) referred for evaluation of biochemical relapse of prostate cancer by [68Ga]THP-PSMA PET/CT were analyzed retrospectively. Patients underwent a standard whole-body PET/CT (1 h p.i.), followed by delayed (3 h p.i.) imaging of the abdomen. PSA-stratified cohorts of positive PET/CT results, standardized uptake values (SUVs) and target-to-background ratios (TBRs) were analyzed, and compared between standard and delayed imaging. At least one lesion suggestive of recurrent or metastatic prostate cancer was identified on PET images in 52 patients (52.5%). Detection rates of [68Ga]THP-PSMA PET/CT increased with increasing PSA level: 94.1% for a PSA value of ≥10 ng/mL, 77.3% for a PSA value of 2 to PSA value of 1 to PSA value of 0.5 to PSA value of >0.2 to PSA value of 0.01 to 0.2 ng/mL. [68Ga]THP-PSMA uptake (SUVs) in metastases decreased over time, whereas TBRs improved. Delayed imaging at 3 h p.i. exclusively identified pathologic findings in 2% of [68Ga]THP-PSMA PET/CT scans. Detection rate was higher in patients with a Gleason score ≥8 (P=0.02) and in patients receiving androgen deprivation therapy (P=0.003). In this study, [68Ga]THP-PSMA PET/CT showed suitable detection rates in patients with biochemical recurrence of prostate cancer and PSA levels ≥2 ng/mL. Detection rates were lower than in previous studies evaluating other PSMA ligands, though prospective direct radiotracer comparison studies are mandatory, particularly in patients with low PSA levels, to evaluate the relative performance of different PSMA ligands.

  17. Solution-based targeted genomic enrichment for precious DNA samples

    Directory of Open Access Journals (Sweden)

    Shearer Aiden

    2012-05-01

    Full Text Available Abstract Background Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

  18. Preview-based sampling for controlling gaseous simulations

    KAUST Repository

    Huang, Ruoguan

    2011-01-01

    In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.

  19. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  20. Soil classification basing on the spectral characteristics of topsoil samples

    Science.gov (United States)

    Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle

    2016-04-01

    Soil taxonomy plays an important role in soil utility and management, but China has only a coarse soil map created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study tries to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and their spectral reflectance in the visible and near-infrared region (400-2500 nm), measured in the laboratory, was processed with a weighted moving average, a resampling technique, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the position of the second absorption feature, the area of the first absorption valley, and the slope of the spectral curve at 500-600 nm and 1340-1360 nm. Then K-means clustering and a decision tree were used respectively to build the soil classification model. The results indicated that 1) the second absorption positions of Black soil and Chernozem were located at 610 nm and 650 nm respectively; 2) the spectral curve of the Meadow soil is similar to that of its adjacent soil, which could be due to soil erosion; 3) the decision tree model showed higher classification accuracy, with accuracies for Black soil, Chernozem, Blown soil and Meadow soil of 100%, 88%, 97% and 50% respectively, and the accuracy of Blown soil could be increased to 100% by adding one more spectral index (the area of the first two absorption valleys) to the model, which shows that the model could be used for soil classification and soil mapping in the near future.
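
    As a rough illustration of the decision-tree step described in this record (not the authors' model: the index values, class labels and data below are synthetic assumptions), a classifier can be trained directly on a small table of spectral indices, e.g. with scikit-learn:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the extracted spectral indices: one row per topsoil
# sample, with columns such as the second-absorption position, the first-valley
# area and two curve slopes; labels stand for the four soil classes. Because the
# labels here are random, the score only illustrates the workflow, not real skill.
rng = np.random.default_rng(0)
n = 148
X = np.column_stack([
    rng.normal(620, 25, n),       # second absorption position (nm)
    rng.normal(0.08, 0.02, n),    # first absorption valley area
    rng.normal(0.002, 5e-4, n),   # slope at 500-600 nm
    rng.normal(0.001, 3e-4, n),   # slope at 1340-1360 nm
])
y = rng.integers(0, 4, n)         # 0..3 -> Black soil, Chernozem, Blown soil, Meadow soil

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```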

  1. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  2. Study of MRI in stratified viscous plasma configuration

    Science.gov (United States)

    Carlevaro, Nakia; Montani, Giovanni; Renzi, Fabrizio

    2017-02-01

    We analyze the morphology of the magneto-rotational instability (MRI) for a stratified viscous plasma disk configuration in differential rotation, taking into account the so-called corotation theorem for the background profile. In order to select the intrinsic Alfvénic nature of MRI, we deal with an incompressible plasma and we adopt a formulation of the local perturbation analysis based on the use of the magnetic flux function as a dynamical variable. Our study outlines, as a consequence of the corotation condition, a marked asymmetry of the MRI with respect to the equatorial plane, particularly evident in a complete damping of the instability above a positive critical height over the equatorial plane. We also emphasize how such a feature is already present (although less pronounced) even in the ideal case, restoring a dependence of the MRI on the stratified morphology of the gravitational field.

  3. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    Full Text Available As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast algorithm of the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. Also, we present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
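
    The abstract does not give the table construction itself, but the general idea it relies on (pre-tabulating a distribution once, then drawing each sample by a single random table query instead of repeated per-step computation) can be sketched as an inverse-CDF lookup table; the exponential step-length density below is an assumption, not the paper's optical model:

```python
import numpy as np

def build_lookup_table(pdf, grid, n_bins=4096):
    """Pre-tabulate the inverse CDF of a 1-D density evaluated on `grid`."""
    p = pdf(grid)
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    u = np.linspace(0.0, 1.0, n_bins)
    return np.interp(u, cdf, grid)          # table[i] approximates F^-1(u_i)

def sample_from_table(table, size, rng):
    """Draw samples by random table querying: one integer draw, one lookup."""
    return table[rng.integers(0, len(table), size=size)]

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 5.0, 2000)
table = build_lookup_table(lambda s: np.exp(-s), grid)   # assumed exponential step lengths
steps = sample_from_table(table, 100_000, rng)
print(steps.mean())    # close to 1.0 for this (truncated) exponential example
```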

  4. Simulation of steam explosion in stratified melt-coolant configuration

    International Nuclear Information System (INIS)

    Leskovar, Matjaž; Centrih, Vasilij; Uršič, Mitja

    2016-01-01

    Highlights: • Strong steam explosions may develop spontaneously in stratified configurations. • Considerable melt-coolant premixed layer formed in subcooled water with hot melts. • Analysis with MC3D code provided insight into stratified steam explosion phenomenon. • Up to 25% of poured melt was mixed with water and available for steam explosion. • Better instrumented experiments needed to determine dominant mixing process. - Abstract: A steam explosion is an energetic fuel coolant interaction process, which may occur during a severe reactor accident when the molten core comes into contact with the coolant water. In nuclear reactor safety analyses steam explosions are primarily considered in melt jet-coolant pool configurations where sufficiently deep coolant pool conditions provide complete jet breakup and efficient premixture formation. Stratified melt-coolant configurations, i.e. a molten melt layer below a coolant layer, were until now believed to be unable to generate strong explosive interactions. Based on the hypothesis that there are no interfacial instabilities in a stratified configuration, it was assumed that the amount of melt in the premixture is insufficient to produce strong explosions. However, the recently performed experiments in the PULiMS and SES (KTH, Sweden) facilities with oxidic corium simulants revealed that strong steam explosions may develop spontaneously also in stratified melt-coolant configurations, where with high temperature melts and subcooled water conditions a considerable melt-coolant premixed layer is formed. In the article, the performed study of steam explosions in a stratified melt-coolant configuration in PULiMS-like conditions is presented. The goal of this analytical work is to supplement the experimental activities within the PULiMS research program by addressing the key questions, especially regarding the explosivity of the formed premixed layer and the mechanisms responsible for the melt-water mixing. To

  5. Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.

    Science.gov (United States)

    Dołęgowska, Sabina

    2016-11-01

    In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample composed of 8 to 10 increments (subsamples) was taken over an area of 10 m2, whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple rinsed with deionized water, dried, milled, and digested (8 mL HNO3 (1:1) + 1 mL 30% H2O2) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality, and for normally distributed elements (Cu from Piaski, Zn from Posłowice, Fe and Zn from Wierna Rzeka) the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10 %) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10 %). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with different statistical methods ranged from 4.1 to 22 %.
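
    For orientation, the simplest estimator in the list above, classical ANOVA on primary/duplicate pairs, reduces to pooling the within-site variance of the pairs; the sketch below (illustrative concentrations, analytical variance neglected, expanded with a coverage factor of 2) is an assumption-laden simplification, not the paper's full RANOVA treatment:

```python
import numpy as np

def sampling_uncertainty_from_duplicates(primary, duplicate):
    """Relative expanded sampling uncertainty (%) from paired field duplicates.

    One-way ANOVA with sites as groups and the two field samples as replicates:
    the within-site mean square estimates the sampling (+ analytical) variance.
    """
    x = np.column_stack([primary, duplicate])              # n_sites x 2
    within_ms = np.mean((x[:, 0] - x[:, 1]) ** 2 / 2.0)    # pooled within-site variance
    s_samp = np.sqrt(within_ms)
    return 200.0 * s_samp / x.mean()                       # expanded (k=2), relative to the mean

# Illustrative Cu concentrations (mg/kg) for primary and duplicate moss samples.
primary = np.array([6.1, 5.8, 7.2, 6.5, 5.9, 6.8])
duplicate = np.array([5.7, 6.2, 6.9, 6.1, 6.3, 6.4])
print(round(sampling_uncertainty_from_duplicates(primary, duplicate), 1), "%")
```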

  6. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  7. MC3D modelling of stratified explosion

    International Nuclear Information System (INIS)

    Picchi, S.; Berthoud, G.

    1999-01-01

    It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin-Helmholtz instabilities due to the shear flow of the two-phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced as well as the influence of the inertial constraint (height of the water pool). (author)

  8. MC3D modelling of stratified explosion

    Energy Technology Data Exchange (ETDEWEB)

    Picchi, S.; Berthoud, G. [DTP/SMTH/LM2, CEA, 38 - Grenoble (France)

    1999-07-01

    It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin-Helmholtz instabilities due to the shear flow of the two-phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced as well as the influence of the inertial constraint (height of the water pool). (author)

  9. Equipment for extracting and conveying stratified minerals

    Energy Technology Data Exchange (ETDEWEB)

    Blumenthal, G.; Kunzer, H.; Plaga, K.

    1991-08-14

    This invention relates to equipment for extracting stratified minerals and conveying the said minerals along the working face, comprising a trough shaped conveyor run assembled from lengths, a troughed extraction run in lengths matching the lengths of conveyor troughing, which is linked to the top edge of the working face side of the conveyor troughing with freedom to swivel vertically, and a positively guided chain carrying extraction tools and scrapers along the conveyor and extraction runs.

  10. Inviscid incompressible limits of strongly stratified fluids

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Jin, B.J.; Novotný, A.

    2014-01-01

    Vol. 89, 3-4 (2014), pp. 307-329 ISSN 0921-7134 R&D Projects: GA ČR GA201/09/0917 Institutional support: RVO:67985840 Keywords: compressible Navier-Stokes system * anelastic approximation * stratified fluid Subject RIV: BA - General Mathematics Impact factor: 0.528, year: 2014 http://iospress.metapress.com/content/d71255745tl50125/?p=969b60ae82634854ab8bd25505ce1f71&pi=3

  11. Large eddy simulation of stably stratified turbulence

    International Nuclear Information System (INIS)

    Shen Zhi; Zhang Zhaoshun; Cui Guixiang; Xu Chunxiao

    2011-01-01

    Stably stratified turbulence is a common phenomenon in the atmosphere and ocean. In this paper large eddy simulation is utilized for investigating homogeneous stably stratified turbulence numerically at Reynolds number Re = uL/ν = 10^2∼10^3 and Froude number Fr = u/(NL) = 10^-2∼10^0, in which u is the root mean square of the velocity fluctuations, L is the integral scale and N is the Brunt-Väisälä frequency. Three sets of computation cases are designed with different initial conditions, namely isotropic turbulence, a Taylor-Green vortex and internal waves, to investigate the statistical properties from different origins. The computed horizontal and vertical energy spectra are consistent with observations in the atmosphere and ocean when the composite parameter ReFr^2 is greater than O(1). It is also found that stably stratified turbulence can develop under different initial velocity conditions and that internal-wave energy dominates in the developed stably stratified turbulence.

  12. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory

    2012-05-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5. An important primitive of these methods is the local planner, which is used for validation of simple paths between two configurations. The most common is the straight-line local planner which interpolates along the straight line between the two configurations. In this paper, we introduce a new local planner, Toggle Local Planner (Toggle LP), which extends local planning to a two-dimensional subspace of the overall planning space. If no path exists between the two configurations in the subspace, then Toggle LP is guaranteed to correctly return false. Intuitively, more connections could be found by Toggle LP than by the straight-line planner, resulting in better connected roadmaps. As shown in our results, this is the case, and additionally, the extra cost, in terms of time or storage, for Toggle LP is minimal. Additionally, our experimental analysis of the planner shows the benefit for a wide array of robots, with DOF as high as 70. © 2012 IEEE.

  13. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    evaluation of ODT shows that it can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction to solve problems related to the use of uncertain and unreliable negative samples and those related to the great demand in computational resources. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  14. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia.

    Science.gov (United States)

    Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein

    2016-08-22

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  15. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia

    Directory of Open Access Journals (Sweden)

    Maria Hernandez-Valladares

    2016-08-01

    Full Text Available Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  16. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
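
    As a bare-bones illustration of the reconstruction step the abstract describes (recovering a frequency-sparse signal from randomly timed samples), the sketch below uses orthogonal matching pursuit over a cosine dictionary; the dictionary, sparsity level and sample counts are assumptions, not the authors' procedure:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse coefficient vector x with y ≈ A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))    # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on the support
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Sparse-in-frequency test signal observed at random instants (the record's
# "high-resolution time-basis"); dictionary columns are cosines on a dense grid.
rng = np.random.default_rng(0)
N, M, K = 512, 64, 3                                        # grid size, samples, sparsity
t_dense = np.arange(N) / N
freqs = np.arange(1, 129)
D = np.cos(2 * np.pi * np.outer(t_dense, freqs))            # N x 128 dictionary
signal = D[:, [4, 19, 46]] @ np.array([1.0, 0.6, 0.3])      # true 3-sparse signal
idx = np.sort(rng.choice(N, size=M, replace=False))         # random sampling instants
x_hat = omp(D[idx, :], signal[idx], K)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))                 # should recover indices 4, 19, 46
```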

  17. [Causes of emergency dizziness stratified by etiology].

    Science.gov (United States)

    Qiao, Wenying; Liu, Jianguo; Zeng, Hong; Liu, Yugeng; Jia, Weihua; Wang, Honghong; Liu, Bo; Tan, Jing; Li, Changqing

    2014-06-03

    To explore the causes of emergency dizziness, stratified by etiology, in order to improve diagnostic efficiency. A total of 1,857 cases of dizziness at our emergency department were collected and their etiologies stratified by age and gender. The top three diagnoses were benign paroxysmal positional vertigo (BPPV, 31.7%), hypertension (24.0%) and posterior circulation ischemia (PCI, 20.5%). Stratified by age, the main causes of dizziness included BPPV (n = 6), migraine-associated vertigo (n = 2), unknown cause (n = 1) for the group of vertigo (14.5%) and neurosis (7.3%) for 18-44 years; BPPV (36.8%), hypertension (22.4%) and migraine-associated vertigo (11.2%) for 45-59 years; hypertension (30.8%), PCI (29.8%) and BPPV (22.9%) for 60-74 years; and PCI (30.7%), hypertension (28.6%) and BPPV (25.5%) for 75-92 years. BPPV, migraine and neurosis were more common in females while hypertension and PCI predominated in males (all P hypertension, neurosis and migraine showed the following significant demographic features: BPPV, PCI, hypertension, neurosis and migraine may be the main causes of dizziness. BPPV should be considered initially when vertigo is triggered repeatedly by positional change, especially in young and middle-aged women. The other common causes of dizziness were migraine-associated vertigo, neurosis and Meniere's disease. Hypertension should be screened first in middle-aged and elderly patients presenting mainly with head heaviness and stretching. In elders with dizziness, BPPV is second in constituent ratio to PCI and hypertension. In middle-aged and elderly patients with dizziness, psychological factors should be considered and diagnosis and treatment should be offered in a timely manner.

  18. White dwarf stars with chemically stratified atmospheres

    Science.gov (United States)

    Muchmore, D.

    1982-01-01

    Recent observations and theory suggest that some white dwarfs may have chemically stratified atmospheres - thin layers of hydrogen lying above helium-rich envelopes. Models of such atmospheres show that a discontinuous temperature inversion can occur at the boundary between the layers. Model spectra for layered atmospheres at 30,000 K and 50,000 K tend to have smaller decrements at 912 A, 504 A, and 228 A than uniform atmospheres would have. On the basis of their continuous extreme ultraviolet spectra, it is possible to distinguish observationally between uniform and layered atmospheres for hot white dwarfs.

  19. Stratified B-trees and versioning dictionaries

    OpenAIRE

    Twigg, Andy; Byde, Andrew; Milos, Grzegorz; Moreton, Tim; Wilkes, John; Wilkie, Tom

    2011-01-01

    A classic versioned data structure in storage and computer science is the copy-on-write (CoW) B-tree -- it underlies many of today's file systems and databases, including WAFL, ZFS, Btrfs and more. Unfortunately, it doesn't inherit the B-tree's optimality properties; it has poor space utilization, cannot offer fast updates, and relies on random IO to scale. Yet, nothing better has been developed since. We describe the `stratified B-tree', which beats all known semi-external memory versioned B...

  20. Preview-based sampling for controlling gaseous simulations

    KAUST Repository

    Huang, Ruoguan; Melek, Zeki; Keyser, John

    2011-01-01

    to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems

  1. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), is typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
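
    A rough sketch of the core idea, approximating each susceptible individual's infectious pressure from a simple random subsample of the infectious set and scaling up, is given below; the power-law kernel and all parameter values are assumptions, not the paper's fitted model:

```python
import numpy as np

def approx_infectious_pressure(sus_xy, inf_xy, beta=1.0, alpha=2.0, m=100, rng=None):
    """Approximate the pressure on each susceptible from a random subsample of infectives.

    Full pressure on susceptible i: beta * sum_j d(i, j)**(-alpha) over all infectious j.
    Approximation: evaluate the sum over m sampled infectives and rescale by n_inf / m.
    """
    rng = rng or np.random.default_rng()
    n_inf = len(inf_xy)
    m = min(m, n_inf)
    sample = inf_xy[rng.choice(n_inf, size=m, replace=False)]
    d = np.linalg.norm(sus_xy[:, None, :] - sample[None, :, :], axis=2)   # n_sus x m distances
    return beta * (n_inf / m) * np.sum(d ** (-alpha), axis=1)

rng = np.random.default_rng(1)
susceptibles = rng.uniform(0, 10, size=(500, 2))
infectives = rng.uniform(0, 10, size=(2000, 2))
full = approx_infectious_pressure(susceptibles, infectives, m=2000, rng=rng)   # exact sum
fast = approx_infectious_pressure(susceptibles, infectives, m=100, rng=rng)    # subsampled
print(np.corrcoef(full, fast)[0, 1])   # the approximation tracks the exact pressure
```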

  2. Exposure to childhood adversity and deficits in emotion recognition: results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Crawford, Katherine M; Soare, Thomas W; Button, Katherine S; Raffeld, Miriam R; Smith, Andrew D A C; Penton-Voak, Ian S; Munafò, Marcus R

    2018-03-07

    Emotion recognition skills are essential for social communication. Deficits in these skills have been implicated in mental disorders. Prior studies of clinical and high-risk samples have consistently shown that children exposed to adversity are more likely than their unexposed peers to have emotion recognition skills deficits. However, only one population-based study has examined this association. We analyzed data from children participating in the Avon Longitudinal Study of Parents and Children, a prospective birth cohort (n = 6,506). We examined the association between eight adversities, assessed repeatedly from birth to age 8 (caregiver physical or emotional abuse; sexual or physical abuse; maternal psychopathology; one adult in the household; family instability; financial stress; parent legal problems; neighborhood disadvantage) and the ability to recognize facial displays of emotion measured using the faces subtest of the Diagnostic Assessment of Non-Verbal Accuracy (DANVA) at age 8.5 years. In addition to examining the role of exposure (vs. nonexposure) to each type of adversity, we also evaluated the role of the timing, duration, and recency of each adversity using a Least Angle Regression variable selection procedure. Over three-quarters of the sample experienced at least one adversity. We found no evidence to support an association between emotion recognition deficits and previous exposure to adversity, either in terms of total lifetime exposure, timing, duration, or recency, or when stratifying by sex. Results from the largest population-based sample suggest that even extreme forms of adversity are unrelated to emotion recognition deficits as measured by the DANVA, suggesting the possible immutability of emotion recognition in the general population. These findings emphasize the importance of population-based studies to generate generalizable results. © 2018 Association for Child and Adolescent Mental Health.

  3. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
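
    The multinomial-weight formulation described above is easy to reproduce; the sketch below does it in NumPy rather than the paper's R, for Pearson's correlation, with the number of replications chosen arbitrarily:

```python
import numpy as np

def vectorized_bootstrap_corr(x, y, B=2000, seed=0):
    """Bootstrap replicates of Pearson's correlation via multinomial weights.

    Each replicate is a row of multinomial counts summing to n; weighted sample
    moments computed by matrix multiplication replace explicit resampling.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # B x n weight matrix
    mx, my = W @ x, W @ y                                     # weighted means
    sxy = W @ (x * y) - mx * my                               # weighted covariance
    sxx = W @ (x * x) - mx ** 2
    syy = W @ (y * y) - my ** 2
    return sxy / np.sqrt(sxx * syy)                           # B bootstrap correlations

# Toy usage: percentile bootstrap interval for a correlated pair of variables.
rng_data = np.random.default_rng(42)
x = rng_data.normal(size=100)
y = 0.6 * x + 0.8 * rng_data.normal(size=100)
reps = vectorized_bootstrap_corr(x, y)
print(np.percentile(reps, [2.5, 97.5]))
```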

  4. Finding metastabilities in reversible Markov chains based on incomplete sampling

    Directory of Open Access Journals (Sweden)

    Fackeldey Konstantin

    2017-01-01

    Full Text Available In order to fully characterize the state-transition behaviour of finite Markov chains one needs to provide the corresponding transition matrix P. In many applications such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from one state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach fits very well to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it is different from the matrix completion approaches which try to approximate the transition matrix by using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we will estimate the stationary distribution from a partially given matrix P. Second, we will estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we will compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we will apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.
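
    For reference, the two linear-algebra steps the abstract mentions, the stationary distribution and the leading invariant subspace of a transition matrix, look as follows when the full row-stochastic matrix P is available; the partial-sampling estimation that is the paper's actual contribution is not reproduced here, and the 4-state example is an assumption:

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a probability vector."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def leading_invariant_subspace(P, k):
    """Eigenvectors of P for the k largest eigenvalues (the metastable part of the spectrum)."""
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-np.real(vals))
    return np.real(vecs[:, order[:k]])

# Small 4-state chain with two metastable pairs {0, 1} and {2, 3}.
P = np.array([[0.89, 0.10, 0.01, 0.00],
              [0.10, 0.89, 0.00, 0.01],
              [0.00, 0.01, 0.89, 0.10],
              [0.01, 0.00, 0.10, 0.89]])
print(stationary_distribution(P))          # ~uniform for this symmetric example
print(leading_invariant_subspace(P, 2))    # spans the metastable 2-cluster structure
```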

  5. Soil mixing of stratified contaminated sands.

    Science.gov (United States)

    Al-Tabba, A; Ayotamuno, M J; Martin, R J

    2000-02-01

    Validation of soil mixing for the treatment of contaminated ground is needed in a wide range of site conditions to widen the application of the technology and to understand the mechanisms involved. Since very limited work has been carried out in heterogeneous ground conditions, this paper investigates the effectiveness of soil mixing in stratified sands using laboratory-scale augers. This enabled a low cost investigation of factors such as grout type and form, auger design, installation procedure, mixing mode, curing period, thickness of soil layers and natural moisture content on the unconfined compressive strength, leachability and leachate pH of the soil-grout mixes. The results showed that the auger design plays a very important part in the mixing process in heterogeneous sands. The variability of the properties measured in the stratified soils and the measurable variations caused by the various factors considered, highlighted the importance of duplicating appropriate in situ conditions, the usefulness of laboratory-scale modelling of in situ conditions and the importance of modelling soil and contaminant heterogeneities at the treatability study stage.

  6. Stratified coastal ocean interactions with tropical cyclones

    Science.gov (United States)

    Glenn, S. M.; Miles, T. N.; Seroka, G. N.; Xu, Y.; Forney, R. K.; Yu, F.; Roarty, H.; Schofield, O.; Kohut, J.

    2016-01-01

    Hurricane-intensity forecast improvements currently lag the progress achieved for hurricane tracks. Integrated ocean observations and simulations during hurricane Irene (2011) reveal that the wind-forced two-layer circulation of the stratified coastal ocean, and resultant shear-induced mixing, led to significant and rapid ahead-of-eye-centre cooling (at least 6 °C and up to 11 °C) over a wide swath of the continental shelf. Atmospheric simulations establish this cooling as the missing contribution required to reproduce Irene's accelerated intensity reduction. Historical buoys from 1985 to 2015 show that ahead-of-eye-centre cooling occurred beneath all 11 tropical cyclones that traversed the Mid-Atlantic Bight continental shelf during stratified summer conditions. A Yellow Sea buoy similarly revealed significant and rapid ahead-of-eye-centre cooling during Typhoon Muifa (2011). These findings establish that including realistic coastal baroclinic processes in forecasts of storm intensity and impacts will be increasingly critical to mid-latitude population centres as sea levels rise and tropical cyclone maximum intensities migrate poleward. PMID:26953963

  7. Stratified Simulations of Collisionless Accretion Disks

    Energy Technology Data Exchange (ETDEWEB)

    Hirabayashi, Kota; Hoshino, Masahiro, E-mail: hirabayashi-k@eps.s.u-tokyo.ac.jp [Department of Earth and Planetary Science, The University of Tokyo, Tokyo, 113-0033 (Japan)

    2017-06-10

    This paper presents a series of stratified-shearing-box simulations of collisionless accretion disks in the recently developed framework of kinetic magnetohydrodynamics (MHD), which can handle finite non-gyrotropy of a pressure tensor. Although a fully kinetic simulation predicted a more efficient angular-momentum transport in collisionless disks than in the standard MHD regime, the enhanced transport has not been observed in past kinetic-MHD approaches to gyrotropic pressure anisotropy. For the purpose of investigating this missing link between the fully kinetic and MHD treatments, this paper explores the role of non-gyrotropic pressure and makes the first attempt to incorporate certain collisionless effects into disk-scale, stratified disk simulations. When the timescale of gyrotropization was longer than, or comparable to, the disk-rotation frequency of the orbit, we found that the finite non-gyrotropy selectively remaining in the vicinity of current sheets contributes to suppressing magnetic reconnection in the shearing-box system. This leads to increases both in the saturated amplitude of the MHD turbulence driven by magnetorotational instabilities and in the resultant efficiency of angular-momentum transport. Our results seem to favor the fast advection of magnetic fields toward the rotation axis of a central object, which is required to launch an ultra-relativistic jet from a black hole accretion system in, for example, a magnetically arrested disk state.

  8. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in reliability definition complies with normal distribution, the conjugate prior of its distributing parameters (μ, h) is of normal-gamma distribution. With the help of maximum entropy and the moments-equivalence principles, the subjective information of the parameter and the sampling data of its independent variables are transformed to a Bayesian prior of (μ,h). The desired estimates are obtained from either the prior or the posterior which is formed by combining the prior and sampling data. Computing methods are described and examples are presented to give demonstrations
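
    The conjugate normal-gamma update described above can be illustrated numerically. The following snippet is not from the paper; it is a minimal sketch of the standard normal-gamma posterior update for a Normal likelihood with unknown mean μ and precision h, with hyperparameter values and the data sample invented purely for illustration.

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, alpha0, beta0, x):
    """Conjugate update of a Normal-Gamma prior on (mu, h) given data x.

    Prior: h ~ Gamma(alpha0, rate=beta0), mu | h ~ Normal(mu0, 1/(kappa0*h)).
    Returns the posterior hyperparameters (mu_n, kappa_n, alpha_n, beta_n).
    """
    x = np.asarray(x, dtype=float)
    n, xbar, s2 = x.size, x.mean(), x.var()   # s2 uses ddof=0, so n*s2 = sum((x-xbar)^2)
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * n * s2 + 0.5 * kappa0 * n * (xbar - mu0) ** 2 / kappa_n
    return mu_n, kappa_n, alpha_n, beta_n

# Hypothetical small sample of a strength-like parameter assumed Normal (invented values)
sample = np.array([101.2, 99.8, 100.5, 102.1, 98.9])
mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update(mu0=100.0, kappa0=1.0,
                                                     alpha0=2.0, beta0=2.0, x=sample)
# Point estimates: E[mu] = mu_n, E[h] = alpha_n / beta_n
print("posterior:", mu_n, kappa_n, alpha_n, beta_n)
```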

  9. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970

  10. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two-sample case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  11. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45min. Information related to the farm, the sows, and their living conditions were collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2min 42s on average; the type of floor, swab size, and operator were associated with a sampling time >2min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10min. After 15, 30 and 45min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time spent after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1min 16s and 2min 52s if the number of operators required was considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time higher than 1min 30s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A Sample-Based Forest Monitoring Strategy Using Landsat, AVHRR and MODIS Data to Estimate Gross Forest Cover Loss in Malaysia between 1990 and 2005

    Directory of Open Access Journals (Sweden)

    Peter Potapov

    2013-04-01

    Full Text Available Insular Southeast Asia is a hotspot of humid tropical forest cover loss. A sample-based monitoring approach quantifying forest cover loss from Landsat imagery was implemented to estimate gross forest cover loss for two eras, 1990–2000 and 2000–2005. For each time interval, a probability sample of 18.5 km × 18.5 km blocks was selected, and pairs of Landsat images acquired per sample block were interpreted to quantify forest cover area and gross forest cover loss. Stratified random sampling was implemented for 2000–2005 with MODIS-derived forest cover loss used to define the strata. A probability proportional to x (πpx) design was implemented for 1990–2000 with AVHRR-derived forest cover loss used as the x variable to increase the likelihood of including forest loss area in the sample. The estimated annual gross forest cover loss for Malaysia was 0.43 Mha/yr (SE = 0.04) during 1990–2000 and 0.64 Mha/yr (SE = 0.055) during 2000–2005. Our use of the πpx sampling design represents a first practical trial of this design for sampling satellite imagery. Although the design performed adequately in this study, a thorough comparative investigation of the πpx design relative to other sampling strategies is needed before general design recommendations can be put forth.
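
    As an illustration of the πpx idea, the sketch below selects a probability-proportional-to-x sample of blocks and estimates the population total of forest loss with a Hansen-Hurwitz-type estimator. It is not the authors' procedure: the population, the auxiliary variable x and the sample size are invented, and the estimator is the textbook PPS-with-replacement form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of blocks with an auxiliary variable x (e.g. an
# AVHRR-indicated loss signal) and a "true" per-block loss y, invented here
# only so the estimator can be checked against a known total.
N = 1000
x = rng.gamma(shape=0.5, scale=2.0, size=N) + 0.01           # auxiliary, strictly > 0
y = (0.8 * x + rng.normal(0, 0.2, size=N)).clip(min=0)       # per-block forest loss

n = 30                                   # sample size
p = x / x.sum()                          # selection probabilities proportional to x
idx = rng.choice(N, size=n, replace=True, p=p)               # PPS sampling with replacement

# Hansen-Hurwitz estimator of the population total of y under PPS sampling
total_hat = np.mean(y[idx] / p[idx])
se_hat = np.std(y[idx] / p[idx], ddof=1) / np.sqrt(n)

print(f"estimated total loss: {total_hat:.1f} (true total: {y.sum():.1f}), SE ~ {se_hat:.1f}")
```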

  13. Health-related quality of life predictors during medical residency in a random, stratified sample of residents Preditores de qualidade de vida relacionada à saúde durante a residência médica em uma amostra randomizada e estratificada de médicos residentes

    Directory of Open Access Journals (Sweden)

    Paula Costa Mosca Macedo

    2009-06-01

    Full Text Available OBJECTIVE: To evaluate the quality of life of medical residents during the first three years of training and identify its association with sociodemographic-occupational characteristics, leisure time and health habits. METHOD: A cross-sectional study with a random sample of 128 residents stratified by year of training was conducted. The Medical Outcome Study-Short Form 36 was administered. Mann-Whitney tests were carried out to compare percentile distributions of the eight quality of life domains according to sociodemographic variables, and a multiple linear regression analysis was performed, followed by a validity check of the resulting models. RESULTS: The physical component presented higher quality of life medians than the mental component. Comparisons between the three years showed that in almost all domains the quality of life scores of second-year residents were higher than those of first-year residents (p < 0.01); for the mental component, scores were higher in the third year than in the other two years (p < 0.01). Predictors of higher quality of life were: being in the second or…

  14. Dietary intakes of pesticides based on community duplicate diet samples.

    Science.gov (United States)

    Melnyk, Lisa Jo; Xue, Jianping; Brown, G Gordon; McCombs, Michelle; Nishioka, Marcia; Michael, Larry C

    2014-01-15

    The calculation of dietary intake of selected pesticides was accomplished using food samples collected from individual representatives of a defined demographic community using a community duplicate diet approach. A community of nine participants was identified in Apopka, FL from which intake assessments of organophosphate (OP) and pyrethroid pesticides were made. From these nine participants, sixty-seven individual samples were collected and subsequently analyzed by gas chromatography/mass spectrometry. Measured concentrations were used to estimate dietary intakes for individuals and for the community. Individual intakes of total OP and pyrethroid pesticides ranged from 6.7 to 996 ng and 1.2 to 16,000 ng, respectively. The community intake was 256 ng for OPs and 3430 ng for pyrethroid pesticides. The most commonly detected pesticide was permethrin, but the highest overall intake was of bifenthrin followed by esfenvalerate. These data indicate that the community in Apopka, FL, as represented by the nine individuals, was potentially exposed to both OP and pyrethroid pesticides at levels consistent with a dietary model and other field studies in which standard duplicate diet samples were collected. Higher levels of pyrethroid pesticides were measured than OPs, which is consistent with decreased usage of OPs. The diversity of pyrethroid pesticides detected in food samples was greater than expected. Continually changing pesticide usage patterns need to be considered when determining analytes of interest for large scale epidemiology studies. The Community Duplicate Diet Methodology is a tool for researchers to meet emerging exposure measurement needs that will lead to more accurate assessments of intake which may enhance decisions for chemical regulation. Successfully determining the intake of pesticides through the dietary route will allow for accurate assessments of pesticide exposures to a community of individuals, thereby significantly enhancing the research benefit

  15. The Performance Analysis Based on SAR Sample Covariance Matrix

    Directory of Open Access Journals (Sweden)

    Esra Erten

    2012-03-01

    Full Text Available Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, the statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media generally present zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in a simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
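
    The statistical behaviour discussed above can be reproduced empirically. The snippet below is a hedged sketch, not the paper's derivation: it draws zero-mean circular complex Gaussian multi-channel samples with an assumed 3×3 covariance, forms the sample covariance matrix from a limited number of looks, and extracts its maximum eigenvalue; the covariance values and the number of looks are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def complex_gaussian_samples(true_cov, n):
    """Draw n zero-mean circular complex Gaussian vectors with covariance true_cov."""
    p = true_cov.shape[0]
    L = np.linalg.cholesky(true_cov)
    # Circular symmetry: unit-variance complex entries split between real/imag parts
    z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    return z @ L.conj().T

# Hypothetical 3-channel (e.g. polarimetric) covariance matrix
true_cov = np.array([[2.0, 0.5, 0.1],
                     [0.5, 1.0, 0.2],
                     [0.1, 0.2, 0.5]], dtype=complex)

n_looks = 16                                    # limited number of samples (looks)
samples = complex_gaussian_samples(true_cov, n_looks)

# Sample covariance matrix (complex-Wishart distributed up to scaling)
C_hat = samples.conj().T @ samples / n_looks

max_eig = np.linalg.eigvalsh(C_hat).max()       # C_hat is Hermitian, eigenvalues are real
print(f"max eigenvalue of sample covariance: {max_eig:.3f} "
      f"(true covariance: {np.linalg.eigvalsh(true_cov).max():.3f})")
```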

  16. Ecosystem metabolism in a stratified lake

    DEFF Research Database (Denmark)

    Stæhr, Peter Anton; Christensen, Jesper Philip Aagaard; Batt, Ryan D.

    2012-01-01

    … differences were not significant. During stratification, daily variability in epilimnetic DO was dominated by metabolism (46%) and air-water gas exchange (44%). Fluxes related to mixed-layer deepening dominated in meta- and hypolimnic waters (49% and 64%), while eddy diffusion (1% and 14%) was less important. … Although air-water gas exchange rates differed among the three formulations of gas-transfer velocity, this had no significant effect on metabolic rates. … that integrates rates across the entire depth profile and includes DO exchange between depth layers driven by mixed-layer deepening and eddy diffusivity. During full mixing, NEP was close to zero throughout the water column, and GPP and R were reduced 2-10 times compared to stratified periods. When present…

  17. Stratified growth in Pseudomonas aeruginosa biofilms

    DEFF Research Database (Denmark)

    Werner, E.; Roe, F.; Bugnicourt, A.

    2004-01-01

    In this study, stratified patterns of protein synthesis and growth were demonstrated in Pseudomonas aeruginosa biofilms. Spatial patterns of protein synthetic activity inside biofilms were characterized by the use of two green fluorescent protein (GFP) reporter gene constructs. One construct … synthesis was restricted to a narrow band in the part of the biofilm adjacent to the source of oxygen. The zone of active GFP expression was approximately 60 μm wide in colony biofilms and 30 μm wide in flow cell biofilms. The region of the biofilm in which cells were capable of elongation was mapped … by treating colony biofilms with carbenicillin, which blocks cell division, and then measuring individual cell lengths by transmission electron microscopy. Cell elongation was localized at the air interface of the biofilm. The heterogeneous anabolic patterns measured inside these biofilms were likely a result…

  18. Thermal instability in a stratified plasma

    International Nuclear Information System (INIS)

    Hermanns, D.F.M.; Priest, E.R.

    1989-01-01

    The thermal instability mechanism has been studied in connection with observed coronal features such as prominences or cool cores in loops. Although these features show a lot of structure, most studies concern the thermal instability in a uniform medium. In this paper, we investigate the thermal instability and the interaction between thermal modes and the slow magneto-acoustic subspectrum for a stratified plasma slab. We formulate the relevant system of equations and give some straightforward properties of the linear spectrum of a non-uniform plasma slab, i.e. the existence of continuous parts in the spectrum. We present a numerical scheme with which we can investigate the linear spectrum for equilibrium states with stratification. The slow and thermal subspectra of a crude coronal model are given as a preliminary result. (author). 6 refs.; 1 fig

  19. Improvements to TRAC models of condensing stratified flow. Pt. 1

    International Nuclear Information System (INIS)

    Zhang, Q.; Leslie, D.C.

    1991-12-01

    Direct contact condensation in stratified flow is an important phenomenon in LOCA analyses. In this report, the TRAC interfacial heat transfer model for stratified condensing flow has been assessed against the Bankoff experiments. A rectangular channel option has been added to the code to represent the experimental geometry. In almost all cases the TRAC heat transfer coefficient (HTC) over-predicts the condensation rates and in some cases it is so high that the predicted steam is sucked in from the normal outlet in order to conserve mass. Based on their cocurrent and countercurrent condensing flow experiments, Bankoff and his students (Lim 1981, Kim 1985) developed HTC models from the two cases. The replacement of the TRAC HTC with either of Bankoff's models greatly improves the predictions of condensation rates in the experiment with cocurrent condensing flow. However, the Bankoff HTC for countercurrent flow is preferable because it is based only on the local quantities rather than on the quantities averaged from the inlet. (author)

  20. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study

    International Nuclear Information System (INIS)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-01-01

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population–based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  1. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  2. Polymeric ionic liquid-based portable tip microextraction device for on-site sample preparation of water samples.

    Science.gov (United States)

    Chen, Lei; Pei, Junxian; Huang, Xiaojia; Lu, Min

    2018-06-05

    On-site sample preparation is highly desired because it avoids the transportation of large-volume samples and ensures the accuracy of the analytical results. In this work, a portable prototype of a tip microextraction device (TMD) was designed and developed for on-site sample pretreatment. The assembly procedure of the TMD is quite simple. Firstly, a polymeric ionic liquid (PIL)-based adsorbent was prepared in situ in a pipette tip. After that, the tip was connected to a syringe driven by a bidirectional motor. The flow rates in the adsorption and desorption steps were controlled accurately by the motor. To evaluate the practicability of the developed device, the TMD was used for on-site sample preparation of waters and combined with high-performance liquid chromatography with diode array detection to measure trace estrogens in water samples. Under the most favorable conditions, the limits of detection (LODs, S/N = 3) for the target analytes were in the range of 4.9-22 ng/L, with good coefficients of determination. A confirmatory study well evidences that the extraction performance of the TMD is comparable to that of the traditional laboratory solid-phase extraction process, but the proposed TMD is simpler and more convenient. At the same time, the TMD avoids complicated sampling and transferring steps of large-volume water samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  4. Adaptive Sampling for Nonlinear Dimensionality Reduction Based on Manifold Learning

    DEFF Research Database (Denmark)

    Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan

    2017-01-01

    We make use of the non-intrusive dimensionality reduction method Isomap in order to emulate nonlinear parametric flow problems that are governed by the Reynolds-averaged Navier-Stokes equations. Isomap is a manifold learning approach that provides a low-dimensional embedding space that is approxi… to detect and fill up gaps in the sampling in the embedding space. The performance of the proposed manifold filling method will be illustrated by numerical experiments, where we consider nonlinear parameter-dependent steady-state Navier-Stokes flows in the transonic regime.
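
    A minimal sketch of an Isomap-based workflow, under stated assumptions: the "snapshots" are synthetic stand-ins for parametric flow solutions, the embedding dimension is chosen arbitrarily, and the gap criterion (largest spacing between neighbouring embedded samples) is a naive substitute for the authors' adaptive sampling criterion.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(2)

# Hypothetical "snapshots": high-dimensional states that depend nonlinearly on a
# low-dimensional parameter (a stand-in for parametric flow solutions).
t = np.sort(rng.uniform(0, 3 * np.pi, size=60))
snapshots = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
snapshots = snapshots + rng.normal(0, 0.01, snapshots.shape)
snapshots = np.hstack([snapshots, np.zeros((snapshots.shape[0], 50))])  # embed in R^53

# Low-dimensional embedding via Isomap (manifold learning)
embedding = Isomap(n_neighbors=8, n_components=1).fit_transform(snapshots)

# Naive "gap" detector: large spacing between consecutive embedded samples
order = np.argsort(embedding[:, 0])
gaps = np.diff(embedding[order, 0])
worst = np.argmax(gaps)
print(f"largest gap in embedding space: {gaps[worst]:.3f} "
      f"-> a new sample could be placed between snapshots {order[worst]} and {order[worst + 1]}")
```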

  5. All-polymer microfluidic systems for droplet based sample analysis

    DEFF Research Database (Denmark)

    Poulsen, Carl Esben

    In this PhD project, I pursued the development of an all-polymer, injection moulded microfluidic platform with integrated droplet-based single cell interrogation. To allow for a proper "one device - one experiment" methodology and to ensure high relevance to non-academic settings, the systems presented…

  6. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  7. Protein expression based multimarker analysis of breast cancer samples

    International Nuclear Information System (INIS)

    Presson, Angela P; Horvath, Steve; Yoon, Nam K; Bagryanova, Lora; Mah, Vei; Alavi, Mohammad; Maresh, Erin L; Rajasekaran, Ayyappan K; Goodglick, Lee; Chia, David

    2011-01-01

    Tissue microarray (TMA) data are commonly used to validate the prognostic accuracy of tumor markers. For example, breast cancer TMA data have led to the identification of several promising prognostic markers of survival time. Several studies have shown that TMA data can also be used to cluster patients into clinically distinct groups. Here we use breast cancer TMA data to cluster patients into distinct prognostic groups. We apply weighted correlation network analysis (WGCNA) to TMA data consisting of 26 putative tumor biomarkers measured on 82 breast cancer patients. Based on this analysis we identify three groups of patients with low (5.4%), moderate (22%) and high (50%) mortality rates, respectively. We then develop a simple threshold rule using a subset of three markers (p53, Na-KATPase-β1, and TGF β receptor II) that can approximately define these mortality groups. We compare the results of this correlation network analysis with results from a standard Cox regression analysis. We find that the rule-based grouping variable (referred to as WGCNA*) is an independent predictor of survival time. While WGCNA* is based on protein measurements (TMA data), it was validated in two independent Affymetrix microarray gene expression datasets (which measure mRNA abundance). We find that the WGCNA patient groups differed by 35% from mortality groups defined by a more conventional stepwise Cox regression analysis approach. We show that correlation network methods, which are primarily used to analyze the relationships between gene products, are also useful for analyzing the relationships between patients and for defining distinct patient groups based on TMA data. We identify a rule based on three tumor markers for predicting breast cancer survival outcomes.

  8. The optimism trap: Migrants' educational choices in stratified education systems.

    Science.gov (United States)

    Tjaden, Jasper Dag; Hunkler, Christian

    2017-09-01

    Immigrant children's ambitious educational choices have often been linked to their families' high level of optimism and motivation for upward mobility. However, previous research has mostly neglected alternative explanations such as information asymmetries or anticipated discrimination. Moreover, immigrant children's higher dropout rates at the higher secondary and university level suggest that low-performing migrant students could have benefitted more from pursuing less ambitious tracks, especially in countries that offer viable vocational alternatives. We examine ethnic minorities' educational choices using a sample of academically low-performing, lower secondary school students in Germany's highly stratified education system. We find that their families' optimism diverts migrant students from viable vocational alternatives. Information asymmetries and anticipated discrimination do not explain their high educational ambitions. While our findings further support the immigrant optimism hypothesis, we discuss how its effect may have different implications depending on the education system. Copyright © 2017. Published by Elsevier Inc.

  9. Environmental sampling at remote sites based on radiological screening assessments

    International Nuclear Information System (INIS)

    Ebinger, M.H.; Hansen, W.R.; Wenz, G.; Oxenberg, T.P.

    1996-01-01

    Environmental radiation monitoring (ERM) data from remote sites on the White Sands Missile Range, New Mexico, were used to estimate doses to humans and terrestrial mammals from residual radiation deposited during testing of components containing depleted uranium (DU) and thorium (Th). ERM data were used with the DOE code RESRAD and a simple steady-state pathway code to estimate the potential adverse effects from DU and Th to workers in the contaminated zones, to hunters consuming animals from the contaminated zones, and to terrestrial mammals that inhabit the contaminated zones. Assessments of zones contaminated with DU and Th and DU alone were conducted. Radiological doses from Th and DU in soils were largest, with a maximum of about 3.5 mrem y⁻¹ in humans and a maximum of about 0.1 mrad d⁻¹ in deer. Dose estimates from DU alone in soils were significantly less, with a maximum of about 1 mrem y⁻¹ in humans and about 0.04 mrad d⁻¹ in deer. The results of the dose estimates suggest strongly that environmental sampling in these affected areas can be infrequent and still provide adequate assessments of radiological doses to workers, hunters, and terrestrial mammals

  10. Designing Wood Supply Scenarios from Forest Inventories with Stratified Predictions

    Directory of Open Access Journals (Sweden)

    Philipp Kilham

    2018-02-01

    Full Text Available Forest growth and wood supply projections are increasingly used to estimate the future availability of woody biomass and the correlated effects on forests and climate. This research parameterizes an inventory-based business-as-usual wood supply scenario with a stratified prediction, focusing on southwest Germany and the period 2002–2012. First, the Classification and Regression Trees algorithm groups the inventory plots into strata with corresponding harvest probabilities. Second, Random Forest algorithms generate individual harvest probabilities for the plots of each stratum. Third, the plots with the highest individual probabilities are selected as harvested until the harvest probability of the stratum is fulfilled. Fourth, the harvested volume of these plots is predicted with a linear regression model trained on harvested plots only. To illustrate the pros and cons of this method, it is compared to a direct harvested volume prediction with linear regression, and a combination of logistic regression and linear regression. Direct harvested volume regression predicts comparable volume figures, but generates these volumes in a way that differs from business-as-usual. The logistic model achieves higher overall classification accuracies, but results in underestimations or overestimations of harvest shares for several subsets of the data. The stratified prediction method balances this shortcoming, and can be of general use for forest growth and timber supply projections from large-scale forest inventories.
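
    The four-step stratified prediction can be mimicked with standard scikit-learn components. The sketch below uses invented plot-level features and harvest outcomes; the tree depth, forest size and feature set are arbitrary assumptions, so it only illustrates the structure of the pipeline (strata from a classification tree, per-stratum random forests, selection up to the stratum's harvest share, and a volume regression trained on harvested plots only).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Hypothetical inventory plots: features, harvest outcome and harvested volume
n = 2000
X = rng.normal(size=(n, 4))                               # e.g. standing volume, age, slope, ownership
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.5 * X[:, 1])))
harvested = rng.uniform(size=n) < p_true
volume = np.where(harvested, np.clip(50 + 30 * X[:, 0] + rng.normal(0, 5, n), 0, None), 0.0)

# Step 1: a shallow classification tree defines strata; each leaf gives a harvest share
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100, random_state=0).fit(X, harvested)
stratum = tree.apply(X)                                   # leaf index = stratum label

selected = np.zeros(n, dtype=bool)
for s in np.unique(stratum):
    in_s = stratum == s
    share = harvested[in_s].mean()                        # harvest probability of the stratum
    # Step 2: stratum-specific random forest gives individual harvest probabilities
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[in_s], harvested[in_s])
    prob = rf.predict_proba(X[in_s])[:, 1] if rf.n_classes_ > 1 else np.ones(in_s.sum())
    # Step 3: flag the highest-probability plots until the stratum's share is met
    k = int(round(share * in_s.sum()))
    top = np.argsort(prob)[::-1][:k]
    selected[np.flatnonzero(in_s)[top]] = True

# Step 4: linear regression trained on harvested plots only predicts removed volume
reg = LinearRegression().fit(X[harvested], volume[harvested])
predicted_total = reg.predict(X[selected]).sum()
print(f"predicted harvested volume: {predicted_total:.0f} (observed: {volume.sum():.0f})")
```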

  11. A modified stratified model for the 3C 273 jet

    International Nuclear Information System (INIS)

    Liu Wenpo; Shen Zhiqiang

    2009-01-01

    We present a modified stratified jet model to interpret the observed spectral energy distributions of knots in the 3C 273 jet. Based on the hypothesis of a single index of the particle energy spectrum at injection and identical emission processes among all the knots, the observed difference of spectral shape among different 3C 273 knots can be understood as a manifestation of the deviation of the equivalent Doppler factor of stratified emission regions in an individual knot from a characteristic one. The summed spectral energy distributions of all ten knots in the 3C 273 jet can be well fitted by two components: a low-energy component (radio to optical) dominated by synchrotron radiation and a high-energy component (UV, X-ray and γ-ray) dominated by inverse Compton scattering of the cosmic microwave background. This gives a consistent spectral index of α = 0.88 (S_ν ∝ ν^(−α)) and a characteristic Doppler factor of 7.4. Assuming the average of the summed spectrum as the characteristic spectrum of each knot in the 3C 273 jet, we further get a distribution of Doppler factors. We discuss the possible implications of these results for the physical properties in the 3C 273 jet. Future GeV observations with GLAST could separate the γ-ray emission of 3C 273 from the large scale jet and the small scale jet (i.e. the core) through measuring the GeV spectrum.

  12. The effect of surfactant on stratified and stratifying gas-liquid flows

    Science.gov (United States)

    Heiles, Baptiste; Zadrazil, Ivan; Matar, Omar

    2013-11-01

    We consider the dynamics of a stratified/stratifying gas-liquid flow in horizontal tubes. This flow regime is characterised by the thin liquid films that drain under gravity along the pipe interior, forming a pool at the bottom of the tube, and the formation of large-amplitude waves at the gas-liquid interface. This regime is also accompanied by the detachment of droplets from the interface and their entrainment into the gas phase. We carry out an experimental study involving axial- and radial-view photography of the flow, in the presence and absence of surfactant. We show that the effect of surfactant is to reduce significantly the average diameter of the entrained droplets, through a tip-streaming mechanism. We also highlight the influence of surfactant on the characteristics of the interfacial waves, and the pressure gradient that drives the flow. EPSRC Programme Grant EP/K003976/1.

  13. Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2017-10-20

    In using regularly collected or existing surveillance data to characterize engagement in human immunodeficiency virus (HIV) services among marginalized populations, differences in sampling methods may produce different pictures of the target population and may therefore result in different priorities for response. The objective of this study was to use existing data to evaluate the sample distribution of eight studies of female sex workers (FSW) and men who have sex with men (MSM), who were recruited using different sampling approaches in two locations within Sub-Saharan Africa: Manzini, Swaziland and Yaoundé, Cameroon. MSM and FSW participants were recruited using either respondent-driven sampling (RDS) or venue-based snowball sampling. Recruitment took place between 2011 and 2016. Participants at each study site were administered a face-to-face survey to assess sociodemographics, along with the prevalence of self-reported HIV status, frequency of HIV testing, stigma, and other HIV-related characteristics. Crude and RDS-adjusted prevalence estimates were calculated. Crude prevalence estimates from the venue-based snowball samples were compared with the overlap of the RDS-adjusted prevalence estimates, between both FSW and MSM in Cameroon and Swaziland. RDS samples tended to be younger (MSM aged 18-21 years in Swaziland: 47.6% [139/310] in RDS vs 24.3% [42/173] in Snowball, in Cameroon: 47.9% [99/306] in RDS vs 20.1% [52/259] in Snowball; FSW aged 18-21 years in Swaziland 42.5% [82/325] in RDS vs 8.0% [20/249] in Snowball; in Cameroon 15.6% [75/576] in RDS vs 8.1% [25/306] in Snowball). They were less educated (MSM: primary school completed or less in Swaziland 42.6% [109/310] in RDS vs 4.0% [7/173] in Snowball, in Cameroon 46.2% [138/306] in RDS vs 14.3% [37/259] in Snowball; FSW: primary school completed or less in Swaziland 86.6% [281/325] in RDS vs 23.9% [59/247] in Snowball, in Cameroon 87.4% [520/576] in RDS vs 77.5% [238/307] in Snowball) than the snowball
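
    To illustrate why RDS-adjusted and crude estimates can differ, the sketch below contrasts a crude proportion with an inverse network-size weighted proportion in the spirit of an RDS-II style estimator. The data, the degree distribution and the outcome model are entirely invented; this is not the estimator actually used in the studies above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical RDS sample: each respondent reports a personal network size
# ("degree") and a binary outcome (e.g. ever tested for HIV). Well-connected
# respondents are over-represented in RDS recruitment, so RDS-II style
# estimates weight each respondent by 1/degree.
n = 400
degree = rng.integers(2, 60, size=n)
# Invented association: well-connected respondents are more likely to report testing
outcome = rng.uniform(size=n) < (0.3 + 0.4 * (degree > 20))

crude = outcome.mean()

weights = 1.0 / degree                          # inverse-degree weights
rds_adjusted = np.sum(weights * outcome) / np.sum(weights)

print(f"crude prevalence: {crude:.2%}, RDS-adjusted prevalence: {rds_adjusted:.2%}")
```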

  14. Deducing magnetic resonance neuroimages based on knowledge from samples.

    Science.gov (United States)

    Jiang, Yuwei; Liu, Feng; Fan, Mingxia; Li, Xuzhou; Zhao, Zhiyong; Zeng, Zhaoling; Wang, Yi; Xu, Dongrong

    2017-12-01

    Because individual variance always exists, using the same set of predetermined parameters for magnetic resonance imaging (MRI) may not be exactly suitable for each participant. We propose a knowledge-based method that can repair MRI data of undesired contrast as if a new scan were acquired using imaging parameters that had been individually optimized. The method employed a strategy called analogical reasoning to deduce voxel-wise relaxation properties using morphological and biological similarity. The proposed framework involves steps of intensity normalization, tissue segmentation, relaxation time deducing, and image deducing. This approach has been preliminarily validated using conventional MRI data at 3T from several examples, including 5 normal and 9 clinical datasets. It can effectively improve the contrast of real MRI data by deducing imaging data using optimized imaging parameters based on deduced relaxation properties. The statistics of deduced images shows a high correlation with real data that were actually collected using the same set of imaging parameters. The proposed method of deducing MRI data using knowledge of relaxation times alternatively provides a way of repairing MRI data of less optimal contrast. The method is also capable of optimizing an MRI protocol for individual participants, thereby realizing personalized MR imaging. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the ‘failure probability function (FPF)’. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
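
    The idea of expressing the failure probability as a weighted sum over samples from a single reliability analysis can be shown on a toy problem. The sketch below is an assumption-laden illustration, not the paper's algorithm: a scalar limit state, a Normal random variable whose mean is the design variable, and likelihood-ratio weights that reuse the baseline samples to evaluate the failure probability function at other designs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Toy setting (invented): failure when x > 4, with x ~ Normal(d, 1) and d the design variable.
d0 = 1.0                                  # baseline design: a single Monte Carlo reliability run
x_samples = rng.normal(loc=d0, scale=1.0, size=200_000)
fail = x_samples > 4.0                    # failure indicator from that one analysis

def failure_probability(d):
    """Approximate Pf(d) as a weighted sum of the baseline samples.

    Weights are likelihood ratios f(x | d) / f(x | d0), so no new samples are
    needed when the design changes (the 'failure probability function' idea).
    Accuracy degrades as d moves far from the baseline design d0.
    """
    w = stats.norm.pdf(x_samples, loc=d, scale=1.0) / stats.norm.pdf(x_samples, loc=d0, scale=1.0)
    return np.mean(w * fail)

for d in (0.5, 1.0, 1.5, 2.0):
    exact = 1.0 - stats.norm.cdf(4.0, loc=d, scale=1.0)
    print(f"d = {d:.1f}: FPF estimate = {failure_probability(d):.2e}, exact = {exact:.2e}")
```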

  16. Scleroderma prevalence: demographic variations in a population-based sample.

    Science.gov (United States)

    Bernatsky, S; Joseph, L; Pineau, C A; Belisle, P; Hudson, M; Clarke, A E

    2009-03-15

    To estimate the prevalence of systemic sclerosis (SSc) using population-based administrative data, and to assess the sensitivity of case ascertainment approaches. We ascertained SSc cases from Quebec physician billing and hospitalization databases (covering approximately 7.5 million individuals). Three case definition algorithms were compared, and statistical methods accounting for imperfect case ascertainment were used to estimate SSc prevalence and case ascertainment sensitivity. A hierarchical Bayesian latent class regression model that accounted for possible between-test dependence conditional on disease status estimated the effect of patient characteristics on SSc prevalence and the sensitivity of the 3 ascertainment algorithms. Accounting for error inherent in both the billing and the hospitalization data, we estimated SSc prevalence in 2003 at 74.4 cases per 100,000 women (95% credible interval [95% CrI] 69.3-79.7) and 13.3 cases per 100,000 men (95% CrI 11.1-16.1). Prevalence was higher for older individuals, particularly in urban women (161.2 cases per 100,000, 95% CrI 148.6-175.0). Prevalence was lowest in young men (in rural areas, as low as 2.8 cases per 100,000, 95% CrI 1.4-4.8). In general, no single algorithm was very sensitive, with point estimates for sensitivity ranging from 20-73%. We found marked differences in SSc prevalence according to age, sex, and region. In general, no single case ascertainment approach was very sensitive for SSc. Therefore, using data from multiple sources, with adjustment for the imperfect nature of each, is an important strategy in population-based studies of SSc and similar conditions.

  17. The Risk-Stratified Osteoporosis Strategy Evaluation study (ROSE)

    DEFF Research Database (Denmark)

    Rubin, Katrine Hass; Holmberg, Teresa; Rothmann, Mette Juel

    2015-01-01

    The risk-stratified osteoporosis strategy evaluation study (ROSE) is a randomized prospective population-based study investigating the effectiveness of a two-step screening program for osteoporosis in women. This paper reports the study design and baseline characteristics of the study population. … 35,000 women aged 65-80 years were selected at random from the population in the Region of Southern Denmark and, before inclusion, randomized to either a screening group or a control group. As a first step, a self-administered questionnaire regarding risk factors for osteoporosis based on FRAX(®) was issued to both groups. As a second step, subjects in the screening group with a 10-year probability of major osteoporotic fractures ≥15 % were offered a DXA scan. Patients diagnosed with osteoporosis from the DXA scan were advised to see their GP and discuss pharmaceutical treatment according to Danish…

  18. Experimental study of unsteady thermally stratified flow

    International Nuclear Information System (INIS)

    Lee, Sang Jun; Chung, Myung Kyoon

    1985-01-01

    Unsteady thermally stratified flow caused by two-dimensional surface discharge of warm water into an oblong channel was investigated. The experimental study focused on the rapidly developing thermal diffusion at small Richardson number. The basic objectives were to study the interfacial mixing between a flowing layer of warm water and an underlying body of cold water and to accumulate experimental data to test computational turbulence models. Mean velocity field measurements were carried out using NMR-CT (Nuclear Magnetic Resonance-Computerized Tomography), which captures a quantitative flow image of any desired section, in any flow direction, in a short time. Results show that at small Richardson number the warm layer rapidly penetrates into the cold layer because of strong turbulent mixing and instability between the two layers. It is found that the transfer of heat across the interface is more vigorous than that of momentum. It is also proved that the NMR-CT technique is a very valuable tool for measuring unsteady three-dimensional flow fields. (Author)

  19. Economic viability of Stratified Medicine concepts : An investor perspective on drivers and conditions that favour using Stratified Medicine approaches in a cost-contained healthcare environment

    NARCIS (Netherlands)

    Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten

    2016-01-01

    RATIONALE: Stratified Medicine (SM) is becoming a natural result of advances in biomedical science and a promising path for the innovation-based biopharmaceutical industry to create new investment opportunities. While the use of biomarkers to improve R&D efficiency and productivity is very much

  20. Analysis of Turbulent Combustion in Simplified Stratified Charge Conditions

    Science.gov (United States)

    Moriyoshi, Yasuo; Morikawa, Hideaki; Komatsu, Eiji

    The stratified charge combustion system has been widely studied due to the significant potentials for low fuel consumption rate and low exhaust gas emissions. The fuel-air mixture formation process in a direct-injection stratified charge engine is influenced by various parameters, such as atomization, evaporation, and in-cylinder gas motion at high temperature and high pressure conditions. It is difficult to observe the in-cylinder phenomena in such conditions and also challenging to analyze the following stratified charge combustion. Therefore, the combustion phenomena in simplified stratified charge conditions aiming to analyze the fundamental stratified charge combustion are examined. That is, an experimental apparatus which can control the mixture distribution and the gas motion at ignition timing was developed, and the effects of turbulence intensity, mixture concentration distribution, and mixture composition on stratified charge combustion were examined. As a result, the effects of fuel, charge stratification, and turbulence on combustion characteristics were clarified.

  1. Prevalence and nature of anaemia in a prospective, population-based sample of people with diabetes: Teesside anaemia in diabetes (TAD) study.

    Science.gov (United States)

    Jones, S C; Smith, D; Nag, S; Bilous, M T; Winship, S; Wood, A; Bilous, R W

    2010-06-01

    Anaemia occurs in 25% of people attending hospital diabetes clinics, but this may not be representative of all people with diabetes. We aimed to determine the prevalence of anaemia in a prospective population-based sample stratified by estimated glomerular filtration rate (eGFR) using the 4-point Modification of Diet in Renal Disease (MDRD) formula. All 7331 patients on our district register were stratified by eGFR. Seven hundred and thirty were approached by letter on two occasions. Two hundred and thirty-four (32%) returned questionnaires and blood samples. Responders (R), non-responders (NR) and the whole cohort (C) were similar: mean +/- sd age R 61.7 +/- 12.7 years; NR 61.3 +/- 15.1 years; C 61.8 +/- 14.2 years; diabetes duration R 8.8 +/- 8.6 years; NR 8.2 +/- 7.9 years; C 7.5 +/- 7.8 years; Type 1 diabetes R 10.1%, NR 10.8%, C 9.4%. Anaemia was defined using World Health Organization criteria: haemoglobin … 60 ml/min per 1.73 m². Anaemia was the result of erythropoietin deficiency in 34%, of abnormal haematinics in 40%, and was unexplained in 26% of patients. Five per cent of the patients had anaemia below the treatment threshold of 11 g/dl. The prevalence of unrecognized anaemia in population-based cohorts is lower than that in hospital-based studies. Current clinical surveillance in the UK is failing to detect anaemia in stage 3-5 chronic kidney disease (eGFR < 60 ml/min per 1.73 m²).

  2. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for different numbers of GPs included in the dataset and for different frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Because of the form of the CI formulas, precision continued to improve with additional GPs, but the gain per additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
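
    The trade-off between the number of GPs and the number of measurements per GP can be explored with a small simulation. The sketch below is not the study's analysis: the number of waking hours, the between-GP spread of weekly working hours and the probe model are invented assumptions, and the CI half-width is obtained by brute-force simulation rather than the authors' equations.

```python
import numpy as np

rng = np.random.default_rng(6)

WAKING_HOURS = 7 * 16           # assume probes can fall in 16 waking hours per day

def ci_halfwidth(n_gps, probes_per_week, true_mean=45.0, between_sd=8.0, reps=2000):
    """Simulate the 95% CI half-width of mean weekly working hours.

    Each GP's week is summarised by `probes_per_week` yes/no probes ("working
    right now?"), so the estimate carries both between-GP variation and
    within-GP measurement (sampling) fluctuation.
    """
    means = np.empty(reps)
    for r in range(reps):
        true_hours = rng.normal(true_mean, between_sd, size=n_gps).clip(5, WAKING_HOURS)
        p_working = true_hours / WAKING_HOURS
        yes = rng.binomial(probes_per_week, p_working)            # probes answered "working"
        means[r] = (yes / probes_per_week * WAKING_HOURS).mean()  # mean of per-GP estimates
    return 1.96 * means.std(ddof=1)

for n in (10, 50, 100, 300):
    for probes in (WAKING_HOURS // 3, WAKING_HOURS):              # every 3 h vs every hour
        print(f"n_GPs={n:4d}, probes/week={probes:3d}: "
              f"95% CI half-width ~ {ci_halfwidth(n, probes):.2f} h")
```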

  3. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.
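
    A schematic comparison of random versus Markov-chain ordering of training samples for an online linear SVM is sketched below. The Markov sampler is a toy stand-in (it merely favours label changes between consecutive samples) and is not the uniformly ergodic Markov chain sampling scheme of the paper; the dataset, the acceptance probabilities and the hyperparameters are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(7)

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:4000], y[:4000], X[4000:], y[4000:]

def markov_indices(n_steps):
    """Toy Markov-chain sampler over the training set: a candidate index is
    proposed at random and accepted with higher probability when its label
    differs from the current one (a crude stand-in for Markov sampling)."""
    idx = [int(rng.integers(len(y_train)))]
    while len(idx) < n_steps:
        cand = int(rng.integers(len(y_train)))
        accept = 0.9 if y_train[cand] != y_train[idx[-1]] else 0.4
        if rng.uniform() < accept:
            idx.append(cand)
    return np.array(idx)

def train_online(order):
    clf = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0)   # linear online SVM
    for i in order:
        clf.partial_fit(X_train[i:i + 1], y_train[i:i + 1], classes=np.array([0, 1]))
    return clf.score(X_test, y_test)

random_order = rng.integers(len(y_train), size=4000)
print("random sampling accuracy :", train_online(random_order))
print("Markov sampling accuracy :", train_online(markov_indices(4000)))
```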

  4. Whole arm manipulation planning based on feedback velocity fields and sampling-based techniques.

    Science.gov (United States)

    Talaei, B; Abdollahi, F; Talebi, H A; Omidi Karkani, E

    2013-09-01

    Changing the configuration of a cooperative whole arm manipulator is not easy while enclosing an object. This difficulty is mainly because of risk of jamming caused by kinematic constraints. To reduce this risk, this paper proposes a feedback manipulation planning algorithm that takes grasp kinematics into account. The idea is based on a vector field that imposes perturbation in object motion inducing directions when the movement is considerably along manipulator redundant directions. Obstacle avoidance problem is then considered by combining the algorithm with sampling-based techniques. As experimental results confirm, the proposed algorithm is effective in avoiding jamming as well as obstacles for a 6-DOF dual arm whole arm manipulator. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Reflection and transmission of electromagnetic waves in planarly stratified media

    International Nuclear Information System (INIS)

    Caviglia, G.

    1999-01-01

    Propagation of time-harmonic electromagnetic waves in planarly stratified multilayers is investigated. Each layer is allowed to be inhomogeneous and the layers are separated by interfaces. The procedure is based on the representation of the electromagnetic field in the basis of the eigenvectors of the matrix characterizing the first-order system. Hence the local reflection and transmission matrices are defined and the corresponding differential equations, in the pertinent space variable are determined. The jump conditions at interfaces are also established. The present model incorporates dissipative materials and the procedure holds without any restrictions to material symmetries. Differential equations appeared in the literature are shown to hold in particular (one-dimensional) cases or to represent homogeneous layers only

  6. Microstructure of Turbulence in the Stably Stratified Boundary Layer

    Science.gov (United States)

    Sorbjan, Zbigniew; Balsley, Ben B.

    2008-11-01

    The microstructure of a stably stratified boundary layer, with a significant low-level nocturnal jet, is investigated based on observations from the CASES-99 campaign in Kansas, U.S.A. The reported, high-resolution vertical profiles of the temperature, wind speed, wind direction, pressure, and the turbulent dissipation rate, were collected under nocturnal conditions on October 14, 1999, using the CIRES Tethered Lifting System. Two methods for evaluating instantaneous (1-sec) background profiles are applied to the raw data. The background potential temperature is calculated using the “bubble sort” algorithm to produce a monotonically increasing potential temperature with increasing height. Other scalar quantities are smoothed using a running vertical average. The behaviour of background flow, buoyant overturns, turbulent fluctuations, and their respective histograms are presented. Ratios of the considered length scales and the Ozmidov scale are nearly constant with height, a fact that can be applied in practice for estimating instantaneous profiles of the dissipation rate.

  7. Hydrodynamics of stratified epithelium: Steady state and linearized dynamics

    Science.gov (United States)

    Yeh, Wei-Ting; Chen, Hsuan-Yi

    2016-05-01

    A theoretical model for stratified epithelium is presented. The viscoelastic properties of the tissue are assumed to be dependent on the spatial distribution of proliferative and differentiated cells. Based on this assumption, a hydrodynamic description of tissue dynamics at the long-wavelength, long-time limit is developed, and the analysis reveals important insights into the dynamics of an epithelium close to its steady state. When the proliferative cells occupy a thin region close to the basal membrane, the relaxation rate towards the steady state is enhanced by cell division and cell apoptosis. On the other hand, when the region where proliferative cells reside becomes sufficiently thick, a flow induced by cell apoptosis close to the apical surface enhances small perturbations. This destabilizing mechanism is general for continuous self-renewal multilayered tissues; it could be related to the origin of certain tissue morphology, tumor growth, and the development pattern.

  8. Advanced stratified charge rotary aircraft engine design study

    Science.gov (United States)

    Badgley, P.; Berkowitz, M.; Jones, C.; Myers, D.; Norwood, E.; Pratt, W. B.; Ellis, D. R.; Huggins, G.; Mueller, A.; Hembrey, J. H.

    1982-01-01

    A technology base of new developments offering potential benefits to a general aviation engine was compiled and ranked. Using design approaches selected from the ranked list, conceptual design studies were performed of an advanced and a highly advanced engine sized to provide 186/250 shaft kW/HP under cruise conditions at 7620/25,000 m/ft altitude. These are turbocharged, direct-injected stratified charge engines intended for commercial introduction in the early 1990's. The engine descriptive data include tables, curves, and drawings depicting configuration, performance, weights and sizes, heat rejection, ignition and fuel injection system descriptions, maintenance requirements, and scaling data for varying power. An engine-airframe integration study of the resulting engines in advanced airframes was performed on a comparative basis with current production-type engines. The results show airplane performance, costs, noise, and installation factors. The rotary-engined airplanes display substantial improvements over the baseline, including 30 to 35% lower fuel usage.

  9. Simulation model of stratified thermal energy storage tank using finite difference method

    Science.gov (United States)

    Waluyo, Joko

    2016-06-01

    A stratified TES tank is normally used in cogeneration plants. Stratified TES tanks are simple, low cost, and equal or superior in thermal performance. The advantage of a TES tank is that it enables shifting of energy usage from off-peak to on-peak demand. To increase the energy utilization of a stratified TES tank, it is necessary to build a simulation model capable of simulating the charging phenomenon in the tank precisely. This paper aims to develop a novel model addressing this problem. The model incorporates the chiller into the charging of the stratified TES tank in a closed system. The model is one-dimensional and includes heat transfer; it covers the main factors that degrade the temperature distribution, namely conduction through the tank wall, conduction between cool and warm water, mixing effects during the initial charging flow, and heat loss to the surroundings. The simulation model is developed with a finite difference method utilizing the buffer concept and is solved explicitly. Validation of the simulation model is carried out using observed data obtained from an operating stratified TES tank in a cogeneration plant. The temperature distribution of the model is capable of representing the S-curve pattern as well as simulating the decreased charging temperature after reaching the full condition. The coefficients of determination between the observed data and the model were higher than 0.88, meaning that the model is capable of simulating the charging phenomenon in the stratified TES tank. The model is not only capable of generating the temperature distribution but can also be extended to represent transient conditions during the charging of the stratified TES tank. This model can be applied to address the temperature limitation that occurs in charging of the stratified TES tank with an absorption chiller. Further, the stratified TES tank can be
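
    As an illustration of the finite-difference approach, the sketch below integrates a bare-bones one-dimensional charging model explicitly: cold water enters at the bottom of the tank as a plug flow and conduction smears the thermocline. Wall conduction, inlet mixing, heat loss to the surroundings and the chiller loop of the full model are deliberately omitted, and every parameter value is a placeholder rather than plant data.

        import numpy as np

        # Illustrative explicit finite-difference model of charging a chilled-water
        # stratified TES tank: cold water enters at the bottom and pushes the
        # thermocline upward.
        n, H = 200, 10.0                    # grid nodes, tank height (m)
        dz = H / n
        alpha = 1.4e-7                      # thermal diffusivity of water (m^2/s)
        u = 2.0e-4                          # upward plug-flow velocity during charging (m/s)
        dt = min(0.25 * dz**2 / alpha, 0.5 * dz / u)   # explicit stability limits

        T = np.full(n, 12.0)                # tank initially warm (degC)
        T_in = 6.0                          # chiller supply temperature (degC)
        t_end = 8 * 3600.0                  # simulate 8 hours of charging

        t = 0.0
        while t < t_end:
            Tn = T.copy()
            # first-order upwind advection + central-difference conduction
            T[1:-1] = (Tn[1:-1]
                       - u * dt / dz * (Tn[1:-1] - Tn[:-2])
                       + alpha * dt / dz**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]))
            T[0] = T_in                     # cold inlet at the bottom
            T[-1] = T[-2]                   # zero-gradient outlet at the top
            t += dt

        # T now holds the S-shaped temperature distribution along the tank height.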

  10. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    Science.gov (United States)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian; hide

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions, as needed by crop models, are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region of Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models, but all models reproduced the pattern of the stratification well. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields, but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed, including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations, but differences between crop models must be considered, as the choice of a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management
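
    A toy version of the sampling comparison can be written directly: a covariate standing in for a weather-derived stratification variable defines quantile strata, and the regional mean of a synthetic yield field is estimated from stratified and simple random samples of different sizes. The yield field, covariate and sample sizes below are invented stand-ins, not the study's data or crop models.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the 34,078 weather points: a covariate defines the
        # strata and the simulated yield depends on it plus noise.
        n_points = 34078
        covariate = rng.normal(9.0, 1.5, n_points)             # e.g. mean temperature
        yields = 8.0 - 0.4 * (covariate - 9.0) + rng.normal(0.0, 0.5, n_points)
        true_mean = yields.mean()

        def random_sample_mean(n):
            idx = rng.choice(n_points, n, replace=False)
            return yields[idx].mean()

        def stratified_sample_mean(n, n_strata=5):
            edges = np.quantile(covariate, np.linspace(0.0, 1.0, n_strata + 1))
            strata = np.digitize(covariate, edges[1:-1])        # stratum index 0..4
            per_stratum = max(n // n_strata, 1)
            means, weights = [], []
            for s in range(n_strata):
                members = np.flatnonzero(strata == s)
                idx = rng.choice(members, min(per_stratum, members.size), replace=False)
                means.append(yields[idx].mean())
                weights.append(members.size / n_points)
            return float(np.dot(weights, means))

        for n in (10, 30, 100):
            print(n, abs(random_sample_mean(n) - true_mean),
                  abs(stratified_sample_mean(n) - true_mean))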

  11. Using the Internet to Support Exercise and Diet: A Stratified Norwegian Survey.

    Science.gov (United States)

    Wangberg, Silje C; Sørensen, Tove; Andreassen, Hege K

    2015-08-26

    The Internet is used for a variety of health-related purposes. Use differs and has differential effects on health according to socioeconomic status. We investigated to what extent the Norwegian population uses the Internet to support exercise and diet, what kind of services they use, and whether there are social disparities in use. We expected to find differences according to educational attainment. In November 2013 we surveyed a stratified sample of 2196 persons drawn from a Web panel of about 50,000 Norwegians over 15 years of age. The questionnaire included questions about using the Internet, including social network sites (SNS), or mobile apps in relation to exercise or diet, as well as background information about education, body image, and health. The survey email was opened by 1187 respondents (54%). Of these, 89 did not click on the survey hyperlink (declined to participate), while another 70 did not complete the survey. The final sample size is thus 1028 (87% response rate). Compared to the Norwegian census, the sample had a slight under-representation of respondents under the age of 30 and with low education. The data were weighted accordingly before analyses. Sixty-nine percent of women and 53% of men had read about exercise or diet on the Internet (χ(2)= 25.6, Psocial disparities in health, and continue to monitor population use. For Internet- and mobile-based interventions to support health behaviors, this study provides information relevant to the tailoring of delivery media and components to users.

  12. Multiscale sample entropy and cross-sample entropy based on symbolic representation and similarity of stock markets

    Science.gov (United States)

    Wu, Yue; Shang, Pengjian; Li, Yilong

    2018-03-01

    A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to study the complexity of stock markets. The modified algorithm reduces the probability of producing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series contaminated with heavy noise, such as financial time series. We apply MSEBSS to financial markets, and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets with relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets with high asynchrony the entropy values do not always decrease with increasing scale factor and sometimes tend to increase. Thus both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful when used together for studying other features of financial time series.
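
    The symbolic-representation step that distinguishes MSEBSS is not reproduced here; the sketch below shows only the coarse-graining plus sample entropy computation that the modified measure builds on, applied to a synthetic return series. The parameters (m, the tolerance factor, the scale range) are common defaults, not the authors' settings.

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.15):
            # Standard sample entropy SampEn(m, r) of a 1-D series.
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()

            def count_matches(length):
                templ = np.array([x[i:i + length] for i in range(len(x) - length)])
                d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
                # exclude self-matches on the diagonal, count each pair once
                return (np.sum(d <= r) - len(templ)) / 2

            B, A = count_matches(m), count_matches(m + 1)
            return np.inf if A == 0 or B == 0 else -np.log(A / B)

        def multiscale_sample_entropy(x, scales=range(1, 6), m=2):
            # Coarse-grain the series at each scale factor, then compute SampEn.
            out = []
            for tau in scales:
                n = len(x) // tau
                coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
                out.append(sample_entropy(coarse, m))
            return out

        # Stand-in for daily log-returns of a stock index
        returns = np.random.default_rng(1).normal(0.0, 0.01, 1200)
        print(multiscale_sample_entropy(returns))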

  13. Analysis of area-wide management of insect pests based on sampling

    Science.gov (United States)

    David W. Onstad; Mark S. Sisterson

    2011-01-01

    The control of invasive species greatly depends on area-wide pest management (AWPM) in heterogeneous landscapes. Decisions about when and where to treat a population with pesticide are based on sampling pest abundance. One of the challenges of AWPM is sampling large areas with limited funds to cover the cost of sampling. Additionally, AWPM programs are often confronted...

  14. The Recent Developments in Sample Preparation for Mass Spectrometry-Based Metabolomics.

    Science.gov (United States)

    Gong, Zhi-Gang; Hu, Jing; Wu, Xi; Xu, Yong-Jiang

    2017-07-04

    Metabolomics is a key discipline within systems biology. Although great progress has been achieved in metabolomics, problems remain in sample preparation, data processing and data interpretation. In this review, we explore the roles, challenges and trends in sample preparation for mass spectrometry- (MS-) based metabolomics. Newly emerged sample preparation methods are also critically examined, including laser microdissection, in vivo sampling, dried blood spots, microwave-, ultrasound- and enzyme-assisted extraction, as well as microextraction techniques. Finally, we provide some conclusions and perspectives for sample preparation in MS-based metabolomics.

  15. Aligning the Economic Value of Companion Diagnostics and Stratified Medicines

    Directory of Open Access Journals (Sweden)

    Edward D. Blair

    2012-11-01

    Full Text Available The twin forces of payors seeking fair pricing and the rising costs of developing new medicines have driven a closer relationship between pharmaceutical companies and diagnostics companies, because stratified medicines, guided by companion diagnostics, offer better commercial, as well as clinical, outcomes. Stratified medicines have created clinical success and provided rapid product approvals, particularly in oncology, and have indeed changed the dynamic between drug and diagnostic developers. The commercial payback offered by such stratified-medicine partnerships has been less well articulated, but this has shifted as the benefits in risk management, pricing and value creation for all stakeholders become clearer. In this larger healthcare setting, stratified medicine provides both physicians and patients with greater insight into the disease and provides a rationale for providers to understand the cost-effectiveness of treatment. This article considers how the economic value of stratified medicine relationships can be recognized and translated into better outcomes for all healthcare stakeholders.

  16. Advances in paper-based sample pretreatment for point-of-care testing.

    Science.gov (United States)

    Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng

    2017-06-01

    In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, a high-cost, time-consuming and equipment-dependent sample pretreatment technique is generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We then highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.

  17. Visualization periodic flows in a continuously stratified fluid.

    Science.gov (United States)

    Bardakov, R.; Vasiliev, A.

    2012-04-01

    To visualize the flow pattern of a viscous continuously stratified fluid, both experimental and computational methods were developed. Computational procedures were based on exact solutions of the set of fundamental equations. Solutions of problems of flows produced by a periodically oscillating disk (linear and torsion oscillations) were visualized at high resolution to distinguish the small-scale singular components against the background of strong internal waves. The numerical visualization algorithm can represent both scalar and vector fields, such as velocity, density, pressure, vorticity and stream function. The effects of source size, buoyancy and oscillation frequency, and kinematic viscosity of the medium were traced in 2D and 3D problem settings. A precision schlieren instrument was used to visualize the flow pattern produced by linear and torsion oscillations of a strip and a disk in a continuously stratified fluid. Uniform stratification was created by the continuous displacement method. The buoyancy period ranged from 7.5 to 14 s. In the experiments, disks with diameters from 9 to 30 cm and thicknesses from 1 mm to 10 mm were used. Different schlieren methods, namely the conventional vertical slit - Foucault knife, vertical slit - filament (Maksoutov's method) and horizontal slit - horizontal grating (natural "rainbow" schlieren method), produce complementary flow patterns. Both internal wave beams and fine flow components were visualized in the vicinity of and far from the source. The intensity of high-gradient envelopes increased proportionally to the amplitude of the source. In domains where envelopes converge, isolated small-scale vortices and extended mushroom-like jets were formed. Experiments have shown that in the case of torsion oscillations the pattern of currents is more complicated than in the case of forced linear oscillations. Comparison with known theoretical models shows that nonlinear interactions between the regular and singular flow components must be taken

  18. Sampling in interview-based qualitative research: A theoretical and practical guide

    OpenAIRE

    Robinson, Oliver

    2014-01-01

    Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) Defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) Deciding upon a sample size, through th...

  19. RF Sub-sampling Receiver Architecture based on Milieu Adapting Techniques

    DEFF Research Database (Denmark)

    Behjou, Nastaran; Larsen, Torben; Jensen, Ole Kiel

    2012-01-01

    A novel sub-sampling based architecture is proposed which has the ability of reducing the problem of image distortion and improving the signal to noise ratio significantly. The technique is based on sensing the environment and adapting the sampling rate of the receiver to the best possible...

  20. How serious a problem is subsoil compaction in the Netherlands? A survey based on probability sampling

    Science.gov (United States)

    Brus, Dick J.; van den Akker, Jan J. H.

    2018-02-01

    Although soil compaction is widely recognized as a threat to soil resources, reliable estimates of the acreage of overcompacted soil and of the level of soil compaction parameters are not available. In the Netherlands, data on subsoil compaction were collected at 128 locations selected by stratified random sampling. A map showing the risk of subsoil compaction in five classes was used for stratification. Measurements of bulk density, porosity, clay content and organic matter content were used to compute the relative bulk density and relative porosity, both expressed as a fraction of a threshold value. A subsoil was classified as overcompacted if either the relative bulk density exceeded 1 or the relative porosity was below 1. The sample data were used to estimate the means of the two subsoil compaction parameters and the overcompacted areal fraction. The estimated global means of relative bulk density and relative porosity were 0.946 and 1.090, respectively. The estimated areal fraction of the Netherlands with overcompacted subsoils was 43 %. The estimates per risk map unit showed two groups of map units: a low-risk group (units 1 and 2, covering only 4.6 % of the total area) and a high-risk group (units 3, 4 and 5). The estimated areal fraction of overcompacted subsoil was 0 % in the low-risk group and 47 % in the high-risk group. The map therefore contains no information about where overcompacted subsoils occur; this is caused by the poor association of risk map units 3, 4 and 5 with the subsoil compaction parameters and subsoil overcompaction, which can be explained by the lack of time for recuperation.
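
    The design-based estimates reported above come from combining per-stratum sample statistics with stratum area weights. A minimal sketch of those estimators, with placeholder numbers rather than the survey data:

        import numpy as np

        # Placeholder per-stratum summaries for a stratified random sample:
        # area weight W_h, sample mean of relative bulk density, and the fraction
        # of sampled locations classified as overcompacted.
        W        = np.array([0.020, 0.026, 0.300, 0.350, 0.304])   # area fractions, sum to 1
        mean_rbd = np.array([0.90, 0.91, 0.94, 0.95, 0.96])        # per-stratum means
        p_over   = np.array([0.00, 0.00, 0.40, 0.47, 0.52])        # per-stratum overcompacted fraction

        global_mean_rbd = float(np.dot(W, mean_rbd))   # estimated national mean
        global_p_over = float(np.dot(W, p_over))       # estimated overcompacted areal fraction
        print(global_mean_rbd, global_p_over)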

  1. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    Science.gov (United States)

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
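
    A stripped-down version of the simulation study can convey its main output, the barn-level probability of detection as a function of sample size: pens are assigned a true status from the pen-level prevalence, a subset of pens is sampled, and imperfect test sensitivity and specificity are applied. The piecewise exponential survival model and the fixed spatial allocation of the paper are not reproduced, and all parameter values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        def barn_detection_probability(n_pens=48, n_sampled=6, prevalence=0.1,
                                       sensitivity=0.95, specificity=0.99,
                                       n_rep=20000):
            # Monte Carlo estimate of P(at least one positive oral-fluid result).
            detected = 0
            for _ in range(n_rep):
                status = rng.random(n_pens) < prevalence            # true pen status
                sampled = rng.choice(n_pens, n_sampled, replace=False)
                p_pos = np.where(status[sampled], sensitivity, 1.0 - specificity)
                if (rng.random(n_sampled) < p_pos).any():
                    detected += 1
            return detected / n_rep

        for n in (2, 4, 6, 12):
            print(n, barn_detection_probability(n_sampled=n))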

  2. LVQ-SMOTE - Learning Vector Quantization based Synthetic Minority Over-sampling Technique for biomedical data.

    Science.gov (United States)

    Nakamura, Munehiro; Kajiwara, Yusuke; Otsuka, Atsushi; Kimura, Haruhiko

    2013-10-02

    Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems involving imbalanced biomedical data. However, existing over-sampling methods achieve only slightly better, or sometimes worse, results than the simplest SMOTE. To improve the effectiveness of SMOTE, this paper presents a novel over-sampling method using codebooks obtained by learning vector quantization. In general, even when an existing SMOTE variant is applied to a biomedical dataset, its empty feature space is still so large that most classification algorithms do not perform well in estimating borderlines between classes. To tackle this problem, our over-sampling method generates synthetic samples that occupy more feature space than the other SMOTE algorithms. Briefly, our over-sampling method generates useful synthetic samples by referring to actual samples taken from real-world datasets. Experiments on eight real-world imbalanced datasets demonstrate that the proposed over-sampling method performs better than the simplest SMOTE on four of five standard classification algorithms. Moreover, the performance of our method increases if the latest SMOTE variant, MWMOTE, is used in our algorithm. Experiments on datasets for β-turn type prediction show some important patterns that have not been seen in previous analyses. The proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data and is basically compatible with standard classification algorithms and existing over-sampling methods.
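
    The LVQ codebook construction itself is not shown here; the sketch below illustrates only the underlying SMOTE-style step that such methods build on, generating synthetic minority samples by interpolating between a minority point and one of its k nearest minority neighbours. The data and parameters are invented.

        import numpy as np

        def smote_like_oversample(X_min, n_new, k=5, rng=None):
            # Basic SMOTE idea: new sample = x_i + gap * (x_neighbour - x_i)
            if rng is None:
                rng = np.random.default_rng()
            X_min = np.asarray(X_min, dtype=float)
            d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
            np.fill_diagonal(d, np.inf)                   # exclude self-matches
            neighbours = np.argsort(d, axis=1)[:, :k]     # k nearest minority points
            synthetic = []
            for _ in range(n_new):
                i = rng.integers(len(X_min))
                j = neighbours[i, rng.integers(k)]
                gap = rng.random()
                synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
            return np.vstack(synthetic)

        rng = np.random.default_rng(3)
        X_minority = rng.normal(size=(30, 4))             # 30 minority samples, 4 features
        X_synthetic = smote_like_oversample(X_minority, n_new=90, rng=rng)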

  3. The stratified H-index makes scientific impact transparent

    DEFF Research Database (Denmark)

    Würtz, Morten; Schmidt, Morten

    2017-01-01

    The H-index is widely used to quantify and standardize researchers' scientific impact. However, the H-index does not account for the fact that co-authors rarely contribute equally to a paper. Accordingly, we propose the use of a stratified H-index to measure scientific impact. The stratified H-index supplements the conventional H-index with three separate H-indices: one for first authorships, one for second authorships and one for last authorships. The stratified H-index takes scientific output, quality and individual author contribution into account.
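
    As described, the measure is straightforward to compute once citation counts are split by authorship position. A small sketch with an invented publication list:

        def h_index(citations):
            # Largest h such that at least h papers have >= h citations.
            cites = sorted(citations, reverse=True)
            return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

        def stratified_h_index(papers, author):
            # papers: list of (author_list, citation_count) tuples.
            overall = [c for authors, c in papers if author in authors]
            first   = [c for authors, c in papers if authors[0] == author]
            second  = [c for authors, c in papers if len(authors) > 1 and authors[1] == author]
            last    = [c for authors, c in papers if len(authors) > 1 and authors[-1] == author]
            return {"overall": h_index(overall), "first": h_index(first),
                    "second": h_index(second), "last": h_index(last)}

        # Invented publication list for illustration
        papers = [(["A", "B", "C"], 25), (["B", "A"], 12), (["C", "B", "A"], 40),
                  (["A", "C"], 3), (["D", "A", "B"], 8)]
        print(stratified_h_index(papers, "A"))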

  4. Exploring the salivary microbiome of children stratified by the oral hygiene index

    Science.gov (United States)

    Mashima, Izumi; Theodorea, Citra F.; Thaweboon, Boonyanit; Thaweboon, Sroisiri; Scannapieco, Frank A.

    2017-01-01

    Poor oral hygiene often leads to chronic diseases such as periodontitis and dental caries resulting in substantial economic costs and diminished quality of life in not only adults but also in children. In this study, the salivary microbiome was characterized in a group of children stratified by the Simplified Oral Hygiene Index (OHI-S). Illumina MiSeq high-throughput sequencing based on the 16S rRNA was utilized to analyze 90 salivary samples (24 Good, 31 Moderate and 35 Poor oral hygiene) from a cohort of Thai children. A total of 38,521 OTUs (Operational Taxonomic Units) with a 97% similarity were characterized in all of the salivary samples. Twenty taxonomic groups (Seventeen genera, two families and one class; Streptococcus, Veillonella, Gemellaceae, Prevotella, Rothia, Porphyromonas, Granulicatella, Actinomyces, TM-7-3, Leptotrichia, Haemophilus, Selenomonas, Neisseria, Megasphaera, Capnocytophaga, Oribacterium, Abiotrophia, Lachnospiraceae, Peptostreptococcus, and Atopobium) were found in all subjects and constituted 94.5–96.5% of the microbiome. Of these twenty genera, the proportion of Streptococcus decreased while Veillonella increased with poor oral hygiene status (P oral hygiene group. This is the first study demonstrating an important association between increase of Veillonella and poor oral hygiene status in children. However, further studies are required to identify the majority of Veillonella at species level in salivary microbiome of the Poor oral hygiene group. PMID:28934367

  5. Characterisation of the suspended particulate matter in a stratified estuarine environment employing complementary techniques

    Science.gov (United States)

    Thomas, Luis P.; Marino, Beatriz M.; Szupiany, Ricardo N.; Gallo, Marcos N.

    2017-09-01

    The ability to predict the sediment and nutrient circulation within estuarine waters is of significant economic and ecological importance. In these complex systems, flocculation is a dynamically active process that is directly affected by the prevalent environmental conditions. Consequently, the floc properties continuously change, which greatly complicates the characterisation of the suspended particle matter (SPM). In the present study, three different techniques are combined in a stratified estuary under quiet weather conditions and with a low river discharge to search for a solution to this problem. The challenge is to obtain the concentration, size and flux of suspended elements through selected cross-sections using the method based on the simultaneous backscatter records of 1200 and 600 kHz ADCPs, isokinetic sampling data and LISST-25X measurements. The two-ADCP method is highly effective for determining the SPM size distributions in a non-intrusive way. The isokinetic sampling and the LISST-25X diffractometer offer point measurements at specific depths, which are especially useful for calibrating the ADCP backscatter intensity as a function of the SPM concentration and size, and providing complementary information on the sites where acoustic records are not available. Limitations and potentials of the techniques applied are discussed.

  6. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    Science.gov (United States)

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, capturing adequately the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
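
    A one-dimensional toy of the core idea is shown below: sample positions are drawn so that their density follows an estimate of local curvature (plus a floor that keeps some coverage everywhere), which concentrates points around sharp features such as a wake deficit. The iterative refinement and the experimental-uncertainty weighting of the full method are omitted, and the test profile is invented.

        import numpy as np

        def curvature_proportional_samples(f, a, b, n_samples, n_grid=2000):
            # Place sample locations with density proportional to |f''| plus a floor.
            x = np.linspace(a, b, n_grid)
            y = f(x)
            curvature = np.abs(np.gradient(np.gradient(y, x), x))
            density = curvature + 0.05 * curvature.max()    # keep coverage everywhere
            cdf = np.cumsum(density)
            cdf /= cdf[-1]
            u = (np.arange(n_samples) + 0.5) / n_samples    # evenly spaced quantiles
            return np.interp(u, cdf, x)                     # inverse-CDF placement

        # Example: a wake-like profile with a sharp deficit around x = 0
        def profile(x):
            return 1.0 - 0.8 * np.exp(-(x / 0.05) ** 2)

        xs = curvature_proportional_samples(profile, -0.5, 0.5, n_samples=40)
        # xs clusters near the velocity deficit and is sparse in the freestream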

  7. Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity

    Directory of Open Access Journals (Sweden)

    Zhuoshi Li

    2013-01-01

    Full Text Available This paper addresses the observer-based control problem of spacecraft rendezvous with nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equation, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform an observer-based controller synthesis by using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on time and the upper bound of the sampling period and also does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than using the traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.

  8. Degradation of organic dyes using spray deposited nanocrystalline stratified WO3/TiO2 photoelectrodes under sunlight illumination

    Science.gov (United States)

    Hunge, Y. M.; Yadav, A. A.; Mahadik, M. A.; Bulakhe, R. N.; Shim, J. J.; Mathe, V. L.; Bhosale, C. H.

    2018-02-01

    There is a need to utilize TiO2-based metal oxide heteronanostructures for the degradation of environmental pollutants such as Rhodamine B and reactive red 152 in wastewater, using a stratified WO3/TiO2 catalyst under sunlight illumination. WO3, TiO2 and stratified WO3/TiO2 catalysts were prepared by a spray pyrolysis method. It was found that the stratified WO3/TiO2 heterostructure has high crystallinity, forms no mixed phase, shows strong optical absorption in the visible region of the solar spectrum, and has a large surface area. The photocatalytic activity was tested for the degradation of Rhodamine B (Rh B) and reactive red 152 in an aqueous medium. The TiO2 layer in the stratified WO3/TiO2 catalyst helps to extend its absorption spectrum into the visible region of solar light. Rh B and reactive red 152 are eliminated up to 98% and 94% within 30 and 40 min, respectively, under optimum experimental conditions by stratified WO3/TiO2. Moreover, the stratified WO3/TiO2 photoelectrode has better stability and reusability than individual TiO2 and WO3 thin films in the degradation of Rh B and reactive red 152. The photoelectrocatalytic experimental results indicate that the stratified WO3/TiO2 photoelectrode is a promising material for dye removal.

  9. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They may range from hyperconcentrated floods carrying sediment and causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. In stratified flows a variable horizontal density is also present. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may differ strongly from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proven quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work will focus on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux

  10. Use of Language Sample Analysis by School-Based SLPs: Results of a Nationwide Survey

    Science.gov (United States)

    Pavelko, Stacey L.; Owens, Robert E., Jr.; Ireland, Marie; Hahs-Vaughn, Debbie L.

    2016-01-01

    Purpose: This article examines use of language sample analysis (LSA) by school-based speech-language pathologists (SLPs), including characteristics of language samples, methods of transcription and analysis, barriers to LSA use, and factors affecting LSA use, such as American Speech-Language-Hearing Association certification, number of years'…

  11. Adaptive list sequential sampling method for population-based observational studies

    NARCIS (Netherlands)

    Hof, Michel H.; Ravelli, Anita C. J.; Zwinderman, Aeilko H.

    2014-01-01

    In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can be different from the desired

  12. Eating Disorders among a Community-Based Sample of Chilean Female Adolescents

    Science.gov (United States)

    Granillo, M. Teresa; Grogan-Kaylor, Andrew; Delva, Jorge; Castillo, Marcela

    2011-01-01

    The purpose of this study was to explore the prevalence and correlates of eating disorders among a community-based sample of female Chilean adolescents. Data were collected through structured interviews with 420 female adolescents residing in Santiago, Chile. Approximately 4% of the sample reported ever being diagnosed with an eating disorder.…

  13. Robotic, MEMS-based Multi Utility Sample Preparation Instrument for ISS Biological Workstation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project will develop a multi-functional, automated sample preparation instrument for biological wet-lab workstations on the ISS. The instrument is based on a...

  14. Sports and energy drink consumption among a population-based sample of young adults

    Science.gov (United States)

    Larson, Nicole; Laska, Melissa N.; Story, Mary; Neumark-Sztainer, Dianne

    2017-01-01

    Objective National data for the U.S. show increases in sports and energy drink consumption over the past decade with the largest increases among young adults ages 20–34. This study aimed to identify sociodemographic factors and health risk behaviors associated with sports and energy drink consumption among young adults. Design Cross-sectional analysis of survey data from the third wave of a cohort study (Project EAT-III: Eating and Activity in Teens and Young Adults). Regression models stratified on gender and adjusted for sociodemographic characteristics were used to examine associations of sports and energy drink consumption with eating behaviors, physical activity, media use, weight-control behaviors, sleep patterns, and substance use. Setting Participants completed baseline surveys in 1998–1999 as students at public secondary schools in Minneapolis/St. Paul, Minnesota and the EAT-III surveys online or by mail in 2008–2009. Subjects The sample consisted of 2,287 participants (55% female, mean age=25.3). Results Results showed 31.0% of young adults consumed sports drinks and 18.8% consumed energy drinks at least weekly. Among men and women, sports drink consumption was associated with higher sugar-sweetened soda and fruit juice intake, video game use, and use of muscle-enhancing substances like creatine (pEnergy drink consumption was associated with lower breakfast frequency and higher sugar-sweetened soda intake, video game use, use of unhealthy weight-control behaviors, trouble sleeping, and substance use among men and women (penergy drink consumption with other unhealthy behaviors in the design of programs and services for young adults. PMID:25683863

  15. Magnitude of 14C/12C variations based on archaeological samples

    International Nuclear Information System (INIS)

    Kusumgar, S.; Agrawal, D.P.

    1977-01-01

    The magnitude of 14C/12C variations in the period A.D. 500 to 200 B.C. and 370 B.C. to 2900 B.C. is discussed. The 14C dates of well-dated archaeological samples from India and Egypt do not show any significant divergence from the historical ages. On the other hand, the corrections based on dendrochronological samples show marked deviations for the same time period. A plea is, therefore, made to study old tree samples from Anatolia and Irish bogs and archaeological samples from west Asia to arrive at a more realistic calibration curve. (author)

  16. A Method for Microalgae Proteomics Analysis Based on Modified Filter-Aided Sample Preparation.

    Science.gov (United States)

    Li, Song; Cao, Xupeng; Wang, Yan; Zhu, Zhen; Zhang, Haowei; Xue, Song; Tian, Jing

    2017-11-01

    With the fast development of microalgal biofuel research, proteomics studies of microalgae have increased quickly. Filter-aided sample preparation (FASP) has been a widely used proteomics sample preparation method since 2009. Here, a method for microalgae proteomics analysis based on modified filter-aided sample preparation (mFASP) is described to suit the characteristics of microalgae cells and eliminate the error caused by over-alkylation. Using Chlamydomonas reinhardtii as the model, the prepared sample was tested by standard LC-MS/MS and compared with previous reports. The results showed that mFASP is suitable for most occasions of microalgae proteomics studies.

  17. ON SAMPLING BASED METHODS FOR THE DUBINS TRAVELING SALESMAN PROBLEM WITH NEIGHBORHOODS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2015-12-01

    Full Text Available In this paper, we address the problem of path planning to visit a set of regions with a Dubins vehicle, also known as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We propose a modification of the existing sampling-based approach that determines an increasing number of samples per goal region and thus improves the solution quality when more computational time is available. The proposed modification of the sampling-based algorithm has been compared with the performance of existing approaches for the DTSPN, and results on the quality of the found solutions and the required computational time are presented in the paper.

  18. Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding

    Directory of Open Access Journals (Sweden)

    Ping Yao

    2014-01-01

    Full Text Available Based on sample entropy, this paper presents a quantitative method to evaluate the current stability in double-wire pulsed MIG welding. Firstly, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and the standard deviation of the sample entropy. Secondly, four parameters (pulse width, peak current, base current, and frequency) are selected for a four-level three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. A quantitative method based on sample entropy is then proposed. The experimental results show that the method can preferably quantify the welding current stability.

  19. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
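
    The ratio-reweighting variant lends itself to a very short sketch: the gene-flow model output plays the role of an auxiliary variable x known at every location, grains are sampled at random, and the field-level rate is estimated as the sample ratio of y to x rescaled by the field mean of x. The field, the auxiliary variable and the noise model below are synthetic stand-ins, not the authors' gene-flow model.

        import numpy as np

        rng = np.random.default_rng(11)

        # Synthetic field of sampling locations: x is the cross-pollination rate
        # predicted by a gene-flow model, y the (unknown) true adventitious-presence
        # rate, correlated with x.
        n_loc = 5000
        x = rng.gamma(shape=0.5, scale=0.004, size=n_loc)       # model-predicted rate
        y = np.clip(x * rng.lognormal(0.0, 0.3, n_loc), 0, 1)   # true rate, noisy around x

        n = 100
        idx = rng.choice(n_loc, n, replace=False)

        simple_estimate = y[idx].mean()
        ratio_estimate = y[idx].mean() / x[idx].mean() * x.mean()   # ratio reweighting

        print(y.mean(), simple_estimate, ratio_estimate)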

  20. Economic evaluation in stratified medicine: methodological issues and challenges

    Directory of Open Access Journals (Sweden)

    Hans-Joerg eFugel

    2016-05-01

    Full Text Available Background: Stratified Medicine (SM) is becoming a practical reality with the targeting of medicines by using a biomarker- or genetic-based diagnostic to identify the eligible patient sub-population. Like any healthcare intervention, SM interventions have costs and consequences that must be considered by reimbursement authorities with limited resources. Methodological standards and guidelines exist for economic evaluations in clinical pharmacology and are an important component of health technology assessments (HTAs) in many countries. However, these guidelines were initially developed for traditional pharmaceuticals and not for complex interventions with multiple components. This raises the issue of whether these guidelines are adequate for SM interventions or whether new specific guidance and methodology are needed to avoid inconsistencies and contradictory findings when assessing economic value in SM. Objective: This article describes specific methodological challenges when conducting health economic (HE) evaluations for SM interventions and outlines potential modifications to existing evaluation guidelines/principles that would promote consistent economic evaluations for SM. Results/Conclusions: Specific methodological aspects for SM comprise considerations on the choice of comparator, measuring effectiveness and outcomes, appropriate modelling structure and the scope of sensitivity analyses. Although current HE methodology can be applied to SM, the greater complexity requires further methodology development and modifications of the guidelines.

  1. Epistemic Information in Stratified M-Spaces

    Directory of Open Access Journals (Sweden)

    Mark Burgin

    2011-12-01

    Full Text Available Information is usually related to knowledge. However, the recent development of information theory demonstrated that information is a much broader concept, being actually present in and virtually related to everything. As a result, many unknown types and kinds of information have been discovered. Nevertheless, information that acts on knowledge, bringing new and updating existing knowledge, is of primary importance to people. It is called epistemic information, which is studied in this paper based on the general theory of information and further developing its mathematical stratum. As a synthetic approach, which reveals the essence of information, organizing and encompassing all main directions in information theory, the general theory of information provides efficient means for such a study. Different representations of information dynamics use tools of mathematical disciplines such as the theory of categories, functional analysis, mathematical logic and algebra. Here we employ algebraic structures for the exploration of information and knowledge dynamics. In the Introduction (Section 1), we discuss previous studies of epistemic information. Section 2 gives a compressed description of the parametric phenomenological definition of information in the general theory of information. In Section 3, anthropic information, which is received, exchanged, processed and used by people, is singled out and studied based on the Componential Triune Brain model. One of the basic forms of anthropic information, called epistemic information, which is related to knowledge, is analyzed in Section 4. Mathematical models of epistemic information are studied in Section 5. In the Conclusion, some open problems related to epistemic information are given.

  2. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on the adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of importance sampling density by support vector density. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples in comparison to the conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analysis required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.
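
    To illustrate the importance-sampling step alone, the sketch below replaces the support vector density with a Gaussian kernel density fitted to points found near an invented failure region, and replaces the adaptive Metropolis stage with a simple filtering of crude Monte Carlo points. The limit-state function and all settings are made up for the example.

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        rng = np.random.default_rng(5)

        # Invented limit state: failure when g(x) <= 0, x ~ standard bivariate normal,
        # so the exact failure probability is Phi(-3.5).
        def g(x):
            return 3.5 - x.sum(axis=-1) / np.sqrt(2.0)

        # Stand-in for the adaptive Markov chain stage: keep crude samples that land
        # near or inside the failure region, then fit a sampling density to them.
        seeds = rng.standard_normal((200000, 2))
        near_failure = seeds[g(seeds) <= 0.5]
        q = gaussian_kde(near_failure.T)

        # Importance sampling: weight the failure indicator by f(x) / q(x).
        m = 5000
        xs = q.resample(m).T
        f_pdf = norm.pdf(xs).prod(axis=1)           # joint standard-normal density
        weights = f_pdf / q(xs.T)
        pf_is = np.mean((g(xs) <= 0.0) * weights)
        print(pf_is, norm.cdf(-3.5))                # estimate vs exact value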

  3. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm yielded a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is thus provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
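
    A compressed sketch of the resampling idea behind such an algorithm is given below: for each candidate validation-sample size, samples are repeatedly simulated from an assumed logistic scoring model and the spread of the AUC, together with the expected number of events, is inspected. The smooth-curve calibration and the estimated calibration index of the published algorithm are not reproduced, and the model coefficients are invented.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_validation(n, beta0=-2.2, beta1=1.0):
            # One simulated validation sample from an assumed logistic scoring model.
            score = rng.normal(0.0, 1.0, n)                      # linear predictor
            p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * score)))   # predicted risk
            y = rng.random(n) < p                                # observed outcome
            return p, y

        def auc(p, y):
            # Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
            ranks = np.empty(len(p))
            ranks[np.argsort(p)] = np.arange(1, len(p) + 1)
            n1 = y.sum()
            n0 = len(y) - n1
            return (ranks[y].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

        def spread_at_n(n, n_rep=500):
            aucs, events = [], []
            for _ in range(n_rep):
                p, y = simulate_validation(n)
                aucs.append(auc(p, y))
                events.append(y.sum())
            return np.std(aucs), np.mean(events)

        for n in (300, 600, 1200):
            sd_auc, mean_events = spread_at_n(n)
            print(n, round(float(sd_auc), 3), round(float(mean_events), 1))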

  4. Final LDRD report : development of sample preparation methods for ChIPMA-based imaging mass spectrometry of tissue samples.

    Energy Technology Data Exchange (ETDEWEB)

    Maharrey, Sean P.; Highley, Aaron M.; Behrens, Richard, Jr.; Wiese-Smith, Deneille

    2007-12-01

    The objective of this short-term LDRD project was to acquire the tools needed to use our chemical imaging precision mass analyzer (ChIPMA) instrument to analyze tissue samples. This effort was an outgrowth of discussions with oncologists on the need to find the cellular origin of signals in mass spectra of serum samples, which provide biomarkers for ovarian cancer. The ultimate goal would be to collect chemical images of biopsy samples, allowing the chemical images of diseased and non-diseased sections of a sample to be compared. The equipment needed to prepare tissue samples has been acquired and built. This equipment includes a cryo-ultramicrotome for preparing thin sections of samples and a coating unit. The coating unit uses an electrospray system to deposit small droplets of a UV-photoabsorbing compound on the surface of the tissue samples. Both units are operational. The tissue sample must be coated with the organic compound to enable matrix-assisted laser desorption/ionization (MALDI) and matrix-enhanced secondary ion mass spectrometry (ME-SIMS) measurements with the ChIPMA instrument. Initial plans to test the sample preparation using human tissue samples required development of administrative procedures beyond the scope of this LDRD. Hence, it was decided to make two types of measurements: (1) testing the spatial resolution of ME-SIMS by preparing a substrate coated with a mixture of an organic matrix and a bio-standard and etching a defined pattern in the coating using a liquid metal ion beam, and (2) preparing and imaging C. elegans worms. Difficulties arose in sectioning the C. elegans for analysis, and funds and time to overcome these difficulties were not available in this project. The facilities are now available for preparing biological samples for analysis with the ChIPMA instrument. Some further investment of time and resources in sample preparation should make this a useful tool for chemical imaging applications.

  5. The Knowledge Economy, Gender and Stratified Migrations

    Directory of Open Access Journals (Sweden)

    Eleonore Kofman

    2007-06-01

    Full Text Available The promotion of knowledge economies and societies, equated with the mobile subject as bearer of technological, managerial and cosmopolitan competences, on the one hand, and insecurities about social order and national identities, on the other, have in the past few years led to increasing polarisation between skilled migrants and those deemed to lack useful skills. The former are considered to be bearers of human capital and have the capacity to assimilate seamlessly and are therefore worthy of citizenship; the latter are likely to pose problems of assimilation and dependency due to their economic and cultural ‘otherness’ and offered a transient status and partial citizenship by receiving states. In the European context this trend has been reinforced by the redrawing of European geopolitical space creating new boundaries of exclusion and social justice. The emphasis on the knowledge economy also generates gender inequalities and stratifications based on skills and types of knowledge with implications for citizenship and social justice.

  7. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice and on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  8. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice and on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and mention this method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  9. 40 CFR 761.298 - Decisions based on PCB concentration measurements resulting from sampling.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Decisions based on PCB concentration... Cleanup and On-Site Disposal of Bulk PCB Remediation Waste and Porous Surfaces in Accordance With § 761.61(a)(6) § 761.298 Decisions based on PCB concentration measurements resulting from sampling. (a) For...

  10. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    Signal reconstruction methods based on inverse modeling for the signal reconstruction of multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering that distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function is a generalization of the distance metric, mapping data that are non-separable in the original space into homogeneous groups in a high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
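
    The kernel-induced distance mentioned above can be written from the kernel alone, since the squared distance between the feature-space images of x and y is K(x, x) - 2K(x, y) + K(y, y). A short sketch with an RBF kernel follows; the subclustering step itself is not shown, and the kernel choice and data are illustrative.

        import numpy as np

        def rbf_kernel(x, y, gamma=0.5):
            return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

        def kernel_induced_distance(x, y, kernel=rbf_kernel):
            # d(x, y)^2 = K(x, x) - 2 K(x, y) + K(y, y) in the kernel feature space.
            return np.sqrt(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y))

        a, b = [0.2, 1.0, 0.4], [0.3, 0.7, 0.5]
        print(kernel_induced_distance(a, b))         # distance in feature space
        print(np.linalg.norm(np.subtract(a, b)))     # conventional Euclidean distance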

  11. Asymptotic Effectiveness of the Event-Based Sampling According to the Integral Criterion

    Directory of Open Access Journals (Sweden)

    Marek Miskowicz

    2007-01-01

    Full Text Available Rapid progress in intelligent sensing technology creates new interest in the analysis and design of non-conventional sampling schemes. The investigation of event-based sampling according to the integral criterion is presented in this paper. The investigated sampling scheme is an extension of the pure linear send-on-delta/level-crossing algorithm utilized for reporting the state of objects monitored by intelligent sensors. The motivation for using event-based integral sampling is outlined. The related works in adaptive sampling are summarized. The analytical closed-form formulas for the evaluation of the mean rate of event-based traffic, and the asymptotic integral sampling effectiveness, are derived. The simulation results verifying the analytical formulas are reported. The effectiveness of the integral sampling is compared with the related linear send-on-delta/level-crossing scheme. The calculation of the asymptotic effectiveness for common signals, which model the state evolution of dynamic systems in time, is exemplified.
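
    A minimal sketch may make the two event-triggering rules concrete: plain send-on-delta reports a sample when the signal moves more than a threshold away from the last reported value, while the integral criterion reports when the accumulated absolute deviation since the last report exceeds a threshold. The thresholds and test signal below are arbitrary illustrations, not the paper's settings.

        import math

        def send_on_delta(signal, dt, delta):
            """Report whenever |x(t) - last reported value| exceeds delta."""
            events, last = [], signal[0]
            for k, x in enumerate(signal):
                if abs(x - last) > delta:
                    last = x
                    events.append(k * dt)
            return events

        def integral_criterion(signal, dt, threshold):
            """Report whenever the time integral of |x(t) - last reported value| exceeds a threshold."""
            events, last, acc = [], signal[0], 0.0
            for k, x in enumerate(signal):
                acc += abs(x - last) * dt
                if acc > threshold:
                    last, acc = x, 0.0
                    events.append(k * dt)
            return events

        dt = 0.001
        sig = [math.sin(2 * math.pi * 1.0 * k * dt) for k in range(2000)]  # 2 s of a 1 Hz sine
        print(len(send_on_delta(sig, dt, 0.1)), len(integral_criterion(sig, dt, 0.005)))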

  12. Gas mass transfer for stratified flows

    International Nuclear Information System (INIS)

    Duffey, R.B.; Hughes, E.D.

    1995-01-01

    We analyzed gas absorption and release in water bodies using existing surface renewal theory. We show a new relation between turbulent momentum and mass transfer from gas to water, including the effects of waves and wave roughness, by evaluating the equilibrium integral turbulent dissipation due to energy transfer to the water from the wind. Using Kolmogoroff turbulence arguments the gas transfer velocity, or mass transfer coefficient, is then naturally and straightforwardly obtained as a non-linear function of the wind speed drag coefficient and the square root of the molecular diffusion coefficient. In dimensionless form, the theory predicts the turbulent Sherwood number to be Sh_t = (2/√π) Sc^(1/2), where Sh_t is based on an integral dissipation length scale in the air. The theory confirms the observed nonlinear variation of the mass transfer coefficient as a function of the wind speed; gives the correct transition with turbulence-centered models for smooth surfaces at low speeds; and predicts experimental data from both laboratory and environmental measurements within the data scatter. The differences between the available laboratory and field data measurements are due to the large differences in the drag coefficient between wind tunnels and oceans. The results also imply that the effect of direct aeration due to bubble entrainment at wave breaking is no more than a 20% increase in the mass transfer for the highest speeds. The theory has importance to mass transfer in both the geo-physical and chemical engineering literature.

  13. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
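
    A toy end-to-end example of this sampling-based workflow (steps (i)-(iii) plus one of the listed sensitivity techniques, rank transformation followed by correlation analysis) is sketched below. The input distributions and the model are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(1)

        # (i)-(ii) characterize the epistemic input uncertainty and draw a sample
        n = 500
        x1 = rng.uniform(0.5, 1.5, n)      # illustrative uncertain input 1
        x2 = rng.normal(10.0, 2.0, n)      # illustrative uncertain input 2

        # (iii) propagate each sampled input set through the analysis (toy model)
        y = x1 ** 2 + 0.1 * x2 + rng.normal(0.0, 0.05, n)

        def rank_correlation(a, b):
            """Spearman rank correlation: Pearson correlation of the rank-transformed data."""
            ra, rb = a.argsort().argsort(), b.argsort().argsort()
            return np.corrcoef(ra, rb)[0, 1]

        # (v) sensitivity results via rank transformation + correlation analysis
        for name, x in [("x1", x1), ("x2", x2)]:
            print(name, round(rank_correlation(x, y), 3))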

  14. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  15. Immunophenotype Discovery, Hierarchical Organization, and Template-based Classification of Flow Cytometry Samples

    Directory of Open Access Journals (Sweden)

    Ariful Azad

    2016-08-01

    Full Text Available We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (groups of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.

  16. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover the missing links in the snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate the performance of the prediction, known edges in the sampled snapshot are divided into the training set and the probe set randomly, without considering the underlying sampling approaches. However, different sampling methods might lead to different missing links, especially for the biased ways. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we try to re-evaluate the performance of local information-based link predictions through sampling method governed division of the training set and the probe set. It is interesting that we find that for different sampling methods, each prediction approach performs unevenly. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods might have been overestimated in the prior works.

  17. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    KAUST Repository

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).

  18. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in the stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of the RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS), which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of the RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) make it possible to avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our

  19. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an important sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
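
    The importance-sampling estimate at the core of this procedure can be sketched independently of the Kriging surrogate: the failure probability is estimated as the mean of the failure indicator weighted by the ratio of the nominal density f to the importance density h. In the toy example below an analytic limit state stands in for the Kriging metamodel and the importance density is simply re-centred near the limit state; these are illustrative assumptions, not the paper's kernel-density construction.

        import numpy as np

        rng = np.random.default_rng(42)

        def limit_state(x):
            """g(x) <= 0 defines failure; an analytic stand-in for the Kriging surrogate."""
            return 7.0 - x[:, 0] - x[:, 1]

        # Nominal density f: two independent standard normal inputs.
        # Importance density h: the same normals re-centred near the limit state.
        center = np.array([3.5, 3.5])
        n = 20000
        samples = rng.normal(loc=center, scale=1.0, size=(n, 2))

        log_f = -0.5 * np.sum(samples ** 2, axis=1)
        log_h = -0.5 * np.sum((samples - center) ** 2, axis=1)
        weights = np.exp(log_f - log_h)        # f(x) / h(x); shared constants cancel

        p_f = np.mean((limit_state(samples) <= 0) * weights)
        print(p_f)   # close to the exact value P(X1 + X2 >= 7), about 3.7e-7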

  20. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  1. Stochastic bounded consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with general sampling delay

    International Nuclear Information System (INIS)

    Wu Zhi-Hai; Peng Li; Xie Lin-Bo; Wen Ji-Wei

    2013-01-01

    In this paper we provide a unified framework for consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with a general sampling delay. First, a stochastic bounded consensus tracking protocol based on sampled data with a general sampling delay is presented by employing the delay decomposition technique. Then, necessary and sufficient conditions are derived for guaranteeing leader-follower multi-agent systems with measurement noises and a time-varying reference state to achieve mean square bounded consensus tracking. The obtained results cover no sampling delay, a small sampling delay and a large sampling delay as three special cases. Last, simulations are provided to demonstrate the effectiveness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  2. 350 keV accelerator based PGNAA setup to detect nitrogen in bulk samples

    Energy Technology Data Exchange (ETDEWEB)

    Naqvi, A.A., E-mail: aanaqvi@kfupm.edu.sa [Department of Physics and King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia); Al-Matouq, Faris A.; Khiari, F.Z.; Gondal, M.A.; Rehman, Khateeb-ur [Department of Physics and King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia); Isab, A.A. [Department of Chemistry, King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia); Raashid, M.; Dastageer, M.A. [Department of Physics and King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia)

    2013-11-21

    Nitrogen concentration was measured in explosive and narcotics proxy materials, e.g. anthranilic acid, caffeine, melamine, and urea bulk samples, through a thermal neutron capture reaction using a 350 keV accelerator based prompt gamma ray neutron activation (PGNAA) setup. The intensity of the 2.52, 3.53–3.68, 4.51, 5.27–5.30 and 10.38 MeV prompt gamma rays of nitrogen from the bulk samples was measured using a cylindrical 100 mm×100 mm (diameter×height) BGO detector. In spite of the interference of nitrogen gamma rays from the bulk samples with capture prompt gamma rays from the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays has been obtained. This is an indication of the excellent performance of the PGNAA setup for the detection of nitrogen in bulk samples.

  3. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties for the translational and the rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, with a view to quantifying the constellations and measuring the distances between at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.

  4. 350 keV accelerator based PGNAA setup to detect nitrogen in bulk samples

    International Nuclear Information System (INIS)

    Naqvi, A.A.; Al-Matouq, Faris A.; Khiari, F.Z.; Gondal, M.A.; Rehman, Khateeb-ur; Isab, A.A.; Raashid, M.; Dastageer, M.A.

    2013-01-01

    Nitrogen concentration was measured in explosive and narcotics proxy materials, e.g. anthranilic acid, caffeine, melamine, and urea bulk samples, through a thermal neutron capture reaction using a 350 keV accelerator based prompt gamma ray neutron activation (PGNAA) setup. The intensity of the 2.52, 3.53–3.68, 4.51, 5.27–5.30 and 10.38 MeV prompt gamma rays of nitrogen from the bulk samples was measured using a cylindrical 100 mm×100 mm (diameter×height) BGO detector. In spite of the interference of nitrogen gamma rays from the bulk samples with capture prompt gamma rays from the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays has been obtained. This is an indication of the excellent performance of the PGNAA setup for the detection of nitrogen in bulk samples.

  5. 350 keV accelerator based PGNAA setup to detect nitrogen in bulk samples

    Science.gov (United States)

    Naqvi, A. A.; Al-Matouq, Faris A.; Khiari, F. Z.; Gondal, M. A.; Rehman, Khateeb-ur; Isab, A. A.; Raashid, M.; Dastageer, M. A.

    2013-11-01

    Nitrogen concentration was measured in explosive and narcotics proxy materials, e.g. anthranilic acid, caffeine, melamine, and urea bulk samples, through a thermal neutron capture reaction using a 350 keV accelerator based prompt gamma ray neutron activation (PGNAA) setup. The intensity of the 2.52, 3.53-3.68, 4.51, 5.27-5.30 and 10.38 MeV prompt gamma rays of nitrogen from the bulk samples was measured using a cylindrical 100 mm×100 mm (diameter×height) BGO detector. In spite of the interference of nitrogen gamma rays from the bulk samples with capture prompt gamma rays from the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays has been obtained. This is an indication of the excellent performance of the PGNAA setup for the detection of nitrogen in bulk samples.

  6. Influence of Freezing and Storage Procedure on Human Urine Samples in NMR-Based Metabolomics

    OpenAIRE

    Rist, Manuela; Muhle-Goll, Claudia; Görling, Benjamin; Bub, Achim; Heissler, Stefan; Watzl, Bernhard; Luy, Burkhard

    2013-01-01

    It is consensus in the metabolomics community that standardized protocols should be followed for sample handling, storage and analysis, as it is of utmost importance to maintain constant measurement conditions to identify subtle biological differences. The aim of this work, therefore, was to systematically investigate the influence of freezing procedures and storage temperatures and their effect on NMR spectra as a potentially disturbing aspect for NMR-based metabolomics studies. Urine sample...

  7. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  8. Sample preparation techniques based on combustion reactions in closed vessels - A brief overview and recent applications

    International Nuclear Information System (INIS)

    Flores, Erico M.M.; Barin, Juliano S.; Mesko, Marcia F.; Knapp, Guenter

    2007-01-01

    In this review, a general discussion of sample preparation techniques based on combustion reactions in closed vessels is presented. Applications for several kinds of samples are described, taking into account the literature data reported in the last 25 years. The operational conditions as well as the main characteristics and drawbacks are discussed for bomb combustion, oxygen flask and microwave-induced combustion (MIC) techniques. Recent applications of MIC techniques are discussed with special concern for samples not well digested by conventional microwave-assisted wet digestion as, for example, coal and also for subsequent determination of halogens

  9. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

    Full Text Available An acceptance sampling plan problem based on truncated life tests, where the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels and values of the ratio between the fixed experiment time and the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer's risk are presented. Some tables are provided and the results are illustrated by an example of a real data set.
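
    The minimum-sample-size search in such truncated-life-test plans follows a standard binomial argument: if p is the probability that a single item fails before the truncation time when the mean life equals the specified value, the lot is accepted when at most c of the n tested items fail, and the plan takes the smallest n for which the acceptance probability does not exceed 1 - P*. The sketch below implements that generic search; the exponential lifetime in the demo is an illustrative stand-in, not the Sushila distribution of the paper.

        import math

        def acceptance_probability(n, c, p):
            """P(at most c failures among n items) under a binomial model."""
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1))

        def minimum_sample_size(p, c, confidence, n_max=1000):
            """Smallest n whose acceptance probability is at most 1 - confidence."""
            for n in range(c + 1, n_max + 1):
                if acceptance_probability(n, c, p) <= 1.0 - confidence:
                    return n
            raise ValueError("no n <= n_max satisfies the requirement")

        # Demo with an exponential lifetime (stand-in distribution): probability of
        # failing before the truncation time t when the mean life equals mu0.
        t_over_mu = 0.75
        p = 1.0 - math.exp(-t_over_mu)
        print(minimum_sample_size(p, c=2, confidence=0.95))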

  10. The effect of existing turbulence on stratified shear instability

    Science.gov (United States)

    Kaminski, Alexis; Smyth, William

    2017-11-01

    Ocean turbulence is an essential process governing, for example, heat uptake by the ocean. In the stably-stratified ocean interior, this turbulence occurs in discrete events driven by vertical variations of the horizontal velocity. Typically, these events have been modelled by assuming an initially laminar stratified shear flow which develops wavelike instabilities, becomes fully turbulent, and then relaminarizes into a stable state. However, in the real ocean there is always some level of turbulence left over from previous events, and it is not yet understood how this turbulence impacts the evolution of future mixing events. Here, we perform a series of direct numerical simulations of turbulent events developing in stratified shear flows that are already at least weakly turbulent. We do so by varying the amplitude of the initial perturbations, and examine the subsequent development of the instability and the impact on the resulting turbulent fluxes. This work is supported by NSF Grant OCE1537173.

  11. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g. a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty space skipping techniques in scenarios that need to render large dynamic volumes in a low resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  12. Rapid filtration separation-based sample preparation method for Bacillus spores in powdery and environmental matrices.

    Science.gov (United States)

    Isabel, Sandra; Boissinot, Maurice; Charlebois, Isabelle; Fauvel, Chantal M; Shi, Lu-E; Lévesque, Julie-Christine; Paquin, Amélie T; Bastien, Martine; Stewart, Gale; Leblanc, Eric; Sato, Sachiko; Bergeron, Michel G

    2012-03-01

    Authorities frequently need to analyze suspicious powders and other samples for biothreat agents in order to assess environmental safety. Numerous nucleic acid detection technologies have been developed to detect and identify biowarfare agents in a timely fashion. The extraction of microbial nucleic acids from a wide variety of powdery and environmental samples to obtain a quality level adequate for these technologies still remains a technical challenge. We aimed to develop a rapid and versatile method of separating bacteria from these samples and then extracting their microbial DNA. Bacillus atrophaeus subsp. globigii was used as a simulant of Bacillus anthracis. We studied the effects of a broad variety of powdery and environmental samples on PCR detection and the steps required to alleviate their interference. With a benchmark DNA extraction procedure, 17 of the 23 samples investigated interfered with bacterial lysis and/or PCR-based detection. Therefore, we developed the dual-filter method for applied recovery of microbial particles from environmental and powdery samples (DARE). The DARE procedure allows the separation of bacteria from contaminating matrices that interfere with PCR detection. This procedure required only 2 min, while the DNA extraction process lasted 7 min. The sample preparation procedure allowed the recovery of cleaned bacterial spores and relieved detection interference caused by a wide variety of samples. Our procedure was easily completed in a laboratory facility and is amenable to field application and automation.

  13. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice and on the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling (such as simple random sampling or stratified random sampling) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study. PMID:27688438

  14. Risk-Based Sampling: I Don't Want to Weight in Vain.

    Science.gov (United States)

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.

  15. Women’s experience with home-based self-sampling for human papillomavirus testing

    International Nuclear Information System (INIS)

    Sultana, Farhana; Mullins, Robyn; English, Dallas R.; Simpson, Julie A.; Drennan, Kelly T.; Heley, Stella; Wrede, C. David; Brotherton, Julia M. L.; Saville, Marion; Gertig, Dorota M.

    2015-01-01

    Increasing cervical screening coverage by reaching inadequately screened groups is essential for improving the effectiveness of cervical screening programs. Offering HPV self-sampling to women who are never or under-screened can improve screening participation, however participation varies widely between settings. Information on women’s experience with self-sampling and preferences for future self-sampling screening is essential for programs to optimize participation. The survey was conducted as part of a larger trial (“iPap”) investigating the effect of HPV self-sampling on participation of never and under-screened women in Victoria, Australia. Questionnaires were mailed to a) most women who participated in the self-sampling to document their experience with and preference for self-sampling in future, and b) a sample of the women who did not participate asking reasons for non-participation and suggestions for enabling participation. Reasons for not having a previous Pap test were also explored. About half the women who collected a self sample for the iPap trial returned the subsequent questionnaire (746/1521). Common reasons for not having cervical screening were that having Pap test performed by a doctor was embarrassing (18 %), not having the time (14 %), or that a Pap test was painful and uncomfortable (11 %). Most (94 %) found the home-based self-sampling less embarrassing, less uncomfortable (90 %) and more convenient (98 %) compared with their last Pap test experience (if they had one); however, many were unsure about the test accuracy (57 %). Women who self-sampled thought the instructions were clear (98 %), it was easy to use the swab (95 %), and were generally confident that they did the test correctly (81 %). Most preferred to take the self-sample at home in the future (88 %) because it was simple and did not require a doctor’s appointment. Few women (126/1946, 7 %) who did not return a self-sample in the iPap trial returned the questionnaire

  16. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  17. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    Full Text Available This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than the time-triggered one in some situations, especially in network bandwidth improvement. However, it cannot detect packet dropout situations because data transmission and reception do not use a periodical time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme called modified SOD in which sensor data are sent when either the change of sensor output exceeds a given threshold or the elapsed time exceeds a given interval. Through simulation results, we show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
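
    The modified SOD rule is easy to state in code: transmit when the output change exceeds the threshold, or when too much time has passed since the last transmission, so that a long silent period signals a packet dropout rather than an unchanged sensor. A hypothetical sketch (the threshold, interval, and signal are illustrative):

        def modified_sod(samples, dt, delta, max_interval):
            """Transmit when |x - last_sent| > delta or when max_interval has elapsed."""
            transmissions, last_value, last_time = [], samples[0], 0.0
            for k, x in enumerate(samples):
                t = k * dt
                if abs(x - last_value) > delta or (t - last_time) >= max_interval:
                    transmissions.append((t, x))
                    last_value, last_time = x, t
            return transmissions

        # A slowly varying signal: plain SOD would stay silent, while modified SOD
        # still sends a periodic "heartbeat" every max_interval seconds.
        sig = [0.01 * k for k in range(100)]
        print(len(modified_sod(sig, dt=0.1, delta=0.5, max_interval=1.0)))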

  18. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    Energy Technology Data Exchange (ETDEWEB)

    Baz-Lomba, J.A., E-mail: jba@niva.no [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway); Faculty of Medicine, University of Oslo, PO box 1078 Blindern, 0316, Oslo (Norway); Reid, Malcolm J.; Thomas, Kevin V. [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway)

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS^E. Quantification and matrix effect corrections were overcome with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L^−1). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (of the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS^E data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types: influent wastewater, passive sampler extracts and pooled urine samples, resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow
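
    The 5 ppm mass-accuracy criterion used for suspect identification amounts to a relative-error check between the measured m/z and the theoretical value of the suspect entry. A small, hypothetical sketch (the m/z values below are placeholders, not entries from the UNIFI database):

        def ppm_error(measured_mz, theoretical_mz):
            """Mass accuracy in parts per million."""
            return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

        def matches_suspect(measured_mz, theoretical_mz, tolerance_ppm=5.0):
            """True when the measured m/z lies within the ppm tolerance of the suspect entry."""
            return abs(ppm_error(measured_mz, theoretical_mz)) <= tolerance_ppm

        # Illustrative numbers only (not real database entries):
        print(matches_suspect(250.1302, 250.1310))   # about -3.2 ppm -> within tolerance
        print(matches_suspect(250.1330, 250.1310))   # about +8.0 ppm -> rejected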

  19. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    International Nuclear Information System (INIS)

    Baz-Lomba, J.A.; Reid, Malcolm J.; Thomas, Kevin V.

    2016-01-01

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS^E. Quantification and matrix effect corrections were overcome with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng L^−1). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (of the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS^E data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types: influent wastewater, passive sampler extracts and pooled urine samples, resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow based on three

  20. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Scheid Anika

    2012-07-01

    Full Text Available Abstract Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy change when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst

  1. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize their patterns through the dissimilarity matrix based on modified cross-sample entropy; then three-dimensional perceptual maps of the results are provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, that is, multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to have similar irregularity than others, and the difference in the stock index, which is caused by the country or region and the different financial policies, can reflect the irregularity in the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively, that is, Europe, North America, South America, Asian-Pacific (with the exception of mainland China), mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
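
    The final embedding step is ordinary multidimensional scaling applied to whatever dissimilarity matrix the entropy measure produces. A minimal sketch of classical (Torgerson) MDS is given below; the toy dissimilarity matrix stands in for the cross-sample-entropy matrix of the paper.

        import numpy as np

        def classical_mds(dissimilarity, dims=3):
            """Classical (Torgerson) MDS: embed points so pairwise distances
            approximate the entries of the dissimilarity matrix."""
            d2 = np.asarray(dissimilarity, dtype=float) ** 2
            n = d2.shape[0]
            j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
            b = -0.5 * j @ d2 @ j                        # double-centred Gram matrix
            eigvals, eigvecs = np.linalg.eigh(b)
            order = np.argsort(eigvals)[::-1][:dims]
            return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))

        # Toy dissimilarity matrix for four series (symmetric, zero diagonal, illustrative).
        d = np.array([[0.0, 0.2, 0.9, 1.0],
                      [0.2, 0.0, 0.8, 0.9],
                      [0.9, 0.8, 0.0, 0.3],
                      [1.0, 0.9, 0.3, 0.0]])
        print(classical_mds(d, dims=2).round(2))         # two clear clusters appear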

  2. Sample Data Synchronization and Harmonic Analysis Algorithm Based on Radial Basis Function Interpolation

    Directory of Open Access Journals (Sweden)

    Huaiqing Zhang

    2014-01-01

    Full Text Available Spectral leakage has a harmful effect on the accuracy of harmonic analysis for asynchronous sampling. This paper proposed a time quasi-synchronous sampling algorithm which is based on radial basis function (RBF) interpolation. Firstly, a fundamental period is evaluated by a zero-crossing technique with fourth-order Newton's interpolation, and then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters can be calculated by FFT on the synchronized sampling data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared to local approximation schemes such as linear, quadratic, and fourth-order Newton interpolations, the RBF is a global approximation method which can acquire more accurate results while the time consumed is about the same as with Newton's.
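
    A compact sketch of the idea, assuming the fundamental frequency has already been estimated (the paper obtains it from zero crossings with Newton interpolation): reconstruct the asynchronously sampled record with Gaussian RBF interpolation at time points spanning an integer number of fundamental periods, then apply the FFT. The test signal, shape parameter, and record length below are illustrative choices, not the paper's settings.

        import numpy as np

        def rbf_resample(t, x, t_new, epsilon=500.0):
            """Reconstruct x(t) at new time points using Gaussian radial basis functions."""
            phi = lambda r: np.exp(-(epsilon * r) ** 2)
            weights = np.linalg.solve(phi(np.abs(t[:, None] - t[None, :])), x)
            return phi(np.abs(t_new[:, None] - t[None, :])) @ weights

        # Asynchronously sampled distorted signal (values are illustrative).
        fs, f0, n = 1000.0, 50.3, 256            # sampling rate, fundamental, record length
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)

        # Resample so an integer number of fundamental periods fits the record, then FFT.
        periods = 10
        t_sync = np.linspace(0.0, periods / f0, n, endpoint=False)
        spectrum = np.abs(np.fft.rfft(rbf_resample(t, x, t_sync)))
        print(np.argmax(spectrum[1:]) + 1)        # the fundamental falls exactly in bin 10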

  3. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    Science.gov (United States)

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Generalized sample entropy analysis for traffic signals based on similarity measure

    Science.gov (United States)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, presents a different way of matching signal patterns and reveals distinct behaviors of complexity. Simulations are conducted on synthetic data and traffic signals to provide a comparative study that shows the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first one is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second one is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
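
    For reference, the classical sample entropy that this work generalizes can be sketched as follows (a common formulation with a Chebyshev-distance template match; the embedding dimension, tolerance, and test signals are illustrative, not the paper's generalized similarity measure).

        import numpy as np

        def sample_entropy(x, m=2, r=0.2):
            """Classical sample entropy: -ln(A/B), where B counts template matches of
            length m within tolerance r and A counts matches of length m + 1."""
            x = np.asarray(x, dtype=float)
            r *= x.std()                          # tolerance as a fraction of the signal SD
            def count_matches(length):
                templates = np.array([x[i:i + length] for i in range(len(x) - length)])
                # Chebyshev distance between every pair of templates.
                d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
                return np.sum(d <= r) - len(templates)   # exclude self-matches on the diagonal
            b, a = count_matches(m), count_matches(m + 1)
            return -np.log(a / b)

        rng = np.random.default_rng(0)
        print(round(sample_entropy(np.sin(0.1 * np.arange(500))), 3))   # regular signal: low entropy
        print(round(sample_entropy(rng.standard_normal(500)), 3))       # noise: higher entropy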

  5. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a large reduction of computing time, by factors on the order of 100. (authors)

  6. Cassette-based in-situ TEM sample inspection in the dual-beam FIB

    International Nuclear Information System (INIS)

    Kendrick, A B; Moore, T M; Zaykova-Feldman, L; Amador, G; Hammer, M

    2008-01-01

    A novel method is presented, combining site-specific TEM sample preparation and in-situ STEM analysis in a dual-beam microscope (FIB/SEM) fitted with a chamber mounted nano-manipulator. TEM samples are prepared using a modified in-situ, lift-out method, whereby the samples are thinned and oriented for immediate in-situ STEM analysis using the tilt, translation, and rotation capabilities of a FIB/SEM sample stage, a nano-manipulator, and a novel cassette. This cassette can provide a second tilt axis, orthogonal to the stage tilt axis, so that the STEM image contrast can be optimized to reveal the structural features of the sample (true STEM imaging in the FIB/SEM). The angles necessary for stage rotation and probe shaft rotation are calculated based on the position of the nano-manipulator relative to the stage and door and the stage tilt angle. A FIB/SEM instrument, equipped with a high resolution scanning electron column, can provide sufficiently high image resolution to enable many failure analysis and process control applications to be successfully carried out without requiring the use of a separate dedicated TEM/STEM instrument. The benefits of this novel approach are increased throughput and reduced cost per sample. Comparative analysis of different sample preparation methods is provided, and the STEM images obtained are shown.

  7. Nitrogen Detection in Bulk Samples Using a D-D Reaction-Based Portable Neutron Generator

    Directory of Open Access Journals (Sweden)

    A. A. Naqvi

    2013-01-01

    Full Text Available Nitrogen concentration was measured via the 2.52 MeV nitrogen gamma ray from melamine, caffeine, urea, and disperse orange bulk samples using a newly designed D-D portable neutron generator-based prompt gamma ray setup. In spite of the low flux of thermal neutrons produced by the D-D reaction-based portable neutron generator and the interference of the 2.52 MeV gamma rays from nitrogen in the bulk samples with the 2.50 MeV gamma ray from bismuth in the BGO detector material, an excellent agreement between the experimental and calculated yields of nitrogen gamma rays indicates satisfactory performance of the setup for detection of nitrogen in bulk samples.

  8. A Study of Assimilation Bias in Name-Based Sampling of Migrants

    Directory of Open Access Journals (Sweden)

    Schnell Rainer

    2014-06-01

    Full Text Available The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants and whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables, even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.

  9. Fabry-Pérot cavity based on chirped sampled fiber Bragg gratings.

    Science.gov (United States)

    Zheng, Jilin; Wang, Rong; Pu, Tao; Lu, Lin; Fang, Tao; Li, Weichun; Xiong, Jintian; Chen, Yingfang; Zhu, Huatao; Chen, Dalei; Chen, Xiangfei

    2014-02-10

    A novel kind of Fabry-Pérot (FP) structure based on chirped sampled fiber Bragg grating (CSFBG) is proposed and demonstrated. In this structure, the regular chirped FBG (CFBG) that functions as reflecting mirror in the FP cavity is replaced by CSFBG, which is realized by chirping the sampling periods of a sampled FBG having uniform local grating period. The realization of such CSFBG-FPs having diverse properties just needs a single uniform pitch phase mask and sub-micrometer precision moving stage. Compared with the conventional CFBG-FP, it becomes more flexible to design CSFBG-FPs of diverse functions, and the fabrication process gets simpler. As a demonstration, based on the same experimental facilities, FPs with uniform FSR (~73 pm) and chirped FSR (varying from 28 pm to 405 pm) are fabricated respectively, which shows good agreement with simulation results.

  10. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    Science.gov (United States)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of the use of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods of generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may even be used with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.

  11. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Table of contents: Exposure to Sampling (Abstract; Introduction; Concepts of Population, Sample, and Sampling); Initial Ramifications (Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem); More Intricacies (Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling); Exploring Improved Ways (Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling); Modeling (Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...)

  12. Problems with sampling desert tortoises: A simulation analysis based on field data

    Science.gov (United States)

    Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.

    2005-01-01

    The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from 2 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient of variation to density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that poor performance and bias of both sampling procedures was driven by insufficient sample size, suggesting that all efforts must be directed to increasing numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
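
    For readers unfamiliar with the Schnabel multiple-census arithmetic referenced above, the following is a minimal sketch with made-up capture counts; the occasions and numbers are hypothetical, not data from the tortoise plots.

```python
import numpy as np

def schnabel_estimate(catches, recaptures):
    """Schnabel multiple-census population estimate.

    catches    -- animals caught on each occasion (C_t)
    recaptures -- how many of those were already marked (R_t)
    Newly caught animals are assumed to be marked and released.
    """
    catches = np.asarray(catches, dtype=float)
    recaptures = np.asarray(recaptures, dtype=float)
    new_marks = catches - recaptures
    # M_t: marked animals at large just before occasion t
    marked_before = np.concatenate(([0.0], np.cumsum(new_marks)[:-1]))
    # bias-adjusted form divides by (sum of recaptures + 1)
    return (catches * marked_before).sum() / (recaptures.sum() + 1)

# Hypothetical capture occasions (illustrative only)
catches = [15, 18, 12, 20]
recaptures = [0, 3, 4, 7]
print(f"Estimated population size: {schnabel_estimate(catches, recaptures):.1f}")
```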

  13. Algorithm/Architecture Co-design of the Generalized Sampling Theorem Based De-Interlacer.

    NARCIS (Netherlands)

    Beric, A.; Haan, de G.; Sethuraman, R.; Meerbergen, van J.

    2005-01-01

    De-interlacing is a major determinant of image quality in a modern display processing chain. The de-interlacing method based on the generalized sampling theorem (GST) applied to motion estimation and motion compensation provides the best de-interlacing results. With HDTV interlaced input material

  14. Sociocultural Experiences of Bulimic and Non-Bulimic Adolescents in a School-Based Chinese Sample

    Science.gov (United States)

    Jackson, Todd; Chen, Hong

    2010-01-01

    From a large school-based sample (N = 3,084), 49 Mainland Chinese adolescents (31 girls, 18 boys) who endorsed all DSM-IV criteria for bulimia nervosa (BN) or sub-threshold BN and 49 matched controls (31 girls, 18 boys) completed measures of demographics and sociocultural experiences related to body image. Compared to less symptomatic peers, those…

  15. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the sub problems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  16. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an

  17. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam

    2014-05-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the sub problems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  18. Theorical and practical bases for blood sample collection from the heel of newborns for neonatal screening

    Directory of Open Access Journals (Sweden)

    Marcela Vela-Amieva

    2014-07-01

    collected in a special filter paper (Guthrie's card). Despite its apparent simplicity, NBS laboratories commonly receive a large number of samples collected incorrectly and technically unsuitable for performing biochemical determinations. The aim of the present paper is to offer recommendations, based on scientific evidence, for the proper collection of blood on filter paper for NBS programs.

  19. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  20. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
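
    The resampling logic described above can be sketched as follows; the lesion-load and deficit variables are synthetic stand-ins with an arbitrarily chosen effect size, so this only illustrates how effect-size estimates fluctuate across bootstrap samples of different sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "population": lesion load weakly predicts a deficit score
n_pop = 360
lesion_load = rng.uniform(0, 1, n_pop)
deficit = 0.3 * lesion_load + rng.normal(0, 1, n_pop)   # small true effect

def r_squared(x, y):
    """Proportion of variance in y explained by x (simple linear fit)."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

for n in (30, 60, 90, 180, 360):
    estimates = []
    for _ in range(2000):                      # bootstrap resamples
        idx = rng.integers(0, n_pop, size=n)   # sample with replacement
        estimates.append(r_squared(lesion_load[idx], deficit[idx]))
    estimates = np.array(estimates)
    print(f"n={n:3d}  mean R^2={estimates.mean():.3f}  "
          f"5th-95th pct: {np.percentile(estimates, 5):.3f}-"
          f"{np.percentile(estimates, 95):.3f}")
```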

  1. Bacterial production, protozoan grazing, and mineralization in stratified Lake Vechten

    NARCIS (Netherlands)

    Bloem, J.

    1989-01-01

    The role of heterotrophic nanoflagellates (HNAN, size 2-20 μm) in grazing on bacteria and mineralization of organic matter in stratified Lake Vechten was studied.

    Quantitative effects of manipulation and fixation on HNAN were checked. Considerable losses were caused by

  2. The dynamics of small inertial particles in weakly stratified turbulence

    NARCIS (Netherlands)

    van Aartrijk, M.; Clercx, H.J.H.

    We present an overview of a numerical study on the small-scale dynamics and the large-scale dispersion of small inertial particles in stably stratified turbulence. Three types of particles are examined: fluid particles, light inertial particles (with particle-to-fluid density ratio ρp/ρf between 1 and 25) and

  3. Dispersion of (light) inertial particles in stratified turbulence

    NARCIS (Netherlands)

    van Aartrijk, M.; Clercx, H.J.H.; Armenio, Vincenzo; Geurts, Bernardus J.; Fröhlich, Jochen

    2010-01-01

    We present a brief overview of a numerical study of the dispersion of particles in stably stratified turbulence. Three types of particles are examined: fluid particles, light inertial particles (ρp/ρf = O(1)) and heavy inertial particles (ρp/ρf ≫ 1). Stratification

  4. On Internal Waves in a Density-Stratified Estuary

    NARCIS (Netherlands)

    Kranenburg, C.

    1991-01-01

    In this article some field observations, made in recent years, of internal wave motions in a density-stratified estuary are presented. In order to facilitate the appreciation of the results, and to make some quantitative comparisons, the relevant theory is also summarized. Furthermore, the origins

  5. FDTD scattered field formulation for scatterers in stratified dispersive media.

    Science.gov (United States)

    Olkkonen, Juuso

    2010-03-01

    We introduce a simple scattered field (SF) technique that enables finite difference time domain (FDTD) modeling of light scattering from dispersive objects residing in stratified dispersive media. The introduced SF technique is verified against the total field scattered field (TFSF) technique. As an application example, we study surface plasmon polariton enhanced light transmission through a 100 nm wide slit in a silver film.

  6. Plane Stratified Flow in a Room Ventilated by Displacement Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Nickel, J.; Baron, D. J. G.

    2004-01-01

    The air movement in the occupied zone of a room ventilated by displacement ventilation exists as a stratified flow along the floor. This flow can be radial or plane according to the number of wall-mounted diffusers and the room geometry. The paper addresses the situations where plane flow...

  7. Dual Spark Plugs For Stratified-Charge Rotary Engine

    Science.gov (United States)

    Abraham, John; Bracco, Frediano V.

    1996-01-01

    Fuel efficiency of stratified-charge, rotary, internal-combustion engine increased by improved design featuring dual spark plugs. Second spark plug ignites fuel on upstream side of main fuel injector; enabling faster burning and more nearly complete utilization of fuel.

  8. Prognosis research strategy (PROGRESS) 4: Stratified medicine research

    NARCIS (Netherlands)

    A. Hingorani (Aroon); D.A.W.M. van der Windt (Daniëlle); R.D. Riley (Richard); D. Abrams; K.G.M. Moons (Karel); E.W. Steyerberg (Ewout); S. Schroter (Sara); W. Sauerbrei (Willi); D.G. Altman (Douglas); H. Hemingway; A. Briggs (Andrew); N. Brunner; P. Croft (Peter); J. Hayden (Jill); P.A. Kyzas (Panayiotis); N. Malats (Núria); G. Peat; P. Perel (Pablo); I. Roberts (Ian); A. Timmis (Adam)

    2013-01-01

    In patients with a particular disease or health condition, stratified medicine seeks to identify those who will have the most clinical benefit or least harm from a specific treatment. In this article, the fourth in the PROGRESS series, the authors discuss why prognosis research should form

  9. A Case Study on Stratified Settlement and Rebound Characteristics due to Dewatering in Shanghai Subway Station

    OpenAIRE

    Wang, Jianxiu; Huang, Tianrong; Sui, Dongchang

    2013-01-01

    Based on the Yishan Metro Station Project of Shanghai Metro Line number 9, a centrifugal model test was conducted to investigate the behavior of stratified settlement and rebound (SSR) of Shanghai soft clay caused by dewatering in a deep subway station pit. The soil model was composed of three layers, and the dewatering process was simulated by a self-invented decompression device. The results indicate that SSR occurs when decompression is carried out, and only negative rebound was found...

  10. Experimental investigation of droplet separation in a horizontal counter-current air/water stratified flow

    International Nuclear Information System (INIS)

    Gabriel, Stephan Gerhard

    2015-01-01

    A stratified counter-current two-phase gas/liquid flow can occur in various technical systems. In the past, investigations have mainly been motivated by the possible occurrence of these flows in accident scenarios of nuclear light-water reactors and in numerous applications in process engineering. However, the precise forecast of flow parameters is still challenging, for instance due to their strong dependency on the geometric boundary conditions. A new approach using CFD methods (Computational Fluid Dynamics) promises a better understanding of the flow phenomena and simultaneously a higher scalability of the findings. RANS methods (Reynolds Averaged Navier Stokes) are preferred for computing industrial processes and geometries. A deep understanding of the flow behavior and equation systems based on real physics are necessary preconditions for developing the equation system of a reliable RANS approach with predictive power. Therefore, locally highly resolved experimental data is needed in order to provide and validate the required turbulence and phase interaction models. The central objective of this work is to provide the data needed for code development for these unsteady, turbulent and three-dimensional flows. Experiments were carried out at the WENKA facility (Water Entrainment Channel Karlsruhe) at the Karlsruhe Institute of Technology (KIT). The work consists of a detailed description of the test facility including a new bended channel, the measurement techniques and the experimental results. The characterization of the new channel was done by flow maps. A high-speed imaging study gives an impression of the occurring flow regimes and different flow phenomena such as droplet separation. The velocity distributions as well as various turbulence values were investigated by particle image velocimetry (PIV). In the liquid phase, fluorescent tracer particles were used to suppress optical reflections from the phase surface (fluorescent PIV, FPIV

  11. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first and second phases. The expressions for the variances of these estimators are derived. Furthermore, replication-based jackknife variance estimators of these variances are also derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
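
    As a rough illustration of the delete-one jackknife idea behind such replication variance estimators, here is a sketch for a plain (single-phase) stratified mean with hypothetical strata data; it is not the two-phase estimator derived in the paper.

```python
import numpy as np

def stratified_mean(strata, weights):
    """Weighted mean over strata; weights are population stratum shares."""
    return sum(w * np.mean(y) for w, y in zip(weights, strata))

def jackknife_variance(strata, weights):
    """Delete-one-unit jackknife variance of the stratified mean."""
    theta_hat = stratified_mean(strata, weights)
    var = 0.0
    for h, y in enumerate(strata):
        n_h = len(y)
        pseudo = []
        for j in range(n_h):
            reduced = [np.delete(y, j) if k == h else strata[k]
                       for k in range(len(strata))]
            pseudo.append(stratified_mean(reduced, weights))
        pseudo = np.array(pseudo)
        var += (n_h - 1) / n_h * np.sum((pseudo - pseudo.mean()) ** 2)
    return theta_hat, var

# Hypothetical stratum samples and population shares
strata = [np.array([3.1, 2.8, 3.5, 3.0]), np.array([5.2, 4.9, 5.5])]
weights = [0.6, 0.4]
est, var = jackknife_variance(strata, weights)
print(f"estimate={est:.3f}, jackknife variance={var:.5f}")
```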

  12. Paper membrane-based SERS platform for the determination of glucose in blood samples.

    Science.gov (United States)

    Torul, Hilal; Çiftçi, Hakan; Çetin, Demet; Suludere, Zekiye; Boyacı, Ismail Hakkı; Tamer, Uğur

    2015-11-01

    In this report, we present a paper membrane-based surface-enhanced Raman scattering (SERS) platform for the determination of blood glucose level using a nitrocellulose membrane as substrate paper, and the microfluidic channel was simply constructed by wax-printing method. The rod-shaped gold nanorod particles were modified with 4-mercaptophenylboronic acid (4-MBA) and 1-decanethiol (1-DT) molecules and used as embedded SERS probe for paper-based microfluidics. The SERS measurement area was simply constructed by dropping gold nanoparticles on nitrocellulose membrane, and the blood sample was dropped on the membrane hydrophilic channel. While the blood cells and proteins were held on nitrocellulose membrane, glucose molecules were moved through the channel toward the SERS measurement area. Scanning electron microscopy (SEM) was used to confirm the effective separation of blood matrix, and total analysis is completed in 5 min. In SERS measurements, the intensity of the band at 1070 cm(-1) which is attributed to B-OH vibration decreased depending on the rise in glucose concentration in the blood sample. The glucose concentration was found to be 5.43 ± 0.51 mM in the reference blood sample by using a calibration equation, and the certified value for glucose was 6.17 ± 0.11 mM. The recovery of the glucose in the reference blood sample was about 88 %. According to these results, the developed paper-based microfluidic SERS platform has been found to be suitable for use for the detection of glucose in blood samples without any pretreatment procedure. We believe that paper-based microfluidic systems may provide a wide field of usage for paper-based applications.

  13. Impressions of the turbulence variability in a weakly stratified, flat-bottom deep-sea ‘boundary layer’

    NARCIS (Netherlands)

    van Haren, H.

    2015-01-01

    The character of turbulent overturns in a weakly stratified deep-sea is investigated in some detail using 144 high-resolution temperature sensors at 0.7 m intervals, starting 5 m above the bottom. A 9-day, 1 Hz sampled record from the 912 m depth flat-bottom (<0.5% bottom-slope) mooring site in the

  14. Sample preparation prior to the LC-MS-based metabolomics/metabonomics of blood-derived samples.

    Science.gov (United States)

    Gika, Helen; Theodoridis, Georgios

    2011-07-01

    Blood represents a very important biological fluid and has been the target of continuous and extensive research for diagnostic purposes, and for health and drug monitoring. Recently, metabonomics/metabolomics have emerged as a new and promising 'omics' platform that shows potential in biomarker discovery, especially in areas such as disease diagnosis and assessment of drug efficacy or toxicity. Blood is collected in various establishments in conditions that are not standardized. Next, the samples are prepared and analyzed using different methodologies or tools. When targeted analysis of key molecules (e.g., a drug or its metabolite[s]) is the aim, enforcement of certain measures or additional analyses may correct and harmonize these discrepancies. In omics fields such as those performed by holistic analytical approaches, no such rules or tools are available. As a result, comparison or correlation of results or data fusion becomes impractical. However, it becomes evident that such obstacles should be overcome in the near future to allow for large-scale studies that involve the assaying of samples from hundreds of individuals. In this case the effect of sample handling and preparation becomes critical and must be controlled to avoid wasting months of expert work and expensive instrument time. The present review aims to cover the different methodologies applied to the pretreatment of blood prior to LC-MS metabolomic/metabonomic studies. The article tries to critically compare the methods and highlight issues that need to be addressed.

  15. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, statistics and probability value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers, k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
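
    A minimal sketch of the in-segment multiple sampling idea, assuming one-dimensional pixel values and using SciPy's two-sample Kolmogorov-Smirnov and Welch t-tests as the similarity measures; the segment and reference values are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def segment_similarity(segment_pixels, class_samples, n_draws=25, k=50):
    """Repeatedly draw k pixels from a segment and compare them with a
    class reference sample, averaging the p-values of the KS and t tests."""
    ks_p, t_p = [], []
    for _ in range(n_draws):
        draw = rng.choice(segment_pixels, size=min(k, len(segment_pixels)),
                          replace=True)
        ks_p.append(stats.ks_2samp(draw, class_samples).pvalue)
        t_p.append(stats.ttest_ind(draw, class_samples,
                                   equal_var=False).pvalue)
    return np.mean(ks_p), np.mean(t_p)

# Illustrative data: one image segment vs. a "grass" training sample
segment = rng.normal(0.35, 0.05, 400)        # reflectance values in a segment
grass_reference = rng.normal(0.34, 0.06, 300)
print(segment_similarity(segment, grass_reference))
```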

  16. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of the exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
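
    To make the Poisson-based reasoning concrete, the sketch below searches for the smallest cohort size that observes a rare event often enough with a given probability. It is a generic exact Poisson calculation, not the hybrid-design formula derived in the paper, and the event rate and targets are purely illustrative.

```python
from scipy.stats import poisson

def min_cohort_size(event_rate, min_events=3, prob_target=0.9):
    """Smallest cohort size n with P(X >= min_events) >= prob_target,
    where X ~ Poisson(n * event_rate). Uses the monotonicity in n."""
    lo, hi = 1, 1
    while poisson.sf(min_events - 1, hi * event_rate) < prob_target:
        hi *= 2                      # expand until the target is bracketed
    while lo < hi:                   # binary search on the bracket
        mid = (lo + hi) // 2
        if poisson.sf(min_events - 1, mid * event_rate) >= prob_target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# e.g. an adverse event occurring in roughly 1 of 10,000 exposed patients
print(min_cohort_size(event_rate=1e-4, min_events=3, prob_target=0.9))
```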

  17. Stratified prevention: opportunities and limitations. Report on the 1st interdisciplinary cardiovascular workshop in Augsburg.

    Science.gov (United States)

    Kirchhof, Gregor; Lindner, Josef Franz; Achenbach, Stephan; Berger, Klaus; Blankenberg, Stefan; Fangerau, Heiner; Gimpel, Henner; Gassner, Ulrich M; Kersten, Jens; Magnus, Dorothea; Rebscher, Herbert; Schunkert, Heribert; Rixen, Stephan; Kirchhof, Paulus

    2018-03-01

    Sufficient exercise and sleep, a balanced diet, moderate alcohol consumption and a good approach to handling stress have been known as lifestyles that protect health and longevity since the Middle Ages. This traditional prevention quintet, turned into a sextet by smoking cessation, has been the basis of the "preventive personality" that formed in the twentieth century. Recent analyses of big data sets including genomic and physiological measurements have unleashed novel opportunities to estimate individual health risks with unprecedented accuracy, allowing preventive interventions to be targeted at persons at high risk while sparing those in whom preventive measures may not be needed or may even be harmful. To fully grasp these opportunities for modern preventive medicine, the established healthy lifestyles require supplementation by stratified prevention. The opportunities of these developments for life and health contrast with justified concerns: A "surveillance society", able to predict individual behaviour based on big data, threatens individual freedom and jeopardises equality. Social insurance law and the new German Disease Prevention Act (Präventionsgesetz) rightly stress the need for research to underpin stratified prevention which is accessible to all, ethical, effective, and evidence based. An ethical and acceptable development of stratified prevention needs to start with autonomous individuals who control and understand all information pertaining to their health. This creates a mandate for lifelong health education, enabled in an individualised form by digital technology. Stratified prevention furthermore requires the evidence-based development of a new taxonomy of cardiovascular diseases that reflects disease mechanisms. Such interdisciplinary research needs broad support from society and a better use of biosamples and data sets within an updated research governance framework.

  18. Contamination of apple orchard soils and fruit trees with copper-based fungicides: sampling aspects.

    Science.gov (United States)

    Wang, Quanying; Liu, Jingshuang; Liu, Qiang

    2015-01-01

    Accumulations of copper in orchard soils and fruit trees due to the application of Cu-based fungicides have become research hotspots. However, information about the sampling strategies, which can affect the accuracy of the following research results, is lacking. This study aimed to determine some sampling considerations when Cu accumulations in the soils and fruit trees of apple orchards are studied. The study was conducted in three apple orchards from different sites. Each orchard included two different histories of Cu-based fungicides usage, varying from 3 to 28 years. Soil samples were collected from different locations varying with the distances from tree trunk to the canopy drip line. Fruits and leaves from the middle heights of tree canopy at two locations (outer canopy and inner canopy) were collected. The variation in total soil Cu concentrations between orchards was much greater than the variation within orchards. Total soil Cu concentrations had a tendency to increase with the increasing history of Cu-based fungicides usage. Moreover, total soil Cu concentrations had the lowest values at the canopy drip line, while the highest values were found at the half distances between the trunk and the canopy drip line. Additionally, Cu concentrations of leaves and fruits from the outer parts of the canopy were significantly higher than from the inner parts. Depending on the findings of this study, not only the between-orchard variation but also the within-orchard variation should be taken into consideration when conducting future soil and tree samplings in apple orchards.

  19. The hybrid model for sampling multiple elastic scattering angular deflections based on Goudsmit-Saunderson theory

    Directory of Open Access Journals (Sweden)

    Wasaye Muhammad Abdul

    2017-01-01

    Full Text Available An algorithm for the Monte Carlo simulation of electron multiple elastic scattering based on the framework of SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes) is presented. This paper describes efficient and accurate methods by which the multiple scattering angular deflections are sampled. The Goudsmit-Saunderson theory of multiple scattering has been used for sampling angular deflections. Differential cross-sections of electrons and positrons by neutral atoms have been calculated using the Dirac partial wave program ELSEPA. The Legendre coefficients are accurately computed using the Gauss-Legendre integration method. Finally, a novel hybrid method for sampling the angular distribution has been developed. The model uses an efficient rejection sampling method for low energy electrons and long path lengths (>500 mean free paths). For small path lengths, a simple, efficient and accurate analytical distribution function has been proposed, which uses adjustable parameters determined from fitting the Goudsmit-Saunderson angular distribution. A discussion of the sampling efficiency and accuracy of this newly developed algorithm is given. The efficiency of the rejection sampling algorithm is at least 50% for electron kinetic energies less than 500 keV and longer path lengths (>500 mean free paths). Monte Carlo simulation results are then compared with the measured angular distributions of Ross et al. The comparison shows that our results are in good agreement with experimental measurements.
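
    The rejection-sampling step can be illustrated generically as below; the forward-peaked angular density used here is a toy screened-scattering shape chosen for illustration, not the actual Goudsmit-Saunderson distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def rejection_sample_angles(pdf, n, theta_max=np.pi, pdf_max=None):
    """Draw n polar angles from an arbitrary density pdf(theta) on
    [0, theta_max] by simple rejection sampling."""
    if pdf_max is None:
        grid = np.linspace(0.0, theta_max, 10_000)
        pdf_max = pdf(grid).max() * 1.05     # small safety margin
    samples = []
    while len(samples) < n:
        theta = rng.uniform(0.0, theta_max)
        if rng.uniform(0.0, pdf_max) < pdf(theta):
            samples.append(theta)
    return np.array(samples)

def toy_pdf(theta):
    """Forward-peaked, screened-Rutherford-like shape (illustrative only)."""
    eta = 0.01
    return np.sin(theta) / (1.0 - np.cos(theta) + eta) ** 2

angles = rejection_sample_angles(toy_pdf, 5000)
print(f"mean deflection: {np.degrees(angles.mean()):.2f} degrees")
```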

  20. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  1. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  2. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    Science.gov (United States)

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by more than three times compared with the conventional algorithm while the image quality is well guaranteed. PMID:23424608

  3. Eating Disorders Among a Community-based Sample of Chilean Female Adolescents

    Science.gov (United States)

    Granillo, M. Teresa; Grogan-Kaylor, Andrew; Delva, Jorge; Castillo, Marcela

    2010-01-01

    The purpose of this study was to explore the prevalence and correlates of eating disorders among a community-based sample of female Chilean adolescents. Data were collected through structured interviews with 420 female adolescents residing in Santiago, Chile. Approximately 4% of the sample reported ever being diagnosed with an eating disorder. Multivariate logistic regression analyses revealed that those with higher symptoms of anxiety and who had tried cigarettes were significantly more likely to have been diagnosed with an eating disorder. Findings indicate that Chilean female adolescents are at risk of eating disorders and that eating disorders, albeit maladaptive, may be a means to cope with negative affect, specifically anxiety. PMID:22121329

  4. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    Science.gov (United States)

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  5. Exploring the role of wave drag in the stable stratified oceanic and atmospheric bottom boundary layer in the cnrs-toulouse (cnrm-game) large stratified water flume

    NARCIS (Netherlands)

    Kleczek, M.; Steeneveld, G.J.; Paci, A.; Calmer, R.; Belleudy, A.; Canonici, J.C.; Murguet, F.; Valette, V.

    2014-01-01

    This paper reports on a laboratory experiment in the CNRM-GAME (Toulouse) stratified water flume of a stably stratified boundary layer, in order to quantify the momentum transfer due to orographically induced gravity waves by gently undulating hills in a boundary layer flow. In a stratified fluid, a

  6. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang; Hansen, Charles

    2013-01-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.
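
    A minimal sketch of the KDE-based transfer function generation described above, assuming a two-attribute volume flattened to an array and a small set of user-probed attribute vectors; the data and the feature location are synthetic, and SciPy's Gaussian KDE stands in for the paper's HDTF machinery.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical multivariate volume: two attributes per voxel,
# flattened to shape (n_attributes, n_voxels)
volume = rng.normal(size=(2, 100_000))

# Attribute vectors the user "probed" on a slice, clustered around the
# feature of interest (here, attribute values near (1.5, -0.5))
picked = np.array([1.5, -0.5])[:, None] + rng.normal(0, 0.1, (2, 200))

# High-dimensional transfer function as a kernel density estimate:
# opacity of each voxel is proportional to the density of its attribute
# vector under the KDE built from the user-selected samples.
kde = gaussian_kde(picked)
density = kde(volume)
opacity = density / density.max()
print(opacity.shape, float(opacity.min()), float(opacity.max()))
```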

  7. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale and long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, applied to large-scale samples and using liquid chromatography coupled with different detector types as the core analytical technique. The main sample preparation methods included in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature covers mainly the analytical achievements of the last decade, although several earlier papers have become more valuable over time and are also included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    Science.gov (United States)

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Based on the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and a disordered (unordered) multiclass Logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and the c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan County in Guangdong Province; a disordered Logistic classifier model was then built and the calculation and analysis steps of soil quality grade intelligent classification were given. The results indicate that the soil quality grade can be effectively learned and predicted from the extracted simplified dataset through this method, which changes the traditional approach to soil quality grade evaluation. © 2011 IEEE.
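
    A rough sketch of the simplified-learning-sample idea, using k-means (as a stand-in for the c-means clustering mentioned above) to condense each grade's records and a multinomial logistic regression as the unordered multiclass classifier; the soil attributes, grades and sample sizes are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical soil-quality records: 6 attributes and 5 quality grades
X = rng.normal(size=(10_000, 6))
score = X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.3, len(X))
y = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))  # grades 0..4

def simplified_learning_sample(X, y, n_per_class=100):
    """Condense each class to the samples closest to k-means cluster centres."""
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        km = KMeans(n_clusters=min(n_per_class, len(idx)), n_init=2,
                    random_state=0).fit(X[idx])
        for centre in km.cluster_centers_:
            keep.append(idx[np.argmin(np.linalg.norm(X[idx] - centre, axis=1))])
    return np.unique(keep)

subset = simplified_learning_sample(X, y)
clf = LogisticRegression(max_iter=1000).fit(X[subset], y[subset])
print(f"kept {len(subset)} of {len(X)} samples, "
      f"accuracy on the full set {clf.score(X, y):.2f}")
```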

  9. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.

  10. Pronounceability: a measure of language samples based on children's mastery of the phonemes employed in them.

    Science.gov (United States)

    Whissell, Cynthia

    2003-06-01

    56 samples (n > half a million phonemes) of names (e.g., men's, women's, jets'), song lyrics (e.g., Paul Simon's, rap, the Beatles'), poems (frequently anthologized English poems), and children's materials (books directed at children ages 3-10 years) were used to study a proposed new measure of English language samples, Pronounceability, based on children's mastery of some phonemes in advance of others. This measure was provisionally equated with greater "youthfulness" and "playfulness" in language samples and with less "maturity." Findings include the facts that women's names were less pronounceable than men's and that poetry was less pronounceable than song lyrics or children's materials. In a supplementary study, 13 university student volunteers' assessments of the youth of randomly constructed names were linearly related to how pronounceable each name was (eta = .8), providing construct validity for the interpretation of Pronounceability as a measure of youthfulness.

  11. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity

  12. General Practitioners' and patients' perceptions towards stratified care: a theory informed investigation.

    Science.gov (United States)

    Saunders, Benjamin; Bartlam, Bernadette; Foster, Nadine E; Hill, Jonathan C; Cooper, Vince; Protheroe, Joanne

    2016-08-31

    Stratified primary care involves changing General Practitioners' (GPs) clinical behaviour in treating patients, away from the current stepped care approach to instead identifying early treatment options that are matched to patients' risk of persistent disabling pain. This article explores the perspectives of UK-based GPs and patients about a prognostic stratified care model being developed for patients with the five most common primary care musculoskeletal pain presentations. The focus was on views about acceptability, and anticipated barriers and facilitators to the use of stratified care in routine practice. Four focus groups and six semi-structured telephone interviews were conducted with GPs (n = 23), and three focus groups with patients (n = 20). Data were analysed thematically; and identified themes examined in relation to the Theoretical Domains Framework (TDF), which facilitates comprehensive identification of behaviour change determinants. A critical approach was taken in using the TDF, examining the nuanced interrelationships between theoretical domains. Four key themes were identified: Acceptability of clinical decision-making guided by stratified care; impact on the therapeutic relationship; embedding a prognostic approach within a biomedical model; and practical issues in using stratified care. Whilst within each theme specific findings are reported, common across themes was the identified relationships between the theoretical domains of knowledge, skills, professional role and identity, environmental context and resources, and goals. Through analysis of these identified relationships it was found that, for GPs and patients to perceive stratified care as being acceptable, it must be seen to enhance GPs' knowledge and skills, not undermine GPs' and patients' respective identities and be integrated within the environmental context of the consultation with minimal disruption. Findings highlight the importance of taking into account the context of

  13. Effective sampling range of a synthetic protein-based attractant for Ceratitis capitata (Diptera: Tephritidae).

    Science.gov (United States)

    Epsky, Nancy D; Espinoza, Hernán R; Kendra, Paul E; Abernathy, Robert; Midgarden, David; Heath, Robert R

    2010-10-01

    Studies were conducted in Honduras to determine effective sampling range of a female-targeted protein-based synthetic attractant for the Mediterranean fruit fly, Ceratitis capitata (Wiedemann) (Diptera: Tephritidae). Multilure traps were baited with ammonium acetate, putrescine, and trimethylamine lures (three-component attractant) and sampled over eight consecutive weeks. Field design consisted of 38 traps (over 0.5 ha) placed in a combination of standard and high-density grids to facilitate geostatistical analysis, and tests were conducted in coffee (Coffea arabica L.), mango (Mangifera indica L.), and orthanique (Citrus sinensis X Citrus reticulata). Effective sampling range, as determined from the range parameter obtained from experimental variograms that fit a spherical model, was approximately 30 m for flies captured in tests in coffee or mango and approximately 40 m for flies captured in orthanique. For comparison, a release-recapture study was conducted in mango using wild (field-collected) mixed sex C. capitata and an array of 20 baited traps spaced 10-50 m from the release point. Contour analysis was used to document spatial distribution of fly recaptures and to estimate effective sampling range, defined by the area that encompassed 90% of the recaptures. With this approach, effective range of the three-component attractant was estimated to be approximately 28 m, similar to results obtained from variogram analysis. Contour maps indicated that wind direction had a strong influence on sampling range, which was approximately 15 m greater upwind compared with downwind from the release point. Geostatistical analysis of field-captured insects in appropriately designed trapping grids may provide a supplement or alternative to release-recapture studies to estimate sampling ranges for semiochemical-based trapping systems.
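
    The spherical variogram model from which the range parameter (and hence the effective sampling range) is read off can be written as a short function; the nugget, sill and range values below are illustrative, not fitted to the trapping data.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, a):
    """Spherical model: rises from the nugget and levels off at the sill
    once the lag distance h reaches the range parameter a."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, gamma, sill)

# Illustrative parameters: the range a is the effective sampling distance
lags = np.arange(0, 81, 10)                 # lag distances in metres
print(spherical_variogram(lags, nugget=0.2, sill=1.0, a=30.0))
```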

  14. Influence of Freezing and Storage Procedure on Human Urine Samples in NMR-Based Metabolomics

    Directory of Open Access Journals (Sweden)

    Burkhard Luy

    2013-04-01

    Full Text Available It is consensus in the metabolomics community that standardized protocols should be followed for sample handling, storage and analysis, as it is of utmost importance to maintain constant measurement conditions to identify subtle biological differences. The aim of this work, therefore, was to systematically investigate the influence of freezing procedures and storage temperatures and their effect on NMR spectra as a potentially disturbing aspect for NMR-based metabolomics studies. Urine samples were collected from two healthy volunteers, centrifuged and divided into aliquots. Urine aliquots were frozen either at −20 °C, on dry ice, at −80 °C or in liquid nitrogen and then stored at −20 °C, −80 °C or in liquid nitrogen vapor phase for 1–5 weeks before NMR analysis. Results show spectral changes depending on the freezing procedure, with samples frozen on dry ice showing the largest deviations. The effect was found to be based on pH differences, which were caused by variations in CO2 concentrations introduced by the freezing procedure. Thus, we recommend that urine samples should be frozen at −20 °C and transferred to lower storage temperatures within one week and that freezing procedures should be part of the publication protocol.

  15. Influence of Freezing and Storage Procedure on Human Urine Samples in NMR-Based Metabolomics.

    Science.gov (United States)

    Rist, Manuela J; Muhle-Goll, Claudia; Görling, Benjamin; Bub, Achim; Heissler, Stefan; Watzl, Bernhard; Luy, Burkhard

    2013-04-09

    It is consensus in the metabolomics community that standardized protocols should be followed for sample handling, storage and analysis, as it is of utmost importance to maintain constant measurement conditions to identify subtle biological differences. The aim of this work, therefore, was to systematically investigate the influence of freezing procedures and storage temperatures and their effect on NMR spectra as a potentially disturbing aspect for NMR-based metabolomics studies. Urine samples were collected from two healthy volunteers, centrifuged and divided into aliquots. Urine aliquots were frozen either at -20 °C, on dry ice, at -80 °C or in liquid nitrogen and then stored at -20 °C, -80 °C or in liquid nitrogen vapor phase for 1-5 weeks before NMR analysis. Results show spectral changes depending on the freezing procedure, with samples frozen on dry ice showing the largest deviations. The effect was found to be based on pH differences, which were caused by variations in CO2 concentrations introduced by the freezing procedure. Thus, we recommend that urine samples should be frozen at -20 °C and transferred to lower storage temperatures within one week and that freezing procedures should be part of the publication protocol.

  16. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    Science.gov (United States)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
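
    The contrast between the two sampling models can be illustrated with tail probabilities; the round counts and error numbers below are made up, and the binomial tail is only a simplified stand-in for the Bernoulli-sampling bound used in the security analysis.

```python
from scipy.stats import binom, hypergeom

# Illustrative finite-key numbers (not from the paper):
# N transmitted test rounds, of which K produced errors; we observe a
# subset of n rounds and ask for the chance of seeing at least k errors.
N, K, n, k = 10_000, 300, 1_000, 45

# Simple random sampling of the test rounds -> hypergeometric tail
p_hyper = hypergeom.sf(k - 1, N, K, n)

# Bernoulli sampling (each round kept independently with probability n/N)
# -> binomial tail, a simplified stand-in for the bound used in the paper
p_binom = binom.sf(k - 1, n, K / N)

print(f"hypergeometric tail: {p_hyper:.3e}")
print(f"binomial tail:       {p_binom:.3e}")
```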

  17. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcifications (MCs) from mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups by the EM algorithm. Then a series of fuzzy SVMs are integrated for classification, with each group of samples drawn from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results on synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
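
    A simplified sketch of the grouped classification structure, using scikit-learn's Gaussian mixture for the EM partition and plain SVMs in place of fuzzy SVMs; the features, labels and group count are synthetic assumptions, not the G-FSVM implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors for suspicious regions (MC vs normal tissue)
X = rng.normal(size=(1_000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 1_000) > 0).astype(int)

# Partition the training data into groups with EM (Gaussian mixture), then
# train one SVM per group; at prediction time each sample is routed to the
# SVM of its most likely group.
n_groups = 3
gm = GaussianMixture(n_components=n_groups, random_state=0).fit(X)
groups = gm.predict(X)
svms = {g: SVC().fit(X[groups == g], y[groups == g]) for g in range(n_groups)}

X_new = rng.normal(size=(5, 8))
routed = gm.predict(X_new)
pred = np.array([svms[g].predict(x[None, :])[0]
                 for g, x in zip(routed, X_new)])
print(pred)
```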

  18. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    International Nuclear Information System (INIS)

    Shoupeng, Song; Zhou, Jiang

    2017-01-01

    Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve it has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. Elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, oscillator, low pass filter (LPF), and root-of-square-sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry. (paper)
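
    The quadrature demodulation chain (mixing, low-pass filtering, root of the sum of squares) can be sketched in software as below; the sampling rate, centre frequency and synthetic echoes are assumptions for illustration, not the authors' hardware design.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

rng = np.random.default_rng(0)
fs = 50e6                     # assumed digitizer sampling rate (Hz)
fc = 5e6                      # assumed transducer centre frequency (Hz)
t = np.arange(0, 40e-6, 1 / fs)

def echo(t0, amp):
    """Gaussian-windowed tone burst standing in for a flaw echo."""
    return amp * np.exp(-((t - t0) / 1e-6) ** 2) * np.cos(2 * np.pi * fc * (t - t0))

raw = echo(8e-6, 1.0) + echo(22e-6, 0.6) + 0.02 * rng.normal(size=t.size)

# Quadrature demodulation: mix with cos/sin at the carrier, low-pass both
# branches, then take the root of the sum of squares to form the pulse stream.
i_branch = raw * np.cos(2 * np.pi * fc * t)
q_branch = raw * np.sin(2 * np.pi * fc * t)
b, a = butter(4, 1.5e6 / (fs / 2))          # LPF cutoff well below the carrier
pulse_stream = 2 * np.sqrt(filtfilt(b, a, i_branch) ** 2 +
                           filtfilt(b, a, q_branch) ** 2)

peaks, _ = find_peaks(pulse_stream, height=0.3, distance=int(2e-6 * fs))
print("recovered pulse times (us):", np.round(t[peaks] * 1e6, 2))
print("recovered amplitudes:      ", np.round(pulse_stream[peaks], 2))
```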

  19. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
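
    A minimal sketch of the encoder side under these assumptions: a random array stands in for the input image, the kernel size and down-sampling factor are arbitrary choices, and the sparsity-based soft decoder is omitted entirely.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((256, 256))                    # stand-in for the input frame

# Local random binary convolution kernel in place of the usual low-pass filter
kernel = rng.integers(0, 2, (4, 4)).astype(float)
kernel /= max(kernel.sum(), 1.0)                  # normalise (guard against all-zero)

filtered = convolve2d(image, kernel, mode="same", boundary="symm")

# Polyphase down-sampling: keep one phase of every 2x2 block, so the local
# random measurements stay in a conventional image layout for a standard codec.
measurements = filtered[::2, ::2]
print(image.shape, "->", measurements.shape)
```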

  20. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    Science.gov (United States)

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  1. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which eases the pressure generated by large-scale data. The large volume of vibration data from a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. A problem arises, however, in that the fault features may then be weaker than before: when the noise is stronger than the vibration signal, noise may be mistaken for the peaks, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)
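
    The compressive-sensing recovery step can be illustrated with a generic sketch: a sparse spectral representation is recovered from a small number of random measurements by orthogonal matching pursuit. The sampling rate, tone frequency and matrix sizes below are invented, and the tone is a toy stand-in for a bearing fault signature, not the paper's data.

```python
# Sparse recovery from random measurements with orthogonal matching pursuit (illustrative).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n, m, fs = 1024, 128, 12e3
t = np.arange(n) / fs
signal = np.cos(2 * np.pi * 157.0 * t)                    # tone at an assumed fault frequency
signal += 0.3 * np.random.default_rng(2).normal(size=n)

Phi = np.random.default_rng(3).normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
Psi = np.fft.irfft(np.eye(n // 2 + 1), n=n, axis=0)               # cosine (inverse DFT) dictionary
y = Phi @ signal                                                   # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8).fit(Phi @ Psi, y)
recovered_spectrum = np.abs(omp.coef_)
print(np.argmax(recovered_spectrum) * fs / n)              # dominant frequency estimate in Hz
```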

  2. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which eases the pressure generated by large-scale data. The large volume of vibration data from a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. A problem arises, however, in that the fault features may then be weaker than before: when the noise is stronger than the vibration signal, noise may be mistaken for the peaks, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.

  3. Introducing a rainfall compound distribution model based on weather patterns sub-sampling

    Directory of Open Access Journals (Sweden)

    F. Garavaglia

    2010-06-01

    Full Text Available This paper presents a probabilistic model for daily rainfall, using sub-sampling based on meteorological circulation. We classified eight typical but contrasted synoptic situations (weather patterns) for France and surrounding areas, using a "bottom-up" approach, i.e. from the shape of the rain field to the synoptic situations described by geopotential fields. These weather patterns (WP) provide a discriminating variable that is consistent with French climatology and allows seasonal rainfall records to be split into more homogeneous sub-samples in terms of meteorological genesis.

    First results show how the combination of seasonal and WP sub-sampling strongly influences the identification of the asymptotic behaviour of rainfall probabilistic models. Furthermore, with this level of stratification, an asymptotic exponential behaviour of each sub-sample appears to be a reasonable hypothesis. This first part is illustrated with two daily rainfall records from the south-east of France.

    The distribution of the multi-exponential weather patterns (MEWP) is then defined as the composition, for a given season, of all WP sub-sample marginal distributions, weighted by the relative frequency of occurrence of each WP. This model is finally compared to the Exponential and Generalized Pareto distributions, showing good features in terms of robustness and accuracy. These final statistical results are computed from a wide dataset of 478 rainfall series spread over the southern half of France. All these data cover the 1953–2005 period.
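
    The compound (mixture) construction can be written down directly: for one season, the exceedance probability is the occurrence-frequency-weighted sum of the per-WP exponential survival functions. The sketch below inverts that mixture numerically; the five pattern frequencies and exponential scales are invented for illustration.

```python
# MEWP-style mixture of per-weather-pattern exponentials (illustrative parameters).
import numpy as np

wp_freq  = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # relative occurrence of 5 weather patterns
wp_scale = np.array([8.0, 12.0, 20.0, 35.0, 60.0])    # exponential scale (mm) of each WP sub-sample

def mewp_exceedance(x):
    """P(daily rainfall > x) under the compound (mixture) model."""
    return float(np.sum(wp_freq * np.exp(-x / wp_scale)))

def mewp_quantile(p):
    """Return level for non-exceedance probability p, by numerically inverting the mixture CDF."""
    x_grid = np.linspace(0.0, 2000.0, 200001)
    cdf = 1.0 - np.exp(-x_grid[:, None] / wp_scale) @ wp_freq
    return x_grid[np.searchsorted(cdf, p)]

print(mewp_exceedance(100.0), mewp_quantile(0.99))     # toy exceedance probability and 0.99 quantile
```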

  4. Damage evolution analysis of coal samples under cyclic loading based on single-link cluster method

    Science.gov (United States)

    Zhang, Zhibo; Wang, Enyuan; Li, Nan; Li, Xuelong; Wang, Xiaoran; Li, Zhonghui

    2018-05-01

    In this paper, the acoustic emission (AE) response of coal samples under cyclic loading is measured. The results show that there is a good positive relation between AE parameters and stress. The AE signal of coal samples under cyclic loading exhibits an obvious Kaiser effect. The single-link cluster (SLC) method is applied to analyze the spatial evolution characteristics of AE events and the damage evolution process of coal samples. It is found that the subset scale of the SLC structure becomes smaller and smaller as the number of loading cycles increases, and there is a negative linear relationship between the subset scale and the degree of damage. The spatial correlation length ξ of the SLC structure is calculated. The results show that ξ fluctuates around a certain value from the second to the fifth loading cycle, but clearly increases in the sixth loading cycle. Based on the criterion of microcrack density, the coal sample failure process is a transformation from small-scale damage to large-scale damage, which is the reason for the changes in the spatial correlation length. The systematic analysis shows that the SLC method is an effective way to study the damage evolution process of coal samples under cyclic loading, and will provide important reference values for studying coal bursts.
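
    Single-link clustering of located AE events is readily available in standard libraries; the sketch below groups synthetic AE hypocentres with a single-link dendrogram and computes a simple mean-cluster-size "subset scale". The event coordinates and the 5 mm cut distance are illustrative, not the paper's values.

```python
# Single-link clustering of synthetic AE event locations (illustrative).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
ae_xyz = rng.uniform(0.0, 50.0, size=(300, 3))      # stand-in AE hypocentres (mm) in a coal sample

Z = linkage(ae_xyz, method="single")                # single-link hierarchical clustering
labels = fcluster(Z, t=5.0, criterion="distance")   # cut the dendrogram at 5 mm

sizes = np.bincount(labels)[1:]                     # events per cluster (labels start at 1)
print(len(sizes), sizes.mean())                     # number of clusters and one "subset scale" measure
```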

  5. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF.

    Science.gov (United States)

    Baz-Lomba, J A; Reid, Malcolm J; Thomas, Kevin V

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches to monitoring the patterns and trends of drug use. The present work describes the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS(e) mode. Quantification and matrix-effect correction were achieved with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4-187 ng L(-1)). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge, since the market is highly dynamic, with new compounds continuously appearing. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass-accuracy criterion of 5 ppm (for the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS(e) data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm the occurrence of NPS. This was demonstrated through the HRMS analysis of three different sewage-based sample types - influent wastewater, passive sampler extracts and pooled urine samples - resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Use of GIS-Based Sampling to Inform Food Security Assessments and Decision Making in Kenya

    Science.gov (United States)

    Wahome, A.; Ndubi, A. O.; Ndungu, L. W.; Mugo, R. M.; Flores Cordova, A. I.

    2017-12-01

    Kenya relies on agricultural production to support local consumption and other processing value chains. With a changing climate in a rain-fed agricultural production system, cropping zones are shifting, and proper decision making will require updated data. Where up-to-date data are not available, they must be generated and passed to the relevant stakeholders to inform their decision making, and the process of generating these data should be cost effective and less time consuming. The Kenyan State Department of Agriculture (SDA) runs an insurance programme for maize farmers in a number of counties in Kenya. Previously, SDA used a list of farmers to identify the crop fields for this insurance programme. However, listing all farmers in each Unit Area of Insurance (UAI) proved to be tedious and very costly, hence the need for an alternative, yet statistically acceptable, sampling methodology. Building on existing cropland maps, SERVIR, a joint NASA-USAID initiative that brings Earth observations (EO) to improve environmental decision making in developing countries, and specifically its hub in Eastern and Southern Africa, developed a high-resolution map based on 10 m Sentinel satellite images, from which a GIS-based sampling frame for identifying maize fields was derived. Sampling points were randomly generated in each UAI and navigated to using hand-held GPS units to identify maize farmers. With GIS-based identification of farmers, SDA covers in 1 day an area that previously took 1 week with list-based identification. Similarly, SDA spends approximately 3,000 USD per sub-county to locate maize fields using GIS-based sampling, compared to the 10,000 USD spent previously. This has resulted in a 70% cost reduction.
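
    Generating random sample points inside each UAI can be done with standard GIS tooling; a minimal sketch using rejection sampling against a polygon is shown below. The polygon coordinates are made up, and a real workflow would read the UAI boundaries from a shapefile (for example with geopandas) rather than hard-coding them.

```python
# Random sample points inside a UAI polygon via rejection sampling (illustrative).
import random
from shapely.geometry import Point, Polygon

uai = Polygon([(36.80, -1.30), (36.95, -1.28), (36.97, -1.18), (36.82, -1.15)])  # toy UAI boundary

def random_points_in(poly, n, seed=0):
    rng = random.Random(seed)
    minx, miny, maxx, maxy = poly.bounds
    pts = []
    while len(pts) < n:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if poly.contains(p):              # keep only points that fall inside the UAI
            pts.append(p)
    return pts

sample_points = random_points_in(uai, 20)
print([(round(p.x, 4), round(p.y, 4)) for p in sample_points[:3]])
```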

  7. A nuclear radiation multi-parameter measurement system based on pulse-shape sampling

    International Nuclear Information System (INIS)

    Qiu Xiaolin; Fang Guoming; Xu Peng; Di Yuming

    2007-01-01

    In this paper, a nuclear radiation multi-parameter measurement system based on pulse-shape sampling is introduced, including the system's characteristics, composition, operating principle, experimental data and analysis. Compared with conventional nuclear measuring apparatus, it has some remarkable advantages, such as synchronous multi-parameter detection on the same measurement platform and general analysis of the signal data by user-defined programs. (authors)

  8. Graphene-based sample supports for in situ high-resolution TEM electrical investigations

    International Nuclear Information System (INIS)

    Westenfelder, B; Scholz, F; Meyer, J C; Biskupek, J; Algara-Siller, G; Lechner, L G; Kaiser, U; Kusterer, J; Kohn, E; Krill, C E III

    2011-01-01

    Specially designed transmission electron microscopy (TEM) sample carriers have been developed to enable atomically resolved studies of the heat-induced evolution of adsorbates on graphene and their influence on electrical conductivity. Here, we present a strategy for realizing graphene-based carriers, evaluating the design with respect to fabrication effort and application potential. We demonstrate that electrical current can lead to very high temperatures in suspended graphene membranes, and we determine that current-induced cleaning of graphene results from Joule heating.

  9. A mechanically enhanced hybrid nano-stratified barrier with a defect suppression mechanism for highly reliable flexible OLEDs.

    Science.gov (United States)

    Jeong, Eun Gyo; Kwon, Seonil; Han, Jun Hee; Im, Hyeon-Gyun; Bae, Byeong-Soo; Choi, Kyung Cheol

    2017-05-18

    Understanding the mechanical behaviors of encapsulation barriers under bending stress is important when fabricating flexible organic light-emitting diodes (FOLEDs). The enhanced mechanical characteristics of a nano-stratified barrier were analyzed based on a defect suppression mechanism, and then experimentally demonstrated. Following the Griffith model, naturally-occurring cracks, which were caused by Zn etching at the interface of the nano-stratified structure, can curb the propagation of defects. Cross-section images after bending tests provided remarkable evidence to support the existence of a defect suppression mechanism. Many visible cracks were found in a single Al2O3 layer, but not in the nano-stratified structure, due to the mechanism. The nano-stratified structure also enhanced the barrier's physical properties by changing the crystalline phase of ZnO. In addition, experimental results demonstrated the effect of the mechanism in various ways. The nano-stratified barrier maintained a low water vapor transmission rate after 1000 iterations of a 1 cm bending radius test. Using this mechanically enhanced hybrid nano-stratified barrier, FOLEDs were successfully encapsulated without losing mechanical or electrical performance. Finally, comparative lifetime measurements were conducted to determine reliability. After 2000 hours of constant current driving and 1000 iterations with a 1 cm bending radius, the FOLEDs retained 52.37% of their initial luminance, which is comparable to glass-lid encapsulation, with 55.96% retention. Herein, we report a mechanically enhanced encapsulation technology for FOLEDs using a nano-stratified structure with a defect suppression mechanism.

  10. 100 GHz pulse waveform measurement based on electro-optic sampling

    Science.gov (United States)

    Feng, Zhigang; Zhao, Kejia; Yang, Zhijun; Miao, Jingyuan; Chen, He

    2018-05-01

    We present an ultrafast pulse waveform measurement system based on an electro-optic sampling technique at 1560 nm and prepare LiTaO3-based electro-optic modulators with a coplanar waveguide structure. The transmission and reflection characteristics of electrical pulses on a coplanar waveguide terminated with an open circuit and a resistor are investigated by analyzing the corresponding time-domain pulse waveforms. We measure the output electrical pulse waveform of a 100 GHz photodiode and the obtained rise times of the impulse and step responses are 2.5 and 3.4 ps, respectively.

  11. The role of graphene-based sorbents in modern sample preparation techniques.

    Science.gov (United States)

    de Toffoli, Ana Lúcia; Maciel, Edvaldo Vasconcelos Soares; Fumes, Bruno Henrique; Lanças, Fernando Mauro

    2018-01-01

    The application of graphene-based sorbents in sample preparation techniques has increased significantly since 2011. These materials have good physicochemical properties for use as sorbents and have shown excellent results in different sample preparation techniques. Graphene and its precursor graphene oxide have been considered to be good candidates to improve the extraction and concentration of different classes of target compounds (e.g., parabens, polycyclic aromatic hydrocarbons, pyrethroids, triazines, and so on) present in complex matrices. They have been employed in the analysis of different matrices (e.g., environmental, biological and food). In this review, we highlight the most important characteristics of graphene-based materials, their properties, synthesis routes, and the most important applications in both off-line and on-line sample preparation techniques. The discussion of the off-line approaches includes methods derived from conventional solid-phase extraction, focusing on the miniaturized magnetic and dispersive modes. The microextraction techniques called stir bar sorptive extraction, solid-phase microextraction, and microextraction by packed sorbent are discussed. The on-line approaches focus on the use of graphene-based materials mainly in on-line solid-phase extraction, its variation called in-tube solid-phase microextraction, and on-line microdialysis systems. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Acrylamide exposure among Turkish toddlers from selected cereal-based baby food samples.

    Science.gov (United States)

    Cengiz, Mehmet Fatih; Gündüz, Cennet Pelin Boyacı

    2013-10-01

    In this study, acrylamide exposure from selected cereal-based baby food samples was investigated among toddlers aged 1-3 years in Turkey. The study comprised three steps. The first step was collecting food consumption data and the toddlers' physical characteristics, such as gender, age and body weight, using a questionnaire given to parents by a trained interviewer between January and March 2012. The second step was determining the acrylamide levels in the food samples reported by the parents in the questionnaire, using a gas chromatography-mass spectrometry (GC-MS) method. The last step was combining the determined acrylamide levels in the selected food samples with individual food consumption and body weight data, using a deterministic approach, to estimate the acrylamide exposure levels. The mean acrylamide levels of baby biscuits, breads, baby bread-rusks, crackers, biscuits, breakfast cereals and powdered cereal-based baby foods were 153, 225, 121, 604, 495, 290 and 36 μg/kg, respectively. The minimum, mean and maximum acrylamide exposures were estimated to be 0.06, 1.43 and 6.41 μg/kg BW per day, respectively. The foods contributing to acrylamide exposure, ranked from high to low, were bread, crackers, biscuits, baby biscuits, powdered cereal-based baby foods, baby bread-rusks and breakfast cereals. Copyright © 2013 Elsevier Ltd. All rights reserved.
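
    The deterministic exposure estimate mentioned above is a simple calculation: for each food, multiply the acrylamide level by the daily intake, sum over foods, and divide by body weight. The sketch below uses the mean levels reported in the abstract, but the intake amounts and body weight are hypothetical.

```python
# Deterministic acrylamide exposure estimate (hypothetical intake and body weight).
levels_ug_per_kg = {"baby biscuits": 153, "bread": 225, "baby bread-rusks": 121, "crackers": 604,
                    "biscuits": 495, "breakfast cereals": 290, "powdered cereal-based baby foods": 36}
intake_g_per_day = {"bread": 50, "baby biscuits": 15, "breakfast cereals": 20}   # hypothetical toddler
body_weight_kg = 12.0                                                            # hypothetical

exposure = sum(levels_ug_per_kg[f] * g / 1000.0 for f, g in intake_g_per_day.items()) / body_weight_kg
print(round(exposure, 2), "ug/kg BW per day")
```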

  13. Identification of major planktonic sulfur oxidizers in stratified freshwater lake.

    Directory of Open Access Journals (Sweden)

    Hisaya Kojima

    Full Text Available Planktonic sulfur oxidizers are important constituents of ecosystems in stratified water bodies, and contribute to sulfide detoxification. In contrast to marine environments, the taxonomic identities of the major planktonic sulfur oxidizers in freshwater lakes still remain largely unknown. Bacterioplankton community structure was analyzed in a stratified freshwater lake, Lake Mizugaki in Japan. In the clone libraries of the 16S rRNA gene, clones very closely related to a sulfur oxidizer isolated from this lake, Sulfuritalea hydrogenivorans, were detected in deep anoxic water, and accounted for up to 12.5% of each library from different water depths. Assemblages of planktonic sulfur oxidizers were specifically analyzed by constructing clone libraries of genes involved in sulfur oxidation, aprA, dsrA, soxB and sqr. In these libraries, clones related to betaproteobacteria were detected at high frequencies, including close relatives of Sulfuritalea hydrogenivorans.

  14. Mixing of stratified flow around bridge piers in steady current

    DEFF Research Database (Denmark)

    Jensen, Bjarne; Carstensen, Stefan; Christensen, Erik Damgaard

    2018-01-01

    This paper presents the results of an experimental and numerical investigation of the mixing of stratified flow around bridge pier structures. In this study, which was carried out in connection with the Fehmarnbelt Fixed Link environmental impact assessment, the mixing processes of a two-layer stratification were studied, in which the lower layer had a higher salinity than the upper layer. The physical experiments investigated two different pier designs. A general study was made regarding forces on the piers, in which the effect of the current angle relative to the structure was also included...

  15. Stratified charge rotary aircraft engine technology enablement program

    Science.gov (United States)

    Badgley, P. R.; Irion, C. E.; Myers, D. M.

    1985-01-01

    The multifuel stratified charge rotary engine is discussed. A single-rotor, 0.7 L/40 cu in displacement, research rig engine was tested. The research rig engine was designed for operation at high speeds and pressures, with combustion chamber peak pressure providing margin for speed and load excursions above the design requirement for a highly advanced aircraft engine. It is indicated that the single-rotor research rig engine is capable of meeting the established design requirements of 120 kW, 8,000 RPM, 1,379 kPa BMEP. The research rig engine, when fully developed, will be a valuable tool for investigating advanced and highly advanced technology components, and will provide an understanding of the stratified charge rotary engine combustion process.

  16. Analysis of photonic band-gap structures in stratified medium

    DEFF Research Database (Denmark)

    Tong, Ming-Sze; Yinchao, Chen; Lu, Yilong

    2005-01-01

    Purpose - To demonstrate the flexibility and advantages of a non-uniform pseudo-spectral time domain (nu-PSTD) method through studies of the wave propagation characteristics on photonic band-gap (PBG) structures in stratified medium. Design/methodology/approach - A nu-PSTD method is proposed for solving the Maxwell's equations numerically. It expands the temporal derivatives using finite differences, while it adopts the Fourier transform (FT) properties to expand the spatial derivatives in Maxwell's equations. In addition, the method makes use of the chain-rule property in calculus together ... in electromagnetic and microwave applications once the Maxwell's equations are appropriately modeled. Originality/value - The method validates its values and properties through extensive studies on regular and defective 1D PBG structures in stratified medium, and it can be further extended to solving more ...

  17. A sampling and metagenomic sequencing-based methodology for monitoring antimicrobial resistance in swine herds

    DEFF Research Database (Denmark)

    Munk, Patrick; Dalhoff Andersen, Vibe; de Knegt, Leonardo

    2016-01-01

    Objectives Reliable methods for monitoring antimicrobial resistance (AMR) in livestock and other reservoirs are essential to understand the trends, transmission and importance of agricultural resistance. Quantification of AMR is mostly done using culture-based techniques, but metagenomic read mapping shows promise for quantitative resistance monitoring. Methods We evaluated the ability of: (i) MIC determination for Escherichia coli; (ii) cfu counting of E. coli; (iii) cfu counting of aerobic bacteria; and (iv) metagenomic shotgun sequencing to predict expected tetracycline resistance based on antimicrobial consumption, and compared metagenomic sequencing with the cultivation-based techniques in terms of predicting expected tetracycline resistance based on antimicrobial consumption. Our metagenomic approach had sufficient resolution to detect antimicrobial-induced changes to individual resistance gene abundances. Pen floor manure samples were found to represent rectal...

  18. Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.

    Science.gov (United States)

    Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S

    2011-01-01

    Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and whether or not concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L) based on combinations of the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and the somatic cell count were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species, except Staphylococcus aureus, the sensitivity of all definitions was definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97

  19. Community genomics among stratified microbial assemblages in the ocean's interior

    DEFF Research Database (Denmark)

    DeLong, Edward F; Preston, Christina M; Mincer, Tracy

    2006-01-01

    Microbial life predominates in the ocean, yet little is known about its genomic variability, especially along the depth continuum. We report here genomic analyses of planktonic microbial communities in the North Pacific Subtropical Gyre, from the ocean's surface to near-sea floor depths. Sequence ..., and host-viral interactions. Comparative genomic analyses of stratified microbial communities have the potential to provide significant insight into higher-order community organization and dynamics.

  20. Large Eddy Simulation of stratified flows over structures

    OpenAIRE

    Brechler J.; Fuka V.

    2013-01-01

    We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model the stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.

  1. Large Eddy Simulation of stratified flows over structures

    Directory of Open Access Journals (Sweden)

    Brechler J.

    2013-04-01

    Full Text Available We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model the stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.

  2. Large Eddy Simulation of stratified flows over structures

    Science.gov (United States)

    Fuka, V.; Brechler, J.

    2013-04-01

    We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model the stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.

  3. Propagation of acoustic waves in a stratified atmosphere, 1

    Science.gov (United States)

    Kalkofen, W.; Rossi, P.; Bodo, G.; Massaglia, S.

    1994-01-01

    This work is motivated by the chromospheric 3 minute oscillations observed in the K(sub 2v) bright points. We study acoustic gravity waves in a one-dimensional, gravitationally stratified, isothermal atmosphere. The oscillations are excited either by a velocity pulse imparted to a layer in an atmosphere of infinite vertical extent, or by a piston forming the lower boundary of a semi-infinite medium. We consider both linear and non-linear waves.

  4. A statistical mechanics approach to mixing in stratified fluids

    OpenAIRE

    Venaille, Antoine; Gostiaux, Louis; Sommeria, Joël

    2016-01-01

    Accepted for the Journal of Fluid Mechanics; Predicting how much mixing occurs when a given amount of energy is injected into a Boussinesq fluid is a longstanding problem in stratified turbulence. The huge number of degrees of freedom involved in these processes renders a deterministic approach to the problem extremely difficult. Here we present a statistical mechanics approach yielding a prediction for a cumulative, global mixing efficiency as a function of a global Richardson number and th...

  5. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
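
    As a point of reference for the model being discussed, the sketch below fits the unconditional form, a Poisson regression with one indicator term per background stratum plus the dose term of interest, to simulated cohort data with statsmodels. The paper's contribution is a conditional fit that avoids estimating the stratum coefficients; that conditional machinery is not reproduced here, and all data and parameter values are simulated.

```python
# Unconditional background-stratified Poisson regression on simulated cohort data (illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
stratum = rng.integers(0, 20, size=n)                  # background strata (e.g. age/sex/period cells)
dose = rng.gamma(2.0, 50.0, size=n)                    # stand-in radiation dose (mSv)
pyears = rng.uniform(1.0, 30.0, size=n)                # person-years at risk
base = rng.normal(-5.0, 0.3, size=20)[stratum]         # stratum-specific baseline log-rates
cases = rng.poisson(pyears * np.exp(base + 0.002 * dose))   # true log-rate slope per mSv = 0.002

X = pd.get_dummies(pd.Series(stratum, name="stratum")).astype(float)   # one indicator per stratum
X["dose"] = dose
fit = sm.GLM(cases, X, family=sm.families.Poisson(), offset=np.log(pyears)).fit()
print(fit.params["dose"], fit.bse["dose"])             # dose coefficient and its standard error
```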

  6. Ethanol dehydration to ethylene in a stratified autothermal millisecond reactor.

    Science.gov (United States)

    Skinner, Michael J; Michor, Edward L; Fan, Wei; Tsapatsis, Michael; Bhan, Aditya; Schmidt, Lanny D

    2011-08-22

    The concurrent decomposition and deoxygenation of ethanol was accomplished in a stratified reactor with 50-80 ms contact times. The stratified reactor comprised an upstream oxidation zone that contained Pt-coated Al(2)O(3) beads and a downstream dehydration zone consisting of H-ZSM-5 zeolite films deposited on Al(2)O(3) monoliths. Ethanol conversion, product selectivity, and reactor temperature profiles were measured for a range of fuel:oxygen ratios for two autothermal reactor configurations using two different sacrificial fuel mixtures: a parallel hydrogen-ethanol feed system and a series methane-ethanol feed system. Increasing the amount of oxygen relative to the fuel resulted in a monotonic increase in ethanol conversion in both reaction zones. The majority of the converted carbon was in the form of ethylene, where the ethanol carbon-carbon bonds stayed intact while the oxygen was removed. Over 90% yield of ethylene was achieved by using methane as a sacrificial fuel. These results demonstrate that noble metals can be successfully paired with zeolites to create a stratified autothermal reactor capable of removing oxygen from biomass model compounds in a compact, continuous flow system that can be configured to have multiple feed inputs, depending on process restrictions. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)

  8. Impact of a decision aid about stratified ovarian cancer risk-management on women's knowledge and intentions: a randomised online experimental survey study.

    Science.gov (United States)

    Meisel, Susanne F; Freeman, Maddie; Waller, Jo; Fraser, Lindsay; Gessler, Sue; Jacobs, Ian; Kalsi, Jatinderpal; Manchanda, Ranjit; Rahman, Belinda; Side, Lucy; Wardle, Jane; Lanceley, Anne; Sanderson, Saskia C

    2017-11-16

    Risk stratification using genetic and other types of personal information could improve current best available approaches to ovarian cancer risk reduction, improving identification of women at increased risk of ovarian cancer and reducing unnecessary interventions for women at lower risk. Amounts of information given to women may influence key informed decision-related outcomes, e.g. knowledge. The primary aim of this study was to compare informed decision-related outcomes between women given one of two versions (gist vs. extended) of a decision aid about stratified ovarian cancer risk-management. This was an experimental survey study comparing the effects of brief (gist) information with lengthier, more detailed (extended) information on cognitions relevant to informed decision-making about participating in risk-stratified ovarian cancer screening. Women with no personal history of ovarian cancer were recruited through an online survey company and randomised to view the gist (n = 512) or extended (n = 519) version of a website-based decision aid and completed an online survey. Primary outcomes were knowledge and intentions. Secondary outcomes included attitudes (values) and decisional conflict. There were no significant differences between the gist and extended conditions in knowledge about ovarian cancer (time*group interaction: F = 0.20, p = 0.66) or intention to participate in ovarian cancer screening based on genetic risk assessment (t(1029) = 0.43, p = 0.67). There were also no between-groups differences in secondary outcomes. In the sample overall (n = 1031), knowledge about ovarian cancer increased from before to after exposure to the decision aid (from 5.71 to 6.77 out of a possible 10: t = 19.04, p type of content for decision aids about stratified ovarian cancer risk-management. This study was registered with the ISRCTN registry; registration number: ISRCTN48627877 .

  9. Interfacial transport characteristics in a gas-liquid or an immiscible liquid-liquid stratified flow

    International Nuclear Information System (INIS)

    Inoue, A.; Aoki, S.; Aritomi, M.; Kozawa, Y.

    1982-01-01

    This paper reviews the interfacial transport characteristics of mass, momentum and energy in gas-liquid and immiscible liquid-liquid stratified flows with a wavy interface, which have been studied in our division. In the experiments, the characteristics of the wave motion and its effect on the turbulence near the interface, as well as overall flow characteristics such as pressure drop and the position of the interface, were investigated in air-water, air-mercury and water-liquid metal stratified flows. In addition, several models based on the mixing length model and a two-equation model of turbulence, with special interfacial boundary conditions in which the wavy surface was regarded as a rough surface corresponding to the wave height, a source of turbulent energy equal to the wave energy, and a turbulence damped by surface tension, were proposed to predict the flow characteristics and the interfacial heat transfer in fully developed and undeveloped stratified flows, and were examined against the experimental data. (author)

  10. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
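
    For orientation, the sketch below shows a plain reversible multiple time-step (r-RESPA) velocity-Verlet loop, in which cheap fast forces take a small inner step and expensive slow forces take the large outer step. It illustrates only the baseline scheme whose time step is resonance-limited; the isokinetic Nose-Hoover machinery that removes that limit is not implemented here, and the forces and masses are toy choices.

```python
# Plain r-RESPA multiple time-step velocity-Verlet integrator for a 1-D toy system.
import numpy as np

def fast_force(x):            # stiff harmonic term (fast, cheap to evaluate)
    return -100.0 * x

def slow_force(x):            # soft anharmonic term (slow, "expensive" to evaluate)
    return -0.1 * x ** 3

def respa_step(x, v, m, dt_outer, n_inner):
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x) / m          # half kick from the slow forces
    for _ in range(n_inner):                         # inner loop: fast forces only
        v += 0.5 * dt_inner * fast_force(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x) / m
    v += 0.5 * dt_outer * slow_force(x) / m          # closing half kick from the slow forces
    return x, v

x, v, m = 1.0, 0.0, 1.0
for _ in range(1000):
    x, v = respa_step(x, v, m, dt_outer=0.05, n_inner=10)
print(x, v)
```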

  11. Simultaneous multicopter-based air sampling and sensing of meteorological variables

    Science.gov (United States)

    Brosy, Caroline; Krampf, Karina; Zeeman, Matthias; Wolf, Benjamin; Junkermann, Wolfgang; Schäfer, Klaus; Emeis, Stefan; Kunstmann, Harald

    2017-08-01

    The state and composition of the lowest part of the planetary boundary layer (PBL), i.e., the atmospheric surface layer (SL), reflects the interactions of external forcing, land surface, vegetation, human influence and the atmosphere. Vertical profiles of atmospheric variables in the SL at high spatial (meters) and temporal (1 Hz and better) resolution increase our understanding of these interactions but are still challenging to measure appropriately. Traditional ground-based observations include towers that often cover only a few measurement heights at a fixed location. At the same time, most remote sensing techniques and aircraft measurements have limitations to achieve sufficient detail close to the ground (up to 50 m). Vertical and horizontal transects of the PBL can be complemented by unmanned aerial vehicles (UAV). Our aim in this case study is to assess the use of a multicopter-type UAV for the spatial sampling of air and simultaneously the sensing of meteorological variables for the study of the surface exchange processes. To this end, a UAV was equipped with onboard air temperature and humidity sensors, while wind conditions were determined from the UAV's flight control sensors. Further, the UAV was used to systematically change the location of a sample inlet connected to a sample tube, allowing the observation of methane abundance using a ground-based analyzer. Vertical methane gradients of about 0.3 ppm were found during stable atmospheric conditions. Our results showed that both methane and meteorological conditions were in agreement with other observations at the site during the ScaleX-2015 campaign. The multicopter-type UAV was capable of simultaneous in situ sensing of meteorological state variables and sampling of air up to 50 m above the surface, which extended the vertical profile height of existing tower-based infrastructure by a factor of 5.

  12. Sensitivity of the Geomagnetic Octupole to a Stably Stratified Layer in the Earth's Core

    Science.gov (United States)

    Yan, C.; Stanley, S.

    2017-12-01

    The presence of a stably stratified layer at the top of the core has long been proposed for Earth, based on evidence from seismology and geomagnetic secular variation. Geodynamo modeling offers a unique window to inspect the properties and dynamics in Earth's core. For example, numerical simulations have shown that magnetic field morphology is sensitive to the presence of stably stratified layers in a planet's core. Here we use the mMoSST numerical dynamo model to investigate the effects of a thin stably stratified layer at the top of the fluid outer core in Earth on the resulting large-scale geomagnetic field morphology. We find that the existence of a stable layer has significant influence on the octupolar component of the magnetic field in our models, whereas the quadrupole doesn't show an obvious trend. This suggests that observations of the geomagnetic field can be applied to provide information of the properties of this plausible stable layer, such as how thick and how stable this layer could be. Furthermore, we have examined whether the dominant thermal signature from mantle tomography at the core-mantle boundary (CMB) (a degree & order 2 spherical harmonic) can influence our results. We found that this heat flux pattern at the CMB has no outstanding effects on the quadrupole and octupole magnetic field components. Our studies suggest that if there is a stably stratified layer at the top of the Earth's core, it must be limited in terms of stability and thickness, in order to be compatible with the observed paleomagnetic record.

  13. Spatial Analysis of Geothermal Resource Potential in New York and Pennsylvania: A Stratified Kriging Approach

    Science.gov (United States)

    Smith, J. D.; Whealton, C. A.; Stedinger, J. R.

    2014-12-01

    Resource assessments for low-grade geothermal applications employ available well temperature measurements to determine if the resource potential is sufficient for supporting district heating opportunities. This study used a compilation of bottomhole temperature (BHT) data from recent unconventional shale oil and gas wells, along with legacy oil, gas, and storage wells, in Pennsylvania (PA) and New York (NY). Our study's goal was to predict the geothermal resource potential and associated uncertainty for the NY-PA region using kriging interpolation. The dataset was scanned for outliers, and some observations were removed. Because these wells were drilled for reasons other than geothermal resource assessment, their spatial density varied widely. An exploratory spatial statistical analysis revealed differences in the spatial structure of the geothermal gradient data (the kriging semi-variogram and its nugget variance, shape, sill, and the degree of anisotropy). As a result, a stratified kriging procedure was adopted to better capture the statistical structure of the data, to generate an interpolated surface, and to quantify the uncertainty of the computed surface. The area was stratified reflecting different physiographic provinces in NY and PA that have geologic properties likely related to variations in the value of the geothermal gradient. The kriging prediction and the variance-of-prediction were determined for each province by the generation of a semi-variogram using only the wells that were located within that province. A leave-one-out cross validation (LOOCV) was conducted as a diagnostic tool. The results of stratified kriging were compared to kriging using the whole region to determine the impact of stratification. The two approaches provided similar predictions of the geothermal gradient. However, the variance-of-prediction was different. The stratified approach is recommended because it gave a more appropriate site-specific characterization of uncertainty
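
    A minimal sketch of the stratified kriging workflow is shown below: fit a separate semi-variogram and kriging model to the wells inside each physiographic province, then predict only at locations within that province. It assumes the pykrige package is available; the coordinates, gradients and two-province split are simulated stand-ins, not the NY-PA data.

```python
# Per-stratum ordinary kriging with pykrige on simulated well data (illustrative).
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(6)
x, y = rng.uniform(0, 100, 500), rng.uniform(0, 100, 500)     # well locations (km)
grad = 20 + 0.05 * x + rng.normal(0, 1.5, 500)                # geothermal gradient (degC/km), toy
province = (x > 50).astype(int)                               # two fake "provinces" split at x = 50

predictions = {}
for p in np.unique(province):
    mask = province == p
    ok = OrdinaryKriging(x[mask], y[mask], grad[mask],
                         variogram_model="spherical")          # per-stratum variogram fit
    xq, yq = rng.uniform(0, 100, 10), rng.uniform(0, 100, 10)  # query points (should lie in province p)
    z_hat, ss = ok.execute("points", xq, yq)                   # prediction and kriging variance
    predictions[p] = (z_hat, ss)

print({int(p): float(z.mean()) for p, (z, ss) in predictions.items()})
```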

  14. Inconsistencies between alcohol screening results based on AUDIT-C scores and reported drinking on the AUDIT-C questions: prevalence in two US national samples

    Science.gov (United States)

    2014-01-01

    Background The AUDIT-C is an extensively validated screen for unhealthy alcohol use (i.e. drinking above recommended limits or alcohol use disorder), which consists of three questions about alcohol consumption. AUDIT-C scores ≥4 points for men and ≥3 for women are considered positive screens based on US validation studies that compared the AUDIT-C to “gold standard” measures of unhealthy alcohol use from independent, detailed interviews. However, results of screening—positive or negative based on AUDIT-C scores—can be inconsistent with reported drinking on the AUDIT-C questions. For example, individuals can screen positive based on the AUDIT-C score while reporting drinking below US recommended limits on the same AUDIT-C. Alternatively, they can screen negative based on the AUDIT-C score while reporting drinking above US recommended limits. Such inconsistencies could complicate interpretation of screening results, but it is unclear how often they occur in practice. Methods This study used AUDIT-C data from respondents who reported past-year drinking on one of two national US surveys: a general population survey (N = 26,610) and a Veterans Health Administration (VA) outpatient survey (N = 467,416). Gender-stratified analyses estimated the prevalence of AUDIT-C screen results—positive or negative screens based on the AUDIT-C score—that were inconsistent with reported drinking (above or below US recommended limits) on the same AUDIT-C. Results Among men who reported drinking, 13.8% and 21.1% of US general population and VA samples, respectively, had screening results based on AUDIT-C scores (positive or negative) that were inconsistent with reported drinking on the AUDIT-C questions (above or below US recommended limits). Among women who reported drinking, 18.3% and 20.7% of US general population and VA samples, respectively, had screening results that were inconsistent with reported drinking. Limitations This study did not include an

  15. Bio-sample detection on paper-based devices with inkjet printer-sprayed reagents.

    Science.gov (United States)

    Liang, Wun-Hong; Chu, Chien-Hung; Yang, Ruey-Jen

    2015-12-01

    The reagent required for bio-sample detection on paper-based analytical devices is generally introduced manually using a pipette. Such an approach is time-consuming, particularly if a large number of devices is required. Automated methods provide a far more convenient solution for large-scale production, but incur a substantial cost. Accordingly, the present study proposes a low-cost method for paper-based analytical devices in which the biochemical reagents are sprayed onto the device directly using a modified commercial inkjet printer. The feasibility of the proposed method is demonstrated by performing aspartate aminotransferase (AST) and alanine aminotransferase (ALT) tests using simple two-dimensional (2D) paper-based devices. In both cases, the reaction process is analyzed using an image-processing-based colorimetric method. The experimental results show that for AST detection within the 0-105 U/l concentration range, the optimal observation time is around four minutes, while for ALT detection in the 0-125 U/l concentration range, the optimal observation time is approximately one minute. Finally, for both samples, the detection performance of the sprayed-reagent analytical devices is insensitive to the glucose concentration. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Improved metamodel-based importance sampling for the performance assessment of radioactive waste repositories

    International Nuclear Information System (INIS)

    Cadini, F.; Gioletta, A.; Zio, E.

    2015-01-01

    In the context of a probabilistic performance assessment of a radioactive waste repository, the estimation of the probability of exceeding the dose threshold set by a regulatory body is a fundamental task. This may become difficult when the probabilities involved are very small, since the classically used sampling-based Monte Carlo methods may become computationally impractical. This issue is further complicated by the fact that the computer codes typically adopted in this context require large computational efforts, both in terms of time and memory. This work proposes an original use of a Monte Carlo-based algorithm for (small) failure probability estimation in the context of the performance assessment of a near-surface radioactive waste repository. The algorithm, developed within the context of structural reliability, makes use of an estimated optimal importance density and a surrogate, kriging-based metamodel approximating the system response. On the basis of an accurate analytic analysis of the algorithm, a modification is proposed which allows further reducing the computational effort by a more effective training of the metamodel. - Highlights: • We tackle uncertainty propagation in a radwaste repository performance assessment. • We improve a kriging-based importance sampling for estimating failure probabilities. • We justify the modification by an analytic, comparative analysis of the algorithms. • The probability of exceeding dose thresholds in radwaste repositories is estimated. • The algorithm is further improved by reducing the number of its free parameters.
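
    The core importance-sampling idea can be illustrated with a generic, self-contained sketch: sample from a density centred near the failure region and reweight by the ratio of the nominal to the importance density. Here the performance function is analytic and the design point is simply stated; in the paper's setting a kriging metamodel would stand in for the expensive repository code. All numbers below are illustrative.

```python
# Importance-sampling estimate of a small failure probability P(G(X) < 0) (illustrative).
import numpy as np
from scipy import stats

def G(x):                                    # performance function: failure when G < 0
    return 6.0 - x[..., 0] - x[..., 1]

rng = np.random.default_rng(7)
nominal = stats.multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))

design_point = np.array([3.0, 3.0])          # most likely failure point for this G (known analytically)
proposal = stats.multivariate_normal(mean=design_point, cov=np.eye(2))

n = 20000
xs = proposal.rvs(size=n, random_state=rng)
w = nominal.pdf(xs) / proposal.pdf(xs)       # importance weights
pf = np.mean((G(xs) < 0) * w)                # failure-probability estimate
print(pf, stats.norm.sf(6.0 / np.sqrt(2.0))) # compare with the exact value, about 1.1e-5
```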

  17. Molecular-based rapid inventories of sympatric diversity: a comparison of DNA barcode clustering methods applied to geography-based vs clade-based sampling of amphibians.

    Science.gov (United States)

    Paz, Andrea; Crawford, Andrew J

    2012-11-01

    Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionary Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama, and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.

  18. Efficient sample preparation method based on solvent-assisted dispersive solid-phase extraction for the trace detection of butachlor in urine and waste water samples.

    Science.gov (United States)

    Aladaghlo, Zolfaghar; Fakhari, Alireza; Behbahani, Mohammad

    2016-10-01

    In this work, an efficient sample preparation method termed solvent-assisted dispersive solid-phase extraction was applied. The method is based on dispersing the sorbent (benzophenone) into the aqueous sample to maximize the interaction surface. In this approach, dispersion of the sorbent at a very low milligram level was achieved by injecting a solution of the sorbent and disperser solvent into the aqueous sample, creating a cloudy suspension of the sorbent in the bulk aqueous sample. After pre-concentration of the butachlor, the cloudy solution was centrifuged, and the butachlor in the sediment phase was dissolved in ethanol and determined by gas chromatography with flame ionization detection. Under the optimized conditions (solution pH = 7.0, sorbent: benzophenone, 2%, disperser solvent: ethanol, 500 μL, centrifuged at 4000 rpm for 3 min), the method detection limit for butachlor was 2, 3 and 3 μg/L for distilled water, waste water and urine samples, respectively. Furthermore, the preconcentration factor was 198.8, 175.0 and 174.2 in distilled water, waste water and urine samples, respectively. Solvent-assisted dispersive solid-phase extraction was successfully used for the trace monitoring of butachlor in urine and waste water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method firstly uses three FFT samples to determine the frequency searching scope; then – besides the frequency – the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of three-parameter sine fitting. By setting reasonable stopping conditions or a maximum number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated by different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
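    A minimal sketch of this kind of estimation scheme, as a NumPy implementation of my own rather than the authors' code: the FFT peak and its two neighbours bracket the frequency, and a golden-section search then minimizes the least-squares error of a three-parameter (amplitude, phase, DC) sine fit. Function names and test values are illustrative.

    ```python
    """Sketch: DFT-guided golden-section search with three-parameter sine fitting."""
    import numpy as np

    def sine_fit_residual(freq, t, x):
        # For a candidate frequency, fit amplitude, phase and DC offset by linear
        # least squares and return the residual sum of squares.
        A = np.column_stack([np.cos(2 * np.pi * freq * t),
                             np.sin(2 * np.pi * freq * t),
                             np.ones_like(t)])
        coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
        return np.sum((x - A @ coeffs) ** 2)

    def estimate_frequency(x, fs, tol=1e-6):
        n = len(x)
        t = np.arange(n) / fs
        # Step 1: FFT peak and its neighbours bracket the frequency search scope.
        spec = np.abs(np.fft.rfft(x))
        k = np.argmax(spec[1:]) + 1                    # skip the DC bin
        a, b = (k - 1) * fs / n, (k + 1) * fs / n
        # Step 2: golden-section search minimizing the LS sine-fit error.
        gr = (np.sqrt(5) - 1) / 2
        c, d = b - gr * (b - a), a + gr * (b - a)
        while b - a > tol:
            if sine_fit_residual(c, t, x) < sine_fit_residual(d, t, x):
                b = d
            else:
                a = c
            c, d = b - gr * (b - a), a + gr * (b - a)
        return 0.5 * (a + b)

    if __name__ == "__main__":
        fs, n, f0 = 1000.0, 512, 123.4
        t = np.arange(n) / fs
        x = 1.7 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.5 + 0.05 * np.random.randn(n)
        print(estimate_frequency(x, fs))               # close to 123.4 Hz
    ```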

  20. Recent trends in sorption-based sample preparation and liquid chromatography techniques for food analysis.

    Science.gov (United States)

    V Soares Maciel, Edvaldo; de Toffoli, Ana Lúcia; Lanças, Fernando Mauro

    2018-04-20

    The accelerated growth of the world's population has increased food consumption, demanding more rigor in the control of residues and contaminants in food-based products marketed for human consumption. In view of the complexity of most food matrices, including fruits, vegetables, different types of meat and beverages, among others, a sample preparation step is important to provide more reliable results when combined with HPLC separations. An adequate sample preparation step before the chromatographic analysis is mandatory for obtaining higher precision and accuracy and for improving the extraction of the target analytes, one of the priorities in analytical chemistry. The recent discovery of new materials such as ionic liquids, graphene-derived materials, molecularly imprinted polymers, restricted access media, magnetic nanoparticles, and carbonaceous nanomaterials has provided highly sensitive and selective results in an extensive variety of applications. These materials, as well as their several possible combinations, have been demonstrated to be highly appropriate for the extraction of different analytes from complex samples such as food products. The main characteristics and applications of these new materials in food analysis are presented and discussed in this paper. Another topic discussed in this review covers the main advantages and limitations of sample preparation microtechniques, as well as their off-line and on-line combination with HPLC for food analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2018-02-19

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values of the method parameters for which the average computational cost is minimized at a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a single-loop Monte Carlo method that uses the Laplace approximation of the return value of the inner loop. The first demonstration example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
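    For context, the classical double-loop (nested) Monte Carlo estimator that this method accelerates can be sketched on a linear-Gaussian toy model with a known analytic expected information gain. The model, sample sizes and names below are my own illustration, not the authors' implementation; in this well-scaled toy case the underflow issue mentioned above does not arise.

    ```python
    """Double-loop Monte Carlo estimator of expected information gain (toy model)."""
    import numpy as np

    rng = np.random.default_rng(0)
    sigma_p, sigma_n = 1.0, 0.5              # prior and observation-noise std devs
    N_outer, M_inner = 2000, 2000

    def likelihood(y, theta):
        # Gaussian likelihood p(y | theta) for the model y = theta + noise
        return np.exp(-0.5 * ((y - theta) / sigma_n) ** 2) / (sigma_n * np.sqrt(2 * np.pi))

    eig = 0.0
    for _ in range(N_outer):
        theta = rng.normal(0.0, sigma_p)             # outer draw from the prior
        y = rng.normal(theta, sigma_n)               # synthetic observation
        inner = rng.normal(0.0, sigma_p, M_inner)    # inner draws for the evidence
        evidence = likelihood(y, inner).mean()       # p(y) by plain Monte Carlo
        eig += np.log(likelihood(y, theta)) - np.log(evidence)
    eig /= N_outer

    print(f"double-loop MC EIG = {eig:.3f}, "
          f"analytic = {0.5 * np.log(1 + (sigma_p / sigma_n) ** 2):.3f}")
    ```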

  2. Characterization of acid-base properties of two gibbsite samples in the context of literature results.

    Science.gov (United States)

    Adekola, F; Fédoroff, M; Geckeis, H; Kupcik, T; Lefèvre, G; Lützenkirchen, J; Plaschke, M; Preocanin, T; Rabung, T; Schild, D

    2011-02-01

    Two different gibbsites, one commercial and one synthesized according to a frequently applied recipe, were studied in an interlaboratory attempt to gain insight into the origin of widely differing reports on gibbsite acid-base surface properties. In addition to a thorough characterization of the two solids, several methods relevant to the interfacial charging were applied to the two samples: potentiometric titrations to obtain the "apparent" proton related surface charge density, zeta-potential measurements characterizing the potential at the plane of shear, and Attenuated Total Reflection Infrared Spectroscopy (ATR-IR) to obtain information on the variation of counter-ion adsorption with pH (using nitrate as a probe). Values of the IEP at 9-10 and 11.2-11.3 were found for the commercial and synthesized sample, respectively. The experimental observations revealed huge differences in the charging behavior between the two samples. Such differences also appeared in the titration kinetics. A detailed literature review revealed similar disparity with no apparent systematic trend. While previously the waiting time between additions had been advocated to explain such differences among synthesized samples, our results do not support such a conclusion. Instead, we find that the amount of titrant added in each aliquot appears to have a significant influence on the titration curves. While a number of observations can be related to one another, several open questions and contradictions remain. We suggest various processes that can explain the observed behavior. Copyright © 2010 Elsevier Inc. All rights reserved.

  3. Estimated ventricle size using Evans index: reference values from a population-based sample.

    Science.gov (United States)

    Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C

    2017-03-01

    Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.

  4. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of optimization. However, earlier approaches have drawbacks: the optimization loop comprises three phases and relies on empirical parameters. We propose a unified sampling criterion to simplify the algorithm and to achieve the global optimum of constrained problems without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which of the infill sampling criterion and the boundary sampling criterion is dominant. Also, the method guarantees the accuracy of the surrogate model because, unlike super-EGO, the sample points are not clustered within extremely small regions. The performance of the proposed method, including the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.

  5. Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2012-05-01

    The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is in a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that, to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within the binding pocket and accurate assignment of ligand and protein partial charges.

  6. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    Science.gov (United States)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.

  7. Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.

    Science.gov (United States)

    Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K

    2016-01-01

    The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions, the estimates depend on the study population. It is important to be aware of this issue and to define a certain population under surveillance in order to obtain consistent results when applying serological measures for public health purposes.

  8. Final Report: Laser-Based Optical Trap for Remote Sampling of Interplanetary and Atmospheric Particulate Matter

    Science.gov (United States)

    Stysley, Paul

    2016-01-01

    Applicability to Early Stage Innovation (NIAC): Cutting-edge and innovative technologies are needed to achieve the demanding requirements for NASA origin missions that require sample collection, as laid out in the NRC Decadal Survey. This proposal focused on fully understanding the state of remote laser optical trapping techniques for capturing particles and returning them to a target site. In future missions, a laser-based optical trapping system could be deployed on a lander that would then target particles in the lower atmosphere and deliver them to the main instrument for analysis, providing remote access to otherwise inaccessible samples. Alternatively, for a planetary mission the laser could combine ablation and trapping capabilities on targets typically too far away or too hard for traditional drilling sampling systems. For an interstellar mission, a remote laser system could gather particles continuously at a safe distance; this would avoid the necessity of having a spacecraft fly through a target cloud such as a comet tail. If properly designed and implemented, a laser-based optical trapping system could fundamentally change the way scientists design and implement NASA missions that require mass spectrometry and particle collection.

  9. Comparison between conservative perturbation and sampling based methods for propagation of Non-Neutronic uncertainties

    International Nuclear Information System (INIS)

    Campolina, Daniel de A.M.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2013-01-01

    For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a sampling-based method is recent because of the huge computational effort required. In this work a sample space of MCNP calculations was used as a black-box model to propagate the uncertainty of system parameters. The efficiency of the method was compared to that of a conservative method. The uncertainties considered in the reactor input parameters were non-neutronic, including geometry dimensions and density. The effect of the uncertainties on the effective multiplication factor of the system was analyzed with respect to the possibility of including many uncertain parameters in the same input. If the case includes more than 46 parameters with uncertainty in the same input, the sampling-based method proves to be more efficient than the conservative method. (author)

  10. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    Science.gov (United States)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to a relatively small area of hundreds of micrometers with very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method makes use of Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimation of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining, among the candidates, the position most likely to lie outside the required tolerance zone, and is then inserted to update the model iteratively. Both simulations on the nominal surface and the manufactured surface have been conducted on nano-structure surfaces to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large-area structured surfaces.
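    A rough sketch of this adaptive selection rule, assuming a generic GP regression library and a 1-D stand-in "surface": at each iteration the candidate point with the highest posterior probability of lying outside the tolerance zone is measured next and added to the model. The names, test function and tolerance are illustrative only.

    ```python
    """Gaussian-process-based adaptive sampling (illustrative 1-D stand-in)."""
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def surface(x):                          # stand-in for the measuring instrument
        return np.sin(3 * x) * np.exp(-0.3 * x)

    tol = 0.8                                # +/- tolerance zone on surface height
    candidates = np.linspace(0.0, 5.0, 200).reshape(-1, 1)
    X = np.array([[0.0], [2.5], [5.0]])      # initial measured points
    y = surface(X).ravel()

    for _ in range(10):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      alpha=1e-6, normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        # posterior probability that the true surface lies outside [-tol, +tol]
        p_out = norm.cdf(-(tol - mu) / sigma) + norm.cdf(-(tol + mu) / sigma)
        nxt = candidates[np.argmax(p_out)]   # most likely out-of-tolerance point
        X = np.vstack([X, nxt])
        y = np.append(y, surface(nxt)[0])

    print(f"{len(X)} points sampled adaptively")
    ```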

  11. Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing

    Energy Technology Data Exchange (ETDEWEB)

    Boardman, Beth Leigh [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-12

    The common theme of this dissertation is sampling-based motion planning, with the two key contributions being in the areas of replanning and spatial load balancing for robotic systems. Here, we begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*), and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision-free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents’ positions and velocities. Given that a solution exists, we prove that livelocks and deadlocks will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step. This new maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited range Graph-based Spatial Load Balancing (GSLB) algorithm which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents. The analysis
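    As a point of reference for the planners mentioned above, a minimal 2-D RRT (the plain variant, without the asymptotically optimal rewiring of RRT*, obstacles, or the Goal Tree/SBCA extensions of the dissertation) can be sketched as follows; all names and parameters are illustrative.

    ```python
    """Minimal 2-D RRT sketch: sampling-based tree growth toward a goal."""
    import random, math

    def rrt(start, goal, step=0.5, max_iter=2000, goal_tol=0.5, bounds=(0.0, 10.0)):
        nodes = [start]
        parent = {start: None}
        for _ in range(max_iter):
            # Sample a random configuration (bias 5% of draws toward the goal).
            q = goal if random.random() < 0.05 else (
                random.uniform(*bounds), random.uniform(*bounds))
            near = min(nodes, key=lambda n: math.dist(n, q))   # nearest tree node
            d = math.dist(near, q)
            if d == 0:
                continue
            new = (near[0] + step * (q[0] - near[0]) / d,      # steer by one step
                   near[1] + step * (q[1] - near[1]) / d)
            nodes.append(new)
            parent[new] = near
            if math.dist(new, goal) < goal_tol:                # goal reached
                path = [new]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
        return None

    print(len(rrt((1.0, 1.0), (9.0, 9.0)) or []), "waypoints")
    ```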

  12. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which is also dependent on the robot positioning in the Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process modeled on its components: product, process and resource; and by automatically configuring a sample-based motion problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  13. The Child Behavior Checklist for Ages 1.5-5 (CBCL/1(1/2)-5): Assessment and analysis of parent- and caregiver-reported problems in a population-based sample of Danish preschool children

    DEFF Research Database (Denmark)

    Kristensen, Solvejg; Henriksen, Tine Brink; Bilenberg, Niels

    2010-01-01

    Background: Psychometric instruments are used increasingly within research and clinical settings, and therefore standardization has become an important prerequisite, even for investigating very young children. Currently, there are no standardized psychometric instruments available for assessment of preschool children in Denmark. Aims: The aim was to achieve Danish national norm scores for the Child Behavior Checklist for Ages 1(1/2)-5 (CBCL/1(1/2)-5) and the Caregiver Report Form (C-TRF). Methods: The study was based on an age- and gender-stratified cohort sample of 1750 children aged 1(1/2)-5 years born at Aarhus University Hospital, Denmark. The CBCL/1(1/2)-5 and C-TRF were mailed to parents, who were asked to pass on the C-TRF to the preschool caregiver. The national standard register data gave access to information on socio-economic status, family type, ethnicity and parental educational level...

  14. Gel dosimetry - a laser based 3D scanner for gel samples - research in India

    Energy Technology Data Exchange (ETDEWEB)

    Widmer, Johannes [Institut fuer Angewandte Photophysik, TU Dresden (Germany); Photonics Division, VIT University, Vellore, Tamil Nadu (India); Dhiviyaraj Kalaiselven, Senthil Kumar [Photonics Division, VIT University, Vellore, Tamil Nadu (India); Department of Therapeutic Radiology, University of Minnesota, Minneapolis (United States); James, Jebaseelan Samuel [Photonics Division, VIT University, Vellore, Tamil Nadu (India)

    2013-07-01

    A laser based 3D scanner is developed to take tomography images of partly transparent samples. The scanner is optimized to characterize gel samples from spatially resolved dosimetry measurements. The resulting device should be suitably designed to be constructed in India. This gave me valuable insight into the scientific and technological environment of the country and made me find my way through a quite different culture of research and commerce, within and beyond the scientific context of the university. The project was implemented during a nine months stay at the Vellore Institute of Technology University in Vellore, Tamil Nadu, India, in co-operation with the Christian Medical College, Vellore, in 2006/07. It was conducted within the framework of existing research activities of the host university.

  15. The relationship between orbital, earth-based, and sample data for lunar landing sites

    Science.gov (United States)

    Clark, P. E.; Hawke, B. R.; Basu, A.

    1990-01-01

    Results are reported of a detailed examination of data available for the Apollo lunar landing sites, including the Apollo orbital measurements of six major elements derived from XRF and gamma-ray instruments and geochemical parameters derived from earth-based spectral reflectivity data. Wherever orbital coverage for Apollo landing sites exists, the remote data were correlated with geochemical data derived from the soil sample averages for major geological units and the major rock components associated with these units. Discrepancies were observed between the remote and the soil-analysis elemental concentration data, which were apparently due to the differences in the extent of exposure of geological units, and, hence, major rock components, in the area sampled. Differences were observed in signal depths between various orbital experiments, which may provide a mechanism for explaining differences between the XRF and other landing-site data.

  16. Effects of sampling conditions on DNA-based estimates of American black bear abundance

    Science.gov (United States)

    Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. The capture probability

  17. SPR based immunosensor for detection of Legionella pneumophila in water samples

    Science.gov (United States)

    Enrico, De Lorenzis; Manera, Maria G.; Montagna, Giovanni; Cimaglia, Fabio; Chiesa, Maurizio; Poltronieri, Palmiro; Santino, Angelo; Rella, Roberto

    2013-05-01

    Detection of legionellae by water sampling is an important factor in epidemiological investigations of Legionnaires' disease and its prevention. To avoid the labor-intensive problems of conventional methods, an alternative, highly sensitive and simple method is proposed for detecting L. pneumophila in aqueous samples. A compact Surface Plasmon Resonance (SPR) instrumentation prototype, provided with proper microfluidic tools, was built. The developed immunosensor is capable of dynamically following the binding between antigens and the corresponding antibody molecules immobilized on the SPR sensor surface. The immobilization strategy used in this work includes an efficient step aimed at orienting the antibodies on the sensor surface. The feasibility of integrating SPR-based biosensing setups with microfluidic technologies, resulting in a low-cost and portable biosensor, is demonstrated.

  18. Surface plasmon resonance based sensing of different chemical and biological samples using admittance loci method

    Science.gov (United States)

    Brahmachari, Kaushik; Ghosh, Sharmila; Ray, Mina

    2013-06-01

    The admittance loci method plays an important role in the design of multilayer thin-film structures. In this paper, the admittance loci method has been explored theoretically for sensing of various chemical and biological samples based on the surface plasmon resonance (SPR) phenomenon. A dielectric multilayer structure consisting of a borosilicate glass (BSG) substrate, calcium fluoride (CaF2) and zirconium dioxide (ZrO2) along with different dielectric layers has been investigated. Moreover, admittance loci as well as SPR curves of a metal-dielectric multilayer structure consisting of the BSG substrate, a gold metal film and various dielectric samples have been simulated in the MATLAB environment. To validate the proposed simulation results, calibration curves have also been provided.

  19. A dansyl based fluorescence chemosensor for Hg2+ and its application in the complicated environment samples

    Science.gov (United States)

    Zhou, Shuai; Zhou, Ze-Quan; Zhao, Xuan-Xuan; Xiao, Yu-Hao; Xi, Gang; Liu, Jin-Ting; Zhao, Bao-Xiang

    2015-09-01

    We have developed a novel fluorescent chemosensor (DAM) based on dansyl and morpholine units for the detection of the mercury ion with excellent selectivity and sensitivity. In the presence of Hg2+ in a mixed solution of HEPES buffer (pH 7.5, 20 mM) and MeCN (2/8, v/v) at room temperature, the fluorescence of DAM was almost completely quenched from green to colorless with a fast response time. Moreover, DAM also showed excellent anti-interference capability even in the presence of a large amount of interfering ions. It is worth noting that DAM could be used to detect Hg2+ specifically in Yellow River samples, which significantly implied the potential applications of DAM in complicated environment samples.

  20. A dansyl based fluorescence chemosensor for Hg(2+) and its application in the complicated environment samples.

    Science.gov (United States)

    Zhou, Shuai; Zhou, Ze-Quan; Zhao, Xuan-Xuan; Xiao, Yu-Hao; Xi, Gang; Liu, Jin-Ting; Zhao, Bao-Xiang

    2015-09-05

    We have developed a novel fluorescent chemosensor (DAM) based on dansyl and morpholine units for the detection of the mercury ion with excellent selectivity and sensitivity. In the presence of Hg(2+) in a mixed solution of HEPES buffer (pH 7.5, 20 mM) and MeCN (2/8, v/v) at room temperature, the fluorescence of DAM was almost completely quenched from green to colorless with a fast response time. Moreover, DAM also showed excellent anti-interference capability even in the presence of a large amount of interfering ions. It is worth noting that DAM could be used to detect Hg(2+) specifically in Yellow River samples, which significantly implied the potential applications of DAM in complicated environment samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    International Nuclear Information System (INIS)

    Zhu, T.

    2015-01-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice of routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently in 2011, the capability of calculating continuous-energy k_eff sensitivity to nuclear data was demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX’s ACE format of nuclear data, which also gives NUSS compatibility with MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same
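    The sampling approach itself is generic and can be illustrated with a toy black-box propagation, assuming a one-group k-infinity formula in place of MCNPX and invented uncertainty magnitudes: inputs are drawn from presumed distributions, the model is re-run for each draw, and the output spread is reported as the nuclear-data-induced uncertainty.

    ```python
    """Toy sketch of sampling-based uncertainty propagation through a black-box model."""
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 500

    # presumed (invented) 1-sigma relative uncertainties on two "nuclear data" inputs
    nu_sigma_f = rng.normal(0.0065, 0.0065 * 0.02, n_samples)   # production cross section
    sigma_a    = rng.normal(0.0060, 0.0060 * 0.03, n_samples)   # absorption cross section

    def black_box(nsf, sa):
        # toy stand-in for the transport code: infinite-medium multiplication factor
        return nsf / sa

    k = black_box(nu_sigma_f, sigma_a)
    print(f"mean k = {k.mean():.4f}, sampling-based 1-sigma uncertainty = {k.std(ddof=1):.4f}")
    ```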

  2. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T.

    2015-07-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice of routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently in 2011, the capability of calculating continuous-energy k_eff sensitivity to nuclear data was demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX’s ACE format of nuclear data, which also gives NUSS compatibility with MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same

  3. Pulsed photothermal profiling of water-based samples using a spectrally composite reconstruction approach

    International Nuclear Information System (INIS)

    Majaron, B; Milanic, M

    2010-01-01

    Pulsed photothermal profiling involves reconstruction of temperature depth profile induced in a layered sample by single-pulse laser exposure, based on transient change in mid-infrared (IR) emission from its surface. Earlier studies have indicated that in watery tissues, featuring a pronounced spectral variation of mid-IR absorption coefficient, analysis of broadband radiometric signals within the customary monochromatic approximation adversely affects profiling accuracy. We present here an experimental comparison of pulsed photothermal profiling in layered agar gel samples utilizing a spectrally composite kernel matrix vs. the customary approach. By utilizing a custom reconstruction code, the augmented approach reduces broadening of individual temperature peaks to 14% of the absorber depth, in contrast to 21% obtained with the customary approach.

  4. Two-phase pressurized thermal shock investigations using a 3D two-fluid modeling of stratified flow with condensation

    International Nuclear Information System (INIS)

    Yao, W.; Coste, P.; Bestion, D.; Boucker, M.

    2003-01-01

    In this paper, a local 3D two-fluid model for a turbulent stratified flow with/without condensation, which can be used to predict two-phase pressurized thermal shock, is presented. A modified turbulent K-ε model is proposed with turbulence production induced by interfacial friction. A model of interfacial friction based on an interfacial sublayer concept and three interfacial heat transfer models, namely, a model based on the small eddies controlled surface renewal concept (HDM, Hughes and Duffey, 1991), a model based on the asymptotic behavior of the Eddy Viscosity (EVM), and a model based on the Interfacial Sublayer concept (ISM), are implemented into a preliminary version of the NEPTUNE code based on the 3D module of the CATHARE code. As a first step to apply the above models to predict the two-phase thermal shock, the models are evaluated by comparison of calculated profiles with several experiments: a turbulent air-water stratified flow without interfacial heat transfer; a turbulent steam-water stratified flow with condensation; turbulence induced by the impact of a water jet in a water pool. The prediction results agree well with the experimental data. In addition, the comparison of the three interfacial heat transfer models shows that EVM and ISM gave better prediction results while HDM highly overestimated the interfacial heat transfer compared to the experimental data of a steam-water stratified flow

  5. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  6. A genetic algorithm-based framework for wavelength selection on sample categorization.

    Science.gov (United States)

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to assign samples to the proper classes. In the next step, the selected intervals were refined through the genetic algorithm (GA) by identifying a limited number of wavelengths from the previously selected intervals with the aim of maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to insert irrelevant wavelengths into the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
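    The interval-KNN plus GA refinement idea can be sketched roughly as follows, assuming synthetic spectra, a plain KNN cross-validation score as the GA fitness, and simple one-point crossover with bit-flip mutation; none of the settings below are taken from the paper.

    ```python
    """Toy GA-based wavelength (feature) selection scored by KNN cross-validation."""
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_samples, n_waves = 120, 60
    X = rng.normal(size=(n_samples, n_waves))                # stand-in "spectra"
    y = (X[:, 5] + X[:, 17] - X[:, 42] > 0).astype(int)      # three informative bands

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=3)
        return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=5).mean()

    pop = rng.integers(0, 2, size=(20, n_waves))             # random initial population
    for _ in range(15):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-8:]]               # keep the 8 fittest subsets
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(8, size=2)]
            cut = rng.integers(1, n_waves)                   # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_waves) < 0.02                # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected wavelength indices:", np.flatnonzero(best))
    ```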

  7. Geochemical signature of land-based activities in Caribbean coral surface samples

    Science.gov (United States)

    Prouty, N.G.; Hughen, K.A.; Carilli, J.

    2008-01-01

    Anthropogenic threats, such as increased sedimentation, agrochemical run-off, coastal development, tourism, and overfishing, are of great concern to the Mesoamerican Caribbean Reef System (MACR). Trace metals in corals can be used to quantify and monitor the impact of these land-based activities. Surface coral samples from the MACR were investigated for trace metal signatures resulting from relative differences in water quality. Samples were analyzed at three spatial scales (colony, reef, and regional) as part of a hierarchical multi-scale survey. A primary goal of the paper is to elucidate the extrapolation of information between fine-scale variation at the colony or reef scale and broad-scale patterns at the regional scale. Of the 18 metals measured, five yielded statistical differences at the colony and/or reef scale, suggesting fine-scale spatial heterogeneity not conducive to regional interpretation. Five metals yielded a statistical difference at the regional scale with an absence of a statistical difference at either the colony or reef scale. These metals are barium (Ba), manganese (Mn), chromium (Cr), copper (Cu), and antimony (Sb). The most robust geochemical indicators of land-based activities are coral Ba and Mn concentrations, which are elevated in samples from the southern region of the Gulf of Honduras relative to those from the Turneffe Islands. These findings are consistent with the occurrence of the most significant watersheds in the MACR from southern Belize to Honduras, which contribute sediment-laden freshwater to the coastal zone primarily as a result of human alteration to the landscape (e.g., deforestation and agricultural practices). Elevated levels of Cu and Sb were found in samples from Honduras and may be linked to industrial shipping activities where copper-antimony additives are commonly used in antifouling paints. Results from this study strongly demonstrate the impact of terrestrial runoff and anthropogenic activities on coastal water

  8. A copula-based sampling method for data-driven prognostics

    International Nuclear Information System (INIS)

    Xi, Zhimin; Jing, Rong; Wang, Pingfeng; Hu, Chao

    2014-01-01

    This paper develops a Copula-based sampling method for data-driven prognostics. The method essentially consists of an offline training process and an online prediction process: (i) the offline training process builds a statistical relationship between the failure time and the time realizations at specified degradation levels on the basis of off-line training data sets; and (ii) the online prediction process identifies probable failure times for online testing units based on the statistical model constructed in the offline process and the online testing data. Our contributions in this paper are three-fold, namely the definition of a generic health index system to quantify the health degradation of an engineering system, the construction of a Copula-based statistical model to learn the statistical relationship between the failure time and the time realizations at specified degradation levels, and the development of a simulation-based approach for the prediction of remaining useful life (RUL). Two engineering case studies, namely the electric cooling fan health prognostics and the 2008 IEEE PHM challenge problem, are employed to demonstrate the effectiveness of the proposed methodology. - Highlights: • We develop a novel mechanism for data-driven prognostics. • A generic health index system quantifies health degradation of engineering systems. • Off-line training model is constructed based on the Bayesian Copula model. • Remaining useful life is predicted from a simulation-based approach
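    A hedged sketch of the general copula idea (not the authors' specific health-index model): a Gaussian copula links the time a training unit reaches a fixed degradation level to its failure time, and the conditional copula then yields failure-time, and hence remaining-useful-life, samples for a new unit. Data, marginals and names are synthetic.

    ```python
    """Gaussian-copula sketch: conditional failure-time sampling for prognostics."""
    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(1)
    # Synthetic training data: failure time correlated with time to 50% degradation
    t_deg  = rng.gamma(shape=20, scale=5, size=300)
    t_fail = 1.8 * t_deg + rng.normal(0, 8, size=300)

    def to_normal_scores(x):
        # map a margin to standard-normal scores via its empirical CDF
        return norm.ppf(rankdata(x) / (len(x) + 1))

    z = np.column_stack([to_normal_scores(t_deg), to_normal_scores(t_fail)])
    rho = np.corrcoef(z.T)[0, 1]              # latent (copula) correlation

    # Online prediction for a new unit observed to reach 50% degradation at t = 110
    t_deg_new = 110.0
    u_new = np.searchsorted(np.sort(t_deg), t_deg_new) / (len(t_deg) + 1)
    z_new = norm.ppf(u_new)

    # Conditional Gaussian copula: z_fail | z_deg ~ N(rho * z_deg, 1 - rho**2)
    z_fail = rng.normal(rho * z_new, np.sqrt(1 - rho**2), size=2000)
    t_fail_samples = np.quantile(t_fail, norm.cdf(z_fail))   # back through the margin

    rul = t_fail_samples - t_deg_new          # remaining-useful-life samples
    print(f"median RUL = {np.median(rul):.1f}, 90% interval = "
          f"[{np.quantile(rul, 0.05):.1f}, {np.quantile(rul, 0.95):.1f}]")
    ```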

  9. Quantifying Tip-Sample Interactions in Vacuum Using Cantilever-Based Sensors: An Analysis

    Science.gov (United States)

    Dagdeviren, Omur E.; Zhou, Chao; Altman, Eric I.; Schwarz, Udo D.

    2018-04-01

    Atomic force microscopy is an analytical characterization method that is able to image a sample's surface topography at high resolution while simultaneously probing a variety of different sample properties. Such properties include tip-sample interactions, the local measurement of which has gained much popularity in recent years. To this end, either the oscillation frequency or the oscillation amplitude and phase of the vibrating force-sensing cantilever are recorded as a function of tip-sample distance and subsequently converted into quantitative values for the force or interaction potential. Here, we theoretically and experimentally show that the force law obtained from such data acquired under vacuum conditions using the most commonly applied methods may deviate more than previously assumed from the actual interaction when the oscillation amplitude of the probe is of the order of the decay length of the force near the surface. This may result in a non-negligible error if correct absolute values are of importance. Caused by approximations made in the development of the mathematical reconstruction procedures, the related inaccuracies can be effectively suppressed by using oscillation amplitudes sufficiently larger than the decay length. To facilitate efficient data acquisition, we propose a technique that includes modulating the drive amplitude at a constant height from the surface while monitoring the oscillation amplitude and phase. Ultimately, such amplitude-sweep-based force spectroscopy enables shorter data acquisition times and increased accuracy for quantitative chemical characterization compared to standard approaches that vary the tip-sample distance. An additional advantage is that since no feedback loop is active while executing the amplitude sweep, the force can be consistently recovered deep into the repulsive regime.

  10. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
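    The underlying point-placement rule is simple enough to sketch: an equidistant grid with a single random offset inside the region of interest, which is what makes the systematic sample unbiased. The rectangular ROI, spacing and function name below are illustrative assumptions, not RandomSpot's actual code.

    ```python
    """Sketch of systematic random sampling: an offset equidistant grid in a ROI."""
    import random

    def systematic_random_points(roi, spacing):
        """roi = (x0, y0, x1, y1) in slide coordinates; spacing in the same units."""
        x0, y0, x1, y1 = roi
        ox = random.uniform(0, spacing)      # one random offset keeps the grid unbiased
        oy = random.uniform(0, spacing)
        points = []
        y = y0 + oy
        while y < y1:
            x = x0 + ox
            while x < x1:
                points.append((x, y))
                x += spacing
            y += spacing
        return points

    pts = systematic_random_points((0, 0, 20000, 15000), spacing=1000)
    print(len(pts), "sample points")  # each point is then classified by a pathologist
    ```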

  11. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is able to compute the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
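    A simple given-data estimator conveys the idea of computing the first-order index directly from existing input-output samples: bin the samples by the input of interest, estimate the conditional means per bin, and take the variance of those means over the total variance. This binning estimator is a generic stand-in, not the paper's modularized algorithm.

    ```python
    """Given-data (binning) estimator of the first-order Sobol' index."""
    import numpy as np

    def first_order_sobol(x, y, n_bins=20):
        """x: (N,) samples of one model input; y: (N,) corresponding model outputs."""
        edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
        weights = np.array([(idx == b).mean() for b in range(n_bins)])
        var_of_cond_mean = np.sum(weights * (cond_means - y.mean()) ** 2)
        return var_of_cond_mean / y.var()

    rng = np.random.default_rng(2)
    X = rng.uniform(0.0, 1.0, size=(20000, 2))
    y = X[:, 0] + 2.0 * X[:, 1]                 # analytic S1 for x1 is 1/5 = 0.2
    print(first_order_sobol(X[:, 0], y))        # close to 0.2
    ```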

  12. Outcomes of couples with infidelity in a community-based sample of couple therapy.

    Science.gov (United States)

    Atkins, David C; Marín, Rebeca A; Lo, Tracy T Y; Klann, Notker; Hahlweg, Kurt

    2010-04-01

    Infidelity is an often cited problem for couples seeking therapy, but the research literature to date is very limited on couple therapy outcomes when infidelity is a problem. The current study is a secondary analysis of a community-based sample of couple therapy in Germany and Austria. Outcomes for 145 couples who reported infidelity as a problem in their relationship were compared with 385 couples who sought therapy for other reasons. Analyses based on hierarchical linear modeling revealed that infidelity couples were significantly more distressed and reported more depressive symptoms at the start of therapy but continued improving through the end of therapy and to 6 months posttherapy. At the follow-up assessment, infidelity couples were not statistically distinguishable from non-infidelity couples, replicating previous research. Sexual dissatisfaction did not depend on infidelity status. Although there was substantial missing data, sensitivity analyses suggested that the primary findings were not due to missing data. The current findings based on a large community sample replicated previous work from an efficacy trial and show generally optimistic results for couples in which there has been an affair. 2010 APA, all rights reserved

  13. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
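    A toy version of a shrinkage-based diagonal Hotelling-type statistic, with the gene-wise variances shrunk toward their median, is sketched below; the shrinkage rule, weight and calibration are simplified assumptions of mine, not the estimator or null distribution proposed in the paper.

    ```python
    """One-sample diagonal Hotelling-type statistic with simple variance shrinkage."""
    import numpy as np

    def diag_hotelling_shrunk(X, mu0, lam=0.5):
        """X: (n, p) expression matrix; mu0: (p,) null mean; lam: shrinkage weight."""
        n, p = X.shape
        xbar = X.mean(axis=0)
        s2 = X.var(axis=0, ddof=1)
        s2_shrunk = lam * np.median(s2) + (1 - lam) * s2   # stabilizes tiny variances
        return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

    rng = np.random.default_rng(3)
    n, p = 10, 200                      # "large p, small n" setting
    X = rng.normal(0.0, 1.0, size=(n, p))
    print(diag_hotelling_shrunk(X, mu0=np.zeros(p)))
    ```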

  14. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-01-01

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.

  15. Ionic liquid-based dispersive microextraction of nitro toluenes in water samples

    International Nuclear Information System (INIS)

    Berton, Paula; Regmi, Bishnu P.; Spivak, David A.; Warner, Isiah M.

    2014-01-01

    We describe a method for dispersive liquid-liquid microextraction of nitrotoluene-based compounds. This method is based on use of the room temperature ionic liquid (RTIL) 1-hexyl-4-methylpyridinium bis(trifluoromethylsulfonyl)imide as the accepting phase, and is shown to work well for extraction of 4-nitrotoluene, 2,4-dinitrotoluene, and 2,4,6-trinitrotoluene. Separation and subsequent detection of analytes were accomplished via HPLC with UV detection. Several parameters that influence the efficiency of the extraction were optimized using experimental design. In this regard, a Plackett–Burman design was used for initial screening, followed by a central composite design to further optimize the influencing variables. For a 5-mL water sample, the optimized IL-DLLME procedure requires 26 mg of the RTIL as extraction solvent and 680 μL of methanol as the dispersant. Under optimum conditions, limits of detection (LODs) are lower than 1.05 μg/L. Relative standard deviations for 6 replicate determinations at a 4 μg/L analyte level are <4.3% (calculated using peak areas). Correlation coefficients of >0.998 were achieved. This method was successfully applied to extraction and determination of nitrotoluene-based compounds in spiked tap and lake water samples. (author)

  16. Impact of a decision aid about stratified ovarian cancer risk-management on women’s knowledge and intentions: a randomised online experimental survey study

    Directory of Open Access Journals (Sweden)

    Susanne F. Meisel

    2017-11-01

    Full Text Available Abstract Background Risk stratification using genetic and other types of personal information could improve current best available approaches to ovarian cancer risk reduction, improving identification of women at increased risk of ovarian cancer and reducing unnecessary interventions for women at lower risk. Amounts of information given to women may influence key informed decision-related outcomes, e.g. knowledge. The primary aim of this study was to compare informed decision-related outcomes between women given one of two versions (gist vs. extended) of a decision aid about stratified ovarian cancer risk-management. Methods This was an experimental survey study comparing the effects of brief (gist) information with lengthier, more detailed (extended) information on cognitions relevant to informed decision-making about participating in risk-stratified ovarian cancer screening. Women with no personal history of ovarian cancer were recruited through an online survey company and randomised to view the gist (n = 512) or extended (n = 519) version of a website-based decision aid and completed an online survey. Primary outcomes were knowledge and intentions. Secondary outcomes included attitudes (values) and decisional conflict. Results There were no significant differences between the gist and extended conditions in knowledge about ovarian cancer (time*group interaction: F = 0.20, p = 0.66) or intention to participate in ovarian cancer screening based on genetic risk assessment (t(1029) = 0.43, p = 0.67). There were also no between-groups differences in secondary outcomes. In the sample overall (n = 1031), knowledge about ovarian cancer increased from before to after exposure to the decision aid (from 5.71 to 6.77 out of a possible 10: t = 19.04, p < 0.001), and 74% of participants said that they would participate in ovarian cancer screening based on genetic risk assessment. Conclusions No differences in knowledge or

  17. Economic viability of Stratified Medicine concepts: An investor perspective on drivers and conditions that favour using Stratified Medicine approaches in a cost-contained healthcare environment.

    Science.gov (United States)

    Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten

    2016-12-25

    Stratified Medicine (SM) is becoming a natural result of advances in biomedical science and a promising path for the innovation-based biopharmaceutical industry to create new investment opportunities. While the use of biomarkers to improve R&D efficiency and productivity is very much acknowledged by industry, much work remains to be done to understand the drivers and conditions that favour using a stratified approach to create economically viable products and to justify the investment in SM interventions as a stratification option. In this paper we apply a decision analytical methodology to address the economic attractiveness of different SM development options in a cost-contained healthcare environment. For this purpose, a hypothetical business case in the oncology market has been developed considering four feasible development scenarios. The article outlines the effects of development time and time to peak sales as key economic value drivers influencing profitability of SM interventions under specific conditions. If regulatory and reimbursement challenges can be solved, decreasing development time and enhancing early market penetration would most directly improve the economic attractiveness of SM interventions. Appropriate tailoring of highly differentiated patient subgroups is the prerequisite to leverage potential efficiency gains in the R&D process. Also, offering a better targeted and hence ultimately more cost-effective therapy at reimbursable prices will facilitate time to market access and allow increasing market share gains within the targeted populations. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C
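    As a simple illustration of the normalization step described above (adjusting the volume of each labeled sample so that an equal amount of labeled metabolites enters the pool), the sketch below computes per-sample volumes from LC-UV-determined total concentrations. The function name, units and target amount are illustrative assumptions, not taken from the paper.

```python
# Sketch: adjust per-sample volumes so each contributes the same amount
# of labeled metabolites to the pooled standard (names are illustrative).

def pooling_volumes(concentrations_mM, target_amount_nmol):
    """concentrations_mM: dict sample_id -> total labeled-metabolite
    concentration (mM) from LC-UV; returns sample_id -> volume (uL)."""
    volumes_uL = {}
    for sample_id, conc_mM in concentrations_mM.items():
        # amount (nmol) = concentration (mM, i.e. nmol/uL) * volume (uL)
        volumes_uL[sample_id] = target_amount_nmol / conc_mM
    return volumes_uL

if __name__ == "__main__":
    conc = {"urine_01": 2.4, "urine_02": 1.1, "urine_03": 3.0}  # mM, hypothetical
    print(pooling_volumes(conc, target_amount_nmol=50.0))
```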

  19. Sample-based reporting of official national control of veterinary drug residues

    DEFF Research Database (Denmark)

    Andersen, Jens Hinge; Jensen, Louise Grønhøj Hørbye; Madsen, Helle L.

    assessment as well as risk management. The European Food Safety Authority has been assigned with the task to set up a system for data collection based on individual analytical results. A pilot project has been launched with participants from eleven Member States for parallel reporting of monitoring results...... from 2015 in aggregated form as well as individual analytical results using a standardised data model. The challenges that face the pilot participants include provisions for categorised sample information, specific method performance data, result evaluation and follow-up actions. Experience gained...

  20. Calibrating passive sampling and passive dosing techniques to lipid based concentrations

    DEFF Research Database (Denmark)

    Mayer, Philipp; Schmidt, Stine Nørgaard; Annika, A.

    2011-01-01

    Equilibrium sampling into various formats of the silicone polydimethylsiloxane (PDMS) is increasingly used to measure the exposure of hydrophobic organic chemicals in environmental matrices, and passive dosing from silicone is increasingly used to control and maintain their exposure in laboratory...... coated vials and with Head Space Solid Phase Microextraction (HS-SPME) yielded lipid based concentrations that were in good agreement with each other, but about a factor of two higher than measured lipid-normalized concentrations in the organisms. Passive dosing was applied to bioconcentration...

  1. Design of cross-sensitive temperature and strain sensor based on sampled fiber grating

    Directory of Open Access Journals (Sweden)

    Zhang Xiaohang

    2017-02-01

    Full Text Available In this paper, a cross-sensitive temperature and strain sensor based on sampled fiber grating is designed. Its temperature measurement range is -50-200℃, and the strain measurement range is 0-2 000 με. The characteristics of the sensor are obtained using a simulation method. Utilizing SPSS software, we derived the dual-parameter matrix equations for the measurement of temperature and strain, and calibrated the four sensing coefficients of the matrix equations.
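    The dual-parameter matrix equations presumably take the standard cross-sensitivity form used for two-wavelength grating sensors; the sketch below uses generic symbols (the four K coefficients play the role of the four calibrated sensing coefficients mentioned above) and is not copied from the paper:

    \[
    \begin{pmatrix}\Delta\lambda_1\\ \Delta\lambda_2\end{pmatrix}
    =
    \begin{pmatrix}K_{T1} & K_{\varepsilon 1}\\ K_{T2} & K_{\varepsilon 2}\end{pmatrix}
    \begin{pmatrix}\Delta T\\ \Delta\varepsilon\end{pmatrix},
    \qquad
    \begin{pmatrix}\Delta T\\ \Delta\varepsilon\end{pmatrix}
    =
    \frac{1}{K_{T1}K_{\varepsilon 2}-K_{T2}K_{\varepsilon 1}}
    \begin{pmatrix}K_{\varepsilon 2} & -K_{\varepsilon 1}\\ -K_{T2} & K_{T1}\end{pmatrix}
    \begin{pmatrix}\Delta\lambda_1\\ \Delta\lambda_2\end{pmatrix}.
    \]

    Inverting the calibrated coefficient matrix in this way is what allows the two wavelength shifts to be resolved into separate temperature and strain readings.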

  2. ELISA-based assay for IP-10 detection from filter paper samples

    DEFF Research Database (Denmark)

    Drabe, Camilla Heldbjerg; Blauenfeldt, Thomas; Ruhwald, Morten

    2014-01-01

    IP-10 is a small pro-inflammatory chemokine secreted primarily from monocytes and fibroblasts. Alterations in IP-10 levels have been associated with inflammatory conditions including viral and bacterial infections, immune dysfunction, and tumor development. IP-10 is increasingly recognized as a biomarker that predicts severity of various diseases and can be used in the immunodiagnostics of Mycobacterium tuberculosis and cytomegalovirus infection. Here, we describe an ELISA-based method to detect IP-10 from dried blood and plasma spot samples.

  3. Indications for tonsillectomy stratified by the level of evidence

    Science.gov (United States)

    Windfuhr, Jochen P.

    2016-01-01

    Background: One of the most significant clinical trials, demonstrating the efficacy of tonsillectomy (TE) for recurrent throat infection in severely affected children, was published in 1984. This systematic review was undertaken to compile various indications for TE as suggested in the literature after 1984 and to stratify the papers according to the current concept of evidence-based medicine. Material and methods: A systematic Medline research was performed using the key word “tonsillectomy” in combination with different filters such as “systematic reviews”, “meta-analysis”, “English”, “German”, and “from 1984/01/01 to 2015/05/31”. Further research was performed in the Cochrane Database of Systematic Reviews, National Guideline Clearinghouse, Guidelines International Network and BMJ Clinical Evidence using the same key word. Finally, data from the “Trip Database” were researched for “tonsillectomy” and “indication” and “from: 1984 to: 2015” in combination with either “systematic review” or “meta-analysis” or “metaanalysis”. Results: A total of 237 papers were retrieved but only 57 matched our inclusion criteria, covering the following topics: peritonsillar abscess (3), guidelines (5), otitis media with effusion (5), psoriasis (3), PFAPA syndrome (6), evidence-based indications (5), renal diseases (7), sleep-related breathing disorders (11), and tonsillitis/pharyngitis (12), respectively. Conclusions: 1) The literature suggests that TE is not indicated to treat otitis media with effusion. 2) It has been shown that the PFAPA syndrome is self-limiting and responds well to steroid administration, at least in a considerable proportion of children. The indication for TE therefore appears to be imbalanced but further research is required to clarify the value of surgery. 3) Abscess tonsillectomy as a routine is not justified and indicated only for cases not responding to other measures of treatment, evident complications

  4. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    Science.gov (United States)

    Geuna, S

    2000-11-20

    Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterize the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.

  5. Polymeric membrane sensors based on Cd(II) Schiff base complexes for selective iodide determination in environmental and medicinal samples.

    Science.gov (United States)

    Singh, Ashok Kumar; Mehtab, Sameena

    2008-01-15

    The two cadmium chelates of Schiff bases, N,N'-bis(salicylidene)-1,4-diaminobutane (Cd-S(1)) and N,N'-bis(salicylidene)-3,4-diaminotoluene (Cd-S(2)), have been synthesized and explored as ionophores for preparing PVC-based membrane sensors selective to the iodide (I−) ion. Potentiometric investigations indicate high affinity of these receptors for the iodide ion. Polyvinyl chloride (PVC)-based membranes of Cd-S(1) and Cd-S(2) using hexadecyltrimethylammonium bromide (HTAB) as cation discriminator and o-nitrophenyloctyl ether (o-NPOE), dibutylphthalate (DBP), acetophenone (AP) and tributylphosphate (TBP) as plasticizing solvent mediators were prepared and investigated as iodide-selective sensors. The best performance was shown by the membrane of composition (w/w) Cd-S(1) (7%):PVC (31%):DBP (60%):HTAB (2%). The sensor works well over a wide concentration range of 5.3×10⁻⁷ to 1.0×10⁻² M with Nernstian compliance (59.2 mV decade⁻¹ of activity) within the pH range 2.5-9.0, with a response time of 11 s, and showed good selectivity for the iodide ion over a number of anions. The sensor exhibits an adequate lifetime (3 months) with good reproducibility (S.D. ±0.24 mV) and could be used successfully for the determination of iodide content in environmental water samples and mouthwash samples.
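    The quoted slope of 59.2 mV per decade corresponds to the ideal Nernstian response of an electrode for a monovalent anion at room temperature; for reference, the underlying relation (standard electrochemistry, not specific to this paper) is

    \[
    E \;=\; E^{0} \;-\; \frac{2.303\,RT}{zF}\,\log a_{\mathrm{I}^-},
    \qquad
    \frac{2.303\,RT}{F}\;\approx\; 59.2\ \mathrm{mV}\ \text{at } 25\,^{\circ}\mathrm{C},\ z=1,
    \]

    so a measured slope close to this value indicates near-ideal electrode behaviour over the stated concentration range.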

  6. Physiotherapists' views of implementing a stratified treatment approach for patients with low back pain in Germany: a qualitative study

    NARCIS (Netherlands)

    Karstens, S.; Kuithan, P.; Joos, S.; Hill, J.C.; Wensing, M.; Steinhauser, J.; Krug, K.; Szecsenyi, J.

    2018-01-01

    BACKGROUND: The STarT-Back-Approach (STarT: Subgroups for Targeted Treatment) was developed in the UK and has demonstrated clinical and cost effectiveness. Based on the results of a brief questionnaire, patients with low back pain are stratified into three treatment groups. Since the organisation of

  7. Clustering of samples and elements based on multi-variable chemical data

    International Nuclear Information System (INIS)

    Op de Beeck, J.

    1984-01-01

    Clustering and classification are defined in the context of multivariable chemical analysis data. Classical multi-variate techniques, commonly used to interpret such data, are shown to be based on probabilistic and geometrical principles which are not justified for analytical data, since in that case one assumes or expects a system of more or less systematically related objects (samples) as defined by measurements on more or less systematically interdependent variables (elements). For the specific analytical problem of a data set concerning a large number of trace elements determined in a large number of samples, a deterministic cluster analysis can be used to develop the underlying classification structure. Three main steps can be distinguished: diagnostic evaluation and preprocessing of the raw input data; computation of a symmetric matrix with pairwise standardized dissimilarity values between all possible pairs of samples and/or elements; and an ultrametric clustering strategy to produce the final classification as a dendrogram. The software packages designed to perform these tasks are discussed and final results are given. Conclusions are formulated concerning the dangers of using multivariate clustering and classification software packages as a black box.
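    To make the three-step scheme concrete, here is a minimal sketch of the generic workflow in Python (not the authors' original software, which predates these libraries): standardize the element-by-sample data, build a pairwise dissimilarity matrix, and apply an agglomerative (ultrametric) clustering that yields a dendrogram. Data and parameter choices are placeholders.

```python
# Sketch of the generic clustering workflow: preprocessing, pairwise
# dissimilarities, and hierarchical clustering producing a dendrogram.
# Hypothetical data; not the authors' original software.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
data = rng.lognormal(size=(30, 12))          # 30 samples x 12 trace elements

# 1) Diagnostic preprocessing: log-transform and standardize each element.
X = np.log(data)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# 2) Symmetric matrix of pairwise standardized dissimilarities between samples.
D = squareform(pdist(X, metric="euclidean"))

# 3) Ultrametric clustering strategy (average linkage) -> dendrogram.
Z = linkage(squareform(D), method="average")
tree = dendrogram(Z, no_plot=True)
print("leaf order:", tree["leaves"])
```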

  8. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
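    Because no closed-form solution exists, power is estimated by repeated simulation; the sketch below illustrates the general Monte Carlo idea with a deliberately simplified model (binomial tumor counts per dose group and a Cochran-Armitage trend test), not the tool's survival, sacrifice-schedule or competing-risk model. All parameter values are illustrative.

```python
# Simplified Monte Carlo power estimate for a dose-related trend in tumor
# incidence (illustrative only; the Web tool models occult tumors,
# competing risks and sacrifice schedules, which are omitted here).
import numpy as np

def cochran_armitage_z(x, n, d):
    """Trend z-statistic for tumor counts x, group sizes n, dose scores d."""
    x, n, d = map(np.asarray, (x, n, d))
    p_bar = x.sum() / n.sum()
    t = np.sum(d * (x - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(n * d**2) - np.sum(n * d)**2 / n.sum())
    return t / np.sqrt(var)

def power(p_tumor, n_per_group, doses, n_sim=20000, z_crit=1.645, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.binomial(n_per_group, p_tumor)        # tumors per dose group
        if cochran_armitage_z(x, n_per_group, doses) > z_crit:
            rejections += 1
    return rejections / n_sim

# Hypothetical design: control + three dose groups, 50 animals each.
print(power(p_tumor=[0.05, 0.10, 0.15, 0.20],
            n_per_group=[50, 50, 50, 50],
            doses=[0, 1, 2, 3]))
```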

  9. Convolutional neural networks based on augmented training samples for synthetic aperture radar target recognition

    Science.gov (United States)

    Yan, Yue

    2018-03-01

    A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on the convolutional neural networks (CNN) trained by augmented training samples is proposed. To enhance the robustness of CNN to various extended operating conditions (EOCs), the original training images are used to generate the noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. Then, the generated images together with the original ones are used to train a designed CNN for target recognition. The augmented training samples can contrapuntally improve the robustness of the trained CNN to the covered EOCs, i.e., the noise corruption, resolution variance, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For experimental evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
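    As an illustration of the first augmentation step (noisy samples at prescribed SNRs), the following sketch adds white Gaussian noise to an image chip at a target SNR; resolution-variant and partially occluded samples would be generated analogously. The variable names and the SNR definition (signal-power ratio in dB) are assumptions, not taken from the paper.

```python
# Sketch: augment a SAR image chip with additive white Gaussian noise
# at a prescribed SNR (in dB). Illustrative; not the authors' exact recipe.
import numpy as np

def add_noise_at_snr(chip, snr_db, rng):
    """Return a noisy copy of `chip` whose noise power matches the target SNR."""
    signal_power = np.mean(chip.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=chip.shape)
    return chip + noise

rng = np.random.default_rng(42)
chip = rng.rayleigh(scale=1.0, size=(128, 128))      # stand-in for a SAR chip
augmented = [add_noise_at_snr(chip, snr, rng) for snr in (10, 5, 0, -5)]
print(len(augmented), augmented[0].shape)
```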

  10. The association between Internet addiction and personality disorders in a general population-based sample.

    Science.gov (United States)

    Zadra, Sina; Bischof, Gallus; Besser, Bettina; Bischof, Anja; Meyer, Christian; John, Ulrich; Rumpf, Hans-Jürgen

    2016-12-01

    Background and aims Data on Internet addiction (IA) and its association with personality disorder are rare. Previous studies are largely restricted to clinical samples and insufficient measurement of IA. Methods Cross-sectional analysis data are based on a German sub-sample (n = 168; 86 males; 71 meeting criteria for IA) with increased levels of excessive Internet use derived from a general population sample (n = 15,023). IA was assessed with a comprehensive standardized interview using the structure of the Composite International Diagnostic Interview and the criteria of Internet Gaming Disorder as suggested in DSM-5. Impulsivity, attention deficit hyperactivity disorder, and self-esteem were assessed with the widely used questionnaires. Results Participants with IA showed higher frequencies of personality disorders (29.6%) compared to those without IA (9.3%; p < .001). In males with IA, Cluster C personality disorders were more prevalent than among non-addicted males. Compared to participants who had IA only, lower rates of remission of IA were found among participants with IA and additional cluster B personality disorder. Personality disorders were significantly associated with IA in multivariate analysis. Comorbidity of IA and personality disorders must be considered in prevention and treatment.

  11. A sampling-based Bayesian model for gas saturation estimationusing seismic AVA and marine CSEM data

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jinsong; Hoversten, Michael; Vasco, Don; Rubin, Yoram; Hou, Zhangshuan

    2006-04-04

    We develop a sampling-based Bayesian model to jointly invert seismic amplitude versus angles (AVA) and marine controlled-source electromagnetic (CSEM) data for layered reservoir models. The porosity and fluid saturation in each layer of the reservoir, the seismic P- and S-wave velocity and density in the layers below and above the reservoir, and the electrical conductivity of the overburden are considered as random variables. Pre-stack seismic AVA data in a selected time window and real and quadrature components of the recorded electrical field are considered as data. We use Markov chain Monte Carlo (MCMC) sampling methods to obtain a large number of samples from the joint posterior distribution function. Using those samples, we obtain not only estimates of each unknown variable, but also its uncertainty information. The developed method is applied to both synthetic and field data to explore the combined use of seismic AVA and EM data for gas saturation estimation. Results show that the developed method is effective for joint inversion, and the incorporation of CSEM data reduces uncertainty in fluid saturation estimation, when compared to results from inversion of AVA data only.
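    The joint-inversion machinery itself is elaborate, but the MCMC sampling idea can be sketched generically: a random-walk Metropolis step accepts or rejects proposed parameter vectors according to the product of the AVA and CSEM likelihoods and the prior. Everything below (forward models, noise levels, parameter vector, prior box) is a toy placeholder, not the authors' model.

```python
# Generic random-walk Metropolis sketch for a joint AVA + CSEM posterior.
# The forward models and "data" here are toy placeholders.
import numpy as np

rng = np.random.default_rng(0)

def forward_ava(m):   return np.array([m[0] + 0.5 * m[1]])      # placeholder
def forward_csem(m):  return np.array([2.0 * m[0] - m[1]])      # placeholder

d_ava, d_csem = np.array([1.2]), np.array([0.3])                 # toy "data"
sigma_ava, sigma_csem = 0.1, 0.05

def log_posterior(m):
    if np.any(m < 0) or np.any(m > 1):                   # uniform prior box
        return -np.inf
    r1 = (d_ava - forward_ava(m)) / sigma_ava
    r2 = (d_csem - forward_csem(m)) / sigma_csem
    return -0.5 * (np.sum(r1**2) + np.sum(r2**2))

m = np.array([0.5, 0.5])                                  # e.g. porosity, saturation
samples, logp = [], log_posterior(m)
for _ in range(20000):
    prop = m + rng.normal(0, 0.05, size=m.size)           # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:          # Metropolis acceptance
        m, logp = prop, logp_prop
    samples.append(m.copy())

post = np.array(samples)[5000:]                            # drop burn-in
print("posterior mean:", post.mean(axis=0), "std:", post.std(axis=0))
```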

  12. A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications

    Science.gov (United States)

    Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco

    2015-01-01

    This paper presents a method for safe spacecraft autonomous maneuvering that leverages robotic motion-planning techniques for spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion-planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples for which there exists a one-burn collision-avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.

  13. A Lateral Flow Strip Based Aptasensor for Detection of Ochratoxin A in Corn Samples

    Directory of Open Access Journals (Sweden)

    Guilan Zhang

    2018-01-01

    Full Text Available Ochratoxin A (OTA) is a mycotoxin identified as a contaminant in grains and wine throughout the world, and convenient, rapid and sensitive detection methods for OTA have been a long-felt need for food safety monitoring. Herein, we presented a new competitive format based lateral flow strip fluorescent aptasensor for one-step determination of OTA in corn samples. Briefly, biotin-cDNA was immobilized on the surface of a nitrocellulose filter on the test line. Without OTA, Cy5-labeled aptamer combined with complementary strands formed a stable double helix. In the presence of OTA, however, the Cy5-aptamer/OTA complexes were generated, and therefore less free aptamer was captured in the test zone, leading to an obvious decrease in fluorescent signals on the test line. The test strip showed an excellent linear relationship in the range from 1 ng·mL−1 to 1000 ng·mL−1 with the LOD of 0.40 ng·mL−1, IC15 value of 3.46 ng·mL−1 and recoveries from 96.4% to 104.67% in spiked corn samples. Thus, the strip sensor developed in this study is an acceptable alternative for rapid detection of the OTA level in grain samples.

  14. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Science.gov (United States)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

    As a fundamental material property, the particle-particle friction coefficient is usually calculated based on the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, i.e. willow chips. Because of its irregular shape and varying particle size distribution, calculation of the angle becomes less applicable and decisive. In previous studies only one section of those uneven slopes is chosen in most cases, although standard methods for defining a representative section are barely found. Hence, we present an efficient and reliable method based on 3D scanning, which was used to digitize the surface of the heaps and generate their point clouds. Then, two tangential lines of any selected section were calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile could be derived. As the next step, a certain number of sections were stochastically selected, and the calculations were repeated correspondingly in order to obtain a sample of angles, which was plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples were acquired through various selections of sections. By analysing the similarities and differences of these samples, the reliability of the proposed method was verified. These results provide a realistic criterion to reduce the deviation between experiment and simulation caused by the random selection of a single angle, and will be compared with simulation results in the future.
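    A minimal sketch of the section-wise angle computation described above: given the (x, z) profile of one vertical section through the scanned point cloud, fit straight lines to the left and right flanks by least squares and convert the slopes to angles, then repeat over randomly chosen sections to obtain the sample of angles. The synthetic heap profile, apex detection and variable names are illustrative assumptions.

```python
# Sketch: left/right angle of repose of one heap section via linear
# least-squares, repeated over randomly chosen sections. Illustrative only.
import numpy as np

def flank_angles(x, z, apex_index):
    """Fit lines to the left and right flanks of a section profile (x, z)."""
    left_slope = np.polyfit(x[:apex_index], z[:apex_index], 1)[0]
    right_slope = np.polyfit(x[apex_index:], z[apex_index:], 1)[0]
    return np.degrees(np.arctan(left_slope)), np.degrees(np.arctan(-right_slope))

rng = np.random.default_rng(3)
angles = []
for _ in range(50):                          # 50 stochastically chosen sections
    x = np.linspace(-1.0, 1.0, 201)
    z = 0.6 * (1.0 - np.abs(x)) + rng.normal(0, 0.02, x.size)  # synthetic heap
    apex = int(np.argmax(z))
    angles.append(flank_angles(x, z, apex))

angles = np.array(angles)
print("mean left/right angle of repose [deg]:", angles.mean(axis=0))
```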

  15. Real-time viscosity and mass density sensors requiring microliter sample volume based on nanomechanical resonators.

    Science.gov (United States)

    Bircher, Benjamin A; Duempelmann, Luc; Renggli, Kasper; Lang, Hans Peter; Gerber, Christoph; Bruns, Nico; Braun, Thomas

    2013-09-17

    A microcantilever based method for fluid viscosity and mass density measurements with high temporal resolution and microliter sample consumption is presented. Nanomechanical cantilever vibration is driven by photothermal excitation and detected by an optical beam deflection system using two laser beams of different wavelengths. The theoretical framework relating cantilever response to the viscosity and mass density of the surrounding fluid was extended to consider higher flexural modes vibrating at high Reynolds numbers. The performance of the developed sensor and extended theory was validated over a viscosity range of 1-20 mPa·s and a corresponding mass density range of 998-1176 kg/m(3) using reference fluids. Separating sample plugs from the carrier fluid by a two-phase configuration in combination with a microfluidic flow cell, allowed samples of 5 μL to be sequentially measured under continuous flow, opening the method to fast and reliable screening applications. To demonstrate the study of dynamic processes, the viscosity and mass density changes occurring during the free radical polymerization of acrylamide were monitored and compared to published data. Shear-thinning was observed in the viscosity data at higher flexural modes, which vibrate at elevated frequencies. Rheokinetic models allowed the monomer-to-polymer conversion to be tracked in spite of the shear-thinning behavior, and could be applied to study the kinetics of unknown processes.

  16. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices.

    Science.gov (United States)

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli

    2017-09-12

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.

  17. Numerical simulations of the stratified oceanic bottom boundary layer

    Science.gov (United States)

    Taylor, John R.

    Numerical simulations are used to consider several problems relevant to the turbulent oceanic bottom boundary layer. In the first study, stratified open channel flow is considered with thermal boundary conditions chosen to approximate a shallow sea. Specifically, a constant heat flux is applied at the free surface and the lower wall is assumed to be adiabatic. When the surface heat flux is strong, turbulent upwellings of low speed fluid from near the lower wall are inhibited by the stable stratification. Subsequent studies consider a stratified bottom Ekman layer over a non-sloping lower wall. The influence of the free surface is removed by using an open boundary condition at the top of the computational domain. Particular attention is paid to the influence of the outer layer stratification on the boundary layer structure. When the density field is initialized with a linear profile, a turbulent mixed layer forms near the wall, which is separated from the outer layer by a strongly stable pycnocline. It is found that the bottom stress is not strongly affected by the outer layer stratification. However, stratification reduces turbulent transport to the outer layer and strongly limits the boundary layer height. The mean shear at the top of the boundary layer is enhanced when the outer layer is stratified, and this shear is strong enough to cause intermittent instabilities above the pycnocline. Turbulence-generated internal gravity waves are observed in the outer layer with a relatively narrow frequency range. An explanation for frequency content of these waves is proposed, starting with an observed broad-banded turbulent spectrum and invoking linear viscous decay to explain the preferential damping of low and high frequency waves. During the course of this work, an open-source computational fluid dynamics code has been developed with a number of advanced features including scalar advection, subgrid-scale models for large-eddy simulation, and distributed memory

  18. E25 stratified torch ignition engine emissions and combustion analysis

    International Nuclear Information System (INIS)

    Rodrigues Filho, Fernando Antonio; Baêta, José Guilherme Coelho; Teixeira, Alysson Fernandes; Valle, Ramón Molina; Fonseca de Souza, José Leôncio

    2016-01-01

    Highlights: • A stratified torch ignition (STI) engine was built and tested. • The STI engine was tested over a wide range of load and speed. • Significant reduction in emissions was achieved by means of the STI system. • Low cyclic variability characterized the lean combustion process of the torch ignition engine. • HC emission is the main drawback of the stratified torch ignition engine. - Abstract: Vehicular emissions significantly increase atmospheric air pollution and greenhouse gases (GHG). This fact, associated with the fast growth of the global vehicle fleet, calls for prompt technological solutions from the scientific community in order to promote a significant reduction in vehicle fuel consumption and emissions, especially of fossil fuels, to comply with future legislation. To meet this goal, a prototype stratified torch ignition (STI) engine was built from an existing commercial baseline engine. In this system, combustion starts in a pre-combustion chamber, where the pressure increase pushes the combustion jet flames through calibrated nozzles to be precisely targeted into the main chamber. These combustion jet flames are endowed with high thermal and kinetic energy, being able to generate a stable lean combustion process. The high kinetic and thermal energy of the combustion jet flame results from the load stratification. This is carried out through direct fuel injection in the pre-combustion chamber by means of a prototype gasoline direct injector (GDI) developed for a very low fuel flow rate. In this work the engine-out emissions of CO, NOx, HC and CO2 of the STI engine are presented and a detailed analysis supported by the combustion parameters is conducted. The results obtained in this work show a significant decrease in the specific emissions of CO, NOx and CO2 of the STI engine in comparison with the baseline engine. On the other hand, HC specific emission increased due to wall wetting from the fuel impinging on the pre-combustion chamber wall.

  19. Direct contact condensation induced transition from stratified to slug flow

    International Nuclear Information System (INIS)

    Strubelj, Luka; Ezsoel, Gyoergy; Tiselj, Iztok

    2010-01-01

    Selected condensation-induced water hammer experiments performed on the PMK-2 device were numerically modelled with three-dimensional two-fluid models of the computer codes NEPTUNE_CFD and CFX. The experimental setup consists of a horizontal pipe filled with hot steam that is being slowly flooded with cold water. In most of the experimental cases, slow flooding of the pipe was abruptly interrupted by strong slugging and water hammer, while in the selected experimental runs performed at higher initial pressures and temperatures that are analysed in the present work, the transition from the stratified into the slug flow was not accompanied by a water hammer pressure peak. That makes these cases more suitable tests for the evaluation of various condensation models in horizontally stratified flows and puts them within the range of the available CFD (Computational Fluid Dynamics) codes. The key models for successful simulation appear to be the condensation model of the hot vapour on the cold liquid and the interfacial momentum transfer model. Surface-renewal types of condensation correlations, developed for condensation in stratified flows, were used in the simulations and were applied also in the regions of slug flow. The 'large interface' model for inter-phase momentum transfer was compared to the bubble drag model. The CFD simulations quantitatively captured the main phenomena of the experiments, while the stochastic nature of the particular condensation-induced water hammer experiments did not allow detailed prediction of the time and position of slug formation in the pipe. We have clearly shown that even the selected experiments without water hammer present a tough test for the applied CFD codes, while modelling of water hammer pressure peaks in two-phase flow, being a strongly compressible flow phenomenon, is beyond the capability of current CFD codes.

  20. Are Flow Injection-based Approaches Suitable for Automated Handling of Solid Samples?

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald; Cerdà, Victor

    Flow-based approaches were originally conceived for liquid-phase analysis, implying that constituents in solid samples generally had to be transferred into the liquid state, via appropriate batch pretreatment procedures, prior to analysis. Yet, in recent years, much effort has been focused...... electrolytic or aqueous leaching, on-line dialysis/microdialysis, in-line filtration, and pervaporation-based procedures have been successfully implemented in continuous flow/flow injection systems. In this communication, the new generation of flow analysis, including sequential injection, multicommutated flow.......g., soils, sediments, sludges), and thus, ascertaining the potential mobility, bioavailability and eventual impact of anthropogenic elements on biota [2]. In this context, the principles of sequential injection-microcolumn extraction (SI-MCE) for dynamic fractionation are explained in detail along...

  1. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
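    The selection criterion (maximum joint information content, minimum redundancy) can be illustrated with a small greedy sketch: quantize the series available at candidate cross sections, then repeatedly add the candidate whose marginal entropy is high and whose mutual information with the already-selected set is low. The scoring rule, bin counts and synthetic data are placeholders, not the authors' exact formulation.

```python
# Greedy max-information / min-redundancy selection of cross sections.
# Entropy and mutual information from histograms; data are synthetic.
import numpy as np

def entropy(x, bins=8):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(7)
series = rng.normal(size=(40, 500)).cumsum(axis=1)    # 40 candidate sections

selected = [int(np.argmax([entropy(s) for s in series]))]   # most informative
while len(selected) < 6:
    scores = []
    for j in range(len(series)):
        if j in selected:
            scores.append(-np.inf)
            continue
        redundancy = max(mutual_info(series[j], series[k]) for k in selected)
        scores.append(entropy(series[j]) - redundancy)        # placeholder trade-off
    selected.append(int(np.argmax(scores)))
print("selected cross-section indices:", selected)
```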

  2. Evidence-Based Practice Questionnaire: A Confirmatory Factor Analysis in a Social Work Sample

    Directory of Open Access Journals (Sweden)

    Karen Rice

    2010-10-01

    Full Text Available This study examined the psychometric properties of the Evidence-Based Practice Questionnaire (EBPQ). The 24-item EBPQ was developed to measure health professionals’ attitudes toward, knowledge of, and use of evidence-based practice (EBP). A confirmatory factor analysis was performed on the EBPQ given to a random sample of National Association of Social Workers members (N = 167). The coefficient alpha of the EBPQ was .93. The study supported a 23-item 3-factor model with acceptable model fit indices (χ² = 469.04; RMSEA = .081; SRMR = .068; CFI = .900). This study suggests a slightly modified EBPQ may be a useful tool to assess social workers’ attitudes toward, knowledge of, and use of EBP.

  3. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  4. Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.

    Science.gov (United States)

    Anisimov, Vladimir V

    2011-01-01

    This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
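    The Poisson-gamma recruitment model lends itself to a direct simulation check of the analytic results: centre rates are drawn from a gamma population, patients arrive as Poisson counts, and block-permuted randomization within each centre leaves only the partial block contributing to imbalance. The sketch below is a simulation illustration with made-up parameters (two arms, block size 4), not the paper's closed-form derivation.

```python
# Simulation sketch of centre-stratified block-permuted randomization under
# a Poisson-gamma recruitment model. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(11)

def simulate_total_imbalance(n_centres=50, shape=1.2, scale=2.0,
                             duration=12.0, block=4):
    rates = rng.gamma(shape, scale, size=n_centres)        # centre rates
    counts = rng.poisson(rates * duration)                  # patients per centre
    imbalance = 0
    for n in counts:
        # complete blocks are perfectly balanced; only the partial block matters
        r = n % block
        last = rng.permutation([0] * (block // 2) + [1] * (block // 2))[:r]
        imbalance += last.sum() - (r - last.sum())           # (arm 1 minus arm 0)
    return imbalance

totals = np.array([simulate_total_imbalance() for _ in range(5000)])
print("mean total imbalance:", totals.mean(), "sd:", totals.std())
```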

  5. Technetium reduction and removal in a stratified fjord

    International Nuclear Information System (INIS)

    Keith-Roach, M.; Roos, P.

    2002-01-01

    The distribution of Tc in the water columns of a stratified fjord has been measured to investigate the behaviour and fate of Tc on reaching reducing waters. Slow mixing in the water column of the fjord results in vertical transport of the dissolved Tc to the oxic/anoxic interface. Tc is reduced just below the interface and at 21 m, 60% is sorbed to particulate and colloidal material. Tc is carried to the sediments sorbed to the particulate material, where there is a current inventory of approximately 3 Bq m⁻². (LN)

  6. Stability of unstably stratified shear flow between parallel plates

    Energy Technology Data Exchange (ETDEWEB)

    Fujimura, Kaoru; Kelly, R E

    1987-09-01

    The linear stability of unstably stratified shear flows between two horizontal parallel plates was investigated. Eigenvalue problems were solved numerically by making use of the expansion method in Chebyshev polynomials, and the critical Rayleigh numbers were obtained accurately in the Reynolds number range of (0.01, 100). It was found that the critical Rayleigh number increases with an increase of the Reynolds number. The result strongly supports previous stability analyses except for the analysis by Makino and Ishikawa (J. Jpn. Soc. Fluid Mech. 4 (1985) 148 - 158) in which a decrease of the critical Rayleigh number was obtained.

  7. Stability of unstably stratified shear flow between parallel plates

    International Nuclear Information System (INIS)

    Fujimura, Kaoru; Kelly, R.E.

    1987-01-01

    The linear stability of unstably stratified shear flows between two horizontal parallel plates was investigated. Eigenvalue problems were solved numerically by making use of the expansion method in Chebyshev polynomials, and the critical Rayleigh numbers were obtained accurately in the Reynolds number range of [0.01, 100]. It was found that the critical Rayleigh number increases with an increase of the Reynolds number. The result strongly supports previous stability analyses except for the analysis by Makino and Ishikawa [J. Jpn. Soc. Fluid Mech. 4 (1985) 148 - 158] in which a decrease of the critical Rayleigh number was obtained. (author)

  8. Technetium reduction and removal in a stratified fjord

    Energy Technology Data Exchange (ETDEWEB)

    Keith-Roach, M.; Roos, P. [Risoe National Lab., Roskilde (Denmark)

    2002-04-01

    The distribution of Tc in the water columns of a stratified fjord has been measured to investigate the behaviour and fate of Tc on reaching reducing waters. Slow mixing in the water column of the fjord results in vertical transport of the dissolved Tc to the oxic/anoxic interface. Tc is reduced just below the interface and at 21 m, 60% is sorbed to particulate and colloidal material. Tc is carried to the sediments sorbed to the particulate material, where there is a current inventory of approximately 3 Bq m⁻². (LN)

  9. Development of a natural gas stratified charge rotary engine

    Energy Technology Data Exchange (ETDEWEB)

    Sierens, R.; Verdonck, W.

    1985-01-01

    A water model has been used to determine the positions of separate inlet ports for a natural gas, stratified charge rotary engine. The flow inside the combustion chamber (mainly during the induction period) has been registered by a film camera. From these tests the best locations of the inlet ports have been obtained, a prototype of this engine has been built by Audi NSU and tested in the laboratories of the university of Gent. The results of these tests, for different stratification configurations, are given. These results are comparable with the best results obtained by Audi NSU for a homogeneous natural gas rotary engine.

  10. Seasonal variability of Dinophysis spp. and Protoceratium reticulatum associated to lipophilic shellfish toxins in a strongly stratified Chilean fjord

    Science.gov (United States)

    Alves-de-Souza, Catharina; Varela, Daniel; Contreras, Cristóbal; de La Iglesia, Pablo; Fernández, Pamela; Hipp, Byron; Hernández, Cristina; Riobó, Pilar; Reguera, Beatriz; Franco, José M.; Diogène, Jorge; García, Carlos; Lagos, Néstor

    2014-03-01

    The fine scale vertical distribution of Dinophysis spp. and Protoceratium reticulatum (potential producers of lipophilic shellfish toxins, LSTs) and its relation with LSTs in shellfish was studied in Reloncaví fjord, a strongly stratified system in Southern Chile. Samples were taken over two years from late spring to early autumn (2007-2008 period) and from early spring to late summer (2008-2009 period). Dinophysis spp., in particular Dinophysis acuminata, were always detected, often forming thin layers in the region of the salinity driven pycnocline, with cell maxima for D. acuminata of 28.5×10³ cells L⁻¹ in March 2008 and 17.1×10³ cells L⁻¹ in November 2008. During the 2008-2009 sampling period, blooms of D. acuminata co-occurred with high densities of cryptophyceans and the ciliate Mesodinium spp. The highest levels of pectenotoxin-2 (PTX-2; 2.2 ng L⁻¹) were found in the plankton in February 2009, associated with moderate densities of D. acuminata, Dinophysis tripos and Dinophysis subcircularis (0.1-0.6×10³ cells L⁻¹). However, only trace levels of PTX-2 were observed in bivalves at that time. Dinophysistoxin (DTX-1 and DTX-3) levels in bivalves and densities of Dinophysis spp. were not well correlated. Low DTX levels in bivalves observed during a major bloom of D. acuminata in March 2008 suggested that there is a large seasonal intraspecific variability in toxin content of Dinophysis spp. driven by changes in population structure associated with distinct LST toxin profiles in Reloncaví fjord during the study period. A heterogeneous vertical distribution was also observed for P. reticulatum, whose presence was restricted to summer months. A bloom of this species of 2.2×10³ cells L⁻¹ at 14 m depth in February 2009 was positively correlated with high concentrations of yessotoxins in bivalves (51-496 ng g⁻¹) and plankton samples (3.2 ng L⁻¹). Our results suggest that a review of monitoring strategies for Dinophysis spp. in strongly stratified fjord systems

  11. Study of microtip-based extraction and purification of DNA from human samples for portable devices

    Science.gov (United States)

    Fotouhi, Gareth

    DNA sample preparation is essential for genetic analysis. However, rapid and easy-to-use methods are a major challenge to obtaining genetic information. Furthermore, DNA sample preparation technology must follow the growing need for point-of-care (POC) diagnostics. The current use of centrifuges, large robots, and laboratory-intensive protocols has to be minimized to meet the global challenge of limited access healthcare by bringing the lab to patients through POC devices. To address these challenges, a novel extraction method of genomic DNA from human samples is presented by using heat-cured polyethyleneimine-coated microtips generating a high electric field. The microtip extraction method is based on recent work using an electric field and capillary action integrated into an automated device. The main challenges to the method are: (1) to obtain a stable microtip surface for the controlled capture and release of DNA and (2) to improve the recovery of DNA from samples with a high concentration of inhibitors, such as human samples. The present study addresses these challenges by investigating the heat curing of polyethyleneimine (PEI) coated on the surface of the microtip. Heat-cured PEI-coated microtips are shown to control the capture and release of DNA. Protocols are developed for the extraction and purification of DNA from human samples. Heat-cured PEI-coated microtip methods of DNA sample preparation are used to extract genomic DNA from human samples. It is discovered through experiment that heat curing of a PEI layer on a gold-coated surface below 150°C could inhibit the signal of polymerase chain reaction (PCR). Below 150°C, the PEI layer is not completely cured and dissolved off the gold-coated surface. Dissolved PEI binds with DNA to inhibit PCR. Heat curing of a PEI layer above 150°C on a gold-coated surface prevents inhibition to PCR and gel electrophoresis. In comparison to gold-coated microtips, the 225°C-cured PEI-coated microtips improve the

  12. Alkaline Peptone Water-Based Enrichment Method for mcr-3 From Acute Diarrheic Outpatient Gut Samples

    Directory of Open Access Journals (Sweden)

    Qiaoling Sun

    2018-05-01

    Full Text Available A third plasmid-mediated colistin resistance gene, mcr-3, is increasingly being reported in Enterobacteriaceae and Aeromonas spp. from animals and humans. To investigate the molecular epidemiology of mcr in the gut flora of Chinese outpatients, 152 stool specimens were randomly collected from outpatients in our hospital from May to June, 2017. Stool specimens enriched in alkaline peptone water or Luria-Bertani (LB) broth were screened for mcr-1, mcr-2, and mcr-3 using polymerase chain reaction (PCR)-based assays. Overall, 19.1% (29/152) and 5.3% (8/152) of the stool samples enriched in alkaline peptone water were PCR-positive for mcr-1 and mcr-3, respectively, while 2.7% (4/152) of samples were positive for both mcr-1 and mcr-3. Strains isolated from the samples that were both mcr-1- and mcr-3-positive were subjected to antimicrobial susceptibility testing by broth microdilution. They were also screened for the presence of other resistance genes by PCR, while multilocus sequence typing and whole-genome sequencing were used to investigate the molecular epidemiology and genetic environment, respectively, of the resistance genes. mcr-3-positive Aeromonas veronii strain 126-14, containing a mcr-3.8-mcr-3-like2 segment, and mcr-1-positive Escherichia coli strain 126-1, belonging to sequence type 1485, were isolated from the sample from a diarrheic butcher with no history of colistin treatment. A. veronii 126-14 had a colistin minimum inhibitory concentration (MIC) of 2 µg/mL and was susceptible to antibiotics in common use, while E. coli 126-1 produced TEM-1, CTX-M-55, and CTX-M-14 β-lactamases and was resistant to colistin, ceftazidime, and cefotaxime. Overall, there was a higher detection rate of mcr-3-carrying strains with low colistin MICs from the samples enriched in alkaline peptone water than from samples grown in LB broth.

  13. Evaluation of physical sampling efficiency for cyclone-based personal bioaerosol samplers in moving air environments.

    Science.gov (United States)

    Su, Wei-Chung; Tolchinsky, Alexander D; Chen, Bean T; Sigaev, Vladimir I; Cheng, Yung Sung

    2012-09-01

    The need to determine occupational exposure to bioaerosols has notably increased in the past decade, especially for microbiology-related workplaces and laboratories. Recently, two new cyclone-based personal bioaerosol samplers were developed by the National Institute for Occupational Safety and Health (NIOSH) in the USA and the Research Center for Toxicology and Hygienic Regulation of Biopreparations (RCT & HRB) in Russia to monitor bioaerosol exposure in the workplace. Here, a series of wind tunnel experiments were carried out to evaluate the physical sampling performance of these two samplers in moving air conditions, which could provide information for personal biological monitoring in a moving air environment. The experiments were conducted in a small wind tunnel facility using three wind speeds (0.5, 1.0 and 2.0 m s(-1)) and three sampling orientations (0°, 90°, and 180°) with respect to the wind direction. Monodispersed particles ranging from 0.5 to 10 μm were employed as the test aerosols. The evaluation of the physical sampling performance was focused on the aspiration efficiency and capture efficiency of the two samplers. The test results showed that the orientation-averaged aspiration efficiencies of the two samplers closely agreed with the American Conference of Governmental Industrial Hygienists (ACGIH) inhalable convention within the particle sizes used in the evaluation tests, and the effect of the wind speed on the aspiration efficiency was found negligible. The capture efficiencies of these two samplers ranged from 70% to 80%. These data offer important insight into the physical sampling characteristics of the two test samplers.

  14. Automated CBED processing: Sample thickness estimation based on analysis of zone-axis CBED pattern

    Energy Technology Data Exchange (ETDEWEB)

    Klinger, M., E-mail: klinger@post.cz; Němec, M.; Polívka, L.; Gärtnerová, V.; Jäger, A.

    2015-03-15

    An automated processing of convergent beam electron diffraction (CBED) patterns is presented. The proposed methods are used in an automated tool for estimating the thickness of transmission electron microscopy (TEM) samples by matching an experimental zone-axis CBED pattern with a series of patterns simulated for known thicknesses. The proposed tool detects CBED disks, localizes a pattern in detected disks and unifies the coordinate system of the experimental pattern with the simulated one. The experimental pattern is then compared disk-by-disk with a series of simulated patterns each corresponding to different known thicknesses. The thickness of the most similar simulated pattern is then taken as the thickness estimate. The tool was tested on [0 1 1] Si, [0 1 0] α-Ti and [0 1 1] α-Ti samples prepared using different techniques. Results of the presented approach were compared with thickness estimates based on analysis of CBED patterns in two beam conditions. The mean difference between these two methods was 4.1% for the FIB-prepared silicon samples, 5.2% for the electro-chemically polished titanium and 7.9% for Ar{sup +} ion-polished titanium. The proposed techniques can also be employed in other established CBED analyses. Apart from the thickness estimation, it can potentially be used to quantify lattice deformation, structure factors, symmetry, defects or extinction distance. - Highlights: • Automated TEM sample thickness estimation using zone-axis CBED is presented. • Computer vision and artificial intelligence are employed in CBED processing. • This approach reduces operator effort, analysis time and increases repeatability. • Individual parts can be employed in other analyses of CBED/diffraction pattern.
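    The matching step itself reduces to comparing the experimental pattern with each simulated pattern disk by disk and taking the thickness of the best match; a minimal sketch follows. The arrays, the similarity measure (sum of squared differences) and the thickness grid are placeholders, and the real tool additionally performs disk detection and coordinate unification before this comparison.

```python
# Sketch: pick the sample thickness whose simulated CBED pattern best
# matches the experimental one, compared disk by disk (sum of squared
# differences). Inputs are placeholders for registered intensity arrays.
import numpy as np

def best_thickness(experimental_disks, simulated_library):
    """experimental_disks: list of 2-D arrays, one per CBED disk.
    simulated_library: dict thickness_nm -> list of matching simulated disks."""
    scores = {}
    for t, sim_disks in simulated_library.items():
        scores[t] = sum(np.sum((e - s) ** 2)
                        for e, s in zip(experimental_disks, sim_disks))
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(5)
truth = [rng.random((32, 32)) for _ in range(5)]                  # fake "experiment"
library = {t: [d * (1 + 0.01 * abs(t - 80)) for d in truth]        # fake simulations
           for t in range(40, 161, 10)}
print("estimated thickness [nm]:", best_thickness(truth, library)[0])
```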

  15. Prevalence of depression: Comparisons of different depression definitions in population-based samples of older adults.

    Science.gov (United States)

    Sjöberg, Linnea; Karlsson, Björn; Atti, Anna-Rita; Skoog, Ingmar; Fratiglioni, Laura; Wang, Hui-Xin

    2017-10-15

    Depression prevalence in older adults varies largely across studies, which probably reflects methodological rather than true differences. This study aims to explore whether and to what extent the prevalence of depression varies when using different diagnostic criteria and rating scales, and various samples of older adults. A population-based sample of 3353 individuals aged 60-104 years from the Swedish National Study on Aging and Care in Kungsholmen (SNAC-K) was examined in 2001-2004. Point prevalence of depression was estimated by: 1) diagnostic criteria, ICD-10 and DSM-IV-TR/DSM-5; 2) rating scales, MADRS and GDS-15; and 3) self-report. Depression prevalence in sub-samples by dementia status, living place, and socio-demographics was compared. The prevalence of any depression (including all severity grades) was 4.2% (moderate/severe: 1.6%) for ICD-10 and 9.3% (major: 2.1%) for DSM-IV-TR; 10.6% for MADRS and 9.2% for GDS-15; and 9.1% for self-report. Depression prevalence was lower in the dementia-free sample as compared to the total population. Furthermore, having poor physical function, or not having a partner, was independently associated with higher depression prevalence across most of the depression definitions. The response rate was 73.3%, which may have resulted in an underestimation of depression. Depression prevalence was similar across all depression definitions except for ICD-10, which showed much lower figures. However, independent of the definition used, depression prevalence varies greatly by dementia status, physical functioning, and marital status. These findings may be useful for clinicians when assessing depression in older adults and for researchers when exploring and comparing depression prevalence across studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Energy Technology Data Exchange (ETDEWEB)

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms we focus on is streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
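
    For background, the sketch below shows classical weighted reservoir sampling (the Efraimidis-Spirakis "A-Res" scheme), which maintains a fixed-size weighted subsample of a stream in a single pass when weights do not change; it is given only to illustrate the kind of primitive the report extends and is not the report's dynamic-weight algorithm.

        import heapq
        import random

        def weighted_reservoir_sample(stream, k):
            # stream yields (item, weight) pairs with weight > 0;
            # returns k items sampled with probability proportional to weight.
            heap = []  # min-heap of (key, item); smallest key is evicted first
            for item, weight in stream:
                key = random.random() ** (1.0 / weight)
                if len(heap) < k:
                    heapq.heappush(heap, (key, item))
                elif key > heap[0][0]:
                    heapq.heapreplace(heap, (key, item))
            return [item for _, item in heap]

        # usage: sample 3 items from a toy weighted stream
        print(weighted_reservoir_sample([("a", 1.0), ("b", 5.0), ("c", 0.5), ("d", 2.0)], 3))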

  17. Dynamic failure of dry and fully saturated limestone samples based on incubation time concept

    Directory of Open Access Journals (Sweden)

    Yuri V. Petrov

    2017-02-01

    Full Text Available This paper outlines the results of an experimental study of dynamic rock failure based on the comparison of dry and saturated limestone samples obtained during dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g. strength, time and energy characteristics) of this material at high strain rates are obtained. It is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural–temporal approach, is hereby presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock failure processes. It is shown that the optimal energy input value associated with the critical load, which is required to initiate failure in the rock media, strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. Through this investigation, a possible approach to reduce the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.

  18. SYBR green-based detection of Leishmania infantum DNA using peripheral blood samples.

    Science.gov (United States)

    Ghasemian, Mehrdad; Gharavi, Mohammad Javad; Akhlaghi, Lame; Mohebali, Mehdi; Meamar, Ahmad Reza; Aryan, Ehsan; Oormazdi, Hormozd; Ghayour, Zahra

    2016-03-01

    Parasitological methods for the diagnosis of visceral leishmaniasis (VL) require invasive sampling procedures. The aim of this study was to detect Leishmania infantum (L. infantum) DNA by a real-time PCR method in the peripheral blood of symptomatic VL patients and to compare its performance with nested PCR, an established molecular method with very high diagnostic indices. 47 parasitologically confirmed VL patients, diagnosed by direct agglutination test (DAT > 3200) and bone marrow aspiration and presenting characteristic clinical features (fever, hepatosplenomegaly, and anemia), and 40 controls (non-endemic healthy control-30, Malaria-2, Toxoplasma gondii-2, Mycobacterium tuberculosis-2, HBV-1, HCV-1, HSV-1 and CMV-1) were enrolled in this study. SYBR green-based real-time PCR and nested PCR were performed to amplify the kinetoplast DNA minicircle gene using the DNA extracted from the buffy coat. Of the 47 patients, 45 (95.7%) were positive by both nested PCR and real-time PCR. These results indicate that real-time PCR is not only as sensitive as a nested PCR assay for the detection of Leishmania kDNA in clinical samples, but also more rapid. The advantages of real-time PCR-based methods over nested PCR are that they are simpler to perform and faster, since nested PCR requires post-PCR processing, and that they reduce the risk of contamination.

  19. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
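
    A toy illustration of the decomposition idea (not the authors' implementation) is sketched below: the unit-square C-space is split into strips, a small PRM-style roadmap is built independently and in parallel in each strip, and connection attempts are then restricted to nodes in adjacent strips; obstacle and collision checking are omitted for brevity, and the region count, sample counts and connection radius are arbitrary.

        import numpy as np
        from multiprocessing import Pool

        N_REGIONS, SAMPLES_PER_REGION, RADIUS = 4, 50, 0.15

        def build_regional_roadmap(region_index):
            # Sample configurations in one strip of the unit square and connect
            # pairs closer than RADIUS (a standard sequential PRM step).
            rng = np.random.default_rng(region_index)
            lo, hi = region_index / N_REGIONS, (region_index + 1) / N_REGIONS
            nodes = np.column_stack([rng.uniform(lo, hi, SAMPLES_PER_REGION),
                                     rng.uniform(0, 1, SAMPLES_PER_REGION)])
            edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
                     if np.linalg.norm(nodes[i] - nodes[j]) < RADIUS]
            return nodes, edges

        def connect_adjacent(left, right):
            # Connection attempts restricted to nodes of two neighbouring regions.
            nodes_l, _ = left
            nodes_r, _ = right
            return [(i, j) for i in range(len(nodes_l)) for j in range(len(nodes_r))
                    if np.linalg.norm(nodes_l[i] - nodes_r[j]) < RADIUS]

        if __name__ == "__main__":
            with Pool(N_REGIONS) as pool:
                regional = pool.map(build_regional_roadmap, range(N_REGIONS))
            inter = [connect_adjacent(regional[k], regional[k + 1])
                     for k in range(N_REGIONS - 1)]
            print([len(r[1]) for r in regional], [len(e) for e in inter])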

  20. Raman spectral signatures of cervical exfoliated cells from liquid-based cytology samples

    Science.gov (United States)

    Kearney, Padraig; Traynor, Damien; Bonnier, Franck; Lyng, Fiona M.; O'Leary, John J.; Martin, Cara M.

    2017-10-01

    It is widely accepted that cervical screening has significantly reduced the incidence of cervical cancer worldwide. The primary screening test for cervical cancer is the Papanicolaou (Pap) test, which has extremely variable specificity and sensitivity. There is an unmet clinical need for methods to aid clinicians in the early detection of cervical precancer. Raman spectroscopy is a label-free objective method that can provide a biochemical fingerprint of a given sample. Compared with studies on infrared spectroscopy, relatively few Raman spectroscopy studies have been carried out to date on cervical cytology. The aim of this study was to define the Raman spectral signatures of cervical exfoliated cells present in liquid-based cytology Pap test specimens and to compare the signature of high-grade dysplastic cells to each of the normal cell types. Raman spectra were recorded from single exfoliated cells and subjected to multivariate statistical analysis. The study demonstrated that Raman spectroscopy can identify biochemical signatures associated with the most common cell types seen in liquid-based cytology samples; superficial, intermediate, and parabasal cells. In addition, biochemical changes associated with high-grade dysplasia could be identified suggesting that Raman spectroscopy could be used to aid current cervical screening tests.

  1. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  2. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling based uncertainty analysis was carried out to quantify uncertainty in predictions of the best estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as a part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both steady state and transient level by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty band between the 5th and 95th percentiles of the output parameters was evaluated. It was observed that the uncertainty band for the primary pressure during two phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed relating to accumulator injection flow during the reflood phase. Importance analysis was also carried out and standardized rank regression coefficients were computed to quantify the effect of each individual input parameter on output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter relating to the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure.
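
    The two generic ingredients named in the abstract, Latin hypercube sampling of the uncertain inputs and standardized rank regression coefficients relating inputs to an output, can be sketched as follows; the three inputs and the toy output function are made up and merely stand in for the code input parameters and predicted plant parameters.

        import numpy as np

        def latin_hypercube(n_samples, n_params, rng):
            # One stratified sample per row: each column is a random permutation
            # of the n_samples equal-probability strata of the unit interval.
            u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
            for j in range(n_params):
                u[:, j] = rng.permutation(u[:, j])
            return u

        def srrc(x, y):
            # Standardized rank regression coefficients of y on the columns of x.
            rx = np.argsort(np.argsort(x, axis=0), axis=0).astype(float)
            ry = np.argsort(np.argsort(y)).astype(float)
            rx = (rx - rx.mean(axis=0)) / rx.std(axis=0)
            ry = (ry - ry.mean()) / ry.std()
            beta, *_ = np.linalg.lstsq(rx, ry, rcond=None)
            return beta

        rng = np.random.default_rng(0)
        x = latin_hypercube(200, 3, rng)                       # uncertain inputs in [0, 1)
        y = 2.0 * x[:, 0] + 0.3 * x[:, 1] + 0.05 * rng.random(200)   # toy code output
        print(srrc(x, y))                                      # importance ranking of the inputs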

  3. Direct RNA-based detection of CTX-M β-lactamases in human blood samples.

    Science.gov (United States)

    Stein, Claudia; Makarewicz, Oliwia; Pfeifer, Yvonne; Brandt, Christian; Pletz, Mathias W

    2015-05-01

    Bloodstream infections with ESBL-producers are associated with increased mortality, which is due to delayed appropriate treatment resulting in clinical failure. Current routine diagnostics for detection of bloodstream infections consists of blood culture followed by species identification and susceptibility testing. In attempts to improve and accelerate diagnostic procedures, PCR-based methods have been developed. These methods focus on species identification covering only a limited number of ESBL coding genes. Therefore, they fail to cover the steadily further evolving genetic diversity of clinically relevant β-lactamases. We have recently designed a fast and novel RNA targeting method to detect and specify CTX-M alleles from bacterial cultures, based on an amplification-pyrosequencing approach. We further developed this assay towards a diagnostic tool for clinical use and evaluated its sensitivity and specificity when applied directly to human blood samples. An optimized protocol for mRNA isolation allows detection of specific CTX-M groups from as little as 100 CFU/mL blood via reverse transcription, amplification, and pyrosequencing directly from human EDTA blood samples as well as from pre-incubated human blood cultures with a turnaround time for test results of <7 h. Copyright © 2015 Elsevier GmbH. All rights reserved.

  4. Sieve-based device for MALDI sample preparation. III. Its power for quantitative measurements.

    Science.gov (United States)

    Molin, Laura; Cristoni, Simone; Seraglia, Roberta; Traldi, Pietro

    2011-02-01

    The solid sample inhomogeneity is a weak point of traditional MALDI deposition techniques that reflects negatively on quantitative analysis. The recently developed sieve-based device (SBD) sample deposition method, based on the electrospraying of matrix/analyte solutions through a grounded sieve, allows the homogeneous deposition of microcrystals with dimensions smaller than that of the laser spot. In each microcrystal the matrix/analyte molar ratio can be considered constant. Then, by irradiating different portions of the microcrystal distribution an identical response is obtained. This result suggests the employment of SBD in the development of quantitative procedures. For this aim, mixtures of different proteins of known molarity were analyzed, showing a good relationship between molarity and intensity ratios. This behaviour was also observed in the case of proteins with quite different ionic yields. The power of the developed method for quantitative evaluation was also tested by the measurement of the abundance of IGPP[Oxi]GPP[Oxi]GLMGPP (m/z 1219) present in the collagen-α-5(IV) chain precursor, differently expressed in urines from healthy subjects and diabetic-nephropathic patients, confirming its overexpression in the presence of nephropathy. The data obtained indicate that SBD is a particularly effective method for quantitative analysis also in biological fluids of interest. Copyright © 2011 John Wiley & Sons, Ltd.

  5. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Full Text Available Due to the complexity of the system and lack of expertise, epistemic uncertainties may be present in the experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, the various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of the indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and thus possesses good compatibility. It avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. The original expert judgments are retained rather objectively throughout the processing procedure. Constructing the cumulative probability function and performing the random sampling do not require any human intervention or judgment and can be implemented easily by computer programs, giving the method an apparent advantage in evaluation practices for fairly large index systems.
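
    A loose sketch of the final step only is given below: assuming the evidence fusion has already produced an interval assessment [lo, hi] of each index's importance (the index names and interval values shown are hypothetical), Monte Carlo draws within the intervals are averaged and normalized into weights.

        import numpy as np

        # hypothetical fused interval importances of three indices
        intervals = {"reliability": (0.6, 0.9), "cost": (0.2, 0.5), "safety": (0.7, 1.0)}

        rng = np.random.default_rng(42)
        draws = {k: rng.uniform(lo, hi, 10_000) for k, (lo, hi) in intervals.items()}
        means = {k: v.mean() for k, v in draws.items()}
        total = sum(means.values())
        weights = {k: m / total for k, m in means.items()}
        print(weights)   # roughly {'reliability': 0.38, 'cost': 0.18, 'safety': 0.44}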

  6. SOLUTION OF A MULTIVARIATE STRATIFIED SAMPLING PROBLEM THROUGH CHEBYSHEV GOAL PROGRAMMING

    Directory of Open Access Journals (Sweden)

    Mohd. Vaseem Ismail

    2010-12-01

    Full Text Available In this paper, we consider the problem of minimizing the variances of the various characters with a fixed (given) budget. Each convex objective function is first linearised at its minimal point where it meets the linear cost constraint. The resulting multiobjective linear programming problem is then solved by Chebyshev goal programming. A numerical example is given to illustrate the procedure.
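
    A generic Chebyshev (minimax) goal-programming sketch with made-up coefficients is shown below: once the variance objectives have been linearised, minimizing the worst deviation from the goals subject to the budget constraint is an ordinary linear program, here posed in the continuous relaxation of the stratum sample sizes and solved with SciPy's linprog; all numbers are illustrative, not from the paper.

        import numpy as np
        from scipy.optimize import linprog

        # decision vector: [n1, n2, z] where z is the worst deviation to minimize
        c = np.array([0.0, 0.0, 1.0])
        # linearised variances (toy coefficients): V1 ~ 40 - 0.8*n1 - 0.3*n2, goal 10
        #                                          V2 ~ 35 - 0.2*n1 - 0.9*n2, goal 10
        # Vj - goal <= z  becomes  -0.8*n1 - 0.3*n2 - z <= -30, etc.
        A_ub = np.array([[-0.8, -0.3, -1.0],
                         [-0.2, -0.9, -1.0],
                         [ 5.0,  8.0,  0.0]])        # budget: 5*n1 + 8*n2 <= 300
        b_ub = np.array([-30.0, -25.0, 300.0])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(2, None), (2, None), (0, None)])
        print(res.x)   # continuous stratum allocations and the achieved worst deviation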

  7. ENVIRONMENTALLY STRATIFIED SAMPLING DESIGN FOR THE DEVELOPMENT OF THE GREAT LAKES ENVIRONMENTAL INDICATORS

    Science.gov (United States)

    Ecological indicators must be shown to be responsive to stress. For large-scale observational studies the best way to demonstrate responsiveness is by evaluating indicators along a gradient of stress, but such gradients are often unknown for a population of sites prior to site se...

  8. Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.

    Science.gov (United States)

    Habershon, Scott

    2016-04-12

    In a recent article [ J. Chem. Phys. 2015 , 143 , 094106 ], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enables determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.

  9. Migraine and Mental Health in a Population-Based Sample of Adolescents.

    Science.gov (United States)

    Orr, Serena L; Potter, Beth K; Ma, Jinhui; Colman, Ian

    2017-01-01

    To explore the relationship between migraine and anxiety disorders, mood disorders and perceived mental health in a population-based sample of adolescents. The Canadian Community Health Survey (CCHS) is a cross-sectional health survey sampling a nationally representative group of Canadians. In this observational study, data on all 61,375 participants aged 12-19 years from six survey cycles were analyzed. The relationships between self-reported migraine, perceived mental health, and mood/anxiety disorders were modeled using univariate and multivariate logistic regression. The migraine-depression association was also explored in a subset of participants using the Composite International Diagnostic Interview-Short Form (CIDI-SF) depression scale. The odds of migraine were higher among those with mood disorders, with the strongest association in 2011-2 (adjusted odds ratio [aOR]=4.59; 95% confidence interval [CI 95%]=3.44-6.12), and the weakest in 2009-10 (aOR=3.06, CI 95%=2.06-4.55). The migraine-mood disorders association was also significant throughout all cycles, other than 2011-2, when the CIDI-SF depression scale was employed. The odds of migraine were higher among those with anxiety disorders, with the strongest association in 2011-2 (aOR=4.21, CI 95%=3.31-5.35) and the weakest in 2010 (aOR=1.87, CI 95%=1.10-3.37). The inverse association between high perceived mental health and the odds of migraine was observed in all CCHS cycles, with the strongest association in 2011-2 (aOR=0.58, CI 95%=0.48-0.69) and the weakest in 2003-4 (aOR=0.75, CI 95%=0.62-0.91). This study provides evidence, derived from a large population-based sample of adolescents, for a link between migraine and mood/anxiety disorders.

  10. MANAGERIAL DECISION IN INNOVATIVE EDUCATION SYSTEMS STATISTICAL SURVEY BASED ON SAMPLE THEORY

    Directory of Open Access Journals (Sweden)

    Gheorghe SĂVOIU

    2012-12-01

    Full Text Available Before formulating the statistical hypotheses and carrying out the econometric testing itself, a breakdown of some technical issues is required. These issues relate to managerial decision-making in innovative educational systems: the educational-managerial phenomenon is tested through statistical and mathematical methods, namely the significant differences in the perception of current qualities, knowledge, experience, behaviour and desirable health, obtained through a questionnaire applied to a stratified population within the educational environment, comprising respondents engaged either in purely educational activities or in simultaneous managerial and educational activities. The details of the research, which is based on survey theory and turns the questionnaires and the statistical data processed from them into working tools, are summarized below.

  11. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also known as model transfer) in near infrared (NIR) spectroscopy. NIR data of corn samples, acquired for the analysis of protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS is compared with KS-PDS-KPLS in terms of the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be used as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection procedure. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are interrelated with the concentrations (y). It can also be used for simultaneous outlier elimination through validation of the calibration. The running-time statistics show that the sample selection process is faster when KPLS is used. The quick SIMPLISMA-KPLS algorithm is therefore beneficial for improving the speed of online measurement using NIR spectroscopy.
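
    For reference, the Kennard-Stone (KS) baseline mentioned above can be sketched in a few lines: it selects samples purely from the spectral matrix X by repeatedly adding the candidate farthest from the already selected set, which is exactly the spectra-only behaviour that SIMPLISMA-KPLS complements with concentration information; the toy spectra below are random placeholders.

        import numpy as np

        def kennard_stone(X, n_select):
            # Return indices of n_select rows of X chosen by the KS algorithm.
            dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            # start from the two most distant samples
            selected = list(np.unravel_index(np.argmax(dist), dist.shape))
            while len(selected) < n_select:
                remaining = [i for i in range(len(X)) if i not in selected]
                # pick the remaining sample whose nearest selected sample is farthest
                next_idx = max(remaining, key=lambda i: dist[i, selected].min())
                selected.append(next_idx)
            return selected

        X = np.random.default_rng(1).random((50, 700))   # 50 toy "NIR spectra"
        print(kennard_stone(X, 10))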

  12. MITIE: Simultaneous RNA-Seq-based transcript identification and quantification in multiple samples.

    Science.gov (United States)

    Behr, Jonas; Kahles, André; Zhong, Yi; Sreedharan, Vipin T; Drewe, Philipp; Rätsch, Gunnar

    2013-10-15

    High-throughput sequencing of mRNA (RNA-Seq) has led to tremendous improvements in the detection of expressed genes and reconstruction of RNA transcripts. However, the extensive dynamic range of gene expression, technical limitations and biases, as well as the observed complexity of the transcriptional landscape, pose profound computational challenges for transcriptome reconstruction. We present the novel framework MITIE (Mixed Integer Transcript IdEntification) for simultaneous transcript reconstruction and quantification. We define a likelihood function based on the negative binomial distribution, use a regularization approach to select a few transcripts collectively explaining the observed read data and show how to find the optimal solution using Mixed Integer Programming. MITIE can (i) take advantage of known transcripts, (ii) reconstruct and quantify transcripts simultaneously in multiple samples, and (iii) resolve the location of multi-mapping reads. It is designed for genome- and assembly-based transcriptome reconstruction. We present an extensive study based on realistic simulated RNA-Seq data. When compared with state-of-the-art approaches, MITIE proves to be significantly more sensitive and overall more accurate. Moreover, MITIE yields substantial performance gains when used with multiple samples. We applied our system to 38 Drosophila melanogaster modENCODE RNA-Seq libraries and estimated the sensitivity of reconstructing omitted transcript annotations and the specificity with respect to annotated transcripts. Our results corroborate that a well-motivated objective paired with appropriate optimization techniques lead to significant improvements over the state-of-the-art in transcriptome reconstruction. MITIE is implemented in C++ and is available from http://bioweb.me/mitie under the GPL license.

  13. Crystallization of a compositionally stratified basal magma ocean

    Science.gov (United States)

    Laneuville, Matthieu; Hernlund, John; Labrosse, Stéphane; Guttenberg, Nicholas

    2018-03-01

    Earth's ∼3.45 billion year old magnetic field is regenerated by dynamo action in its convecting liquid metal outer core. However, convection induces an isentropic thermal gradient which, coupled with a high core thermal conductivity, results in rapid conducted heat loss. In the absence of implausibly high radioactivity or alternate sources of motion to drive the geodynamo, the Earth's early core had to be significantly hotter than the melting point of the lower mantle. While the existence of a dense convecting basal magma ocean (BMO) has been proposed to account for high early core temperatures, the requisite physical and chemical properties for a BMO remain controversial. Here we relax the assumption of a well-mixed convecting BMO and instead consider a BMO that is initially gravitationally stratified owing to processes such as mixing between metals and silicates at high temperatures in the core-mantle boundary region during Earth's accretion. Using coupled models of crystallization and heat transfer through a stratified BMO, we show that very high temperatures could have been trapped inside the early core, sequestering enough heat energy to run an ancient geodynamo on cooling power alone.

  14. Dyadic Green's function of an eccentrically stratified sphere.

    Science.gov (United States)

    Moneda, Angela P; Chrissoulidis, Dimitrios P

    2014-03-01

    The electric dyadic Green's function (dGf) of an eccentrically stratified sphere is built by use of the superposition principle, dyadic algebra, and the addition theorem of vector spherical harmonics. The end result of the analytical formulation is a set of linear equations for the unknown vector wave amplitudes of the dGf. The unknowns are calculated by truncation of the infinite sums and matrix inversion. The theory is exact, as no simplifying assumptions are required in any one of the analytical steps leading to the dGf, and it is general in the sense that any number, position, size, and electrical properties can be considered for the layers of the sphere. The point source can be placed outside of or in any lossless part of the sphere. Energy conservation, reciprocity, and other checks verify that the dGf is correct. A numerical application is made to a stratified sphere made of gold and glass, which operates as a lens.

  15. Crenothrix are major methane consumers in stratified lakes.

    Science.gov (United States)

    Oswald, Kirsten; Graf, Jon S; Littmann, Sten; Tienken, Daniela; Brand, Andreas; Wehrli, Bernhard; Albertsen, Mads; Daims, Holger; Wagner, Michael; Kuypers, Marcel Mm; Schubert, Carsten J; Milucka, Jana

    2017-09-01

    Methane-oxidizing bacteria represent a major biological sink for methane and are thus Earth's natural protection against this potent greenhouse gas. Here we show that in two stratified freshwater lakes a substantial part of upward-diffusing methane was oxidized by filamentous gamma-proteobacteria related to Crenothrix polyspora. These filamentous bacteria have been known as contaminants of drinking water supplies since 1870, but their role in environmental methane removal has remained unclear. These methane-oxidizing organisms had been assigned an 'unusual' methane monooxygenase (MMO), which was only distantly related to the 'classical' MMO of gamma-proteobacterial methanotrophs. We now correct this assignment and show that Crenothrix encode a typical gamma-proteobacterial PmoA. Stable isotope labeling in combination with single-cell imaging mass spectrometry revealed methane-dependent growth of the lacustrine Crenothrix with oxygen as well as under oxygen-deficient conditions. Crenothrix genomes encoded pathways for the respiration of oxygen as well as for the reduction of nitrate to N2O. The observed abundance and planktonic growth of Crenothrix suggest that these methanotrophs can act as a relevant biological sink for methane in stratified lakes and should be considered in the context of environmental removal of methane.

  16. LONGITUDINAL OSCILLATIONS IN DENSITY STRATIFIED AND EXPANDING SOLAR WAVEGUIDES

    Energy Technology Data Exchange (ETDEWEB)

    Luna-Cardozo, M. [Instituto de Astronomia y Fisica del Espacio, CONICET-UBA, CC. 67, Suc. 28, 1428 Buenos Aires (Argentina); Verth, G. [School of Computing, Engineering and Information Sciences, Northumbria University, Newcastle Upon Tyne NE1 8ST (United Kingdom); Erdelyi, R., E-mail: mluna@iafe.uba.ar, E-mail: robertus@sheffield.ac.uk, E-mail: gary.verth@northumbria.ac.uk [Solar Physics and Space Plasma Research Centre (SP2RC), University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH (United Kingdom)

    2012-04-01

    Waves and oscillations can provide vital information about the internal structure of waveguides in which they propagate. Here, we analytically investigate the effects of density and magnetic stratification on linear longitudinal magnetohydrodynamic (MHD) waves. The focus of this paper is to study the eigenmodes of these oscillations. It is our specific aim to understand what happens to these MHD waves generated in flux tubes with non-constant (e.g., expanding or magnetic bottle) cross-sectional area and density variations. The governing equation of the longitudinal mode is derived and solved analytically and numerically. In particular, the limit of the thin flux tube approximation is examined. The general solution describing the slow longitudinal MHD waves in an expanding magnetic flux tube with constant density is found. Longitudinal MHD waves in density stratified loops with constant magnetic field are also analyzed. From analytical solutions, the frequency ratio of the first overtone and fundamental mode is investigated in stratified waveguides. For small expansion, a linear dependence between the frequency ratio and the expansion factor is found. From numerical calculations it was found that the frequency ratio strongly depends on the density profile chosen and, in general, the numerical results are in agreement with the analytical results. The relevance of these results for solar magneto-seismology is discussed.

  17. Random forcing of geostrophic motion in rotating stratified turbulence

    Science.gov (United States)

    Waite, Michael L.

    2017-12-01

    Random forcing of geostrophic motion is a common approach in idealized simulations of rotating stratified turbulence. Such forcing represents the injection of energy into large-scale balanced motion, and the resulting breakdown of quasi-geostrophic turbulence into inertia-gravity waves and stratified turbulence can shed light on the turbulent cascade processes of the atmospheric mesoscale. White noise forcing is commonly employed, which excites all frequencies equally, including frequencies much higher than the natural frequencies of large-scale vortices. In this paper, the effects of these high frequencies in the forcing are investigated. Geostrophic motion is randomly forced with red noise over a range of decorrelation time scales τ, from a few time steps to twice the large-scale vortex time scale. It is found that short τ (i.e., nearly white noise) results in about 46% more gravity wave energy than longer τ, despite the fact that waves are not directly forced. We argue that this effect is due to wave-vortex interactions, through which the high frequencies in the forcing are able to excite waves at their natural frequencies. It is concluded that white noise forcing should be avoided, even if it is only applied to the geostrophic motion, when a careful investigation of spontaneous wave generation is needed.
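
    The kind of red-noise forcing described, a random signal with a prescribed decorrelation time τ, can be sketched as a discrete Ornstein-Uhlenbeck (AR(1)) process; the amplitude, time step and the two values of τ below are arbitrary illustrative choices, with the short-τ series approaching white noise.

        import numpy as np

        def red_noise(n_steps, dt, tau, amplitude, rng):
            # Discrete OU process: f_{k+1} = a*f_k + sqrt(1 - a^2) * amplitude * noise,
            # with a = exp(-dt/tau), so the autocorrelation decays on the time scale tau.
            a = np.exp(-dt / tau)
            f = np.zeros(n_steps)
            for k in range(n_steps - 1):
                f[k + 1] = a * f[k] + np.sqrt(1.0 - a * a) * amplitude * rng.standard_normal()
            return f

        rng = np.random.default_rng(0)
        nearly_white = red_noise(10_000, dt=0.01, tau=0.02, amplitude=1.0, rng=rng)
        long_memory  = red_noise(10_000, dt=0.01, tau=2.00, amplitude=1.0, rng=rng)
        print(nearly_white.std(), long_memory.std())   # both approach the amplitude in stationarity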

  18. Internal circle uplifts, transversality and stratified G-structures

    Energy Technology Data Exchange (ETDEWEB)

    Babalic, Elena Mirela [Department of Theoretical Physics, National Institute of Physics and Nuclear Engineering,Str. Reactorului no.30, P.O.BOX MG-6, Postcode 077125, Bucharest-Magurele (Romania); Department of Physics, University of Craiova,13 Al. I. Cuza Str., Craiova 200585 (Romania); Lazaroiu, Calin Iuliu [Center for Geometry and Physics, Institute for Basic Science,Pohang 790-784 (Korea, Republic of)

    2015-11-24

    We study stratified G-structures in N=2 compactifications of M-theory on eight-manifolds M using the uplift to the auxiliary nine-manifold M̂=M×S^1. We show that the cosmooth generalized distribution D̂ on M̂ which arises in this formalism may have pointwise transverse or non-transverse intersection with the pull-back of the tangent bundle of M, a fact which is responsible for the subtle relation between the spinor stabilizers arising on M and M̂ and for the complicated stratified G-structure on M which we uncovered in previous work. We give a direct explanation of the latter in terms of the former and relate explicitly the defining forms of the SU(2) structure which exists on the generic locus U of M to the defining forms of the SU(3) structure which exists on an open subset Û of M̂, thus providing a dictionary between the eight- and nine-dimensional formalisms.

  19. STRESS DISTRIBUTION IN THE STRATIFIED MASS CONTAINING VERTICAL ALVEOLE

    Directory of Open Access Journals (Sweden)

    Bobileva Tatiana Nikolaevna

    2017-08-01

    Full Text Available Almost all subsurface rocks used as foundations for various types of structures are stratified. Such heterogeneity may cause specific behaviour of the materials under strain. The differential equations describing the behaviour of such materials contain rapidly fluctuating coefficients; in view of this, solving such equations directly is time-consuming even on today's computers. The method of asymptotic averaging reduces the equations for the heterogeneous medium under study to averaged equations with constant coefficients. The present article is concerned with a stratified soil mass consisting of pairwise alternating isotropic elastic layers. As a result of averaging the elastic moduli, the soil mass with horizontal rock stratification is simulated by a homogeneous transversely isotropic half-space whose plane of isotropy is perpendicular to the vertical axis. The half-space is weakened by a vertical alveole of circular cross-section, and the virgin ground is loaded by its own weight. For the horizontal parting planes of the layers, the following two types of surface conditions are set: ideal contact and slippage without separation. For the homogeneous transversely isotropic half-space with a vertical alveole obtained in this way, the well-known analytical solution of S.G. Lekhnitsky is used. The author gives expressions for the stress components and displacements in the soil mass for different boundary conditions on the alveole surface. Such research problems arise when constructing and maintaining buildings and when composite materials are used.

  20. Measuring mixing efficiency in experiments of strongly stratified turbulence

    Science.gov (United States)

    Augier, P.; Campagne, A.; Valran, T.; Calpe Linares, M.; Mohanan, A. V.; Micard, D.; Viboud, S.; Segalini, A.; Mordant, N.; Sommeria, J.; Lindborg, E.

    2017-12-01

    Oceanic and atmospheric models need better parameterization of the mixing efficiency. Therefore, we need to measure this quantity for flows representative of geophysical flows, both in terms of the type of flow (with vortices and/or waves) and of the dynamical regime. In order to reach sufficiently large Reynolds numbers for strongly stratified flows, experiments in which salt is used to produce the stratification have to be carried out in a large rotating platform of at least 10-meter diameter. We present new experiments done in summer 2017 to study strongly stratified turbulence and mixing efficiency experimentally in the Coriolis platform. The flow is forced by a slow periodic movement of an array of large vertical or horizontal cylinders. The velocity field is measured by 3D-2C scanned horizontal particle image velocimetry (PIV) and 2D vertical PIV. Six density-temperature probes are used to measure vertical and horizontal profiles and signals at fixed positions. We will show how we rely heavily on open-science methods for this study. Our new results on the mixing efficiency will be presented and discussed in terms of mixing parameterization.

  1. Optimal energy growth in a stably stratified shear flow

    Science.gov (United States)

    Jose, Sharath; Roy, Anubhab; Bale, Rahul; Iyer, Krithika; Govindarajan, Rama

    2018-02-01

    Transient growth of perturbations by a linear non-modal evolution is studied here in a stably stratified bounded Couette flow. The density stratification is linear. Classical inviscid stability theory states that a parallel shear flow is stable to exponentially growing disturbances if the Richardson number (Ri) is greater than 1/4 everywhere in the flow. Experiments and numerical simulations at higher Ri show however that algebraically growing disturbances can lead to transient amplification. The complexity of a stably stratified shear flow stems from its ability to combine this transient amplification with propagating internal gravity waves (IGWs). The optimal perturbations associated with maximum energy amplification are numerically obtained at intermediate Reynolds numbers. It is shown that in this wall-bounded flow, the three-dimensional optimal perturbations are oblique, unlike in unstratified flow. A partitioning of energy into kinetic and potential helps in understanding the exchange of energies and how it modifies the transient growth. We show that the apportionment between potential and kinetic energy depends, in an interesting manner, on the Richardson number, and on time, as the transient growth proceeds from an optimal perturbation. The oft-quoted stabilizing role of stratification is also probed in the non-diffusive limit in the context of disturbance energy amplification.

  2. Non-Darcy Mixed Convection in a Doubly Stratified Porous Medium with Soret-Dufour Effects

    Directory of Open Access Journals (Sweden)

    D. Srinivasacharya

    2014-01-01

    Full Text Available This paper presents the nonsimilarity solutions for mixed convection heat and mass transfer along a semi-infinite vertical plate embedded in a doubly stratified fluid saturated porous medium in the presence of Soret and Dufour effects. The flow in the porous medium is described by employing the Darcy-Forchheimer based model. The nonlinear governing equations and their associated boundary conditions are initially cast into dimensionless forms and then solved numerically. The influence of pertinent parameters on dimensionless velocity, temperature, concentration, heat, and mass transfer in terms of the local Nusselt and Sherwood numbers is discussed and presented graphically.

  3. The optical interface of a photonic crystal: Modeling an opal with a stratified effective index

    OpenAIRE

    Maurin, Isabelle; Moufarej, Elias; Laliotis, Athanasios; Bloch, Daniel

    2014-01-01

    An artificial opal is a compact arrangement of transparent spheres, and is an archetype of a three-dimensional photonic crystal. Here, we describe the optics of an opal using a flexible model based upon a stratified medium whose (effective) index is governed by the opal density in a small planar slice of the opal. We take into account the effect of the substrate and assume a well-controlled number of layers, as it occurs for an opal fabricated by Langmuir-Blodgett deposition. The calculation...

  4. A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling

    Science.gov (United States)

    Gentry, Diana; Guarro, Marcello; Demachkie, Isabella Siham; Stumfall, Isabel; Dahlgren, Robert P.

    2017-01-01

    Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank. Collected samples can be returned to the lab for

  5. Access and completion of a Web-based treatment in a population-based sample of tornado-affected adolescents.

    Science.gov (United States)

    Price, Matthew; Yuen, Erica K; Davidson, Tatiana M; Hubel, Grace; Ruggiero, Kenneth J

    2015-08-01

    Although Web-based treatments have significant potential to assess and treat difficult-to-reach populations, such as trauma-exposed adolescents, the extent that such treatments are accessed and used is unclear. The present study evaluated the proportion of adolescents who accessed and completed a Web-based treatment for postdisaster mental health symptoms. Correlates of access and completion were examined. A sample of 2,000 adolescents living in tornado-affected communities was assessed via structured telephone interview and invited to a Web-based treatment. The modular treatment addressed symptoms of posttraumatic stress disorder, depression, and alcohol and tobacco use. Participants were randomized to experimental or control conditions after accessing the site. Overall access for the intervention was 35.8%. Module completion for those who accessed ranged from 52.8% to 85.6%. Adolescents with parents who used the Internet to obtain health-related information were more likely to access the treatment. Adolescent males were less likely to access the treatment. Future work is needed to identify strategies to further increase the reach of Web-based treatments to provide clinical services in a postdisaster context. (c) 2015 APA, all rights reserved.

  6. Improving the UNC Passive Aerosol Sampler Model Based on Comparison with Commonly Used Aerosol Sampling Methods.

    Science.gov (United States)

    Shirdel, Mariam; Andersson, Britt M; Bergdahl, Ingvar A; Sommar, Johan N; Wingfors, Håkan; Liljelind, Ingrid E

    2018-03-12

    In an occupational environment, passive sampling could be an alternative to active sampling with pumps for sampling of dust. One passive sampler is the University of North Carolina passive aerosol sampler (UNC sampler). It is often analysed by microscopic imaging. Promising results have been shown for particles above 2.5 µm, but indicate large underestimations for PM2.5. The aim of this study was to evaluate, and possibly improve, the UNC sampler for stationary sampling in a working environment. Sampling was carried out at 8-h intervals during 24 h in four locations in an open pit mine with UNC samplers, respirable cyclones, PM10 and PM2.5 impactors, and an aerodynamic particle sizer (APS). The wind was minimal. For quantification, two modifications of the UNC sampler analysis model, UNC sampler with hybrid model and UNC sampler with area factor, were compared with the original one, UNC sampler with mesh factor derived from wind tunnel experiments. The effect of increased resolution for the microscopic imaging was examined. Use of the area factor and a higher resolution eliminated the underestimation for PM10 and PM2.5. The model with area factor had the overall lowest deviation versus the impactor and the cyclone. The intraclass correlation (ICC) showed that the UNC sampler had a higher precision and better ability to distinguish between different exposure levels compared to the cyclone (ICC: 0.51 versus 0.24), but lower precision compared to the impactor (PM10: 0.79 versus 0.99; PM2.5: 0.30 versus 0.45). The particle size distributions as calculated from the different UNC sampler analysis models were visually compared with the distributions determined by APS. The distributions were obviously different when the UNC sampler with mesh factor was used but came to a reasonable agreement when the area factor was used. High resolution combined with a factor based on area only, results in no underestimation of small particles compared to impactors and cyclones and a

  7. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks, namely a scale-free network, a random network and a small-world network, as well as in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
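
    A rough sketch of the underlying idea, biasing the sample toward high-degree vertices and checking how well the sampled subgraph preserves a structural statistic, is given below using networkx; the fifty-fifty split of the budget between hubs and uniformly chosen nodes is an assumption for illustration, not the exact scheme proposed in the paper.

        import random
        import networkx as nx

        def degree_biased_sample(G, rate, hub_fraction=0.5):
            # Spend hub_fraction of the budget on the highest-degree nodes and
            # the remainder on nodes drawn uniformly at random from the rest.
            budget = int(rate * G.number_of_nodes())
            by_degree = sorted(G.nodes, key=G.degree, reverse=True)
            n_hubs = int(hub_fraction * budget)
            hubs = by_degree[:n_hubs]
            others = random.sample(by_degree[n_hubs:], budget - n_hubs)
            return G.subgraph(hubs + others)

        G = nx.barabasi_albert_graph(2000, 3, seed=1)     # scale-free test network
        S = degree_biased_sample(G, rate=0.10)
        print(nx.average_clustering(G), nx.average_clustering(S))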

  8. Generating samples for association studies based on HapMap data

    Directory of Open Access Journals (Sweden)

    Chen Yixuan

    2008-01-01

    Full Text Available Abstract Background With the completion of the HapMap project, a variety of computational algorithms and tools have been proposed for haplotype inference, tag SNP selection and genome-wide association studies. Simulated data are commonly used in evaluating these newly developed approaches. In addition to simulations based on population models, empirical data generated by perturbing real data have also been used because they may inherit specific properties from the real data. However, there is no publicly available tool to generate large-scale simulated variation data by taking into account knowledge from the HapMap project. Results A computer program (gs) was developed to quickly generate a large number of samples based on real data that are useful for a variety of purposes, including evaluating methods for haplotype inference, tag SNP selection and association studies. Two approaches have been implemented to generate dense SNP haplotype/genotype data that share similar local linkage disequilibrium (LD) patterns as those in human populations. The first approach takes haplotype pairs from samples as inputs, and the second approach takes patterns of haplotype block structures as inputs. Both quantitative and qualitative traits have been incorporated in the program. Phenotypes are generated based on a disease model, or based on the effect of a quantitative trait nucleotide, both of which can be specified by users. In addition to single-locus disease models, two-locus disease models have also been implemented that can incorporate any degree of epistasis. Users are allowed to specify all nine parameters in a 3 × 3 penetrance table. For several commonly used two-locus disease models, the program can automatically calculate penetrances based on the population prevalence and marginal effects of a disease that users can conveniently specify. Conclusion The program gs can effectively generate large scale genetic and phenotypic variation data that can be
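
    As an illustration of the disease-model step (not the gs program itself), the sketch below assigns case/control status from a user-specified 3 × 3 two-locus penetrance table, i.e. the probability of disease for each combination of 0, 1 or 2 risk alleles at the two loci; the allele frequencies and penetrance values are made up.

        import numpy as np

        rng = np.random.default_rng(7)
        penetrance = np.array([[0.01, 0.01, 0.02],    # rows: genotype at locus A (0, 1, 2)
                               [0.01, 0.05, 0.10],    # cols: genotype at locus B (0, 1, 2)
                               [0.02, 0.10, 0.40]])
        n, p_a, p_b = 10_000, 0.3, 0.2                # sample size, risk-allele frequencies
        geno_a = rng.binomial(2, p_a, n)              # genotypes under Hardy-Weinberg proportions
        geno_b = rng.binomial(2, p_b, n)
        disease = rng.random(n) < penetrance[geno_a, geno_b]
        print(disease.mean())                         # simulated population prevalence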

  9. Proton NMR-based metabolite analyses of archived serial paired serum and urine samples from myeloma patients at different stages of disease activity identifies acetylcarnitine as a novel marker of active disease.

    Directory of Open Access Journals (Sweden)

    Alessia Lodi

    Full Text Available BACKGROUND: Biomarker identification is becoming increasingly important for the development of personalized or stratified therapies. Metabolomics yields biomarkers indicative of phenotype that can be used to characterize transitions between health and disease, disease progression and therapeutic responses. The desire to reproducibly detect ever greater numbers of metabolites at ever diminishing levels has naturally nurtured advances in best practice for sample procurement, storage and analysis. Reciprocally, since many of the available extensive clinical archives were established prior to the metabolomics era and were not processed in such an 'ideal' fashion, considerable scepticism has arisen as to their value for metabolomic analysis. Here we have challenged that paradigm. METHODS: We performed proton nuclear magnetic resonance spectroscopy-based metabolomics on blood serum and urine samples from 32 patients representative of a total cohort of 1970 multiple myeloma patients entered into the United Kingdom Medical Research Council Myeloma IX trial. FINDINGS: Using serial paired blood and urine samples we detected metabolite profiles that associated with diagnosis, post-treatment remission and disease progression. These studies identified carnitine and acetylcarnitine as novel potential biomarkers of active disease both at diagnosis and relapse and as a mediator of disease associated pathologies. CONCLUSIONS: These findings show that samples conventionally processed and archived can provide useful metabolomic information that has important implications for understanding the biology of myeloma, discovering new therapies and identifying biomarkers potentially useful in deciding the choice and application of therapy.

  10. Newly developed liquid-based cytology. TACAS™: cytological appearance and HPV testing using liquid-based sample.

    Science.gov (United States)

    Kubushiro, Kaneyuki; Taoka, Hideki; Sakurai, Nobuyuki; Yamamoto, Yasuhiro; Kurasaki, Akiko; Asakawa, Yasuyuki; Iwahara, Minoru; Takahashi, Kei

    2011-09-01

    Cell profiles determined by the thin-layer advanced cytology assay system (TACAS™), a liquid-based cytology technique newly developed in Japan, were analyzed in this study. Hybrid capture 2 (HC-2) was also performed using the liquid-based samples prepared by TACAS to ascertain its ability to detect human papillomavirus (HPV). Cell collection samples from uterine cervix were obtained from 359 patients and examined cytologically. A HC-2 assay for HPV was carried out in the cell specimens. All specimens were found to show background factors such as leukocytes. After excluding the 5 unsatisfactory cases from the total 354 cases, 82 cases (23.2%) were positive and 272 cases (76.8%) were negative for HPV. Cell specimens from 30 HPV-positive cases and 166 HPV-negative cases were subjected to 4 weeks of preservation at room temperature. Then, when subsequently re-assayed, 28 cases (93.3%) in the former group were found to be HPV positive and 164 cases (98.8%) in the latter group were found to be HPV negative. These results supported the excellent reproducibility of TACAS for HPV testing. A reasonable inference from the foregoing analysis is that TACAS may be distinguished from other liquid-based cytological approaches, such as ThinPrep and SurePath, in that it can retain the cell backgrounds. Furthermore, this study raises the possibility that cell specimens prepared using TACAS could be preserved for at least 4 weeks prior to carrying out a HC-2 assay for HPV.

  11. Prevalence of premenstrual syndrome and premenstrual dysphoric disorder in a population-based sample in China.

    Science.gov (United States)

    Qiao, Mingqi; Zhang, Huiyun; Liu, Huimin; Luo, Songping; Wang, Tianfang; Zhang, Junlong; Ji, Lijin

    2012-05-01

    To investigate the prevalence of premenstrual syndrome (PMS) and premenstrual dysphoric disorder (PMDD), and the frequency and severity of their symptoms, in a population-based sample of Chinese women of reproductive age. Women aged 18-45 years were screened for suspected PMS and PMDD based on the ACOG recommendations for a diagnosis of PMS and the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). For those who met the PMS diagnostic criteria, the Daily Record of Severity of Problems (DRSP) questionnaire was used to assess symptoms prospectively over 2 months. Participants were then categorized as having no perceived symptoms, mild PMS, moderate PMS, or PMDD, based on a validated algorithm. In the study group, the prevalence of PMDD was 2.1% and that of PMS was 21.1%. The most common symptoms were irritability (91.21%), breast tenderness (77.62%), depression (68.31%), abdominal bloating (63.70%) and angry outbursts (59.62%). The prevalence of PMS/PMDD and the frequency and severity of the symptoms show characteristics specific to Chinese women. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
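    A minimal sketch of how per-participant categories translate into the prevalence figures quoted above follows; the counts are invented so that the output reproduces 21.1% PMS and 2.1% PMDD, and the assumption that mild and moderate PMS together make up the PMS figure is ours, since the validated DRSP-based categorization algorithm itself is not reproduced here.

```python
# Turn per-participant categories into prevalence figures.
# The counts below are illustrative (chosen to reproduce 21.1% PMS and
# 2.1% PMDD); the real DRSP-based categorization algorithm is not shown.
from collections import Counter

categories = (["no perceived symptoms"] * 768 + ["mild PMS"] * 110
              + ["moderate PMS"] * 101 + ["PMDD"] * 21)
counts = Counter(categories)
n = len(categories)

pms_prevalence = 100.0 * (counts["mild PMS"] + counts["moderate PMS"]) / n
pmdd_prevalence = 100.0 * counts["PMDD"] / n
print(f"PMS: {pms_prevalence:.1f}%, PMDD: {pmdd_prevalence:.1f}%")
```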

  12. Reflectors and tuning elements for widely-tunable GaAs-based sampled grating DBR lasers

    Science.gov (United States)

    Brox, O.; Wenzel, H.; Della Case, P.; Tawfieq, M.; Sumpf, B.; Weyers, M.; Knigge, A.

    2018-02-01

    Widely tunable lasers without moving parts are attractive light sources for sensors in industry and biomedicine. In contrast to InP-based sampled-grating (SG) distributed Bragg reflector (DBR) diode lasers, which are commercially available, shorter-wavelength GaAs SG-DBR lasers are still under development. One reason is the difficulty of integrating gratings with coupling coefficients high enough for functional grating bursts shorter than 10 μm. Recently we demonstrated > 20 nm of quasi-continuous tuning with a GaAs-based SG-DBR laser emitting around 975 nm. Wavelength-selective reflectors are realized with SGs having different burst periods for the front and back mirrors. Thermal tuning elements (resistors) placed on top of the SGs allow the spectral positions of the SG reflector combs to be controlled and hence the Vernier mode to be adjusted. In this work we characterize subsections of the developed SG-DBR laser to further improve its performance. We study the impact of two different vertical structures (with vertical far-field FWHMs of 41° and 24°) and two grating orders on the coupling coefficient. Gratings with coupling coefficients above 350 cm⁻¹ have been integrated into SG-DBR lasers. We also examine electronic tuning elements (a technique typically applied in InP-based SG-DBR lasers that allows tuning within nanoseconds) and discuss their limitations in the GaAs material system.
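    The Vernier selection mentioned above (front and back sampled gratings with different burst periods, thermally shifted reflector combs) can be illustrated with a small numerical sketch; all wavelengths, comb spacings and shifts below are invented and are not parameters of the device described in the abstract.

```python
# Rough sketch of the Vernier selection principle used in SG-DBR lasers:
# the front and back sampled gratings produce reflection combs with slightly
# different peak spacings, and the laser operates where two peaks coincide.
# All numbers are illustrative only.
import numpy as np

center = 975.0                                      # nm, emission region
front_peaks = center + 0.55 * np.arange(-20, 21)    # nm, front-mirror comb
back_peaks  = center + 0.60 * np.arange(-20, 21)    # nm, back-mirror comb

def selected_wavelength(front, back):
    """Wavelength at which the two reflector combs overlap best."""
    d = np.abs(front[:, None] - back[None, :])
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return 0.5 * (front[i] + back[j])

print(f"no tuning:      {selected_wavelength(front_peaks, back_peaks):.2f} nm")
# A small thermal shift of one comb moves the coincidence to a different pair
# of peaks, i.e. a large wavelength jump (the Vernier effect).
print(f"front +0.10 nm: {selected_wavelength(front_peaks + 0.10, back_peaks):.2f} nm")
```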

  13. Discrete Multiwavelet Critical-Sampling Transform-Based OFDM System over Rayleigh Fading Channels

    Directory of Open Access Journals (Sweden)

    Sameer A. Dawood

    2015-01-01

    Full Text Available Discrete multiwavelet critical-sampling transform (DMWCST) has been proposed instead of the fast Fourier transform (FFT) in the realization of the orthogonal frequency division multiplexing (OFDM) system. The proposed structure further reduces the level of interference and improves the bandwidth efficiency by eliminating the cyclic prefix, thanks to the good orthogonality and time-frequency localization properties of the multiwavelet transform. The proposed system was simulated using MATLAB so that various parameters of the system could be varied and tested. The performance of DMWCST-based OFDM (DMWCST-OFDM) was compared with that of discrete wavelet transform-based OFDM (DWT-OFDM) and traditional FFT-based OFDM (FFT-OFDM) over flat fading and frequency-selective fading channels. The results indicate that the proposed DMWCST-OFDM system achieves a significant improvement over the DWT-OFDM and FFT-OFDM systems: DMWCST improves the performance of the OFDM system by 1.5–2.5 dB and 13–15.5 dB compared with the DWT and FFT, respectively. Therefore, the proposed system offers a higher data rate in wireless mobile communications.
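    For orientation, the sketch below implements only the conventional FFT-OFDM reference chain that the DMWCST system is benchmarked against (QPSK mapping, IFFT, cyclic prefix, AWGN channel, FFT demodulation); the multiwavelet transform itself is not reproduced, and all parameter values are illustrative.

```python
# Minimal FFT-OFDM reference chain (not the DMWCST system described above).
import numpy as np

rng = np.random.default_rng(0)
n_sub, cp_len, snr_db = 64, 16, 10

bits = rng.integers(0, 2, size=2 * n_sub)
# Gray-mapped QPSK: bit 0 -> +1, bit 1 -> -1 on each axis.
qpsk = (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(qpsk) * np.sqrt(n_sub)        # OFDM modulation (unit power)
tx_cp = np.concatenate([tx[-cp_len:], tx])     # add cyclic prefix

noise_var = 10 ** (-snr_db / 10)               # AWGN channel
rx = tx_cp + np.sqrt(noise_var / 2) * (rng.standard_normal(tx_cp.shape)
                                       + 1j * rng.standard_normal(tx_cp.shape))

rx_sym = np.fft.fft(rx[cp_len:]) / np.sqrt(n_sub)  # strip CP, demodulate
bits_hat = np.empty_like(bits)
bits_hat[0::2] = (rx_sym.real < 0).astype(int)
bits_hat[1::2] = (rx_sym.imag < 0).astype(int)
print("bit errors:", int(np.sum(bits != bits_hat)))
```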

  14. Sampling-based exploration of folded state of a protein under kinematic and geometric constraints

    KAUST Repository

    Yao, Peggy

    2011-10-04

    Flexibility is critical for a folded protein to bind to other molecules (ligands) and achieve its functions. The conformational selection theory suggests that a folded protein deforms continuously and its ligand selects the most favorable conformations to bind to. Therefore, one of the best options to study protein-ligand binding is to sample conformations broadly distributed over the protein-folded state. This article presents a new sampler, called kino-geometric sampler (KGS). This sampler encodes dominant energy terms implicitly by simple kinematic and geometric constraints. Two key technical contributions of KGS are (1) a robotics-inspired Jacobian-based method to simultaneously deform a large number of interdependent kinematic cycles without any significant break-up of the closure constraints, and (2) a diffusive strategy to generate conformation distributions that diffuse quickly throughout the protein folded state. Experiments on four very different test proteins demonstrate that KGS can efficiently compute distributions containing conformations close to target (e.g., functional) conformations. These targets are not given to KGS, hence are not used to bias the sampling process. In particular, for a lysine-binding protein, KGS was able to sample conformations in both the intermediate and functional states without the ligand, while previous work using molecular dynamics simulation had required the ligand to be taken into account in the potential function. Overall, KGS demonstrates that kino-geometric constraints characterize the folded subset of a protein conformation space and that this subset is small enough to be approximated by a relatively small distribution of conformations. © 2011 Wiley Periodicals, Inc.
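    The Jacobian-based deformation step described above can be sketched, at a conceptual level, as a null-space projection: a random perturbation of the torsional degrees of freedom is projected onto the null space of the closure-constraint Jacobian so that the ring-closure constraints are preserved to first order. The Jacobian below is a random stand-in, not one derived from an actual protein kinematic model.

```python
# Conceptual null-space projection behind a KGS-style deformation step.
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_constraints = 50, 12

J = rng.standard_normal((n_constraints, n_dof))   # constraint Jacobian (stand-in)
dq_random = rng.standard_normal(n_dof)            # trial perturbation

# Null-space projector: P = I - pinv(J) @ J
P = np.eye(n_dof) - np.linalg.pinv(J) @ J
dq_feasible = P @ dq_random                       # constraint-preserving move

print("constraint violation before projection:", np.linalg.norm(J @ dq_random))
print("constraint violation after projection: ", np.linalg.norm(J @ dq_feasible))
```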

  15. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft across a broad 2D flight parameter space. A Kriging surrogate model is constructed using the ASE models at a fraction of the grid points in the original model database, and the ASE model at any flight condition is then obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select as the next sample point the one carrying the worst relative error, across all input-output channels in the frequency domain, between the surrogate model prediction and the benchmark model. The process is iterated to incrementally improve the surrogate model accuracy until a predetermined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database, constructed directly from the physics-based tool, with a worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, (a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and (b) reiterating the need for, and utility of, adaptive sampling techniques for ASE model database compaction. The present framework is directly extensible to high-dimensional flight parameter spaces and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design for flexible aircraft.
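    A hedged sketch of the greedy, error-driven sampling loop is given below, using scikit-learn's GaussianProcessRegressor as the Kriging surrogate. The "database" is reduced here to a scalar response on a 2D flight-parameter-like grid; in the paper each entry is a full ASE model and the error is evaluated across all input-output channels in the frequency domain.

```python
# Greedy surrogate-driven selection of grid points (illustrative scalar case).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Stand-in "database": a smooth scalar quantity on a Mach/altitude-like grid.
grid = np.array([[m, h] for m in np.linspace(0.3, 0.9, 13)
                        for h in np.linspace(0.0, 1.0, 11)])
truth = np.sin(4 * grid[:, 0]) * np.cos(3 * grid[:, 1]) + 2.0

selected = [0, len(grid) - 1]        # start from two corner points
tolerance = 0.01                     # stop at 1% worst relative error

for _ in range(len(grid)):
    gp = GaussianProcessRegressor().fit(grid[selected], truth[selected])
    pred = gp.predict(grid)
    rel_err = np.abs(pred - truth) / np.abs(truth)
    worst = int(np.argmax(rel_err))
    if rel_err[worst] < tolerance:
        break
    selected.append(worst)           # greedily add the worst-predicted point

print(f"kept {len(selected)} of {len(grid)} grid points, "
      f"worst relative error {rel_err.max():.4f}")
```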

  16. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    Directory of Open Access Journals (Sweden)

    Tan Yuan

    2017-01-01

    Full Text Available As a fundamental material property, the particle-particle friction coefficient is usually calculated from the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of the irregular particle shape and the varying particle size distribution, calculating a single representative angle for such a heap is neither straightforward nor decisive. In previous studies only one section of the uneven slope is usually chosen, although standard methods for defining a representative section are scarce. We therefore present an efficient and reliable method based on 3D scanning, which digitizes the surface of the heap and generates a point cloud of it. Two tangential lines of any selected section are then calculated by linear least-squares regression (LLSR), from which the left and right angles of repose of the pile are derived. Next, a number of sections are selected stochastically and the calculation is repeated for each, yielding a sample of angles that is plotted in Cartesian coordinates as a scatter diagram. Different samples are acquired through different selections of sections, and the reliability of the proposed method is verified by analyzing the similarities and differences between these samples. These intermediate results provide a realistic criterion for reducing the deviation between experiment and simulation that arises from the random selection of a single angle, and they will be compared with simulation results in future work.
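    A simple sketch of the per-section angle calculation described above follows: one vertical cross-section of the heap is taken as (x, z) points, the two flanks are fitted with linear least-squares regression, and the slopes are converted into left and right angles of repose. The synthetic section stands in for a slice of a real scanned point cloud.

```python
# Per-section angle of repose via linear least-squares fits of the two flanks.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cross-section of a heap with ~30 degree flanks plus scan noise.
x = np.linspace(-1.0, 1.0, 201)
z = np.maximum(0.0, 0.6 - np.tan(np.radians(30)) * np.abs(x))
z += 0.005 * rng.standard_normal(x.shape)

apex = x[np.argmax(z)]
left, right = x < apex, x >= apex

slope_left = np.polyfit(x[left], z[left], 1)[0]      # tangential line, left flank
slope_right = np.polyfit(x[right], z[right], 1)[0]   # tangential line, right flank

print(f"left angle of repose:  {np.degrees(np.arctan(abs(slope_left))):.1f} deg")
print(f"right angle of repose: {np.degrees(np.arctan(abs(slope_right))):.1f} deg")
```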

  17. Effective sampling range of food-based attractants for female Anastrepha suspensa (Diptera: Tephritidae).

    Science.gov (United States)

    Kendra, Paul E; Epsky, Nancy D; Heath, Robert R

    2010-04-01

    Release-recapture studies were conducted with both feral and sterile females of the Caribbean fruit fly, Anastrepha suspensa (Loew) (Diptera: Tephritidae), to determine sampling range for a liquid protein bait (torula yeast/borax) and for a two-component synthetic lure (ammonium acetate and putrescine). Tests were done in a guava, Psidium guajava L., grove and involved releasing flies at a central point and recording the numbers captured after 7 h and 1, 2, 3, and 6 d in an array of 25 Multilure traps located 9-46 m from the release point. In all tests, highest rate of recapture occurred within the first day of release, so estimations of sampling range were based on a 24-h period. Trap distances were grouped into four categories (30 m from release point) and relative trapping efficiency (percentage of capture) was determined for each distance group. Effective sampling range was defined as the maximum distance at w