Monitoring oil persistence on beaches: SCAT versus stratified random sampling designs
International Nuclear Information System (INIS)
Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.
2003-01-01
In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not offer estimates of contaminated beach area or stranded oil volumes. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters affect statistical outcomes. Power analyses based on the study results indicate that optimum power is achieved when unnecessary stratification is avoided. It was emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling.
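The quantities SCAT cannot supply, oiled area or volume with a quantified uncertainty, are exactly what a stratified design yields. A minimal sketch of the stratified estimator of a population total and its variance, using invented beach-segment data (stratum sizes and oiled areas below are hypothetical, not the study's):

```python
import math

def stratified_total(strata):
    """Estimate a population total and its variance from a stratified
    random sample; each stratum supplies N (stratum size) and the list
    of sampled values y."""
    total, var = 0.0, 0.0
    for s in strata:
        N, y = s["N"], s["y"]
        n = len(y)
        mean = sum(y) / n
        s2 = sum((v - mean) ** 2 for v in y) / (n - 1)
        total += N * mean
        var += N ** 2 * (1 - n / N) * s2 / n  # finite-population correction
    return total, var

# hypothetical beach segments: oiled area (m^2) per sampled segment
strata = [
    {"N": 120, "y": [5.0, 0.0, 12.0, 3.0]},     # heavily oiled stratum
    {"N": 400, "y": [0.0, 1.0, 0.0, 0.0, 2.0]}, # lightly oiled stratum
]
total, var = stratified_total(strata)
print(round(total, 1), round(math.sqrt(var), 1))
```

The standard error produced alongside the total is what allows the statistical comparison of oiling between years that the abstract calls for.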
International Nuclear Information System (INIS)
Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.
1993-01-01
Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither the magnitude nor the direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimation are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
International Nuclear Information System (INIS)
Amendola, A.; Astolfi, M.; Lisanti, B.
1983-01-01
The report describes how to use the codes MUP (Monte Carlo Uncertainty Propagation), for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design), for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design), for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
International Nuclear Information System (INIS)
Makepeace, C.E.; Horvath, F.J.; Stocker, H.
1981-11-01
The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information on the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of collected data, one can determine the individual exposures and the mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and collective exposures
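Unbiased selection within such a three-way stratification can be sketched as below; the locations, occupations, shifts and roster are all hypothetical placeholders, not the plan's actual strata:

```python
import itertools
import random

random.seed(42)

# hypothetical strata: every combination of location, occupation and shift
locations = ["stope A", "stope B", "haulage drift"]
occupations = ["driller", "mucker", "timberman"]
shifts = ["day", "night"]

# hypothetical roster: 6 miners per (location, occupation, shift) cell
roster = {cell: [f"miner-{i}" for i in range(6)]
          for cell in itertools.product(locations, occupations, shifts)}

# random sampling within each stratum: 2 miners per cell, chosen without bias
plan = {cell: random.sample(miners, 2) for cell, miners in roster.items()}

print(len(plan), sum(len(v) for v in plan.values()))  # 18 cells, 36 samples
```

Because every member of every cell has the same selection probability, the per-stratum means can be combined into an unbiased estimate of the overall exposure distribution.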
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
International Nuclear Information System (INIS)
Makepeace, C.E.
1981-01-01
Sampling strategies for the monitoring of deleterious agents present in uranium mine air in underground and surface mining areas are described. These methods are designed to prevent overexposure of the lining of the respiratory system of uranium miners to ionizing radiation from radon and radon daughters, and whole body overexposure to external gamma radiation. A detailed description is provided of stratified random sampling monitoring methodology for obtaining baseline data to be used as a reference for subsequent compliance assessment
Bayesian stratified sampling to assess corpus utility
Energy Technology Data Exchange (ETDEWEB)
Hochberg, J.; Scovel, C.; Thomas, T.; Hall, S.
1998-12-01
This paper describes a method for asking statistical questions about a large text corpus. The authors exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" They estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Bayesian analysis and stratified sampling are used to reduce the sampling uncertainty of the estimate from over 3,100 documents to fewer than 1,000. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
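The stratified Bayesian estimate can be sketched with per-stratum Beta posteriors combined by Monte Carlo. The strata sizes and sample outcomes below are invented for illustration, not the paper's data:

```python
import random

random.seed(0)

# hypothetical strata of a 45,820-document corpus, with sample outcomes:
# (stratum size N, documents sampled n, documents judged "real" k)
strata = [(30000, 120, 96), (10820, 50, 20), (5000, 30, 27)]

def posterior_draw():
    # Beta(1+k, 1+n-k) posterior per stratum under a uniform prior,
    # combined into one posterior draw of the total count of real documents
    return sum(N * random.betavariate(1 + k, 1 + n - k)
               for N, n, k in strata)

draws = sorted(posterior_draw() for _ in range(10000))
lo, hi = draws[250], draws[9750]  # central 95% credible interval
print(round(lo), round(hi))
```

Because each stratum is estimated from its own sample, homogeneous strata shrink the posterior interval relative to simple random sampling, which is the uncertainty reduction the abstract reports.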
Stratified sampling design based on data mining.
Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung
2013-09-01
The objective was to explore classification rules, derived using data mining methodologies, for defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
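The performance measure used above, the share of variance in an outcome explained by a stratification, can be computed directly as one minus the ratio of within-stratum to total sum of squares. A minimal sketch with invented productivity figures (not the study's data):

```python
def variance_explained(groups):
    """Share of total variance in an outcome explained by a stratification:
    1 - (within-stratum sum of squares / total sum of squares)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    sst = sum((v - grand) ** 2 for v in all_vals)
    ssw = 0.0
    for g in groups:
        m = sum(g) / len(g)
        ssw += sum((v - m) ** 2 for v in g)
    return 1 - ssw / sst

# hypothetical annual productivity changes under two candidate stratifications
by_mined_rules = [[1.0, 1.2, 0.9], [3.1, 2.8, 3.0], [5.2, 4.9, 5.1]]
by_location = [[1.0, 3.1, 5.2], [1.2, 2.8, 4.9], [0.9, 3.0, 5.1]]
print(round(variance_explained(by_mined_rules), 2))  # homogeneous strata: near 1
print(round(variance_explained(by_location), 2))     # mixed strata: near 0
```

A stratification that groups similar providers leaves little within-stratum variance, which is exactly what makes stratified sampling efficient.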
Monte Carlo stratified source-sampling
International Nuclear Information System (INIS)
Blomquist, R.N.; Gelbard, E.M.
1997-01-01
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress
Directory of Open Access Journals (Sweden)
Atta Ullah
2014-01-01
In practical use of the stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed on each selected unit in a sample. In many real-life situations, a linear cost function of the sample size n_h is not a good approximation to the actual cost of a sample survey when the travelling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
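A much-simplified version of cost-constrained allocation (one characteristic, travel cost folded into a per-unit cost c_h, not the paper's multiobjective formulation) follows the classical rule n_h proportional to N_h*S_h/sqrt(c_h), scaled to the budget. All stratum figures below are invented:

```python
import math

def allocate(strata, budget):
    """Cost-optimal allocation: n_h proportional to N_h*S_h/sqrt(c_h),
    scaled so that the total cost sum(c_h*n_h) matches the budget
    (rounding may shift the cost slightly)."""
    weights = [N * S / math.sqrt(c) for N, S, c in strata]
    scale = budget / sum(c * w for (N, S, c), w in zip(strata, weights))
    return [max(2, round(w * scale)) for w in weights]  # >=2 for variance est.

# hypothetical strata: (size N_h, std dev S_h, per-unit cost c_h incl. travel)
strata = [(500, 4.0, 1.0), (300, 9.0, 4.0), (200, 2.0, 1.0)]
n = allocate(strata, budget=100)
print(n, sum(c * x for (_, _, c), x in zip(strata, n)))
```

The integer, multi-characteristic version the paper treats replaces this closed form with a goal-programming search, but the trade-off it balances is the same: expensive-to-reach strata get proportionally fewer units.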
Stratified source-sampling techniques for Monte Carlo eigenvalue analysis
International Nuclear Information System (INIS)
Mohamed, A.
1998-01-01
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different "Eigenvalue of the World" configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results
Random forcing of geostrophic motion in rotating stratified turbulence
Waite, Michael L.
2017-12-01
Random forcing of geostrophic motion is a common approach in idealized simulations of rotating stratified turbulence. Such forcing represents the injection of energy into large-scale balanced motion, and the resulting breakdown of quasi-geostrophic turbulence into inertia-gravity waves and stratified turbulence can shed light on the turbulent cascade processes of the atmospheric mesoscale. White noise forcing is commonly employed, which excites all frequencies equally, including frequencies much higher than the natural frequencies of large-scale vortices. In this paper, the effects of these high frequencies in the forcing are investigated. Geostrophic motion is randomly forced with red noise over a range of decorrelation time scales τ, from a few time steps to twice the large-scale vortex time scale. It is found that short τ (i.e., nearly white noise) results in about 46% more gravity wave energy than longer τ, despite the fact that waves are not directly forced. We argue that this effect is due to wave-vortex interactions, through which the high frequencies in the forcing are able to excite waves at their natural frequencies. It is concluded that white noise forcing should be avoided, even if it is only applied to the geostrophic motion, when a careful investigation of spontaneous wave generation is needed.
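Red noise with a prescribed decorrelation time τ is conventionally generated as an AR(1) process, which reduces to (nearly) white noise as τ approaches the time step. A minimal sketch, with all parameters invented for illustration rather than taken from the paper's simulations:

```python
import math
import random

random.seed(7)

def red_noise(n_steps, dt, tau, sigma=1.0):
    """AR(1) (Ornstein-Uhlenbeck-like) noise with decorrelation time tau;
    tau of a few time steps approaches white noise."""
    a = math.exp(-dt / tau)
    b = sigma * math.sqrt(1 - a * a)  # keeps the stationary variance sigma^2
    x, out = 0.0, []
    for _ in range(n_steps):
        x = a * x + b * random.gauss(0, 1)
        out.append(x)
    return out

f = red_noise(n_steps=10000, dt=1.0, tau=50.0)
# sample lag-1 autocorrelation should be close to exp(-1/50) ~ 0.98
m = sum(f) / len(f)
num = sum((f[i] - m) * (f[i + 1] - m) for i in range(len(f) - 1))
den = sum((v - m) ** 2 for v in f)
print(round(num / den, 2))
```

Setting τ at or above the large-scale vortex time scale, as the paper recommends, removes the spurious high-frequency forcing power that would otherwise project onto inertia-gravity waves.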
Prototypic Features of Loneliness in a Stratified Sample of Adolescents
Directory of Open Access Journals (Sweden)
Mathias Lasgaard
2009-06-01
Dominant theoretical approaches in loneliness research emphasize the value of personality characteristics in explaining loneliness. The present study examines whether dysfunctional social strategies and attributions in lonely adolescents can be explained by personality characteristics. A questionnaire survey was conducted with 379 Danish Grade 8 students (M = 14.1 years, SD = 0.4) from 22 geographically stratified and randomly selected schools. Hierarchical linear regression analysis showed that network orientation, success expectation and avoidance in affiliative situations predicted loneliness independently of personality characteristics, demographics and social desirability. The study indicates that dysfunctional strategies and attributions in affiliative situations are directly related to loneliness in adolescence. These strategies and attributions may preclude lonely adolescents from guidance and intervention. Thus, professionals need to be knowledgeable about prototypic features of loneliness in addition to employing a pro-active approach when assisting adolescents who display prototypic features.
Temporally stratified sampling programs for estimation of fish impingement
International Nuclear Information System (INIS)
Kumar, K.D.; Griffith, J.S.
1977-01-01
Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
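The recursive-binary-partition idea can be sketched in one dimension: split the data at its median, allocate the sample proportionally to each half, and recurse. This is a simplified sketch of the general approach, not the authors' algorithm or its error bounds:

```python
import random

random.seed(3)

def stratified_subsample(points, n):
    """Recursively split the (1-D) input at its median and allocate the
    sample proportionally to each half, so the subsample tracks the
    original distribution (a greedy binary-partition sketch)."""
    if n <= 1 or len(points) <= 2:
        return random.sample(points, min(n, len(points)))
    pts = sorted(points)
    mid = len(pts) // 2
    left, right = pts[:mid], pts[mid:]
    n_left = round(n * len(left) / len(pts))
    return (stratified_subsample(left, n_left)
            + stratified_subsample(right, n - n_left))

data = [random.gauss(0, 1) for _ in range(10000)]
sub = stratified_subsample(data, 100)
print(len(sub))
```

Because each partition receives its proportional share of the sample, quantiles of the subsample stay close to those of the full data, which is the distribution-preservation property the paper formalizes with distances between probabilities.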
Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.
Anisimov, Vladimir V
2011-01-01
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
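The imbalance mechanism is easy to simulate: with block-permuted randomization, every complete block is balanced, so the total imbalance is driven only by the partially filled final block at each centre. A toy Monte Carlo sketch under a Poisson-gamma recruitment model (all rates, block size and centre counts are invented):

```python
import math
import random
import statistics

random.seed(11)

def poisson(lam):
    # simple inverse-transform Poisson sampler (adequate for moderate lam)
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def trial_imbalance(n_centres, block=4):
    """Centre-stratified block-permuted randomization, two arms: full
    blocks are balanced, so imbalance comes only from each centre's
    last, partially filled block."""
    imbalance = 0
    for _ in range(n_centres):
        rate = random.gammavariate(2.0, 1.5)  # Poisson-gamma recruitment rate
        n = poisson(10 * rate)                # patients over 10 time units
        rem = n % block
        tail = random.sample([+1, +1, -1, -1], 4)[:rem]
        imbalance += sum(tail)                # +1/-1 = patient to arm A/B
    return imbalance

imbs = [trial_imbalance(100) for _ in range(400)]
print(round(statistics.pstdev(imbs), 1))
```

With roughly 3,000 patients per simulated trial, the standard deviation of the imbalance is on the order of ten patients, illustrating why the paper finds the resulting loss of power practically negligible.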
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA was performed. Results The method proposed in this study (SOPA) had the minimal absolute error, 0.213 8; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.924 4. Conclusion The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
Independent random sampling methods
Martino, Luca; Míguez, Joaquín
2018-01-01
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
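Short of a SOM, the core goal, a hold-out split whose subsets share the data's distribution, can be sketched with quantile-based stratification: bin records by the target variable and draw the same fraction from every bin. This is a simplified stand-in for the paper's SOM-based Neyman sampling, with invented data:

```python
import random

random.seed(5)

def stratified_split(data, key, n_bins=5, test_frac=0.2):
    """Proportional stratified hold-out split: bin records by `key`
    quantiles and draw the same fraction from every bin, so the train
    and test sets follow the same distribution."""
    data = sorted(data, key=key)
    size = len(data) // n_bins
    train, test = [], []
    for b in range(n_bins):
        lo = b * size
        bin_ = data[lo:lo + size] if b < n_bins - 1 else data[lo:]
        k = round(len(bin_) * test_frac)
        chosen = set(random.sample(range(len(bin_)), k))
        for i, rec in enumerate(bin_):
            (test if i in chosen else train).append(rec)
    return train, test

# hypothetical (input, output) pairs for a function-approximation task
xs = [(x, x ** 2) for x in (random.uniform(-1, 1) for _ in range(1000))]
train, test = stratified_split(xs, key=lambda r: r[1])
print(len(train), len(test))
```

Relative to a purely random split, this guarantees that every region of the output range is represented in both subsets, reducing the split-to-split variability in model performance the paper documents.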
Stereo imaging and random array stratified imaging for cargo radiation inspecting
International Nuclear Information System (INIS)
Wang Jingjin; Zeng Yu
2003-01-01
This paper presents stereo imaging and random-array stratified imaging for cargo-container radiation inspection. By using a dual-line vertical detector array scan, a stereo image of the inspected cargo can be obtained and viewed as a virtual-reality scene. The random detector array has only one row of detectors, but they are distributed randomly over a certain horizontal dimension. Scanning a cargo container with this random array detector yields a 'defocused' image. By using 'anti-random focusing', one layer of the image can be focused against the background of the defocused images from all other layers. A stratified X-ray image of overlapped bike wheels is presented
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered at all during the first phase or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using
Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems
Czech Academy of Sciences Publication Activity Database
Šmíd, Martin
2012-01-01
Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf
Directory of Open Access Journals (Sweden)
Øren Anita
2008-12-01
Background Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Methods Men and women 16–74 years old, randomly selected from the Norwegian national population database, received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Results Overall, 2.0% of the study population was defined as CSOs. Young age, female gender and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Conclusion Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members.
Braithwaite, Jeffrey; Greenfield, David; Westbrook, Johanna; Pawsey, Marjorie; Westbrook, Mary; Gibberd, Robert; Naylor, Justine; Nathan, Sally; Robinson, Maureen; Runciman, Bill; Jackson, Margaret; Travaglia, Joanne; Johnston, Brian; Yen, Desmond; McDonald, Heather; Low, Lena; Redman, Sally; Johnson, Betty; Corbett, Angus; Hennessy, Darlene; Clark, John; Lancaster, Judie
2010-02-01
Despite the widespread use of accreditation in many countries, and prevailing beliefs that accreditation is associated with variables contributing to clinical care and organisational outcomes, little systematic research has been conducted to examine its validity as a predictor of healthcare performance. The objective was to determine whether accreditation performance is associated with self-reported clinical performance and independent ratings of four aspects of organisational performance, using independent blinded assessment of these variables in a random, stratified sample of health service organisations. The setting was acute care: large, medium and small health-service organisations in Australia. Study participants were nineteen health service organisations employing 16,448 staff, treating 321,289 inpatients and providing 1,971,087 non-inpatient services annually, representing approximately 5% of the Australian acute care health system. The main outcome measures were correlations of accreditation performance with organisational culture, organisational climate, consumer involvement, leadership and clinical performance. Accreditation performance was significantly positively correlated with organisational culture (rho=0.618, p=0.005) and leadership (rho=0.616, p=0.005). There was a trend between accreditation and clinical performance (rho=0.450, p=0.080). Accreditation was unrelated to organisational climate (rho=0.378, p=0.110) and consumer involvement (rho=0.215, p=0.377). Accreditation results predict leadership behaviours and cultural characteristics of healthcare organisations, but not organisational climate or consumer participation; a positive trend between accreditation and clinical performance is noted.
BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal
International Nuclear Information System (INIS)
Sagar, B.
1989-01-01
1 - Description of program or function: Random samples for different distribution types are generated. The distribution types required for performance-assessment modelling of geologic nuclear waste disposal are provided: uniform; log-uniform (base 10 or natural); normal; lognormal (base 10 or natural); exponential; Bernoulli; and user-defined continuous distributions. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions transforms the uniform distribution into the other distributions. Stratified, rather than random, sampling can be chosen. Truncation limits can be specified for many distributions whose usual definition has infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
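The pipeline the abstract describes, a linear congruential generator feeding transforms to other distributions, can be sketched as follows. This is a toy illustration using Park-Miller constants, not the BWIP code itself:

```python
import math

class LCG:
    """Minimal multiplicative linear congruential generator
    (Park-Miller constants; uniforms lie strictly in (0, 1))."""
    def __init__(self, seed=12345):
        self.state = seed
    def uniform(self):
        self.state = (16807 * self.state) % 2147483647
        return self.state / 2147483647

def exponential(rng, rate):
    # inverse transform: F^-1(u) = -ln(1 - u) / rate
    return -math.log(1.0 - rng.uniform()) / rate

def normal(rng):
    # Box-Muller transform of two uniforms
    u1, u2 = rng.uniform(), rng.uniform()
    return math.sqrt(-2 * math.log(u1)) * math.cos(2 * math.pi * u2)

def bernoulli(rng, p):
    return 1 if rng.uniform() < p else 0

rng = LCG()
exps = [exponential(rng, rate=2.0) for _ in range(20000)]
print(round(sum(exps) / len(exps), 2))  # sample mean should be near 1/rate
```

Truncated variants can be built on the same transforms by rescaling the uniform draw to the CDF interval of the truncation limits before inverting.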
Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth
2017-02-16
With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs well and has the advantage of financial and temporal efficiency when auditing a large city.
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across...
Brachytherapy dose-volume histogram computations using optimized stratified sampling methods
International Nuclear Information System (INIS)
Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.
2002-01-01
A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used in anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. Quantities such as the conformity index (COIN) and COIN integrals are derived from the DVHs. This is achieved by using piecewise-uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary, which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points.
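The core idea, placing more sampling points where the dose varies most, resembles Neyman allocation in survey sampling. A minimal Python sketch with hypothetical region sizes and variabilities (not the authors' algorithm or data):

```python
def allocate_points(region_sizes, region_stddevs, total_points):
    """Neyman-style allocation: points proportional to size * stddev.
    A simplified stand-in for a gradient/variance survey of the dose."""
    weights = [n * s for n, s in zip(region_sizes, region_stddevs)]
    total_w = sum(weights)
    return [max(1, round(total_points * w / total_w)) for w in weights]

# Hypothetical example: three regions (PTV, near-field tissue, far body tissue).
sizes = [10.0, 50.0, 400.0]      # relative volumes (made up for illustration)
stddevs = [8.0, 3.0, 0.2]        # dose variability per region (made up)
alloc = allocate_points(sizes, stddevs, 1000)
```

The large, nearly homogeneous far-field region receives far fewer points than its volume share would suggest, which is exactly how the paper achieves its 5-10x reduction.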
Directory of Open Access Journals (Sweden)
Paula Costa Mosca Macedo
2009-06-01
Full Text Available OBJECTIVE: To evaluate residents' quality of life during the first three years of training and identify its association with sociodemographic-occupational characteristics, leisure time and health habits. METHOD: A cross-sectional study with a random sample of 128 residents stratified by year of training was conducted. The Medical Outcome Study Short Form 36 was administered. Mann-Whitney tests were carried out to compare percentile distributions of the eight quality-of-life domains according to sociodemographic variables, and a multiple linear regression analysis was performed, followed by a validity check of the resulting models. RESULTS: The physical component presented higher quality-of-life medians than the mental component. Comparisons between the three years showed that in almost all domains the quality-of-life scores of second-year residents were higher than those of first-year residents (p < 0.01); for the mental component, third-year residents scored higher than residents in the other years (p < 0.01). Predictors of higher quality of life were being in the second or
A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology
Directory of Open Access Journals (Sweden)
Slutsker Laurence
2008-02-01
Full Text Available Abstract Background Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of an age-eligible child. Results 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than
Properties of the endogenous post-stratified estimator using a random forests model
John Tipton; Jean Opsomer; Gretchen G. Moisen
2012-01-01
Post-stratification is used in survey statistics as a method to improve variance estimates. In traditional post-stratification methods, the variable on which the data is being stratified must be known at the population level. In many cases this is not possible, but it is possible to use a model to predict values using covariates, and then stratify on these predicted...
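The post-stratified estimator itself is simple once predicted stratum labels are available: weight each stratum's sample mean by its population share. A minimal Python sketch with toy data (the labels here are given directly; in the paper they would come from a random forests model applied to population-wide covariates):

```python
from statistics import mean

def post_stratified_mean(sample_y, sample_strata, pop_stratum_counts):
    """Post-stratified estimator of a population mean: weight each
    stratum's sample mean by the stratum's population share.
    Assumes every stratum contains at least one sampled unit."""
    N = sum(pop_stratum_counts.values())
    est = 0.0
    for h, N_h in pop_stratum_counts.items():
        y_h = [y for y, s in zip(sample_y, sample_strata) if s == h]
        est += (N_h / N) * mean(y_h)
    return est

# Toy data: stratum "a" has larger y-values than stratum "b".
y =      [10,  12,  11,   2,   3,   1,   2]
strata = ["a", "a", "a", "b", "b", "b", "b"]
est = post_stratified_mean(y, strata, {"a": 300, "b": 700})
```

The "endogenous" twist studied in the paper is that when the strata themselves are model predictions, the usual variance formulas need re-examination.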
Systematic versus random sampling in stereological studies.
West, Mark J
2012-12-01
The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
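The card-deck contrast above, pick any k cards freely versus pick a random start and then every j-th card, maps directly onto code. A minimal Python sketch (section indices stand in for the cards):

```python
import random

def independent_sample(n_items, k, rng):
    """Independent random sampling: any k items, chosen without
    reference to each other's positions."""
    return sorted(rng.sample(range(n_items), k))

def systematic_sample(n_items, k, rng):
    """Systematic uniform random sampling: a random start within the
    first interval, then every (n_items // k)-th item after it."""
    step = n_items // k
    start = rng.randrange(step)
    return [start + i * step for i in range(k)]

rng = random.Random(0)
ind = independent_sample(120, 10, rng)   # e.g. sections of a specimen
sys_ = systematic_sample(120, 10, rng)
```

Only the start of the systematic sample is random; the equal spacing is what makes it more efficient for non-randomly organized structures, as the abstract argues.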
Employment status, inflation and suicidal behaviour: an analysis of a stratified sample in Italy.
Solano, Paola; Pizzorno, Enrico; Gallina, Anna M; Mattei, Chiara; Gabrielli, Filippo; Kayman, Joshua
2012-09-01
There is abundant empirical evidence of a surplus risk of suicide among the unemployed, although few studies have investigated the influence of economic downturns on suicidal behaviours in an employment-status-stratified sample. We investigated how economic inflation affected suicidal behaviours according to employment status in Italy from 2001 to 2008. Data concerning economically active people were provided by the Italian Institute for Statistical Analysis and by the International Monetary Fund. The association between inflation and completed versus attempted suicide with respect to employment status was investigated in every year and quarter-year of the study time frame. We considered three occupational categories: the employed, the unemployed who were previously employed (PE) and the unemployed who had never worked (NE). The unemployed are at higher suicide risk than the employed. Among the PE, a significant association between inflation and attempted suicide was found, whereas no association was reported for completed suicides. No association with inflation was found for completed or attempted suicides among the employed or the NE. Completed suicide in females is significantly associated with unemployment in every quarter-year. The reported vulnerability to suicidal behaviours among the PE as inflation rises underlines the need for effective support strategies for both genders in times of economic downturns.
A Bayesian Justification for Random Sampling in Sample Survey
Directory of Open Access Journals (Sweden)
Glen Meeden
2012-07-01
Full Text Available In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We will show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.
k-Means: Random Sampling Procedure
Indian Academy of Sciences (India)
k-Means: Random Sampling Procedure. The optimal 1-mean is approximated by the centroid of a random sample (Inaba et al.): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
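The result sketched on this slide is easy to demonstrate empirically: the centroid of a small random sample lands close to the true centroid. A minimal Python sketch (synthetic data; the sample size 50 is an arbitrary illustration of the O(1/ε) bound, not a derived value):

```python
import random

def centroid(points):
    """Coordinate-wise mean of a set of points."""
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

def approx_centroid(points, sample_size, rng):
    """Centroid of a small random sample; per Inaba et al., O(1/eps)
    points give a (1+eps)-approximation of the 1-means cost with
    constant probability."""
    return centroid(rng.sample(points, sample_size))

rng = random.Random(1)
P = [(rng.gauss(5.0, 1.0), rng.gauss(-2.0, 1.0)) for _ in range(10000)]
c_true = centroid(P)          # exact centroid of all 10,000 points
c_hat = approx_centroid(P, 50, rng)   # estimate from 50 points
```

The key point is that the sample size needed depends only on the accuracy ε, not on the number of points in P.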
Directory of Open Access Journals (Sweden)
Tim A. Moore
2016-01-01
Full Text Available DOI: 10.17014/ijog.3.1.29-51. Stratified sampling of coal seams for petrographic analysis using block samples is a viable alternative to standard methods of channel sampling and particulate pellet mounts. Although petrographic analysis of particulate pellets is employed widely, it is both time consuming and does not allow variation within sampling units to be assessed - an important measure in any study, whether it be for paleoenvironmental reconstruction or for obtaining estimates of industrial attributes. Also, samples taken as intact blocks provide additional information, such as texture and botanical affinity, that cannot be gained using particulate pellets. Stratified sampling can be employed on both 'fine'- and 'coarse'-grained coal units. Fine-grained coals are defined as those coal intervals that do not contain vitrain bands greater than approximately 1 mm in thickness (as measured perpendicular to bedding). In fine-grained coal seams, a reasonably sized block sample (with a polished surface area of ~3 cm²) can be taken that encapsulates the macroscopic variability. However, for coarse-grained coals (vitrain bands >1 mm) a different system has to be employed in order to accurately account for the larger particles. Macroscopic point counting of vitrain bands can accurately account for those particles >1 mm within a coal interval. This point counting can be conducted using something as simple as a string on a coal face with marked intervals greater than the largest particle expected to be encountered (although new technologies are being developed to capture this type of information digitally). Comparative analyses of particulate pellets and blocks on the same interval show less than 6% variation between the two sample types when blocks are recalculated to include macroscopic counts of vitrain. Therefore even in coarse-grained coals, stratified sampling can be used effectively and representatively.
Rostami, Maryam; Ramezani Tehrani, Fahimeh; Simbar, Masoumeh; Hosseinpanah, Farhad; Alavi Majd, Hamid
2017-04-07
Although there have been marked improvements in our understanding of vitamin D functions in different diseases, gaps on its role during pregnancy remain. Due to the lack of consensus on the most accurate marker of vitamin D deficiency during pregnancy and the optimal level of 25-hydroxyvitamin D, 25(OH)D, for its definition, vitamin D deficiency assessment during pregnancy is a complicated process. Besides, the optimal protocol for treatment of hypovitaminosis D and its effect on maternal and neonatal outcomes are still unclear. The aim of our study was to estimate the prevalence of vitamin D deficiency in the first trimester of pregnancy and to compare a vitamin D screening strategy with no screening. Also, we intended to compare the effectiveness of various treatment regimens on maternal and neonatal outcomes in Masjed-Soleyman and Shushtar cities of Khuzestan province, Iran. This was a two-phase study. First, a population-based cross-sectional study was conducted, recruiting 1600 and 900 first trimester pregnant women from health centers of Masjed-Soleyman and Shushtar, respectively, using stratified multistage cluster sampling with probability proportional to size (PPS) method. Second, to assess the effect of screening strategy on maternal and neonatal outcomes, Masjed-Soleyman participants were assigned to a screening program versus Shushtar participants who became the nonscreening arm. Within the framework of the screening regimen, an 8-arm blind randomized clinical trial was undertaken to compare the effects of various treatment protocols. A total of 800 pregnant women with vitamin D deficiency were selected using simple random sampling from the 1600 individuals of Masjed-Soleyman as interventional groups. Serum concentrations of 25(OH)D were classified as (1) severe deficiency, (2) moderate deficiency, or (3) normal (>20 ng/ml). Those with severe and moderate deficiency were randomly divided into 4 subgroups and received vitamin D3 based on protocol and were followed until delivery. Data was analyzed
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across American Samoa in 2015 as a part of...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across...
Directory of Open Access Journals (Sweden)
Vongsack Latsamy
2009-07-01
Full Text Available Abstract Background Counterfeit oral artesunate has been a major public health problem in mainland SE Asia, impeding malaria control. A countrywide stratified random survey was performed to determine the availability and quality of oral artesunate in pharmacies and outlets (shops selling medicines) in the Lao PDR (Laos). Methods In 2003, 'mystery' shoppers were asked to buy artesunate tablets from 180 outlets in 12 of the 18 Lao provinces. Outlets were selected using stratified random sampling by investigators not involved in sampling. Samples were analysed for packaging characteristics, by the Fast Red Dye test, high-performance liquid chromatography (HPLC), mass spectrometry (MS), X-ray diffractometry and pollen analysis. Results Of 180 outlets sampled, 25 (13.9%) sold oral artesunate. Outlets selling artesunate were more commonly found in the more malarious southern Laos. Of the 25 outlets, 22 (88%; 95% CI 68-97%) sold counterfeit artesunate, as defined by packaging and chemistry. No artesunate was detected in the counterfeits by any of the chemical analysis techniques, and analysis of the packaging demonstrated seven different counterfeit types. There was complete agreement between the Fast Red dye test, HPLC and MS analysis. A wide variety of wrong active ingredients were found by MS. Of great concern, 4/27 (14.8%) fakes contained detectable amounts of artemisinin (0.26-115.7 mg/tablet). Conclusion This random survey confirms results from previous convenience surveys that counterfeit artesunate is a severe public health problem. The presence of artemisinin in counterfeits may encourage malaria resistance to artemisinin derivatives. With increasing accessibility of artemisinin-derivative combination therapy (ACT) in Laos, the removal of artesunate monotherapy from pharmacies may be an effective intervention.
Energy Technology Data Exchange (ETDEWEB)
Jung Yu, Dae [School of Space Research, Kyung Hee University, Yongin 446-701 (Korea, Republic of); Kim, Kihong [Department of Energy Systems Research, Ajou University, Suwon 443-749 (Korea, Republic of)
2013-12-15
We study the effects of a random spatial variation of the plasma density on the mode conversion of electromagnetic waves into electrostatic oscillations in cold, unmagnetized, and stratified plasmas. Using the invariant imbedding method, we calculate precisely the electromagnetic field distribution and the mode conversion coefficient, which is defined to be the fraction of the incident wave power converted into electrostatic oscillations, for the configuration where a numerically generated random density variation is added to the background linear density profile. We repeat similar calculations for a large number of random configurations and take an average of the results. We obtain a peculiar nonmonotonic dependence of the mode conversion coefficient on the strength of randomness. As the disorder increases from zero, the maximum value of the mode conversion coefficient decreases initially, then increases to a maximum, and finally decreases towards zero. The range of the incident angle in which mode conversion occurs increases monotonically as the disorder increases. We present numerical results suggesting that the decrease of mode conversion mainly results from the increased reflection due to the Anderson localization effect originating from disorder, whereas the increase of mode conversion of the intermediate disorder regime comes from the appearance of many resonance points and the enhanced tunneling between the resonance points and the cutoff point. We also find a very large local enhancement of the magnetic field intensity for particular random configurations. In order to obtain high mode conversion efficiency, it is desirable to restrict the randomness close to the resonance region.
Interactive Fuzzy Goal Programming approach in multi-response stratified sample surveys
Directory of Open Access Journals (Sweden)
Gupta Neha
2016-01-01
Full Text Available In this paper, we apply an Interactive Fuzzy Goal Programming (IFGP) approach with linear, exponential and hyperbolic membership functions, which focuses on maximizing the minimum membership values, to determine the preferred compromise solution for the multi-response stratified survey problem, formulated as a Multi-Objective Non-Linear Programming Problem (MONLPP). By linearizing the nonlinear objective functions at their individual optimum solutions, the problem is approximated by an Integer Linear Programming Problem (ILPP). A numerical example based on real data is given, and comparisons with some existing allocations, viz. Cochran's, Chatterjee's and Khowaja's compromise allocations, are made to demonstrate the utility of the approach.
Sampling problems for randomly broken sticks
Energy Technology Data Exchange (ETDEWEB)
Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)
2003-04-11
Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending sizes, let S_{m:n} be the size of the mth smallest fragment. Assume that some observer is sampling such populations as follows: drop at random k points (the sample size) onto this stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) what is the sample size if the sampling is carried out until the first visit of the smallest fragment (size S_{1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments being discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights (the sequence of their weights in their order of appearance) is needed and studied.
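The setup above, break a unit stick at random, then drop points on it until every fragment has been hit, is straightforward to simulate. A minimal Python sketch of the coupon-collector question (problem 2), using the uniform-spacings construction for the partition:

```python
import random

def break_stick(n, rng):
    """Partition [0,1] into n fragments via n-1 uniform cut points
    (the uniform-spacings construction)."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    edges = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(edges, edges[1:])]

def samples_until_all_visited(weights, rng):
    """Drop uniform points until every fragment has been visited at
    least once; return the number of points needed."""
    edges, acc = [], 0.0
    for w in weights:
        acc += w
        edges.append(acc)
    edges[-1] = 1.0  # guard against floating-point shortfall
    seen, k = set(), 0
    while len(seen) < len(weights):
        u = rng.random()
        k += 1
        seen.add(next(i for i, e in enumerate(edges) if u <= e))
    return k

rng = random.Random(7)
w = break_stick(5, rng)
k = samples_until_all_visited(w, rng)
```

Small fragments dominate the waiting time, which is why problem 1 (first visit of the smallest fragment) is the interesting boundary case.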
Padilla, Alberto
2009-01-01
Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
Hancock, Bruno C; Ketterhagen, William R
2011-10-14
Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best-practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency RSD. The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help to guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Olive D. Buhule
2014-10-01
Full Text Available Background: Batch effects in DNA methylation microarray experiments can lead to spurious results if not properly handled during the plating of samples. Methods: Two pilot studies examining the association of DNA methylation patterns across the genome with obesity in Samoan men were investigated for chip- and row-specific batch effects. For each study, the DNA of 46 obese men and 46 lean men were assayed using Illumina's Infinium HumanMethylation450 BeadChip. In the first study (Sample One), samples from obese and lean subjects were examined on separate chips. In the second study (Sample Two), the samples were balanced on the chips by lean/obese status, age group, and census region. We used the methylumi, watermelon, and limma R packages, as well as ComBat, to analyze the data. Principal component analysis and linear regression were respectively employed to identify the top principal components and to test for their association with the batches and lean/obese status. To identify differentially methylated positions (DMPs) between obese and lean males at each locus, we used a moderated t-test. Results: Chip effects were effectively removed from Sample Two but not Sample One. In addition, dramatic differences were observed between the two sets of DMP results. After removing batch effects with ComBat, Sample One had 94,191 probes differentially methylated at a q-value threshold of 0.05 while Sample Two had zero differentially methylated probes. The disparate results from Sample One and Sample Two likely arise due to the confounding of lean/obese status with chip and row batch effects. Conclusion: Even the best possible statistical adjustments for batch effects may not completely remove them. Proper study design is vital for guarding against spurious findings due to such effects.
Sampling high-altitude and stratified mating flights of red imported fire ant.
Fritz, Gary N; Fritz, Ann H; Vander Meer, Robert K
2011-05-01
With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens and males during mating flights at altitudinal intervals reaching as high as ~140 m. Our trapping system uses an electric winch and a 1.2-m spindle bolted to a swiveling platform. The winch dispenses up to 183 m of Kevlar-core, nylon rope and the spindle stores 10 panels (0.9 by 4.6 m each) of nylon tulle impregnated with Tangle-Trap. The panels can be attached to the rope at various intervals and hoisted into the air by using a 3-m-diameter, helium-filled balloon. Raising or lowering all 10 panels takes approximately 15-20 min. This trap also should be useful for altitudinal sampling of other insects of medical importance.
Directory of Open Access Journals (Sweden)
Khewal Bhupendra Kesur
2013-01-01
Full Text Available This paper examines the application of Latin Hypercube Sampling (LHS) and Antithetic Variables (AV) to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AV allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined: one where stratification is applied to headways and routing decisions of individual vehicles, and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AV, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators, as fewer simulations are required to achieve a fixed level of precision, with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
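The variance-reduction mechanism behind LHS, one draw per equal-probability stratum of each input distribution, can be demonstrated on a toy integrand rather than a traffic simulator. A minimal Python sketch (the function f(u) = u² and the run counts are illustrative choices, not from the paper):

```python
import random
import statistics

def lhs_uniforms(n, rng):
    """1-D Latin hypercube sample: one point per equal-probability
    stratum, in shuffled order."""
    points = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(points)
    return points

def estimate_mean(f, draws):
    return sum(f(u) for u in draws) / len(draws)

# Compare estimator spread over repeated runs for f(u) = u**2 (true mean 1/3).
rng = random.Random(3)
f = lambda u: u * u
iid_runs = [estimate_mean(f, [rng.random() for _ in range(50)]) for _ in range(200)]
lhs_runs = [estimate_mean(f, lhs_uniforms(50, rng)) for _ in range(200)]
iid_sd = statistics.pstdev(iid_runs)   # spread under independent sampling
lhs_sd = statistics.pstdev(lhs_runs)   # spread under LHS
```

For a smooth monotone integrand like this the LHS estimator's standard error is dramatically smaller, which is the same effect the paper exploits to cut simulator run counts.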
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose an algorithm which generates random sampling patterns dedicated for event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
Acceptance sampling using judgmental and randomly selected samples
Energy Technology Data Exchange (ETDEWEB)
Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl
2010-09-01
We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
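One of the "simpler Bayesian formulations" that the abstract says is a special case of the model can be written in closed form: a single group, a uniform prior on the per-item acceptability probability, and all observed samples acceptable. A minimal Python sketch (an illustrative special case, not the paper's two-group judgmental model):

```python
def prob_theta_exceeds(n_acceptable, threshold):
    """With a uniform Beta(1,1) prior on the per-item acceptability
    probability theta, observing n acceptable items and none
    unacceptable gives theta ~ Beta(n+1, 1), so
    P(theta > t | data) = 1 - t**(n+1)."""
    return 1.0 - threshold ** (n_acceptable + 1)

def samples_needed(threshold, confidence):
    """Smallest number of all-acceptable samples n such that
    P(theta > threshold | data) >= confidence."""
    n = 0
    while prob_theta_exceeds(n, threshold) < confidence:
        n += 1
    return n

# How many all-acceptable samples before we are 95% sure that at
# least 95% of items are acceptable?
n_req = samples_needed(0.95, 0.95)
p_after = prob_theta_exceeds(n_req, 0.95)
```

The paper's contribution is to split the population into judgmentally sampled high-risk and randomly sampled low-risk groups, which changes how this posterior probability accumulates.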
A random sampling procedure for anisotropic distributions
International Nuclear Information System (INIS)
Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.
1975-01-01
A procedure is described for sampling the scattering angle of neutrons as per specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb and these elements are of interest in dosimetry and shielding. (author)
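The sampling scheme above can be sketched directly: the cosine of the scattering angle is evaluated as a double Legendre series in the rescaled incident energy and a uniform random number. The coefficient table and energy range below are placeholders; the paper tabulates the real, element-specific coefficients.

```python
import numpy as np
from numpy.polynomial.legendre import legval2d

# Hypothetical low-order coefficient table c[i, j] for illustration only;
# the paper gives element-specific coefficients for C, N, O, Si, Ca, Fe, Pb.
c = np.array([[0.1, 0.3],
              [0.2, 0.1]])

def sample_mu(energy_mev, xi, e_min=0.0, e_max=14.0):
    """Cosine of the scattering angle as a double Legendre series in the
    rescaled incident energy and a uniform random number xi in [0, 1]."""
    e_hat = 2.0 * (energy_mev - e_min) / (e_max - e_min) - 1.0  # map E to [-1, 1]
    mu = legval2d(e_hat, 2.0 * xi - 1.0, c)                     # map xi to [-1, 1]
    return float(np.clip(mu, -1.0, 1.0))                        # keep a valid cosine

rng = np.random.default_rng(0)
mus = [sample_mu(2.0, rng.uniform()) for _ in range(100)]
assert all(-1.0 <= m <= 1.0 for m in mus)
```

Evaluating the fitted series is far cheaper than inverting the angular distribution numerically at every collision, which is the appeal of this representation in transport codes.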
Occupational position and its relation to mental distress in a random sample of Danish residents
DEFF Research Database (Denmark)
Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D
2010-01-01
PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self-employed ...
Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N
2016-12-01
Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. Our aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history, and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128) with one sibling or offspring; (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more sibs/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). In the initial sample, there were 19 (14.8%) individuals in the high-risk group, 75 (58.6%) in the moderate-risk group, and 34 (26.6%) in the above-average-risk group. In the validation sample, risks were distributed as: high (45%), moderate (38%) and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed stratification of risk may be easier and more realistic for family members. Copyright © 2016. Published by Elsevier B.V.
Moreira, Inês C; Ventura, Sandra Rua; Ramos, Isabel; Rodrigues, Pedro Pereira
2015-01-05
Mammography is considered the best imaging technique for breast cancer screening, and the radiographer plays an important role in its performance. Therefore, continuing education is critical to improving the performance of these professionals and thus providing better health care services. Our goal was to develop an e-learning course on breast imaging for radiographers, assessing its efficacy, effectiveness, and user satisfaction. A stratified randomized controlled trial was performed with radiographers and radiology students who already had mammography training, using pre- and post-knowledge tests, and satisfaction questionnaires. The primary outcome was the improvement in test results (percentage of correct answers), using intention-to-treat and per-protocol analysis. A total of 54 participants were assigned to the intervention (20 students plus 34 radiographers) with 53 controls (19+34). The intervention was completed by 40 participants (11+29), with 4 (2+2) discontinued interventions, and 10 (7+3) lost to follow-up. Differences in the primary outcome were found between intervention and control: 21 versus 4 percentage points (pp), P<.001. The subgroup analysis showed a strong effect in radiographers (23 pp vs 4 pp; P=.004) but was unclear in students (18 pp vs 5 pp; P=.098). Nonetheless, differences in students' posttest results were found (88% vs 63%; P=.003), which were absent in pretest (63% vs 63%; P=.106). The per-protocol analysis showed a higher effect (26 pp vs 2 pp; P<.001). The developed e-learning course is effective, especially for radiographers, which highlights the need for continuing education.
Lee, Kwang Jin; Kim, Kihong
2011-10-10
We study theoretically the propagation and the Anderson localization of p-polarized electromagnetic waves incident obliquely on randomly stratified dielectric media with weak uncorrelated Gaussian disorder. Using the invariant imbedding method, we calculate the localization length and the disorder-averaged transmittance in a numerically precise manner. We find that the localization length takes an extremely large maximum value at some critical incident angle, which we call the generalized Brewster angle. The disorder-averaged transmittance also takes a maximum very close to one at the same incident angle. Even in the presence of an arbitrarily weak disorder, the generalized Brewster angle is found to be substantially different from the ordinary Brewster angle in uniform media. It is a rapidly increasing function of the average dielectric permittivity and approaches 90° when the average relative dielectric permittivity is slightly larger than two. We make a remarkable observation that the dependence of the generalized Brewster angle on the average dielectric permittivity is universal in the sense that it is independent of the strength of disorder. We also find, surprisingly, that when the average relative dielectric permittivity is less than one and the incident angle is larger than the generalized Brewster angle, both the localization length and the disorder-averaged transmittance increase substantially as the strength of disorder increases in a wide range of the disorder parameter. In other words, the Anderson localization of incident p waves can be weakened by disorder in a certain parameter regime.
Jing, Limei; Chen, Ru; Jing, Lisa; Qiao, Yun; Lou, Jiquan; Xu, Jing; Wang, Junwei; Chen, Wen; Sun, Xiaoming
2017-07-01
Basic Medical Insurance (BMI) has changed remarkably over time in China because of health reforms that aim to achieve universal coverage and better health care through increased subsidies, reimbursement, and benefits. In this paper, we present the development of BMI, including financing and operation, with a systematic review. Meanwhile, Pudong New Area in Shanghai was chosen as a typical BMI sample for its coverage and management; a stratified cluster sampling survey together with an ordinary logistic regression model was used for the analysis. Enrolee satisfaction with BMI and its associated factors were analysed. We found that the re-enrolment rate superficially improved BMI coverage, nearly achieving universal coverage. However, BMI funds still faced the twin problems of fund deficits and under-compensation of the insured, and a long-term strategy is needed to realize the integration of BMI schemes with more homogeneous coverage and benefits. Moreover, Urban Resident Basic Medical Insurance participants reported a higher rate of dissatisfaction than other participants. The key predictors of the enrolees' satisfaction were awareness of the premium and compensation, affordability of out-of-pocket costs, and the proportion of reimbursement. These results highlight the need for the Chinese government to take measures, such as strengthening BMI fund management, exploring mixed payment methods, and regulating sequential medical orders, to develop an integrated medical insurance system of universal coverage and vertical equity while simultaneously improving enrolee satisfaction. Copyright © 2017 John Wiley & Sons, Ltd.
Systematic random sampling of the comet assay.
McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan
2009-07-01
The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current methods deployed in such an acquisition are expected to be both objectively and randomly obtained. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. To achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used either manually or in automated form. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled variation experiment showed that the SRS technique attained a lower variability than that of the traditional approach. The analysis of a single user with repetition experiment showed greater individual variances while not being detrimental to overall averages. This would suggest that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
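The SRS idea borrowed from stereology (one uniform random start, then a fixed step across the sampling frame) can be sketched as follows; the frame length and number of fields are illustrative, not the authors' microscope settings.

```python
import random

def srs_positions(frame_length, n_samples, seed=None):
    """Systematic random sampling: one uniform random start, then a fixed
    step, giving every position the same inclusion probability."""
    rng = random.Random(seed)
    step = frame_length / n_samples
    start = rng.uniform(0, step)
    return [start + k * step for k in range(n_samples)]

# e.g. 50 fields of view across a 24,000 um sampling frame (illustrative)
pos = srs_positions(frame_length=24000.0, n_samples=50, seed=1)
gaps = {round(b - a, 6) for a, b in zip(pos, pos[1:])}
assert len(gaps) == 1            # constant spacing between fields
assert 0 <= pos[0] < 24000 / 50  # random start inside the first interval
```

Because only the start is random, every slide position has the same inclusion probability, which is what makes the estimator unbiased and reproducible across users.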
PREDOMINANTLY LOW METALLICITIES MEASURED IN A STRATIFIED SAMPLE OF LYMAN LIMIT SYSTEMS AT Z = 3.7
Energy Technology Data Exchange (ETDEWEB)
Glidden, Ana; Cooper, Thomas J.; Simcoe, Robert A. [Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139 (United States); Cooksey, Kathy L. [Department of Physics and Astronomy, University of Hawai‘i at Hilo, 200 West Kāwili Street, Hilo, HI 96720 (United States); O’Meara, John M., E-mail: aglidden@mit.edu, E-mail: tjcooper@mit.edu, E-mail: simcoe@space.mit.edu, E-mail: kcooksey@hawaii.edu, E-mail: jomeara@smcvt.edu [Department of Physics, Saint Michael’s College, One Winooski Park, Colchester, VT 05439 (United States)
2016-12-20
We measured metallicities for 33 z = 3.4–4.2 absorption line systems drawn from a sample of H I-selected Lyman limit systems (LLSs) identified in Sloan Digital Sky Survey (SDSS) quasar spectra and stratified based on metal line features. We obtained higher-resolution spectra with the Keck Echellette Spectrograph and Imager, selecting targets according to our stratification scheme in an effort to fully sample the LLS population metallicity distribution. We established a plausible range of H I column densities and measured column densities (or limits) for ions of carbon, silicon, and aluminum, finding ionization-corrected metallicities or upper limits. Interestingly, our ionization models were better constrained with enhanced α-to-aluminum abundances, with a median abundance ratio of [α/Al] = 0.3. Measured metallicities were generally low, ranging from [M/H] = −3 to −1.68, with even lower metallicities likely for some systems with upper limits. Using survival statistics to incorporate limits, we constructed the cumulative distribution function (CDF) for LLS metallicities. Recent models of galaxy evolution propose that galaxies replenish their gas from the low-metallicity intergalactic medium (IGM) via high-density H I “flows” and eject enriched interstellar gas via outflows. Thus, there has been some expectation that LLSs at the peak of cosmic star formation (z ≈ 3) might have a bimodal metallicity distribution. We modeled our CDF as a mix of two Gaussian distributions, one reflecting the metallicity of the IGM and the other representative of the interstellar medium of star-forming galaxies. This bimodal distribution yielded a poor fit. A single Gaussian distribution better represented the sample with a low mean metallicity of [M/H] ≈ −2.5.
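The model comparison at the end (a two-component Gaussian mixture versus a single Gaussian for the metallicity CDF) rests on mixture CDFs of the form sketched below. The weights, means, and widths here are illustrative stand-ins, not the paper's fitted values.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mixture_cdf(x, w, mu1, s1, mu2, s2):
    """CDF of a two-component Gaussian mixture:
    w * N(mu1, s1^2) + (1 - w) * N(mu2, s2^2)."""
    return w * gaussian_cdf(x, mu1, s1) + (1.0 - w) * gaussian_cdf(x, mu2, s2)

# Single Gaussian centered on the paper's preferred mean [M/H] ~ -2.5,
# versus a hypothetical IGM-like / ISM-like bimodal alternative
single = gaussian_cdf(-2.5, -2.5, 0.6)                  # exactly 0.5 at the mean
bimodal = mixture_cdf(-2.5, 0.5, -3.0, 0.3, -1.0, 0.3)
assert abs(single - 0.5) < 1e-12
```

Fitting both forms to the survival-statistics CDF and comparing goodness of fit is what distinguishes a unimodal metallicity population from the predicted IGM/ISM bimodality.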
Hall, Peter A
2012-03-01
Fatty foods are regarded as highly appetitive, and self-control is often required to resist consumption. Executive control resources (ECRs) are potentially facilitative of self-control efforts, and therefore could predict success in the domain of dietary self-restraint. It is not currently known whether stronger ECRs facilitate resistance to fatty food consumption, and moreover, it is unknown whether such an effect would be stronger in some age groups than others. The purpose of the present study was to examine the association between ECRs and consumption of fatty foods among healthy community-dwelling adults across the adult life span. An age-stratified sample of individuals between 18 and 89 years of age attended two laboratory sessions. During the first session they completed two computer-administered tests of ECRs (Stroop and Go-NoGo) and a test of general cognitive function (Wechsler Abbreviated Scale of Intelligence); participants completed two consecutive 1-week recall measures to assess frequency of fatty and nonfatty food consumption. Regression analyses revealed that stronger ECRs were associated with lower frequency of fatty food consumption over the 2-week interval. This association was observed for both measures of ECR and a composite measure. The effect remained significant after adjustment for demographic variables (age, gender, socioeconomic status), general cognitive function, and body mass index. The observed effect of ECRs on fatty food consumption frequency was invariant across age group, and did not generalize to nonfatty food consumption. ECRs may be potentially important, though understudied, determinants of dietary behavior in adults across the life span.
Mai, Vien Quang; Mai, Trịnh Thị Xuan; Tam, Ngo Le Minh; Nghia, Le Trung; Komada, Kenichi; Murakami, Hitoshi
2018-05-19
Dengue is a clinically important arthropod-borne viral disease with increasing global incidence. Here we aimed to estimate the prevalence of dengue infections in Khanh Hoa Province, central Viet Nam, and to identify risk factors for infection. We performed a stratified cluster sampling survey including residents of 3-60 years of age in Nha Trang City, Ninh Hoa District and Dien Khanh District, Khanh Hoa Province, in October 2011. Immunoglobulin G (IgG) and immunoglobulin M (IgM) against dengue were analyzed using a rapid test kit. Participants completed a questionnaire exploring clinical dengue incidence, socio-economic status, and individual behavior. A household checklist was used to examine environment, mosquito larvae presence, and exposure to public health interventions. IgG positivity was 20.5% (urban, 16.3%; rural, 23.0%), IgM positivity was 6.7% (urban, 6.4%; rural, 6.9%), and incidence of clinically compatible dengue during the prior 3 months was 2.8 per 1,000 persons (urban, 1.7; rural, 3.4). For IgG positivity, the adjusted odds ratio (AOR) was 2.68 (95% confidence interval [CI], 1.24-5.81) for mosquito larvae presence in water pooled in old tires and was 3.09 (95% CI, 1.75-5.46) for proximity to a densely inhabited area. For IgM positivity, the AOR was 3.06 (95% CI, 1.50-6.23) for proximity to a densely inhabited area. Our results indicated rural penetration of dengue infections. Control measures should target densely inhabited areas, and may include clean-up of discarded tires and water-collecting waste.
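The adjusted odds ratios quoted above come from logistic regression, where AOR = exp(beta) and the 95% CI is exp(beta ± 1.96·SE). In the sketch below, the coefficient and standard error are back-calculated from the reported AOR of 3.06 (95% CI 1.50-6.23) purely for illustration.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Adjusted odds ratio and 95% CI from a logistic-regression
    coefficient beta and its standard error:
    AOR = exp(beta), CI = (exp(beta - z*se), exp(beta + z*se))."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# beta and se reverse-engineered from the reported AOR 3.06 (1.50-6.23)
aor, (lo, hi) = odds_ratio_ci(beta=math.log(3.06), se=0.363)
assert round(aor, 2) == 3.06
assert lo < aor < hi
```

The same transformation applies to every AOR in the abstract, which is why the CIs are asymmetric around the point estimate on the odds-ratio scale.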
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Mariana archipelago in 2014 as a...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across American Samoa in 2015 as a part of...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across Wake...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Hawaiian archipelago in 2016 as a...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Hawaiian archipelago in 2013 as a...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Pacific Remote Island Areas since...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the...
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, C. M.; Buchhave, P.; K. George, W.
algorithms; sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude that consists of well-defined frequency content, which makes bias easy to spot. The idea...
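The direct spectral estimator named above can be sketched for irregular sample times as follows (no residence-time weighting); the Poisson-like sampling times and the 5 Hz test tone are illustrative assumptions, not the study's computer-generated signal.

```python
import numpy as np

def direct_spectrum(t, x, freqs):
    """Direct spectral estimator for irregularly sampled data:
    S(f) = |sum_k x_k * exp(-2j*pi*f*t_k)|^2 / N  (no residence-time weighting)."""
    t, x = np.asarray(t), np.asarray(x)
    phase = np.exp(-2j * np.pi * np.outer(freqs, t))
    return np.abs(phase @ x) ** 2 / len(t)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 10.0, 2000))   # random (Poisson-like) sample times
x = np.sin(2 * np.pi * 5.0 * t)             # 5 Hz test tone
freqs = np.arange(1.0, 10.0, 0.5)
S = direct_spectrum(t, x, freqs)
assert freqs[np.argmax(S)] == 5.0           # spectral peak recovered at 5 Hz
```

Unlike sample-and-hold resampling, this estimator works on the raw (t_k, x_k) pairs directly, which is why the two approaches exhibit different bias behavior.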
Biro, Peter A
2013-02-01
Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
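The reported catchability bias can be reproduced in a toy Monte Carlo: if capture probability scales with growth rate, the sampled mean growth rate overshoots the true population mean. The population sizes and proportional weights below are illustrative assumptions, not the field data.

```python
import random

rng = random.Random(7)

# Hypothetical population: equal thirds of slow (1), intermediate (2),
# and fast (3) growers, mirroring the equal-density seeding of the lakes
population = [1] * 1000 + [2] * 1000 + [3] * 1000

def catch_sample(pop, n):
    """Capture probability proportional to growth rate (fast fish more
    catchable than slow fish); the proportionality is an assumption."""
    return rng.choices(pop, weights=pop, k=n)

sample = catch_sample(population, 5000)
pop_mean = sum(population) / len(population)   # exactly 2.0
sample_mean = sum(sample) / len(sample)        # expected ~2.33 under this bias
assert sample_mean > pop_mean                  # systematic upward bias
```

Even this crude model shows how a trait-dependent capture probability shifts every downstream estimate (mean growth, its variance, and any trait correlated with it).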
Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh
2011-06-01
This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples, respectively. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
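The variance quadtree (VQT) component can be sketched as a recursive split that keeps subdividing heterogeneous image blocks, so denser sampling strata end up where NDVI variability is high. The thresholds, depth limit, and toy images below are assumptions for illustration.

```python
import numpy as np

def variance_quadtree(img, max_depth, var_thresh):
    """Variance quadtree sketch: recursively split an image into quadrants
    until a block is homogeneous enough (variance below the threshold),
    too small to split, or the depth limit is reached."""
    leaves = []

    def split(r0, r1, c0, c1, depth):
        block = img[r0:r1, c0:c1]
        if depth == max_depth or block.var() <= var_thresh or min(r1 - r0, c1 - c0) < 2:
            leaves.append((r0, r1, c0, c1))
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        for rr, cc in [(r0, c0), (r0, cm), (rm, c0), (rm, cm)]:
            split(rr, rm if rr == r0 else r1, cc, cm if cc == c0 else c1, depth + 1)

    split(0, img.shape[0], 0, img.shape[1], 0)
    return leaves

flat = np.zeros((16, 16))                       # homogeneous: a single leaf
assert len(variance_quadtree(flat, 3, 0.01)) == 1
noisy = np.random.default_rng(4).uniform(size=(16, 16))
leaves = variance_quadtree(noisy, 3, 0.01)      # heterogeneous: many leaves
assert sum((r1 - r0) * (c1 - c0) for r0, r1, c0, c1 in leaves) == 256
```

Each leaf then becomes a stratum within which cLHS-style samples can be drawn, which is the "stratified" part of scLHS.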
Kim, Hee-Young; Kim, Seung-Kyu; Kang, Dong-Mug; Hwang, Yong-Sik; Oh, Jeong-Eun
2014-02-01
Serum samples were collected from volunteers of various ages and both genders using a proportionate stratified sampling method, to assess the exposure of the general population in Busan, South Korea, to perfluorinated compounds (PFCs). Sixteen PFCs were investigated in serum samples from 306 adults (124 males and 182 females) and in one-day composite diet samples (breakfast, lunch, and dinner) from 20 of the serum donors, to investigate the relationship between food and serum PFC concentrations. Perfluorooctanoic acid and perfluorooctanesulfonic acid were the dominant PFCs in the serum samples, with mean concentrations of 8.4 and 13 ng/mL, respectively. Perfluorotridecanoic acid was the dominant PFC in the composite food samples, ranging from ... studies. We confirmed from the relationships between questionnaire results and the PFC concentrations in the serum samples that food is one of the important contributing factors to human exposure to PFCs. However, there were no correlations between the PFC concentrations in the one-day composite diet samples and the serum samples, because a one-day composite diet sample is not necessarily representative of a person's long-term diet and because of the small number of samples taken. Copyright © 2013 Elsevier B.V. All rights reserved.
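A proportionate stratified design like the one above allocates the sample across strata as n_h = n·N_h/N. The sketch below uses largest-remainder rounding so the integer allocations sum exactly to the overall sample size; the stratum sizes are hypothetical.

```python
def proportionate_allocation(strata_sizes, n):
    """Proportionate stratified sampling: stratum h receives
    n_h = n * N_h / N, with largest-remainder rounding so the
    integer allocations sum exactly to n."""
    total = sum(strata_sizes)
    raw = [n * s / total for s in strata_sizes]
    alloc = [int(r) for r in raw]
    leftovers = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in leftovers[: n - sum(alloc)]:
        alloc[i] += 1
    return alloc

# Hypothetical age-by-gender stratum sizes for the source population
sizes = [52345, 48110, 61220, 55470, 43005, 46850]
alloc = proportionate_allocation(sizes, 306)   # the study's serum sample size
assert sum(alloc) == 306
```

Proportionate allocation keeps each stratum's share of the sample equal to its share of the population, so unweighted sample means remain approximately unbiased for population means.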
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. Hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared with those of three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
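A minimal sketch of the LULHS idea: draw stratified (Latin hypercube) standard-normal scores, then impose the spatial covariance through its lower-triangular factor (Cholesky, the LU factorization of a symmetric positive-definite matrix). The grid size and exponential covariance model below are assumptions, not the study's configuration.

```python
import numpy as np
from statistics import NormalDist

def lhs_uniform(n, rng):
    """Latin hypercube sample: one uniform draw per stratum
    [k/n, (k+1)/n), returned in random order."""
    u = (np.arange(n) + rng.uniform(size=n)) / n
    return rng.permutation(u)

def lulhs_field(cov, rng):
    """LULHS sketch: stratified (LHS) standard-normal scores, correlated
    by the lower-triangular Cholesky factor of the covariance."""
    z = np.array([NormalDist().inv_cdf(u) for u in lhs_uniform(cov.shape[0], rng)])
    L = np.linalg.cholesky(cov)    # cov = L @ L.T
    return L @ z                   # spatially correlated random field

rng = np.random.default_rng(11)
dist = np.abs(np.subtract.outer(np.arange(25), np.arange(25)))
cov = np.exp(-dist / 5.0)          # assumed exponential covariance model
field = lulhs_field(cov, rng)
assert field.shape == (25,)
```

The stratification guarantees every part of each marginal distribution is represented, which is why fewer realizations are needed than with plain Monte Carlo sampling.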
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. And thirdly, all the degree distributions of the subnets are slightly biased toward the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some salient characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
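The NR strategy can be sketched directly: at each step the walker never returns to the node it just left, unless that is the only option. The small adjacency list below is an illustrative graph, not one of the paper's networks.

```python
import random

def nr_random_walk(adj, seed_node, steps, rng):
    """No-retracing (NR) random walk: never step straight back to the
    node just left, unless it is the only neighbour (a dead end)."""
    walk = [seed_node]
    prev = None
    for _ in range(steps):
        here = walk[-1]
        options = [v for v in adj[here] if v != prev] or adj[here]
        nxt = rng.choice(options)
        walk.append(nxt)
        prev = here
    return walk

rng = random.Random(5)
# Small illustrative graph (adjacency list), minimum degree 2
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
walk = nr_random_walk(adj, seed_node=0, steps=200, rng=rng)
assert all(walk[i] != walk[i + 2] for i in range(len(walk) - 2))
```

Forbidding immediate retracing reduces path overlap, so more distinct nodes are visited per step than with a classical random walk of the same length.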
Herrero, Pablo; Gómez-Trullén, Eva M; Asensio, Angel; García, Elena; Casas, Roberto; Monserrat, Esther; Pandyan, Anand
2012-12-01
To investigate whether hippotherapy (when applied by a simulator) improves postural control and balance in children with cerebral palsy. Stratified single-blind randomized controlled trial with an independent assessor. Stratification was by Gross Motor Function Classification System level, and allocation was concealed. Children between 4 and 18 years old with cerebral palsy. Participants were randomized to an intervention (simulator ON) or control (simulator OFF) group after informed consent was obtained. Treatment was provided once a week (15 minutes) for 10 weeks. The Gross Motor Function Measure (dimension B for balance and the Total Score) and the Sitting Assessment Scale were administered at baseline (prior to randomization), at the end of the intervention and 12 weeks after completing the intervention. Thirty-eight children participated. The groups were balanced at baseline. Sitting balance (measured by dimension B of the Gross Motor Function Measure) improved significantly in the treatment group (effect size = 0.36; 95% CI 0.01-0.71) and the effect size was greater in the severely disabled group (effect size = 0.80; 95% CI 0.13-1.47). The improvements in sitting balance were not maintained over the follow-up period. Changes in the total score of the Gross Motor Function Measure and the Sitting Assessment Scale were not significant. Hippotherapy with a simulator can improve sitting balance in children with cerebral palsy who have higher levels of disability. However, this did not lead to a change in the overall function of these children (Gross Motor Function Classification System level V).
Sampling large random knots in a confined space
International Nuclear Information System (INIS)
Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M
2007-01-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
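The uniform random polygon model and the projected-crossing count discussed above can be sketched as follows; the naive O(n^2) intersection count on the z-projection is for illustration only, not the authors' knot-invariant pipeline.

```python
import random

def uniform_random_polygon(n, rng):
    """Uniform random polygon: n vertices i.i.d. uniform in the unit cube,
    joined in order and closed -- the confined-knot model discussed above."""
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def crossings_xy(poly):
    """Naive O(n^2) count of crossings in the z-projection of the polygon."""
    n = len(poly)
    pts = [(x, y) for x, y, _ in poly]
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]

    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # these edges are adjacent through the closure
            (p, q), (r, s) = edges[i], edges[j]
            if orient(p, q, r) * orient(p, q, s) < 0 and \
               orient(r, s, p) * orient(r, s, q) < 0:
                count += 1
    return count

rng = random.Random(13)
small = crossings_xy(uniform_random_polygon(10, rng))
large = crossings_xy(uniform_random_polygon(100, rng))
assert large > small   # crossing number grows roughly as O(n^2)
```

Because the vertices are i.i.d. in the confining cube, edge lengths stay comparable to the cube size as n grows, which is what drives the quadratic growth in projected crossings.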
Sampling large random knots in a confined space
Energy Technology Data Exchange (ETDEWEB)
Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)
2007-09-28
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS
Sampath Sundaram; Ammani Sivaraman
2010-01-01
In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic samplin...
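Plain linear systematic sampling with multiple random starts (the Gautschi scheme that the balanced and modified variants build on) can be sketched as follows, assuming a population of size N = nk with m starts dividing n; the balanced and modified variants differ only in which units each start selects:

```python
import random

def systematic_multi_start(N, n, m, rng=random):
    """Linear systematic sampling with m independent random starts
    (after Gautschi, 1957). Assumes N = n*k and that m divides n."""
    assert n % m == 0 and N % n == 0
    k = N // n                           # basic sampling interval
    step = m * k                         # interval used with m starts
    starts = rng.sample(range(step), m)  # m distinct random starts
    # Each start contributes n/m equally spaced units.
    return sorted(s + j * step for s in starts for j in range(n // m))

s = systematic_multi_start(N=100, n=20, m=4)
assert len(s) == 20 and len(set(s)) == 20   # 20 distinct units
assert all(0 <= u < 100 for u in s)         # all within the population
```

With m = 1 this reduces to ordinary linear systematic sampling; multiple starts make an unbiased variance estimator possible, which is the point of the comparison above.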
Lee, Myung Kyung; Park, Bu Kyung
2018-01-01
This study examined the effect of flipped learning in comparison to traditional learning in a surgical nursing practicum. The subjects of this study were 102 nursing students in their third year of university who were scheduled to complete a clinical nursing practicum in an operating room or surgical unit. Participants were randomly assigned to either a flipped learning group (n = 51) or a traditional learning group (n = 51) for the 1-week, 45-hour clinical nursing practicum. The flipped learning group completed independent e-learning lessons on surgical nursing and received a brief orientation prior to the commencement of the practicum, while the traditional learning group received a face-to-face orientation and on-site instruction. After the completion of the practicum, both groups completed a case study and a conference. The students' self-efficacy, self-leadership, and problem-solving skills in clinical practice were measured both before and after the one-week surgical nursing practicum. Participants' independent goal setting and evaluation of beliefs and assumptions, subscales of self-leadership and problem-solving skills, were compared between the flipped learning group and the traditional learning group. The results showed greater improvement on these indicators for the flipped learning group than for the traditional learning group. The flipped learning method might offer more effective e-learning opportunities in terms of self-leadership and problem-solving than the traditional learning method in surgical nursing practicums.
Rubin, K H; Rothmann, M J; Holmberg, T; Høiberg, M; Möller, S; Barkmann, R; Glüer, C C; Hermann, A P; Bech, M; Gram, J; Brixen, K
2018-03-01
The Risk-stratified Osteoporosis Strategy Evaluation (ROSE) study investigated the effectiveness of a two-step screening program for osteoporosis in women. We found no overall reduction in fractures from systematic screening compared to the current case-finding strategy. The group of moderate- to high-risk women, who accepted the invitation to DXA, seemed to benefit from the program. The purpose of the ROSE study was to investigate the effectiveness of a two-step population-based osteoporosis screening program using the Fracture Risk Assessment Tool (FRAX) derived from a self-administered questionnaire to select women for DXA scan. After the scanning, standard osteoporosis management according to Danish national guidelines was followed. Participants were randomized to either screening or control group, and randomization was stratified according to age and area of residence. Inclusion took place from February 2010 to November 2011. Participants received a self-administered questionnaire, and women in the screening group with a FRAX score ≥ 15% (major osteoporotic fractures) were invited to a DXA scan. Primary outcome was incident clinical fractures. Intention-to-treat analysis and two per-protocol analyses were performed. A total of 3416 fractures were observed during a median follow-up of 5 years. No significant differences were found in the intention-to-treat analyses with 34,229 women included aged 65-80 years. The per-protocol analyses showed a risk reduction in the group that underwent DXA scanning compared to women in the control group with a FRAX ≥ 15%, in regard to major osteoporotic fractures, hip fractures, and all fractures. The risk reduction was most pronounced for hip fractures (adjusted SHR 0.741, p = 0.007). Compared to an office-based case-finding strategy, the two-step systematic screening strategy had no overall effect on fracture incidence. The two-step strategy seemed, however, to be beneficial in the group of women who were
Random sampling of evolution time space and Fourier transform processing
International Nuclear Information System (INIS)
Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor
2006-01-01
Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time.
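The processing step amounts to evaluating the Fourier transform directly over randomly placed time points rather than an FFT over a regular grid. A toy one-dimensional illustration (assumed signal parameters, not the authors' 3D implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly sampled evolution times (nonuniform) of a decaying cosine.
t = np.sort(rng.uniform(0.0, 1.0, 256))   # 256 random time points, in s
f0 = 25.0                                 # true signal frequency, Hz
signal = np.cos(2 * np.pi * f0 * t) * np.exp(-2.0 * t)

# Direct Fourier transform evaluated on a frequency grid; with random
# time sampling this replaces the usual FFT over equidistant points.
freqs = np.arange(0.0, 60.0, 0.5)
spectrum = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ signal)

peak = freqs[np.argmax(spectrum)]
assert abs(peak - f0) <= 1.5   # the spectral peak recovers f0
```

Random sampling trades the FFT's speed for freedom in placing the time points, which is what yields the resolution gain reported above for a fixed measurement time.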
International Nuclear Information System (INIS)
Bouyer, Jeremy; Seck, Momar Talla; Guerrini, Laure; Sall, Baba; Ndiaye, Elhadji Youssou; Vreysen, Marc J.B.
2010-01-01
The riverine tsetse species Glossina palpalis gambiensis Vanderplank 1949 (Diptera: Glossinidae) inhabits riparian forests along river systems in West Africa. The government of Senegal has embarked on a project to eliminate this tsetse species, and African animal trypanosomoses, from the Niayes area using an area-wide integrated pest management approach. A stratified entomological sampling strategy was therefore developed using spatial analytical tools and mathematical modeling. A preliminary phytosociological census identified eight types of suitable habitat, which could be discriminated from LandSat 7 ETM satellite images and were denominated wet areas. At the end of March 2009, 683 unbaited Vavoua traps had been deployed, and the observed infested area in the Niayes was 525 km². In the remaining area, a mathematical model was used to assess the risk that flies were present despite a sequence of zero catches. The analysis showed that this risk was above 0.05 in 19% of this area, which will be considered infested during the control operations. The remote sensing analysis that identified the wet areas allowed a restriction of the area to be surveyed to 4% of the total surface area (7,150 km²), whereas the mathematical model provided an efficient method to improve the accuracy and robustness of the sampling protocol. The final size of the control area will be decided based on the entomological collection data. This entomological sampling procedure might be used for other vector or pest control scenarios. (Authors)
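The zero-catch risk assessment can be illustrated with a one-line Bayes computation: given a prior probability that flies are present and a per-night probability that a trap catches at least one fly when they are, a run of empty nights only gradually drives the posterior risk down. The prior and detection probability below are hypothetical, not the study's fitted model:

```python
def risk_present_given_zero_catches(prior, p_detect, n_nights):
    """Posterior probability that flies are present despite n_nights of
    zero trap catches (Bayes' rule; p_detect is the nightly probability
    of catching at least one fly when flies are present)."""
    miss = (1.0 - p_detect) ** n_nights
    return prior * miss / (prior * miss + (1.0 - prior))

# With a 0.2 nightly detection probability, ten empty nights still
# leave nearly a 10% chance that flies are present.
risk = risk_present_given_zero_catches(prior=0.5, p_detect=0.2, n_nights=10)
assert 0.05 < risk < 0.15
```

This is why part of the zero-catch area above still had to be classed as infested: the posterior risk stayed above the 0.05 threshold.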
Adaptive importance sampling of random walks on continuous state spaces
International Nuclear Information System (INIS)
Baggerly, K.; Cox, D.; Picard, R.
1998-01-01
The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material
Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël
2016-11-17
Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.
Design of dry sand soil stratified sampler
Li, Erkang; Chen, Wei; Feng, Xiao; Liao, Hongbo; Liang, Xiaodong
2018-04-01
This paper presents a design of a stratified sampler for dry sand soil, which can be used for stratified sampling of loose sand under certain conditions. Our group designed the mechanical structure of a portable, single-person, dry sandy soil stratified sampler. We have also set up a mathematical model for the sampler. This lays the foundation for further design research and development.
Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A
2001-01-01
Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
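The T-score computation at the heart of the comparison is a simple standardization against the chosen reference range, which is exactly why the choice of reference sample matters. The numbers below are illustrative, not the study's reference ranges:

```python
def t_score(bmd, ref_mean, ref_sd):
    """T-score: position of an individual's BMD within a reference
    range, in standard deviations of that reference population."""
    return (bmd - ref_mean) / ref_sd

# The same measured BMD yields different T-scores under different
# reference ranges (hypothetical spine values, g/cm^2).
bmd = 0.85
pop_based = t_score(bmd, 1.00, 0.12)   # population-based reference range
volunteer = t_score(bmd, 0.95, 0.12)   # volunteer-derived reference range
assert round(pop_based, 2) == -1.25
assert pop_based < volunteer           # population range reads more negative
```

A shift of the reference mean by less than half a standard deviation is enough to move a patient across the conventional T-score = -2.5 osteoporosis threshold, matching the reclassification effect reported above.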
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes of n (50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-squared tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
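The two designs can be sketched on a toy membership file (hypothetical data; the study's 7-unit acceptability thresholds are not reproduced here):

```python
import random

def simple_random_sample(records, n):
    return random.sample(records, n)

def systematic_sample(records, n):
    """Every k-th record of an ordered list, from a random start."""
    k = len(records) // n
    return records[random.randrange(k)::k][:n]

# Toy membership database: (age, province) pairs, listed in age order.
random.seed(1)
db = sorted((random.gauss(45, 10), random.choice("ABC")) for _ in range(5000))

srs = simple_random_sample(db, 300)
ss = systematic_sample(db, 300)
mean = lambda xs: sum(a for a, _ in xs) / len(xs)

# Both designs estimate the population mean age acceptably well.
assert abs(mean(srs) - mean(db)) < 2
assert abs(mean(ss) - mean(db)) < 2
```

Systematic sampling behaves like SRS only when the list order carries no periodic structure, which is the caveat the authors attach to their recommendation.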
RandomSpot: A web-based tool for systematic random sampling of virtual slides.
Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E
2015-01-01
This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
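The equidistant-point placement rule underlying such a tool is compact: a regular grid is laid over the region of interest, offset as a whole by a single random displacement, so every location has equal inclusion probability. A geometric sketch with hypothetical region dimensions (not RandomSpot's actual API):

```python
import random

def frange(start, stop, step):
    """Floating-point range: start, start+step, ... while < stop."""
    v = start
    while v < stop:
        yield v
        v += step

def systematic_random_spots(width, height, spacing, rng=random):
    """Equidistant grid of sample points over a rectangular region,
    offset as a whole by one random displacement (the SRS rule used
    by grid-shaped graticules)."""
    ox, oy = rng.uniform(0, spacing), rng.uniform(0, spacing)
    return [(x, y)
            for y in frange(oy, height, spacing)
            for x in frange(ox, width, spacing)]

# A 1000 x 800 region of interest with 100-unit spacing.
pts = systematic_random_spots(1000, 800, 100, random.Random(7))
assert len(pts) == 80   # 10 columns x 8 rows for these dimensions
```

Each point would then be classified by a pathologist (e.g. tumor vs stroma), and class proportions estimated from the point counts.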
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Directory of Open Access Journals (Sweden)
Xiaomeng Zhang
2006-01-01
As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering to a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast algorithm of the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. Also, we present a reconstruction approach to estimate the position of the fluorescent source, based on the trial-and-error method, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
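The table-querying idea can be sketched in one dimension: precompute many multi-step outcomes once, then replace each multi-step simulation with a single random table lookup. The free-path model below is an illustrative stand-in, not the paper's photon-scattering physics:

```python
import random

def build_table(single_step, n_steps, table_size, rng):
    """Precompute table entries, each the end result of n_steps
    scattering steps, so later simulation is one table lookup."""
    return [sum(single_step(rng) for _ in range(n_steps))
            for _ in range(table_size)]

def sample_from_table(table, rng):
    """One 'multi-step' outcome per random query."""
    return rng.choice(table)

rng = random.Random(42)
step = lambda r: r.expovariate(10.0)   # one free path, mean 0.1 cm
table = build_table(step, n_steps=20, table_size=10_000, rng=rng)

# Querying the table replaces simulating 20 steps each time.
path = sample_from_table(table, rng)
mean_path = sum(table) / len(table)
assert abs(mean_path - 2.0) < 0.1      # 20 steps x mean 0.1 cm
```

The table is built once per medium, so the per-photon cost drops from n_steps random draws to a single lookup, which is the source of the speedup claimed above.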
Sampling Polya-Gamma random variates: alternate and approximate techniques
Windle, Jesse; Polson, Nicholas G.; Scott, James G.
2014-01-01
Efficiently sampling from the Pólya-Gamma distribution, ${PG}(b,z)$, is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the ${PG}(1,z)$ distribution. We build two new samplers that offer improved performance when sampling from the ${PG}(b,z)$ distribution and $b$ is not unity.
Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling
Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.
2013-01-01
Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general the simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples be drawn via a simple random sampling procedure makes calculations much simpler, but this assumption is often violated in practice as the data does not always yield simple random ...
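Estimation of the Gini index from a simple random sample, the baseline against which the ranked-set and systematic designs are compared, can be sketched as follows (the lognormal population is an assumed example):

```python
import random

def gini(sample):
    """Gini index estimate from a simple random sample: mean absolute
    difference divided by twice the mean, computed in O(n log n)
    from the sorted sample."""
    xs = sorted(sample)
    n = len(xs)
    mean = sum(xs) / n
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * n * mean)

random.seed(0)
# Lognormal(0, 1) incomes; the true Gini is 2*Phi(1/sqrt(2)) - 1 ~ 0.52.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
sample = random.sample(population, 2000)
assert abs(gini(sample) - 0.52) < 0.05
```

Ranked-set sampling aims to beat this SRS estimator in precision for the same sample size by exploiting (possibly imperfect) rankings before measurement.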
Importance sampling of heavy-tailed iterated random functions
B. Chen (Bohan); C.H. Rhee (Chang-Han); A.P. Zwart (Bert)
2016-01-01
We consider a stochastic recurrence equation of the form $Z_{n+1} = A_{n+1} Z_n + B_{n+1}$, where $\mathbb{E}[\log A_1]<0$, $\mathbb{E}[\log^+ B_1]<\infty$ and $\{(A_n,B_n)\}_{n\in\mathbb{N}}$ is an i.i.d. sequence of positive random vectors. The stationary distribution of this Markov
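The recurrence is easy to simulate directly; the sketch below uses an assumed lognormal $A$ and exponential $B$ satisfying the stated moment conditions, and approximates draws from the stationary distribution (naive simulation only; the paper's importance-sampling scheme for the heavy tail is not reproduced):

```python
import math
import random

def simulate_recursion(n_steps, rng):
    """Iterate Z_{k+1} = A_{k+1} Z_k + B_{k+1} with E[log A] < 0, so
    Z_k converges in distribution; n_steps iterations approximate
    one stationary draw."""
    z = 0.0
    for _ in range(n_steps):
        a = rng.lognormvariate(-0.5, 0.5)   # E[log A] = -0.5 < 0
        b = rng.expovariate(1.0)            # positive innovations, E[B] = 1
        z = a * z + b
    return z

rng = random.Random(2)
draws = [simulate_recursion(200, rng) for _ in range(2000)]
mean_z = sum(draws) / len(draws)

# Stationary mean is E[B] / (1 - E[A]); here E[A] = exp(-0.5 + 0.125).
expected = 1.0 / (1.0 - math.exp(-0.375))
assert abs(mean_z - expected) < 0.3
```

Naive simulation like this estimates the bulk of the stationary law well but is hopeless for rare tail events, which is what motivates the importance-sampling construction studied in the paper.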
Directory of Open Access Journals (Sweden)
CODRUŢA DURA
2010-01-01
The sample represents a particular segment of the statistical population chosen to represent it as a whole. The representativeness of the sample determines the accuracy of estimations made on the basis of calculating the research indicators and the inferential statistics. The method of random sampling is part of the probabilistic methods which can be used within marketing research, and it is characterized by the fact that it imposes the requirement that each unit belonging to the statistical population should have an equal chance of being selected for the sampling process. When simple random sampling is meant to be rigorously put into practice, it is recommended to use the technique of random number tables in order to configure the sample which will provide the information that the marketer needs. The paper also details the practical procedure implemented in order to create a sample for a marketing research by generating random numbers using the facilities offered by Microsoft Excel.
Correlated random sampling for multivariate normal and log-normal distributions
International Nuclear Information System (INIS)
Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.
2012-01-01
A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distributions can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
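The core of correlated multivariate lognormal sampling can be sketched with a Cholesky factorization of the log-space covariance: draw correlated normals, then exponentiate. This is an illustration consistent with, but not taken from, the paper's method:

```python
import numpy as np

def correlated_lognormal(mean, cov, size, rng):
    """Sample correlated log-normal vectors: correlated normals via the
    Cholesky factor of the log-space covariance, then exponentiate."""
    L = np.linalg.cholesky(cov)             # cov = L @ L.T
    z = rng.standard_normal((size, len(mean)))
    return np.exp(mean + z @ L.T)

rng = np.random.default_rng(1)
mu = np.array([0.0, 0.0])                   # log-space means
cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])              # log-space covariance
samples = correlated_lognormal(mu, cov, 200_000, rng)

logs = np.log(samples)
assert np.allclose(np.cov(logs.T), cov, atol=0.005)  # covariance reproduced
assert (samples > 0).all()                  # inherently positive parameters
```

Working in log space is what keeps the sampled parameters positive while still honoring the given first two moments, the property highlighted in the abstract.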
Chaudhuri, Arijit
2014-01-01
Exposure to Sampling: Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem. More Intricacies: Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
An alternative procedure for estimating the population mean in simple random sampling
Directory of Open Access Journals (Sweden)
Housila P. Singh
2012-03-01
This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. Firstly we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. Later we propose a ratio-type estimator and study its properties in simple random sampling. Numerically we show that the proposed class of estimators is more efficient than different known estimators, including the Gupta and Shabbir (2008) estimator.
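The classical ratio estimator that such proposals build on uses a known population mean of an auxiliary variable x correlated with the study variable y. A sketch on a toy population (not the paper's proposed class):

```python
import random

def ratio_estimator(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y under SRS,
    using the known population mean of an auxiliary variable x."""
    r = sum(y_sample) / sum(x_sample)
    return r * x_pop_mean

random.seed(3)
N = 50_000
x = [random.uniform(10, 20) for _ in range(N)]          # auxiliary variable
y = [2.0 * xi + random.gauss(0, 1) for xi in x]         # y ~ proportional to x

idx = random.sample(range(N), 400)                      # SRS of 400 units
est = ratio_estimator([y[i] for i in idx],
                      [x[i] for i in idx],
                      sum(x) / N)
true_mean = sum(y) / N
assert abs(est - true_mean) < 0.5
```

When y is nearly proportional to x, the ratio estimator has a much smaller variance than the plain sample mean; the improved estimators in the paper refine this trade-off.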
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
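A toy simulation (hypothetical claim file and error rate, not the study's data) illustrates why a 300-claim random sample directly observes only a small fraction of the error dollars that a 100%-of-claims audit would find:

```python
import random

def random_sample_audit(claims, n):
    """Error dollars directly observed in an n-claim random sample."""
    return sum(random.sample(claims, n))

random.seed(9)
# Toy self-funded claim file: 20,000 claims, ~1% carry an overpayment
# error of $100-$2,000; error-free claims contribute $0.
claims = [random.uniform(100, 2000) if random.random() < 0.01 else 0.0
          for _ in range(20_000)]

total_errors = sum(claims)                     # what a 100% audit finds
found_300 = random_sample_audit(claims, 300)   # what a 300-claim sample sees

# A 300-claim sample inspects 1.5% of the file, so most error dollars
# go unobserved unless projected statistically.
assert found_300 < 0.2 * total_errors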
Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster
DEFF Research Database (Denmark)
Schou, Mads Fristrup
2013-01-01
When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column...... and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness...... and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila....
Tchoubi, Sébastien; Sobngwi-Tambekou, Joëlle; Noubiap, Jean Jacques N.; Asangbeh, Serra Lem; Nkoum, Benjamin Alexandre; Sobngwi, Eugene
2015-01-01
Background Childhood obesity is one of the most serious public health challenges of the 21st century. This study assessed the prevalence of overweight and obesity among children aged 6 months to 5 years in Cameroon in 2011. Methods Four thousand five hundred and eighteen children (2205 boys and 2313 girls) aged between 6 and 59 months were sampled from the 2011 Demographic Health Survey (DHS) database. Body Mass Index (BMI) z-scores based on the WHO 2006 reference population were chosen to estimate overweight (BMI z-score > 2) and obesity (BMI z-score > 3). Regression analyses were performed to investigate risk factors of overweight/obesity. Results The prevalence of overweight and obesity was 8% (1.7% for obesity alone). Boys were more affected by overweight than girls, with prevalences of 9.7% and 6.4% respectively. The highest prevalence of overweight was observed in the Grassfield area (including people living in the West and North-West regions) (15.3%). Factors that were independently associated with overweight and obesity included: having an overweight mother (adjusted odds ratio (aOR) = 1.51; 95% CI 1.15 to 1.97) or an obese mother (aOR = 2.19; 95% CI 1.55 to 3.07), compared to having a normal-weight mother; high birth weight (aOR = 1.69; 95% CI 1.24 to 2.28) compared to normal birth weight; male gender (aOR = 1.56; 95% CI 1.24 to 1.95); low birth rank (aOR = 1.35; 95% CI 1.06 to 1.72); being aged between 13-24 months (aOR = 1.81; 95% CI 1.21 to 2.66) and 25-36 months (aOR = 2.79; 95% CI 1.93 to 4.13) compared to being aged 45 to 49 months; and living in the Grassfield area (aOR = 2.65; 95% CI 1.87 to 3.79) compared to living in the Forest area. Muslim religion appeared as a protective factor (aOR = 0.67; 95% CI 0.46 to 0.95) compared to Christian religion. Conclusion This study underlines a high prevalence of early childhood overweight with significant disparities between ecological areas of Cameroon. Risk factors of overweight included high maternal BMI, high
Tchoubi, Sébastien; Sobngwi-Tambekou, Joëlle; Noubiap, Jean Jacques N; Asangbeh, Serra Lem; Nkoum, Benjamin Alexandre; Sobngwi, Eugene
2015-01-01
Childhood obesity is one of the most serious public health challenges of the 21st century. This study assessed the prevalence of overweight and obesity among children aged 6 months to 5 years in Cameroon in 2011. Four thousand five hundred and eighteen children (2205 boys and 2313 girls) aged between 6 and 59 months were sampled from the 2011 Demographic Health Survey (DHS) database. Body Mass Index (BMI) z-scores based on the WHO 2006 reference population were chosen to estimate overweight (BMI z-score > 2) and obesity (BMI z-score > 3). Regression analyses were performed to investigate risk factors of overweight/obesity. The prevalence of overweight and obesity was 8% (1.7% for obesity alone). Boys were more affected by overweight than girls, with prevalences of 9.7% and 6.4% respectively. The highest prevalence of overweight was observed in the Grassfield area (including people living in the West and North-West regions) (15.3%). Factors that were independently associated with overweight and obesity included: having an overweight mother (adjusted odds ratio (aOR) = 1.51; 95% CI 1.15 to 1.97) or an obese mother (aOR = 2.19; 95% CI 1.55 to 3.07), compared to having a normal-weight mother; high birth weight (aOR = 1.69; 95% CI 1.24 to 2.28) compared to normal birth weight; male gender (aOR = 1.56; 95% CI 1.24 to 1.95); low birth rank (aOR = 1.35; 95% CI 1.06 to 1.72); being aged between 13-24 months (aOR = 1.81; 95% CI 1.21 to 2.66) and 25-36 months (aOR = 2.79; 95% CI 1.93 to 4.13) compared to being aged 45 to 49 months; and living in the Grassfield area (aOR = 2.65; 95% CI 1.87 to 3.79) compared to living in the Forest area. Muslim religion appeared as a protective factor (aOR = 0.67; 95% CI 0.46 to 0.95) compared to Christian religion. This study underlines a high prevalence of early childhood overweight with significant disparities between ecological areas of Cameroon. Risk factors of overweight included high maternal BMI, high birth weight, male gender, low birth rank
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
2010-07-01
40 CFR Protection of Environment, § 761.306: Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions
Energy Technology Data Exchange (ETDEWEB)
Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)
2015-01-15
Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
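The randomization tests proposed as an alternative are straightforward to implement: instead of assuming random sampling from a population, the observed group labels are repeatedly shuffled and the test statistic recomputed, giving a P-value conditional on the data actually collected. A minimal sketch with made-up scores for two convenience groups:

```python
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=1):
    """Two-sided randomization test for a difference in means.
    The P-value is the fraction of label shufflings whose mean
    difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction keeps P > 0

# Hypothetical scores from two convenience samples
p = permutation_test([12, 15, 14, 16, 19], [9, 11, 10, 13, 8])
print(p)  # small P: shuffled differences rarely reach the observed one
```

Because the reference distribution is generated from the data themselves, no claim of random sampling from a wider population is needed, which is exactly the limitation the review identifies.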
Peacock, Tom; Blanchette, Francois; Bush, John W. M.
2005-04-01
We present the results of an experimental investigation of the flows generated by monodisperse particles settling at low Reynolds number in a stably stratified ambient with an inclined sidewall. In this configuration, upwelling beneath the inclined wall associated with the Boycott effect is opposed by the ambient density stratification. The evolution of the system is determined by the relative magnitudes of the container depth, h, and the neutral buoyancy height, hn = c0(ρp-ρf)/|dρ/dz|, where c0 is the particle concentration, ρp the particle density, ρf the mean fluid density and dρ/dz the ambient density gradient. For weak stratification, h < hn, the Boycott layer transports dense fluid from the bottom to the top of the system; subsequently, the upper clear layer of dense saline fluid is mixed by convection. For sufficiently strong stratification, h > hn, layering occurs. The lowermost layer is created by clear fluid transported from the base to its neutral buoyancy height, and has a vertical extent hn; subsequently, smaller overlying layers develop. Within each layer, convection erodes the initially linear density gradient, generating a step-like density profile throughout the system that persists after all the particles have settled. Particles are transported across the discrete density jumps between layers by plumes of particle-laden fluid.
Generating Random Samples of a Given Size Using Social Security Numbers.
Erickson, Richard C.; Brauchle, Paul E.
1984-01-01
The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
L'Engle, Kelly; Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne
2018-01-01
Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
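The four outcome rates reported can be reproduced from call-disposition counts using AAPOR-style formulas. The sketch below uses simplified forms of the response, cooperation, refusal, and contact rates; the refusal, noncontact, other, and unknown-eligibility counts are invented for illustration (only the completed and partial interview counts come from the abstract).

```python
def aapor_rates(complete, partial, refusal, noncontact, other, unknown_eligible):
    """Simplified AAPOR-style outcome rates. All eligible or
    possibly-eligible cases are counted in the denominator base."""
    base = complete + partial + refusal + noncontact + other + unknown_eligible
    contacted = complete + partial + refusal + other
    response = (complete + partial) / base        # counting partials as responses
    cooperation = (complete + partial) / contacted
    refusal_rate = refusal / base
    contact = contacted / base
    return response, cooperation, refusal_rate, contact

# Illustrative dispositions (not the Ghana survey's actual non-interview tallies)
r = aapor_rates(complete=9469, partial=3547, refusal=3100,
                noncontact=24000, other=500, unknown_eligible=1000)
print([round(v, 2) for v in r])
```

The full AAPOR standard distinguishes several variants (RR1-RR6, etc.) depending on how partial interviews and cases of unknown eligibility are treated; the single form above is a sketch of the bookkeeping, not the survey's exact calculation.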
Directory of Open Access Journals (Sweden)
Nguyen Phuong H
2012-10-01
Full Text Available Abstract Background Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned, given that many women enter pregnancy with poor iron stores, that maternal and fetal tissues make substantial micronutrient demands, and that programmatic issues affect the timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The Preconcept trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of study design, methodology and sample characteristics from baseline survey data and key lessons learned. Methods/design We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either: (1) 2800 μg folic acid; (2) 60 mg iron and 2800 μg folic acid; or (3) MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and
International Nuclear Information System (INIS)
Maziero, Jonas
2015-01-01
The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state for getting RQS from random DPDs and random unitary matrices. In the sequence, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we regard the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices. Subsequently, a too fast concentration of measure in the quantum state space that appears in this parametrization is noticed. (author)
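Of the techniques reviewed, the Ginibre construction is the simplest to sketch: any random general complex matrix G yields a valid density matrix as GG†/Tr(GG†), which is positive semidefinite with unit trace by construction.

```python
import numpy as np

def random_density_matrix(dim, rng=None):
    """Random density matrix via the Ginibre ensemble:
    rho = G G† / Tr(G G†) for G with i.i.d. complex Gaussian entries."""
    rng = np.random.default_rng() if rng is None else rng
    g = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    m = g @ g.conj().T          # Hermitian and positive semidefinite
    return m / np.trace(m).real  # normalize to unit trace

rho = random_density_matrix(4, np.random.default_rng(0))
print(np.isclose(np.trace(rho).real, 1.0))          # unit trace
print((np.linalg.eigvalsh(rho) >= -1e-12).all())    # valid spectrum
```

This is why the review calls the overparametrized/Ginibre route "friendly for numerical implementations": no eigendecomposition or explicit positivity constraint is ever needed during generation.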
Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes
International Nuclear Information System (INIS)
Murarka, I.P.; Bodeau, D.J.
1977-01-01
Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes.
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling
Directory of Open Access Journals (Sweden)
Bo Yu
2015-01-01
Full Text Available This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply successive sampling scheme to improve the estimation of the sensitive proportion on current occasion.
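Warner's spinner design, the classic randomized response technique this line of estimators builds on, shows how a sensitive proportion is recovered from masked answers: each respondent truthfully answers either the sensitive statement (with probability p) or its negation, so no individual answer reveals the trait, yet the population proportion can be unmasked algebraically. A simulation sketch with illustrative numbers:

```python
import random

def warner_estimate(answers, p):
    """Warner (1965) randomized response estimator.
    P(yes) = p*pi + (1-p)*(1-pi), so pi = (lam - (1-p)) / (2p - 1)."""
    lam = sum(answers) / len(answers)   # observed "yes" proportion
    return (lam - (1 - p)) / (2 * p - 1)

def respond(has_trait, p, rng):
    # With probability p answer the sensitive statement, else its negation.
    return has_trait if rng.random() < p else (not has_trait)

rng = random.Random(42)
true_pi, p = 0.20, 0.7
answers = [respond(rng.random() < true_pi, p, rng) for _ in range(100_000)]
print(warner_estimate(answers, p))  # recovers roughly 0.20
```

The successive sampling scheme of the paper layers repeated-occasion information on top of this kind of estimator; the sketch shows only the single-occasion unmasking step.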
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
2010-07-01
40 CFR Protection of Environment, § 761.79(b)(3), § 761.308: Sample selection by random number generation on any two-dimensional square grid. ... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
The Dirichlet-Multinomial model for multivariate randomized response data and small samples
Avetisyan, Marianna; Fox, Gerardus J.A.
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The
A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan
Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu
2012-01-01
To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…
A simple sample size formula for analysis of covariance in cluster randomized trials.
Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.
2012-01-01
For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An
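The role of the baseline measurement can be made concrete with a design-effect calculation. The sketch below combines the usual two-sample normal-approximation sample size with the ANCOVA design effect for cluster randomized trials published by Teerenstra and colleagues; parameter values are illustrative.

```python
from math import ceil

def cluster_ancova_n(delta, sd, icc, m, ):
    """Approximate subjects per arm for a cluster randomized trial
    analysed by ANCOVA with a baseline measurement of the outcome.
    Follows the design effect (1 + (m-1)*icc) * (1 - r**2) with
    r = m*icc / (1 + (m-1)*icc), as given by Teerenstra et al. (2012).
    delta: target difference; sd: outcome SD; icc: intracluster
    correlation; m: cluster size. Two-sided alpha 0.05, power 0.80."""
    z_a, z_b = 1.959964, 0.841621            # N(0,1) quantiles
    n_unadj = 2 * (z_a + z_b) ** 2 * (sd / delta) ** 2
    de_cluster = 1 + (m - 1) * icc           # usual clustering inflation
    r = m * icc / de_cluster                 # baseline-follow-up correlation term
    return ceil(n_unadj * de_cluster * (1 - r ** 2))

# Illustrative: detect a 0.4 SD difference, ICC 0.05, clusters of 20
print(cluster_ancova_n(delta=0.4, sd=1.0, icc=0.05, m=20))
```

The (1 - r²) factor is the saving bought by the baseline measurement; with icc = 0 the formula collapses to the ordinary individually randomized sample size.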
Random selection of items. Selection of n1 samples among N items composing a stratum
International Nuclear Information System (INIS)
Jaech, J.L.; Lemaire, R.J.
1987-02-01
STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
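The core selection step described, drawing n1 of the N items in a stratum so that every subset of size n1 is equally likely, is a partial Fisher-Yates shuffle; Python's `random.sample` provides it directly. The item identifiers below are invented for illustration.

```python
import random

def select_items(item_ids, n1, seed=None):
    """Select n1 distinct items from a stratum, each subset of size n1
    being equally likely (simple random sampling without replacement)."""
    if n1 > len(item_ids):
        raise ValueError("sample size exceeds stratum size")
    rng = random.Random(seed)
    return rng.sample(item_ids, n1)

# A stratum of N = 200 containers, of which 12 are to be verified
stratum = [f"ITEM-{i:04d}" for i in range(200)]
chosen = select_items(stratum, 12, seed=7)
print(len(chosen), len(set(chosen)))  # 12 distinct items
```

Recording the seed makes the selection reproducible for audit, which matters in a verification context like the one the report describes.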
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
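The activity's randomization-based F-test can be reproduced computationally: compute the observed one-way F statistic, then shuffle the group labels many times and count how often a shuffled F is at least as large. A sketch with made-up data for three groups:

```python
import random

def f_statistic(groups):
    """One-way ANOVA F: between-group over within-group mean squares."""
    all_x = [x for g in groups for x in g]
    n, k = len(all_x), len(groups)
    grand = sum(all_x) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def randomization_anova(groups, n_perm=5000, seed=3):
    """P-value: share of label shufflings with F at least as large."""
    rng = random.Random(seed)
    sizes = [len(g) for g in groups]
    pooled = [x for g in groups for x in g]
    f_obs = f_statistic(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        regrouped, i = [], 0
        for s in sizes:
            regrouped.append(pooled[i:i + s])
            i += s
        if f_statistic(regrouped) >= f_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Hypothetical scores for three instructional conditions
p = randomization_anova([[3, 4, 5, 4], [6, 7, 8, 6], [4, 5, 4, 3]])
print(p)
```

The shuffled F values form exactly the kind of empirical sampling distribution the activity asks students to build, making the link between resampling and the tabulated F distribution visible.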
An efficient method of randomly sampling the coherent angular scatter distribution
International Nuclear Information System (INIS)
Williamson, J.F.; Morin, R.L.
1983-01-01
Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)
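A standard way to randomly sample an angular scatter distribution, though not necessarily the specific technique of this paper, is rejection sampling. The sketch below draws mu = cos(theta) from the Thomson angular factor p(mu) proportional to 1 + mu², leaving out the atomic form factor that would complete the coherent distribution.

```python
import random

def sample_cos_theta(rng):
    """Rejection-sample mu = cos(theta) from p(mu) ∝ 1 + mu^2 on [-1, 1].
    (Thomson angular factor only; the atomic form factor of the full
    coherent cross section is omitted for simplicity.)"""
    while True:
        mu = rng.uniform(-1.0, 1.0)                   # candidate
        if rng.uniform(0.0, 2.0) <= 1.0 + mu * mu:    # envelope max = 2
            return mu

rng = random.Random(0)
samples = [sample_cos_theta(rng) for _ in range(100_000)]
# Symmetric distribution: sample mean near 0; E[mu^2] = 0.4 analytically.
print(sum(samples) / len(samples))
```

Rejection sampling is simple but wastes candidates (here one third on average); the efficiency gain promised by methods like the one in the abstract comes from avoiding exactly that waste inside a tight Monte Carlo loop.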
The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.
Rodgers, J L
1999-10-01
A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
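The taxonomy's distinctions are visible directly in code: the bootstrap redraws n values with replacement, while the jackknife systematically deletes one value at a time (sampling without replacement of a subset). Both sketches below estimate the standard error of a mean from illustrative data.

```python
import random

def bootstrap_se(data, n_boot=5000, seed=0):
    """SE of the mean from resamples of size n drawn WITH replacement."""
    rng = random.Random(seed)
    n = len(data)
    means = [sum(rng.choices(data, k=n)) / n for _ in range(n_boot)]
    m = sum(means) / n_boot
    return (sum((x - m) ** 2 for x in means) / (n_boot - 1)) ** 0.5

def jackknife_se(data):
    """SE of the mean from the n leave-one-out subsamples."""
    n = len(data)
    loo = [(sum(data) - x) / (n - 1) for x in data]   # leave-one-out means
    m = sum(loo) / n
    return ((n - 1) / n * sum((x - m) ** 2 for x in loo)) ** 0.5

data = [2.1, 3.4, 2.9, 4.0, 3.1, 2.5, 3.8, 3.3]
print(bootstrap_se(data), jackknife_se(data))
# both approximate the analytic SE of the mean, s / sqrt(n)
```

For the mean, the jackknife reproduces s/sqrt(n) exactly, while the bootstrap value varies with the resampling; for less tractable statistics the two can diverge, which is where the taxonomy's distinctions start to matter.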
Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge
International Nuclear Information System (INIS)
Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W
2013-01-01
This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)
Reliability of impingement sampling designs: An example from the Indian Point station
International Nuclear Information System (INIS)
Mattson, M.T.; Waxman, J.B.; Watson, D.A.
1988-01-01
A 4-year data base (1976-1979) of daily fish impingement counts at the Indian Point electric power station on the Hudson River was used to compare the precision and reliability of three random-sampling designs: (1) simple random, (2) seasonally stratified, and (3) empirically stratified. The precision of daily impingement estimates improved logarithmically for each design as more days in the year were sampled. Simple random sampling was the least, and empirically stratified sampling was the most precise design, and the difference in precision between the two stratified designs was small. Computer-simulated sampling was used to estimate the reliability of the two stratified-random-sampling designs. A seasonally stratified sampling design was selected as the most appropriate reduced-sampling program for Indian Point station because: (1) reasonably precise and reliable impingement estimates were obtained using this design for all species combined and for eight common Hudson River fish by sampling only 30% of the days in a year (110 d); and (2) seasonal strata may be more precise and reliable than empirical strata if future changes in annual impingement patterns occur. The seasonally stratified design applied to the 1976-1983 Indian Point impingement data showed that selection of sampling dates based on daily species-specific impingement variability gave results that were more precise, but not more consistently reliable, than sampling allocations based on the variability of all fish species combined. 14 refs., 1 fig., 6 tabs
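The kind of precision comparison made in this study can be reproduced with a small simulation: daily counts with a strong seasonal pattern are estimated either by simple random sampling or by spreading the same effort evenly across seasonal strata. All numbers below are invented, not Indian Point data.

```python
import random
import statistics

def simulate_year(rng):
    """Daily counts with a winter-peaking seasonal pattern plus noise."""
    return [max(0, int(rng.gauss(500 if d < 90 or d >= 270 else 50, 40)))
            for d in range(360)]

def srs_mean(days, n, rng):
    """Mean daily count from a simple random sample of n days."""
    return statistics.mean(rng.sample(days, n))

def stratified_mean(days, n, rng):
    """Mean daily count with the sample split equally across four
    90-day seasonal strata (equal allocation)."""
    strata = [days[i:i + 90] for i in range(0, 360, 90)]
    return statistics.mean(
        statistics.mean(rng.sample(s, n // 4)) for s in strata)

rng = random.Random(1)
days = simulate_year(rng)
srs = [srs_mean(days, 40, rng) for _ in range(2000)]
strat = [stratified_mean(days, 40, rng) for _ in range(2000)]
print(statistics.stdev(srs), statistics.stdev(strat))
```

Because the seasonal contrast dominates the day-to-day noise, the stratified estimator's replicate-to-replicate spread is several times smaller at identical effort, mirroring the precision ranking the Indian Point comparison reports.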
Characterization of electron microscopes with binary pseudo-random multilayer test samples
Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.
2011-09-01
Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
LOD score exclusion analyses for candidate genes using random population samples.
Deng, H W; Li, J; Recker, R R
2001-05-01
While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes with random population samples. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed and if a LOD score is < or = - 2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of vitamin D receptor and estrogen receptor genes underlying the differential risk to osteoporotic fractures.
International Nuclear Information System (INIS)
Bertschinger, E.
1987-01-01
Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
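The constrained-sampling idea can be illustrated on a 1-D lattice: draw an unconstrained Gaussian realization, then add a Hoffman-Ribak-style correction so that a linear constraint (here a prescribed value at one site, standing in for a "peak") holds exactly. The covariance model is an illustrative squared-exponential, not a cosmological power spectrum.

```python
import numpy as np

def sample_constrained_field(cov, idx, value, rng):
    """Sample a zero-mean Gaussian field with covariance `cov`,
    conditioned on field[idx] == value: draw an unconstrained sample,
    then add the correlated correction term."""
    n = cov.shape[0]
    unconstrained = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    correction = cov[:, idx] * (value - unconstrained[idx]) / cov[idx, idx]
    return unconstrained + correction

# Illustrative squared-exponential covariance on a 1-D lattice
x = np.arange(64.0)
cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 5.0 ** 2)
cov += 1e-6 * np.eye(64)          # jitter for numerical stability
rng = np.random.default_rng(0)
field = sample_constrained_field(cov, idx=32, value=3.0, rng=rng)
print(np.isclose(field[32], 3.0))   # the constraint holds exactly
```

Because the correction is deterministic given the unconstrained draw, each sample costs no more than an unconstrained one, which is what makes the approach attractive for seeding N-body initial conditions with rare objects.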
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices have been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
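The overdispersion described is easy to demonstrate numerically: even when every true eigenvalue equals 1 (identity covariance), the eigenvalues of a sample covariance matrix spread far above and below 1. A quick illustration, without the REML machinery of the study:

```python
import numpy as np

rng = np.random.default_rng(42)
p, n = 20, 100                       # 20 traits, 100 observations
x = rng.standard_normal((n, p))      # true covariance = identity
s = np.cov(x, rowvar=False)          # sample covariance matrix
eigs = np.linalg.eigvalsh(s)
# All true eigenvalues are 1, yet sampling error inflates the largest
# and deflates the smallest (Marchenko-Pastur-type spreading).
print(eigs.min(), eigs.max())
```

This spread is pure sampling error, exactly the effect that, per the abstract, can be mistaken for genuine genetic variance unless the eigenvalues are referred to the appropriate Tracy-Widom benchmark.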
Stratified B-trees and versioning dictionaries
Twigg, Andy; Byde, Andrew; Milos, Grzegorz; Moreton, Tim; Wilkes, John; Wilkie, Tom
2011-01-01
A classic versioned data structure in storage and computer science is the copy-on-write (CoW) B-tree -- it underlies many of today's file systems and databases, including WAFL, ZFS, Btrfs and more. Unfortunately, it doesn't inherit the B-tree's optimality properties; it has poor space utilization, cannot offer fast updates, and relies on random IO to scale. Yet, nothing better has been developed since. We describe the `stratified B-tree', which beats all known semi-external memory versioned B...
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
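The systematic random sampling scheme described above (a random start followed by sites at equidistant intervals) can be sketched in a few lines; the item count and interval here are illustrative, not values from the study:

```python
import random

def systematic_random_sample(n_items, interval, seed=None):
    """Systematic random sampling: pick a random start in [0, interval),
    then take every `interval`-th position up to n_items."""
    rng = random.Random(seed)
    start = rng.randrange(interval)
    return list(range(start, n_items, interval))

sites = systematic_random_sample(100, 10, seed=1)
print(sites)  # 10 equidistant site indices with a random offset
```

Because the start is random but the spacing is fixed, every position has the same inclusion probability while the sample stays evenly spread over the structure.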
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. First, we compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
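The precision advantage of one-start systematic transects in patchy habitat can be reproduced with a toy Monte Carlo; the patch layout, counts, and transect numbers below are illustrative assumptions, not the paper's simulated populations:

```python
import random
import statistics

def survey_mean(pop, n_transects, systematic, rng):
    """Estimate mean density from transect counts, with transects chosen
    either systematically (random start, uniform grid) or at random."""
    N = len(pop)
    if systematic:
        step = N // n_transects
        start = rng.randrange(step)
        idx = list(range(start, N, step))[:n_transects]
    else:
        idx = rng.sample(range(N), n_transects)
    return statistics.mean(pop[i] for i in idx)

rng = random.Random(0)
# Patchy habitat: alternating occupied/empty patches of 50 transect positions
pop = [5 if (i // 50) % 2 == 0 else 0 for i in range(1000)]
sys_means = [survey_mean(pop, 100, True, rng) for _ in range(2000)]
ran_means = [survey_mean(pop, 100, False, rng) for _ in range(2000)]
print(statistics.pvariance(sys_means), statistics.pvariance(ran_means))
```

In this toy layout the systematic grid always intersects each patch the same number of times, so replicate survey means vary far less than under random allocation, mirroring the precision gap the abstract reports.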
Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets
International Nuclear Information System (INIS)
Stanek, Jan; Kozminski, Wiktor
2010-01-01
Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated in simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D ¹⁵N- and ¹³C-edited NOESY-HSQC spectra of human ubiquitin.
Randomized branch sampling to estimate fruit production in Pecan trees cv. 'Barton'
Directory of Open Access Journals (Sweden)
Filemom Manoel Mokochinski
Full Text Available ABSTRACT: Sampling techniques to quantify fruit production are still very scarce, leaving a gap in crop development research. This study was conducted on a rural property in the county of Cachoeira do Sul - RS to estimate the efficiency of randomized branch sampling (RBS) in quantifying the production of pecan fruit at three different ages (5, 7 and 10 years). Two selection techniques were tested: the probability proportional to diameter (PPD) and the uniform probability (UP) techniques, which were performed on nine trees, three from each age, chosen at random. The RBS underestimated fruit production for all ages, and its main drawback was the high sampling error (125.17% for PPD and 111.04% for UP). The UP was regarded as more efficient than the PPD, though both techniques estimated similar production and similar experimental errors. In conclusion, we report that branch sampling was inaccurate for this case study, and new studies are required to produce estimates with smaller sampling error.
Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K
2018-01-01
In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS) which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) allow to avoid burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our
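The regeneration idea behind the Ratio with Tours estimator can be illustrated on a toy graph: the walk regenerates each time it revisits an anchor node, and degree-weighted sums over completed tours estimate the node average with no burn-in period. The graph, attribute values, and anchor below are hypothetical:

```python
import random

# Toy social graph (adjacency lists) and a node attribute to average (e.g. age)
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3, 5], 5: [4]}
age = {0: 20, 1: 30, 2: 40, 3: 50, 4: 60, 5: 70}

def tour_estimate(adj, f, anchor, n_tours, seed=0):
    """Tour-based ratio estimator: accumulate f(v)/deg(v) and 1/deg(v)
    along random-walk tours that start and end at `anchor`; the ratio
    converges to the plain average of f over the nodes."""
    rng = random.Random(seed)
    num = den = 0.0
    v = anchor
    tours = 0
    while tours < n_tours:
        num += f[v] / len(adj[v])
        den += 1.0 / len(adj[v])
        v = rng.choice(adj[v])
        if v == anchor:          # returned: one regeneration cycle completed
            tours += 1
    return num / den

est = tour_estimate(adj, age, anchor=0, n_tours=20000)
print(round(est, 1))  # close to the true average age of 45
```

The degree weighting corrects for the random walk's bias toward high-degree nodes, and renewal theory makes the per-tour sums independent, which is what removes the burn-in requirement.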
Electromagnetic waves in stratified media
Wait, James R; Fock, V A; Wait, J R
2013-01-01
International Series of Monographs in Electromagnetic Waves, Volume 3: Electromagnetic Waves in Stratified Media provides information pertinent to the electromagnetic waves in media whose properties differ in one particular direction. This book discusses the important feature of the waves that enables communications at global distances. Organized into 13 chapters, this volume begins with an overview of the general analysis for the electromagnetic response of a plane stratified medium comprising of any number of parallel homogeneous layers. This text then explains the reflection of electromagne
Improving ambulatory saliva-sampling compliance in pregnant women: a randomized controlled study.
Directory of Open Access Journals (Sweden)
Julian Moeller
Full Text Available OBJECTIVE: Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate, in pregnant women, strategies to improve ambulatory saliva-sampling compliance and the association between sampling noncompliance and saliva cortisol estimates. METHODS: We instructed 64 pregnant women to collect eight scheduled saliva samples on each of two consecutive days. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System, and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed-model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. RESULTS: Self-reported compliance with the saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%; F(1,60) = 17.64, p < 0.001), but the reminder intervention was not (reminders: 68%, without reminders: 72%; F(1,60) = 0.78, p = 0.379). Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p < 0.001. Altered cortisol levels were observed in less objectively compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. CONCLUSIONS: The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest
Mahalanobis' Contributions to Sample Surveys
Indian Academy of Sciences (India)
Sample Survey started its operations in October 1950 under the ... and adopted random cuts for estimating the acreage under jute ... demographic factors relating to indebtedness, unemployment, ... traffic surveys, demand for currency coins and average life of ... Mahalanobis derived the optimum allocation in stratified ...
Stratified medicine and reimbursement issues
Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten
2012-01-01
Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to
Acute stress symptoms during the second Lebanon war in a random sample of Israeli citizens.
Cohen, Miri; Yahav, Rivka
2008-02-01
The aims of this study were to assess prevalence of acute stress disorder (ASD) and acute stress symptoms (ASS) in Israel during the second Lebanon war. A telephone survey was conducted in July 2006 of a random sample of 235 residents of northern Israel, who were subjected to missile attacks, and of central Israel, who were not subjected to missile attacks. Results indicate that ASS scores were higher in the northern respondents; 6.8% of the northern sample and 3.9% of the central sample met ASD criteria. Appearance of each symptom ranged from 15.4% for dissociative to 88.4% for reexperiencing, with significant differences between northern and central respondents only for reexperiencing and arousal. A low ASD rate and a moderate difference between areas subjected and not subjected to attack were found.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
Directory of Open Access Journals (Sweden)
Alireza Goli
2015-09-01
Full Text Available Distribution and optimum allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible; it is therefore important to find the optimum location and distribution of emergency relief resources. When a natural disaster occurs, it is not possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this study, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is tested on randomly generated instances of different sizes. The preliminary results indicate that the proposed method is capable of reaching desirable solutions in a reasonable amount of time.
Towards Cost-efficient Sampling Methods
Peng, Luo; Yongli, Li; Chong, Wu
2014-01-01
The sampling method has been paid much attention in the field of complex network in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small part of vertices with high node degree can possess the most structure information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
Energy Technology Data Exchange (ETDEWEB)
Wang, Zhehui [Los Alamos; Iaroshenko, O. [Los Alamos; Li, S. [Los Alamos; Liu, T. [Fermilab; Parab, N. [Argonne (main); Chen, W. W. [Purdue U.; Chu, P. [Los Alamos; Kenyon, G. [Los Alamos; Lipton, R. [Fermilab; Sun, K.-X. [Nevada U., Las Vegas
2017-09-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal-to-noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
The Stratified Legitimacy of Abortions.
Kimport, Katrina; Weitz, Tracy A; Freedman, Lori
2016-12-01
Roe v. Wade was heralded as an end to unequal access to abortion care in the United States. However, today, despite being common and safe, abortion is performed only selectively in hospitals and private practices. Drawing on 61 interviews with obstetrician-gynecologists in these settings, we examine how they determine which abortions to perform. We find that they distinguish between more and less legitimate abortions, producing a narrative of stratified legitimacy that privileges abortions for intended pregnancies, when the fetus is unhealthy, and when women perform normative gendered sexuality, including distress about the abortion, guilt about failure to contracept, and desire for motherhood. This stratified legitimacy can perpetuate socially-inflected inequality of access and normative gendered sexuality. Additionally, we argue that the practice by physicians of distinguishing among abortions can legitimate legislative practices that regulate and restrict some kinds of abortion, further constraining abortion access. © American Sociological Association 2016.
Event-triggered synchronization for reaction-diffusion complex networks via random sampling
Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng
2018-04-01
In this paper, the synchronization problem of reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data are sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of controller updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability of a reaction-diffusion complex dynamical system with time delay. By using the matrix inequality technique and the Lyapunov method, synchronization conditions for the RDCNs are derived, which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy naturally rules out Zeno behavior. Finally, a numerical example is given to verify the obtained results.
RADIAL STABILITY IN STRATIFIED STARS
International Nuclear Information System (INIS)
Pereira, Jonas P.; Rueda, Jorge A.
2015-01-01
We formulate within a generalized distributional approach the treatment of the stability against radial perturbations for both neutral and charged stratified stars in Newtonian and Einstein's gravity. We obtain from this approach the boundary conditions connecting any two phases within a star and underline its relevance for realistic models of compact stars with phase transitions, owing to the modification of the star's set of eigenmodes with respect to the continuous case
International Nuclear Information System (INIS)
Jeong, Hae-Yong; Park, Moon-Ghu
2015-01-01
In most existing evaluation methodologies, which follow a conservative approach, the most conservative initial conditions are sought for each transient scenario through exhaustive assessment over wide operating windows or the limiting conditions for operation (LCO) allowed by the operating guidelines. In this procedure, a user effect can be introduced, and considerable time and human resources are consumed. In the present study, we investigated a more effective statistical method for selecting the most conservative initial condition through random sampling of the operating parameters affecting the initial conditions. A method for the determination of initial conditions based on random sampling of plant design parameters is proposed. This method is expected to be applicable to the selection of the most conservative initial plant conditions in safety analysis using a conservative evaluation methodology. In the method, it is suggested that the initial conditions of reactor coolant flow rate, pressurizer level, pressurizer pressure, and SG level be adjusted by controlling the pump rated flow and the setpoints of the PLCS, PPCS, and FWCS, respectively. The proposed technique is expected to help eliminate the human factors introduced in the conventional safety analysis procedure and to reduce the human resources invested in the safety evaluation of nuclear power plants.
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
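A minimal version of the random sampling (RS) strategy for a two-source, one-marker system can be sketched as follows; the marker means, standard deviations, and mixture value are invented for illustration:

```python
import random
import statistics

def rs_apportion(mix, src1, src2, n=10000, seed=0):
    """Random-sampling source apportionment for two sources and one marker:
    draw a marker value for each source from (mean, sd), solve
    f*a1 + (1 - f)*a2 = mix for the fraction f from source 1, and keep
    physically meaningful solutions with 0 <= f <= 1."""
    rng = random.Random(seed)
    fracs = []
    while len(fracs) < n:
        a1 = rng.gauss(*src1)
        a2 = rng.gauss(*src2)
        if a1 == a2:
            continue
        f = (mix - a2) / (a1 - a2)
        if 0.0 <= f <= 1.0:
            fracs.append(f)
    return fracs

# Hypothetical isotope-like marker: source (mean, sd) and an observed mixture
fracs = rs_apportion(mix=-26.0, src1=(-30.0, 1.0), src2=(-24.0, 1.0))
print(statistics.median(fracs), statistics.mean(fracs))
```

The spread of `fracs` quantifies the uncertainty that the purely algebraic solution ignores, and, as the abstract notes, source variability can shift the mean and median contributions away from the point-estimate answer.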
LOD score exclusion analyses for candidate QTLs using random population samples.
Deng, Hong-Wen
2003-11-01
While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach for exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed and if an LOD score is < or = -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of Vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling
Directory of Open Access Journals (Sweden)
Hyun-Joo Oh
2017-01-01
Full Text Available This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) method with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total, each time generating new training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences in each of the ten runs. Ten landslide susceptibility maps were obtained by integrating the causative factors weighted by their FR scores, and each map was validated against the corresponding validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%, always exceeding 89%. Moreover, the ten-fold iterative FR modeling may contribute to a better understanding of the regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into landslide susceptibility analysis, and the approach can be extended to other areas.
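The core frequency ratio computation is just a pair of proportions per factor class: the share of landslide cells in the class divided by the share of all cells in the class. A toy one-factor sketch (the slope classes and landslide cells are invented, and the iterative train/validation resampling is omitted):

```python
from collections import Counter

def frequency_ratio(cell_classes, landslide_idx):
    """FR per factor class: (landslide cells in class / all landslide cells)
    / (cells in class / all cells). FR > 1 marks classes more landslide-prone
    than average."""
    total = Counter(cell_classes)
    slides = Counter(cell_classes[i] for i in landslide_idx)
    n_cells, n_slides = len(cell_classes), len(landslide_idx)
    return {c: (slides.get(c, 0) / n_slides) / (total[c] / n_cells)
            for c in total}

# Toy slope-class map: 10 steep cells, 90 gentle cells, landslides mostly steep
cells = ["steep"] * 10 + ["gentle"] * 90
fr = frequency_ratio(cells, landslide_idx=[0, 1, 2, 3, 15])
print(fr)  # steep ≈ 8.0, gentle ≈ 0.22
```

A susceptibility map is then obtained by summing, for each cell, the FR scores of its classes across all causative factors.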
A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling
Directory of Open Access Journals (Sweden)
Ying Yan
2017-01-01
Full Text Available Due to system complexity and limited expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed from the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty, giving it good compatibility, and it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss that follows fusion. Original expert judgments are retained objectively throughout the procedure. Constructing the cumulative probability function and performing the random sampling require no human intervention or judgment, and the method can be implemented easily by computer programs, giving it a clear advantage in evaluation practice for fairly large index systems.
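The final Monte Carlo step, sampling index scores from fused interval evidence and normalizing them into weights, can be sketched generically. The intervals below are invented, and uniform sampling stands in for the paper's cumulative probability functions:

```python
import random

def mc_weights(intervals, n=20000, seed=0):
    """Monte Carlo weighting: each index has an interval [lo, hi] of
    plausible importance scores; sample each uniformly, normalize every
    draw to sum to 1, and average the normalized weights over draws."""
    rng = random.Random(seed)
    k = len(intervals)
    acc = [0.0] * k
    for _ in range(n):
        draw = [rng.uniform(lo, hi) for lo, hi in intervals]
        s = sum(draw)
        for i, d in enumerate(draw):
            acc[i] += d / s
    return [a / n for a in acc]

# Hypothetical fused expert intervals for three indices
w = mc_weights([(0.2, 0.4), (0.1, 0.3), (0.3, 0.7)])
print([round(x, 2) for x in w])
```

Normalizing inside the loop, before averaging, keeps each draw a valid weight vector, so the averaged result also sums to one.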
International Nuclear Information System (INIS)
Eperon, Isabelle; Vassilakos, Pierre; Navarria, Isabelle; Menoud, Pierre-Alain; Gauthier, Aude; Pache, Jean-Claude; Boulvain, Michel; Untiet, Sarah; Petignat, Patrick
2013-01-01
To evaluate if human papillomavirus (HPV) self-sampling (Self-HPV) using a dry vaginal swab is a valid alternative for HPV testing. Women attending colposcopy clinic were recruited to collect two consecutive Self-HPV samples: a Self-HPV using a dry swab (S-DRY) and a Self-HPV using a standard wet transport medium (S-WET). These samples were analyzed for HPV using real time PCR (Roche Cobas). Participants were randomized to determine the order of the tests. Questionnaires assessing preferences and acceptability for both tests were conducted. Subsequently, women were invited for colposcopic examination; a physician collected a cervical sample (physician-sampling) with a broom-type device and placed it into a liquid-based cytology medium. Specimens were then processed for the production of cytology slides and a Hybrid Capture HPV DNA test (Qiagen) was performed from the residual liquid. Biopsies were performed if indicated. Unweighted kappa statistics (κ) and McNemar tests were used to measure the agreement among the sampling methods. A total of 120 women were randomized. Overall HPV prevalence was 68.7% (95% Confidence Interval (CI) 59.3–77.2) by S-WET, 54.4% (95% CI 44.8–63.9) by S-DRY and 53.8% (95% CI 43.8–63.7) by HC. Among paired samples (S-WET and S-DRY), the overall agreement was good (85.7%; 95% CI 77.8–91.6) and the κ was substantial (0.70; 95% CI 0.57-0.70). The proportion of positive type-specific HPV agreement was also good (77.3%; 95% CI 68.2-84.9). No differences in sensitivity for cervical intraepithelial neoplasia grade one (CIN1) or worse between the two Self-HPV tests were observed. Women reported the two Self-HPV tests as highly acceptable. Self-HPV using dry swab transfer does not appear to compromise specimen integrity. Further study in a large screening population is needed. ClinicalTrials.gov: http://clinicaltrials.gov/show/NCT01316120
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z) r_d^{fid}/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Large surveys such as BOSS usually assign redshifts to the random samples by randomly drawing values from the measured redshift distribution of the data, which necessarily introduces fiducial fluctuation signals into the random samples and weakens the BAO signal when cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signal is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such improvements will matter for future measurements of galaxy clustering.
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and materials informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC can be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in materials informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic (PV) properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO composition.
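The RANSAC core (fit a minimal model to a random subset, count inliers, keep the best) is independent of the QSAR setting; a generic line-fitting sketch on synthetic points, not the solar cell data:

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Minimal RANSAC: repeatedly fit a line through 2 random points and
    keep the model with the most inliers (|residual| <= tol)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: cannot fit y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# y = 2x + 1 with a few gross outliers that would wreck a least-squares fit
pts = [(x, 2 * x + 1) for x in range(10)] + [(2, 30), (5, -20), (7, 60)]
(a, b), inliers = ransac_line(pts)
print(round(a, 2), round(b, 2), len(inliers))  # slope ≈ 2, intercept ≈ 1
```

In the QSAR setting the minimal model is a regression over descriptors rather than a 2-point line, but the consensus loop, outlier rejection, and inlier-based validation work the same way.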
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Directory of Open Access Journals (Sweden)
Wang Yanqing
2014-04-01
During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerance method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. First, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. GBM is then applied to data from a single flight test of a certain aircraft. Finally, to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and its estimated reliability is shown to be 100% at the given confidence level.
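The bootstrap half of the approach is standard; a plain percentile bootstrap on a small sample (the gray-model forecasting component of GBM is not reproduced here, and the amplitude values are invented) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)

# Six vibration-amplitude measurements (units arbitrary, values invented).
sample = np.array([1.8, 2.1, 1.9, 2.4, 2.0, 2.2])

# Percentile bootstrap: resample with replacement, collect the statistic.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])
est = boot_means.mean()                           # estimated value
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # estimated 95% interval
```

This yields the estimated value and estimated interval directly from the six observations, without assuming a particular probability distribution.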
Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P
2016-12-01
Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.
Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.
2016-01-01
Objective Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates the properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment of the mean cluster size alone or simultaneous adjustment of the mean cluster size and number of clusters, and is a flexible alternative and useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
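A rough numerical illustration of how cluster-size variation inflates the required number of clusters, using the common design-effect approximation DEFF = 1 + ((CV^2 + 1)*m - 1)*ICC rather than the noncentrality-based relative efficiency defined in the abstract (all design values are hypothetical):

```python
import math
from statistics import NormalDist

def clusters_per_arm(delta, sigma, m_bar, cv, icc, alpha=0.05, power=0.80):
    """Clusters per arm for a two-arm comparison of means (normal
    approximation), inflated by DEFF = 1 + ((cv^2 + 1)*m_bar - 1)*icc,
    where cv is the coefficient of variation of cluster sizes."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc
    n_per_arm = 2 * (z * sigma / delta) ** 2 * deff   # individuals per arm
    return math.ceil(n_per_arm / m_bar)

# Effect of 0.3 SD, mean cluster size 20, ICC 0.05:
k_equal = clusters_per_arm(delta=0.3, sigma=1.0, m_bar=20, cv=0.0, icc=0.05)
k_unequal = clusters_per_arm(delta=0.3, sigma=1.0, m_bar=20, cv=0.6, icc=0.05)
```

With these hypothetical inputs, moving from equal cluster sizes (CV = 0) to moderately variable ones (CV = 0.6) raises the required clusters per arm, which is the loss of efficiency the paper's measure quantifies more precisely.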
Free Falling in Stratified Fluids
Lam, Try; Vincent, Lionel; Kanso, Eva
2017-11-01
Leaves falling in air and discs falling in water are examples of unsteady descents due to complex interactions between gravitational and aerodynamic forces. Understanding these descent modes is relevant to many branches of engineering and science, from estimating the behavior of re-entry space vehicles to studying the biomechanics of seed dispersal. For regularly shaped objects falling in homogeneous fluids, the motion is relatively well understood. However, less is known about how density stratification of the fluid medium affects the falling behavior. Here, we experimentally investigate the descent of discs in both pure water and in stable linearly stratified fluids for Froude numbers Fr 1 and Reynolds numbers Re between 1000 and 2000. We found that stable stratification (1) enhances the radial dispersion of the disc at landing, (2) increases the descent time, (3) decreases the inclination (or nutation) angle, and (4) decreases the fluttering amplitude while falling. We conclude by commenting on how the corresponding information can be used as a predictive model for objects free falling in stratified fluids.
Stratified Medicine and Reimbursement Issues
Directory of Open Access Journals (Sweden)
Hans-Joerg Fugel
2012-10-01
Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation and strengthens the value proposition to pricing and reimbursement (P&R) authorities. However, the introduction of SM challenges current reimbursement schemes in many EU countries and the US, as different P&R policies have been adopted for drugs and diagnostics. There is also a lack of a consistent process for the value assessment of more complex diagnostics in these markets. New, innovative approaches and more flexible P&R systems are needed to reflect the added value of diagnostic tests and to stimulate investment in new technologies. Yet the framework for access to diagnostic-based therapies still requires further development, setting the right incentives and appropriately aligning stakeholder interests while realizing long-term patient benefits. This article addresses the reimbursement challenges of SM approaches in several EU countries and the US, outlining some options to overcome existing reimbursement barriers for stratified medicine.
Song, Zhuoyi; Zhou, Yu; Juusola, Mikko
2016-01-01
Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
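The likelihood of multi-photon hits follows directly from Poisson statistics of photon sampling; a minimal sketch (the per-window photon count is purely illustrative, and only the ~30,000-microvilli order of magnitude comes from fly R1-R6 photoreceptors):

```python
import math

def multi_hit_fraction(photons_per_window, n_microvilli):
    """Among microvilli hit at least once in an integration window, the
    fraction hit two or more times, assuming independent Poisson arrivals."""
    lam = photons_per_window / n_microvilli   # mean hits per microvillus
    p_hit = 1 - math.exp(-lam)                # P(>= 1 photon)
    p_multi = p_hit - lam * math.exp(-lam)    # P(>= 2 photons)
    return p_multi / p_hit

# Same photon flux spread over many vs. few sampling units (illustrative):
frac_many = multi_hit_fraction(300, 30000)    # ~30,000 microvilli
frac_few = multi_hit_fraction(300, 1000)      # far fewer sampling units
```

With many thousands of microvilli the multi-hit fraction stays well below 1%, matching the abstract's claim that quantum-gain-nonlinearity is marginal there, while cells with fewer sampling units see a much larger fraction at the same flux.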
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
Mok, Alexander; Haldar, Sumanto; Lee, Jetty Chung-Yung; Leow, Melvin Khee-Shing; Henry, Christiani Jeyakumar
2016-03-15
Cardio-Metabolic Disease (CMD) is the leading cause of death globally, and particularly in Asia. Postprandial elevations of glycaemia, insulinaemia and triglyceridaemia are associated with an increased risk of CMD. While studies have shown that higher protein intake or increased meal frequency may benefit postprandial metabolism, their combined effect has rarely been investigated using composite mixed meals. We therefore examined the combined effects of meal frequency (2 large vs. 6 smaller meals) and protein content (40 % vs. 10 % of energy from protein, respectively) of isocaloric mixed meals on a range of postprandial CMD risk markers. In a randomized crossover study, 10 healthy Chinese males (age: 29 ± 7 years; BMI: 21.9 ± 1.7 kg/m(2)) underwent 4 dietary treatments: CON-2 (2 large low-protein meals), CON-6 (6 small low-protein meals), PRO-2 (2 large high-protein meals) and PRO-6 (6 small high-protein meals). Subjects wore a continuous glucose monitor (CGM), and venous blood samples were obtained at baseline and at regular intervals for 8.5 h to monitor postprandial changes in glucose, insulin, triglycerides and high-sensitivity C-reactive protein (hsCRP). Blood pressure was measured at regular intervals pre- and post-meal consumption. Urine was collected to measure the excretion of creatinine and of F2-isoprostanes and their metabolites over the 8.5 h postprandial period. The high-protein meals, irrespective of meal frequency, were beneficial for glycaemic health, since the glucose incremental areas under the curve (iAUC) for PRO-2 (185 ± 166 mmol.min.L(-1)) and PRO-6 (214 ± 188 mmol.min.L(-1)) were 66 and 60 % lower respectively (both statistically significant). Meals with higher protein content, irrespective of meal frequency, thus appear to be beneficial for postprandial glycaemic and insulinaemic responses in young, healthy Chinese males. The implications of this study may be useful in the Asian context, where the consumption of high glycaemic index carbohydrate meals is prevalent. NCT02529228.
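The glucose iAUC statistic reported above is conventionally computed by the trapezoidal rule on concentrations above the fasting baseline; a sketch with invented glucose values (one common truncation convention, since the paper's exact algorithm is not specified):

```python
import numpy as np

def incremental_auc(times, conc):
    """Trapezoidal area above the fasting (t=0) concentration, with
    below-baseline segments truncated to zero (one common convention)."""
    above = np.clip(np.asarray(conc, dtype=float) - conc[0], 0.0, None)
    return float(np.sum((above[1:] + above[:-1]) / 2 * np.diff(times)))

# Invented postprandial glucose profile (mmol/L) sampled over 120 min.
t = np.array([0, 15, 30, 60, 90, 120])
glucose = np.array([5.0, 6.5, 7.2, 6.0, 5.4, 5.1])
iauc = incremental_auc(t, glucose)   # units: mmol.min.L^-1
```

A flat profile at baseline gives an iAUC of zero, so the statistic isolates the postprandial excursion rather than the absolute glucose level.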
Information content of household-stratified epidemics
Directory of Open Access Journals (Sweden)
T.M. Kinyanjui
2016-09-01
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs.
Information content of household-stratified epidemics.
Kinyanjui, T M; Pellis, L; House, T
2016-09-01
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
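The Shannon-entropy criterion can be illustrated without the full Bayesian machinery: given posterior samples of a parameter under two candidate designs, a histogram estimate of differential entropy ranks the designs (the posteriors below are stand-in Gaussians, not outputs of a household transmission model):

```python
import numpy as np

rng = np.random.default_rng(3)

def histogram_entropy(samples, bins=30):
    """Differential Shannon entropy (nats) of a posterior sample,
    estimated from a normalized histogram."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    w = np.diff(edges)
    mask = p > 0
    return float(-np.sum(p[mask] * np.log(p[mask]) * w[mask]))

# Stand-in posteriors for one epidemic parameter under two designs:
# intensive follow-up (tight posterior) vs. sparse follow-up (wide).
post_tight = rng.normal(0.5, 0.05, 20000)
post_wide = rng.normal(0.5, 0.20, 20000)

h_tight = histogram_entropy(post_tight)
h_wide = histogram_entropy(post_wide)   # higher entropy = less information
```

Lower posterior entropy means a more informative design, which is the quantity the study's design search optimizes over the joint parameter-design space.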
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasite diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, with illustrative examples of the potentially large confidence intervals that can arise from observed faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
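The wide Poisson confidence intervals described above are easy to reproduce. The sketch below computes an exact (Garwood-style) interval for a raw slide count by bisection on the Poisson CDF; the count of 12 and multiplication factor of 50 are illustrative, not from the paper:

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), by direct summation."""
    term = total = math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def poisson_ci(k, alpha=0.05, mu_max=1e4):
    """Exact two-sided confidence interval for a single Poisson count k,
    found by bisection on the monotone tail probabilities."""
    def root(cond):
        lo, hi = 0.0, mu_max
        for _ in range(100):
            mid = (lo + hi) / 2
            if cond(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else root(lambda m: 1 - poisson_cdf(k - 1, m) > alpha / 2)
    upper = root(lambda m: poisson_cdf(k, m) < alpha / 2)
    return lower, upper

# A slide count of 12 eggs with multiplication factor 50 reports 600 epg,
# but the underlying Poisson uncertainty is wide:
egg_lo, egg_hi = poisson_ci(12)
epg_interval = (50 * egg_lo, 50 * egg_hi)
```

For a count of 12 the exact 95% interval is roughly 6.2 to 21.0 eggs, so the reported 600 epg carries an interval of roughly 310 to 1050 epg even with perfect mixing.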
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
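The benefit of stratifying on a gene-flow auxiliary variable can be shown in miniature. The following simulation (field geometry, decay rate and sample sizes are all invented, and the auxiliary variable is reduced to distance alone) compares the Monte Carlo error of simple random sampling against stratification on distance terciles:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic field: local transgene rate decays with distance to the GM
# field, mimicking the output of a gene-flow model.
n_grid = 10000
dist = rng.uniform(0, 100, n_grid)         # distance to GM field (m)
rate = 0.05 * np.exp(-dist / 10)           # local transgene presence rate
truth = rate.mean()

edges = np.quantile(dist, [1 / 3, 2 / 3])
labels = np.digitize(dist, edges)          # stratum index 0, 1, 2

def srs(n):
    """Simple random sample of n locations."""
    return rate[rng.choice(n_grid, n, replace=False)].mean()

def stratified(n):
    """Equal allocation across the three distance strata, weighted by size."""
    est = 0.0
    for h in range(3):
        stratum = np.where(labels == h)[0]
        pick = rng.choice(stratum, n // 3, replace=False)
        est += stratum.size / n_grid * rate[pick].mean()
    return est

srs_err = np.std([srs(90) - truth for _ in range(500)])
strat_err = np.std([stratified(90) - truth for _ in range(500)])
```

Because the strata separate high-rate near-field locations from the low-rate bulk, the stratified estimator has a smaller standard error at the same total sample size, which is the mechanism behind the smaller samples reported in the abstract.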
Suppression of stratified explosive interactions
Energy Technology Data Exchange (ETDEWEB)
Meeks, M.K.; Shamoun, B.I.; Bonazza, R.; Corradini, M.L. [Wisconsin Univ., Madison, WI (United States). Dept. of Nuclear Engineering and Engineering Physics
1998-01-01
Stratified Fuel-Coolant Interaction (FCI) experiments with Refrigerant-134a and water were performed in a large-scale system. Air was uniformly injected into the coolant pool to establish a pre-existing void which could suppress the explosion. Two competing effects due to the variation of the air flow rate seem to influence the intensity of the explosion in this geometrical configuration. At low flow rates, although the injected air increases the void fraction, the concurrent agitation and mixing increases the intensity of the interaction. At higher flow rates, the increase in void fraction tends to attenuate the propagated pressure wave generated by the explosion. Experimental results show a complete suppression of the vapor explosion at high rates of air injection, corresponding to an average void fraction of larger than 30%. (author)
Walvoort, D.J.J.; Brus, D.J.; Gruijter, de J.J.
2010-01-01
Both for mapping and for estimating spatial means of an environmental variable, the accuracy of the result will usually be increased by dispersing the sample locations so that they cover the study area as uniformly as possible. We developed a new R package for designing spatial coverage samples for
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled data.
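Of the sampling-based approaches mentioned, the pseudoprofile-based bootstrap is the easiest to sketch: resample one subject per time point to assemble a profile, then form the AUC tissue-to-plasma ratio (the 2-phase random sampling algorithm itself is not reproduced; all concentrations below are simulated, with tissue set to exactly twice plasma):

```python
import numpy as np

rng = np.random.default_rng(5)

times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
# Six subjects per time point, one destructive paired sample each;
# rows = subjects, columns = time points (values simulated).
plasma = np.maximum(rng.normal([10, 8, 5, 2.5, 1], 0.8, size=(6, 5)), 0.05)
tissue = np.maximum(rng.normal([20, 16, 10, 5, 2], 1.6, size=(6, 5)), 0.05)

def auc(y):
    """Trapezoidal AUC over the shared time grid."""
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(times))

# Pseudoprofile bootstrap: draw one subject per time point, assemble a
# profile, and compute the AUC tissue-to-plasma ratio.
ratios = []
for _ in range(2000):
    pick = rng.integers(0, 6, size=5)
    ratios.append(auc(tissue[pick, np.arange(5)])
                  / auc(plasma[pick, np.arange(5)]))
ratios = np.array(ratios)
ratio_est = ratios.mean()
ci_lo, ci_hi = np.percentile(ratios, [2.5, 97.5])
```

Unlike the naïve data averaging approach, the resampling distribution supplies an uncertainty interval for the ratio alongside the point estimate.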
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
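DREAM builds on differential-evolution MCMC; the core population proposal, without DREAM's randomized-subspace updates and outlier handling, can be sketched on a toy bivariate normal target (all tuning constants below follow the standard DE-MC recommendation, not DREAM's adaptive scheme):

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post(x):
    """Toy target: standard bivariate normal (up to a constant)."""
    return -0.5 * float(np.dot(x, x))

n_chains, dim, n_steps = 8, 2, 4000
gamma = 2.38 / np.sqrt(2 * dim)                  # standard DE jump scale
X = rng.normal(0.0, 2.0, size=(n_chains, dim))   # over-dispersed start
kept = []
for t in range(n_steps):
    for i in range(n_chains):
        # Propose a jump along the difference of two other random chains,
        # so proposal scale/orientation adapts to the population spread.
        others = [j for j in range(n_chains) if j != i]
        r1, r2 = rng.choice(others, size=2, replace=False)
        prop = X[i] + gamma * (X[r1] - X[r2]) + rng.normal(0, 1e-4, dim)
        if np.log(rng.random()) < log_post(prop) - log_post(X[i]):
            X[i] = prop
    if t >= n_steps // 2:                        # discard burn-in half
        kept.append(X.copy())
samples = np.concatenate(kept)
```

The population difference vectors automatically tune proposal scale and orientation, which is the property DREAM extends with subspace sampling for high-dimensional, multimodal targets.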
Discriminative motif discovery via simulated evolution and random under-sampling.
Directory of Open Access Journals (Sweden)
Tao Song
Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of the discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover the most known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
Discriminative motif discovery via simulated evolution and random under-sampling.
Song, Tao; Gu, Hong
2014-01-01
Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of the discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover the most known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
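Random under-sampling itself is a small preprocessing step on top of any dataset; a sketch on synthetic feature vectors (the paper applies the idea to HMM training sets, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_undersample(X_major, X_minor, rng):
    """Randomly discard majority-class examples until both classes have
    equal size, then stack into one balanced training set."""
    keep = rng.choice(len(X_major), size=len(X_minor), replace=False)
    X = np.vstack([X_major[keep], X_minor])
    y = np.concatenate([np.zeros(len(X_minor)), np.ones(len(X_minor))])
    return X, y

# Imbalanced synthetic data: 1000 negatives vs. 50 positives (4 features).
X_neg = rng.normal(0.0, 1.0, size=(1000, 4))
X_pos = rng.normal(1.5, 1.0, size=(50, 4))
X_bal, y_bal = random_undersample(X_neg, X_pos, rng)
```

The balanced set prevents the majority class from dominating training, at the cost of discarding majority examples; repeating the draw and averaging models is a common mitigation.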
Schmidt, Jennifer; Martin, Alexandra
2016-09-01
Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
PHOTOSPHERIC EMISSION FROM STRATIFIED JETS
International Nuclear Information System (INIS)
Ito, Hirotaka; Nagataki, Shigehiro; Ono, Masaomi; Lee, Shiu-Hang; Mao, Jirong; Yamada, Shoichi; Pe'er, Asaf; Mizuta, Akira; Harikae, Seiji
2013-01-01
We explore photospheric emissions from stratified two-component jets, wherein a highly relativistic spine outflow is surrounded by a wider and less relativistic sheath outflow. Thermal photons are injected in regions of high optical depth and propagated until they escape at the photosphere. Because of the velocity (Lorentz factor) shear at the boundary of the spine and sheath regions, a fraction of the injected photons are accelerated by a Fermi-like acceleration mechanism, so that a high-energy power-law tail forms in the resultant spectrum. We show, in particular, that if a velocity shear with a considerable variance in the bulk Lorentz factor is present, the high-energy part of the observed gamma-ray burst (GRB) photon spectrum can be explained by this photon acceleration mechanism. The accelerated photons might also account for the origin of the extra-hard power-law component above the bump of the thermal-like peak seen in some peculiar bursts (e.g., GRB 090510, 090902B, 090926A). We demonstrate that time-integrated spectra can also reproduce the low-energy spectrum of GRBs consistently through a multi-temperature effect when time evolution of the outflow is considered. Last, we show that the empirical E_p-L_p relation can be explained by differences in the outflow properties of individual sources
Li, Tiandong
2012-01-01
In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling method, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, evidently due to (unconsciously) excluding small and large nuclei. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
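Systematic random sampling of the SRS-50/SRS-100 kind, i.e. every k-th unit after a random start, can be sketched as follows. This is an illustrative helper, not the authors' image-analysis software; the population and sizes are made up:

```python
import random

def systematic_sample(population, n, seed=5):
    """Systematic sampling with a random start: take every k-th unit
    after a random offset, where k = len(population) // n."""
    k = len(population) // n
    start = random.Random(seed).randrange(k)  # random start in [0, k)
    return [population[start + i * k] for i in range(n)]

nuclei = list(range(500))              # e.g. indices of candidate nuclei
chosen = systematic_sample(nuclei, 50) # 50 evenly spaced nuclei
```

Unlike 'at convenience' selection, every unit has the same inclusion probability, which is what removes the systematic bias against small and large nuclei noted in the abstract.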
Shen, Lujun; Yang, Lei; Zhang, Jing; Zhang, Meng
2018-01-01
To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students, the Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data were obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants' writing manuscripts. Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.05). Students' writing manuscripts were mainly encoded in five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between the positive emotion and insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days' manuscripts and the last 10 days' ones. Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study.
Zhang, Jing; Zhang, Meng
2018-01-01
Purpose To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. Methods The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data were obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants’ writing manuscripts. Results Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.05). Students’ writing manuscripts were mainly encoded in five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between the positive emotion and insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days’ manuscripts and the last 10 days’ ones. Conclusions Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study. PMID:29401473
Directory of Open Access Journals (Sweden)
Yan Guo
Full Text Available Millions of people move from rural areas to urban areas in China to pursue new opportunities while leaving their spouses and children at rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18-45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. 16.46% of couples were separated from their spouses (spouse separation only) and 25.81% of parents were separated from their children (child separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Spouse separation only and double separation did not score significantly higher on the DMSQ than no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67], p < .05). Stratified analysis by separation type and by gender indicated that the association was stronger for child separation only and for female participants. Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors to encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress.
International Nuclear Information System (INIS)
Plevnik, Lucijan; Žerovnik, Gašper
2016-01-01
Highlights: • Methods for random sampling of correlated parameters. • Link to open-source code for sampling of resonance parameters in ENDF-6 format. • Validation of the code on realistic and artificial data. • Validation of covariances in three major contemporary nuclear data libraries. - Abstract: Methods for random sampling of correlated parameters are presented. The methods are implemented for sampling of resonance parameters in ENDF-6 format and a link to the open-source code ENDSAM is given. The code has been validated on realistic data. Additionally, consistency of covariances of resonance parameters of three major contemporary nuclear data libraries (JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0u2) has been checked.
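ENDSAM itself operates on resonance parameters in ENDF-6 format; the underlying idea of random sampling of correlated parameters can, however, be sketched in plain Python for a 2x2 covariance matrix using a Cholesky factor. The means, covariance and sample count below are illustrative values, not data from any library:

```python
import math
import random

def cholesky_2x2(cov):
    """Lower-triangular L with L @ L.T = cov (2x2 symmetric positive definite)."""
    a = math.sqrt(cov[0][0])
    b = cov[1][0] / a
    return [[a, 0.0], [b, math.sqrt(cov[1][1] - b * b)]]

def sample_correlated(mean, cov, n, seed=1):
    """Draw n correlated normal vectors: x = mean + L z, with z ~ N(0, I)."""
    rng = random.Random(seed)
    L = cholesky_2x2(cov)
    samples = []
    for _ in range(n):
        z0, z1 = rng.gauss(0, 1), rng.gauss(0, 1)
        samples.append((mean[0] + L[0][0] * z0,
                        mean[1] + L[1][0] * z0 + L[1][1] * z1))
    return samples

# illustrative parameters: standard deviations 0.2 and 0.1, correlation 0.9
pairs = sample_correlated((1.0, 2.0), [[0.04, 0.018], [0.018, 0.01]], 5000)
```

The empirical correlation of the drawn pairs reproduces the prescribed 0.9, which is the consistency check such sampling codes are typically validated against.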
Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick
2015-01-01
Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed f...
Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia
Directory of Open Access Journals (Sweden)
Fady S. Moiety
2014-06-01
Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response; and a 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
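The report's multi-attribute study is far richer than this, but the basic under-estimation phenomenon it guards against is easy to reproduce with a short simulation. Assuming a standard normal source distribution (an assumption for illustration; the trial count is arbitrary), the sample standard deviation of only five draws falls below the true value in well over half of all trials:

```python
import random
import statistics

def sd_underestimation_rate(n, trials=10000, seed=2):
    """Fraction of trials in which the sample SD of n draws from N(0, 1)
    comes out below the true SD of 1.0."""
    rng = random.Random(seed)
    low = sum(
        statistics.stdev([rng.gauss(0.0, 1.0) for _ in range(n)]) < 1.0
        for _ in range(trials)
    )
    return low / trials

rate = sd_underestimation_rate(5)  # with only 5 samples per trial
```

For n = 5 the rate is close to the theoretical P(chi-square with 4 df < 4), about 0.59, which is why conservative bounding strategies are needed when samples are sparse.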
Edgington, Eugene
2007-01-01
Contents: Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani
Classification of archaeologically stratified pumice by INAA
International Nuclear Information System (INIS)
Peltz, C.; Bichler, M.
2001-01-01
In the framework of the research program 'Synchronization of Civilization in the Eastern Mediterranean Region in the 2nd Millennium B.C.', instrumental neutron activation analysis (INAA) was used to determine 30 elements in pumice from archaeological excavations in order to reveal their specific volcanic origin. The widespread pumiceous products of several eruptions in the Aegean region were used as abrasive tools and were therefore popular trade objects. A remarkable quantity of pumice and pumiceous tephra (several km³) was produced by the 'Minoan eruption' of Thera (Santorini), which is assumed to have happened between 1450 and 1650 B.C. Thus the discovery of the primary fallout of 'Minoan' tephra in archaeologically stratified locations can be used as a relative time mark. Additionally, pumice lumps used as abrasives can serve for dating by first appearance. Essential to an identification of the primary volcanic source is the knowledge that pumices from the Aegean region can easily be distinguished by their trace element distribution patterns, as previous work has shown. The elements Al, Ba, Ca, Ce, Co, Cr, Cs, Dy, Eu, Fe, Hf, K, La, Lu, Mn, Na, Nd, Rb, Sb, Sc, Sm, Ta, Tb, Th, Ti, U, V, Yb, Zn and Zr were determined in 16 samples of pumice lumps from excavations at Tell el-Dab'a and Tell el-Herr (Egypt). Two irradiation cycles and five measurement runs were applied. A reliable identification of the samples is achieved by comparing these results to the database compiled in previous studies. (author)
International Nuclear Information System (INIS)
Matsuda, Hideharu; Minato, Susumu
2002-01-01
The accuracy of statistical quantities such as the mean value and the contour map obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of 5 different model distribution maps constructed using the mean slope, -1.3, of power spectra calculated from actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, with means ranging from 10 to 100 nGy/h and areas from 10⁻³ to 10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent number), and the standard deviation had an accuracy of less than 1/4-1/3 of the means. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent number), but that of the contour map only 0.502-0.770. (K.H.)
Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs
Directory of Open Access Journals (Sweden)
Faqir Muhammad
2007-01-01
Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs, which are: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for variance replication. Simple random sampling with sample size 462 to 561 gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we obtained moderate variance with sample size 467. In jackknife with systematic sampling, we obtained a variance of the regression estimator greater than that of the ratio estimator for sample sizes 467 to 631. At sample size 952, the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.
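The ratio estimator and its jackknife variance, as used in the comparison above, can be sketched compactly. The income and household-size figures and the known x-total below are invented toy numbers, not HIES data:

```python
def ratio_estimate(y, x, x_total):
    """Classical ratio estimator of the population total of y,
    given the known population total of the auxiliary variable x."""
    return sum(y) / sum(x) * x_total

def jackknife_variance(y, x, x_total):
    """Delete-one jackknife variance of the ratio estimator."""
    n = len(y)
    loo = [ratio_estimate(y[:i] + y[i + 1:], x[:i] + x[i + 1:], x_total)
           for i in range(n)]
    mean_loo = sum(loo) / n
    return (n - 1) / n * sum((t - mean_loo) ** 2 for t in loo)

# hypothetical sample: income y (thousands) and household size x
y = [12.0, 9.5, 20.1, 7.2, 15.3, 11.8]
x = [4, 3, 6, 2, 5, 4]
est = ratio_estimate(y, x, 240)       # 240 = assumed known total of x
var = jackknife_variance(y, x, 240)
```

The ratio estimator exploits the correlation between income and household size; the jackknife replication is one of the two variance-replication devices the study compares (the other being the bootstrap).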
Directory of Open Access Journals (Sweden)
Elsa Tavernier
Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
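For the continuous-outcome case, the "retro-fitting" idea can be sketched with the standard normal-approximation sample-size formula. The effect size and standard deviations below are illustrative, not values from the study:

```python
import math

Z_ALPHA = 1.959964  # z quantile for two-sided alpha = 0.05
Z_BETA = 0.841621   # z quantile for nominal power 80%

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def n_per_group(delta, sd):
    """Per-arm sample size for a two-sample comparison of means
    (normal approximation)."""
    return math.ceil(2.0 * (sd * (Z_ALPHA + Z_BETA) / delta) ** 2)

def real_power(delta, sd_true, n):
    """Power actually attained when the true SD differs from the assumed one."""
    return norm_cdf(delta * math.sqrt(n / 2.0) / sd_true - Z_ALPHA)

n = n_per_group(delta=5.0, sd=10.0)                  # planned with assumed SD = 10
attained = real_power(delta=5.0, sd_true=12.0, n=n)  # true SD 20% larger
```

With the true SD only 20% larger than assumed, the trial planned for 80% power attains roughly 65%, which is exactly the kind of silent under-powering the abstract quantifies.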
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as the simple random sample or the stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
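The stratified random sample mentioned above, i.e. an independent simple random sample drawn within each stratum, can be sketched as follows. The sampling frame and strata below are hypothetical:

```python
import random

def stratified_sample(units, stratum_of, n_per_stratum, seed=3):
    """Draw a simple random sample of fixed size independently
    within each stratum of the frame."""
    rng = random.Random(seed)
    strata = {}
    for u in units:
        strata.setdefault(stratum_of(u), []).append(u)
    sample = []
    for key in sorted(strata):                    # deterministic stratum order
        sample.extend(rng.sample(strata[key], n_per_stratum))
    return sample

# hypothetical frame: 300 subjects spread over three clinics A, B, C
frame = [(i, "ABC"[i % 3]) for i in range(300)]
picked = stratified_sample(frame, lambda u: u[1], 10)
```

Stratification guarantees each subgroup a fixed share of the sample, which a plain simple random sample does not; proportional rather than equal allocation is a one-line change to `n_per_stratum`.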
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Reynolds, Maureen D; Tarter, Ralph E; Kirisci, Levent
2004-09-06
Men qualifying for substance use disorder (SUD) consequent to consumption of an illicit drug were compared according to recruitment method. It was hypothesized that volunteers would be more self-disclosing and exhibit more severe disturbances compared to randomly recruited subjects. Personal, demographic, family, social, substance use, psychiatric, and SUD characteristics of volunteers (N = 146) were compared to randomly recruited (N = 102) subjects. Volunteers had lower socioeconomic status, were more likely to be African American, and had lower IQ than randomly recruited subjects. Volunteers also evidenced greater social and family maladjustment and more frequently had received treatment for substance abuse. In addition, lower social desirability response bias was observed in the volunteers. SUD was not more severe in the volunteers; however, they reported a higher lifetime rate of opiate, diet, depressant, and analgesic drug use. Volunteers and randomly recruited subjects qualifying for SUD consequent to illicit drug use are similar in SUD severity but differ in terms of severity of psychosocial disturbance and history of drug involvement. The factors discriminating volunteers and randomly recruited subjects are well known to impact on outcome, hence they need to be considered in research design, especially when selecting a sampling strategy in treatment research.
Borak, T B
1986-04-01
Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground U miners to Rn daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sample frequency of 50 measurements per year is sufficient to satisfy the criteria that the annual exposure be determined in working level months to within +/- 50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
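A minimal Euler simulation of a noisy EIF neuron shows how ISIs can serve as samples. All parameter values below are illustrative defaults, not those of the paper, and the spike cutoff is a standard numerical device, not part of the authors' analysis:

```python
import math
import random

def eif_isis(i_ext, t_max=2000.0, dt=0.01, seed=4):
    """Euler integration of a noisy exponential integrate-and-fire neuron;
    returns the sequence of interspike intervals (ms)."""
    rng = random.Random(seed)
    tau, v_rest, v_thr, d_t = 20.0, -65.0, -50.0, 2.0  # ms, mV (illustrative)
    v_reset, v_cut = -60.0, -40.0   # reset value and numerical spike cutoff
    sigma = 2.0                     # noise amplitude
    v, t, last_spike, isis = v_rest, 0.0, 0.0, []
    while t < t_max:
        # leak + exponential sodium boost + external drive
        drift = (-(v - v_rest) + d_t * math.exp((v - v_thr) / d_t) + i_ext) / tau
        v += dt * drift + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_cut:              # spike: record the ISI, then reset
            isis.append(t - last_spike)
            last_spike, v = t, v_reset
    return isis

isis = eif_isis(i_ext=16.0)  # suprathreshold drive: tonic but noisy firing
```

Each recorded ISI is one "sample" in the sense of the abstract; collecting the `isis` list over a long run approximates the ISI distribution from which downstream sampling-based algorithms could draw.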
International Nuclear Information System (INIS)
Martens, B.R.
1989-01-01
In the context of random sampling tests, the parameters to be checked on the waste barrels are specified and the criteria on which these tests are based are given. It is also shown how faulty data on the properties of the waste, or faulty waste barrels, should be treated. To decide the extent of testing, the properties of the waste relevant to final storage are determined on the basis of the conditioning process used.
Random or systematic sampling to detect a localised microbial contamination within a batch of food
Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.
2011-01-01
Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, like for instance peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed
Conditional estimation of exponential random graph models from snowball sampling designs
Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng
2013-01-01
A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members
Kane, Michael
2002-01-01
Reviews the criticisms of sampling assumptions in generalizability theory (and in reliability theory) and examines the feasibility of using representative sampling, stratification, homogeneity assumptions, and replications to address these criticisms. Suggests some general outlines for the conduct of generalizability theory studies. (SLD)
The Risk-Stratified Osteoporosis Strategy Evaluation study (ROSE)
DEFF Research Database (Denmark)
Rubin, Katrine Hass; Holmberg, Teresa; Rothmann, Mette Juel
2015-01-01
The risk-stratified osteoporosis strategy evaluation study (ROSE) is a randomized prospective population-based study investigating the effectiveness of a two-step screening program for osteoporosis in women. This paper reports the study design and baseline characteristics of the study population. 35,000 women aged 65-80 years were selected at random from the population in the Region of Southern Denmark and, before inclusion, randomized to either a screening group or a control group. As a first step, a self-administered questionnaire regarding risk factors for osteoporosis based on FRAX(®) was issued to both groups. As a second step, subjects in the screening group with a 10-year probability of major osteoporotic fractures ≥15% were offered a DXA scan. Patients diagnosed with osteoporosis from the DXA scan were advised to see their GP and discuss pharmaceutical treatment according to Danish…
Methodology Series Module 5: Sampling Strategies
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘sampling method’. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice of a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (for example, by using the term ‘random sample’ when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results; in such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438
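The two probability designs named above can be sketched in a few lines of Python. The population, the urban/rural stratification, and the sample size are purely illustrative, and the stratified version uses simple proportional allocation:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Draw n units from the population purely by chance."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_random_sample(population, strata_key, n, seed=0):
    """Draw a random sample within each stratum, allocating the n draws
    proportionally to stratum size."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for members in strata.values():
        k = round(n * len(members) / len(population))  # proportional allocation
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Hypothetical population: 60 urban and 40 rural clinic patients.
patients = [("urban", i) for i in range(60)] + [("rural", i) for i in range(40)]
srs = simple_random_sample(patients, 20)
sts = stratified_random_sample(patients, lambda p: p[0], 20)
```

With proportional allocation, the stratified sample of 20 is guaranteed to contain 12 urban and 8 rural patients, whereas the simple random sample only hits that split on average.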
Prototypic Features of Loneliness in a Stratified Sample of Adolescents
DEFF Research Database (Denmark)
Lasgaard, Mathias; Elklit, Ask
2009-01-01
Dominant theoretical approaches in loneliness research emphasize the value of personality characteristics in explaining loneliness. The present study examines whether dysfunctional social strategies and attributions in lonely adolescents can be explained by personality characteristics… guidance and intervention. Thus, professionals need to be knowledgeable about prototypic features of loneliness, in addition to employing a pro-active approach, when assisting adolescents who display prototypic features…
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computing the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focuses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line-sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high-accuracy line shapes for strong lines or lines that are spectrally isolated. The line-sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line-sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10⁵ lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
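The allocation idea in the Methods summary — more Monte-Carlo samples for stronger lines, with every line's integrated opacity preserved exactly — can be illustrated with a toy sketch. It substitutes a Lorentzian for the Voigt profile (the Lorentzian has a closed-form inverse CDF), and the two-line list and sampling rate are invented; this is not the paper's actual implementation:

```python
import math, random

def sample_line_opacities(lines, grid_min, grid_max, nbins,
                          samples_per_unit_strength=100, seed=0):
    """Monte-Carlo sampling of Lorentzian line profiles, a stand-in for
    the Voigt profile. Strong lines receive proportionally more samples;
    each line keeps its integrated strength exactly, so even a weak line
    sampled only once still contributes its full continuum opacity."""
    rng = random.Random(seed)
    width = (grid_max - grid_min) / nbins
    opacity = [0.0] * nbins
    for center, strength, gamma in lines:
        n = max(1, round(strength * samples_per_unit_strength))
        w = strength / n                     # each sample carries equal weight
        for _ in range(n):
            u = rng.random()
            x = center + gamma * math.tan(math.pi * (u - 0.5))  # inverse CDF
            i = int((x - grid_min) / width)
            i = min(max(i, 0), nbins - 1)    # clamp: conserve total opacity
            opacity[i] += w
    return opacity

# Hypothetical line list: (center, integrated strength, Lorentz width)
lines = [(5.0, 10.0, 0.05), (7.0, 0.01, 0.05)]  # one strong, one weak line
op = sample_line_opacities(lines, 0.0, 10.0, 200)
```

The strong line is resolved by ~1000 samples while the weak line receives a single sample, yet the total opacity on the grid equals the summed line strengths by construction.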
Random Walks on Directed Networks: Inference and Respondent-Driven Sampling
Directory of Open Access Journals (Sweden)
Malmros Jens
2016-06-01
Respondent-driven sampling (RDS) is often used to estimate population properties (e.g., sexual risk behavior) in hard-to-reach populations. In RDS, already-sampled individuals recruit population members to the sample from their social contacts in an efficient snowball-like sampling procedure. By assuming a Markov model for the recruitment of individuals, asymptotically unbiased estimates of population characteristics can be obtained. Current RDS estimation methodology assumes that the social network is undirected, that is, that all edges are reciprocal. However, empirical social networks in general also include a substantial number of nonreciprocal edges. In this article, we develop an estimation method for RDS in populations connected by social networks that include both reciprocal and nonreciprocal edges. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing edges of sampled individuals. The proposed estimators are evaluated on artificial and empirical networks and are shown to generally perform better than existing estimators. This is the case in particular when the fraction of directed edges in the network is large.
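A minimal sketch of the kind of degree-weighted estimator this article generalizes: the standard Volz–Heckathorn-type RDS mean weights each respondent by the reciprocal of their degree, and the article's directed-network estimators base such weights on outgoing edges instead. The sample data below are invented for illustration:

```python
def rds_estimate(samples):
    """Degree-weighted RDS mean (Volz-Heckathorn form): each respondent is
    weighted by the reciprocal of the reported out-degree, i.e. the number
    of contacts through whom recruitment could have proceeded."""
    num = sum(y / d for y, d in samples)
    den = sum(1.0 / d for y, d in samples)
    return num / den

# Hypothetical sample: (binary trait value, reported out-degree).
# High-degree respondents are over-represented in RDS, so they are
# down-weighted relative to the naive sample mean.
sample = [(1, 10), (1, 8), (0, 2), (0, 2), (1, 5)]
est = rds_estimate(sample)       # well below the naive mean of 0.6
```

Here the trait is concentrated among high-degree (easily reached) respondents, so the weighted estimate falls below the unweighted sample mean, as intended.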
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample size estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique, since there are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the non-probability sampling techniques, because the results of the study can then be generalized to the target population.
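For the categorical-outcome case described above, the usual normal-approximation formula is n = z²·p(1−p)/d². A sketch with z-values hard-coded for common confidence levels (avoiding an external statistics dependency) and an optional finite-population correction:

```python
import math

def sample_size_proportion(p, margin, confidence=0.95, population=None):
    """Required n to estimate a proportion p to within +/- margin,
    using n = z^2 * p * (1 - p) / d^2, with an optional
    finite-population correction n / (1 + (n - 1) / N)."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n = z * z * p * (1 - p) / (margin * margin)
    if population is not None:             # finite-population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# e.g. expected prevalence 50%, +/-5% precision, 95% confidence
n = sample_size_proportion(0.5, 0.05)      # the textbook "385"
```

Taking p = 0.5 maximizes p(1−p) and therefore gives the most conservative sample size when the true proportion is unknown.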
Characterization of electron microscopes with binary pseudo-random multilayer test samples
International Nuclear Information System (INIS)
Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.
2010-01-01
We discuss the results of SEM and TEM measurements with binary pseudo-random multilayer (BPRML) test samples fabricated from a WSi2/Si multilayer with a fundamental layer thickness of 3 nm using a dual-beam FIB (focused ion beam)/SEM technique. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
Vidovszky, Márton; Kohl, Claudia; Boldogh, Sándor; Görföl, Tamás; Wibbelt, Gudrun; Kurth, Andreas; Harrach, Balázs
2015-12-01
From over 1250 extant species of the order Chiroptera, 25 and 28 are known to occur in Germany and Hungary, respectively. Close to 350 samples originating from 28 bat species (17 from Germany, 27 from Hungary) were screened for the presence of adenoviruses (AdVs) using a nested PCR that targets the DNA polymerase gene of AdVs. An additional PCR was designed and applied to amplify a fragment from the gene encoding the IVa2 protein of mastadenoviruses. All German samples originated from organs of bats found moribund or dead. The Hungarian samples were excrements collected from colonies of known bat species, throat or rectal swab samples, taken from live individuals that had been captured for faunistic surveys and migration studies, as well as internal organs of dead specimens. Overall, 51 samples (14.73%) were found positive. We detected 28 seemingly novel and six previously described bat AdVs by sequencing the PCR products. The positivity rate was the highest among the guano samples of bat colonies. In phylogeny reconstructions, the AdVs detected in bats clustered roughly, but not perfectly, according to the hosts' families (Vespertilionidae, Rhinolophidae, Hipposideridae, Phyllostomidae and Pteropodidae). In a few cases, identical sequences were derived from animals of closely related species. On the other hand, some bat species proved to harbour more than one type of AdV. The high prevalence of infection and the large number of chiropteran species worldwide make us hypothesise that hundreds of different yet unknown AdV types might circulate in bats.
Haggard, Megan C; Kang, Linda L; Rowatt, Wade C; Shen, Megan Johnson
2015-01-01
The connection between religiousness and volunteering for the community can be explained through two distinct features of religion. First, religious organizations are social groups that encourage members to help others through planned opportunities. Second, helping others is regarded as an important value for members in religious organizations to uphold. We examined the relationship between religiousness and self-reported community volunteering in two independent national random surveys of American adults (i.e., the 2005 and 2007 waves of the Baylor Religion Survey). In both waves, frequency of religious service attendance was associated with an increase in likelihood that individuals would volunteer, whether through their religious organization or not, whereas frequency of reading sacred texts outside of religious services was associated with an increase in likelihood of volunteering only for or through their religious organization. The role of religion in community volunteering is discussed in light of these findings.
Re-estimating sample size in cluster randomized trials with active recruitment within clusters
van Schie, Sander; Moerbeek, Mirjam
2014-01-01
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
Grain distinct stratified nanolayers in aluminium alloys
Energy Technology Data Exchange (ETDEWEB)
Donatus, U., E-mail: uyimedonatus@yahoo.com [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Thompson, G.E.; Zhou, X.; Alias, J. [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Tsai, I.-L. [Oxford Instruments NanoAnalysis, HP12 2SE, High Wycombe (United Kingdom)
2017-02-15
The grains of aluminium alloys have stratified nanolayers which determine their mechanical and chemical responses. In this study, the nanolayers were revealed in the grains of AA6082 (T6 and T7 conditions), AA5083-O and AA2024-T3 alloys by etching the alloys in a solution comprising 20 g Cr{sub 2}O{sub 3} + 30 ml HPO{sub 3} in 1 L H{sub 2}O. Microstructural examination was conducted on selected grains of interest using scanning electron microscopy and the electron backscatter diffraction technique. It was observed that the nanolayers are orientation dependent and are parallel to the {100} planes. They have ordered and repeated tunnel squares that are flawed at the sides, which are aligned in the <100> directions. These flawed tunnel squares dictate the tunnelling corrosion morphology and also appear to have an effect on the arrangement and sizes of the precipitation-hardening particles. The inclination of the stratified nanolayers, their interspacing, and the groove sizes have a significant influence on the corrosion behaviour, and an apparent influence on the strengthening mechanism, of the investigated aluminium alloys. - Highlights: • Stratified nanolayers in aluminium alloy grains. • Relationship of the stratified nanolayers with grain orientation. • Influence of the inclination of the stratified nanolayers on corrosion. • Influence of the nanolayer interspacing and groove sizes on hardness and corrosion.
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.
Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K
2016-01-01
The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions, the estimates depend on the study population. It is important to be aware of this issue and to define a certain population under surveillance in order to obtain consistent results in an application of serological measures for public health purposes.
Freeman, Lindsay M; Pang, Lin; Fainman, Yeshaiahu
2018-05-09
The analysis of DNA has led to revolutionary advancements in the fields of medical diagnostics, genomics, prenatal screening, and forensic science, with the global DNA testing market expected to reach revenues of USD 10.04 billion per year by 2020. However, the current methods for DNA analysis remain dependent on the necessity for fluorophores or conjugated proteins, leading to high costs associated with consumable materials and manual labor. Here, we demonstrate a potential label-free DNA composition detection method using surface-enhanced Raman spectroscopy (SERS) in which we identify the composition of cytosine and adenine within single strands of DNA. This approach depends on the fact that there is one phosphate backbone per nucleotide, which we use as a reference to compensate for systematic measurement variations. We utilize plasmonic nanomaterials with random Raman sampling to perform label-free detection of the nucleotide composition within DNA strands, generating a calibration curve from standard samples of DNA and demonstrating the capability of resolving the nucleotide composition. The work represents an innovative way for detection of the DNA composition within DNA strands without the necessity of attached labels, offering a highly sensitive and reproducible method that factors in random sampling to minimize error.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of the information essential for replicating a sample size calculation, as well as on the accuracy of such calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR −4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with an impact factor. A total of 98 papers provided a targeted sample size in a trial registry, and about two-thirds of these papers (n = 62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registry. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed.
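Recalculating a reported sample size, as this review did, requires only the level of significance, the desired power, the outcome SD, and the minimum clinically important difference. A sketch of the standard two-arm comparison-of-means formula (a generic textbook formula, not the authors' actual script):

```python
import math

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm RCT comparing means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2.
    z-values are hard-coded for the commonly reported choices."""
    z_alpha = {0.05: 1.96, 0.01: 2.576}[alpha]    # two-sided
    z_beta = {0.80: 0.8416, 0.90: 1.2816}[power]
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# e.g. SD 10, minimum important difference 5, alpha 0.05, power 80%
n = n_per_group(10, 5)
```

Comparing this recomputed n with the number stated in a paper or registry gives exactly the kind of percentage discrepancy the review tabulates.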
Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses
Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.
2011-01-01
Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-06-28
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (signal-to-noise ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the relatively few RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher than those of conventional CFAs especially in low-light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs becomes perceptible in the images reconstructed with the proposed method.
Stratified charge rotary engine for general aviation
Mount, R. E.; Parente, A. M.; Hady, W. F.
1986-01-01
A development history, a current development status assessment, and a design feature and performance capabilities account are given for stratified-charge rotary engines applicable to aircraft propulsion. Such engines are capable of operating on Jet-A fuel with substantial cost savings, improved altitude capability, and lower fuel consumption by comparison with gas turbine powerplants. Attention is given to the current development program of a 400-hp engine scheduled for initial operations in early 1990. Stratified charge rotary engines are also applicable to ground power units, airborne APUs, shipboard generators, and vehicular engines.
Directory of Open Access Journals (Sweden)
Gunter Spöck
2015-05-01
Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
Detection and monitoring of invasive exotic plants: a comparison of four sampling methods
Cynthia D. Huebner
2007-01-01
The ability to detect and monitor exotic invasive plants is likely to vary depending on the sampling method employed. Methods with strong qualitative thoroughness for species detection often lack the intensity necessary to monitor vegetation change. Four sampling methods (systematic plot, stratified-random plot, modified Whittaker, and timed meander) in hemlock and red...
Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas
Basilevsky, Alexander T.; Head, James W., III
1995-01-01
The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10⁶ sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation
Nitrogen transformations in stratified aquatic microbial ecosystems
DEFF Research Database (Denmark)
Revsbech, N. P.; Risgaard-Petersen, N.; Schramm, A.
2006-01-01
New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights into how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at the µm-mm scale. A large and ever-expanding knowledge base about n...
Active learning for clinical text classification: is it better than random sampling?
Figueroa, Rosa L; Ngo, Long H; Goryachev, Sergey; Wiechmann, Eduardo P
2012-01-01
Objective This study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks. Design Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. Measurements Classification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. Results The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm. Conclusion For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty. PMID:22707743
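The distance-based (DIST) idea evaluated above — ask annotators to label the items the current model is least certain about — can be sketched with a nearest-centroid stand-in for the text classifier. The 1-D "document embeddings" are invented for illustration; the study's actual feature representations and classifiers are more elaborate:

```python
def select_uncertain(labeled, unlabeled, k):
    """Distance-based active learning (DIST-style): rank unlabeled items
    by the margin of a nearest-centroid classifier, i.e. how close each
    item lies to the boundary between the two class centroids, and
    return the k most ambiguous items for annotation."""
    def centroid(points):
        return [sum(coord) / len(points) for coord in zip(*points)]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    c0 = centroid([x for x, y in labeled if y == 0])
    c1 = centroid([x for x, y in labeled if y == 1])
    margin = lambda x: abs(dist(x, c0) - dist(x, c1))
    return sorted(unlabeled, key=margin)[:k]

# Hypothetical 1-D embeddings: class 0 near 0.0, class 1 near 1.0.
labeled = [([0.0], 0), ([0.1], 0), ([0.9], 1), ([1.0], 1)]
unlabeled = [[0.05], [0.48], [0.95], [0.55]]
picked = select_uncertain(labeled, unlabeled, 2)
```

The two items near the midpoint (0.48 and 0.55) are selected for labeling, while the items that merely duplicate the confidently classified regions are skipped — the mechanism by which active learning shrinks the required training set.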
Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark
Henrik J. Kleven; Martin B. Knudsen; Claus T. Kreiner; Søren Pedersen; Emmanuel Saez
2010-01-01
This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were deliberately not audited. The following year, "threat-of-audit" letters were randomly assigned and sent to tax filers in both groups. Using comprehensive administrative tax data, we present four main...
Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick
2015-01-01
Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, FTA card was five times more expensive than the swab (~5 US dollars (USD)/per card vs. ~1 USD/per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
Directory of Open Access Journals (Sweden)
Rosa Catarino
Full Text Available Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
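The kappa statistic used in the study above to quantify agreement between collection methods is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with made-up binary positive/negative calls:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters or,
    as in the study above, two sample-collection methods."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    cats = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: product of each rater's marginal category rates
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (observed - expected) / (1 - expected)
```

With 70.8% observed agreement, a kappa of 0.34 (as reported for s-FTA vs. s-DRY) indicates that much of that agreement is attributable to chance given the high HPV prevalence.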
Spatial Sampling of Weather Data for Regional Crop Yield Simulations
Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian;
2016-01-01
Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management
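The stratified sampling of weather points described above can be sketched as a proportional-allocation estimator of the regional mean. The stratum structure and values below are synthetic stand-ins, not the study's data:

```python
import random

def stratified_mean(strata, n_total, rng):
    """Proportional-allocation stratified estimate of a regional mean.
    `strata` maps stratum id -> list of values (e.g. simulated yields at
    all candidate weather points in that stratum)."""
    pop = sum(len(v) for v in strata.values())
    est = 0.0
    for values in strata.values():
        weight = len(values) / pop
        n_h = max(1, round(n_total * weight))   # at least one point per stratum
        sample = rng.sample(values, min(n_h, len(values)))
        est += weight * (sum(sample) / len(sample))
    return est
```

When within-stratum variability is small relative to between-stratum differences, a handful of points per stratum recovers the full-coverage mean, which is consistent with the study's finding that 10 points reproduced the regional mean of 34,078 points.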
Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.
Zhou, Fuqun; Zhang, Aining
2016-10-25
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
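The variable-importance idea used above for selecting an optimal subset of time-series bands can be illustrated with permutation importance. To keep the sketch dependency-free, a 1-nearest-neighbour classifier stands in for the Random Forest, and the data are invented:

```python
import random

def knn1_accuracy(train, test):
    """Accuracy of a 1-nearest-neighbour classifier; train and test are
    lists of (feature_tuple, label)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hits = 0
    for fx, fy in test:
        nearest = min(train, key=lambda t: d2(t[0], fx))
        hits += nearest[1] == fy
    return hits / len(test)

def permutation_importance(train, test, rng):
    """Importance of each variable = accuracy drop when that variable's
    test-set column is shuffled; unimportant variables cause no drop."""
    base = knn1_accuracy(train, test)
    n_vars = len(test[0][0])
    drops = []
    for j in range(n_vars):
        col = [fx[j] for fx, _ in test]
        rng.shuffle(col)
        permuted = [(fx[:j] + (col[i],) + fx[j + 1:], fy)
                    for i, (fx, fy) in enumerate(test)]
        drops.append(base - knn1_accuracy(train, permuted))
    return drops
```

Ranking variables by such importance scores and keeping the top-scoring subset is one way to approximate the abstract's result that roughly half the variables suffice.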
Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey
Directory of Open Access Journals (Sweden)
Steven R. Corman
2013-12-01
Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.
Fowkes, F G; Lowe, G D; Rumley, A; Lennie, S E; Smith, F B; Donnan, P T
1993-05-01
Blood viscosity is elevated in hypertensive subjects, but the association of viscosity with arterial blood pressure in the general population, and the influence of social, lifestyle and disease characteristics on this association, are not established. In the Edinburgh Artery Study, 1592 men and women aged 55-74 years selected randomly from the general population attended a university clinic. A fasting blood sample was taken for the measurement of blood viscosity and its major determinants (haematocrit, plasma viscosity and fibrinogen). Systolic pressure was related univariately to blood viscosity (P viscosity (P index. Diastolic pressure was related univariately to blood viscosity (P viscosity (P viscosity and systolic pressure was confined to males. Blood viscosity was associated equally with systolic and diastolic pressures in males, and remained independently related on multivariate analysis adjusting for age, sex, body mass index, social class, smoking, alcohol intake, exercise, angina, HDL and non-HDL cholesterol, diabetes mellitus, plasma viscosity, fibrinogen, and haematocrit.
International Nuclear Information System (INIS)
Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi
2017-01-01
The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. Validity and availability of the present method, i.e. correction of calculation results and reduction of uncertainty, are confirmed in addition to features and performance of the present method. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, variance-covariance of cross sections is considered. The calculation results indicate that bias between predicted and measured results, and uncertainty owing to cross section can be reduced. Extension to other uncertainties such as thermal hydraulics properties will be a future task. (author)
DEFF Research Database (Denmark)
Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan
Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according... implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful...
MC3D modelling of stratified explosion
International Nuclear Information System (INIS)
Picchi, S.; Berthoud, G.
1999-01-01
It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin Helmholtz instabilities due to the shear flow of the two phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced as well as the influence of the inertial constraint (height of the water pool). (author)
MC3D modelling of stratified explosion
Energy Technology Data Exchange (ETDEWEB)
Picchi, S.; Berthoud, G. [DTP/SMTH/LM2, CEA, 38 - Grenoble (France)
1999-07-01
It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin Helmholtz instabilities due to the shear flow of the two phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced as well as the influence of the inertial constraint (height of the water pool). (author)
Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M
2018-07-01
Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine and cannabis use. Two-sample MR was employed to estimate bidirectional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week) and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these were not supported by the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine and cannabis use. © 2018 Society for the Study of Addiction.
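The two-sample MR estimates described above are commonly obtained with the inverse-variance weighted (IVW) estimator, a weighted regression of SNP-outcome on SNP-exposure effects through the origin (the abstract's MR-Egger and weighted median serve as pleiotropy-robust sensitivity analyses). The effect sizes below are made up for illustration:

```python
def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance weighted (IVW) two-sample Mendelian randomization
    estimate of the causal effect of exposure on outcome, combining
    per-SNP Wald ratios weighted by the outcome-effect precision."""
    num = sum(bx * by / se ** 2
              for bx, by, se in zip(beta_exposure, beta_outcome, se_outcome))
    den = sum(bx ** 2 / se ** 2
              for bx, se in zip(beta_exposure, se_outcome))
    return num / den
```

If every SNP's outcome effect is exactly half its exposure effect, the IVW estimate is 0.5 regardless of the weights.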
Equipment for extracting and conveying stratified minerals
Energy Technology Data Exchange (ETDEWEB)
Blumenthal, G.; Kunzer, H.; Plaga, K.
1991-08-14
This invention relates to equipment for extracting stratified minerals and conveying the said minerals along the working face, comprising a trough shaped conveyor run assembled from lengths, a troughed extraction run in lengths matching the lengths of conveyor troughing, which is linked to the top edge of the working face side of the conveyor troughing with freedom to swivel vertically, and a positively guided chain carrying extraction tools and scrapers along the conveyor and extraction runs.
Inviscid incompressible limits of strongly stratified fluids
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Jin, B.J.; Novotný, A.
2014-01-01
Roč. 89, 3-4 (2014), s. 307-329 ISSN 0921-7134 R&D Projects: GA ČR GA201/09/0917 Institutional support: RVO:67985840 Keywords : compressible Navier-Stokes system * anelastic approximation * stratified fluid Subject RIV: BA - General Mathematics Impact factor: 0.528, year: 2014 http://iospress.metapress.com/content/d71255745tl50125/?p=969b60ae82634854ab8bd25505ce1f71&pi=3
Nitrogen transformations in stratified aquatic microbial ecosystems
DEFF Research Database (Denmark)
Revsbech, Niels Peter; Risgaard-Petersen, N.; Schramm, Andreas
2006-01-01
Abstract New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights about how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about nitrogen fixation, nitrification, denitrification, and dissimilatory reduction of nitrate to ammonium, and about the microorganisms performing the processes, has been produced by use of these techniques. During the last decade the discovery of anammox bacteria and migrating, nitrate accumulating bacteria performing dissimilatory reduction of nitrate to ammonium have given new dimensions to the understanding of nitrogen cycling in nature, and the occurrence of these organisms and processes in stratified microbial communities will be described in detail.
Large eddy simulation of stably stratified turbulence
International Nuclear Information System (INIS)
Shen Zhi; Zhang Zhaoshun; Cui Guixiang; Xu Chunxiao
2011-01-01
Stably stratified turbulence is a common phenomenon in the atmosphere and ocean. In this paper large eddy simulation is utilized to investigate homogeneous stably stratified turbulence numerically at Reynolds number Re = uL/ν = 10²∼10³ and Froude number Fr = u/NL = 10⁻²∼10⁰, in which u is the root mean square of the velocity fluctuations, L is the integral scale and N is the Brunt-Väisälä frequency. Three sets of computation cases are designed with different initial conditions, namely isotropic turbulence, Taylor-Green vortex and internal waves, to investigate the statistical properties from different origins. The computed horizontal and vertical energy spectra are consistent with observations in the atmosphere and ocean when the composite parameter ReFr² is greater than O(1). It has also been found in this paper that stratified turbulence can develop under different initial velocity conditions and that internal wave energy dominates in the developed stably stratified turbulence.
Directory of Open Access Journals (Sweden)
Jennifer L Smith
Full Text Available Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates
Designing Wood Supply Scenarios from Forest Inventories with Stratified Predictions
Directory of Open Access Journals (Sweden)
Philipp Kilham
2018-02-01
Full Text Available Forest growth and wood supply projections are increasingly used to estimate the future availability of woody biomass and the correlated effects on forests and climate. This research parameterizes an inventory-based business-as-usual wood supply scenario, with a focus on southwest Germany and the period 2002–2012 with a stratified prediction. First, the Classification and Regression Trees algorithm groups the inventory plots into strata with corresponding harvest probabilities. Second, Random Forest algorithms generate individual harvest probabilities for the plots of each stratum. Third, the plots with the highest individual probabilities are selected as harvested until the harvest probability of the stratum is fulfilled. Fourth, the harvested volume of these plots is predicted with a linear regression model trained on harvested plots only. To illustrate the pros and cons of this method, it is compared to a direct harvested volume prediction with linear regression, and a combination of logistic regression and linear regression. Direct harvested volume regression predicts comparable volume figures, but generates these volumes in a way that differs from business-as-usual. The logistic model achieves higher overall classification accuracies, but results in underestimations or overestimations of harvest shares for several subsets of the data. The stratified prediction method balances this shortcoming, and can be of general use for forest growth and timber supply projections from large-scale forest inventories.
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is through using stratified probability approximations where the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
Susukida, Ryoko; Crum, Rosa M; Stuart, Elizabeth A; Ebnesajjad, Cyrus; Mojtabai, Ramin
2016-07-01
To compare the characteristics of individuals participating in randomized controlled trials (RCTs) of treatments of substance use disorder (SUD) with individuals receiving treatment in usual care settings, and to provide a summary quantitative measure of differences between characteristics of these two groups of individuals using propensity score methods. Design Analyses using data from RCT samples from the National Institute of Drug Abuse Clinical Trials Network (CTN) and target populations of patients drawn from the Treatment Episodes Data Set-Admissions (TEDS-A). Settings Multiple clinical trial sites and nation-wide usual SUD treatment settings in the United States. A total of 3592 individuals from 10 CTN samples and 1 602 226 individuals selected from TEDS-A between 2001 and 2009. Measurements The propensity scores for enrolling in the RCTs were computed based on the following nine observable characteristics: sex, race/ethnicity, age, education, employment status, marital status, admission to treatment through criminal justice, intravenous drug use and the number of prior treatments. Findings The proportion of those with ≥ 12 years of education and the proportion of those who had full-time jobs were significantly higher among RCT samples than among target populations (in seven and nine trials, respectively, at P difference in the mean propensity scores between the RCTs and the target population was 1.54 standard deviations and was statistically significant at P different from individuals receiving treatment in usual care settings. Notably, RCT participants tend to have more years of education and a greater likelihood of full-time work compared with people receiving care in usual care settings. © 2016 Society for the Study of Addiction.
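Propensity scores like those used above are typically fitted by logistic regression of a participation indicator on observed covariates (the study used nine; the single covariate and data below are hypothetical). A dependency-free sketch using plain gradient descent:

```python
import math

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Plain batch gradient-descent logistic regression; w[0] is the
    intercept. Here it models P(RCT participation | covariates)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # predicted - observed
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def propensity(w, xi):
    """Estimated probability of trial participation for covariates xi."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

The study's summary measure, a standardized difference in mean propensity scores between RCT samples and the target population, can then be computed directly from the fitted scores.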
Li, Ningzhi; Li, Shizhe; Shen, Jun
2017-06-01
In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radio frequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectra range of 10-65ppm. The amplitude of decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH=125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under inhomogeneous B1 field. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields (protons via weak long-range 1H-13C scalar couplings, which can be decoupled using low RF power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low power random RF heteronuclear decoupling was safely applied to human brain studies at 7T. Here, we review the two major decoupling methods and the carboxylic/amide 13C-MRS with low power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. Low RF power decoupling opens the possibility of performing in vivo 13C experiments of human brain at very high magnetic fields (such as 11.7T), where signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than lower fields.
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation in estimating the more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare performances of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF, error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). Error rate of each model was estimated repeatedly via: (a) AP on full data (APF, error); (b) AP on training set (APS, error); and (c) ET on the respective test set (ETS, error). A good PLS2-DA model is expected to produce APS, error and ETS, error values that are similar to the APF, error. Bearing that in mind, the similarities between (a) APS, error vs. APF, error; (b) ETS, error vs. APF, error and (c) APS, error vs. ETS, error were evaluated using correlation tests (i.e. Pearson and Spearman's rank test), using series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarities between the internal and external error rates than the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
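The Kennard-Stone algorithm compared above is a deterministic max-min selection: it seeds the training set with the two most distant samples, then repeatedly adds the sample farthest from the already selected set. A minimal sketch (squared Euclidean distance is an assumption; KS is often run on spectra or PCA scores):

```python
def kennard_stone(X, n_train):
    """Kennard-Stone selection: start from the two most distant samples,
    then repeatedly add the sample whose minimum distance to the already
    selected set is largest (max-min criterion). Returns selected indices."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # initial pair: the two points furthest apart
    pairs = [(d2(X[i], X[j]), i, j)
             for i in range(len(X)) for j in range(i + 1, len(X))]
    _, i0, j0 = max(pairs)
    selected = [i0, j0]
    while len(selected) < n_train:
        rest = [i for i in range(len(X)) if i not in selected]
        nxt = max(rest, key=lambda i: min(d2(X[i], X[s]) for s in selected))
        selected.append(nxt)
    return selected
```

Because KS always grabs the most extreme samples for training, the leftover test set tends to be interior points, which is one intuition for the optimistic internal error rates the study observed relative to iterative random splits.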
[Causes of emergency dizziness stratified by etiology].
Qiao, Wenying; Liu, Jianguo; Zeng, Hong; Liu, Yugeng; Jia, Weihua; Wang, Honghong; Liu, Bo; Tan, Jing; Li, Changqing
2014-06-03
To explore the causes of emergency dizziness stratified to improve the diagnostic efficiency. A total of 1 857 cases of dizziness at our emergency department were collected and their etiologies stratified by age and gender. The top three diagnoses were benign paroxysmal positional vertigo (BPPV, 31.7%), hypertension (24.0%) and posterior circulation ischemia (PCI, 20.5%). Stratified by age, the main causes of dizziness included BPPV (n = 6), migraine-associated vertigo (n = 2), unknown cause (n = 1) for the group of vertigo (14.5%) and neurosis (7.3%) for 18-44 years; BPPV (36.8%), hypertension (22.4%) and migraine-associated vertigo (11.2%) for 45-59 years; hypertension (30.8%), PCI (29.8%) and BPPV (22.9%) for 60-74 years; PCI (30.7%), hypertension (28.6%) and BPPV (25.5%) for 75-92 years. BPPV, migraine and neurosis were more common in females while hypertension and PCI predominated in males (all P hypertension, neurosis and migraine showed the following significant demographic features: BPPV, PCI, hypertension, neurosis and migraine may be the main causes of dizziness. BPPV should be considered initially when vertigo was triggered repeatedly by positional change, especially for young and middle-aged women. And the other common causes of dizziness were migraine-associated vertigo, neurosis and Meniere's disease.Hypertension should be screened firstly in middle-aged and elderly patients presenting mainly with head heaviness and stretching. In elders with dizziness, BPPV is second in constituent ratio to PCI and hypertension.In middle-aged and elderly patients with dizziness, psychological factors should be considered and diagnosis and treatment should be offered timely.
Directory of Open Access Journals (Sweden)
Alanis Kelly L
2006-02-01
Full Text Available Abstract Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance having long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for licit substances, alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular crack baby stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test showed that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
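The efficiency ranking reported above (stratified sampling achieving the smallest absolute error at the smallest sample size) can be illustrated with a toy simulation; the density values and stratum structure below are invented for illustration, not taken from the survey:

```python
import random

random.seed(0)

# Hypothetical snail densities per quadrat: density rises with "altitude"
# stratum, mimicking the survey's use of altitude as the stratum variable.
strata = {"low": [random.gauss(2.0, 0.5) for _ in range(1000)],
          "mid": [random.gauss(5.0, 0.5) for _ in range(1000)],
          "high": [random.gauss(9.0, 0.5) for _ in range(1000)]}
population = [x for quads in strata.values() for x in quads]
true_mean = sum(population) / len(population)

def srs_estimate(n):
    """Simple random sample of n quadrats from the whole field."""
    return sum(random.sample(population, n)) / n

def stratified_estimate(n_per_stratum):
    """Equal-size strata, so the overall mean is the mean of stratum means."""
    return sum(sum(random.sample(q, n_per_stratum)) / n_per_stratum
               for q in strata.values()) / len(strata)

def mse(estimator, reps=500):
    """Mean squared error of an estimator against the known true mean."""
    return sum((estimator() - true_mean) ** 2 for _ in range(reps)) / reps

mse_srs = mse(lambda: srs_estimate(30))
mse_str = mse(lambda: stratified_estimate(10))  # same total effort: 30 quadrats
print(mse_srs, mse_str)
```

Because most of the variance lies between strata rather than within them, sampling each stratum separately removes the between-stratum component from the estimator's error, which is why the stratified design attains a smaller error at equal (or smaller) sample size.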
White dwarf stars with chemically stratified atmospheres
Muchmore, D.
1982-01-01
Recent observations and theory suggest that some white dwarfs may have chemically stratified atmospheres: thin layers of hydrogen lying above helium-rich envelopes. Models of such atmospheres show that a discontinuous temperature inversion can occur at the boundary between the layers. Model spectra for layered atmospheres at 30,000 K and 50,000 K tend to have smaller decrements at 912 Å, 504 Å, and 228 Å than uniform atmospheres would have. On the basis of their continuous extreme ultraviolet spectra, it is possible to distinguish observationally between uniform and layered atmospheres for hot white dwarfs.
A user's guide to LHS: Sandia's Latin Hypercube Sampling Software
Energy Technology Data Exchange (ETDEWEB)
Wyss, G.D.; Jorgensen, K.H. [Sandia National Labs., Albuquerque, NM (United States). Risk Assessment and Systems Modeling Dept.
1998-02-01
This document is a reference guide for LHS, Sandia's Latin Hypercube Sampling Software. This software has been developed to generate either Latin hypercube or random multivariate samples. The Latin hypercube technique employs a constrained sampling scheme, whereas random sampling corresponds to a simple Monte Carlo technique. The present program replaces the previous Latin hypercube sampling program developed at Sandia National Laboratories (SAND83-2365). This manual covers the theory behind stratified sampling as well as use of the LHS code both with the Windows graphical user interface and in the stand-alone mode.
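The constrained scheme the manual describes can be sketched in a few lines; this is a generic Latin hypercube sampler written for illustration, not the LHS code itself:

```python
import random

def latin_hypercube(n, dims, seed=42):
    """Generate n Latin-hypercube points in [0, 1)^dims.

    Each axis is split into n equal strata; exactly one point falls in
    each stratum per dimension, and strata are paired across dimensions
    by independent random permutations.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(col)  # randomize which stratum pairs with which
        columns.append(col)
    return [tuple(col[i] for col in columns) for i in range(n)]

pts = latin_hypercube(10, 2)
# Marginal stratification: every interval [k/10, (k+1)/10) holds exactly
# one point in each dimension.
for d in range(2):
    print(sorted(int(p[d] * 10) for p in pts))  # [0, 1, ..., 9]
```

In contrast, a simple Monte Carlo sample of 10 points would typically leave several of those 10 intervals empty, which is why the constrained scheme covers each input distribution more evenly at small sample sizes.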
Directory of Open Access Journals (Sweden)
Giles M. Foody
2017-08-01
Validation data are often used to evaluate the performance of a trained neural network and to select the network deemed optimal for the task at hand. Optimality is commonly assessed with a measure such as overall classification accuracy, often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
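The abstract's core point, that a balanced (stratified) validation set can misestimate the accuracy achievable on data with unequal class abundances, can be reproduced with a toy simulation; the class proportions and per-class accuracies below are invented, not the paper's data:

```python
import random

rng = random.Random(1)

# Hypothetical two-class map: class "a" is abundant (90%), "b" rare (10%).
# Assume the classifier is right 95% of the time on "a" but only 60% on "b".
acc = {"a": 0.95, "b": 0.60}
population = ["a"] * 9000 + ["b"] * 1000
true_accuracy = 0.9 * 0.95 + 0.1 * 0.60  # accuracy on the full map: 0.915

def correct(label):
    """Simulate whether the classifier labels one case correctly."""
    return rng.random() < acc[label]

def estimate(sample):
    """Overall accuracy estimated from a validation sample."""
    return sum(correct(lbl) for lbl in sample) / len(sample)

# A random validation set mirrors class abundance...
random_est = sum(estimate(rng.sample(population, 200)) for _ in range(300)) / 300
# ...while a stratified (balanced) set over-weights the rare, harder class.
balanced = ["a"] * 100 + ["b"] * 100
stratified_est = sum(estimate(balanced) for _ in range(300)) / 300
print(round(random_est, 3), round(stratified_est, 3))
```

Under these assumptions the balanced set pulls the estimate toward the unweighted mean of the per-class accuracies (0.775) and so understates the accuracy actually achieved on the abundance-skewed map, matching the direction of the effect reported above.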
Turla, Ahmet; Dundar, Cihad; Ozkanli, Caglar
2010-01-01
The main objective of this article is to obtain the prevalence of childhood physical abuse experiences in college students. This cross-sectional study was performed on a gender-stratified random sample of 988 participants studying at Ondokuz Mayis University, with self-reported anonymous questionnaires. It included questions on physical abuse in…
Soil mixing of stratified contaminated sands.
Al-Tabba, A; Ayotamuno, M J; Martin, R J
2000-02-01
Validation of soil mixing for the treatment of contaminated ground is needed in a wide range of site conditions to widen the application of the technology and to understand the mechanisms involved. Since very limited work has been carried out in heterogeneous ground conditions, this paper investigates the effectiveness of soil mixing in stratified sands using laboratory-scale augers. This enabled a low cost investigation of factors such as grout type and form, auger design, installation procedure, mixing mode, curing period, thickness of soil layers and natural moisture content on the unconfined compressive strength, leachability and leachate pH of the soil-grout mixes. The results showed that the auger design plays a very important part in the mixing process in heterogeneous sands. The variability of the properties measured in the stratified soils and the measurable variations caused by the various factors considered, highlighted the importance of duplicating appropriate in situ conditions, the usefulness of laboratory-scale modelling of in situ conditions and the importance of modelling soil and contaminant heterogeneities at the treatability study stage.
Stratified coastal ocean interactions with tropical cyclones
Glenn, S. M.; Miles, T. N.; Seroka, G. N.; Xu, Y.; Forney, R. K.; Yu, F.; Roarty, H.; Schofield, O.; Kohut, J.
2016-01-01
Hurricane-intensity forecast improvements currently lag the progress achieved for hurricane tracks. Integrated ocean observations and simulations during hurricane Irene (2011) reveal that the wind-forced two-layer circulation of the stratified coastal ocean, and resultant shear-induced mixing, led to significant and rapid ahead-of-eye-centre cooling (at least 6 °C and up to 11 °C) over a wide swath of the continental shelf. Atmospheric simulations establish this cooling as the missing contribution required to reproduce Irene's accelerated intensity reduction. Historical buoys from 1985 to 2015 show that ahead-of-eye-centre cooling occurred beneath all 11 tropical cyclones that traversed the Mid-Atlantic Bight continental shelf during stratified summer conditions. A Yellow Sea buoy similarly revealed significant and rapid ahead-of-eye-centre cooling during Typhoon Muifa (2011). These findings establish that including realistic coastal baroclinic processes in forecasts of storm intensity and impacts will be increasingly critical to mid-latitude population centres as sea levels rise and tropical cyclone maximum intensities migrate poleward. PMID:26953963
Stratified Simulations of Collisionless Accretion Disks
Energy Technology Data Exchange (ETDEWEB)
Hirabayashi, Kota; Hoshino, Masahiro, E-mail: hirabayashi-k@eps.s.u-tokyo.ac.jp [Department of Earth and Planetary Science, The University of Tokyo, Tokyo, 113-0033 (Japan)
2017-06-10
This paper presents a series of stratified-shearing-box simulations of collisionless accretion disks in the recently developed framework of kinetic magnetohydrodynamics (MHD), which can handle finite non-gyrotropy of a pressure tensor. Although a fully kinetic simulation predicted a more efficient angular-momentum transport in collisionless disks than in the standard MHD regime, the enhanced transport has not been observed in past kinetic-MHD approaches to gyrotropic pressure anisotropy. For the purpose of investigating this missing link between the fully kinetic and MHD treatments, this paper explores the role of non-gyrotropic pressure and makes the first attempt to incorporate certain collisionless effects into disk-scale, stratified disk simulations. When the timescale of gyrotropization was longer than, or comparable to, the disk-rotation period of the orbit, we found that the finite non-gyrotropy selectively remaining in the vicinity of current sheets contributes to suppressing magnetic reconnection in the shearing-box system. This leads to increases both in the saturated amplitude of the MHD turbulence driven by magnetorotational instabilities and in the resultant efficiency of angular-momentum transport. Our results seem to favor the fast advection of magnetic fields toward the rotation axis of a central object, which is required to launch an ultra-relativistic jet from a black hole accretion system in, for example, a magnetically arrested disk state.
Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy
2013-06-18
Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.
Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2015-04-10
A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.
Messiah, Antoine; Lacoste, Jérôme; Gokalsing, Erick; Shultz, James M; Rodríguez de la Vega, Pura; Castro, Grettel; Acuna, Juan M
2016-08-01
Studies on the mental health of families hosting disaster refugees are lacking. This study compares participants in households that hosted 2010 Haitian earthquake disaster refugees with their nonhost counterparts. A random sample survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants were assessed regarding their 2010 earthquake exposure and impact on family and friends and whether they hosted earthquake refugees. Using standardized scores and thresholds, they were evaluated for symptoms of three common mental disorders (CMDs): posttraumatic stress disorder, generalized anxiety disorder, and major depressive disorder (MDD). Participants who hosted refugees (n = 51) had significantly higher percentages of scores beyond thresholds for MDD than those who did not host refugees (n = 365) and for at least one CMD, after adjusting for participants' earthquake exposures and effects on family and friends. Hosting refugees from a natural disaster appears to elevate the risk for MDD and possibly other CMDs, independent of risks posed by exposure to the disaster itself. Families hosting refugees deserve special attention.
Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.
2013-01-01
Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-...
Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit
2016-02-01
The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and that randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.
Glasscock, David J; Carstensen, Ole; Dalgaard, Vita Ligaya
2018-05-28
Randomized controlled trials (RCTs) of interventions aimed at reducing work-related stress indicate that cognitive behavioural therapy (CBT) is more effective than other interventions. However, definitions of study populations are often unclear and there is a lack of interventions targeting both the individual and the workplace. The aim of this study was to determine whether a stress management intervention combining individual CBT and a workplace focus is superior to no treatment in the reduction of perceived stress and stress symptoms and time to lasting return to work (RTW) in a clinical sample. Patients with work-related stress reactions or adjustment disorders were randomly assigned to an intervention group (n = 57, 84.2% female) or a control group (n = 80, 83.8% female). Subjects were followed via questionnaires and register data. The intervention contained individual CBT and the offer of a workplace meeting. We examined intervention effects by analysing group differences in score changes on the Perceived Stress Scale (PSS-10) and the General Health Questionnaire (GHQ-30). We also tested whether intervention led to faster lasting RTW. Mean baseline values of PSS were 24.79 in the intervention group and 23.26 in the control group, while the corresponding values for GHQ were 21.3 and 20.27, respectively. There was a significant effect of time. 10 months after baseline, both groups reported less perceived stress and improved mental health. 4 months after baseline, we found significant treatment effects for both perceived stress and mental health. The difference in mean change in PSS after 4 months was -3.09 (-5.47, -0.72), while for GHQ it was -3.91 (-7.15, -0.68). There were no group differences in RTW. The intervention led to faster reductions in perceived stress and stress symptoms amongst patients with work-related stress reactions and adjustment disorders. 6 months after the intervention ended there were no longer differences between
Directory of Open Access Journals (Sweden)
Romain Guignard
OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study was carried out to assess the reliability of a monitoring system for smoking prevalence based on regular quota sampling surveys. DESIGN/OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey of 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% in 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.
Ecosystem metabolism in a stratified lake
DEFF Research Database (Denmark)
Stæhr, Peter Anton; Christensen, Jesper Philip Aagaard; Batt, Ryan D.
2012-01-01
[…] differences were not significant. During stratification, daily variability in epilimnetic DO was dominated by metabolism (46%) and air-water gas exchange (44%). Fluxes related to mixed-layer deepening dominated in meta- and hypolimnic waters (49% and 64%), while eddy diffusion (1% and 14%) was less important. Although air-water gas exchange rates differed among the three formulations of gas-transfer velocity, this had no significant effect on metabolic rates. […] that integrates rates across the entire depth profile and includes DO exchange between depth layers driven by mixed-layer deepening and eddy diffusivity. During full mixing, NEP was close to zero throughout the water column, and GPP and R were reduced 2-10 times compared to stratified periods. When present […]
Stratified growth in Pseudomonas aeruginosa biofilms
DEFF Research Database (Denmark)
Werner, E.; Roe, F.; Bugnicourt, A.
2004-01-01
In this study, stratified patterns of protein synthesis and growth were demonstrated in Pseudomonas aeruginosa biofilms. Spatial patterns of protein synthetic activity inside biofilms were characterized by the use of two green fluorescent protein (GFP) reporter gene constructs. One construct […] synthesis was restricted to a narrow band in the part of the biofilm adjacent to the source of oxygen. The zone of active GFP expression was approximately 60 µm wide in colony biofilms and 30 µm wide in flow cell biofilms. The region of the biofilm in which cells were capable of elongation was mapped by treating colony biofilms with carbenicillin, which blocks cell division, and then measuring individual cell lengths by transmission electron microscopy. Cell elongation was localized at the air interface of the biofilm. The heterogeneous anabolic patterns measured inside these biofilms were likely a result […]
Thermal instability in a stratified plasma
International Nuclear Information System (INIS)
Hermanns, D.F.M.; Priest, E.R.
1989-01-01
The thermal instability mechanism has been studied in connection with observed coronal features such as prominences or cool cores in loops. Although these features show a lot of structure, most studies concern the thermal instability in a uniform medium. In this paper, we investigate the thermal instability and the interaction between thermal modes and the slow magneto-acoustic subspectrum for a stratified plasma slab. We formulate the relevant system of equations and give some straightforward properties of the linear spectrum of a non-uniform plasma slab, i.e. the existence of continuous parts in the spectrum. We present a numerical scheme with which we can investigate the linear spectrum for equilibrium states with stratification. The slow and thermal subspectra of a crude coronal model are given as a preliminary result. (author). 6 refs.; 1 fig
Stratified charge rotary engine combustion studies
Shock, H.; Hamady, F.; Somerton, C.; Stuecken, T.; Chouinard, E.; Rachal, T.; Kosterman, J.; Lambeth, M.; Olbrich, C.
1989-07-01
Analytical and experimental studies of the combustion process in a stratified charge rotary engine (SCRE) have continued to be the subject of active research in recent years. Specifically, to meet the demand for more sophisticated products, a detailed understanding of the engine system of interest is warranted. With this in mind, the objective of this work is to develop an understanding of the controlling factors that affect the SCRE combustion process so that an efficient, power-dense rotary engine can be designed. The induction-exhaust systems and the rotor geometry are believed to have a significant effect on combustion chamber flow characteristics. In this report, emphasis is centered on Laser Doppler Velocimetry (LDV) measurements and on qualitative flow visualizations in the combustion chamber of the motored rotary engine assembly. This will provide a basic understanding of the flow process in the RCE and serve as a data base for verification of numerical simulations. Understanding fuel injection provisions is also important to the successful operation of the stratified charge rotary engine. Toward this end, flow visualizations depicting the development of high-speed, high-pressure fuel jets are described. Friction is an important consideration in an engine from the standpoint of lost work, durability and reliability. MSU Engine Research Laboratory efforts in assessing the frictional losses associated with the rotary engine are described, including work on losses in bearing, seal and auxiliary components. Finally, a computer-controlled mapping system under development is described. This system can be used to map shapes such as combustion chambers, intake manifolds or turbine blades accurately.
Goenka, Ajit H; Remer, Erick M; Veniero, Joseph C; Thupili, Chakradhar R; Klein, Eric A
2015-09-01
The objective of our study was to review our experience with CT-guided transgluteal prostate biopsy in patients without rectal access. Twenty-one CT-guided transgluteal prostate biopsy procedures were performed in 16 men (mean age, 68 years; age range, 60-78 years) who were under conscious sedation. The mean prostate-specific antigen (PSA) value was 11.4 ng/mL (range, 2.3-39.4 ng/mL). Six had seven prior unsuccessful transperineal or transurethral biopsies. Biopsy results, complications, sedation time, and radiation dose were recorded. The mean PSA values and number of core specimens were compared between patients with malignant results and patients with nonmalignant results using the Student t test. The average procedural sedation time was 50.6 minutes (range, 15-90 minutes) (n = 20), and the mean effective radiation dose was 8.2 mSv (median, 6.6 mSv; range, 3.6-19.3 mSv) (n = 13). Twenty of the 21 (95%) procedures were technically successful. The only complication was a single episode of gross hematuria and penile pain in one patient, which resolved spontaneously. Of 20 successful biopsies, 8 (40%) yielded adenocarcinoma (Gleason score: mean, 8; range, 7-9). Twelve biopsies yielded nonmalignant results (60%): high-grade prostatic intraepithelial neoplasia (n = 3) or benign prostatic tissue with or without inflammation (n = 9). Three patients had carcinoma diagnosed on subsequent biopsies (second biopsy, n = 2 patients; third biopsy, n = 1 patient). A malignant biopsy result was not significantly associated with the number of core specimens (p = 0.3) or the mean PSA value (p = 0.1). CT-guided transgluteal prostate biopsy is a safe and reliable technique for the systematic random sampling of the prostate in patients without rectal access. In patients with initial negative biopsy results, repeat biopsy should be considered if there is a persistent rise in the PSA value.
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient survey design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural) with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on these results, a nationwide survey to estimate the distribution of mammographic breast density among Korean women can be conducted efficiently.
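A minimal version of the simulation strategy described above, repeating a stratified random sampling design many times to check how tightly a fixed total sample size pins down a population quantity, might look as follows (the stratum sizes and prevalences are hypothetical stand-ins, not the NCSP figures):

```python
import random

rng = random.Random(2024)

# Hypothetical strata (size, prevalence of the attribute of interest);
# these are NOT the NCSP figures, just plausible stand-ins.
strata = {"metropolitan": (600_000, 0.55),
          "urban":        (500_000, 0.50),
          "rural":        (240_000, 0.40)}
N = sum(size for size, _ in strata.values())
true_prev = sum(size * p for size, p in strata.values()) / N

def stratified_survey(total_n=4000):
    """One simulated survey: allocate the sample proportionally to
    stratum size, then combine stratum estimates with stratum weights."""
    est = 0.0
    for size, p in strata.values():
        n = round(total_n * size / N)
        hits = sum(rng.random() < p for _ in range(n))
        est += (size / N) * (hits / n)
    return est

# Repeat the survey many times to see how the estimate scatters around
# the known population value for this design and sample size.
estimates = [stratified_survey() for _ in range(1000)]
mean_est = sum(estimates) / len(estimates)
spread = max(estimates) - min(estimates)
print(round(true_prev, 4), round(mean_est, 4), round(spread, 4))
```

Increasing `total_n` (or rerunning with alternative stratifications) and inspecting the spread of the repeated estimates is the same logic the survey designers used to settle on a design meeting their tolerance.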
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
The optimism trap: Migrants' educational choices in stratified education systems.
Tjaden, Jasper Dag; Hunkler, Christian
2017-09-01
Immigrant children's ambitious educational choices have often been linked to their families' high level of optimism and motivation for upward mobility. However, previous research has mostly neglected alternative explanations such as information asymmetries or anticipated discrimination. Moreover, immigrant children's higher dropout rates at the higher secondary and university level suggest that low-performing migrant students could have benefitted more from pursuing less ambitious tracks, especially in countries that offer viable vocational alternatives. We examine ethnic minority students' educational choices using a sample of academically low-performing, lower secondary school students in Germany's highly stratified education system. We find that their families' optimism diverts migrant students from viable vocational alternatives. Information asymmetries and anticipated discrimination do not explain their high educational ambitions. While our findings further support the immigrant optimism hypothesis, we discuss how its effect may have different implications depending on the education system. Copyright © 2017. Published by Elsevier Inc.
Simuta-Champo, R.; Herrera-Zamarrón, G. S.
2010-01-01
The Monte Carlo technique provides a natural method for evaluating uncertainties. The uncertainty is represented by a probability distribution or by related quantities such as statistical moments. When the groundwater flow and transport governing equations are solved and the hydraulic conductivity field is treated as a random spatial function, the hydraulic head, velocities and concentrations also become random spatial functions. When that is the case, for the stochastic simulation of groundw...
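The Monte Carlo idea described above can be sketched with a deliberately simplified model, not the authors' groundwater solver: 1-D steady flow through blocks in series with random hydraulic conductivity, where repeated sampling turns the input distribution into an output distribution for the total head loss (all numbers are illustrative):

```python
import random
import statistics

rng = random.Random(7)

def head_loss(n_blocks=10, q=1.0):
    """Total head loss for one realization of the conductivity field.

    Each unit-length block gets an independent lognormal conductivity K
    (log10 K ~ N(0, 0.5)); by Darcy's law the head drop across a block
    carrying flux q is q / K.
    """
    ks = [10 ** rng.gauss(0.0, 0.5) for _ in range(n_blocks)]
    return sum(q / k for k in ks)

# Monte Carlo: many realizations of the random field give a distribution
# (here summarized by its first two moments) for the output quantity.
losses = [head_loss() for _ in range(2000)]
mean_loss = statistics.fmean(losses)
sd_loss = statistics.stdev(losses)
print(round(mean_loss, 2), round(sd_loss, 2))
```

The same pattern scales up to the full problem: replace `head_loss` with a call to a flow-and-transport solver on a sampled conductivity field, and the ensemble of outputs yields the statistical moments of heads, velocities, and concentrations.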
The effect of surfactant on stratified and stratifying gas-liquid flows
Heiles, Baptiste; Zadrazil, Ivan; Matar, Omar
2013-11-01
We consider the dynamics of a stratified/stratifying gas-liquid flow in horizontal tubes. This flow regime is characterised by the thin liquid films that drain under gravity along the pipe interior, forming a pool at the bottom of the tube, and the formation of large-amplitude waves at the gas-liquid interface. This regime is also accompanied by the detachment of droplets from the interface and their entrainment into the gas phase. We carry out an experimental study involving axial- and radial-view photography of the flow, in the presence and absence of surfactant. We show that the effect of surfactant is to reduce significantly the average diameter of the entrained droplets, through a tip-streaming mechanism. We also highlight the influence of surfactant on the characteristics of the interfacial waves, and the pressure gradient that drives the flow. EPSRC Programme Grant EP/K003976/1.
Directory of Open Access Journals (Sweden)
Ming-Hung Chien
Falls are common in older people and may lead to functional decline, disability, and death. Many risk factors have been identified, but studies evaluating the effects of nutritional status are limited. To determine whether nutritional status is a predictor of falls in older people living in the community, we analyzed data collected through the Survey of Health and Living Status of the Elderly in Taiwan (SHLSET). SHLSET comprises a series of interview surveys conducted by the government on a random sample of people living in community dwellings in the nation. We included participants who received nutritional status assessment using the Mini Nutritional Assessment Taiwan Version 2 (MNA-T2) in the 1999 survey, when they were 53 years or older, and followed up on the cumulative incidence of falls in the one-year period before the interview in the 2003 survey. At the beginning of follow-up, the 4,440 participants had a mean age of 69.5 (standard deviation = 9.1) years, and 467 participants were "not well-nourished," defined as having an MNA-T2 score of 23 or less. In the one-year study period, 659 participants reported having at least one fall. After adjusting for other risk factors, we found the associated odds ratio for falls was 1.73 (95% confidence interval: 1.23, 2.42) for being "not well-nourished," 1.57 (1.30, 1.90) for female gender, 1.03 (1.02, 1.04) per year of age, 1.55 (1.22, 1.98) for history of falls, 1.34 (1.05, 1.72) for hospital stay during the past 12 months, 1.66 (1.07, 2.58) for difficulties in activities of daily living, and 1.53 (1.23, 1.91) for difficulties in instrumental activities of daily living. Nutritional status is an independent predictor of falls in older people living in the community. Further studies are warranted to identify nutritional interventions that can help prevent falls in the elderly.
Improved patient selection by stratified surgical intervention
DEFF Research Database (Denmark)
Wang, Miao; Bünger, Cody E; Li, Haisheng
2015-01-01
BACKGROUND CONTEXT: Choosing the best surgical treatment for patients with spinal metastases remains a significant challenge for spine surgeons. There is currently no gold standard for surgical treatments. The Aarhus Spinal Metastases Algorithm (ASMA) was established to help surgeons choose...... the most appropriate surgical intervention for patients with spinal metastases. PURPOSE: The purpose of this study was to evaluate the clinical outcome of stratified surgical interventions based on the ASMA, which combines life expectancy and the anatomical classification of patients with spinal metastases...... survival times in the five surgical groups determined by the ASMA were 2.1 (TS 0-4, TC 1-7), 5.1 (TS 5-8, TC 1-7), 12.1 (TS 9-11, TC 1-7 or TS 12-15, TC 7), 26.0 (TS 12-15, TC 4-6), and 36.0 (TS 12-15, TC 1-3) months. The 30-day mortality rate was 7.5%. Postoperative neurological function was maintained...
Experimental study of unsteady thermally stratified flow
International Nuclear Information System (INIS)
Lee, Sang Jun; Chung, Myung Kyoon
1985-01-01
Unsteady thermally stratified flow caused by two-dimensional surface discharge of warm water into an oblong channel was investigated. The experimental study focused on the rapidly developing thermal diffusion at small Richardson number. The basic objectives were to study the interfacial mixing between a flowing layer of warm water and an underlying body of cold water, and to accumulate experimental data for testing computational turbulence models. Mean velocity field measurements were carried out using NMR-CT (Nuclear Magnetic Resonance-Computerized Tomography), which captures a quantitative flow image of any desired section, in any flow direction, in a short time. Results show that at small Richardson number the warm layer rapidly penetrates into the cold layer because of strong turbulent mixing and instability between the two layers. It is found that the transfer of heat across the interface is more vigorous than that of momentum. The NMR-CT technique also proved to be a very valuable tool for measuring unsteady three-dimensional flow fields. (Author)
Turbulent fluxes in stably stratified boundary layers
International Nuclear Information System (INIS)
L'vov, Victor S; Procaccia, Itamar; Rudenko, Oleksii
2008-01-01
We present here an extended version of an invited talk we gave at the international conference 'Turbulent Mixing and Beyond'. The dynamical and statistical description of stably stratified turbulent boundary layers is addressed, with the important example of the stable atmospheric boundary layer in mind. Traditional approaches to this problem, based on the profiles of mean quantities, velocity second-order correlations and dimensional estimates of the turbulent thermal flux, run into a well-known difficulty: they predict the suppression of turbulence at a small critical value of the Richardson number, in contradiction to observations. Phenomenological attempts to overcome this problem suffer from various theoretical inconsistencies. Here, we present an approach taking into full account all the second-order statistics, which allows us to respect the conservation of total mechanical energy. The analysis culminates in an analytic solution for the profiles of all mean quantities and all second-order correlations, removing the unphysical predictions of previous theories. We propose that the approach taken here is sufficient to describe the lower parts of the atmospheric boundary layer, as long as the Richardson number does not exceed an order of unity. For much higher Richardson numbers, the physics may change qualitatively, requiring careful consideration of the potential Kelvin-Helmholtz waves and their interaction with the vortical turbulence.
Analysis of Turbulent Combustion in Simplified Stratified Charge Conditions
Moriyoshi, Yasuo; Morikawa, Hideaki; Komatsu, Eiji
The stratified charge combustion system has been widely studied because of its significant potential for low fuel consumption and low exhaust gas emissions. The fuel-air mixture formation process in a direct-injection stratified charge engine is influenced by various parameters, such as atomization, evaporation, and in-cylinder gas motion at high temperature and high pressure. It is difficult to observe the in-cylinder phenomena in such conditions, and also challenging to analyze the subsequent stratified charge combustion. Therefore, combustion phenomena in simplified stratified charge conditions were examined in order to analyze fundamental stratified charge combustion. An experimental apparatus that can control the mixture distribution and the gas motion at ignition timing was developed, and the effects of turbulence intensity, mixture concentration distribution, and mixture composition on stratified charge combustion were examined. As a result, the effects of fuel, charge stratification, and turbulence on combustion characteristics were clarified.
Inferences from Genomic Models in Stratified Populations
DEFF Research Database (Denmark)
Janss, Luc; de los Campos, Gustavo; Sheehan, Nuala
2012-01-01
Unaccounted population stratification can lead to spurious associations in genome-wide association studies (GWAS) and in this context several methods have been proposed to deal with this problem. An alternative line of research uses whole-genome random regression (WGRR) models that fit all marker...
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
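The cost-based allocation result described above can be sketched numerically. Under the standard assumption that, for a fixed budget, the optimal ratio of treatment to control units is the square root of the inverse cost ratio (the record's truncated final sentence points at this kind of dependence), a minimal illustration:

```python
import math

def optimal_allocation_ratio(cost_treatment, cost_control):
    """Optimal treatment-to-control sample-size ratio when per-unit
    costs differ: n_T / n_C = sqrt(c_C / c_T). This minimizes the
    variance of the estimated mean difference for a fixed budget."""
    return math.sqrt(cost_control / cost_treatment)

def allocate(budget, cost_treatment, cost_control):
    """Split a fixed budget between arms using the optimal ratio.
    Budget constraint: n_T * c_T + n_C * c_C = budget."""
    r = optimal_allocation_ratio(cost_treatment, cost_control)
    n_control = budget / (r * cost_treatment + cost_control)
    return r * n_control, n_control

# Hypothetical costs: treatment units cost 4x control units.
n_t, n_c = allocate(budget=800.0, cost_treatment=4.0, cost_control=1.0)
# Optimal ratio is sqrt(1/4) = 0.5: sample half as many treatment units.
```

With equal costs the rule reduces to the familiar balanced design (ratio 1).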
Modelling of vapour explosion in stratified geometrie
International Nuclear Information System (INIS)
Picchi, St.
1999-01-01
When a hot liquid comes into contact with a colder volatile liquid, one can obtain under some conditions an explosive vaporization, termed vapour explosion, whose consequences for neighbouring structures can be severe. This explosion requires intimate mixing and fine fragmentation of the two liquids. In a stratified vapour explosion, the two liquids are initially superposed and separated by a vapour film. Triggering the explosion can induce its propagation along the film. A study of experimental results and existing models led to the following main points: (i) the explosion propagation is due to a pressure wave propagating through the medium; (ii) the mixing is due to the development of Kelvin-Helmholtz instabilities induced by the shear velocity between the two liquids behind the pressure wave. The presence of vapour in the volatile liquid explains the experimental propagation velocity and the velocity difference between the two fluids at the pressure wave crossing. A first model was proposed by Brayer in 1994 to describe the fragmentation and mixing of the two fluids, but its results do not show explosion propagation. We have therefore built a new mixing-fragmentation model based on the atomization phenomenon that develops during the pressure wave crossing. We have also taken into account the transient aspect of the heat transfer between fuel drops and the volatile liquid, and elaborated a model of transient heat transfer. These two models have been introduced into MC3D, a multi-component thermal-hydraulic code. Calculation results show qualitative and quantitative agreement with experimental results and confirm the basic choices of the model. (author)
Talamo, Giampaolo; Mir Muhammad, A; Pandey, Manoj K; Zhu, Junjia; Creer, Michael H; Malysz, Jozef
2015-02-11
Measurement of daily proteinuria in patients with amyloidosis is recommended at the time of diagnosis for assessing renal involvement, and for monitoring disease activity. Renal involvement is usually defined by proteinuria >500 mg/day. We evaluated the accuracy of the random urine protein-to-creatinine ratio (Pr/Cr) in predicting 24 hour proteinuria in patient with amyloidosis. We compared results of random urine Pr/Cr ratio and concomitant 24-hour urine collections in 44 patients with amyloidosis. We found a strong correlation (Spearman's ρ=0.874) between the Pr/Cr ratio and the 24 hour urine protein excretion. For predicting renal involvement, the optimal cut-off point of the Pr/Cr ratio was 715 mg/g. The sensitivity and specificity for this point were 91.8% and 95.5%, respectively, and the area under the curve value was 97.4%. We conclude that the random urine Pr/Cr ratio could be useful in the screening of renal involvement in patients with amyloidosis. If validated in a prospective study, the random urine Pr/Cr ratio could replace the 24 hour urine collection for the assessment of daily proteinuria and presence of nephrotic syndrome in patients with amyloidosis.
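As a rough illustration of the statistics used in this record (Spearman correlation, and sensitivity/specificity at the 715 mg/g cut-off), the following sketch uses only the standard library and invented toy data, not the study's patient data:

```python
from statistics import mean

def ranks(values):
    """Average ranks (ties share the mean rank), 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            out[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return out

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def sens_spec(pr_cr, renal_involvement, cutoff=715.0):
    """Sensitivity/specificity of the Pr/Cr ratio (mg/g) at a cut-off."""
    tp = sum(1 for v, d in zip(pr_cr, renal_involvement) if d and v >= cutoff)
    fn = sum(1 for v, d in zip(pr_cr, renal_involvement) if d and v < cutoff)
    tn = sum(1 for v, d in zip(pr_cr, renal_involvement) if not d and v < cutoff)
    fp = sum(1 for v, d in zip(pr_cr, renal_involvement) if not d and v >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)
```

The study's optimal cut-off was presumably chosen by scanning candidate cut-offs and maximizing an accuracy criterion along the ROC curve.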
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences between sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
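The probability methods named in this record differ mainly in how the random draw is organized. A minimal sketch of three of them (simple random, systematic, and proportionally allocated stratified sampling); the population and strata below are illustrative only:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Each element has an equal, independent chance of selection."""
    return random.Random(seed).sample(population, n)

def systematic_sample(population, n, seed=0):
    """Random start, then every k-th element (k = N // n)."""
    k = len(population) // n
    start = random.Random(seed).randrange(k)
    return [population[start + i * k] for i in range(n)]

def stratified_sample(population, stratum_of, n, seed=0):
    """Proportional-allocation stratified sample; stratum_of maps an
    element to its stratum label (e.g. an age band or clinic site)."""
    rng = random.Random(seed)
    strata = {}
    for el in population:
        strata.setdefault(stratum_of(el), []).append(el)
    sample = []
    for members in strata.values():
        share = max(1, round(n * len(members) / len(population)))
        sample.extend(rng.sample(members, min(share, len(members))))
    return sample

patients = list(range(100))                    # illustrative sampling frame
srs = simple_random_sample(patients, 10)
sys_ = systematic_sample(patients, 10)
strat = stratified_sample(patients, lambda p: p // 25, 20)  # four equal strata
```

Cluster and multi-stage sampling follow the same pattern but draw whole groups (e.g. hospitals) first, then sample within them.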
Toward cost-efficient sampling methods
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
The sampling method has been paid much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small set of high-degree vertices can carry most of the structural information of a complex network. The two proposed methods sample high-degree nodes efficiently, so that they remain useful even when the sampling rate is low, i.e., they are cost-efficient. The first new method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new methods, we compare them with existing sampling methods on three commonly used simulated networks (scale-free, random, and small-world) and on two real networks. The experimental results illustrate that the two proposed methods recover the true network structure characteristics, as reflected by clustering coefficient, Bonacich centrality, and average path length, much better than existing methods, especially when the sampling rate is low.
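The two ideas in this record, snowball recruitment and stratifying on degree to capture hubs, can be sketched as follows. The hub cut-off and 50/50 budget split are illustrative assumptions, not the paper's exact parameters:

```python
import random

def snowball_sample(adj, seeds, waves, k, seed=0):
    """Classic snowball sampling: from each node in the current wave,
    follow up to k randomly chosen unseen neighbours."""
    rng = random.Random(seed)
    sampled, frontier = set(seeds), list(seeds)
    for _ in range(waves):
        nxt = []
        for node in frontier:
            fresh = [v for v in adj[node] if v not in sampled]
            for v in rng.sample(fresh, min(k, len(fresh))):
                sampled.add(v)
                nxt.append(v)
        frontier = nxt
    return sampled

def degree_stratified_sample(adj, n, hub_fraction=0.2, seed=0):
    """Sketch of the degree-based idea: stratify on degree and
    oversample the hub stratum."""
    rng = random.Random(seed)
    nodes = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    cut = max(1, int(len(nodes) * hub_fraction))
    hubs, rest = nodes[:cut], nodes[cut:]
    n_hubs = min(len(hubs), n // 2)            # half the budget to hubs
    return rng.sample(hubs, n_hubs) + rng.sample(rest, min(n - n_hubs, len(rest)))

# A small star-plus-tail graph for illustration.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}
sb = snowball_sample(adj, seeds=[0], waves=2, k=4)   # reaches all 6 nodes
ds = degree_stratified_sample(adj, n=4)              # always includes hub 0
```

On scale-free networks the hub stratum is tiny yet dominates connectivity, which is why degree-aware designs stay accurate at low sampling rates.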
Brus, D.J.; Saby, N.P.A.
2016-01-01
In France like in many other countries, the soil is monitored at the locations of a regular, square grid thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based
International Nuclear Information System (INIS)
Wandiga, S.O.; Jumba, I.O.
1982-01-01
An intercomparative analysis of the concentrations of the heavy metals zinc, cadmium, lead, copper, mercury, iron and calcium in head hair of a randomly selected sample of Kenyan people was undertaken using atomic absorption spectrophotometry (AAS) and differential pulse anodic stripping voltammetry (DPAS). The percent relative standard deviation for each sample analysed shows good sensitivity and good correlation between the techniques. DPAS was found to be slightly more sensitive than the AAS instrument used. The recalculated body burden ratios of Cd to Zn and Pb to Fe reveal no unusual health impairment symptoms and suggest a relatively clean environment in Kenya. (author)
Aligning the Economic Value of Companion Diagnostics and Stratified Medicines
Directory of Open Access Journals (Sweden)
Edward D. Blair
2012-11-01
Full Text Available The twin forces of payors seeking fair pricing and the rising costs of developing new medicines has driven a closer relationship between pharmaceutical companies and diagnostics companies, because stratified medicines, guided by companion diagnostics, offer better commercial, as well as clinical, outcomes. Stratified medicines have created clinical success and provided rapid product approvals, particularly in oncology, and indeed have changed the dynamic between drug and diagnostic developers. The commercial payback for such partnerships offered by stratified medicines has been less well articulated, but this has shifted as the benefits in risk management, pricing and value creation for all stakeholders become clearer. In this larger healthcare setting, stratified medicine provides both physicians and patients with greater insight on the disease and provides rationale for providers to understand cost-effectiveness of treatment. This article considers how the economic value of stratified medicine relationships can be recognized and translated into better outcomes for all healthcare stakeholders.
Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L
2002-08-19
We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.
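The comparisons in this record are differences between two proportions. A simple pooled two-proportion z-test reproduces the flavour of such comparisons; the sample sizes below are invented, and the actual survey used a cluster design for which this simple test is only an approximation:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided pooled z-test for a difference between proportions.
    Note: ignores any cluster design effect, so it understates the
    variance for cluster surveys like the one in this record."""
    x1, x2 = round(p1 * n1), round(p2 * n2)
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Rubella immunity, primary school: 98.7% vs 93.6% (sample sizes invented).
z, p = two_proportion_z(0.987, 300, 0.936, 300)
```

A design-based analysis would inflate the standard error by the design effect of the cluster sample before computing z.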
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly degrade the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the move of the collective variable, is then likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
Kashdan, Todd B; Farmer, Antonina S
2014-06-01
The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and these differences would translate to impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or to comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point of time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning.
Exploring pseudo- and chaotic random Monte Carlo simulations
Blais, J. A. Rod; Zhang, Zhan
2011-07-01
Computer simulations are an increasingly important area of geoscience research and development. At the core of stochastic or Monte Carlo simulations are the random number sequences that are assumed to be distributed with specific characteristics. Computer-generated random numbers, uniformly distributed on (0, 1), can be very different depending on the selection of pseudo-random number (PRN) or chaotic random number (CRN) generators. In the evaluation of some definite integrals, the resulting error variances can even be of different orders of magnitude. Furthermore, practical techniques for variance reduction such as importance sampling and stratified sampling can be applied in most Monte Carlo simulations and significantly improve the results. A comparative analysis of these strategies has been carried out for computational applications in planar and spatial contexts. Based on these experiments, and on some practical examples of geodetic direct and inverse problems, conclusions and recommendations concerning their performance and general applicability are included.
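The variance-reduction claim for stratified sampling is easy to verify on a definite integral. The sketch below compares plain Monte Carlo against stratification of [0, 1] into equal-width strata with one draw each; the integrand is an arbitrary illustration:

```python
import random

def plain_mc(f, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(f, n, rng):
    """One uniform draw in each of n equal-width strata of [0, 1];
    this removes the between-stratum component of the variance."""
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

def estimator_variance(estimator, f, n, reps, seed=0):
    """Empirical variance of an estimator over repeated runs."""
    rng = random.Random(seed)
    vals = [estimator(f, n, rng) for _ in range(reps)]
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / reps

f = lambda x: x * x          # illustrative integrand; true value is 1/3
v_plain = estimator_variance(plain_mc, f, n=100, reps=200)
v_strat = estimator_variance(stratified_mc, f, n=100, reps=200)
# v_strat is orders of magnitude smaller than v_plain.
```

Importance sampling gives a similar gain by drawing from a density proportional to |f| rather than by partitioning the domain.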
Woodall, W Gill; Delaney, Harold D; Kunitz, Stephen J; Westerberg, Verner S; Zhao, Hongwei
2007-06-01
Randomized trial evidence on the effectiveness of incarceration and treatment of first-time driving while intoxicated (DWI) offenders who are primarily American Indian has yet to be reported in the literature on DWI prevention. Further, research has confirmed the association of antisocial personality disorder (ASPD) with problems with alcohol including DWI. A randomized clinical trial was conducted, in conjunction with 28 days of incarceration, of a treatment program incorporating motivational interviewing principles for first-time DWI offenders. The sample of 305 offenders including 52 diagnosed as ASPD by the Diagnostic Interview Schedule were assessed before assignment to conditions and at 6, 12, and 24 months after discharge. Self-reported frequency of drinking and driving as well as various measures of drinking over the preceding 90 days were available at all assessments for 244 participants. Further, DWI rearrest data for 274 participants were available for analysis. Participants randomized to receive the first offender incarceration and treatment program reported greater reductions in alcohol consumption from baseline levels when compared with participants who were only incarcerated. Antisocial personality disorder participants reported heavier and more frequent drinking but showed significantly greater declines in drinking from intake to posttreatment assessments. Further, the treatment resulted in larger effects relative to the control on ASPD than non-ASPD participants. Nonconfrontational treatment may significantly enhance outcomes for DWI offenders with ASPD when delivered in an incarcerated setting, and in the present study, such effects were found in a primarily American-Indian sample.
Burt, Richard D; Thiede, Hanne
2014-11-01
Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%). The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.
Smart, S.M.; Clarke, R.T.; Poll, van de H.M.; Robertson, E.J.; Shield, E.R.; Bunce, R.G.H.; Maskell, L.C.
2003-01-01
Patterns of vegetation across Great Britain (GB) between 1990 and 1998 were quantified based on an analysis of plant species data from a total of 9596 fixed plots. Plots were established on a stratified random basis within 501 1 km sample squares located as part of the Countryside Survey of GB.
Energy Technology Data Exchange (ETDEWEB)
Muetzell, S. (Univ. Hospital of Uppsala (Sweden). Dept. of Family Medicine)
1992-01-01
Computed tomography (CT) of the brain was performed in a random sample of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years, drawn from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed, and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption and use of hepatotoxic drugs; IIA, alcoholic inpatients using alcohol but no drugs; and IIB, alcoholic inpatients using both alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and the two differed only in drug use. Groups IIB and IIA also differed only in drug use, and IIB had a higher incidence of brain damage on all measures except the anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD, and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except for the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut high-grade cortical changes) and also of subcortical aberrations, expressed as an increased widening of the third ventricle.
LENUS (Irish Health Repository)
Billington, Jennifer
2012-08-07
Background: The STRATIFY score is a clinical prediction rule (CPR) derived to assist clinicians to identify patients at risk of falling. The purpose of this systematic review and meta-analysis is to determine the overall diagnostic accuracy of the STRATIFY rule across a variety of clinical settings. Methods: A literature search was performed to identify all studies that validated the STRATIFY rule. The methodological quality of the studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. A STRATIFY score of ≥2 points was used to identify individuals at higher risk of falling. All included studies were combined using a bivariate random effects model to generate pooled sensitivity and specificity of STRATIFY at ≥2 points. Heterogeneity was assessed using the variance of logit-transformed sensitivity and specificity. Results: Seventeen studies were included in our meta-analysis, incorporating 11,378 patients. At a score ≥2 points, the STRATIFY rule is more useful at ruling out falls in those classified as low risk, with a greater pooled sensitivity estimate (0.67, 95% CI 0.52-0.80) than specificity (0.57, 95% CI 0.45-0.69). The sensitivity analysis, which examined the performance of the rule in different settings and subgroups, also showed broadly comparable results, indicating that the STRATIFY rule performs in a similar manner across a variety of different 'at risk' patient groups in different clinical settings. Conclusion: This systematic review shows that the diagnostic accuracy of the STRATIFY rule is limited and should not be used in isolation for identifying individuals at high risk of falls in clinical practice.
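The pooled estimates quoted in this record are sensitivity and specificity, which come from 2x2 tables of rule classification against observed falls. A minimal sketch with illustrative counts (not data from the meta-analysis):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity of a rule (e.g. STRATIFY >= 2)
    against the observed outcome (fall / no fall)."""
    sensitivity = tp / (tp + fn)   # fallers correctly flagged as high risk
    specificity = tn / (tn + fp)   # non-fallers correctly flagged as low risk
    return sensitivity, specificity

# Illustrative counts only, chosen to echo the pooled estimates' magnitude.
sens, spec = diagnostic_accuracy(tp=67, fp=43, fn=33, tn=57)
# sens = 0.67, spec = 0.57
```

The bivariate random effects model in the review pools logit(sensitivity) and logit(specificity) jointly across studies, accounting for their correlation; the single-table calculation above is just the per-study building block.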
Jackson, George L.; Weinberger, Morris; Kirshner, Miriam A.; Stechuchak, Karen M.; Melnyk, Stephanie D.; Bosworth, Hayden B.; Coffman, Cynthia J.; Neelon, Brian; Van Houtven, Courtney; Gentry, Pamela W.; Morris, Isis J.; Rose, Cynthia M.; Taylor, Jennifer P.; May, Carrie L.; Han, Byungjoo; Wainwright, Christi; Alkon, Aviel; Powell, Lesa; Edelman, David
2016-01-01
Despite the availability of efficacious treatments, only half of patients with hypertension achieve adequate blood pressure (BP) control. This paper describes the protocol and baseline subject characteristics of a 2-arm, 18-month randomized clinical trial of titrated disease management (TDM) for patients with pharmaceutically-treated hypertension for whom systolic blood pressure (SBP) is not controlled (≥140 mmHg for non-diabetic or ≥130 mmHg for diabetic patients). The trial is being conducted among patients of four clinic locations associated with a Veterans Affairs Medical Center. The intervention arm uses a TDM strategy in which patients' hypertension control at baseline, 6, and 12 months determines the resource intensity of disease management. Intensity levels include: a low-intensity strategy utilizing a licensed practical nurse to provide bi-monthly, non-tailored behavioral support calls to patients whose SBP comes under control; a medium-intensity strategy utilizing a registered nurse to provide monthly tailored behavioral support telephone calls plus home BP monitoring; and a high-intensity strategy utilizing a pharmacist to provide monthly tailored behavioral support telephone calls, home BP monitoring, and pharmacist-directed medication management. Control arm patients receive the low-intensity strategy regardless of BP control. The primary outcome is SBP. The 385 randomized veterans (192 intervention; 193 control) are predominantly older (mean age 63.5 years) men (92.5%); 61.8% are African American, and the mean baseline SBP for all subjects is 143.6 mmHg. This trial will determine if a disease management program that is titrated by matching the intensity of resources to patients' BP control leads to superior outcomes compared to a low-intensity management strategy. PMID:27417982
The stratified H-index makes scientific impact transparent
DEFF Research Database (Denmark)
Würtz, Morten; Schmidt, Morten
2017-01-01
The H-index is widely used to quantify and standardize researchers' scientific impact. However, the H-index does not account for the fact that co-authors rarely contribute equally to a paper. Accordingly, we propose the use of a stratified H-index to measure scientific impact. The stratified H-index supplements the conventional H-index with three separate H-indices: one for first authorships, one for second authorships, and one for last authorships. The stratified H-index takes scientific output, quality, and individual author contribution into account.
Billong, Serge Clotaire; Fokam, Joseph; Penda, Calixte Ida; Amadou, Salmon; Kob, David Same; Billong, Edson-Joan; Colizzi, Vittorio; Ndjolo, Alexis; Bisseck, Anne-Cecile Zoung-Kani; Elat, Jean-Bosco Nfetam
2016-11-15
Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, mastering patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we thus aimed to ascertain the national performance of, and determinants in, retention on ART at 12 months. Using a systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon, within the "reporting period" of October 2013-November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (85-75%), or poor (<75%). The sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention perspectives.
Burger, Rulof P; McLaren, Zoë M
2017-09-01
The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.
DEFF Research Database (Denmark)
Puri, Rajesh; Vilmann, Peter; Saftoiu, Adrian
2009-01-01
). The samples were characterized for cellularity and bloodiness, with a final cytology diagnosis established blindly. The final diagnosis was reached either by EUS-FNA if malignancy was definite, or by surgery and/or clinical follow-up of a minimum of 6 months in the cases of non-specific benign lesions...
Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P
2008-10-21
Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well.
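The extrapolation step described above, converting Purkinje cells counted within systematically sampled volumes into a whole-cerebellum estimate via a density, can be sketched as follows. The counts, sample volumes, and reference volume below are hypothetical illustrations, not the study's data.

```python
def estimate_total_count(counts, sample_volumes_cm3, reference_volume_cm3):
    """Extrapolate a total cell number from systematic samples:
    density = total cells counted / total volume sampled, then
    N_total = density * reference (isolated layer) volume."""
    density = sum(counts) / sum(sample_volumes_cm3)
    return density, density * reference_volume_cm3

# Hypothetical data: 10 systematic samples of 1e-4 cm^3 each, and a
# 100 cm^3 reference volume for the isolated Purkinje cell layer region
counts = [27, 31, 24, 29, 26, 30, 25, 28, 27, 29]
density, total = estimate_total_count(counts, [1e-4] * 10, 100.0)
```

With these invented numbers the density lands near the 170,000-270,000 cells/cm^3 range the abstract reports, which is only a plausibility check on the arithmetic, not a reproduction of the study.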
Directory of Open Access Journals (Sweden)
Karunamuni Nandini
2008-12-01
Full Text Available Abstract Background: Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective: The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods: A total of 244 individuals were recruited through a random national sample which was created by generating a random list of household phone numbers. The list was proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results: TPB explained 10% and 8% of the variance respectively for aerobic PA and resistance training; and accounted for 39% and 45% of the variance respectively for aerobic PA and resistance training intentions. Conclusion: These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.
Sluis-Cremer, G. K.; Walters, L. G.; Sichel, H. S.
1967-01-01
The ventilatory capacity of a random sample of men over the age of 35 years in the town of Carletonville was estimated by the forced expiratory volume and the peak expiratory flow rate. Five hundred and sixty-two persons were working or had worked in gold-mines and 265 had never worked in gold-mines. No difference in ventilatory function was found between the miners and non-miners other than that due to the excess of chronic bronchitis in miners. PMID:6017134
Job strain and resting heart rate: a cross-sectional study in a Swedish random working sample
Directory of Open Access Journals (Sweden)
Peter Eriksson
2016-03-01
Full Text Available Abstract Background: Numerous studies have reported an association between stressful work conditions and cardiovascular disease. However, more evidence is needed, and the etiological mechanisms are unknown. Elevated resting heart rate has emerged as a possible risk factor for cardiovascular disease, but little is known about the relation to work-related stress. This study therefore investigated the association between job strain, job control, and job demands and resting heart rate. Methods: We conducted a cross-sectional survey of randomly selected men and women in Västra Götalandsregionen, Sweden (the west county of Sweden; n = 1552). Information about job strain, job demands, job control, heart rate and covariates was collected during the period 2001–2004 as part of the INTERGENE/ADONIX research project. Six different linear regression models were used with adjustments for gender, age, BMI, smoking, education, and physical activity in the fully adjusted model. Job strain was operationalized as the log-transformed ratio of job demands over job control in the statistical analyses. Results: No associations were seen between resting heart rate and job demands. Job strain was associated with elevated resting heart rate in the unadjusted model (linear regression coefficient 1.26, 95% CI 0.14 to 2.38), but not in any of the extended models. Low job control was associated with elevated resting heart rate after adjustments for gender, age, BMI, and smoking (linear regression coefficient −0.18, 95% CI −0.30 to −0.02). However, there were no significant associations in the fully adjusted model. Conclusions: Low job control and job strain, but not job demands, were associated with elevated resting heart rate. However, the observed associations were modest and may be explained by confounding effects.
Taylor, C. Barr; Kass, Andrea E.; Trockel, Mickey; Cunning, Darby; Weisman, Hannah; Bailey, Jakki; Sinton, Meghan; Aspen, Vandana; Schecthman, Kenneth; Jacobi, Corinna; Wilfley, Denise E.
2015-01-01
Objective Eating disorders (EDs) are serious problems among college-age women and may be preventable. An indicated on-line eating disorder (ED) intervention, designed to reduce ED and comorbid pathology, was evaluated. Method 206 women (M age = 20 ± 1.8 years; 51% White/Caucasian, 11% African American, 10% Hispanic, 21% Asian/Asian American, 7% other) at very high risk for ED onset (i.e., with high weight/shape concerns plus a history of being teased, current or lifetime depression, and/or non-clinical levels of compensatory behaviors) were randomized to a 10-week, Internet-based, cognitive-behavioral intervention or wait-list control. Assessments included the Eating Disorder Examination (EDE to assess ED onset), EDE-Questionnaire, Structured Clinical Interview for DSM Disorders, and Beck Depression Inventory-II. Results ED attitudes and behaviors improved more in the intervention than control group (p = 0.02, d = 0.31); although ED onset rate was 27% lower, this difference was not significant (p = 0.28, NNT = 15). In the subgroup with highest shape concerns, ED onset rate was significantly lower in the intervention than control group (20% versus 42%, p = 0.025, NNT = 5). For the 27 individuals with depression at baseline, depressive symptomatology improved more in the intervention than control group (p = 0.016, d = 0.96); although ED onset rate was lower in the intervention than control group, this difference was not significant (25% versus 57%, NNT = 4). Conclusions An inexpensive, easily disseminated intervention might reduce ED onset among those at highest risk. Low adoption rates need to be addressed in future research. PMID:26795936
Taylor, C Barr; Kass, Andrea E; Trockel, Mickey; Cunning, Darby; Weisman, Hannah; Bailey, Jakki; Sinton, Meghan; Aspen, Vandana; Schecthman, Kenneth; Jacobi, Corinna; Wilfley, Denise E
2016-05-01
Eating disorders (EDs) are serious problems among college-age women and may be preventable. An indicated online eating disorder (ED) intervention, designed to reduce ED and comorbid pathology, was evaluated. 206 women (M age = 20 ± 1.8 years; 51% White/Caucasian, 11% African American, 10% Hispanic, 21% Asian/Asian American, 7% other) at very high risk for ED onset (i.e., with high weight/shape concerns plus a history of being teased, current or lifetime depression, and/or nonclinical levels of compensatory behaviors) were randomized to a 10-week, Internet-based, cognitive-behavioral intervention or waitlist control. Assessments included the Eating Disorder Examination (EDE, to assess ED onset), EDE-Questionnaire, Structured Clinical Interview for DSM Disorders, and Beck Depression Inventory-II. ED attitudes and behaviors improved more in the intervention than control group (p = .02, d = 0.31); although ED onset rate was 27% lower, this difference was not significant (p = .28, NNT = 15). In the subgroup with highest shape concerns, ED onset rate was significantly lower in the intervention than control group (20% vs. 42%, p = .025, NNT = 5). For the 27 individuals with depression at baseline, depressive symptomatology improved more in the intervention than control group (p = .016, d = 0.96); although ED onset rate was lower in the intervention than control group, this difference was not significant (25% vs. 57%, NNT = 4). An inexpensive, easily disseminated intervention might reduce ED onset among those at highest risk. Low adoption rates need to be addressed in future research. (c) 2016 APA, all rights reserved).
Directory of Open Access Journals (Sweden)
Hallett Jonathan
2012-01-01
Full Text Available Abstract Background: There is considerable interest in university student hazardous drinking among the media and policy makers. However there have been no population-based studies in Australia to date. We sought to estimate the prevalence and correlates of hazardous drinking and secondhand effects among undergraduates at a Western Australian university. Method: We invited 13,000 randomly selected undergraduate students from a commuter university in Australia to participate in an online survey of university drinking. Responses were received from 7,237 students (56%), who served as participants in this study. Results: Ninety percent had consumed alcohol in the last 12 months and 34% met criteria for hazardous drinking (AUDIT score ≥ 8 and greater than 6 standard drinks in one sitting in the previous month). Men and Australian/New Zealand residents had significantly increased odds (OR: 2.1; 95% CI: 1.9-2.3 and OR: 5.2; 95% CI: 4.4-6.2, respectively) of being categorised as dependent (AUDIT score 20 or over) than women and non-residents. In the previous 4 weeks, 13% of students had been insulted or humiliated and 6% had been pushed, hit or otherwise assaulted by others who were drinking. One percent of respondents had experienced sexual assault in this time period. Conclusions: Half of men and over a third of women were drinking at hazardous levels and a relatively large proportion of students were negatively affected by their own and other students' drinking. There is a need for intervention to reduce hazardous drinking early in university participation. Trial registration: ACTRN12608000104358
Castellano, Sergio; Cermelli, Paolo
2011-04-07
Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
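The integrate-noisy-evidence-to-a-stopping-criterion process described in this abstract resembles classic sequential-sampling (drift-diffusion) models. A minimal simulation sketch of that mechanism, with all parameter values assumed for illustration rather than taken from the paper:

```python
import random

def assess_mate(quality, threshold, noise=1.0, rng=None):
    """Accumulate noisy evidence about mate quality until it crosses
    +threshold (accept) or -threshold (reject).
    Returns (accepted, number of evidence samples taken)."""
    rng = rng or random.Random()
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += quality + rng.gauss(0.0, noise)  # drift + noise per sample
        steps += 1
    return evidence >= threshold, steps

def accept_rate(quality, threshold, trials=500, seed=42):
    rng = random.Random(seed)
    return sum(assess_mate(quality, threshold, rng=rng)[0]
               for _ in range(trials)) / trials

# Speed-accuracy tradeoff: a stricter stopping criterion (higher threshold)
# makes accept decisions about a good mate (positive quality) more reliable,
# at the cost of a longer evaluation time per mate.
fast = accept_rate(0.2, threshold=1.0)  # quick, error-prone
slow = accept_rate(0.2, threshold=5.0)  # slow, more accurate
```

This captures the model's core tradeoff: time spent evaluating each mate (individual-assessment accuracy) can be traded against the number of mates sampled (population-sampling accuracy) under a fixed time budget.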
Sulaiman, Nabil; Albadawi, Salah; Abusnana, Salah; Fikri, Mahmoud; Madani, Abdulrazzag; Mairghani, Maisoon; Alawadi, Fatheya; Zimmet, Paul; Shaw, Jonathan
2015-09-01
The prevalence of diabetes has risen rapidly in the Middle East, particularly in the Gulf Region. However, some prevalence estimates have not fully accounted for large migrant worker populations and have focused on minority indigenous populations. The objectives of the UAE National Diabetes and Lifestyle Study are to: (i) define the prevalence of, and risk factors for, T2DM; (ii) describe the distribution and determinants of T2DM risk factors; (iii) study health knowledge, attitudes, and (iv) identify gene-environment interactions; and (v) develop baseline data for evaluation of future intervention programs. Given the high burden of diabetes in the region and the absence of accurate data on non-UAE nationals in the UAE, a representative sample of the non-UAE nationals was essential. We used an innovative methodology in which non-UAE nationals were sampled when attending the mandatory biannual health check that is required for visa renewal. Such an approach could also be used in other countries in the region. Complete data were available for 2719 eligible non-UAE nationals (25.9% Arabs, 70.7% Asian non-Arabs, 1.1% African non-Arabs, and 2.3% Westerners). Most were men < 65 years of age. The response rate was 68%, and the non-response was greater among women than men; 26.9% earned less than UAE Dirham (AED) 24 000 (US$6500) and the most common areas of employment were as managers or professionals, in service and sales, and unskilled occupations. Most (37.4%) had completed high school and 4.1% had a postgraduate degree. This novel methodology could provide insights for epidemiological studies in the UAE and other Gulf States, particularly for expatriates. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
Directory of Open Access Journals (Sweden)
Smedslund Geir
2013-02-01
Full Text Available Abstract Background: Patient-reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings: In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n = 71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α = .05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, from 96 to 71 (73) for fatigue, from 57 to 51 (48) for disease activity, from 59 to 44 (45) for self-care, and from 47 to 37 (33) for emotional wellbeing. Conclusions: Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
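The size of the reduction from averaging repeated measures depends on how correlated the measurements are. Under a compound-symmetry assumption (not stated in the abstract; the correlation ρ = 0.7 below is an illustrative guess, chosen because it reproduces reductions of roughly the reported magnitude), the required sample size scales as follows:

```python
def n_per_group(n_single, k, rho):
    """Required n per group when the analyzed outcome is the mean of k
    repeated measures with pairwise correlation rho (compound symmetry):
    Var(mean of k measures) = sigma^2 * (1 + (k - 1) * rho) / k,
    and required n scales proportionally to that variance."""
    return n_single * (1 + (k - 1) * rho) / k

# Fractional reduction in required n relative to a single measurement
reductions = {k: 1 - n_per_group(1.0, k, rho=0.7) for k in (2, 3, 4, 5)}
```

With ρ = 0.7 this gives reductions of 15%, 20%, 22.5%, and 24% for two to five measures, close to the 15%, 21%, 24%, and 27% averages reported, so the compound-symmetry sketch is a reasonable first-pass planning tool even though the trial's actual correlation structure surely differs by outcome.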
Energy Technology Data Exchange (ETDEWEB)
Donner, P.
2016-07-01
Citation counts of scientific research contributions are fundamental data in scientometrics. Accuracy and completeness of citation links are therefore crucial data quality issues (Moed, 2005, Ch. 13). However, despite the known flaws of reference matching algorithms, usually no attempts are made to incorporate uncertainty about citation counts into indicators. This study is a step towards that goal. Particular attention is paid to the question whether publications from countries not using basic Latin script are differently affected by missed citations. The proprietary reference matching procedure of Web of Science (WoS) is based on (near) exact agreement of cited reference data (normalized during processing) to the target paper's bibliographical data. Consequently, the procedure has near-optimal precision but incomplete recall - it is known to miss some slightly inaccurate reference links (Olensky, 2015). However, there has been no attempt so far to estimate the rate of missed citations by a principled method for a random sample. For this study a simple random sample of WoS source papers was drawn and it was attempted to find all reference strings of WoS-indexed documents that refer to them, in particular inexact matches. The objective is to give a statistical estimate of the proportion of missed citations and to describe the relationship of the number of found citations to the number of missed citations, i.e. the conditional error distribution. The empirical error distribution is statistically analyzed and modelled. (Author)
International Nuclear Information System (INIS)
Gogolak, C.V.
1986-11-01
The concentration of a contaminant measured in a particular medium might be distributed as a positive random variable when it is present, but it may not always be present. If there is a level below which the concentration cannot be distinguished from zero by the analytical apparatus, a sample from such a population will be censored on the left. The presence of both zeros and positive values in the censored portion of such samples complicates the problem of estimating the parameters of the underlying positive random variable and the probability of a zero observation. Using the method of maximum likelihood, it is shown that the solution to this estimation problem reduces largely to that of estimating the parameters of the distribution truncated at the point of censorship. The maximum likelihood estimate of the proportion of zero values follows directly. The derivation of the maximum likelihood estimates for a lognormal population with zeros is given in detail, and the asymptotic properties of the estimates are examined. The estimation method was used to fit several different distributions to a set of severely censored 85Kr monitoring data from six locations at the Savannah River Plant chemical separations facilities.
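The estimation problem described, a lognormal population with a point mass at zero observed with left-censoring at a detection limit, can be sketched numerically. This is a generic direct maximum-likelihood sketch under invented parameter values, not the report's derivation (which works through the truncated distribution analytically): a censored observation is either a true zero (probability p) or a lognormal value below the limit.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_censored_lognormal_with_zeros(observed, n_censored, limit):
    """ML fit of (p_zero, mu, sigma) for a zero-inflated lognormal from a
    left-censored sample: `observed` holds values above the detection
    limit; `n_censored` counts observations below it (zeros or small)."""
    logs = np.log(observed)

    def neg_ll(theta):
        p, mu, sigma = theta
        if not (0.0 < p < 1.0 and sigma > 0.0):
            return np.inf
        # P(censored) = p + (1 - p) * P(lognormal value < limit)
        cens_prob = p + (1 - p) * norm.cdf((np.log(limit) - mu) / sigma)
        ll = n_censored * np.log(cens_prob)
        ll += len(logs) * np.log(1 - p)
        # lognormal log-density: normal logpdf of log(x), minus log(x)
        ll += np.sum(norm.logpdf(logs, mu, sigma) - logs)
        return -ll

    res = minimize(neg_ll, x0=[0.1, np.mean(logs), np.std(logs)],
                   method="Nelder-Mead")
    return res.x

# Simulated check with assumed true values p=0.2, mu=0, sigma=1, limit=0.25
rng = np.random.default_rng(1)
raw = np.where(rng.random(3000) < 0.2, 0.0, rng.lognormal(0.0, 1.0, 3000))
obs = raw[raw >= 0.25]
p_hat, mu_hat, sig_hat = fit_censored_lognormal_with_zeros(
    obs, int((raw < 0.25).sum()), 0.25)
```

The zero proportion and the censored lognormal tail are only jointly identified through the shape of the observed part of the distribution, which is exactly why the paper's reduction to a truncated-distribution problem is useful.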
Clagett, Bartholt; Nathanson, Katherine L; Ciosek, Stephanie L; McDermoth, Monique; Vaughn, David J; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A
2013-12-01
Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18-55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be a comparably effective method of recruiting young males compared with landline RDD, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS.
Zhang, Xin; Wu, Yuxia; Ren, Pengwei; Liu, Xueting; Kang, Deying
2015-10-30
To explore the relationship between the external validity and the internal validity of hypertension RCTs conducted in China. Comprehensive literature searches were performed in Medline, Embase, Cochrane Central Register of Controlled Trials (CCTR), CBMdisc (Chinese biomedical literature database), CNKI (China National Knowledge Infrastructure/China Academic Journals Full-text Database) and VIP (Chinese scientific journals database) as well as advanced search strategies were used to locate hypertension RCTs. The risk of bias in RCTs was assessed by a modified scale, Jadad scale respectively, and then studies with 3 or more grading scores were included for the purpose of evaluating of external validity. A data extract form including 4 domains and 25 items was used to explore relationship of the external validity and the internal validity. Statistic analyses were performed by using SPSS software, version 21.0 (SPSS, Chicago, IL). 226 hypertension RCTs were included for final analysis. RCTs conducted in university affiliated hospitals (P internal validity. Multi-center studies (median = 4.0, IQR = 2.0) were scored higher internal validity score than single-center studies (median = 3.0, IQR = 1.0) (P internal validity (P = 0.004). Multivariate regression indicated sample size, industry-funding, quality of life (QOL) taken as measure and the university affiliated hospital as trial setting had statistical significance (P external validity of RCTs do associate with the internal validity, that do not stand in an easy relationship to each other. Regarding the poor reporting, other possible links between two variables need to trace in the future methodological researches.
Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M
2010-12-01
A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
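A closed-form approximation in this spirit converts the efficiency loss from varying cluster sizes into extra clusters. The formula below is the widely cited van Breukelen-Candel-Berger approximation, assumed here as a stand-in rather than taken from this paper's PQL-specific derivation; CV is the coefficient of variation of cluster size and ICC the intracluster correlation.

```python
import math

def relative_efficiency(mean_size, cv, icc):
    """Approximate efficiency of unequal vs equal cluster sizes:
    RE ~= 1 - CV^2 * lam * (1 - lam), lam = n*icc / (n*icc + 1 - icc)."""
    lam = mean_size * icc / (mean_size * icc + 1 - icc)
    return 1 - cv**2 * lam * (1 - lam)

def clusters_needed(k_equal, mean_size, cv, icc):
    """Inflate the equal-cluster-size design's cluster count to repair
    the efficiency loss caused by varying cluster sizes."""
    return math.ceil(k_equal / relative_efficiency(mean_size, cv, icc))

# Illustrative planning numbers (assumed): 40 clusters of ~20 subjects,
# cluster-size CV of 0.7, ICC of 0.05
k = clusters_needed(40, mean_size=20, cv=0.7, icc=0.05)
```

With these assumed inputs the inflation is about 15%, the same order as the paper's observation that sampling 14 per cent more clusters often suffices.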
Tadić, Bosiljka
2018-03-01
We study dynamics of a built-in domain wall (DW) in 2-dimensional disordered ferromagnets with different sample shapes using the random-field Ising model on a square lattice rotated by 45 degrees. The saw-tooth DW of length Lx is created along one side and swept through the sample by slow ramping of the external field until the complete magnetisation reversal and the wall annihilation at the open top boundary at a distance Ly. By fixing the number of spins N = Lx × Ly = 10^6 and the random-field distribution at a value above the critical disorder, we vary the ratio of the DW length to the annihilation distance in the range Lx/Ly ∈ [1/16, 16]. The periodic boundary conditions are applied in the y-direction so that these ratios comprise different samples, i.e., surfaces of cylinders with the changing perimeter Lx and height Ly. We analyse the avalanches of the DW slips between following field updates, and the multifractal structure of the magnetisation fluctuation time series. Our main findings are that the domain-wall lengths materialised in different sample shapes have an impact on the dynamics at all scales. Moreover, the domain-wall motion at the beginning of the hysteresis loop (HLB) probes the disorder effects resulting in fluctuations that are significantly different from the large avalanches in the central part of the loop (HLC), where the strong fields dominate. Specifically, the fluctuations in HLB exhibit a wide multifractal spectrum, which shifts towards higher values of the exponents when the DW length is reduced. The distributions of the avalanches in these segments of the loops obey power-law decay with exponential cutoffs, with exponents firmly in the mean-field universality class for long DWs. In contrast, the avalanches in the HLC obey a Tsallis density distribution with power-law tails which indicate new categories of scale-invariant behaviour for different ratios Lx/Ly. The large fluctuations in the HLC, on the other
Efficient Proportion Ratio Estimators for the Population Mean in Stratified Random Sampling
Maulana, Devri; Adnan, Arisman; Sirait, Haposan
2014-01-01
In this article we review three proportion ratio estimators for the population mean in stratified random sampling: the traditional proportion ratio estimator, the proportion ratio estimator using the coefficient of regression, and the proportion ratio estimator using the coefficients of regression and kurtosis, as discussed by Singh and Audu [5]. All three estimators are biased, so the mean square error of each estimator is determined. Furthermore, these mean square errors are compa...
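As a reminder of the baseline these estimators extend, here is a minimal sketch of the classical separate (per-stratum) ratio estimator on synthetic data. The proportion-based variants of Singh and Audu are not reproduced here; all strata sizes, slopes and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic strata: (sample size n_h, stratum size N_h, slope linking y to x).
strata = []
for n_h, N_h, slope in [(30, 300, 2.0), (50, 700, 3.0)]:
    x = rng.uniform(10, 20, N_h)            # auxiliary variable, known for the whole stratum
    y = slope * x + rng.normal(0, 1, N_h)   # study variable, observed only in the sample
    sample = rng.choice(N_h, n_h, replace=False)
    strata.append((N_h, x, y, sample))

N = sum(N_h for N_h, _, _, _ in strata)

def separate_ratio_estimate(strata, N):
    """Separate ratio estimator: sum over strata of W_h * ybar_h * (Xbar_h / xbar_h)."""
    total = 0.0
    for N_h, x, y, sample in strata:
        total += (N_h / N) * y[sample].mean() * x.mean() / x[sample].mean()
    return total

true_mean = sum(N_h / N * y.mean() for N_h, x, y, _ in strata)
print(round(separate_ratio_estimate(strata, N), 1))
```

Because y is nearly proportional to x within each stratum, the ratio adjustment X̄_h/x̄_h corrects most of the sampling error, which is exactly the situation in which ratio-type estimators beat the plain stratified mean.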
The effect of existing turbulence on stratified shear instability
Kaminski, Alexis; Smyth, William
2017-11-01
Ocean turbulence is an essential process governing, for example, heat uptake by the ocean. In the stably-stratified ocean interior, this turbulence occurs in discrete events driven by vertical variations of the horizontal velocity. Typically, these events have been modelled by assuming an initially laminar stratified shear flow which develops wavelike instabilities, becomes fully turbulent, and then relaminarizes into a stable state. However, in the real ocean there is always some level of turbulence left over from previous events, and it is not yet understood how this turbulence impacts the evolution of future mixing events. Here, we perform a series of direct numerical simulations of turbulent events developing in stratified shear flows that are already at least weakly turbulent. We do so by varying the amplitude of the initial perturbations, and examine the subsequent development of the instability and the impact on the resulting turbulent fluxes. This work is supported by NSF Grant OCE1537173.
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions, then the 2D non-equispaced fast Fourier transform (NFFT) is introduced in the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients for each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
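The transform-threshold-inverse pattern at the heart of the method can be illustrated compactly. The sketch below is a strongly simplified analogue, not the NFDCT itself: it substitutes a plain FFT for the non-equispaced curvelet transform and a single global quantile threshold for the per-scale local thresholds, on uniformly sampled 1-D data.

```python
import numpy as np

def threshold_denoise(signal, keep_fraction=0.05):
    """Keep only the largest-magnitude transform coefficients and zero the rest."""
    coeffs = np.fft.rfft(signal)
    cutoff = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    return np.fft.irfft(coeffs, n=len(signal))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                 # a single coherent "event"
noisy = clean + rng.normal(0.0, 0.5, t.size)      # additive random noise
denoised = threshold_denoise(noisy)

# thresholding in the transform domain should shrink the residual error
print(np.std(denoised - clean) < np.std(noisy - clean))
```

The curvelet version follows the same skeleton; what changes is the transform pair (NFFT-based forward, regularized inverse) and the choice of a local threshold per decomposition scale.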
Directory of Open Access Journals (Sweden)
Chunrong Mi
2017-01-01
Full Text Available Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (area under the ROC curve (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia
2017-01-01
Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane ( Grus monacha , n = 33), White-naped Crane ( Grus vipio , n = 40), and Black-necked Crane ( Grus nigricollis , n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (area under the ROC curve (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
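The ensemble step, averaging predicted probabilities and scoring with AUC, is easy to sketch. The toy below is illustrative only (synthetic "model" outputs standing in for TreeNet, Random Forest, CART and Maxent); the AUC is computed directly from the Mann-Whitney rank identity rather than a library call.

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney identity: P(random positive outranks random negative)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(7)
y = rng.integers(0, 2, 500)                        # presence/absence labels
# two imperfect, independent "models" (stand-ins for the four SDM algorithms)
p1 = np.clip(y + rng.normal(0.0, 0.8, y.size), 0.0, 1.0)
p2 = np.clip(y + rng.normal(0.0, 0.8, y.size), 0.0, 1.0)
ensemble = (p1 + p2) / 2.0                         # average of predicted probabilities

print(round(auc(y, ensemble), 2))
```

Averaging independent, equally informative probability surfaces reduces the noise in each prediction, which is why simple probability-mean ensembles often score at least as well as their best member.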
Energy Technology Data Exchange (ETDEWEB)
Padula, D.; Madigan, T.; Kiermeier, A.; Daughtry, B.; Pointon, A. [South Australian Research and Development Inst. (Australia)
2004-09-15
To date there has been no published information available on the levels of dioxin (PCDD/F) and PCBs in Australian aquaculture-produced Southern Bluefin Tuna (Thunnus maccoyii). Southern Bluefin Tuna are commercially farmed off the coast of Port Lincoln in the state of South Australia, Australia. This paper reports the levels of dioxin (PCDD/F) and PCBs in muscle tissue samples from 11 randomly sampled aquaculture-produced Southern Bluefin Tuna collected in 2003. Little published data exists on the levels of dioxin (PCDD/F) and PCBs in Australian aquaculture-produced seafood. Wild tuna are first caught in the Great Australian Bight in South Australian waters, and are then brought back to Port Lincoln where they are ranched in sea-cages before being harvested and exported to Japan. The aim of the study was to identify pathways whereby contaminants such as dioxin (PCDD/F) and PCBs may enter the aquaculture production system. This involved undertaking a through-chain analysis of the levels of dioxin (PCDD/F) and PCBs in wild-caught tuna, seafloor sediment samples from the marine environment, levels in feeds and final harvested exported product. Detailed study was also undertaken on the variation of dioxin (PCDD/F) and PCBs across individual tuna carcases. This paper addresses the levels found in final harvested product. Details on levels found in other studies will be published elsewhere shortly.
Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H
2017-02-01
We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
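A rough sense of how the clustered experimental arm inflates the required sample size can be had from a design-effect approximation. The sketch below is a back-of-the-envelope version, not the authors' derived formulae: it inflates the variance of the group-treated arm by 1 + (m − 1)ρ and plugs into the usual two-sample normal formula; all parameter values are illustrative.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, m, icc, alpha=0.05, power=0.80):
    """
    Approximate subjects per arm for a partially nested trial: the intervention
    is delivered in groups of size m (intraclass correlation icc), the control
    individually. The clustered arm's variance is inflated by the design effect
    1 + (m - 1) * icc; delta is the standardized effect size.
    """
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    deff = 1.0 + (m - 1.0) * icc
    return math.ceil((z_a + z_b) ** 2 * (deff + 1.0) / delta ** 2)

print(n_per_arm(delta=0.5, m=8, icc=0.05))   # clustering raises n above the unclustered case
```

Even a modest ICC of 0.05 in groups of eight noticeably increases the required n, which is why the differing correlation structures between arms must enter the sample size calculation.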
Sagheri, Darius; McLoughlin, Jacinta; Clarkson, John J
2007-01-01
To compare dental caries levels of schoolchildren stratified into different social classes whose domestic water supply had been fluoridated since birth (Dublin) with those living in an area where fluoridated salt was available (Freiburg). A representative, random sample of twelve-year-old children was examined and dental caries was recorded using World Health Organization criteria. A total of 699 twelve-year-old children were examined, 377 children in Dublin and 322 in Freiburg. In Dublin the mean decayed, missing, and filled permanent teeth (DMFT) score was 0.80 and in Freiburg it was 0.69. An examination of the distribution of the DMFT score revealed that it is highly positively skewed. For this reason this study provides summary analyses based on medians, inter-quartile ranges and nonparametric rank-sum tests. In both cities caries levels of children in social class 1 (highest) were considerably lower than in the other social classes, regardless of the fluoride intervention model used. The caries levels showed a reduced disparity between children in social class 2 (medium) and 3 (lowest) in Dublin compared with those in social class 2 and 3 in Freiburg. The evidence from this study confirmed that water fluoridation has reduced the gap in dental caries experience between medium and lower social classes in Dublin compared with the greater difference in caries experience between the equivalent social classes in Freiburg. The results from this study established the important role of salt fluoridation where water fluoridation is not feasible.
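For skewed count outcomes like DMFT, median and inter-quartile-range summaries are straightforward to compute. The snippet below uses hypothetical Poisson-distributed scores purely for illustration (not the study data); a rank-based test such as scipy.stats.mannwhitneyu would complete the comparison.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical, highly skewed DMFT-like counts (most children have few affected teeth)
dublin = rng.poisson(0.80, 377)
freiburg = rng.poisson(0.69, 322)

def summarize(scores):
    """Median and inter-quartile range: appropriate summaries for skewed counts."""
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    return {"median": med, "IQR": q3 - q1, "mean": scores.mean()}

print(summarize(dublin), summarize(freiburg))
```

With distributions this skewed, the mean is pulled by a small number of high-caries children, which is exactly why the study reports medians and rank-sum tests instead of t-tests.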
Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B
2018-05-01
The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke counts of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess its potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (the expected difference in parasite count between two sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK carried on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population. Hook capture therefore biased sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, whereas an underestimation of parasite burden could have had a production impact on the study population. In instances where fish populations and parasite burdens are more homogeneous, sampling error may be less severe. Sampling error when capturing fish
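The two repeatability measures used here have simple closed forms. The sketch below applies the Bland-Altman repeatability coefficient (1.96·√2 times the between-iteration SD) and the coefficient of variation to five hypothetical sampling iterations; the counts are invented for illustration.

```python
import numpy as np

def repeatability(counts):
    """
    Bland-Altman repeatability coefficient CR = 1.96 * sqrt(2) * SD of repeated
    measurements (the expected difference between two sampling iterations),
    and the coefficient of variation CV = SD / mean.
    """
    c = np.asarray(counts, dtype=float)
    sd = c.std(ddof=1)
    return 1.96 * np.sqrt(2.0) * sd, sd / c.mean()

# five hypothetical iterations of the pooled mean fluke count per fish
cr, cv = repeatability([12.0, 15.0, 9.0, 14.0, 11.0])
print(round(cr, 1), round(cv, 2))
```

A CR of this magnitude relative to the mean count is the kind of result that would make two consecutive monitoring rounds disagree noticeably even with no real change in burden.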
Kløvstad, Hilde; Natås, Olav; Tverdal, Aage; Aavitsland, Preben
2013-01-23
As most genital Chlamydia trachomatis infections are asymptomatic, many patients do not seek health care for testing. Infections remain undiagnosed and untreated. We studied whether screening with information and home sampling resulted in more young people getting tested, diagnosed and treated for chlamydia in the three months following the intervention compared to the current strategy of testing in the health care system. We conducted a population based randomized controlled trial among all persons aged 18-25 years in one Norwegian county (41 519 persons). 10 000 persons (intervention) received an invitation by mail with chlamydia information and a mail-back urine sampling kit. 31 519 persons received no intervention and continued with usual care (control). All samples from both groups were analysed in the same laboratory. Information on treatment was obtained from the Norwegian Prescription Database (NorPD). We estimated risk ratios and risk differences of being tested, diagnosed and treated in the intervention group compared to the control group. In the intervention group 16.5% got tested and in the control group 3.4%, risk ratio 4.9 (95% CI 4.5-5.2). The intervention led to 2.6 (95% CI 2.0-3.4) times as many individuals being diagnosed and 2.5 (95% CI 1.9-3.4) times as many individuals receiving treatment for chlamydia compared to no intervention in the three months following the intervention. In Norway, systematic screening with information and home sampling results in more young people being tested, diagnosed and treated for chlamydia in the three months following the intervention than the current strategy of testing in the health care system. However, the study has not established that the intervention will reduce the chlamydia prevalence or the risk of complications from chlamydia.
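The headline risk ratio can be reconstructed from the reported percentages. The sketch below uses event counts implied by 16.5% of 10 000 and 3.4% of 31 519 (the exact counts are not given in the abstract, so these are approximations) and a standard log-scale Wald interval.

```python
import math

def risk_ratio(a, n1, b, n2):
    """Risk ratio of event proportions a/n1 vs b/n2 with a 95% Wald CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
    return rr, lo, hi

# counts implied by the reported testing percentages in the two arms
rr, lo, hi = risk_ratio(1650, 10_000, 1072, 31_519)
print(round(rr, 1), round(lo, 1), round(hi, 1))
```

With these implied counts the calculation lands on the paper's reported 4.9 (95% CI 4.5-5.2), a useful sanity check on the arithmetic.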
Large eddy simulation of turbulent and stably-stratified flows
International Nuclear Information System (INIS)
Fallon, Benoit
1994-01-01
The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulation with the structure-function subgrid model, in both isothermal and stably-stratified configurations. Without stratification, the flow develops highly-distorted Kelvin-Helmholtz billows, undergoing helical pairing, with Λ-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more two-dimensional. We show how, with increasing stratification, shear-layer growth is frozen by inhibition first of the pairing process and then of the Kelvin-Helmholtz instabilities, and by the development of gravity waves or stable density interfaces. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation is demonstrated for industrial flows, by studying a complex stratified cavity. Temperature fluctuations are compared to experimental measurements. We also develop three-dimensional time-dependent animations, in order to understand and visualize turbulent interactions. (author) [fr]
Bacterial production, protozoan grazing, and mineralization in stratified Lake Vechten
Bloem, J.
1989-01-01
The role of heterotrophic nanoflagellates (HNAN, size 2-20 μm) in grazing on bacteria and mineralization of organic matter in stratified Lake Vechten was studied.
Quantitative effects of manipulation and fixation on HNAN were checked. Considerable losses were caused by
The dynamics of small inertial particles in weakly stratified turbulence
van Aartrijk, M.; Clercx, H.J.H.
We present an overview of a numerical study on the small-scale dynamics and the large-scale dispersion of small inertial particles in stably stratified turbulence. Three types of particles are examined: fluid particles, light inertial particles (with particle-to-fluid density ratio 1 ≤ ρp/ρf ≤ 25) and
Dispersion of (light) inertial particles in stratified turbulence
van Aartrijk, M.; Clercx, H.J.H.; Armenio, Vincenzo; Geurts, Bernardus J.; Fröhlich, Jochen
2010-01-01
We present a brief overview of a numerical study of the dispersion of particles in stably stratified turbulence. Three types of particles are examined: fluid particles, light inertial particles ($\rho_p/\rho_f = \mathcal{O}(1)$) and heavy inertial particles ($\rho_p/\rho_f \gg 1$). Stratification
Stability of Miscible Displacements Across Stratified Porous Media
Energy Technology Data Exchange (ETDEWEB)
Shariati, Maryam; Yortsos, Yanis C.
2000-09-11
This report studied macro-scale heterogeneity effects. Reflecting their importance, current simulation practices for flow and displacement in porous media are invariably based on heterogeneous permeability fields. Here, the focus was on a specific aspect of such problems, namely the stability of miscible displacements in stratified porous media, where the displacement is perpendicular to the direction of stratification.
On Internal Waves in a Density-Stratified Estuary
Kranenburg, C.
1991-01-01
In this article some field observations, made in recent years, of internal wave motions in a density-stratified estuary are presented. In order to facilitate the appreciation of the results, and to make some quantitative comparisons, the relevant theory is also summarized. Furthermore, the origins
FDTD scattered field formulation for scatterers in stratified dispersive media.
Olkkonen, Juuso
2010-03-01
We introduce a simple scattered field (SF) technique that enables finite difference time domain (FDTD) modeling of light scattering from dispersive objects residing in stratified dispersive media. The introduced SF technique is verified against the total field scattered field (TFSF) technique. As an application example, we study surface plasmon polariton enhanced light transmission through a 100 nm wide slit in a silver film.
Plane Stratified Flow in a Room Ventilated by Displacement Ventilation
DEFF Research Database (Denmark)
Nielsen, Peter Vilhelm; Nickel, J.; Baron, D. J. G.
2004-01-01
The air movement in the occupied zone of a room ventilated by displacement ventilation exists as a stratified flow along the floor. This flow can be radial or plane according to the number of wall-mounted diffusers and the room geometry. The paper addresses the situations where plane flow...
Dual Spark Plugs For Stratified-Charge Rotary Engine
Abraham, John; Bracco, Frediano V.
1996-01-01
Fuel efficiency of the stratified-charge, rotary, internal-combustion engine is increased by an improved design featuring dual spark plugs. The second spark plug ignites fuel on the upstream side of the main fuel injector, enabling faster burning and more nearly complete utilization of fuel.
Prognosis research strategy (PROGRESS) 4: Stratified medicine research
A. Hingorani (Aroon); D.A.W.M. van der Windt (Daniëlle); R.D. Riley (Richard); D. Abrams; K.G.M. Moons (Karel); E.W. Steyerberg (Ewout); S. Schroter (Sara); W. Sauerbrei (Willi); D.G. Altman (Douglas); H. Hemingway; A. Briggs (Andrew); N. Brunner; P. Croft (Peter); J. Hayden (Jill); P.A. Kyzas (Panayiotis); N. Malats (Núria); G. Peat; P. Perel (Pablo); I. Roberts (Ian); A. Timmis (Adam)
2013-01-01
In patients with a particular disease or health condition, stratified medicine seeks to identify those who will have the most clinical benefit or least harm from a specific treatment. In this article, the fourth in the PROGRESS series, the authors discuss why prognosis research should form
van Haren, H.
2015-01-01
The character of turbulent overturns in a weakly stratified deep-sea is investigated in some detail using 144 high-resolution temperature sensors at 0.7 m intervals, starting 5 m above the bottom. A 9-day, 1 Hz sampled record from the 912 m depth flat-bottom (<0.5% bottom-slope) mooring site in the
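Overturns in a moored temperature record like this are commonly characterised by Thorpe displacements: sort each profile into a statically stable order and record how far every sample moves. A minimal sketch, assuming temperature increases upward when stable and using the 0.7 m sensor spacing from the abstract:

```python
import numpy as np

def thorpe_displacements(temperature, dz=0.7):
    """
    Thorpe displacements for one vertical temperature profile (index 0 = bottom,
    stable when temperature increases upward). Each sample's displacement is
    how far it must move, in metres, to reach its statically stable position.
    """
    t = np.asarray(temperature, dtype=float)
    order = np.argsort(t, kind="stable")      # coldest (densest) sample goes to the bottom
    d = np.empty(t.size)
    d[order] = (np.arange(t.size) - order) * dz
    return d

# a single overturn: the 10.3 and 10.1 degC samples are swapped out of stable order
print(thorpe_displacements([10.0, 10.3, 10.1, 10.4]))
```

The rms of these displacements over an overturn is the Thorpe scale, the standard proxy for overturn size in records from densely spaced sensors such as those described above.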
Szirovicza, Leonóra; López, Pilar; Kopena, Renáta; Benkő, Mária; Martín, José; Pénzes, Judit J
2016-01-01
Here, we report the results of a large-scale PCR survey on the prevalence and diversity of adenoviruses (AdVs) in samples collected randomly from free-living reptiles. On the territories of the Guadarrama Mountains National Park in Central Spain and of the Chafarinas Islands in North Africa, cloacal swabs were taken from 318 specimens of eight native species representing five squamate reptilian families. The healthy-looking animals had been captured temporarily for physiological and ethological examinations, after which they were released. We found 22 AdV-positive samples in representatives of three species, all from Central Spain. Sequence analysis of the PCR products revealed the existence of three hitherto unknown AdVs in 11 Carpetane rock lizards (Iberolacerta cyreni), nine Iberian worm lizards (Blanus cinereus), and two Iberian green lizards (Lacerta schreiberi), respectively. Phylogeny inference showed every novel putative virus to be a member of the genus Atadenovirus. This is the very first description of the occurrence of AdVs in amphisbaenian and lacertid hosts. Unlike all squamate atadenoviruses examined previously, two of the novel putative AdVs had A+T rich DNA, a feature generally deemed to mirror previous host switch events. Our results shed new light on the diversity and evolution of atadenoviruses.
The status of dental caries and related factors in a sample of Iranian adolescents
DEFF Research Database (Denmark)
Pakpour, Amir H.; Hidarnia, Alireza; Hajizadeh, Ebrahim
2011-01-01
Objective: To describe the status of dental caries in a sample of Iranian adolescents aged 14 to 18 years in Qazvin, and to identify caries-related factors affecting this group. Study design: Qazvin was divided into three zones according to socio-economic status. The sampling procedure used was a stratified cluster sampling technique, incorporating 3 stratified zones, for each of which a cluster of school children was recruited from randomly selected high schools. The adolescents agreed to participate in the study and to complete a questionnaire. Dental caries status was assessed in terms of decayed ... their teeth on a regular basis. Although the incidence of caries was found to be moderate, it was influenced by demographic factors such as age and gender in addition to socio-behavioral variables such as family income, the level of education attained by parents, and the frequency of dental brushing and flossing ...
Caballero-Ortega, Heriberto; Castillo-Cruz, Rocío; Murieta, Sandra; Ortíz-Alegría, Luz Belinda; Calderón-Segura, Esther; Conde-Glez, Carlos J; Cañedo-Solares, Irma; Correa, Dolores
2014-05-14
There are few articles on the evaluation of Toxoplasma gondii serological tests. Besides, commercially available tests are not always useful and are expensive for studies in the open population. The aim of this study was to evaluate an in-house ELISA and western blot for IgG antibodies in a representative sample of people living in Mexico. Three hundred and five serum samples were randomly selected from two national seroepidemiological survey banks; they were taken from men and women of all ages and from all areas of the country. The ELISA cut-off was established as the mean plus three standard deviations of negative samples. Western blots were analysed by two experienced technicians, and positivity was established according to the presence of at least three diagnostic bands. A commercial ELISA kit was used as a third test. Two reference standards were built up: one using concordant results of the two assays leaving the evaluated test out (OUT), and the other in which the evaluated test was included (IN), with at least two concordant results to define diagnosis. The lowest values of the diagnostic parameters were obtained with the OUT reference standard: the in-house ELISA had 96.9% sensitivity, 62.1% specificity, 49.6% PPV, 98.1% NPV and 71.8% accuracy, while the western blot presented values of 81.8%, 89.7%, 84.0%, 88.2% and 86.6%, and the best kappa coefficient (0.72-0.82). The in-house ELISA is useful for screening people in Mexico, due to its high sensitivity, while the western blot may be used to confirm diagnosis. These techniques might prove useful in other Latin American countries.
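The cut-off rule and the diagnostic parameters reported here are simple to compute. The sketch below is illustrative (toy optical densities, not the study data): the cut-off is the mean plus three standard deviations of known negatives, and sensitivity, specificity, PPV and NPV are tallied against a reference standard.

```python
import numpy as np

def elisa_cutoff(negative_od):
    """Cut-off = mean + 3 standard deviations of known-negative optical densities."""
    neg = np.asarray(negative_od, dtype=float)
    return neg.mean() + 3.0 * neg.std(ddof=1)

def diagnostics(test_pos, ref_pos):
    """Sensitivity, specificity, PPV and NPV against a reference standard."""
    t = np.asarray(test_pos, dtype=bool)
    r = np.asarray(ref_pos, dtype=bool)
    tp, tn = (t & r).sum(), (~t & ~r).sum()
    fp, fn = (t & ~r).sum(), (~t & r).sum()
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn)

cut = elisa_cutoff([0.10, 0.12, 0.08, 0.10])     # toy negative-control optical densities
sera = np.array([0.05, 0.20, 0.16, 0.09])        # toy patient samples
print(cut, list(sera > cut))
```

The mean-plus-3-SD rule trades specificity for sensitivity, which matches the study's finding that the in-house ELISA is a high-sensitivity screening tool to be confirmed by western blot.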
Kleczek, M.; Steeneveld, G.J.; Paci, A.; Calmer, R.; Belleudy, A.; Canonici, J.C.; Murguet, F.; Valette, V.
2014-01-01
This paper reports on a laboratory experiment on a stably stratified boundary layer, carried out in the CNRM-GAME (Toulouse) stratified water flume in order to quantify the momentum transfer due to gravity waves induced orographically by gently undulating hills in a boundary-layer flow. In a stratified fluid, a
Nicklas, Jacinda M; Skurnik, Geraldine; Zera, Chloe A; Reforma, Liberty G; Levkoff, Sue E; Seely, Ellen W
2016-02-01
The postpartum period is a window of opportunity for diabetes prevention in women with recent gestational diabetes (GDM), but recruitment for clinical trials during this period of life is a major challenge. We adapted a social-ecologic model to develop a multi-level recruitment strategy at the macro (high or institutional level), meso (mid or provider level), and micro (individual) levels. Our goal was to recruit 100 women with recent GDM into the Balance after Baby randomized controlled trial over a 17-month period. Participants were asked to attend three in-person study visits at 6 weeks, 6, and 12 months postpartum. They were randomized into a control arm or a web-based intervention arm at the end of the baseline visit at six weeks postpartum. At the end of the recruitment period, we compared population characteristics of our enrolled subjects to the entire population of women with GDM delivering at Brigham and Women's Hospital (BWH). We successfully recruited 107 of 156 (69 %) women assessed for eligibility, with the majority (92) recruited during pregnancy at a mean 30 (SD ± 5) weeks of gestation, and 15 recruited postpartum, at a mean 2 (SD ± 3) weeks postpartum. 78 subjects attended the initial baseline visit, and 75 subjects were randomized into the trial at a mean 7 (SD ± 2) weeks postpartum. The recruited subjects were similar in age and race/ethnicity to the total population of 538 GDM deliveries at BWH over the 17-month recruitment period. Our multilevel approach allowed us to successfully meet our recruitment goal and recruit a representative sample of women with recent GDM. We believe that our most successful strategies included using a dedicated in-person recruiter, integrating recruitment into clinical flow, allowing for flexibility in recruitment, minimizing barriers to participation, and using an opt-out strategy with providers. Although the majority of women were recruited while pregnant, women recruited in the early postpartum period were
Simulation of steam explosion in stratified melt-coolant configuration
International Nuclear Information System (INIS)
Leskovar, Matjaž; Centrih, Vasilij; Uršič, Mitja
2016-01-01
Highlights: • Strong steam explosions may develop spontaneously in stratified configurations. • A considerable melt-coolant premixed layer formed in subcooled water with hot melts. • Analysis with the MC3D code provided insight into the stratified steam explosion phenomenon. • Up to 25% of the poured melt was mixed with water and available for the steam explosion. • Better instrumented experiments are needed to determine the dominant mixing process. - Abstract: A steam explosion is an energetic fuel-coolant interaction process, which may occur during a severe reactor accident when the molten core comes into contact with the coolant water. In nuclear reactor safety analyses, steam explosions are primarily considered in melt jet-coolant pool configurations, where sufficiently deep coolant pool conditions provide complete jet breakup and efficient premixture formation. Stratified melt-coolant configurations, i.e. a molten melt layer below a coolant layer, were until now believed to be unable to generate strong explosive interactions. Based on the hypothesis that there are no interfacial instabilities in a stratified configuration, it was assumed that the amount of melt in the premixture is insufficient to produce strong explosions. However, recently performed experiments in the PULiMS and SES (KTH, Sweden) facilities with oxidic corium simulants revealed that strong steam explosions may develop spontaneously also in stratified melt-coolant configurations, where with high-temperature melts and subcooled water conditions a considerable melt-coolant premixed layer is formed. In this article, a study of steam explosions in a stratified melt-coolant configuration under PULiMS-like conditions is presented. The goal of this analytical work is to supplement the experimental activities within the PULiMS research program by addressing the key questions, especially regarding the explosivity of the formed premixed layer and the mechanisms responsible for the melt-water mixing. To
Indications for tonsillectomy stratified by the level of evidence
Windfuhr, Jochen P.
2016-01-01
Background: One of the most significant clinical trials, demonstrating the efficacy of tonsillectomy (TE) for recurrent throat infection in severely affected children, was published in 1984. This systematic review was undertaken to compile the various indications for TE suggested in the literature after 1984 and to stratify the papers according to the current concept of evidence-based medicine. Material and methods: A systematic Medline search was performed using the key word "tonsillectomy" in combination with different filters such as "systematic reviews", "meta-analysis", "English", "German", and "from 1984/01/01 to 2015/05/31". Further searches were performed in the Cochrane Database of Systematic Reviews, National Guideline Clearinghouse, Guidelines International Network and BMJ Clinical Evidence using the same key word. Finally, the "Trip Database" was searched for "tonsillectomy" and "indication" and "from: 1984 to: 2015" in combination with either "systematic review" or "meta-analysis" or "metaanalysis". Results: A total of 237 papers were retrieved but only 57 matched our inclusion criteria, covering the following topics: peritonsillar abscess (3), guidelines (5), otitis media with effusion (5), psoriasis (3), PFAPA syndrome (6), evidence-based indications (5), renal diseases (7), sleep-related breathing disorders (11), and tonsillitis/pharyngitis (12), respectively. Conclusions: 1) The literature suggests that TE is not indicated to treat otitis media with effusion. 2) It has been shown that the PFAPA syndrome is self-limiting and responds well to steroid administration, at least in a considerable proportion of children. The indication for TE therefore appears to be imbalanced, but further research is required to clarify the value of surgery. 3) Abscess tonsillectomy as a routine is not justified and is indicated only for cases not responding to other measures of treatment, evident complications
Identification of major planktonic sulfur oxidizers in a stratified freshwater lake.
Directory of Open Access Journals (Sweden)
Hisaya Kojima
Full Text Available Planktonic sulfur oxidizers are important constituents of ecosystems in stratified water bodies and contribute to sulfide detoxification. In contrast to marine environments, the taxonomic identities of major planktonic sulfur oxidizers in freshwater lakes still remain largely unknown. Bacterioplankton community structure was analyzed in a stratified freshwater lake, Lake Mizugaki in Japan. In the clone libraries of the 16S rRNA gene, clones very closely related to a sulfur oxidizer isolated from this lake, Sulfuritalea hydrogenivorans, were detected in deep anoxic water, and accounted for up to 12.5% of each library from the different water depths. Assemblages of planktonic sulfur oxidizers were specifically analyzed by constructing clone libraries of genes involved in sulfur oxidation, aprA, dsrA, soxB and sqr. In the libraries, clones related to betaproteobacteria were detected with high frequencies, including close relatives of Sulfuritalea hydrogenivorans.
Study of MRI in stratified viscous plasma configuration
Carlevaro, Nakia; Montani, Giovanni; Renzi, Fabrizio
2017-02-01
We analyze the morphology of the magneto-rotational instability (MRI) for a stratified viscous plasma disk configuration in differential rotation, taking into account the so-called corotation theorem for the background profile. In order to select the intrinsic Alfvénic nature of MRI, we deal with an incompressible plasma and we adopt a formulation of the local perturbation analysis based on the use of the magnetic flux function as a dynamical variable. Our study outlines, as a consequence of the corotation condition, a marked asymmetry of the MRI with respect to the equatorial plane, particularly evident in a complete damping of the instability above a positive critical height from the equatorial plane. We also emphasize how such a feature is already present (although less pronounced) even in the ideal case, restoring a dependence of the MRI on the stratified morphology of the gravitational field.
Mixing of stratified flow around bridge piers in steady current
DEFF Research Database (Denmark)
Jensen, Bjarne; Carstensen, Stefan; Christensen, Erik Damgaard
2018-01-01
This paper presents the results of an experimental and numerical investigation of the mixing of stratified flow around bridge pier structures. In this study, which was carried out in connection with the Fehmarnbelt Fixed Link environmental impact assessment, the mixing processes of a two-layer stratification were studied in which the lower layer had a higher salinity than the upper layer. The physical experiments investigated two different pier designs. A general study was made regarding forces on the piers, in which the effect of the current angle relative to the structure was also included...
Stratified charge rotary aircraft engine technology enablement program
Badgley, P. R.; Irion, C. E.; Myers, D. M.
1985-01-01
The multifuel stratified charge rotary engine is discussed. A single-rotor, 0.7 L/40 cu in displacement research rig engine was tested. The research rig engine was designed for operation at high speeds and pressures, with combustion chamber peak pressure providing margin for speed and load excursions above the design requirement for a highly advanced aircraft engine. It is indicated that the single-rotor research rig engine is capable of meeting the established design requirements of 120 kW, 8,000 RPM, and 1,379 kPa BMEP. The research rig engine, when fully developed, will be a valuable tool for investigating advanced and highly advanced technology components, and will provide an understanding of the stratified charge rotary engine combustion process.
Analysis of photonic band-gap structures in stratified medium
DEFF Research Database (Denmark)
Tong, Ming-Sze; Yinchao, Chen; Lu, Yilong
2005-01-01
Purpose - To demonstrate the flexibility and advantages of a non-uniform pseudo-spectral time domain (nu-PSTD) method through studies of the wave propagation characteristics on photonic band-gap (PBG) structures in stratified medium. Design/methodology/approach - A nu-PSTD method is proposed in solving the Maxwell's equations numerically. It expands the temporal derivatives using finite differences, while it adopts the Fourier transform (FT) properties to expand the spatial derivatives in Maxwell's equations. In addition, the method makes use of the chain-rule property in calculus together... Originality/value - The method validates its values and properties through extensive studies on regular and defective 1D PBG structures in stratified medium, and it can be further extended to solving more... in electromagnetic and microwave applications once the Maxwell's equations are appropriately modeled.
Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R
2016-12-01
MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
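The IVW and MR-Egger estimators and the I²GX dilution diagnostic described in this abstract can be sketched from per-variant summary statistics (bx, by: SNP-exposure and SNP-outcome effect estimates; se: their standard errors). The formulas below follow the standard published definitions; this is an illustrative reading, not the authors' code, and the variable names are ours:

```python
import numpy as np

def ivw_estimate(bx, by, se_y):
    """Inverse-variance weighted (IVW) causal estimate: a weighted
    regression of SNP-outcome on SNP-exposure effects with no intercept."""
    w = 1.0 / se_y**2
    return np.sum(w * bx * by) / np.sum(w * bx**2)

def mr_egger(bx, by, se_y):
    """MR-Egger: the same weighted regression but WITH an intercept.
    The slope is the pleiotropy-adjusted causal estimate; a nonzero
    intercept indicates directional pleiotropy. (Simplification: bx is
    assumed already oriented so all SNP-exposure effects are positive.)"""
    w = 1.0 / se_y**2
    X = np.column_stack([np.ones_like(bx), bx])
    WX = X * w[:, None]                      # weighted design matrix
    intercept, slope = np.linalg.solve(X.T @ WX, WX.T @ by)
    return intercept, slope

def i2_gx(bx, se_x):
    """I2_GX: expected relative dilution of the MR-Egger slope when the
    NOME assumption is violated (adapted from the meta-analysis I2)."""
    w = 1.0 / se_x**2
    mean_bx = np.sum(w * bx) / np.sum(w)
    Q = np.sum(w * (bx - mean_bx)**2)        # heterogeneity statistic
    if Q <= 0.0:
        return 0.0
    return max(0.0, (Q - (len(bx) - 1)) / Q)
```

Values of i2_gx near 1 indicate negligible dilution of the MR-Egger slope; low values signal that the NOME violation, and hence the bias towards the null, is substantial.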
Community genomics among stratified microbial assemblages in the ocean's interior
DEFF Research Database (Denmark)
DeLong, Edward F; Preston, Christina M; Mincer, Tracy
2006-01-01
Microbial life predominates in the ocean, yet little is known about its genomic variability, especially along the depth continuum. We report here genomic analyses of planktonic microbial communities in the North Pacific Subtropical Gyre, from the ocean's surface to near-sea floor depths. Sequence......, and host-viral interactions. Comparative genomic analyses of stratified microbial communities have the potential to provide significant insight into higher-order community organization and dynamics....
Large Eddy Simulation of stratified flows over structures
Brechler J.; Fuka V.
2013-01-01
We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model the stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.
Propagation of acoustic waves in a stratified atmosphere, 1
Kalkofen, W.; Rossi, P.; Bodo, G.; Massaglia, S.
1994-01-01
This work is motivated by the chromospheric 3-minute oscillations observed in the K₂ᵥ bright points. We study acoustic gravity waves in a one-dimensional, gravitationally stratified, isothermal atmosphere. The oscillations are excited either by a velocity pulse imparted to a layer in an atmosphere of infinite vertical extent, or by a piston forming the lower boundary of a semi-infinite medium. We consider both linear and non-linear waves.
A statistical mechanics approach to mixing in stratified fluids
Venaille , Antoine; Gostiaux , Louis; Sommeria , Joël
2016-01-01
Accepted for the Journal of Fluid Mechanics; Predicting how much mixing occurs when a given amount of energy is injected into a Boussinesq fluid is a longstanding problem in stratified turbulence. The huge number of degrees of freedom involved in these processes renders a deterministic approach to the problem extremely difficult. Here we present a statistical mechanics approach yielding a prediction for a cumulative, global mixing efficiency as a function of a global Richardson number and th...
Study on exchange flow under the unstably stratified field
文沢, 元雄
2005-01-01
This paper deals with the exchange flow under an unstably stratified field. The author developed an effective measurement system as well as a numerical analysis program. The system and the program are applied to the helium-air exchange flow in a rectangular channel with inclination. The following main features of the exchange flow were discussed based on the calculated results: (1) the time required for establishing a quasi-steady state exchange flow; (2) the relationship between the inclination an...
Garboś, Sławomir; Święcicka, Dorota
2015-11-01
The random daytime (RDT) sampling method was used for the first time in the assessment of average weekly exposure to uranium through drinking water in a large water supply zone. The data set of uranium concentrations determined in 106 RDT samples, collected in three runs from the water supply zone in Wroclaw (Poland), cannot be simply described by normal or log-normal distributions. Therefore, a numerical method designed for the detection and calculation of bimodal distributions was applied. The two extracted distributions, containing data from the summer season of 2011 and the winter season of 2012 (nI=72) and from the summer season of 2013 (nII=34), allowed the mean U concentrations in drinking water to be estimated at 0.947 μg/L and 1.23 μg/L, respectively. As the removal efficiency of uranium during the applied treatment process is negligible, the increase in uranium concentration can be explained by a higher U concentration in the surface-infiltration water used for the production of drinking water. During the summer season of 2013, heavy rains were observed in the Lower Silesia region, causing floods over the territory of the entire region. Fluctuations in uranium concentrations in surface-infiltration water can be attributed to releases of uranium from specific sources: migration from phosphate fertilizers and leaching from mineral deposits. Thus, exposure to uranium through drinking water may increase during extreme rainfall events. The average chronic weekly intakes of uranium through drinking water, estimated on the basis of the central values of the extracted normal distributions, accounted for 3.2% and 4.1% of the tolerable weekly intake. Copyright © 2015 Elsevier Ltd. All rights reserved.
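The abstract does not name the numerical method used to separate the two modes; a two-component Gaussian-mixture fit by expectation-maximization (EM) is one standard way to extract such overlapping distributions. A minimal sketch (the algorithm choice and function name are assumptions, not the authors' code; only the sample sizes and means mirror the abstract):

```python
import numpy as np

def fit_two_gaussians(x, iters=300):
    """Tiny EM fit of a two-component 1-D Gaussian mixture.
    Returns component means, standard deviations and weights,
    sorted by mean."""
    mu = np.percentile(x, [25.0, 75.0])      # deterministic init
    sd = np.full(2, x.std())
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
               / (sd * np.sqrt(2.0 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    order = np.argsort(mu)
    return mu[order], sd[order], pi[order]
```

For real data one would compare the one- and two-component fits (e.g. by BIC) before accepting bimodality, rather than assuming two modes as this sketch does.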
Directory of Open Access Journals (Sweden)
Gaëtan Sossauer
Full Text Available OBJECTIVE: Human papillomavirus (HPV) self-sampling (Self-HPV) may be used as a primary cervical cancer screening method in a low-resource setting. Our aim was to evaluate whether an educational intervention would improve women's knowledge and confidence in the Self-HPV method. METHOD: Women aged between 25 and 65 years old, eligible for cervical cancer screening, were randomly chosen to receive standard information (control group) or standard information followed by an educational intervention (intervention group). Standard information included explanations about what the test detects (HPV), the link between HPV and cervical cancer, and how to perform HPV self-sampling. The educational intervention consisted of a culturally tailored video about HPV, cervical cancer, Self-HPV and its relevancy as a screening test. All participants completed a questionnaire that assessed sociodemographic data, women's knowledge about cervical cancer and acceptability of Self-HPV. RESULTS: A total of 302 women were enrolled in 4 health care centers in Yaoundé and the surrounding countryside. 301 women (149 in the control group and 152 in the intervention group) completed the full process and were included in the analysis. Participants who received the educational intervention had significantly higher knowledge about HPV and cervical cancer than the control group (p<0.05), but no significant difference in Self-HPV acceptability and confidence in the method was noticed between the two groups. CONCLUSION: Educational intervention promotes an increase in knowledge about HPV and cervical cancer. Further investigation should be conducted to determine if this intervention can be sustained beyond the short term and influences screening behavior. TRIALS REGISTRATION: International Standard Randomised Controlled Trial Number (ISRCTN) Register ISRCTN78123709.
Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang
2016-01-01
Millions of people move from rural to urban areas in China to pursue new opportunities while leaving their spouses and children at rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18-45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. 16.46% of couples were separated from their spouses (spouse separation only), and 25.81% of parents were separated from their children (child separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67]). Analyses stratified by separation type and by gender indicated that the association was stronger for child separation only and for female participants. Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors to encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress.
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
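The profiling idea described in this abstract (treating stratum coefficients as nuisance parameters that never need to be estimated explicitly) can be sketched for a simple log-linear rate model, rate = alpha_s * exp(beta * dose): for fixed beta, each stratum's maximum-likelihood alpha_s has a closed form, so the profile likelihood depends on beta alone. This is a hedged illustration of the general idea under simplifying assumptions (log-linear model, 1-D grid search), not the authors' implementation:

```python
import numpy as np

def profile_loglik(beta, dose, cases, py, stratum):
    """Poisson log-likelihood (constants dropped) with stratum
    intercepts profiled out: for fixed beta, the MLE of alpha_s is
    cases_s / sum_s(py * exp(beta * dose)), so no stratum coefficient
    is ever estimated explicitly."""
    ll = 0.0
    for s in np.unique(stratum):
        m = stratum == s
        if cases[m].sum() == 0:
            continue  # a zero-case stratum contributes nothing at its MLE
        mu0 = py[m] * np.exp(beta * dose[m])   # person-years * rate ratio
        alpha = cases[m].sum() / mu0.sum()     # profiled stratum intercept
        mu = alpha * mu0
        ll += (cases[m] * np.log(mu) - mu).sum()
    return ll

def fit_beta(dose, cases, py, stratum):
    """Maximize the profile likelihood over a 1-D grid of beta values."""
    grid = np.linspace(-1.0, 1.0, 2001)
    ll = [profile_loglik(b, dose, cases, py, stratum) for b in grid]
    return grid[int(np.argmax(ll))]
```

The point estimate agrees with what unconditional Poisson regression with explicit indicator terms for each background stratum would give, which is the equivalence the abstract reports.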
Ethanol dehydration to ethylene in a stratified autothermal millisecond reactor.
Skinner, Michael J; Michor, Edward L; Fan, Wei; Tsapatsis, Michael; Bhan, Aditya; Schmidt, Lanny D
2011-08-22
The concurrent decomposition and deoxygenation of ethanol was accomplished in a stratified reactor with 50-80 ms contact times. The stratified reactor comprised an upstream oxidation zone that contained Pt-coated Al₂O₃ beads and a downstream dehydration zone consisting of H-ZSM-5 zeolite films deposited on Al₂O₃ monoliths. Ethanol conversion, product selectivity, and reactor temperature profiles were measured for a range of fuel:oxygen ratios for two autothermal reactor configurations using two different sacrificial fuel mixtures: a parallel hydrogen-ethanol feed system and a series methane-ethanol feed system. Increasing the amount of oxygen relative to the fuel resulted in a monotonic increase in ethanol conversion in both reaction zones. The majority of the converted carbon was in the form of ethylene, where the ethanol carbon-carbon bonds stayed intact while the oxygen was removed. Over 90% yield of ethylene was achieved by using methane as a sacrificial fuel. These results demonstrate that noble metals can be successfully paired with zeolites to create a stratified autothermal reactor capable of removing oxygen from biomass model compounds in a compact, continuous flow system that can be configured to have multiple feed inputs, depending on process restrictions. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Orgül Selim
2010-06-01
Full Text Available Abstract Background The aim of this epidemiological study was to investigate the relationship of thermal discomfort with cold extremities (TDCE) to age, gender, and body mass index (BMI) in a Swiss urban population. Methods In a random population sample of Basel city, 2,800 subjects aged 20-40 years were asked to complete a questionnaire evaluating the extent of cold extremities. Values of cold extremities were based on questionnaire-derived scores. The correlation of age, gender, and BMI to TDCE was analyzed using multiple regression analysis. Results A total of 1,001 women (72.3% response rate) and 809 men (60% response rate) returned a completed questionnaire. Statistical analyses revealed the following findings: younger subjects suffered more intensely from cold extremities than the elderly, and women suffered more than men (particularly younger women). Slimmer subjects suffered significantly more often from cold extremities than subjects with higher BMIs. Conclusions Thermal discomfort with cold extremities (a relevant symptom of primary vascular dysregulation) occurs at highest intensity in younger, slimmer women and at lowest intensity in elderly, stouter men.
Thorslund, Karin; Johansson Hanse, Jan; Axberg, Ulf
2017-07-01
Universal parental support intended to enhance parents' capacity for parenting is an important aspect of public health strategies. However, support has mostly been aimed at parents, especially mothers, of younger children. There is a gap in the research concerning parents of adolescents and fathers' interest in parenting support. To investigate and compare the interest in parenting support of parents of adolescents and of younger children, potential differences between mothers and fathers, and their knowledge of what is already being offered to them, and to explore their requirements for future universal parental support. Telephone interviews were conducted with a random sample of 1336 parents. Quantitative methods were used to analyze differences between groups, and qualitative methods were used to analyze open-ended questions regarding parents' requirements for future universal parental support. About 82% of the parents of adolescents interviewed think that offering universal parental support is most important during the child's adolescence. There is a substantial interest, particularly among mothers, in most forms of support. Despite their interest, parents have limited awareness of the support available. Only 7% knew about the local municipality website, although 70% reported a possible interest in such a website. Similarly, 3% knew that a parent phone line was available to them, while 59% reported a possible interest. It poses a challenge, but is nevertheless important, for municipalities to develop support targeted at parents of adolescents that is tailored to their needs, and to reach out with information.
Directory of Open Access Journals (Sweden)
Serge Clotaire Billong
2016-11-01
Full Text Available Abstract Background Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, mastering patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we thus aimed to ascertain the national performance of, and determinants in, retention on ART at 12 months. Methods Using a systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon, within the "reporting period" of October 2013-November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (75-85%), poor (<75%); and factors with p-value < 0.01 were considered statistically significant. Results The majority (74.4%) of patients were in urban settings, and 50.9% were managed in reference treatment centres. Nationwide, retention on ART at 12 months was 60.4% (2023/3349); only six sites and one region achieved acceptable performances. Retention performance varied between reference treatment centres (54.2%) and management units (66.8%), p < 0.0001; men (57.1%) vs. women (62.0%), p = 0.007; and WHO clinical stage I (63.3%) vs. other stages (55.6%), p = 0.007; but neither for age (adults [60.3%] vs. children [58.8%], p = 0.730) nor for immune status (CD4 351-500 [65.9%] vs. other CD4 strata [59.86%], p = 0.077). Conclusions Poor retention in care within 12 months of ART initiation urges active search for patients lost to follow-up, targeting preferentially male and symptomatic patients, especially within reference ART clinics. Such a sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention perspectives.
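The early-warning indicator bands quoted in this abstract translate directly into a classification rule; a minimal sketch (the function name is ours):

```python
def retention_performance(pct):
    """Classify 12-month ART retention (%) using the HIVDR
    early-warning indicator bands quoted in the abstract:
    excellent (>85%), fair (75-85%), poor (<75%)."""
    if pct > 85:
        return "excellent"
    if pct >= 75:
        return "fair"
    return "poor"

print(retention_performance(60.4))  # the nationwide figure; prints "poor"
```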
Directory of Open Access Journals (Sweden)
Faria AD
2014-06-01
Full Text Available Augusto Duarte Faria (1), Luciano Dias de Mattos Souza (2), Taiane de Azevedo Cardoso (2), Karen Amaral Tavares Pinheiro (2), Ricardo Tavares Pinheiro (2), Ricardo Azevedo da Silva (2), Karen Jansen (2). (1) Department of Clinical and Health Psychology, Universidade Federal do Rio Grande – FURG, Rio Grande, RS, Brazil; (2) Health and Behavior Postgraduate Program, Universidade Católica de Pelotas – UCPEL, Pelotas, RS, Brazil. Introduction: Changes in biological rhythm are among the various characteristics of bipolar disorder, and have long been associated with the functional impairment of the disease. There are only a few viable options of psychosocial interventions that deal with this specific topic; one of them is psychoeducation, a model that, although used by practitioners for some time, has only recently been shown in studies to be efficacious in clinical practice. Aim: To assess whether patients undergoing a psychosocial intervention in addition to pharmacological treatment have better regulation of their biological rhythm than those using medication only. Method: This study is a randomized clinical trial that compares a standard medication intervention to an intervention combining drugs and psychoeducation. The evaluation of the biological rhythm was made using the Biological Rhythm Interview of Assessment in Neuropsychiatry, an 18-item scale divided into four areas (sleep, activity, social rhythm, and eating pattern). The combined intervention consisted of medication and a short-term psychoeducation model summarized in a protocol of six individual sessions of 1 hour each. Results: The sample consisted of 61 patients with bipolar II disorder, but during the study there were 14 losses to follow-up. Therefore, the final sample consisted of 45 individuals (26 for the standard intervention and 19 for the combined). The results showed that, in this sample and time period evaluated, the combined treatment of medication and psychoeducation had no statistically significant impact on the
Directory of Open Access Journals (Sweden)
Jaishri Mehraj
Full Text Available OBJECTIVE: The findings from truly randomized community-based studies on Staphylococcus aureus nasal colonization are scarce. Therefore we examined the point prevalence and risk factors of S. aureus nasal carriage in a non-hospitalized population of Braunschweig, northern Germany. METHODS: A total of 2026 potential participants were randomly selected through the residents' registration office and invited by mail. They were requested to collect a nasal swab at home and return it by mail. S. aureus was identified by culture and PCR. Logistic regression was used to determine risk factors of S. aureus carriage. RESULTS: Among the invitees, 405 individuals agreed to participate and 389 provided complete data, which were included in the analysis. The median age of the participants was 49 years (IQR: 39-61) and 61% were females. S. aureus was isolated in 85 (21.9%; 95% CI: 18.0-26.2%) of the samples, five of which were MRSA (1.29%; 95% CI: 0.55-2.98%). In multiple logistic regression, male sex (OR = 3.50; 95% CI: 2.01-6.11) and presence of allergies (OR = 2.43; 95% CI: 1.39-4.24) were found to be associated with S. aureus nasal carriage. Fifty-five different spa types were found, which clustered into nine distinct groups. MRSA belonged to the hospital-associated spa types t032 and t025 (corresponding to MLST CC 22), whereas MSSA spa types varied and mostly belonged to spa-CC 012 (corresponding to MLST CC 30) and spa-CC 084 (corresponding to MLST CC 15). CONCLUSION: This first point prevalence study of S. aureus in a non-hospitalized population of Germany revealed a prevalence consistent with other European countries and supports previous findings on male sex and allergies as risk factors of S. aureus carriage. The detection of hospital-associated MRSA spa types in the community indicates possible spread of these strains from hospitals into the community.
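The reported carriage prevalence of 21.9% (95% CI: 18.0-26.2%) for 85 carriers among 389 participants is consistent with a Wilson score interval, although the abstract does not name the interval method used. A sketch under that assumption:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (k successes out of n trials), here at the 95% level."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2.0 * n)) / denom
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(85, 389)   # 85 carriers among 389 participants
# lo ≈ 0.180, hi ≈ 0.262 — matching the 18.0-26.2% quoted above
```

Unlike the simpler Wald interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near the boundaries, which is presumably why it is the common choice for prevalence estimates of this size.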
Directory of Open Access Journals (Sweden)
Vicky Stergiopoulos
Full Text Available Housing First (HF) is being widely disseminated in efforts to end homelessness among homeless adults with psychiatric disabilities. This study evaluates the effectiveness of HF with Intensive Case Management (ICM) among ethnically diverse homeless adults in an urban setting. 378 participants were randomized to HF with ICM or treatment-as-usual (TAU) in Toronto (Canada) and followed for 24 months. Measures of effectiveness included housing stability, physical (EQ5D-VAS) and mental (CSI, GAIN-SS) health, social functioning (MCAS), quality of life (QoLI20), and health service use. Two-thirds of the sample (63%) was from racialized groups and half (50%) were born outside Canada. Over the 24 months of follow-up, HF participants spent a significantly greater percentage of time in stable residences compared to TAU participants (75.1%, 95% CI 70.5 to 79.7 vs. 39.3%, 95% CI 34.3 to 44.2, respectively). Similarly, community functioning (MCAS) improved significantly from baseline in HF compared to TAU participants (change in mean difference = +1.67, 95% CI 0.04 to 3.30). There was a significant reduction in the number of days spent experiencing alcohol problems among the HF compared to TAU participants at 24 months (ratio of rate ratios = 0.47, 95% CI 0.22 to 0.99) relative to baseline, a reduction of 53%. Although the number of emergency department visits and days in hospital over 24 months did not differ significantly between HF and TAU participants, fewer HF participants compared to TAU participants had 1 or more hospitalizations during this period (70.4% vs. 81.1%, respectively; P=0.044). Compared to non-racialized HF participants, racialized HF participants saw an increase in the amount of money spent on alcohol (change in mean difference = $112.90, 95% CI 5.84 to 219.96) and a reduction in physical community integration (ratio of rate ratios = 0.67, 95% CI 0.47 to 0.96) from baseline to 24 months. Secondary analyses found a significant reduction in the number of days
Numerical simulations of the stratified oceanic bottom boundary layer
Taylor, John R.
Numerical simulations are used to consider several problems relevant to the turbulent oceanic bottom boundary layer. In the first study, stratified open channel flow is considered with thermal boundary conditions chosen to approximate a shallow sea. Specifically, a constant heat flux is applied at the free surface and the lower wall is assumed to be adiabatic. When the surface heat flux is strong, turbulent upwellings of low speed fluid from near the lower wall are inhibited by the stable stratification. Subsequent studies consider a stratified bottom Ekman layer over a non-sloping lower wall. The influence of the free surface is removed by using an open boundary condition at the top of the computational domain. Particular attention is paid to the influence of the outer layer stratification on the boundary layer structure. When the density field is initialized with a linear profile, a turbulent mixed layer forms near the wall, which is separated from the outer layer by a strongly stable pycnocline. It is found that the bottom stress is not strongly affected by the outer layer stratification. However, stratification reduces turbulent transport to the outer layer and strongly limits the boundary layer height. The mean shear at the top of the boundary layer is enhanced when the outer layer is stratified, and this shear is strong enough to cause intermittent instabilities above the pycnocline. Turbulence-generated internal gravity waves are observed in the outer layer with a relatively narrow frequency range. An explanation for the frequency content of these waves is proposed, starting with an observed broad-banded turbulent spectrum and invoking linear viscous decay to explain the preferential damping of low and high frequency waves. During the course of this work, an open-source computational fluid dynamics code has been developed with a number of advanced features including scalar advection, subgrid-scale models for large-eddy simulation, and distributed memory
E25 stratified torch ignition engine emissions and combustion analysis
International Nuclear Information System (INIS)
Rodrigues Filho, Fernando Antonio; Baêta, José Guilherme Coelho; Teixeira, Alysson Fernandes; Valle, Ramón Molina; Fonseca de Souza, José Leôncio
2016-01-01
Highlights: • A stratified torch ignition (STI) engine was built and tested. • The STI engine was tested over a wide range of loads and speeds. • A significant reduction in emissions was achieved by means of the STI system. • Low cyclic variability characterized the lean combustion process of the torch ignition engine. • HC emission is the main drawback of the stratified torch ignition engine. - Abstract: Vehicular emissions significantly increase atmospheric air pollution and greenhouse gases (GHG). This fact, together with the fast growth of the global vehicle fleet, calls for prompt technological solutions from the scientific community in order to promote a significant reduction in vehicle fuel consumption and emissions, especially of fossil fuels, to comply with future legislation. To meet this goal, a prototype stratified torch ignition (STI) engine was built from an existing commercial baseline engine. In this system, combustion starts in a pre-combustion chamber, where the pressure increase pushes the combustion jet flames through calibrated nozzles to be precisely targeted into the main chamber. These combustion jet flames are endowed with high thermal and kinetic energy, being able to generate a stable lean combustion process. The high kinetic and thermal energy of the combustion jet flame results from the load stratification. This is carried out through direct fuel injection in the pre-combustion chamber by means of a prototype gasoline direct injector (GDI) developed for a very low fuel flow rate. In this work the engine-out emissions of CO, NOx, HC and CO_2 of the STI engine are presented and a detailed analysis supported by the combustion parameters is conducted. The results obtained in this work show a significant decrease in the specific emissions of CO, NOx and CO_2 of the STI engine in comparison with the baseline engine. On the other hand, HC specific emission increased due to wall wetting caused by fuel impinging on the pre-combustion chamber wall.
Direct contact condensation induced transition from stratified to slug flow
International Nuclear Information System (INIS)
Strubelj, Luka; Ezsoel, Gyoergy; Tiselj, Iztok
2010-01-01
Selected condensation-induced water hammer experiments performed on the PMK-2 device were numerically modelled with three-dimensional two-fluid models of the computer codes NEPTUNE_CFD and CFX. The experimental setup consists of a horizontal pipe filled with hot steam that is slowly flooded with cold water. In most of the experimental cases, slow flooding of the pipe was abruptly interrupted by strong slugging and water hammer, while in the selected experimental runs performed at higher initial pressures and temperatures that are analysed in the present work, the transition from stratified to slug flow was not accompanied by a water hammer pressure peak. That makes these cases more suitable tests for evaluation of the various condensation models in horizontally stratified flows and puts them within the range of the available CFD (Computational Fluid Dynamics) codes. The key models for a successful simulation appear to be the condensation model of the hot vapour on the cold liquid and the interfacial momentum transfer model. Surface-renewal-type condensation correlations, developed for condensation in stratified flows, were used in the simulations and were applied also in the regions of slug flow. The 'large interface' model for inter-phase momentum transfer was compared to the bubble drag model. The CFD simulations quantitatively captured the main phenomena of the experiments, while the stochastic nature of the particular condensation-induced water hammer experiments did not allow detailed prediction of the time and position of the slug formation in the pipe. We have clearly shown that even the selected experiments without water hammer present a tough test for the applied CFD codes, while modelling of the water hammer pressure peaks in two-phase flow, being a strongly compressible flow phenomenon, is beyond the capability of the current CFD codes.
Directory of Open Access Journals (Sweden)
Anne H Berman
Full Text Available The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL) with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11-16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. A random population sample consisting of 600 children aged 11-16, 100 per age group, and one of their parents (N = 1200) were approached for response to the self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower wellbeing, using the PABAK, and total continuous scores were evaluated using Bland-Altman plots. Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences occurred and parent ratings
Technetium reduction and removal in a stratified fjord
International Nuclear Information System (INIS)
Keith-Roach, M.; Roos, P.
2002-01-01
The distribution of Tc in the water column of a stratified fjord has been measured to investigate the behaviour and fate of Tc on reaching reducing waters. Slow mixing in the water column of the fjord results in vertical transport of the dissolved Tc to the oxic/anoxic interface. Tc is reduced just below the interface, and at 21 m 60% is sorbed to particulate and colloidal material. Tc is carried to the sediments sorbed to the particulate material, where there is a current inventory of approximately 3 Bq m⁻². (LN)
Stability of unstably stratified shear flow between parallel plates
Energy Technology Data Exchange (ETDEWEB)
Fujimura, Kaoru; Kelly, R E
1987-09-01
The linear stability of unstably stratified shear flows between two horizontal parallel plates was investigated. Eigenvalue problems were solved numerically by making use of the expansion method in Chebyshev polynomials, and the critical Rayleigh numbers were obtained accurately in the Reynolds number range of (0.01, 100). It was found that the critical Rayleigh number increases with an increase of the Reynolds number. The result strongly supports previous stability analyses except for the analysis by Makino and Ishikawa (J. Jpn. Soc. Fluid Mech. 4 (1985) 148 - 158) in which a decrease of the critical Rayleigh number was obtained.
Stability of unstably stratified shear flow between parallel plates
International Nuclear Information System (INIS)
Fujimura, Kaoru; Kelly, R.E.
1987-01-01
The linear stability of unstably stratified shear flows between two horizontal parallel plates was investigated. Eigenvalue problems were solved numerically by making use of the expansion method in Chebyshev polynomials, and the critical Rayleigh numbers were obtained accurately in the Reynolds number range of [0.01, 100]. It was found that the critical Rayleigh number increases with an increase of the Reynolds number. The result strongly supports previous stability analyses except for the analysis by Makino and Ishikawa [J. Jpn. Soc. Fluid Mech. 4 (1985) 148 - 158] in which a decrease of the critical Rayleigh number was obtained. (author)
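Both copies of this record solve the stability eigenvalue problem by expansion in Chebyshev polynomials. As a minimal sketch of that numerical technique (not the authors' code; the model problem and the resolution N = 32 are chosen purely for illustration), a Chebyshev collocation matrix can discretize u'' = λu on [-1, 1] with Dirichlet conditions, whose exact eigenvalues λ_k = -(kπ/2)² provide a built-in accuracy check:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on N+1 Gauss-Lobatto points (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]             # drop boundary rows/cols: enforces u(±1) = 0
lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
# exact spectrum of u'' = λu with Dirichlet conditions: λ_k = -(kπ/2)², k = 1, 2, ...
print(lam[0])                        # ≈ -(π/2)² ≈ -2.4674
```

In the paper the operator is the stratified-shear stability operator and the eigenvalue is the critical Rayleigh number, but the expand-truncate-solve mechanics are the same.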
Stratifying patients with peripheral neuropathic pain based on sensory profiles
DEFF Research Database (Denmark)
Vollert, Jan; Maier, Christoph; Attal, Nadine
2017-01-01
In a recent cluster analysis, it has been shown that patients with peripheral neuropathic pain can be grouped into 3 sensory phenotypes based on quantitative sensory testing profiles, which are mainly characterized by either sensory loss, intact sensory function and mild thermal hyperalgesia and...... populations that need to be screened to reach a subpopulation large enough to conduct a phenotype-stratified study. The most common phenotype in diabetic polyneuropathy was sensory loss (83%), followed by mechanical hyperalgesia (75%) and thermal hyperalgesia (34%, note that percentages are overlapping...
Technetium reduction and removal in a stratified fjord
Energy Technology Data Exchange (ETDEWEB)
Keith-Roach, M.; Roos, P. [Risoe National Lab., Roskilde (Denmark)
2002-04-01
The distribution of Tc in the water column of a stratified fjord has been measured to investigate the behaviour and fate of Tc on reaching reducing waters. Slow mixing in the water column of the fjord results in vertical transport of the dissolved Tc to the oxic/anoxic interface. Tc is reduced just below the interface, and at 21 m 60% is sorbed to particulate and colloidal material. Tc is carried to the sediments sorbed to the particulate material, where there is a current inventory of approximately 3 Bq m⁻². (LN)
Development of a natural gas stratified charge rotary engine
Energy Technology Data Exchange (ETDEWEB)
Sierens, R.; Verdonck, W.
1985-01-01
A water model has been used to determine the positions of separate inlet ports for a natural gas, stratified charge rotary engine. The flow inside the combustion chamber (mainly during the induction period) was recorded with a film camera. From these tests the best locations of the inlet ports were obtained. A prototype of this engine was built by Audi NSU and tested in the laboratories of the University of Gent. The results of these tests, for different stratification configurations, are given. These results are comparable with the best results obtained by Audi NSU for a homogeneous natural gas rotary engine.
SOLUTION OF A MULTIVARIATE STRATIFIED SAMPLING PROBLEM THROUGH CHEBYSHEV GOAL PROGRAMMING
Directory of Open Access Journals (Sweden)
Mohd. Vaseem Ismail
2010-12-01
Full Text Available In this paper, we consider the problem of minimizing the variances of the various characters with a fixed (given) budget. Each convex objective function is first linearised at its minimal point where it meets the linear cost constraint. The resulting multiobjective linear programming problem is then solved by Chebyshev goal programming. A numerical example is given to illustrate the procedure.
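The procedure just described — linearise each variance objective, then minimise the worst deviation from the goals subject to the cost constraint — can be sketched as a single linear program. Every number below (gradients, goals, costs, budget) is invented for illustration and is not the paper's numerical example:

```python
import numpy as np
from scipy.optimize import linprog

# Linearised variance objectives V_i(n) ≈ c_i · n for two characters and a
# linear cost constraint; all values are hypothetical.
C = np.array([[-2.0, -1.0],       # gradient of linearised V_1 w.r.t. (n_1, n_2)
              [-1.0, -3.0]])      # gradient of linearised V_2
goals = np.array([-30.0, -40.0])  # aspiration level for each linearised objective
cost = np.array([4.0, 6.0])       # per-unit sampling cost in the two strata
budget = 100.0

# Chebyshev goal programming: minimise the worst deviation d over variables
# z = (n_1, n_2, d), subject to C·n - d <= goals and cost·n <= budget.
obj = np.array([0.0, 0.0, 1.0])
A_ub = np.vstack([np.hstack([C, -np.ones((2, 1))]),
                  np.hstack([cost, [0.0]])])
b_ub = np.hstack([goals, budget])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(1, None), (1, None), (0, None)])
print(res.x)  # allocation (n_1, n_2) and worst deviation d
```

With these numbers both goals are exactly attainable at the budget limit, so the worst deviation d is driven to zero; in general the minimax structure balances the shortfalls across the characters.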
Model-based estimation of finite population total in stratified sampling
African Journals Online (AJOL)
The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
Ecological indicators must be shown to be responsive to stress. For large-scale observational studies the best way to demonstrate responsiveness is by evaluating indicators along a gradient of stress, but such gradients are often unknown for a population of sites prior to site se...
Sampling strategy for a large scale indoor radiation survey - a pilot project
International Nuclear Information System (INIS)
Strand, T.; Stranden, E.
1986-01-01
Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
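The optimisation described in this abstract — fix a precision target for the regional mean dose rate, then derive the number of measurements per house category — follows textbook stratified-sampling formulas. A minimal sketch with Neyman allocation is shown below; the house categories, stratum sizes and standard deviations are invented for illustration, not results of the Norwegian pilot project:

```python
import math

# Hypothetical strata of houses with stratum sizes N_h and within-stratum
# standard deviations s_h of the dose rate (illustrative values only).
strata = {
    "wood":     {"N": 4000, "s": 12.0},
    "concrete": {"N": 2500, "s": 20.0},
    "brick":    {"N": 1500, "s": 8.0},
}

def required_n(strata, target_se):
    """Total sample size so the stratified mean's standard error meets
    target_se under Neyman allocation (finite-population correction ignored)."""
    N = sum(v["N"] for v in strata.values())
    s_bar = sum(v["N"] * v["s"] for v in strata.values()) / N
    return math.ceil((s_bar / target_se) ** 2)

def neyman_allocation(strata, n_total):
    """Allocate n_total measurements proportionally to N_h * s_h."""
    w = {k: v["N"] * v["s"] for k, v in strata.items()}
    total = sum(w.values())
    return {k: round(n_total * wk / total) for k, wk in w.items()}

n = required_n(strata, target_se=1.0)
alloc = neyman_allocation(strata, n)
print(n, alloc)
```

Tightening the precision target quadratically inflates the required effort, which is why the pilot-project variance estimates matter so much for planning the full survey.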
Crystallization of a compositionally stratified basal magma ocean
Laneuville, Matthieu; Hernlund, John; Labrosse, Stéphane; Guttenberg, Nicholas
2018-03-01
Earth's ∼3.45 billion year old magnetic field is regenerated by dynamo action in its convecting liquid metal outer core. However, convection induces an isentropic thermal gradient which, coupled with a high core thermal conductivity, results in rapid conducted heat loss. In the absence of implausibly high radioactivity or alternate sources of motion to drive the geodynamo, the Earth's early core had to be significantly hotter than the melting point of the lower mantle. While the existence of a dense convecting basal magma ocean (BMO) has been proposed to account for high early core temperatures, the requisite physical and chemical properties for a BMO remain controversial. Here we relax the assumption of a well-mixed convecting BMO and instead consider a BMO that is initially gravitationally stratified owing to processes such as mixing between metals and silicates at high temperatures in the core-mantle boundary region during Earth's accretion. Using coupled models of crystallization and heat transfer through a stratified BMO, we show that very high temperatures could have been trapped inside the early core, sequestering enough heat energy to run an ancient geodynamo on cooling power alone.
Dyadic Green's function of an eccentrically stratified sphere.
Moneda, Angela P; Chrissoulidis, Dimitrios P
2014-03-01
The electric dyadic Green's function (dGf) of an eccentrically stratified sphere is built by use of the superposition principle, dyadic algebra, and the addition theorem of vector spherical harmonics. The end result of the analytical formulation is a set of linear equations for the unknown vector wave amplitudes of the dGf. The unknowns are calculated by truncation of the infinite sums and matrix inversion. The theory is exact, as no simplifying assumptions are required in any one of the analytical steps leading to the dGf, and it is general in the sense that any number, position, size, and electrical properties can be considered for the layers of the sphere. The point source can be placed outside of or in any lossless part of the sphere. Energy conservation, reciprocity, and other checks verify that the dGf is correct. A numerical application is made to a stratified sphere made of gold and glass, which operates as a lens.
Crenothrix are major methane consumers in stratified lakes.
Oswald, Kirsten; Graf, Jon S; Littmann, Sten; Tienken, Daniela; Brand, Andreas; Wehrli, Bernhard; Albertsen, Mads; Daims, Holger; Wagner, Michael; Kuypers, Marcel Mm; Schubert, Carsten J; Milucka, Jana
2017-09-01
Methane-oxidizing bacteria represent a major biological sink for methane and are thus Earth's natural protection against this potent greenhouse gas. Here we show that in two stratified freshwater lakes a substantial part of upward-diffusing methane was oxidized by filamentous gamma-proteobacteria related to Crenothrix polyspora. These filamentous bacteria have been known as contaminants of drinking water supplies since 1870, but their role in environmental methane removal has remained unclear. While oxidizing methane, these organisms were assigned an 'unusual' methane monooxygenase (MMO), which was only distantly related to the 'classical' MMO of gamma-proteobacterial methanotrophs. We now correct this assignment and show that Crenothrix encode a typical gamma-proteobacterial PmoA. Stable isotope labeling in combination with single-cell imaging mass spectrometry revealed methane-dependent growth of the lacustrine Crenothrix with oxygen as well as under oxygen-deficient conditions. Crenothrix genomes encoded pathways for the respiration of oxygen as well as for the reduction of nitrate to N 2 O. The observed abundance and planktonic growth of Crenothrix suggest that these methanotrophs can act as a relevant biological sink for methane in stratified lakes and should be considered in the context of environmental removal of methane.
LONGITUDINAL OSCILLATIONS IN DENSITY STRATIFIED AND EXPANDING SOLAR WAVEGUIDES
Energy Technology Data Exchange (ETDEWEB)
Luna-Cardozo, M. [Instituto de Astronomia y Fisica del Espacio, CONICET-UBA, CC. 67, Suc. 28, 1428 Buenos Aires (Argentina); Verth, G. [School of Computing, Engineering and Information Sciences, Northumbria University, Newcastle Upon Tyne NE1 8ST (United Kingdom); Erdelyi, R., E-mail: mluna@iafe.uba.ar, E-mail: robertus@sheffield.ac.uk, E-mail: gary.verth@northumbria.ac.uk [Solar Physics and Space Plasma Research Centre (SP2RC), University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH (United Kingdom)
2012-04-01
Waves and oscillations can provide vital information about the internal structure of waveguides in which they propagate. Here, we analytically investigate the effects of density and magnetic stratification on linear longitudinal magnetohydrodynamic (MHD) waves. The focus of this paper is to study the eigenmodes of these oscillations. It is our specific aim to understand what happens to these MHD waves generated in flux tubes with non-constant (e.g., expanding or magnetic bottle) cross-sectional area and density variations. The governing equation of the longitudinal mode is derived and solved analytically and numerically. In particular, the limit of the thin flux tube approximation is examined. The general solution describing the slow longitudinal MHD waves in an expanding magnetic flux tube with constant density is found. Longitudinal MHD waves in density stratified loops with constant magnetic field are also analyzed. From analytical solutions, the frequency ratio of the first overtone and fundamental mode is investigated in stratified waveguides. For small expansion, a linear dependence between the frequency ratio and the expansion factor is found. From numerical calculations it was found that the frequency ratio strongly depends on the density profile chosen and, in general, the numerical results are in agreement with the analytical results. The relevance of these results for solar magneto-seismology is discussed.
Improvements to TRAC models of condensing stratified flow. Pt. 1
International Nuclear Information System (INIS)
Zhang, Q.; Leslie, D.C.
1991-12-01
Direct contact condensation in stratified flow is an important phenomenon in LOCA analyses. In this report, the TRAC interfacial heat transfer model for stratified condensing flow has been assessed against the Bankoff experiments. A rectangular channel option has been added to the code to represent the experimental geometry. In almost all cases the TRAC heat transfer coefficient (HTC) over-predicts the condensation rates, and in some cases it is so high that the predicted steam is sucked in from the normal outlet in order to conserve mass. Based on their cocurrent and countercurrent condensing flow experiments, Bankoff and his students (Lim 1981, Kim 1985) developed HTC models for the two cases. The replacement of the TRAC HTC with either of Bankoff's models greatly improves the predictions of condensation rates in the experiment with cocurrent condensing flow. However, the Bankoff HTC for countercurrent flow is preferable because it is based only on the local quantities rather than on the quantities averaged from the inlet. (author)
Internal circle uplifts, transversality and stratified G-structures
Energy Technology Data Exchange (ETDEWEB)
Babalic, Elena Mirela [Department of Theoretical Physics, National Institute of Physics and Nuclear Engineering,Str. Reactorului no.30, P.O.BOX MG-6, Postcode 077125, Bucharest-Magurele (Romania); Department of Physics, University of Craiova,13 Al. I. Cuza Str., Craiova 200585 (Romania); Lazaroiu, Calin Iuliu [Center for Geometry and Physics, Institute for Basic Science,Pohang 790-784 (Korea, Republic of)
2015-11-24
We study stratified G-structures in N=2 compactifications of M-theory on eight-manifolds M using the uplift to the auxiliary nine-manifold M̂ = M×S¹. We show that the cosmooth generalized distribution D̂ on M̂ which arises in this formalism may have pointwise transverse or non-transverse intersection with the pull-back of the tangent bundle of M, a fact which is responsible for the subtle relation between the spinor stabilizers arising on M and M̂ and for the complicated stratified G-structure on M which we uncovered in previous work. We give a direct explanation of the latter in terms of the former and relate explicitly the defining forms of the SU(2) structure which exists on the generic locus U of M to the defining forms of the SU(3) structure which exists on an open subset Û of M̂, thus providing a dictionary between the eight- and nine-dimensional formalisms.
A modified stratified model for the 3C 273 jet
International Nuclear Information System (INIS)
Liu Wenpo; Shen Zhiqiang
2009-01-01
We present a modified stratified jet model to interpret the observed spectral energy distributions of knots in the 3C 273 jet. Based on the hypothesis of the single index of the particle energy spectrum at injection and identical emission processes among all the knots, the observed difference of spectral shape among different 3C 273 knots can be understood as a manifestation of the deviation of the equivalent Doppler factor of stratified emission regions in an individual knot from a characteristic one. The summed spectral energy distributions of all ten knots in the 3C 273 jet can be well fitted by two components: a low-energy component (radio to optical) dominated by synchrotron radiation and a high-energy component (UV, X-ray and γ-ray) dominated by inverse Compton scattering of the cosmic microwave background. This gives a consistent spectral index of α = 0.88 (S_ν ∝ ν^(-α)) and a characteristic Doppler factor of 7.4. Assuming the average of the summed spectrum as the characteristic spectrum of each knot in the 3C 273 jet, we further get a distribution of Doppler factors. We discuss the possible implications of these results for the physical properties in the 3C 273 jet. Future GeV observations with GLAST could separate the γ-ray emission of 3C 273 from the large scale jet and the small scale jet (i.e. the core) through measuring the GeV spectrum.
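For reference, the quoted spectral index is the slope of the power law S_ν ∝ ν^(-α). Recovering α from flux densities at two frequencies is a one-line calculation; the flux and frequency values below are illustrative, not the 3C 273 measurements:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha defined through S_nu ∝ nu**(-alpha)."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

# Illustrative flux densities (Jy) at 1.4 and 5 GHz; constructed so that
# the underlying power law has alpha = 0.88, matching the fitted value.
s1, nu1 = 10.0, 1.4
s2, nu2 = 10.0 * (5.0 / 1.4) ** -0.88, 5.0
alpha = spectral_index(s1, nu1, s2, nu2)
print(round(alpha, 2))  # → 0.88
```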
STRESS DISTRIBUTION IN THE STRATIFIED MASS CONTAINING VERTICAL ALVEOLE
Directory of Open Access Journals (Sweden)
Bobileva Tatiana Nikolaevna
2017-08-01
Full Text Available Almost all subsurface rocks used as foundations for various types of structures are stratified. Such heterogeneity may cause specific behaviour of the materials under strain. The differential equations describing the behaviour of such materials contain rapidly oscillating coefficients, so their direct solution is time-consuming even on today's computers. The method of asymptotic averaging replaces the equations for the heterogeneous medium under study with averaged equations having constant coefficients. The present article is concerned with a stratified soil mass consisting of pairwise alternating isotropic elastic layers. After averaging of the elastic moduli, the soil mass with horizontal stratification is modelled as a homogeneous transversely isotropic half-space whose plane of isotropy is perpendicular to the vertical axis. The half-space is weakened by a vertical alveole (cavity) of circular cross-section, and the virgin ground is loaded by its own weight. For the horizontal interfaces between layers, two types of contact conditions are set: ideal contact and slippage without separation. For the homogeneous transversely isotropic half-space with a vertical alveole, the well-known analytical solution of S.G. Lekhnitsky is used. The author gives expressions for the stress components and displacements in the soil mass for different boundary conditions on the alveole surface. Such problems arise in the construction and maintenance of buildings and in the use of composite materials.
Measuring mixing efficiency in experiments of strongly stratified turbulence
Augier, P.; Campagne, A.; Valran, T.; Calpe Linares, M.; Mohanan, A. V.; Micard, D.; Viboud, S.; Segalini, A.; Mordant, N.; Sommeria, J.; Lindborg, E.
2017-12-01
Oceanic and atmospheric models need better parameterization of the mixing efficiency. Therefore, we need to measure this quantity for flows representative of geophysical flows, both in terms of types of flows (with vortices and/or waves) and of dynamical regimes. In order to reach a sufficiently large Reynolds number for strongly stratified flows, experiments in which salt is used to produce the stratification have to be carried out on a large rotating platform of at least 10-meter diameter. We present new experiments, performed in summer 2017, to experimentally study strongly stratified turbulence and mixing efficiency in the Coriolis platform. The flow is forced by a slow periodic movement of an array of large vertical or horizontal cylinders. The velocity field is measured by scanned horizontal 3D-2C particle image velocimetry (PIV) and 2D vertical PIV. Six density-temperature probes are used to measure vertical and horizontal profiles and signals at fixed positions. We will show how we rely heavily on open-science methods for this study. Our new results on the mixing efficiency will be presented and discussed in terms of mixing parameterization.
Optimal energy growth in a stably stratified shear flow
Jose, Sharath; Roy, Anubhab; Bale, Rahul; Iyer, Krithika; Govindarajan, Rama
2018-02-01
Transient growth of perturbations by a linear non-modal evolution is studied here in a stably stratified bounded Couette flow. The density stratification is linear. Classical inviscid stability theory states that a parallel shear flow is stable to exponentially growing disturbances if the Richardson number (Ri) is greater than 1/4 everywhere in the flow. Experiments and numerical simulations at higher Ri show however that algebraically growing disturbances can lead to transient amplification. The complexity of a stably stratified shear flow stems from its ability to combine this transient amplification with propagating internal gravity waves (IGWs). The optimal perturbations associated with maximum energy amplification are numerically obtained at intermediate Reynolds numbers. It is shown that in this wall-bounded flow, the three-dimensional optimal perturbations are oblique, unlike in unstratified flow. A partitioning of energy into kinetic and potential helps in understanding the exchange of energies and how it modifies the transient growth. We show that the apportionment between potential and kinetic energy depends, in an interesting manner, on the Richardson number, and on time, as the transient growth proceeds from an optimal perturbation. The oft-quoted stabilizing role of stratification is also probed in the non-diffusive limit in the context of disturbance energy amplification.
International Nuclear Information System (INIS)
Hong Joo Ahn; Se Chul Sohn; Kwang Yong Jee; Ju Youl Kim; In Koo Lee
2007-01-01
Utilization facilities for radioisotopes (RI) are increasing annually in South Korea; the total number was 2,723 as of December 31, 2005. Verifying the clearance level is a very important problem for ensuring public confidence when radioactive materials are released to the environment. Korean regulations for such clearance are described in Notice No. 2001-30 of the Ministry of Science and Technology (MOST) and Notice No. 2002-67 of the Ministry of Commerce, Industry and Energy (MOCIE). Most unsealed sources in RI waste drums at a storage facility are low-level beta-emitters with short half-lives, so it is impossible to measure their inventories by nondestructive analysis. Furthermore, RI wastes generated by hospitals, educational and research institutes, and industry form heterogeneous, varied, irregular, and small-quantity waste streams. This study addresses a representative (master) sampling survey and analysis plan for RI wastes, because a complete enumeration of waste drums is impossible and undesirable in terms of cost and efficiency. Existing approaches to representative sampling include judgmental, simple random, stratified random, systematic grid, systematic random, composite, and adaptive sampling. A representative sampling plan may combine two or more of these approaches depending on the type and distribution of a waste stream. Stratified random sampling (constrained randomization) proves adequate for the sampling design of RI waste with regard to half-life, surface dose, time of transfer to the storage facility, and waste type. The developed sampling protocol includes estimating the number of drums within a waste stream, estimating the number of samples, and confirming the required number of samples. The statistical process control for the quality assurance plan includes control charts and a 95% upper control limit (UCL) to determine whether the clearance level is met. (authors)
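The 95% UCL decision rule described above can be sketched as follows. This is a minimal illustration only: the variable names, the example activities, and the pass/fail comparison against a clearance limit are assumptions for exposition, not the procedure prescribed by the Korean notices.

```python
import math
import statistics

def ucl_95(measurements, t_crit):
    """One-sided 95% upper confidence limit on the mean activity of a
    waste stream. t_crit is the Student-t quantile for
    len(measurements) - 1 degrees of freedom (e.g. 2.353 for n = 4)."""
    n = len(measurements)
    mean = statistics.mean(measurements)
    std_err = statistics.stdev(measurements) / math.sqrt(n)
    return mean + t_crit * std_err

# Hypothetical activity measurements (Bq/g) from one waste stream;
# a stream would be cleared only if its UCL stays below the limit.
activities = [1.0, 1.2, 0.8, 1.0]
ucl = ucl_95(activities, t_crit=2.353)  # ~1.19
```

The point of using the UCL rather than the sample mean is that sampling variability is charged against the waste generator: a stream passes only when the data are strong enough that the mean is below the limit with 95% confidence.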
Comparison of randomization techniques for clinical trials with data from the HOMERUS-trial
Verberk, W. J.; Kroon, A. A.; Kessels, A. G. H.; Nelemans, P. J.; van Ree, J. W.; Lenders, J. W. M.; Thien, T.; Bakx, J. C.; van Montfrans, G. A.; Smit, A. J.; Beltman, F. W.; de Leeuw, P. W.
2005-01-01
Background. Several methods of randomization are available to create comparable intervention groups in a study. In the HOMERUS-trial, we compared the minimization procedure with a stratified and a non-stratified method of randomization in order to test which one is most appropriate for use in
Reflection and transmission of electromagnetic waves in planarly stratified media
International Nuclear Information System (INIS)
Caviglia, G.
1999-01-01
Propagation of time-harmonic electromagnetic waves in planarly stratified multilayers is investigated. Each layer is allowed to be inhomogeneous, and the layers are separated by interfaces. The procedure is based on the representation of the electromagnetic field in the basis of eigenvectors of the matrix characterizing the first-order system. The local reflection and transmission matrices are then defined, and the corresponding differential equations in the pertinent space variable are determined. The jump conditions at interfaces are also established. The present model incorporates dissipative materials, and the procedure holds without any restriction on material symmetries. Differential equations that have appeared in the literature are shown to hold in particular (one-dimensional) cases or to represent homogeneous layers only
Microstructure of Turbulence in the Stably Stratified Boundary Layer
Sorbjan, Zbigniew; Balsley, Ben B.
2008-11-01
The microstructure of a stably stratified boundary layer, with a significant low-level nocturnal jet, is investigated based on observations from the CASES-99 campaign in Kansas, U.S.A. The reported high-resolution vertical profiles of temperature, wind speed, wind direction, pressure, and turbulent dissipation rate were collected under nocturnal conditions on October 14, 1999, using the CIRES Tethered Lifting System. Two methods for evaluating instantaneous (1-sec) background profiles are applied to the raw data. The background potential temperature is calculated using the “bubble sort” algorithm to produce a monotonically increasing potential temperature with increasing height. Other scalar quantities are smoothed using a running vertical average. The behaviour of the background flow, buoyant overturns, turbulent fluctuations, and their respective histograms is presented. Ratios of the considered length scales to the Ozmidov scale are nearly constant with height, a fact that can be applied in practice for estimating instantaneous profiles of the dissipation rate.
Hydrodynamics of stratified epithelium: Steady state and linearized dynamics
Yeh, Wei-Ting; Chen, Hsuan-Yi
2016-05-01
A theoretical model for stratified epithelium is presented. The viscoelastic properties of the tissue are assumed to be dependent on the spatial distribution of proliferative and differentiated cells. Based on this assumption, a hydrodynamic description of tissue dynamics at the long-wavelength, long-time limit is developed, and the analysis reveals important insights into the dynamics of an epithelium close to its steady state. When the proliferative cells occupy a thin region close to the basal membrane, the relaxation rate towards the steady state is enhanced by cell division and cell apoptosis. On the other hand, when the region where proliferative cells reside becomes sufficiently thick, a flow induced by cell apoptosis close to the apical surface enhances small perturbations. This destabilizing mechanism is general for continuous self-renewal multilayered tissues; it could be related to the origin of certain tissue morphology, tumor growth, and the development pattern.
A study of stratified gas-liquid pipe flow
Energy Technology Data Exchange (ETDEWEB)
Johnson, George W.
2005-07-01
This work includes both theoretical modelling and experimental observations which are relevant to the design of gas condensate transport lines. Multicomponent hydrocarbon gas mixtures are transported in pipes over long distances and at various inclinations. Under certain circumstances, the heavier hydrocarbon components and/or water vapour condense to form one or more liquid phases. Near the desired capacity, the liquid condensate and water are efficiently transported in the form of a stratified flow with a droplet field. During operation, however, the flow rate may be reduced, allowing liquid accumulation which can create serious operational problems due to large amounts of excess liquid being expelled into the receiving facilities during production ramp-up, or even in steady production in severe cases. In particular, liquid tends to accumulate in upward inclined sections due to insufficient drag on the liquid from the gas. To optimize the transport of gas condensates, pipe diameters should be carefully chosen to account for varying flow rates and pressure levels, which are determined through knowledge of the multiphase flow present. It is desirable to have a reliable numerical simulation tool to predict liquid accumulation for various flow rates, pipe diameters and pressure levels, which is not presently accounted for by industrial flow codes. A critical feature of such a simulation code would be the ability to predict the transition from small liquid accumulation at high flow rates to large liquid accumulation at low flow rates. A semi-intermittent flow regime of roll waves alternating with a partly backward flowing liquid film has been observed experimentally to occur for a range of gas flow rates. Most of the liquid is transported in the roll waves. The roll wave regime is not well understood and requires fundamental modelling and experimental research. The lack of reliable models for this regime leads to inaccurate prediction of the onset of
Hydromagnetic stability of rotating stratified compressible fluid flows
Energy Technology Data Exchange (ETDEWEB)
Srinivasan, V; Kandaswamy, P [Dept. of Mathematics, Bharathiar University, Coimbatore, Tamil Nadu, India; Debnath, L [Dept. of Mathematics, University of Central Florida, Orlando, USA
1984-09-01
The hydromagnetic stability of a radially stratified compressible fluid rotating between two coaxial cylinders is investigated. The stability with respect to axisymmetric disturbances is examined. The fluid system is found to be thoroughly stable to axisymmetric disturbances provided the fluid rotates very rapidly. The system is shown to be unstable to non-axisymmetric disturbances, and the slow amplifying hydromagnetic wave modes propagate against the basic rotation. The lower and upper bounds of the azimuthal phase speeds of the amplifying waves are determined. A quadrant theorem on the slow waves characteristic of a rapidly rotating fluid is derived. Special attention is given to the effects of compressibility of the fluid. Some results concerning the stability of an incompressible fluid system are obtained as special cases of the present analysis.
Direct numerical simulation of homogeneous stratified rotating turbulence
Energy Technology Data Exchange (ETDEWEB)
Iida, O.; Tsujimura, S.; Nagano, Y. [Nagoya Institute of Technology, Department of Mech. Eng., Nagoya (Japan)
2005-12-01
The effects of the Prandtl number on stratified rotating turbulence have been studied in homogeneous turbulence by using direct numerical simulations and a rapid distortion theory. Fluctuations under strong stable-density stratification can be theoretically divided into the WAVE and the potential vorticity (PV) modes. In low-Prandtl-number fluids, the WAVE mode deteriorates, while the PV mode remains. Imposing rotation on a low-Prandtl-number fluid makes turbulence two-dimensional as well as geostrophic; it is found from the instantaneous turbulent structure that the vortices merge to form a few vertically-elongated vortex columns. During the period toward two-dimensionalization, the vertical vortices become asymmetric in the sense of rotation. (orig.)
Advanced stratified charge rotary aircraft engine design study
Badgley, P.; Berkowitz, M.; Jones, C.; Myers, D.; Norwood, E.; Pratt, W. B.; Ellis, D. R.; Huggins, G.; Mueller, A.; Hembrey, J. H.
1982-01-01
A technology base of new developments which offered potential benefits to a general aviation engine was compiled and ranked. Using design approaches selected from the ranked list, conceptual design studies were performed of an advanced and a highly advanced engine sized to provide 186/250 shaft kW/HP under cruise conditions at 7620/25,000 m/ft altitude. These are turbocharged, direct-injected stratified charge engines intended for commercial introduction in the early 1990s. The engine descriptive data include tables, curves, and drawings depicting configuration, performance, weights and sizes, heat rejection, ignition and fuel injection system descriptions, maintenance requirements, and scaling data for varying power. An engine-airframe integration study of the resulting engines in advanced airframes was performed on a comparative basis with current production type engines. The results show airplane performance, costs, noise, and installation factors. The rotary-engined airplanes display substantial improvements over the baseline, including 30 to 35% lower fuel usage.
Internal combustion engine using premixed combustion of stratified charges
Marriott, Craig D [Rochester Hills, MI; Reitz, Rolf D [Madison, WI
2003-12-30
During a combustion cycle, a first stoichiometrically lean fuel charge is injected well prior to top dead center, preferably during the intake stroke. This first fuel charge is substantially mixed with the combustion chamber air during subsequent motion of the piston towards top dead center. A subsequent fuel charge is then injected prior to top dead center to create a stratified, locally richer mixture (but still leaner than stoichiometric) within the combustion chamber. The locally rich region within the combustion chamber has sufficient fuel density to autoignite, and its self-ignition serves to activate ignition for the lean mixture existing within the remainder of the combustion chamber. Because the mixture within the combustion chamber is overall premixed and relatively lean, NOx and soot production are significantly diminished.
Visualization of periodic flows in a continuously stratified fluid.
Bardakov, R.; Vasiliev, A.
2012-04-01
To visualize the flow pattern of a viscous continuously stratified fluid, both experimental and computational methods were developed. Computational procedures were based on exact solutions of the set of fundamental equations. Solutions of the problems of flows produced by a periodically oscillating disk (linear and torsion oscillations) were visualized at high resolution to distinguish the small-scale singular components against the background of strong internal waves. The numerical visualization algorithm can represent both scalar and vector fields, such as velocity, density, pressure, vorticity, and stream function. The effects of source size, buoyancy frequency, oscillation frequency, and kinematic viscosity of the medium were traced in 2D and 3D formulations of the problem. A precision schlieren instrument was used to visualize the flow pattern produced by linear and torsion oscillations of a strip and a disk in a continuously stratified fluid. Uniform stratification was created by the continuous displacement method. The buoyancy period ranged from 7.5 to 14 s. In the experiments, disks with diameters from 9 to 30 cm and thicknesses of 1 mm to 10 mm were used. Different schlieren methods, namely conventional vertical slit - Foucault knife, vertical slit - filament (Maksoutov's method), and horizontal slit - horizontal grating (natural "rainbow" schlieren method), helped to produce complementary flow patterns. Both internal wave beams and fine flow components were visualized near and far from the source. The intensity of high-gradient envelopes increased in proportion to the amplitude of the source. In domains where envelopes converge, isolated small-scale vortices and extended mushroom-like jets were formed. Experiments have shown that in the case of torsion oscillations the pattern of currents is more complicated than in the case of forced linear oscillations. Comparison with known theoretical models shows that nonlinear interactions between the regular and singular flow components must be taken into account.
Longevity of Compositionally Stratified Layers in Ice Giants
Friedson, A. J.
2017-12-01
In the hydrogen-rich atmospheres of gas giants, a decrease with radius in the mixing ratio of a heavy species (e.g. He, CH4, H2O) has the potential to produce a density stratification that is convectively stable if the heavy species is sufficiently abundant. Formation of stable layers in the interiors of these planets has important implications for their internal structure, chemical mixing, dynamics, and thermal evolution, since vertical transport of heat and constituents in such layers is greatly reduced in comparison to that in convecting layers. Various processes have been suggested for creating compositionally stratified layers. In the interiors of Jupiter and Saturn, these include phase separation of He from metallic hydrogen and dissolution of dense core material into the surrounding metallic-H envelope. Condensation of methane and water has been proposed as a mechanism for producing stable zones in the atmospheres of Saturn and the ice giants. However, if a stably stratified layer is formed adjacent to an active region of convection, it may be susceptible to progressive erosion as the convection intrudes and entrains fluid into the unstable envelope. We discuss the principal factors that control the rate of entrainment and associated erosion and present a specific example concerning the longevity of stable layers formed by condensation of methane and water in Uranus and Neptune. We also consider whether the temporal variability of such layers may engender episodic behavior in the release of the internal heat of these planets. This research is supported by a grant from the NASA Solar System Workings Program.
Investigations on flow reversal in stratified horizontal flow
International Nuclear Information System (INIS)
Staebler, T.; Meyer, L.; Schulenberg, T.; Laurien, E.
2005-01-01
The phenomena of flow reversal in stratified flows are investigated in a horizontal channel with application to the Emergency Core Cooling System (ECCS) in Pressurized Water Reactors (PWR). In case of a Loss-of-Coolant Accident (LOCA), coolant can be injected through a secondary pipe into the feeding line of the primary circuit, the so-called hot leg, counter-currently to the steam flow. It is essential that the coolant reaches the reactor core to prevent overheating. Due to high temperatures in such accident scenarios, steam is generated in the core, which escapes from the reactor vessel through the hot leg. In case of sufficiently high steam flow rates, only a reduced amount of coolant, or even no coolant, will be delivered to the reactor core. The WENKA test facility at the Institute for Nuclear and Energy Technologies (IKET) at Forschungszentrum Karlsruhe is capable of investigating the fluid dynamics of two-phase flows in such scenarios. Water and air flow counter-currently in a horizontal channel made of clear acrylic glass to allow full optical access. Flow rates of water and air can be varied independently within a wide range. Once flow reversal sets in, a strong hysteresis effect must be taken into account; this was quantified during the present investigations. Local experimental data are needed to extend appropriate models of flow reversal in horizontal two-phase flow and to include them in numerical codes. Investigations are carried out by means of Particle Image Velocimetry (PIV) to obtain local flow velocities without disturbing the flow. Due to the wavy character of the flow, strong reflections at the interfacial area must be taken into account. Using fluorescent particles and an optical filter allows eliminating the reflections and recording only the signals of the particles. The challenges in conducting local investigations in stratified wavy flows by applying optical measurement techniques are discussed. Results are presented and discussed allowing
Stratified flow model for convective condensation in an inclined tube
International Nuclear Information System (INIS)
Lips, Stéphane; Meyer, Josua P.
2012-01-01
Highlights: ► Convective condensation in an inclined tube is modelled. ► The heat transfer coefficient is the highest for about 20° below the horizontal. ► Capillary forces have a strong effect on the liquid–vapour interface shape. ► A good agreement between the model and the experimental results was observed. - Abstract: Experimental data are reported for condensation of R134a in an 8.38 mm inner diameter smooth tube in inclined orientations with a mass flux of 200 kg/m² s. Under these conditions, the flow is stratified and there is an optimum inclination angle, which leads to the highest heat transfer coefficient. There is a need for a model to better understand and predict the flow behaviour. In this paper, the state of the art of existing models of stratified two-phase flows in inclined tubes is presented, whereafter a new mechanistic model is proposed. The liquid–vapour distribution in the tube is determined by taking into account the gravitational and the capillary forces. The comparison between the experimental data and the model prediction showed a good agreement in terms of heat transfer coefficients and pressure drops. The effect of the interface curvature on the heat transfer coefficient has been quantified and has been found to be significant. The optimum inclination angle is due to a balance between an increase of the void fraction and an increase in the falling liquid film thickness when the tube is inclined downwards. The effect of the mass flux and the vapour quality on the optimum inclination angle has also been studied.
Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas
2016-09-01
Vegetation monitoring is becoming a major issue in the urban environment because of the services vegetation provides, and it necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they require a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratification is proposed. The efficiency of these strategies is evaluated on two medium-sized French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
International Nuclear Information System (INIS)
Bengtson, P.; Larsson, C.M.; Simenstad, P.; Suomela, J.
1995-09-01
Marine samples from the vicinity of the plants show elevated radionuclide concentrations, caused by discharges from the plants. Very low concentrations are noted in terrestrial samples. At several locations, the effects of the Chernobyl disaster still dominate. Control samples measured by SSI have confirmed the measurements performed by the operators. 8 refs, 6 tabs, 46 figs
Invited Review. Combustion instability in spray-guided stratified-charge engines. A review
Energy Technology Data Exchange (ETDEWEB)
Fansler, Todd D. [Univ. of Wisconsin, Madison, WI (United States); Reuss, D. L. [Univ. of Michigan, Ann Arbor, MI (United States); Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sick, V. [Univ. of Michigan, Ann Arbor, MI (United States); Dahms, R. N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2015-02-02
Our article reviews systematic research on combustion instabilities (principally rare, random misfires and partial burns) in spray-guided stratified-charge (SGSC) engines operated at part load with highly stratified fuel-air-residual mixtures. Results from high-speed optical imaging diagnostics and numerical simulation provide a conceptual framework and quantify the sensitivity of ignition and flame propagation to strong, cyclically varying temporal and spatial gradients in the flow field and in the fuel-air-residual distribution. For SGSC engines using multi-hole injectors, spark stretching and locally rich ignition are beneficial. Moreover, combustion instability is dominated by convective flow fluctuations that impede motion of the spark or flame kernel toward the bulk of the fuel, coupled with low flame speeds due to locally lean mixtures surrounding the kernel. In SGSC engines using outwardly opening piezo-electric injectors, ignition and early flame growth are strongly influenced by the spray's characteristic recirculation vortex. For both injection systems, the spray and the intake/compression-generated flow field influence each other. Factors underlying the benefits of multi-pulse injection are identified. Finally, some unresolved questions include (1) the extent to which piezo-SGSC misfires are caused by failure to form a flame kernel rather than by flame-kernel extinction (as in multi-hole SGSC engines); (2) the relative contributions of partially premixed flame propagation and mixing-controlled combustion under the exceptionally late-injection conditions that permit SGSC operation on E85-like fuels with very low NOx and soot emissions; and (3) the effects of flow-field variability on later combustion, where fuel-air-residual mixing within the piston bowl becomes important.
Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing
Kumar, Abhishek; Verma, Mahendra K.; Sukhatme, Jai
2017-01-01
In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small scale turbulence. The VSHF consists of internal gravity waves and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k−3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k−3 power-law and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k−11/5 form at large scales, to a steeper approximate k−3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k−1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k−2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k−1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.
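For reference, the Bolgiano–Obukhov scaling invoked above is conventionally written (this is the standard phenomenology, not a result of this paper; here $\varepsilon_\theta$ is the potential-energy dissipation rate and $g\alpha$ the buoyancy parameter):

```latex
E_u(k) \sim \varepsilon_\theta^{2/5}\,(g\alpha)^{4/5}\,k^{-11/5},
\qquad
E_\theta(k) \sim \varepsilon_\theta^{4/5}\,(g\alpha)^{-2/5}\,k^{-7/5}
```

The kinetic-energy form $k^{-11/5}$ is the shallow large-scale scaling the abstract reports in the moderately stratified regime.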
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
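The two population sampling methods compared above can be sketched roughly as follows. This is an illustrative toy, not the study's exposure model: the list-of-dicts population and the field names "tract" and "age" are assumptions.

```python
import random

def stratified_random_sample(children, n_per_tract, seed=1):
    """Stratified-random sampling: draw a fixed number of simulated
    children from each census tract, choosing ages at random within
    the tract."""
    rng = random.Random(seed)
    by_tract = {}
    for child in children:
        by_tract.setdefault(child["tract"], []).append(child)
    sample = []
    for members in by_tract.values():
        sample.extend(rng.sample(members, min(n_per_tract, len(members))))
    return sample

def random_random_sample(children, n_total, seed=1):
    """Random-random sampling: draw uniformly from the whole simulated
    population, ignoring tract membership."""
    rng = random.Random(seed)
    return rng.sample(children, min(n_total, len(children)))
```

The stratified draw guarantees every tract is represented, while the random-random draw leaves tract coverage to chance; as the abstract notes, the resulting exposure estimates differ only slightly because ages are still randomized within tracts.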
Probability sampling design in ethnobotanical surveys of medicinal plants
Directory of Open Access Journals (Sweden)
Mariano Martinez Espinosa
2012-07-01
Non-probability sampling designs can be used in ethnobotanical surveys of medicinal plants. However, such methods do not allow statistical inferences to be made from the data generated. The aim of this paper is to present a probability sampling design that is applicable in ethnobotanical studies of medicinal plants. The sampling design employed in the research titled "Ethnobotanical knowledge of medicinal plants used by traditional communities of Nossa Senhora Aparecida do Chumbo district (NSACD), Poconé, Mato Grosso, Brazil" was used as a case study. Probability sampling methods (simple random and stratified sampling) were used in this study. In order to determine the sample size, the following data were considered: population size (N) of 1179 families; confidence coefficient of 95%; sampling error (d) of 0.05; and a proportion (p) of 0.5. The application of this sampling method resulted in a sample size (n) of at least 290 families in the district. The present study concludes that probability sampling methods necessarily have to be employed in ethnobotanical studies of medicinal plants, particularly where statistical inferences have to be made using the data obtained. This can be achieved by applying different existing probability sampling methods, or better still, a combination of such methods.
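The sample-size computation reported above (N = 1179, 95% confidence, d = 0.05, p = 0.5, giving n ≥ 290) follows the standard finite-population formula for estimating a proportion; a sketch:

```python
import math

def sample_size(N, d=0.05, p=0.5, z=1.96):
    """Minimum sample size for estimating a proportion in a finite
    population of size N with margin of error d; z is the normal
    quantile for the chosen confidence level (1.96 for 95%)."""
    numerator = N * z**2 * p * (1 - p)
    denominator = d**2 * (N - 1) + z**2 * p * (1 - p)
    return math.ceil(numerator / denominator)

print(sample_size(1179))  # → 290, matching the figure in the abstract
```

Using p = 0.5 maximizes p(1 − p) and therefore gives the most conservative (largest) sample size when the true proportion is unknown.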
Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M
2017-06-23
In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified sampling clustered at the ZIP-code level; and stratified sampling clustered at the census-tract level. We conducted a simulation of repeated sampling and compared the approaches for their levels of precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both the ZIP- and tract-based stratified, clustered approaches were feasible. Both strategies yielded improvements over simple random sampling, with relative improvements, respectively, of average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Use of statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the Food and Drug Administration (FDA). The design is feasible to implement in North Carolina. Use of
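A stratified, clustered design of the kind compared above can be sketched as a two-stage draw: select whole clusters (e.g. ZIP codes) at random within each stratum, then inspect every retailer in the drawn clusters. The field names and the equal-allocation rule are assumptions for illustration, not the study's design.

```python
import random
from collections import defaultdict

def stratified_cluster_sample(retailers, clusters_per_stratum, seed=1):
    """Two-stage design: group retailers into clusters (here, ZIP codes)
    within strata, draw whole clusters at random from each stratum, and
    keep every retailer in each drawn cluster."""
    rng = random.Random(seed)
    # Stage 0: build clusters keyed by (stratum, zip).
    clusters = defaultdict(list)
    for r in retailers:
        clusters[(r["stratum"], r["zip"])].append(r)
    by_stratum = defaultdict(list)
    for (stratum, _zip), members in clusters.items():
        by_stratum[stratum].append(members)
    # Stage 1: draw clusters within each stratum; stage 2: take all members.
    sample = []
    for cluster_list in by_stratum.values():
        k = min(clusters_per_stratum, len(cluster_list))
        for cluster in rng.sample(cluster_list, k):
            sample.extend(cluster)
    return sample
```

Clustering trades a somewhat larger design effect for shorter travel routes, since all sampled retailers in a cluster share a neighborhood, which is exactly the cost/precision trade-off the abstract evaluates.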
Margulis, L.; Hinkle, G.; Stolz, J.; Craft, F.; Esteve, I.; Guerrero, R.
1990-01-01
Spirochetes were found in the lower anoxiphototrophic layer of a stratified microbial mat (North Pond, Laguna Figueroa, Baja California, Mexico). Ultra-structural analysis of thin sections of field samples revealed spirochetes approximately 0.25 micrometer in diameter with 10 or more periplasmic flagella, leading to the interpretation that these spirochetes bear 10 flagellar insertions on each end. Morphometric study showed these free-living spirochetes greatly resemble certain symbiotic ones, i.e., Borrelia and certain termite spirochetes, the transverse sections of which are presented here. The ultrastructure of this spirochete also resembles Hollandina and Diplocalyx (spirochetes symbiotic in arthropods) more than it does Spirochaeta, the well known genus of mud-dwelling spirochetes. The new spirochete was detected in mat material collected both in 1985 and in 1987. Unique morphology (i.e., conspicuous outer coat of inner membrane, large number of periplasmic flagella) and ecology prompt us to name a new free-living spirochete.
Stratified flows with variable density: mathematical modelling and numerical challenges.
Murillo, Javier; Navas-Montilla, Adrian
2017-04-01
Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment that cause collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit variable horizontal density: depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution not only provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux
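The solvers cited are far more sophisticated (energy-balanced, augmented, arbitrary-order ADER, with source terms and variable density), but their underlying building block is a Roe-type flux for the shallow water equations. A minimal first-order 1D sketch of that building block, without any of the corrections the abstract is actually about:

```python
import math

G = 9.81  # gravitational acceleration

def physical_flux(h, hu):
    """Exact flux of the 1D shallow-water equations for state (h, hu)."""
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def roe_flux(hL, huL, hR, huR):
    """First-order Roe numerical flux between left and right states."""
    uL, uR = huL / hL, huR / hR
    sL, sR = math.sqrt(hL), math.sqrt(hR)
    u = (sL * uL + sR * uR) / (sL + sR)   # Roe-averaged velocity
    c = math.sqrt(G * 0.5 * (hL + hR))    # Roe-averaged celerity
    dh, dhu = hR - hL, huR - huL
    # wave strengths for eigenvectors r1 = (1, u-c), r2 = (1, u+c)
    a1 = ((u + c) * dh - dhu) / (2 * c)
    a2 = (dhu - (u - c) * dh) / (2 * c)
    fL, fR = physical_flux(hL, huL), physical_flux(hR, huR)
    f0 = 0.5 * (fL[0] + fR[0]) - 0.5 * (abs(u - c) * a1 + abs(u + c) * a2)
    f1 = 0.5 * (fL[1] + fR[1]) - 0.5 * (abs(u - c) * a1 * (u - c)
                                        + abs(u + c) * a2 * (u + c))
    return (f0, f1)

# Consistency check: for identical states the Roe flux reduces to the physical flux
print(roe_flux(2.0, 1.0, 2.0, 1.0), physical_flux(2.0, 1.0))
```

Well-balanced and energy-balanced schemes modify exactly this upwinding step so that source terms (bed slope, friction, interlayer exchange) are discretized consistently with the flux differences.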
Deep silicon maxima in the stratified oligotrophic Mediterranean Sea
Directory of Open Access Journals (Sweden)
Y. Crombet
2011-02-01
Full Text Available The silicon biogeochemical cycle has been studied in the Mediterranean Sea during late summer/early autumn 1999 and summer 2008. The distribution of nutrients, particulate carbon and silicon, fucoxanthin (Fuco), and total chlorophyll-a (TChl-a) was investigated along an eastward gradient of oligotrophy during two cruises (PROSOPE and BOUM) encompassing the entire Mediterranean Sea during the stratified period. In both seasons, surface waters were depleted in nutrients and the nutriclines gradually deepened towards the East, the phosphacline being deepest in the easternmost Levantine basin. Following the nutriclines, parallel deep maxima of biogenic silica (DSM), fucoxanthin (DFM) and TChl-a (DCM) were evidenced during both seasons, with maximal concentrations of 0.45 μmol L^{−1} for BSi, 0.26 μg L^{−1} for Fuco, and 1.70 μg L^{−1} for TChl-a, all measured during summer. Contrary to the DCM, which was a persistent feature in the Mediterranean Sea, the DSM and DFMs were observed in discrete areas of the Alboran Sea, the Algero-Provencal basin, the Ionian Sea and the Levantine basin, indicating that diatoms were able to grow at depth and dominate the DCM under specific conditions. Diatom assemblages were dominated by Chaetoceros spp., Leptocylindrus spp. and Pseudonitzschia spp., and the association between large centric diatoms (Hemiaulus hauckii and Rhizosolenia styliformis) and the cyanobacterium Richelia intracellularis was observed at nearly all sites. The diatoms' ability to grow at depth is commonly observed in other oligotrophic regions and could play a major role in ecosystem productivity and carbon export to depth. Contrary to the common view that Si and siliceous phytoplankton are not major components of the Mediterranean biogeochemistry, we suggest here that diatoms, by persisting at depth during the stratified period, could contribute to a
Experimental CFD grade data for stratified two-phase flows
Energy Technology Data Exchange (ETDEWEB)
Vallee, Christophe, E-mail: c.vallee@fzd.d [Forschungszentrum Dresden-Rossendorf e.V., Institute of Safety Research, D-01314 Dresden (Germany); Lucas, Dirk; Beyer, Matthias; Pietruske, Heiko; Schuetz, Peter; Carl, Helmar [Forschungszentrum Dresden-Rossendorf e.V., Institute of Safety Research, D-01314 Dresden (Germany)
2010-09-15
Stratified two-phase flows were investigated at two test facilities with horizontal test-sections. For both, rectangular channel cross-sections were chosen to provide optimal observation possibilities for the application of optical measurement techniques. In order to show the local flow structure, high-speed video observation was applied, which delivers the high resolution in space and time needed for CFD code validation. The first investigations were performed in the Horizontal Air/Water Channel (HAWAC), which is made of acrylic glass and allows the investigation of air/water co-current flows at atmospheric pressure and room temperature. At the channel inlet, a special device was designed for well-defined and adjustable inlet boundary conditions. For the quantitative analysis of the optical measurements performed at the HAWAC, an algorithm was developed to recognise the stratified interface in the camera frames. This allows statistical treatment of the data for comparison with CFD calculation results. As an example, the unstable wave growth leading to slug flow is shown from the test-section inlet onwards. Moreover, the hydraulic jump, the quasi-stationary discontinuous transition between super- and subcritical flow, was investigated in this closed channel. The structure of the hydraulic jump over time is revealed by calculating the probability density of the water level. A series of experiments shows that the hydraulic jump profile and its position from the inlet vary substantially with the inlet boundary conditions due to the momentum exchange between the phases. The second channel is built in the pressure chamber of the TOPFLOW test facility, which is used to perform air/water and steam/water experiments at pressures of up to 5.0 MPa and temperatures of up to 264 °C, but under pressure equilibrium with the vessel inside. In the present experiment, the test-section represents a flat model of the hot leg of the German Konvoi pressurised water reactor scaled at
Experimental CFD grade data for stratified two-phase flows
International Nuclear Information System (INIS)
Vallee, Christophe; Lucas, Dirk; Beyer, Matthias; Pietruske, Heiko; Schuetz, Peter; Carl, Helmar
2010-01-01
Stratified two-phase flows were investigated at two test facilities with horizontal test-sections. For both, rectangular channel cross-sections were chosen to provide optimal observation possibilities for the application of optical measurement techniques. In order to show the local flow structure, high-speed video observation was applied, which delivers the high resolution in space and time needed for CFD code validation. The first investigations were performed in the Horizontal Air/Water Channel (HAWAC), which is made of acrylic glass and allows the investigation of air/water co-current flows at atmospheric pressure and room temperature. At the channel inlet, a special device was designed for well-defined and adjustable inlet boundary conditions. For the quantitative analysis of the optical measurements performed at the HAWAC, an algorithm was developed to recognise the stratified interface in the camera frames. This allows statistical treatment of the data for comparison with CFD calculation results. As an example, the unstable wave growth leading to slug flow is shown from the test-section inlet onwards. Moreover, the hydraulic jump, the quasi-stationary discontinuous transition between super- and subcritical flow, was investigated in this closed channel. The structure of the hydraulic jump over time is revealed by calculating the probability density of the water level. A series of experiments shows that the hydraulic jump profile and its position from the inlet vary substantially with the inlet boundary conditions due to the momentum exchange between the phases. The second channel is built in the pressure chamber of the TOPFLOW test facility, which is used to perform air/water and steam/water experiments at pressures of up to 5.0 MPa and temperatures of up to 264 °C, but under pressure equilibrium with the vessel inside. In the present experiment, the test-section represents a flat model of the hot leg of the German Konvoi pressurised water reactor scaled at 1
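The statistical treatment described in both records — the probability density of the water level at a fixed position, computed from interface heights detected in the camera frames — amounts to a histogram estimate over many frames. A sketch with a synthetic, fluctuating level signal standing in for the detected interface heights:

```python
import math
import random

random.seed(1)

# Hypothetical interface heights (mm) at one channel position over many frames:
# a quasi-stationary hydraulic jump fluctuating around 30 mm
levels = [30.0 + 4.0 * math.sin(0.05 * t) + random.gauss(0.0, 1.0)
          for t in range(5000)]

def level_pdf(samples, bin_width):
    """Histogram estimate of the probability density of the water level."""
    lo = min(samples)
    counts = {}
    for x in samples:
        b = int((x - lo) // bin_width)
        counts[b] = counts.get(b, 0) + 1
    n = len(samples)
    # map bin centre -> density (counts normalised by n and bin width)
    return {lo + (b + 0.5) * bin_width: c / (n * bin_width)
            for b, c in sorted(counts.items())}

pdf = level_pdf(levels, bin_width=1.0)
# the estimated density integrates to 1 over the sampled range
print(round(sum(p * 1.0 for p in pdf.values()), 6))
```

Peaks of such a density mark the preferred water levels on either side of the jump, and their spread quantifies the interface fluctuations used for CFD comparison.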
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view attributes to these models an important mathematical role in theoretical formulations of personalized medicine, because they not only have parameters that represent average patients, but also parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
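A rough feel for the sample sizes involved can be had from a Monte Carlo sketch. The simulation below is a deliberate simplification of the article's random-effects analysis: it collapses each subject's two Test and two Reference periods to a paired log-difference and uses a fixed approximate t critical value; `gmr` and `cv_w` are assumed illustrative values, not EQUIGEN parameters:

```python
import math
import random

random.seed(42)

def tost_power(n_subjects, gmr=1.0, cv_w=0.25, n_sim=2000):
    """Simulated power of an average-bioequivalence TOST for a replicate crossover.

    Each subject contributes the mean of 2 Test and 2 Reference log-observations;
    subject random effects cancel in the within-subject difference, so only the
    within-subject variability (cv_w) matters in this simplified sketch.
    """
    sd_w = math.sqrt(math.log(1.0 + cv_w ** 2))  # within-subject SD, log scale
    theta = math.log(1.25)                       # BE limits +/- log(1.25)
    t_crit = 1.70  # approximate one-sided 5% t critical value (kept fixed)
    hits = 0
    for _ in range(n_sim):
        diffs = []
        for _ in range(n_subjects):
            # averaging 2 replicates per formulation halves the error variance
            t_mean = math.log(gmr) + random.gauss(0, sd_w / math.sqrt(2))
            r_mean = random.gauss(0, sd_w / math.sqrt(2))
            diffs.append(t_mean - r_mean)
        m = sum(diffs) / n_subjects
        v = sum((d - m) ** 2 for d in diffs) / (n_subjects - 1)
        se = math.sqrt(v / n_subjects)
        # two one-sided tests: reject both H0's to conclude bioequivalence
        if (m - theta) / se < -t_crit and (m + theta) / se > t_crit:
            hits += 1
    return hits / n_sim

print(tost_power(12), tost_power(24))  # power grows with the number of subjects
```

A full implementation would fit the FDA-required random-effects linear model per simulated dataset; the monotone power-versus-n behaviour, however, is the same.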
Plume Splitting in a Two-layer Stratified Ambient Fluid
Ma, Yongxing; Flynn, Morris; Sutherland, Bruce
2017-11-01
A line-source plume descending into a two-layer stratified ambient fluid in a finite-sized tank is studied experimentally. Although the total volume of ambient fluid is fixed, lower- and upper-layer fluids are respectively removed and added at a constant rate, mimicking marine outfall through diffusers and natural and hybrid ventilated buildings. The influence of the plume on the ambient depends on the value of λ, defined as the ratio of the plume buoyancy to the buoyancy loss of the plume as it crosses the ambient interface. Similar to classical filling-box experiments, the plume can always reach the bottom of the tank if λ > 1. By contrast, if λ < 1, an intermediate layer eventually forms as a result of plume splitting, and eventually all of the plume fluid spreads within the intermediate layer. The starting time, t_v, and the ending time, t_t, of the transition process measured from experiments correlate with the value of λ. A three-layer ambient fluid is observed after transition, and the mean value of the measured densities of the intermediate-layer fluid is well predicted using plume theory. Acknowledgments: Funding for this study was provided by NSERC.
Economic evaluation in stratified medicine: methodological issues and challenges
Directory of Open Access Journals (Sweden)
Hans-Joerg eFugel
2016-05-01
Full Text Available Background: Stratified Medicine (SM) is becoming a practical reality with the targeting of medicines by using a biomarker or genetic-based diagnostic to identify the eligible patient sub-population. Like any healthcare intervention, SM interventions have costs and consequences that must be considered by reimbursement authorities with limited resources. Methodological standards and guidelines exist for economic evaluations in clinical pharmacology and are an important component of health technology assessments (HTAs) in many countries. However, these guidelines were initially developed for traditional pharmaceuticals and not for complex interventions with multiple components. This raises the issue of whether these guidelines are adequate for SM interventions or whether new specific guidance and methodology are needed to avoid inconsistencies and contradictory findings when assessing economic value in SM. Objective: This article describes specific methodological challenges when conducting health economic (HE) evaluations for SM interventions and outlines potential modifications to existing evaluation guidelines/principles that would promote consistent economic evaluations for SM. Results/Conclusions: Specific methodological aspects for SM comprise considerations on the choice of comparator, measuring effectiveness and outcomes, appropriate modelling structure and the scope of sensitivity analyses. Although current HE methodology can be applied for SM, greater complexity requires further methodology development and modifications in the guidelines.
Operations and Maintenance Cost for Stratified Buildings: A Critical Review
Directory of Open Access Journals (Sweden)
Che-Ghani Nor Zaimah
2016-01-01
Full Text Available Building maintenance is essential in preserving a building's appearance and performance, and upkeep of building performance prolongs the building's value and life cycle. Malaysia still lags in managing costs for building operation and maintenance, and the cost of housing maintenance has been found to be high due to poor maintenance practices. In order to better understand how to manage the cost, this study reviews the contributing factors affecting the operation and maintenance cost of stratified buildings in Malaysia. The research first identified the factors through an extensive literature review, then scrutinized those that affect, and can minimize, operation and maintenance cost. This literature review offers insight into the building maintenance scenario in Malaysia, focusing on its issues and challenges. The study also finds that operation and maintenance cost for housing in Malaysia is still in a poor state. Interestingly, this paper reveals that operation and maintenance cost is influenced by three significant factors: tenants' expectations, building characteristics and building defects. Measures to reduce housing operation and maintenance cost are also highlighted, so that this study can be a stepping stone towards proposing efficient and effective facilities management strategies for affordable housing in future.
Stratified patterns of divorce: Earnings, education, and gender
Directory of Open Access Journals (Sweden)
Amit Kaplan
2015-05-01
Full Text Available Background: Despite evidence that divorce has become more prevalent among weaker socioeconomic groups, knowledge about the stratification aspects of divorce in Israel is lacking. Moreover, although scholarly debate recognizes the importance of stratificational positions with respect to divorce, less attention has been given to the interactions between them. Objective: Our aim is to examine the relationship between social inequality and divorce, focusing on how household income, education, employment stability, relative earnings, and the intersection between them affect the risk of divorce in Israel. Methods: The data are derived from combined census files for 1995-2008, annual administrative employment records from the National Insurance Institute and the Tax Authority, and data from the Civil Registry of Divorce. We used a series of discrete-time event-history analysis models for marital dissolution. Results: Couples in lower socioeconomic positions had a higher risk of divorce in Israel. Higher education in general, and homogamy in terms of higher education (both spouses have degrees) in particular, decreased the risk of divorce. The wife's relative earnings had a differential effect on the likelihood of divorce, depending on household income: a wife who outearned her husband increased the log odds of divorce more in the upper tertiles than in the lower tertile. Conclusions: Our study shows that divorce indeed has a stratified pattern and that weaker socioeconomic groups experience the highest levels of divorce. Gender inequality within couples intersects with the household's economic and educational resources.
Clinical research in small genomically stratified patient populations.
Martin-Liberal, J; Rodon, J
2017-07-01
The paradigm of early drug development in cancer is shifting from 'histology-oriented' to 'molecularly oriented' clinical trials. This change can be attributed to the vast amount of tumour biology knowledge generated by large international research initiatives such as The Cancer Genome Atlas (TCGA) and the use of next generation sequencing (NGS) techniques developed in recent years. However, targeting infrequent molecular alterations entails a series of special challenges. The optimal molecular profiling method, the lack of standardised biological thresholds, inter- and intra-tumour heterogeneity, availability of enough tumour material, correct clinical trial design, attrition rate, logistics and costs are only some of the issues that need to be taken into consideration in clinical research in small genomically stratified patient populations. This article examines the most relevant challenges inherent to clinical research in these populations. Moreover, perspectives from academia are reviewed, as well as initiatives to be taken in forthcoming years. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stratifying the Risk of Venous Thromboembolism in Otolaryngology
Shuman, Andrew G.; Hu, Hsou Mei; Pannucci, Christopher J.; Jackson, Christopher R.; Bradford, Carol R.; Bahl, Vinita
2015-01-01
Objective The consequences of perioperative venous thromboembolism (VTE) are devastating; identifying patients at risk is an essential step in reducing morbidity and mortality. The utility of perioperative VTE risk assessment in otolaryngology is unknown. This study was designed to risk-stratify a diverse population of otolaryngology patients for VTE events. Study Design Retrospective cohort study. Setting Single-institution academic tertiary care medical center. Subjects and Methods Adult patients presenting for otolaryngologic surgery requiring hospital admission from 2003 to 2010 who did not receive VTE chemoprophylaxis were included. The Caprini risk assessment was retrospectively scored via a validated method of electronic chart abstraction. Primary study variables were Caprini risk scores and the incidence of perioperative venous thromboembolic outcomes. Results A total of 2016 patients were identified. The overall 30-day rate of VTE was 1.3%. The incidence of VTE in patients with a Caprini risk score of 6 or less was 0.5%. For patients with scores of 7 or 8, the incidence was 2.4%. Patients with a Caprini risk score greater than 8 had an 18.3% incidence of VTE and were significantly more likely to develop a VTE when compared to patients with a Caprini risk score less than 8. Conclusion The Caprini risk assessment risk-stratifies otolaryngology patients for 30-day VTE events and allows otolaryngologists to identify patient subgroups who have a higher risk of VTE in the absence of chemoprophylaxis. PMID:22261490
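The stratified incidences reported above amount to a grouped proportion per Caprini band. The per-patient counts below are invented so as to reproduce the published rates (0.5%, 2.4%, 18.3%) and are not the study data; the sketch only shows the stratification arithmetic:

```python
# Hypothetical per-patient records: (caprini_score, had_vte); counts chosen
# to reproduce the reported stratum incidences, not the actual study data
patients = [(4, False)] * 1200 + [(4, True)] * 6 \
         + [(7, False)] * 480 + [(7, True)] * 12 \
         + [(9, False)] * 58 + [(9, True)] * 13

def incidence_by_stratum(records):
    """30-day VTE incidence (%) within Caprini score strata <=6, 7-8, >8."""
    strata = {"<=6": [0, 0], "7-8": [0, 0], ">8": [0, 0]}
    for score, vte in records:
        key = "<=6" if score <= 6 else "7-8" if score <= 8 else ">8"
        strata[key][0] += 1      # patients in stratum
        strata[key][1] += vte    # VTE events in stratum
    return {k: round(100.0 * v / n, 1) for k, (n, v) in strata.items()}

print(incidence_by_stratum(patients))
# -> {'<=6': 0.5, '7-8': 2.4, '>8': 18.3}
```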
Stratified charge rotary engine critical technology enablement. Volume 2: Appendixes
Irion, C. E.; Mount, R. E.
1992-01-01
This second volume of appendixes is a companion to Volume 1 of this report which summarizes results of a critical technology enablement effort with the stratified charge rotary engine (SCRE) focusing on a power section of 0.67 liters (40 cu. in.) per rotor in single and two rotor versions. The work is a continuation of prior NASA Contracts NAS3-23056 and NAS3-24628. Technical objectives are multi-fuel capability, including civil and military jet fuel and DF-2, fuel efficiency of 0.355 Lbs/BHP-Hr. at best cruise condition above 50 percent power, altitude capability of up to 10Km (33,000 ft.) cruise, 2000 hour TBO and reduced coolant heat rejection. Critical technologies for SCRE's that have the potential for competitive performance and cost in a representative light-aircraft environment were examined. Objectives were: the development and utilization of advanced analytical tools, i.e. higher speed and enhanced three dimensional combustion modeling; identification of critical technologies; development of improved instrumentation; and to isolate and quantitatively identify the contribution to performance and efficiency of critical components or subsystems.
Stratified Charge Rotary Engine Critical Technology Enablement, Volume 1
Irion, C. E.; Mount, R. E.
1992-01-01
This report summarizes results of a critical technology enablement effort with the stratified charge rotary engine (SCRE) focusing on a power section of 0.67 liters (40 cu. in.) per rotor in single and two rotor versions. The work is a continuation of prior NASA Contracts NAS3-23056 and NAS3-24628. Technical objectives are multi-fuel capability, including civil and military jet fuel and DF-2, fuel efficiency of 0.355 Lbs/BHP-Hr. at best cruise condition above 50 percent power, altitude capability of up to 10Km (33,000 ft.) cruise, 2000 hour TBO and reduced coolant heat rejection. Critical technologies for SCRE's that have the potential for competitive performance and cost in a representative light-aircraft environment were examined. Objectives were: the development and utilization of advanced analytical tools, i.e. higher speed and enhanced three dimensional combustion modeling; identification of critical technologies; development of improved instrumentation, and to isolate and quantitatively identify the contribution to performance and efficiency of critical components or subsystems.
Layer contributions to the nonlinear acoustic radiation from stratified media.
Vander Meulen, François; Haumesser, Lionel
2016-12-01
This study presents a thorough investigation of the second harmonic generation scenario in a three-fluid-layer system. Emphasis is placed on the evaluation of the nonlinear parameter B/A in each layer from remote measurements. A theoretical approach to the propagation of a finite amplitude acoustic wave in a multilayered medium is developed. In the frame of the KZK equation, the weak nonlinearity of the media, attenuation and diffraction effects are computed for the fundamental and second harmonic waves propagating back and forth in each of the layers of the system. The model uses a Gaussian expansion to describe the beam propagation in order to quantitatively evaluate the contribution of each part of the system (layers and interfaces) to its nonlinearity. The model is validated through measurements on a water/aluminum/water system. Transmission as well as reflection configurations are studied. Good agreement is found between the theoretical results and the experimental data. The analysis of the second harmonic field sources, measured by the transducers from outside the stratified medium, highlights the factors that favor the cumulative effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Local properties of countercurrent stratified steam-water flow
International Nuclear Information System (INIS)
Kim, H.J.
1985-10-01
A study of steam condensation in countercurrent stratified flow of steam and subcooled water has been carried out in a rectangular channel/flat plate geometry over a wide range of inclination angles (4°-87°) at several aspect ratios. Variables were inlet water and steam flow rates, and inlet water temperature. Local condensation rates and pressure gradients were measured, and local condensation heat transfer coefficients and interfacial shear stress were calculated. Contact probe traverses of the surface waves were made, which allowed a statistical analysis of the wave properties. The local condensation Nusselt number was correlated in terms of local water and steam Reynolds or Froude numbers, as well as the liquid Prandtl number. A turbulence-centered model developed by Theofanous et al., principally for gas absorption in several geometries, was modified. A correlation for the interfacial shear stress and the pressure gradient agreed with measured values. Mean water layer thicknesses were calculated. Interfacial wave parameters, such as the mean water layer thickness, liquid fraction probability distribution, wave amplitude and wave frequency, are analyzed.
Theory of sampling and its application in tissue based diagnosis
Directory of Open Access Journals (Sweden)
Kayser Gian
2009-02-01
Full Text Available Abstract Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on search for localization of specific compartments within the basic space, and search for presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as size of searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Krige's formula, which is widely used in search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to
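The random-versus-stratified distinction drawn above can be illustrated on a toy reference space; the grid, the object placement, and the probe counts below are invented for illustration. Both estimators target the same area fraction, but the stratified probes are guaranteed to cover every block of the space:

```python
import random

random.seed(3)

# Reference space: 100x100 grid; "object" pixels concentrated in one corner
space = [[1 if x < 30 and y < 30 else 0 for x in range(100)] for y in range(100)]
true_fraction = sum(map(sum, space)) / 10000.0   # 0.09

def random_sampling(n):
    """Estimate the area fraction from n uniformly random probe points."""
    hits = sum(space[random.randrange(100)][random.randrange(100)]
               for _ in range(n))
    return hits / n

def stratified_sampling(per_stratum):
    """One 10x10 stratum per block of the grid, probed at random positions."""
    hits = tries = 0
    for by in range(0, 100, 10):
        for bx in range(0, 100, 10):
            for _ in range(per_stratum):
                hits += space[by + random.randrange(10)][bx + random.randrange(10)]
                tries += 1
    return hits / tries

print(true_fraction, random_sampling(400), stratified_sampling(4))
```

With the same total of 400 probes, the stratified estimate typically has lower variance here because each stratum's contribution is fixed rather than left to chance.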
Forkert, Nils Daniel; Fiehler, Jens
2015-03-01
The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require previous training and, until now, it has been unclear how many subjects are required for this; that question is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from using all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and the effort required for data acquisition and preparation.
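The Dice coefficient used to score the predictions is simple to compute. The masks below are synthetic: a toy 1000-voxel image with made-up sensitivity and false-positive rates standing in for classifiers trained on one patient versus many; only the scoring itself follows the paper:

```python
import random

random.seed(5)

def dice(pred, truth):
    """Dice coefficient between two binary masks given as equal-length lists."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2.0 * inter / (sum(pred) + sum(truth))

# Hypothetical voxel-level ground truth (200 infarcted of 1000 voxels) and two
# predictions with assumed error rates for small vs. large training sets
truth = [1] * 200 + [0] * 800
small_training = [1 if (v and random.random() < 0.45)
                  or (not v and random.random() < 0.12) else 0 for v in truth]
large_training = [1 if (v and random.random() < 0.80)
                  or (not v and random.random() < 0.05) else 0 for v in truth]

print(round(dice(small_training, truth), 2),
      round(dice(large_training, truth), 2))
```

The second score exceeds the first, mirroring the paper's 0.45-to-0.53 improvement as training patients are added (the exact values here depend only on the assumed error rates).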
International Nuclear Information System (INIS)
Carnet, Bernard; Delhumeau, Michel
1971-06-01
The principles of binary analysis applied to the investigation of sequential circuits were used to design a two-way coincidence circuit whose inputs may be random or periodic signals of constant or variable duration. The output signal strictly reproduces the characteristics of the input signal triggering the coincidence. A coincidence between input signals does not produce any output signal if one of the signals has already triggered the output signal. The characteristics of the output signal in relation to those of the input signal are: minimum time jitter, excellent duration reproducibility and maximum efficiency. Some rules are given for achieving these results. The symmetry, transitivity and non-transitivity characteristics of the edges on the primitive graph are analyzed and lead to some rules for positioning the states on a secondary graph. It is from this graph that the equations of the circuits can be calculated. The development of the circuit and its dynamic testing are discussed. For this testing, the functioning of the circuit is simulated by feeding randomly generated signals into the inputs.
Loban, Amanda; Mandefield, Laura; Hind, Daniel; Bradburn, Mike
2017-12-01
The objective of this study was to compare the response rates, data completeness, and representativeness of survey data produced by online and postal surveys. A randomized trial was nested within a cohort study in Yorkshire, United Kingdom. Participants were randomized to receive either an electronic (online) survey questionnaire with paper reminder (N = 2,982) or a paper questionnaire with electronic reminder (N = 2,855). Response rates were similar for electronic and postal contact (50.9% vs. 49.7%, difference = 1.2%, 95% confidence interval: -1.3% to 3.8%). The characteristics of responders in the two groups were similar. Participants nevertheless demonstrated an overwhelming preference for postal questionnaires, with the majority responding by post in both groups. Online survey questionnaire systems need to be supplemented with a postal reminder to achieve acceptable uptake, but doing so provides a similar response rate and case mix when compared to postal questionnaires alone. For large surveys, online survey systems may be cost saving. Copyright © 2017 Elsevier Inc. All rights reserved.
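The quoted interval can be approximately reproduced with a standard normal-approximation confidence interval for a difference of two proportions. The responder counts below are reconstructed from the rounded percentages (the paper's exact counts are not given), so the bounds agree with the published -1.3% to 3.8% only to within rounding:

```python
import math

def prop_diff_ci(r1, n1, r2, n2, z=1.96):
    """Difference in response proportions with a normal-approximation 95% CI."""
    p1, p2 = r1 / n1, r2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# 50.9% of 2,982 electronic-first vs. 49.7% of 2,855 postal-first participants;
# responder counts reconstructed from the rounded percentages
d, lo, hi = prop_diff_ci(round(0.509 * 2982), 2982, round(0.497 * 2855), 2855)
print(f"difference {d:+.1%}, 95% CI ({lo:+.1%}, {hi:+.1%})")
```

Since the interval comfortably spans zero, the data are consistent with no difference in response rate between the two contact strategies.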
Directory of Open Access Journals (Sweden)
Helmut Prodinger
2007-01-01
Full Text Available In words generated by independent geometrically distributed random variables, we study the l-th descent, which is, roughly speaking, the l-th occurrence of a neighbouring pair ab with a > b. The value a is called the initial height, and b the end height. We study these two random variables (and some similar ones) by combinatorial and probabilistic tools. We find in all instances a generating function Ψ(v,u), where the coefficient of v^j u^i refers to the j-th descent (ascent), and i to the initial (end) height. From this, various conclusions can be drawn, in particular expected values. In the probabilistic part, a Markov chain model is used, which allows us to get explicit expressions for the heights of the second descent. In principle, one could go further, but the complexity of the results forbids it. This is extended to permutations of a large number of elements. Methods from q-analysis are used to simplify the expressions. This is the reason that we confine ourselves to the geometric distribution only. For general discrete distributions, no such tools are available.
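The descent statistics described above can be checked by direct simulation. The sketch below is only an illustration, not the paper's generating-function machinery: it draws words of i.i.d. geometric letters and records the initial height a and end height b of the first descent.

```python
import random

def first_descent_heights(p, n_words=20000, word_len=50, seed=1):
    """Simulate words of i.i.d. geometric(p) letters (support 1, 2, 3, ...)
    and record the initial height a and end height b of the first
    descent, i.e. the first adjacent pair with a > b."""
    rng = random.Random(seed)
    heights = []
    for _ in range(n_words):
        prev = None
        for _ in range(word_len):
            # geometric draw with P(X = k) = p * (1 - p)**(k - 1), k >= 1
            x = 1
            while rng.random() > p:
                x += 1
            if prev is not None and prev > x:
                heights.append((prev, x))
                break
            prev = x
    return heights

pairs = first_descent_heights(0.5)
mean_a = sum(a for a, _ in pairs) / len(pairs)
mean_b = sum(b for _, b in pairs) / len(pairs)
# By definition a > b in every descent, so the sample means are ordered.
```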
Second law characterization of stratified thermal storage tanks
Energy Technology Data Exchange (ETDEWEB)
Fraidenraich, N [Departamento de Energia Nuclear-UFPE (Brazil)
2000-07-01
It is well known that fluid stratification in thermal storage tanks improves the overall performance of solar thermal systems, compared with systems operating at uniform fluid temperature. From the point of view of the first law of thermodynamics, no difference exists between storage tanks with the same mass and average temperature, even if they have different stratified thermal structures. Nevertheless, the useful thermal energy that can be obtained from them might differ significantly. In this work, we derive an expression able to characterize the stratified configuration of the thermal fluid. Using results from the thermodynamics of irreversible processes, the procedure adopted consists in calculating the maximum work the tank's thermal layers are able to develop. We arrive at a dimensionless expression, the stratification parameter (SP), which depends on the mass fraction and absolute temperature of each thermal layer as well as the average temperature of the thermal fluid. Numerical examples for different types of tank stratification are given, and it is verified that the expression obtained is sensitive to small differences in the reservoir thermal configuration. For example, a thermal store with temperatures of 74 °C, 64 °C and 54 °C, with its mass equally distributed along the tank, yields a value of 0.000294 for the parameter SP, whereas a storage tank with the same average temperature but layer temperatures of 76 °C, 64 °C and 52 °C, also with uniform mass distribution, yields a different value for SP, demonstrating that the expression provides a quantitative evaluation of the stratification structure of thermal reservoirs.
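The paper's exact SP formula is not given in the abstract, so the sketch below computes one plausible exergy-based stratification measure: the available-work gain of the stratified tank over a fully mixed tank at the same average temperature, normalised by the mixed-tank value. The dead-state temperature T0 and unit heat capacity are assumptions, and the numbers produced are not the paper's SP values; the sketch only reproduces the qualitative behaviour, i.e. sensitivity to small changes in layer temperatures.

```python
import math

T0 = 298.15  # ambient (dead-state) temperature in K -- an assumption

def exergy_per_mass(T, c=1.0):
    """Specific availability of a layer at temperature T (K) relative
    to the dead state T0, for unit heat capacity c."""
    return c * ((T - T0) - T0 * math.log(T / T0))

def stratification_measure(temps_c, fractions):
    """Exergy gain of the stratified tank over a fully mixed tank at
    the same average temperature, normalised by the mixed-tank exergy.
    One plausible dimensionless measure; NOT the paper's SP formula."""
    temps = [t + 273.15 for t in temps_c]
    t_avg = sum(f * t for f, t in zip(fractions, temps))
    ex_strat = sum(f * exergy_per_mass(t) for f, t in zip(fractions, temps))
    return (ex_strat - exergy_per_mass(t_avg)) / exergy_per_mass(t_avg)

sp1 = stratification_measure([74, 64, 54], [1/3, 1/3, 1/3])
sp2 = stratification_measure([76, 64, 52], [1/3, 1/3, 1/3])
# The wider temperature spread (76/64/52) gives the larger value, matching
# the claim that the parameter is sensitive to small configuration changes.
```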
Internal and vorticity waves in decaying stratified flows
Matulka, A.; Cano, D.
2009-04-01
Most predictive models fail when forcing at the Rossby deformation radius is important and a large range of scales has to be taken into account. When mixing of reactants or pollutants has to be accounted for, the range of scales spans from hundreds of kilometers down to the Batchelor or Kolmogorov sub-millimeter scales. We present some theoretical arguments to describe the flow in terms of the three-dimensional vorticity equations, using a lengthscale related to the vorticity (or enstrophy) transport. The effects of intermittent eddies and non-homogeneity of diffusion are also key issues in the environment, because both stratification and rotation body forces are important and cause anisotropy and non-homogeneity. These problems need further theoretical, numerical and observational work, and one approach is to try to maximize the relevant geometrical information in order to understand, and therefore predict, these complex environmental dispersive flows. The importance of studying turbulence structure and its relevance to the diffusion of contaminants in environmental flows is clear when we see the effect of environmental disasters such as the Prestige oil spill or the spread of the Chernobyl radioactive cloud in the atmosphere. A series of experiments has been performed on a strongly stratified two-layer fluid, consisting of brine at the bottom and freshwater above, in a 1 square meter tank. The evolution of the vortices after the passage of a grid is video recorded, and particle tracking is applied to small pliolite particles floating at the interface. The combination of internal waves and vertical vorticity produces two separate time scales that may produce resonances. The vorticity is seen to oscillate in a complex way, with a frequency that decreases with time.
Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark
DEFF Research Database (Denmark)
Kleven, Henrik Jacobsen; Knudsen, Martin B.; Kreiner, Claus Thustrup
2010-01-01
This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were deliberately not audited. The following year, "threat-of-audit" letters were randomly assigned and sent to tax filers in both groups. Using comprehensive administrative tax data, we present four main findings. First, we find that the tax evasion rate is very small (0.3%) for income subject to third-party reporting […] impact on tax evasion, but that this effect is small in comparison to avoidance responses. Third, we find that prior audits substantially increase self-reported income, implying that individuals update their beliefs about detection probability based on experiencing an audit. Fourth, threat-of-audit […]
Martin, Petra; Biniecka, Monika; Ó'Meachair, Shane; Maguire, Aoife; Tosetto, Miriam; Nolan, Blathnaid; Hyland, John; Sheahan, Kieran; O'Donoghue, Diarmuid; Mulcahy, Hugh; Fennelly, David; O'Sullivan, Jacintha
2018-01-01
Despite treatment of patients with metastatic colorectal cancer (mCRC) with bevacizumab plus chemotherapy, response rates are modest and there are no biomarkers available that will predict response. The aim of this study was to assess if markers associated with three interconnected cancer-associated biological processes, specifically angiogenesis, inflammation and oxidative damage, could stratify the survival outcome of this cohort. Levels of angiogenesis, inflammation and oxidative damage markers were assessed in pre-bevacizumab resected tumour and serum samples of mCRC patients by dual immunofluorescence, immunohistochemistry and ELISA. This study identified that specific markers of angiogenesis, inflammation and oxidative damage stratify survival of patients on this anti-angiogenic treatment. Biomarkers of immature tumour vasculature (% IMM, p=0.026, n=80), high levels of oxidative damage in the tumour epithelium (intensity of 8-oxo-dG in nuclear and cytoplasmic compartments, p=0.042 and 0.038 respectively, n=75) and lower systemic pro-inflammatory cytokines (IL6 and IL8, p=0.053 and 0.049 respectively, n=61) significantly stratify with median overall survival (OS). In summary, screening for a panel of biomarkers for high levels of immature tumour vasculature, high levels of oxidative DNA damage and low levels of systemic pro-inflammatory cytokines may be beneficial in predicting enhanced survival outcome following bevacizumab treatment for mCRC. PMID:29535825
Carlisle, J B
2017-08-01
[…] 10⁻¹⁰. The difference between the distributions of these two subgroups was confirmed by comparison of their overall distributions, p = 5.3 × 10⁻¹⁵. Each journal exhibited the same abnormal distribution of baseline means. There was no difference in the distributions of baseline means for 1453 trials in non-anaesthetic journals and 3634 trials in anaesthetic journals, p = 0.30. The rate of retractions from JAMA and NEJM, 6/1453 or 1 in 242, was one-quarter the rate from the six anaesthetic journals, 66/3634 or 1 in 55, relative risk (99%CI) 0.23 (0.08-0.68), p = 0.00022. A probability threshold of 1 in 10,000 identified 8/72 (11%) retracted trials (7 by Fujii et al.) and 82/5015 (1.6%) unretracted trials. Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10¹⁵ (equivalent to one drop of water in 20,000 Olympic-sized swimming pools). A probability threshold of 1 in 100 for two or more trials by the same author identified three authors of retracted trials (Boldt, Fujii and Reuben) and 21 first or corresponding authors of 65 unretracted trials. Fraud, unintentional error, correlation, stratified allocation and poor methodology might have contributed to the excess of randomised, controlled trials with similar or dissimilar means, a pattern that was common to all the surveyed journals. It is likely that this work will lead to the identification, correction and retraction of hitherto unretracted randomised, controlled trials. © 2017 The Association of Anaesthetists of Great Britain and Ireland.
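The building block of this kind of screening is the probability of the observed difference in baseline means between two randomised arms, computed from summary statistics alone. The sketch below uses a simple normal approximation (the published method uses a more elaborate Monte Carlo comparison of distributions), and the numbers fed in are hypothetical.

```python
import math
from statistics import NormalDist

def baseline_p(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sided p-value for the difference in baseline means of two
    randomised arms, from summary statistics only (normal approximation;
    a simplification of the published Monte Carlo method)."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    z = (mean1 - mean2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Under genuine randomisation, baseline p-values across many trials
# should be roughly uniform on (0, 1): neither suspiciously close to 1
# (arms too similar) nor to 0 (arms too different).
p_plausible = baseline_p(65.2, 10.1, 50, 64.8, 9.7, 50)   # unremarkable
p_suspect = baseline_p(65.0, 10.0, 500, 72.0, 10.0, 500)  # extreme
```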
Directory of Open Access Journals (Sweden)
Shanyou Zhu
2014-01-01
Full Text Available Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
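The variance reduction from stratifying on a coarse auxiliary layer can be illustrated with a small simulation. Everything below is hypothetical (block counts, deforestation fractions, the 20/80 hotspot split); it only demonstrates why stratified sampling with proportional allocation outperforms simple random sampling when the strata explain much of the variance.

```python
import random
import statistics

random.seed(7)

# Hypothetical blocks: each has a true deforested fraction; "hotspot"
# blocks (flagged by the coarse MODIS-like layer) deforest far more.
blocks = ([("hot", random.uniform(0.2, 0.6)) for _ in range(200)]
          + [("cold", random.uniform(0.0, 0.05)) for _ in range(800)])
true_mean = statistics.mean(v for _, v in blocks)

def srs_estimate(n):
    """Simple random sample of n blocks."""
    sample = random.sample(blocks, n)
    return statistics.mean(v for _, v in sample)

def stratified_estimate(n):
    """Proportional allocation: sample each stratum separately and
    weight the stratum means by the known stratum sizes."""
    est = 0.0
    for name, weight in (("hot", 0.2), ("cold", 0.8)):
        stratum = [v for s, v in blocks if s == name]
        k = max(2, round(n * weight))
        est += weight * statistics.mean(random.sample(stratum, k))
    return est

srs = [srs_estimate(50) for _ in range(500)]
strat = [stratified_estimate(50) for _ in range(500)]
# Stratifying on the hotspot layer removes the between-stratum variance,
# so the stratified estimator is noticeably less variable.
```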
Using the Internet to Support Exercise and Diet: A Stratified Norwegian Survey.
Wangberg, Silje C; Sørensen, Tove; Andreassen, Hege K
2015-08-26
The Internet is used for a variety of health-related purposes. Use differs, and has differential effects on health, according to socioeconomic status. We investigated to what extent the Norwegian population uses the Internet to support exercise and diet, what kind of services they use, and whether there are social disparities in use. We expected to find differences according to educational attainment. In November 2013 we surveyed a stratified sample of 2196 persons drawn from a Web panel of about 50,000 Norwegians over 15 years of age. The questionnaire included questions about using the Internet, including social network sites (SNS), or mobile apps in relation to exercise or diet, as well as background information about education, body image, and health. The survey email was opened by 1187 respondents (54%). Of these, 89 did not click on the survey hyperlink (declined to participate), while another 70 did not complete the survey. The final sample size is thus 1028 (87% response rate). Compared to the Norwegian census, the sample had a slight under-representation of respondents under the age of 30 and with low education. The data were weighted accordingly before analyses. Sixty-nine percent of women and 53% of men had read about exercise or diet on the Internet (χ² = 25.6, P < .001). […] We should be aware of potential social disparities in health, and continue to monitor population use. For Internet- and mobile-based interventions to support health behaviors, this study provides information relevant to the tailoring of delivery media and components to the user.
Díaz-Astudillo, Macarena; Cáceres, Mario A.; Landaeta, Mauricio F.
2017-09-01
The patterns of abundance, composition, biomass and vertical migration of zooplankton on short time scales (…) were studied. An ADCP device mounted on the hull of a ship was used to obtain vertical profiles of current velocity data and of the intensity of the backscattered acoustic signal, which was used to study the migratory strategies and to relate echo intensity to zooplankton biomass. Repeated vertical profiles of temperature, salinity and density were obtained with a CTD instrument to describe the density patterns during both experiments. Zooplankton were sampled every 3 h using a Bongo net to determine abundance, composition and biomass. Migrations were diel at the stratified station, semi-diel at the mixed station, and controlled by light at both locations, with large and significant differences in zooplankton abundance and biomass between day and night samples. No migration pattern associated with the effect of tides was found. The depth of maximum backscatter strength differed by approximately 30 m between stations and was deeper at the mixed station. The relation between mean volume backscattering strength (dB) computed from echo intensity and log10 of total dry weight (mg m-3) of zooplankton biomass was moderate but significant at both locations. Biomass estimated from biological samples was higher at the mixed station and dominated by euphausiids. Copepods were the most abundant group at both stations. Acoustic methods were a useful technique for understanding the detailed patterns of the migratory strategies of zooplankton and for helping estimate zooplankton biomass and abundance in the inner waters of southern Chile.
Grants, Ilmars; Gerbeth, Gunter
2010-07-01
The stability of a thermally stratified liquid metal flow is considered numerically. The flow is driven by a rotating magnetic field in a cylinder heated from above and cooled from below. The stable thermal stratification turns out to destabilize the flow. This is explained by the fact that a stable stratification suppresses the secondary meridional flow, thus indirectly enhancing the primary rotation. The instability in the form of Taylor-Görtler rolls is consequently promoted. These rolls can only be excited by finite disturbances in the isothermal flow. A sufficiently strong thermal stratification transforms this nonlinear bypass instability into a linear one reducing, thus, the critical value of the magnetic driving force. A weaker temperature gradient delays the linear instability but makes the bypass transition more likely. We quantify the non-normal and nonlinear components of this transition by direct numerical simulation of the flow response to noise. It is observed that the flow sensitivity to finite disturbances increases considerably under the action of a stable thermal stratification. The capabilities of the random forcing approach to identify disconnected coherent states in a general case are discussed.
Abebe, Kaleab Z; Jones, Kelley A; Rofey, Dana; McCauley, Heather L; Clark, Duncan B; Dick, Rebecca; Gmelin, Theresa; Talis, Janine; Anderson, Jocelyn; Chugani, Carla; Algarroba, Gabriela; Antonio, Ashley; Bee, Courtney; Edwards, Clare; Lethihet, Nadia; Macak, Justin; Paley, Joshua; Torres, Irving; Van Dusen, Courtney; Miller, Elizabeth
2018-02-01
Sexual violence (SV) on college campuses is common, especially alcohol-related SV. This is a 2-arm cluster randomized controlled trial to test a brief intervention to reduce risk for alcohol-related sexual violence among students receiving care from college health centers (CHCs). Intervention CHC staff are trained to deliver universal SV education to all students seeking care, to facilitate patient and provider comfort in discussing SV and related abusive experiences (including the role of alcohol). Control sites provide participants with information about drinking responsibly. Across 28 participating campuses (12 randomized to intervention and 16 to control), 2292 students seeking care at CHCs complete surveys prior to their appointment (baseline), immediately after (exit), 4 months later (T2) and one year later (T3). The primary outcome is change in recognition of SV and sexual risk. Among those reporting SV exposure at baseline, changes in SV victimization, disclosure, and use of SV services are additional outcomes. Intervention effects will be assessed using generalized linear mixed models that account for clustering of repeated observations both within CHCs and within students. Slightly more than half of the participating colleges have undergraduate enrollment of ≥3000 students; two-thirds are public and almost half are urban. Among participants, there were relatively more Asian (10% vs. 1%) and Black/African American (13% vs. 7%) and fewer White (58% vs. 74%) participants in the intervention condition compared to control. This study will offer the first formal assessment of SV prevention in the CHC setting. Clinical Trials #: NCT02355470. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Ricardo de Amorim Corrêa
2014-12-01
Full Text Available OBJECTIVE: To compare 28-day mortality rates and clinical outcomes in ICU patients with ventilator-associated pneumonia according to the diagnostic strategy used. METHODS: This was a prospective randomized clinical trial. Of the 73 patients included in the study, 36 and 37 were randomized to undergo BAL or endotracheal aspiration (EA), respectively. Antibiotic therapy was based on guidelines and was adjusted according to the results of quantitative cultures. RESULTS: The 28-day mortality rate was similar in the BAL and EA groups (25.0% and 37.8%, respectively; p = 0.353). There were no differences between the groups regarding the duration of mechanical ventilation, antibiotic therapy, secondary complications, VAP recurrence, or length of ICU and hospital stay. Initial antibiotic therapy was deemed appropriate in 28 (77.8%) and 30 (83.3%) of the patients in the BAL and EA groups, respectively (p = 0.551). The 28-day mortality rate was not associated with the appropriateness of initial therapy in the BAL and EA groups (appropriate therapy: 35.7% vs. 43.3%, p = 0.553; inappropriate therapy: 62.5% vs. 50.0%, p = 1.000). Previous use of antibiotics did not affect the culture yield in the EA or BAL group (p = 0.130 and p = 0.484, respectively). CONCLUSIONS: In the context of this study, the management of VAP patients based on the results of quantitative endotracheal aspirate cultures led to clinical outcomes similar to those obtained with the results of quantitative BAL fluid cultures.
Pearce, Peter; Sewell, Ros; Cooper, Mick; Osman, Sarah; Fugard, Andrew J B; Pybis, Joanne
2017-06-01
The aim of this study was to pilot a test of the effectiveness of school-based humanistic counselling (SBHC) in an ethnically diverse group of young people (aged 11-18 years old), with follow-up assessments at 6 and 9 months. Pilot randomized controlled trial, using linear-mixed effect modelling and intention-to-treat analysis to compare changes in levels of psychological distress for participants in SBHC against usual care (UC). ISRCTN44253140. In total, 64 young people were randomized to either SBHC or UC. Participants were aged between 11 and 18 (M = 14.2, SD = 1.8), with 78.1% of a non-white ethnicity. The primary outcome was psychological distress at 6 weeks (mid-therapy), 12 weeks (end of therapy), 6-month follow-up and 9-month follow-up. Secondary measures included emotional symptoms, self-esteem and attainment of personal goals. Recruitment and retention rates for the study were acceptable. Participants in the SBHC condition, as compared with participants in the UC condition, showed greater reductions in psychological distress and emotional symptoms, and greater improvements in self-esteem, over time. However, at follow-up, only emotional symptoms showed significant differences across groups. The study adds to the pool of evidence suggesting that SBHC can be tested and that it brings about short-term reductions in psychological and emotional distress in young people, across ethnicities. However, there is no evidence of longer-term effects. School-based humanistic counselling can be an effective means of reducing the psychological distress experienced by young people with emotional symptoms in the short term. The short-term effectiveness of school-based humanistic counselling is not limited to young people of a White ethnicity. There is no evidence that school-based humanistic counselling has effects beyond the end of therapy. © 2016 The British Psychological Society.
Evaluation of sampling strategies to estimate crown biomass
Directory of Open Access Journals (Sweden)
Krishna P Poudel
2015-01-01
Full Text Available Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and evaluate the effect of sample size on those estimates. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but it further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions: Use of …
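As an illustration of why auxiliary size information helps, the sketch below applies ratio estimation to a hypothetical tree whose branch biomass scales roughly with diameter squared; the branch count, the biomass model and the noise level are all assumptions, not the study's data.

```python
import random

random.seed(3)

# Hypothetical tree: biomass of each branch roughly proportional to
# diameter squared, plus noise -- the setting in which ratio estimation
# with an auxiliary size variable works well.
diameters = [random.uniform(1, 8) for _ in range(40)]
biomass = [0.9 * d**2 * random.uniform(0.8, 1.2) for d in diameters]
total_true = sum(biomass)

def ratio_estimate(n):
    """Destructively sample n branches, then scale their biomass by the
    known total of the auxiliary variable (sum of d^2 over ALL branches,
    which is cheap to measure without destructive sampling)."""
    idx = random.sample(range(len(diameters)), n)
    aux_total = sum(d**2 for d in diameters)
    ratio = sum(biomass[i] for i in idx) / sum(diameters[i]**2 for i in idx)
    return ratio * aux_total

estimates = [ratio_estimate(9) for _ in range(400)]
mean_est = sum(estimates) / len(estimates)
# With a strong size-biomass relationship, the ratio estimator lands
# close to the true total from only 9 weighed branches per replicate.
```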
Mix, Joseph A; Crews, W David
2002-08-01
There appears to be an absence of large-scale clinical trials that have examined the efficacy of Ginkgo biloba extract on the neuropsychological functioning of cognitively intact older adults. The importance of such clinical research appears paramount in light of the plethora of products containing Ginkgo biloba that are currently being widely marketed to predominantly cognitively intact adults with claims of enhanced cognitive performance. The purpose of this research was to conduct the first known large-scale clinical trial of the efficacy of Ginkgo biloba extract (EGb 761) on the neuropsychological functioning of cognitively intact older adults. Two hundred and sixty-two community-dwelling volunteers (both male and female), 60 years of age and older, who reported no history of dementia or significant neurocognitive impairments and obtained Mini-Mental State Examination total scores of at least 26, were examined via a 6-week, randomized, double-blind, fixed-dose, placebo-controlled, parallel-group clinical trial. Participants were randomly assigned to receive either Ginkgo biloba extract EGb 761 (n = 131; 180 mg/day) or placebo (n = 131) for 6 weeks. Efficacy measures consisted of participants' raw change in performance scores from pretreatment baseline to those obtained just prior to termination of treatment on the following standardized neuropsychological measures: Selective Reminding Test (SRT), Wechsler Adult Intelligence Scale-III Block Design (WAIS-III BD) and Digit Symbol-Coding (WAIS-III DS) subtests, and the Wechsler Memory Scale-III Faces I (WMS-III FI) and Faces II (WMS-III FII) subtests. A subjective Follow-up Self-report Questionnaire was also administered to participants just prior to termination of the treatment phase. Analyses of covariance indicated that cognitively intact participants who received 180 mg of EGb 761 daily for 6 weeks exhibited significantly more improvement on SRT tasks involving delayed (30 min) free recall (p < .05) […] visual material.
Directory of Open Access Journals (Sweden)
Gorini Alessandra
2008-05-01
Full Text Available Abstract Background: Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes point out the necessity to find new efficient strategies to treat it. Together with cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to learn. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD. Methods/Design: The trial is based on a randomized controlled study including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support; and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables. Conclusion: We argue that the use of VR for relaxation …
DEFF Research Database (Denmark)
Carmo, Carolina; Dumont, Olivier; Nielsen, Mads Pagh
2015-01-01
The use of stratified hot water tanks in solar energy systems - including ORC systems - as well as in heat pump systems is paramount for a better performance of these systems. However, the availability of effective and reliable models to predict the annual performance of stratified hot water tanks …
Implementing content constraints in alpha-stratified adaptive testing using a shadow test approach
van der Linden, Willem J.; Chang, Hua-Hua
2001-01-01
The methods of alpha-stratified adaptive testing and constrained adaptive testing with shadow tests are combined in this study. The advantages are twofold. First, application of the shadow test allows the researcher to implement any type of constraint on item selection in alpha-stratified adaptive testing …
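A minimal sketch of the alpha-stratification idea (without the shadow-test constraint machinery) might look like the following: the item bank is split into strata by ascending discrimination, and within the active stratum the unused item whose difficulty best matches the current ability estimate is selected. The 2PL item bank and the stratum count are hypothetical.

```python
def alpha_stratify(items, n_strata):
    """Partition an item bank into strata by ascending discrimination
    (a-parameter). Early in the test, items are drawn from the low-a
    stratum; high-a items are saved for when the ability estimate is
    more precise. Shadow-test constraint handling is not sketched here."""
    ranked = sorted(items, key=lambda it: it["a"])
    size = len(ranked) // n_strata
    return [ranked[i * size:(i + 1) * size] for i in range(n_strata)]

def pick_item(stratum, theta, used):
    """Within the active stratum, pick the unused item whose difficulty
    b is closest to the current ability estimate theta."""
    candidates = [it for it in stratum if it["id"] not in used]
    return min(candidates, key=lambda it: abs(it["b"] - theta))

# Hypothetical 2PL item bank: discriminations a and difficulties b.
bank = [{"id": i, "a": 0.5 + 0.05 * i, "b": (i % 7) - 3.0}
        for i in range(60)]
strata = alpha_stratify(bank, 3)
first = pick_item(strata[0], theta=0.0, used=set())
```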
Stratified turbulent Bunsen flames : flame surface analysis and flame surface density modelling
Ramaekers, W.J.S.; Oijen, van J.A.; Goey, de L.P.H.
2012-01-01
In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) technique …
SCRAED - Simple and Complex Random Assignment in Experimental Designs
Alferes, Valentim R.
2009-01-01
SCRAED is a package of 37 self-contained SPSS syntax files that performs simple and complex random assignment in experimental designs. For between-subjects designs, SCRAED includes simple random assignment (no restrictions, forced equal sizes, forced unequal sizes, and unequal probabilities), block random assignment (simple and generalized blocks), and stratified random assignment (no restrictions, forced equal sizes, forced unequal sizes, and unequal probabilities). For within-subjects designs …
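The same stratified-assignment logic can be sketched outside SPSS. The function below forces (near-)equal group sizes within each stratum; the subject list and the even/odd stratum rule are made up for the example.

```python
import random

def stratified_assignment(subjects, stratum_of, groups, seed=42):
    """Randomly assign subjects to experimental groups with forced
    (near-)equal sizes within each stratum -- the idea behind SCRAED's
    stratified random assignment, sketched in Python rather than SPSS
    syntax."""
    rng = random.Random(seed)
    by_stratum = {}
    for s in subjects:
        by_stratum.setdefault(stratum_of(s), []).append(s)
    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)  # random order within the stratum
        for i, subj in enumerate(members):
            assignment[subj] = groups[i % len(groups)]  # alternate groups
    return assignment

subjects = [f"s{i:02d}" for i in range(24)]
# Hypothetical stratum: even/odd subject id, 12 subjects per stratum.
assign = stratified_assignment(subjects, lambda s: int(s[1:]) % 2,
                               ["treatment", "control"])
```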
Turbulence Statistics of a Buoyant Jet in a Stratified Environment
McCleney, Amy Brooke
Using non-intrusive optical diagnostics, turbulence statistics for a round, incompressible, buoyant, vertical jet discharging freely into a stably, linearly stratified environment are studied and compared to a reference case of a neutrally buoyant jet in a uniform environment. This is part of a validation campaign for computational fluid dynamics (CFD). Buoyancy forces are known to significantly affect the jet evolution in a stratified environment. Despite their ubiquity in numerous natural and man-made flows, available data on these jets are limited, which constrains our understanding of the underlying physical processes. In particular, there is a dearth of velocity field data, which makes it challenging to validate the numerical codes currently used for modeling these important flows. Herein, jet near- and far-field behaviors are obtained with a combination of planar laser-induced fluorescence (PLIF) and multi-scale time-resolved particle image velocimetry (TR-PIV) for Reynolds numbers up to 20,000. Deploying non-intrusive optical diagnostics in a variable-density environment is challenging in liquids: the refractive index is strongly affected by the density, which introduces optical aberrations and occlusions that prevent resolution of the flow. One solution consists of using index-matched fluids with different densities. Here a pair of water solutions - isopropanol and NaCl - is identified that satisfies these requirements. In fact, they provide a density difference of up to 5%, which is the largest reported for such fluid pairs. Additionally, by design, the kinematic viscosities of the solutions are identical, which greatly simplifies the analysis and subsequent simulations of the data. The spectral and temperature dependence of the solutions are fully characterized. In the near-field, shear-layer roll-up is analyzed and characterized as a function of the initial velocity profile. In the far-field, turbulence statistics are reported for two different scales …
Vikram, Deepti S.; Bratasz, Anna; Ahmad, Rizwan; Kuppusamy, Periannan
2015-01-01
Methods currently available for the measurement of oxygen concentrations (oximetry) in viable tissues differ widely from each other in their methodological basis and applicability. The goal of this study was to compare two novel methods, particulate-based electron paramagnetic resonance (EPR) and OxyLite oximetry, in an experimental tumor model. EPR oximetry uses implantable paramagnetic particulates, whereas OxyLite uses fluorescent probes affixed on a fiber-optic cable. C3H mice were transplanted with radiation-induced fibrosarcoma (RIF-1) tumors in their hind limbs. Lithium phthalocyanine (LiPc) microcrystals were used as EPR probes. The pO2 measurements were taken from random locations at a depth of ~3 mm within the tumor either immediately or 48 h after implantation of LiPc. Both methods revealed significant hypoxia in the tumor. However, there were striking differences between the EPR and OxyLite readings. The differences were attributed to the volume of tissue under examination and the effect of needle invasion at the site of measurement. This study recognizes the unique benefits of EPR oximetry in terms of robustness, repeatability and minimal invasiveness. PMID:17705635
Directory of Open Access Journals (Sweden)
Dagmar Sigmundová
2014-07-01
This study investigates whether more physically active parents bring up more physically active children and whether parents' level of physical activity helps children achieve step count recommendations on weekdays and weekends. The participants (388 parents aged 35–45 years and their 485 children aged 9–12 years) were randomly recruited from 21 Czech government-funded primary schools. The participants recorded pedometer step counts for seven days (≥10 h a day) during April–May and September–October of 2013. Logistic regression (Enter method) was used to examine the achievement of the international recommendations of 11,000 steps/day for girls and 13,000 steps/day for boys. The children of fathers and mothers who met the weekend recommendation of 10,000 steps were 5.48 times (95% confidence interval: 1.65–18.19; p < 0.01) and 3.60 times (95% confidence interval: 1.21–10.74; p < 0.05), respectively, more likely to achieve the international weekend recommendation than the children of less active parents. The children of mothers who reached the weekday pedometer-based step count recommendation were 4.94 times (95% confidence interval: 1.45–16.82; p < 0.05) more likely to fulfil the step count recommendation on weekdays than the children of less active mothers.
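The "times more likely" figures above are odds ratios from logistic regression. The same quantity and its Wald confidence interval can be sketched directly from a 2×2 table; a minimal illustration in Python, using hypothetical counts rather than this study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = active-parent children meeting the recommendation
    b = active-parent children not meeting it
    c = less-active-parent children meeting it
    d = less-active-parent children not meeting it
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only for illustration:
or_, lo, hi = odds_ratio_ci(30, 20, 25, 60)   # OR = 3.6
```

In practice the published estimates are adjusted via the regression model, so a raw 2×2 calculation like this will generally not reproduce them exactly.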
Thermal stratification built up in hot water tank with different inlet stratifiers
DEFF Research Database (Denmark)
Dragsted, Janne; Furbo, Simon; Dannemand, Mark
2017-01-01
Thermal stratification in a water storage tank can strongly increase the thermal performance of solar heating systems. Thermal stratification can be built up in a storage tank during charge if the heated water enters through an inlet stratifier. Experiments with a test tank have been carried out in order to elucidate how well thermal stratification is established in the tank with differently designed inlet stratifiers under different controlled laboratory conditions. The investigated inlet stratifiers are from Solvis GmbH & Co KG and EyeCular Technologies ApS. The inlet stratifier from Solvis GmbH & Co KG had a better performance at 4 l/min. In the intermediate charge test the stratifier from EyeCular Technologies ApS had a better performance in terms of maintaining the thermal stratification in the storage tank while charging at a relatively low temperature.
Exploring the salivary microbiome of children stratified by the oral hygiene index
Mashima, Izumi; Theodorea, Citra F.; Thaweboon, Boonyanit; Thaweboon, Sroisiri; Scannapieco, Frank A.
2017-01-01
Poor oral hygiene often leads to chronic diseases such as periodontitis and dental caries, resulting in substantial economic costs and diminished quality of life in not only adults but also children. In this study, the salivary microbiome was characterized in a group of children stratified by the Simplified Oral Hygiene Index (OHI-S). Illumina MiSeq high-throughput sequencing based on the 16S rRNA gene was utilized to analyze 90 salivary samples (24 Good, 31 Moderate and 35 Poor oral hygiene) from a cohort of Thai children. A total of 38,521 OTUs (Operational Taxonomic Units) at a 97% similarity threshold were characterized in all of the salivary samples. Twenty taxonomic groups (seventeen genera, two families and one class: Streptococcus, Veillonella, Gemellaceae, Prevotella, Rothia, Porphyromonas, Granulicatella, Actinomyces, TM-7-3, Leptotrichia, Haemophilus, Selenomonas, Neisseria, Megasphaera, Capnocytophaga, Oribacterium, Abiotrophia, Lachnospiraceae, Peptostreptococcus, and Atopobium) were found in all subjects and constituted 94.5–96.5% of the microbiome. Of these taxa, the proportion of Streptococcus decreased while that of Veillonella increased with poorer oral hygiene status. This is the first study demonstrating an important association between an increase of Veillonella and poor oral hygiene status in children. However, further studies are required to identify the predominant Veillonella species in the salivary microbiome of the Poor oral hygiene group. PMID:28934367
Thomas, Luis P.; Marino, Beatriz M.; Szupiany, Ricardo N.; Gallo, Marcos N.
2017-09-01
The ability to predict the sediment and nutrient circulation within estuarine waters is of significant economic and ecological importance. In these complex systems, flocculation is a dynamically active process that is directly affected by the prevalent environmental conditions. Consequently, the floc properties continuously change, which greatly complicates the characterisation of the suspended particle matter (SPM). In the present study, three different techniques are combined in a stratified estuary under calm weather conditions and with a low river discharge to search for a solution to this problem. The challenge is to obtain the concentration, size and flux of suspended elements through selected cross-sections using the method based on the simultaneous backscatter records of 1200 and 600 kHz ADCPs, isokinetic sampling data and LISST-25X measurements. The two-ADCP method is highly effective for determining the SPM size distributions in a non-intrusive way. The isokinetic sampling and the LISST-25X diffractometer offer point measurements at specific depths, which are especially useful for calibrating the ADCP backscatter intensity as a function of the SPM concentration and size, and for providing complementary information at the sites where acoustic records are not available. Limitations and potentials of the techniques applied are discussed.
Lapham, Wayne W.; Olimpio, Julio C.
1989-01-01
Water shortages are a chronic problem in parts of the Taunton River basin and are caused by a combination of factors. Water use in this part of the Boston metropolitan area is likely to increase during the next decade. The Massachusetts Division of Water Resources projects that about 50% of the cities and towns within and on the perimeter of the basin may have water supply deficits by 1990 if water management projects are not pursued throughout the 1980s. Estimates of the long-term yield of the 26 regional aquifers indicate that the yields of the two most productive aquifers equal or exceed 11.9 and 11.3 cu ft/sec, 90% of the time, respectively, if minimum stream discharge is maintained at 99.5% flow duration. Eighteen of the 26 aquifers were pumped for public water supply during 1983. Further analysis of the yield characteristics of these 18 aquifers indicates that the 1983 pumping rate of each of these 18 aquifers can be sustained at least 70% of the time. Selected physical properties and concentrations of major chemical constituents in groundwater from the stratified-drift aquifers at 80 sampling sites were used to characterize general water quality in aquifers throughout the basin. The pH of the groundwater ranged from 5.4 to 7.0. Natural elevated concentrations of Fe and Mn in water in the stratified-drift aquifers are present locally in the basin. Natural concentrations of these two metals commonly exceed the limits of 0.3 mg/L for Fe and 0.05 mg/L for Mn recommended for drinking water. Fifty-one analyses of selected trace metals in groundwater samples from stratified-drift aquifers throughout the basin were used to characterize trace metal concentrations in the groundwater. Of the 10 constituents sampled that have US EPA limits recommended for drinking water, only the Pb concentration in water at one site (60 micrograms/L) exceeded the recommended limit of 50 micrograms/L. Analyses of selected organic compounds in water in the stratified-drift aquifers at 74
Warne, Charles D; Zaloumis, Sophie G; Bertalli, Nadine A; Delatycki, Martin B; Nicoll, Amanda J; McLaren, Christine E; Hopper, John L; Giles, Graham G; Anderson, Greg J; Olynyk, John K; Powell, Lawrie W; Allen, Katrina J; Gurrin, Lyle C
2017-04-01
Women who are homozygous for the p.C282Y mutation in the HFE gene are at much lower risk of iron overload-related disease than p.C282Y homozygous men, presumably because of the iron-depleting effects of menstruation and pregnancy. We used data from a population cohort study to model the impact of menstruation cessation at menopause on serum ferritin (SF) levels in female p.C282Y homozygotes, with p.C282Y/p.H63D simple or compound heterozygotes and those with neither p.C282Y nor p.H63D mutations (HFE wild types) as comparison groups. A sample of the Melbourne Collaborative Cohort Study was selected for the "HealthIron" study (n = 1438), including all HFE p.C282Y homozygotes plus a random sample stratified by HFE genotype (p.C282Y and p.H63D). The relationship between the natural logarithm of SF and time since menopause was examined using linear mixed models incorporating spline smoothing. For p.C282Y homozygotes, SF increased by a factor of 3.6 (95% CI 1.8–7.0) after menopause, whereas SF in the other HFE genotype groups increased more gradually and did not show a distinction between premenopausal and postmenopausal levels. Only p.C282Y homozygotes had predicted SF exceeding 200 μg/L postmenopause, but the projected SF did not reach levels associated with increased risk of iron overload-related disease. These data provide the first documented evidence that physiological blood loss is a major factor in determining the marked gender difference in expression of p.C282Y homozygosity. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Winer, Rachel L; Tiro, Jasmin A; Miglioretti, Diana L; Thayer, Chris; Beatty, Tara; Lin, John; Gao, Hongyuan; Kimbel, Kilian; Buist, Diana S M
2018-01-01
Women who delay or do not attend Papanicolaou (Pap) screening are at increased risk for cervical cancer. Trials in countries with organized screening programs have demonstrated that mailing high-risk (hr) human papillomavirus (HPV) self-sampling kits to under-screened women increases participation, but U.S. data are lacking. HOME is a pragmatic randomized controlled trial set within a U.S. integrated healthcare delivery system to compare two programmatic approaches for increasing cervical cancer screening uptake and effectiveness in under-screened women (≥3.4 years since last Pap) aged 30–64 years: 1) usual care (annual patient reminders and ad hoc outreach by clinics) and 2) usual care plus mailed hrHPV self-screening kits. Over 2.5 years, eligible women were identified through electronic medical record (EMR) data and randomized 1:1 to the intervention or control arm. Women in the intervention arm were mailed kits with pre-paid envelopes to return samples to the central clinical laboratory for hrHPV testing. Results were documented in the EMR to notify women's primary care providers of appropriate follow-up. Primary outcomes are detection and treatment of cervical neoplasia. Secondary outcomes are cervical cancer screening uptake, abnormal screening results, and women's experiences and attitudes towards hrHPV self-sampling and follow-up of hrHPV-positive results (measured through surveys and interviews). The trial was designed to evaluate whether a programmatic strategy incorporating hrHPV self-sampling is effective in promoting adherence to the complete screening process (including follow-up of abnormal screening results and treatment). The objective of this report is to describe the rationale and design of this pragmatic trial. Copyright © 2017 Elsevier Inc. All rights reserved.
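A 1:1 allocation like the one described above is often implemented with permuted blocks so that the two arms stay balanced throughout accrual. A minimal sketch of that general technique (not the HOME trial's actual allocation procedure; participant ids and block size are illustrative):

```python
import random

def block_randomize(ids, block_size=4, seed=7):
    """Assign participant ids 1:1 to two arms using permuted blocks.

    Each block of `block_size` consecutive enrollees contains an equal
    number of intervention and control labels, shuffled within the block.
    """
    rng = random.Random(seed)          # fixed seed for a reproducible list
    half = block_size // 2
    arms = {}
    for start in range(0, len(ids), block_size):
        block = ids[start:start + block_size]
        labels = (["intervention"] * half + ["control"] * (block_size - half))
        labels = labels[:len(block)]
        rng.shuffle(labels)
        for pid, arm in zip(block, labels):
            arms[pid] = arm
    return arms

# Hypothetical enrollee ids:
assignment = block_randomize([f"W{i:03d}" for i in range(8)])
```

With eight enrollees and blocks of four, each arm receives exactly four participants regardless of the shuffle.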
Investigating the Randomness of Numbers
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
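One elementary way to evaluate a digit stream's randomness is a chi-square test of the observed digit frequencies against a uniform expectation. A small sketch of that standard test (the critical value quoted in the comment is the usual χ² table entry for 9 degrees of freedom):

```python
import random

def chi_square_uniform(digits, k=10):
    """Chi-square statistic comparing observed counts of digits 0..k-1
    against the uniform expectation n/k."""
    n = len(digits)
    expected = n / k
    counts = [0] * k
    for d in digits:
        counts[d] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
stat = chi_square_uniform([random.randrange(10) for _ in range(10000)])
# With 9 degrees of freedom, a statistic above ~21.7 would be significant
# at the 1% level, suggesting the digits are not uniformly distributed.
```

A degenerate stream makes the statistic explode: 100 copies of the same digit give (100 − 10)²/10 + 9·(0 − 10)²/10 = 900.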
DEFF Research Database (Denmark)
Rothmann, Mette Juel; Huniche, Lotte; Ammentorp, Jette
2014-01-01
This study aimed to investigate women's perspectives on and experiences with screening for osteoporosis. Focus groups and individual interviews were conducted. Three main themes emerged: knowledge about osteoporosis, psychological aspects of screening, and moral duty. The women viewed the program in the context of their everyday life and life trajectories. Age, lifestyle, and knowledge about osteoporosis were important to how women ascribed meaning to the program. Generally, screening was accepted due to life experiences, self-perceived risk, and the preventive nature of screening. PURPOSE: The risk-stratified osteoporosis strategy evaluation (ROSE) study is a randomized prospective population-based trial investigating the efficacy of a screening program to prevent fractures in women aged 65...
Directory of Open Access Journals (Sweden)
Jung-Nien Lai
BACKGROUND: Hormonal therapy (HT), either estrogen alone (E-alone) or estrogen plus progesterone (E+P), appears to increase the risk for breast cancer in Western countries. However, limited information is available on the association between HT and breast cancer in Asian women, characterized mainly by dietary phytoestrogen intake and a low prevalence of contraceptive pill prescription. METHODOLOGY: A total of 65,723 women (20–79 years of age) without cancer or the use of Chinese herbal products were recruited from a nation-wide one-million representative sample of the National Health Insurance of Taiwan and followed from 1997 to 2008. Seven hundred and eighty incident cases of invasive breast cancer were diagnosed. Using a reference group that comprised 40,052 women who had never received a hormone prescription, Cox proportional hazard models were constructed to determine the hazard ratios for receiving different types of HT and the occurrence of breast cancer. RESULTS: 5,156 (20%) women ever used E+P, 2,798 (10.8%) ever used E-alone, and 17,717 (69%) ever used other preparation types. The Cox model revealed adjusted hazard ratios (HRs) of 2.05 (95% CI 1.37–3.07) for current users of E-alone and 8.65 (95% CI 5.45–13.70) for current users of E+P. Using women who had ceased to take hormonal medication for 6 years or more as the reference group, the adjusted HRs were significantly elevated for both current users and women who had discontinued hormonal medication for less than 6 years. CONCLUSIONS: Current users of either E-alone or E+P have an increased risk for invasive breast cancer in Taiwan, and precautions should be taken when such agents are prescribed.
Energy Technology Data Exchange (ETDEWEB)
Clarisse, Olivier [Trent University, Department of Chemistry, 1600 West Bank Drive, Peterborough, Ontario K9J 7B8 (Canada)], E-mail: olivier.clarisse@umoncton.ca; Foucher, Delphine; Hintelmann, Holger [Trent University, Department of Chemistry, 1600 West Bank Drive, Peterborough, Ontario K9J 7B8 (Canada)
2009-03-15
The diffusive gradient in thin film (DGT) technique was successfully used to monitor methylmercury (MeHg) speciation in the dissolved phase of a stratified boreal lake, Lake 658 of the Experimental Lakes Area (ELA) in Ontario, Canada. Water samples were conventionally analysed for MeHg, sulfides, and dissolved organic matter (DOM). MeHg accumulated by DGT devices was compared to MeHg concentration measured conventionally in water samples to establish MeHg speciation. In the epilimnion, MeHg was almost entirely bound to DOM. In the top of the hypolimnion an additional labile fraction was identified, and at the bottom of the lake a significant fraction of MeHg was potentially associated to colloidal material. As part of the METAALICUS project, isotope enriched inorganic mercury was applied to Lake 658 and its watershed for several years to establish the relationship between atmospheric Hg deposition and Hg in fish. Little or no difference in MeHg speciation in the dissolved phase was detected between ambient and spike MeHg. - Methylmercury speciation was determined in the dissolved phase of a stratified lake using the diffusive gradient in thin film technique.
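DGT devices infer a time-averaged labile concentration from the mass accumulated in the binding gel via the standard DGT equation C = MΔg/(DAt), where M is the accumulated mass, Δg the diffusive layer thickness, D the diffusion coefficient, A the exposure window area and t the deployment time. A minimal sketch of that calculation; the numerical values below are illustrative only, not taken from this study:

```python
def dgt_concentration(mass_ng, delta_g_cm, D_cm2_s, area_cm2, time_s):
    """Labile concentration (ng/mL) from the standard DGT equation:
    C = M * delta_g / (D * A * t)
    """
    return mass_ng * delta_g_cm / (D_cm2_s * area_cm2 * time_s)

# Illustrative deployment: 0.5 ng MeHg accumulated over 24 h through a
# 0.078 cm diffusive layer, D = 4.3e-6 cm^2/s, 3.14 cm^2 window.
c = dgt_concentration(0.5, 0.078, 4.3e-6, 3.14, 24 * 3600)
```

Comparing this DGT-derived concentration with a conventional grab-sample measurement is what allows the labile fraction to be distinguished from DOM-bound or colloidal MeHg.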
Wynne-Jones, K; Jackson, M; Grotte, G; Bridgewater, B; North, W
2000-01-01
OBJECTIVE—To study the use of the Parsonnet score to predict mortality following adult cardiac surgery. DESIGN—Prospective study. SETTING—All centres performing adult cardiac surgery in the north west of England. SUBJECTS—8210 patients undergoing surgery between April 1997 and March 1999. MAIN OUTCOME MEASURES—Risk factors and in-hospital mortality were recorded according to agreed definitions. Ten per cent of cases from each centre were selected at random for validation. A Parsonnet score was derived for each patient and its predictive ability was studied. RESULTS—Data collection was complete. The operative mortality was 3.5% (95% confidence interval 3.1% to 3.9%), ranging from 2.7% to 3.8% across the centres. On validation, the incidence of discrepancies ranged from 0% to 13% for the different risk factors. The predictive ability of the Parsonnet score, measured by area under the receiver operating characteristic curve, was 0.74. The mean Parsonnet score for the region was 7.0, giving an observed to expected mortality ratio of 0.51 (range 0.4 to 0.64 across the centres). A new predictive model was derived from the data by multivariate analysis; it includes nine objective risk factors, all significantly associated with mortality, and highlights some of the deficits of the Parsonnet score. CONCLUSIONS—Risk stratified mortality data were collected on 100% of patients undergoing adult cardiac surgery in two years within a defined geographical region and were used to set an audit standard. Problems with the Parsonnet score of subjectivity, inclusion of many items not associated with mortality, and the overprediction of mortality have been highlighted. Keywords: risk stratification; cardiac surgery; Parsonnet score; audit PMID:10862595
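The area under the receiver operating characteristic curve reported above equals the probability that a randomly chosen death carries a higher risk score than a randomly chosen survivor, with ties counted as one half (the Mann-Whitney interpretation). A small sketch with made-up scores, not the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (event, non-event) pairs in which the event has the higher risk
    score, counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical Parsonnet-style scores for 4 deaths and 5 survivors:
auc = roc_auc([18, 12, 9, 20], [3, 7, 5, 12, 2])
```

An AUC of 0.5 corresponds to a score with no discrimination and 1.0 to perfect separation, which is why 0.74 is read as only moderate predictive ability.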
Eckmann, Madeleine; Dunham, Jason B.; Connor, Edward J.; Welch, Carmen A.
2018-01-01
Many species living in deeper lentic ecosystems exhibit daily movements that cycle through the water column, generally referred to as diel vertical migration (DVM). In this study, we applied bioenergetics modelling to evaluate growth as a hypothesis to explain DVM by bull trout (Salvelinus confluentus) in a thermally stratified reservoir (Ross Lake, WA, USA) during the peak of thermal stratification in July and August. Bioenergetics model parameters were derived from observed vertical distributions of temperature, prey and bull trout. Field sampling confirmed that bull trout prey almost exclusively on recently introduced redside shiner (Richardsonius balteatus). Model predictions revealed that deeper (>25 m) DVMs commonly exhibited by bull trout during peak thermal stratification cannot be explained by maximising growth. Survival, another common explanation for DVM, may have influenced bull trout depth use, but observations suggest there may be additional drivers of DVM. We propose these deeper summertime excursions may be partly explained by an alternative hypothesis: the importance of colder water for gametogenesis. In Ross Lake, reliance of bull trout on warm water prey (redside shiner) for consumption and growth poses a potential trade-off with the need for colder water for gametogenesis.
International Nuclear Information System (INIS)
Sultana, Farhana; Gertig, Dorota M; English, Dallas R; Simpson, Julie A; Brotherton, Julia ML; Drennan, Kelly; Mullins, Robyn; Heley, Stella; Wrede, C David; Saville, Marion
2014-01-01
Organized screening based on Pap tests has substantially reduced deaths from cervical cancer in many countries, including Australia. However, the impact of the program depends upon the degree to which women participate. A new method of screening, testing for human papillomavirus (HPV) DNA to detect the virus that causes cervical cancer, has recently become available. Because women can collect their own samples for this test at home, it has the potential to overcome some of the barriers to Pap tests. The iPap trial will evaluate whether mailing an HPV self-sampling kit increases participation by never- and under-screened women within a cervical screening program. The iPap trial is a parallel randomized controlled, open label, trial. Participants will be Victorian women age 30–69 years, for whom there is either no record on the Victorian Cervical Cytology Registry (VCCR) of a Pap test (never-screened) or the last recorded Pap test was between five to fifteen years ago (under-screened). Enrolment information from the Victorian Electoral Commission will be linked to the VCCR to determine the never-screened women. Variables that will be used for record linkage include full name, address and date of birth. Never- and under-screened women will be randomly allocated to either receive an invitation letter with an HPV self-sampling kit or a reminder letter to attend for a Pap test, which is standard practice for women overdue for a test in Victoria. All resources have been focus group tested. The primary outcome will be the proportion of women who participate, by returning an HPV self-sampling kit for women in the self-sampling arm, and notification of a Pap test result to the Registry for women in the Pap test arm at 3 and 6 months after mailout. The most important secondary outcome is the proportion of test-positive women who undergo further investigations at 6 and 12 months after mailout of results. 
The iPap trial will provide strong evidence about whether HPV self-sampling
Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization
Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.
2017-12-01
Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under a monoplane x-ray, which is a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, sampled first from projected 3D gradients and second from the 2D image gradients, so as to recover 3D rigid-body rotations and out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, 3D to monoplane 2D registration experiments were set up with initial mean target registration errors ranging from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows a high potential for application in clinical image-guidance systems.
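The matching objective named above, a correlation coefficient between gradient-magnitude maps, can be sketched in pure Python using central differences. This is an illustrative re-implementation of the general idea, not the authors' code:

```python
import math

def grad_mag(img):
    """Central-difference gradient magnitude of a 2D image given as a
    list of rows; edge pixels fall back to one-sided differences."""
    h, w = len(img), len(img[0])
    mags = []
    for y in range(h):
        for x in range(w):
            gx = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
            gy = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
            mags.append(math.hypot(gx, gy))
    return mags

def gradient_magnitude_cc(img_a, img_b):
    """Pearson correlation between the two gradient-magnitude maps.
    Mean-centring makes the score invariant to global intensity offsets;
    a flat (gradient-free) image yields 0.0 by convention."""
    a, b = grad_mag(img_a), grad_mag(img_b)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [v - ma for v in a]
    db = [v - mb for v in b]
    denom = math.sqrt(sum(v * v for v in da) * sum(v * v for v in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0
```

Maximising such a score over candidate rotation/depth template pairs is what lets the in-plane translation be deferred to a separate phase-correlation step.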