Optimum allocation in multivariate stratified random sampling: Stochastic matrix optimisation
Diaz-Garcia, Jose A.; Ramos-Quiroga, Rogelio
2011-01-01
The allocation problem in multivariate stratified random sampling is considered as a problem of stochastic matrix integer mathematical programming. To this end, the asymptotic normality of the sample covariance matrices for each stratum is established. Several alternative approaches are suggested for its solution, and an example is solved by applying the proposed techniques.
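As a point of reference for the allocation problem above, the classical univariate Neyman allocation, which the multivariate stochastic formulation generalizes, can be sketched. This is a minimal illustration; the stratum sizes and standard deviations below are invented, not taken from the paper.

```python
def neyman_allocation(stratum_sizes, stratum_sds, n_total):
    """Univariate Neyman allocation: n_h proportional to N_h * S_h.

    Returns real-valued allocations; rounding them to integers under
    cost constraints is exactly where integer programming formulations
    such as the one in the paper enter."""
    weights = [N * S for N, S in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [n_total * w / total for w in weights]

# Three illustrative strata of equal size; the third is twice as
# variable, so it receives twice the sample.
alloc = neyman_allocation([100, 100, 100], [1.0, 1.0, 2.0], 40)
```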
Analysis of a global random stratified sample of nurse legislation.
Benton, D C; Fernández-Fernández, M P; González-Jurado, M A; Beneit-Montesinos, J V
2015-06-01
To identify, compare and contrast the major component parts of a heterogeneous stratified sample of nursing legislation. Nursing legislation varies from one jurisdiction to another, yet until now no research has examined whether such variations are random or related to a set of key attributes. This mixed-methods study used a random stratified sample of legislation to map, through documentary analysis, the content of 14 nursing acts, and then explored, using quantitative techniques, whether the material relates to a number of key attributes. These attributes include: the legal tradition of the jurisdiction; the model of regulation; the administrative approach; the area of the world; and the economic status of the jurisdiction. Twelve component parts of nursing legislation were identified. These were remarkably similar irrespective of the attributes of interest. However, not all component parts were specified at the same level of detail, and the manner in which the elements were addressed did vary. A number of potential relationships between the structure of the legislation and the key attributes of interest were identified. This study generated a comprehensive and integrated map of a global sample of nursing legislation. It provides a set of descriptors to be used in further quantitative work and an important policy tool to facilitate dialogue between regulatory bodies. At the individual nurse level, it offers insights that can help nurses pursue recognition of credentials across jurisdictions. © 2015 International Council of Nurses.
Improved estimator of finite population mean using auxiliary attribute in stratified random sampling
Verma, Hemant K.; Sharma, Prayas; Singh, Rajesh
2014-01-01
The present study discusses the problem of estimating the finite population mean using an auxiliary attribute in stratified random sampling. Taking advantage of the point bi-serial correlation between the study variable and the auxiliary attribute, we improve the estimation of the population mean in stratified random sampling. Expressions for the bias and mean square error are derived under stratified random sampling. In addition, an empirical study has been carried out to examin...
Directory of Open Access Journals (Sweden)
Nadia Mushtaq
2017-03-01
In this article, a combined general family of estimators is proposed for estimating the finite population mean of a sensitive variable in stratified random sampling with a non-sensitive auxiliary variable, based on a randomized response technique. Under stratified random sampling without replacement, expressions for the bias and mean square error (MSE) up to first-order approximation are derived. Theoretical and empirical results from a simulation study show that the proposed class of estimators is more efficient than the existing estimators, i.e., the usual stratified random sample mean estimator and the Sousa et al. (2014) ratio and regression estimators of the sensitive variable in stratified sampling.
Singh, Rajesh; Sharma, Prayas; Smarandache, Florentin
2014-01-01
Singh et al. (2009) introduced a family of exponential ratio and product type estimators in stratified random sampling. Under stratified random sampling without replacement, the expressions for the bias and mean square error (MSE) of the Singh et al. (2009) estimators and some other estimators, up to first- and second-order approximations, are derived. The theoretical findings are supported by a numerical example.
A New Estimator for the Population Mean Using Two Auxiliary Variables in Stratified Random Sampling
Singh, Rajesh; Malik, Sachin
2014-01-01
In this paper, we suggest an estimator using two auxiliary variables in stratified random sampling. The proposed estimator improves upon the mean per unit estimator as well as some other considered estimators. Expressions for the bias and MSE of the estimator are derived up to the first degree of approximation. These theoretical findings are supported by a numerical example with original data. Key words: study variable, auxiliary variable, stratified random sampling, bias and mean squa...
Stratified random sampling plan for an irrigation customer telephone survey
Energy Technology Data Exchange (ETDEWEB)
Johnston, J.W.; Davis, L.J.
1986-05-01
This report describes the procedures used to design and select a sample for a telephone survey of individuals who use electricity in irrigating agricultural cropland in the Pacific Northwest. The survey is intended to gather information on the irrigated agricultural sector that will be useful for conservation assessment, load forecasting, rate design, and other regional power planning activities.
Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian
2016-01-01
Randomized response is a research method for obtaining accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling schemes and the corresponding formulas are seldom employed in sensitive-question surveys. In this work, we developed a series of formulas for parameter estimation under cluster sampling and stratified cluster sampling for two kinds of randomized response models, using classical sampling theory and the total probability formula. The performance of the sampling methods and formulas is illustrated with a survey of premarital sex and cheating on exams at Soochow University. The reliability of the survey methods and formulas for sensitive-question surveys was found to be high.
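The randomized response idea underlying this work can be illustrated with the classic Warner (1965) estimator; this is a minimal sketch, and the counts and design probability below are invented for illustration, not drawn from the Soochow University survey.

```python
def warner_estimate(yes_count, n, p):
    """Warner (1965) randomized response estimate of a sensitive proportion.

    Each respondent answers the sensitive statement with probability p and
    its complement with probability 1 - p (p != 0.5), so the observed
    'yes' rate lam = p*pi + (1 - p)*(1 - pi) can be inverted for pi."""
    lam = yes_count / n
    return (lam - (1 - p)) / (2 * p - 1)

# Illustrative numbers: with true pi = 0.2 and p = 0.7 the expected
# 'yes' rate is 0.7 * 0.2 + 0.3 * 0.8 = 0.38.
pi_hat = warner_estimate(380, 1000, 0.7)
```

The stratified and cluster versions developed in the paper combine such per-stratum estimates with the usual design weights.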
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
In high-dimensional genome-wide association (GWA) case-control data for complex diseases, a large proportion of single-nucleotide polymorphisms (SNPs) are usually irrelevant to the disease. Simple random sampling in a random forest, using the default mtry parameter to choose the feature subspace, selects too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required to include useful, relevant SNPs and exclude the vast number of non-informative ones; however, this is too time-consuming to be practical for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection when generating the decision trees of a random forest for high-dimensional GWA data. Our idea is to design an equal-width discretization scheme for informativeness that divides the SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace from which to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, avoids the very high computational cost of an exhaustive search for an optimal mtry, and maintains the randomness of the random forest. We employ two genome-wide SNP data sets (a Parkinson case-control data set comprising 408,803 SNPs and an Alzheimer case-control data set comprising 380,157 SNPs) to demonstrate that the proposed stratified sampling method is effective and generates random forests with higher accuracy and a lower error bound than Breiman's random forest generation method. For the Parkinson data, we also highlight some interesting genes identified by the method that may be associated with neurological disorders and merit further biological investigation.
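The stratified subspace step described above can be sketched as follows. This is a simplified illustration under assumptions: the function name, the informativeness scores, and the group counts are made up, not the authors' code.

```python
import random

def stratified_subspace(snp_scores, n_groups, per_group, rng=random):
    """Equal-width discretization of an informativeness score, then
    random selection of the same number of SNPs from each group to form
    one feature subspace (one subspace per decision tree)."""
    lo, hi = min(snp_scores.values()), max(snp_scores.values())
    width = (hi - lo) / n_groups or 1.0
    groups = [[] for _ in range(n_groups)]
    for snp, score in snp_scores.items():
        idx = min(int((score - lo) / width), n_groups - 1)
        groups[idx].append(snp)
    subspace = []
    for group in groups:
        subspace.extend(rng.sample(group, min(per_group, len(group))))
    return subspace

# Illustrative scores: 100 SNPs with informativeness spread over [0, 0.99].
scores = {"snp%d" % i: i / 100 for i in range(100)}
subspace = stratified_subspace(scores, n_groups=5, per_group=2)
```

Because every group contributes equally, each subspace is guaranteed a share of high-informativeness SNPs while remaining random, which is the point of the method.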
Özel, Gamze
2015-01-01
In this paper, a new exponential type estimator is developed in stratified random sampling for the population mean using auxiliary variable information. In order to evaluate the efficiency of the introduced estimator, we first review some estimators and study the optimum property of the suggested strategy. To judge the merits of the suggested class of estimators over others under the optimal condition, a simulation study and real data applications are conducted. The results show that the introduc...
Sample size and power for a stratified doubly randomized preference design.
Cameron, Briana; Esserman, Denise A
2016-11-21
The two-stage (or doubly) randomized preference trial design is an important tool for researchers seeking to disentangle the role of patient treatment preference on treatment response through estimation of selection and preference effects. Up until now, these designs have been limited by their assumption of equal preference rates and effect sizes across the entire study population. We propose a stratified two-stage randomized trial design that addresses this limitation. We begin by deriving stratified test statistics for the treatment, preference, and selection effects. Next, we develop a sample size formula for the number of patients required to detect each effect. The properties of the model and the efficiency of the design are established using a series of simulation studies. We demonstrate the applicability of the design using a study of Hepatitis C treatment modality, specialty clinic versus mobile medical clinic. In this example, a stratified preference design (stratified by alcohol/drug use) may more closely capture the true distribution of patient preferences and allow for a more efficient design than a design which ignores these differences (unstratified version). © The Author(s) 2016.
Stratified random sampling for estimating billing accuracy in health care systems.
Buddhakulsomsiri, Jirachai; Parthanadee, Parthana
2008-03-01
This paper presents a stratified random sampling plan for estimating the accuracy of bill processing for health care bills submitted to third-party payers in health care systems. Bill processing accuracy is estimated with two measures: percent accuracy and total dollar accuracy. Difficulties in constructing a sampling plan arise when the population's strata structure is unknown and when the two measures require different sampling schemes. To efficiently utilize sampling resources, the plan is designed to estimate both measures effectively from the same sample. It features a simple but efficient strata construction method, called the rectangular method, and two accuracy estimation methods, one for each measure. The sampling plan is tested on actual populations from an insurance company. The accuracy estimates obtained are then used to compare the rectangular method with other potential clustering methods for strata construction, and to compare the accuracy estimation methods with other eligible methods. Computational results show the effectiveness of the proposed sampling plan.
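A combined stratified estimate of percent accuracy, of the kind such a plan produces, can be sketched as follows. The stratum figures are invented for illustration and do not reflect the paper's rectangular strata or its dollar-accuracy estimator.

```python
def stratified_percent_accuracy(strata):
    """Combined stratified estimate of percent accuracy.

    Each stratum is (population_size, n_sampled, n_accurate); stratum
    sample proportions are weighted by stratum population size."""
    total = sum(size for size, _, _ in strata)
    return sum(size * acc / n for size, n, acc in strata) / total

# Two invented strata: a large stratum of routine bills at 90% sample
# accuracy and a small stratum of complex bills at 50%.
estimate = stratified_percent_accuracy([(900, 90, 81), (100, 20, 10)])
```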
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Brunner, N. M.; Mladinich, C. S.; Caldwell, M. K.; Beal, Y. J. G.
2014-12-01
The U.S. Geological Survey is generating a suite of Essential Climate Variables (ECVs) products, as defined by the Global Climate Observing System, from the Landsat data archive. Validation protocols for these products are being established, incorporating the Committee on Earth Observation Satellites Land Product Validation Subgroup's best practice guidelines and validation hierarchy stages. The sampling design and accuracy measures follow the methodology developed by the European Space Agency's Climate Change Initiative Fire Disturbance (fire_cci) project (Padilla and others, 2014). A rigorous validation was performed on the 2008 Burned Area ECV (BAECV) prototype product, using a stratified random sample of 48 Thiessen scene areas overlaying Landsat path/rows distributed across several terrestrial biomes throughout North America. The validation reference data consisted of fourteen sample sites acquired from the fire_cci project, with the remaining sample sites generated from a densification of the stratified sampling for North America. The reference burned-area polygons were generated using the ABAMS (Automatic Burned Area Mapping Software) (Bastarrika and others, 2011; Izagirre, 2014). Accuracy results will be presented, indicating strengths and weaknesses of the BAECV algorithm.
References:
Bastarrika, A., Chuvieco, E., and Martín, M.P., 2011, Mapping burned areas from Landsat TM/ETM+ data with a two-phase algorithm: balancing omission and commission errors: Remote Sensing of Environment, v. 115, no. 4, p. 1003-1012.
Izagirre, A.B., 2014, Automatic Burned Area Mapping Software (ABAMS), preliminary documentation, version 10 v4: Vitoria-Gasteiz, Spain, University of Basque Country, p. 27.
Padilla, M., Chuvieco, E., Hantson, S., Theis, R., and Sandow, C., 2014, D2.1 - Product Validation Plan: UAH - University of Alcalá de Henares (Spain), 37 p.
Notes on interval estimation of the generalized odds ratio under stratified random sampling.
Lui, Kung-Jong; Chang, Kuang-Chao
2013-05-01
It is not rare to encounter patient responses on an ordinal scale in a randomized clinical trial (RCT). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH), and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use data taken from a study of smoking status and breathing test results among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.
Stemflow estimation in a redwood forest using model-based stratified random sampling
Jack Lewis
2003-01-01
Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...
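One classical way to set stratum boundaries from an auxiliary variable such as tree diameter is the Dalenius-Hodges cum-√f rule, sketched below. This is offered as a plausible illustration of auxiliary-based stratum design; the paper does not state that it uses this exact rule, and the data here are invented.

```python
def cum_sqrt_f_boundaries(values, n_bins, n_strata):
    """Dalenius-Hodges cum-sqrt(f) rule: histogram the auxiliary
    variable, accumulate sqrt(frequency), and cut the cumulative scale
    into n_strata equal parts to obtain stratum boundaries."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    freq = [0] * n_bins
    for v in values:
        freq[min(int((v - lo) / width), n_bins - 1)] += 1
    total = sum(f ** 0.5 for f in freq)
    boundaries, running, target = [], 0.0, total / n_strata
    for i, f in enumerate(freq):
        running += f ** 0.5
        if running >= target and len(boundaries) < n_strata - 1:
            boundaries.append(lo + (i + 1) * width)
            target += total / n_strata
    return boundaries

# Illustrative auxiliary variable (e.g. tree diameters), cut into 4 strata.
cuts = cum_sqrt_f_boundaries(list(range(100)), n_bins=10, n_strata=4)
```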
Notes on interval estimation of the gamma correlation under stratified random sampling.
Lui, Kung-Jong; Chang, Kuang-Chao
2012-07-01
We have developed four asymptotic interval estimators in closed form for the gamma correlation under stratified random sampling: the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's theorem (CIFT), and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators, and to note the possible loss of accuracy when both Wald's confidence interval and MWLSLR are applied to pooled data without accounting for stratification, we employ Monte Carlo simulation. We use data, published elsewhere, from a general social survey studying the association between income level and job satisfaction among black Americans, with strata formed by gender, to illustrate the practical use of these interval estimators. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multivariate stratified sampling by stochastic multiobjective optimisation
Diaz-Garcia, Jose A.; Ramos-Quiroga, Rogelio
2011-01-01
This work considers the allocation problem for multivariate stratified random sampling as a problem of integer non-linear stochastic multiobjective mathematical programming. With this goal in mind the asymptotic distribution of the vector of sample variances is studied. Two alternative approaches are suggested for solving the allocation problem for multivariate stratified random sampling. An example is presented by applying the different proposed techniques.
Dual to Ratio-Cum-Product Estimator in Simple and Stratified Random Sampling
Yunusa Olufadi
2013-01-01
New estimators for estimating the finite population mean using two auxiliary variables under simple and stratified sampling designs are proposed. Their properties (e.g., mean square error) are studied to the first order of approximation. Moreover, some existing estimators are shown to be particular members of the proposed estimator. Furthermore, comparison of the proposed estimator with the usual unbiased estimator and other estimators considered in this paper reveals interesting results. These results are fur...
Multivariate Multi-Objective Allocation in Stratified Random Sampling: A Game Theoretic Approach.
Muhammad, Yousaf Shad; Hussain, Ijaz; Shoukry, Alaa Mohamd
2016-01-01
We consider the problem of multivariate multi-objective allocation where no or only limited information on the stratum variances is available. Results show that a game theoretic approach (based on weighted goal programming) can be applied to sample size allocation problems. We use a simulation technique to determine the payoff matrix and to solve a minimax game.
Zur, Richard M; Pesce, Lorenzo L; Jiang, Yulei
2015-05-01
To evaluate stratified random sampling (SRS) of screening mammograms by (1) Breast Imaging Reporting and Data System (BI-RADS) assessment categories, and (2) the presence of breast cancer in mammograms, for estimation of screening-mammography receiver operating characteristic (ROC) curves in retrospective observer studies. We compared observer study case sets constructed by (1) random sampling (RS); (2) SRS with proportional allocation (SRS-P) with BI-RADS 1 and 2 noncancer cases accounting for 90.6% of all noncancer cases; (3) SRS with disproportional allocation (SRS-D) with BI-RADS 1 and 2 noncancer cases accounting for 10%-80%; and (4) SRS-D and multiple imputation (SRS-D + MI) with missing BI-RADS 1 and 2 noncancer cases imputed to recover the 90.6% proportion. Monte Carlo simulated case sets were drawn from a large case population modeled after published Digital Mammography Imaging Screening Trial data. We compared the bias, root-mean-square error, and coverage of 95% confidence intervals of area under the ROC curve (AUC) estimates from the sampling methods (200-2000 cases, of which 25% were cancer cases) versus from the large case population. AUC estimates were unbiased from RS, SRS-P, and SRS-D + MI, but biased from SRS-D. AUC estimates from SRS-P and SRS-D + MI had 10% smaller root-mean-square error than RS. Both SRS-P and SRS-D + MI can be used to obtain unbiased and 10% more efficient estimate of screening-mammography ROC curves. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
An Evaluation of Stratified Sampling of Microarchitecture Simulations
Wunderlich, Roland E.; Wenisch, Thomas F.; Falsafi, Babak; Hoe, James C.
2004-01-01
Recent research advocates applying sampling to accelerate microarchitecture simulation. Simple random sampling offers accurate performance estimates (with high, quantifiable confidence) by taking a large number (e.g., 10,000) of short performance measurements over the full length of a benchmark. However, simple random sampling does not exploit the often repetitive behavior of benchmarks, and so collects many redundant measurements. By identifying repetitive behaviors, we can apply stratified rand...
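The variance-reduction argument can be illustrated with a toy benchmark made of two repetitive phases; the phase values, noise level, and sample sizes are invented for illustration and are not from the paper's SimPoint-style methodology.

```python
import random

random.seed(0)
# Toy benchmark: two repetitive phases with different per-instruction costs.
phase_a = [1.0 + random.gauss(0, 0.05) for _ in range(5000)]
phase_b = [3.0 + random.gauss(0, 0.05) for _ in range(5000)]
measurements = phase_a + phase_b
true_mean = sum(measurements) / len(measurements)

def simple_random_sample(n):
    ids = random.sample(range(len(measurements)), n)
    return sum(measurements[i] for i in ids) / n

def stratified_sample(n):
    # One stratum per phase, equal allocation: within-phase redundancy
    # is exploited instead of being re-measured.
    ids = (random.sample(range(5000), n // 2)
           + random.sample(range(5000, 10000), n // 2))
    return sum(measurements[i] for i in ids) / n

srs_err = sum(abs(simple_random_sample(20) - true_mean)
              for _ in range(200)) / 200
strat_err = sum(abs(stratified_sample(20) - true_mean)
                for _ in range(200)) / 200
# For the same measurement budget, the stratified error is far smaller,
# since only within-phase variance remains.
```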
Rosanowski, S M; Cogger, N; Rogers, C W; Benschop, J; Stevenson, M A
2012-12-01
We conducted a cross-sectional survey to determine the demographic characteristics of non-commercial horses in New Zealand. A sampling frame of properties with non-commercial horses was derived from the national farms database, AgriBase™. Horse properties were stratified by property size and a generalised random-tessellated stratified (GRTS) sampling strategy was used to select properties (n=2912) to take part in the survey. The GRTS sampling design allowed for the selection of properties that were spatially balanced relative to the distribution of horse properties throughout the country. The registered decision maker of the property, as identified in AgriBase™, was sent a questionnaire asking them to describe the demographic characteristics of horses on the property, including the number and reason for keeping horses, as well as information about other animals kept on the property and the proximity of boundary neighbours with horses. The response rate to the survey was 38% (1044/2912) and the response rate was not associated with property size or region. A total of 5322 horses were kept for recreation, competition, racing, breeding, stock work, or as pets. The reasons for keeping horses and the number and class of horses varied significantly between regions and by property size. Of the properties sampled, less than half kept horses that could have been registered with Equestrian Sports New Zealand or either of the racing codes. Of the respondents that reported knowing whether their neighbours had horses, 58.6% (455/776) of properties had at least one boundary neighbour that kept horses. The results of this study have important implications for New Zealand, which has an equine population that is naïve to many equine diseases considered endemic worldwide. The ability to identify, and apply accurate knowledge of the population at risk to infectious disease control strategies would lead to more effective strategies to control and prevent disease spread during an
Bayesian stratified sampling to assess corpus utility
Energy Technology Data Exchange (ETDEWEB)
Hochberg, J.; Scovel, C.; Thomas, T.; Hall, S.
1998-12-01
This paper describes a method for asking statistical questions about a large text corpus. The authors exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" They estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Bayesian analysis and stratified sampling are used to reduce the sampling uncertainty of the estimate from over 3,100 documents to fewer than 1,000. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
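A minimal sketch of combining per-stratum Beta-Binomial posteriors, in the spirit of the analysis above; the stratum sizes, sample counts, and uniform prior are invented assumptions, not the paper's actual strata or prior.

```python
def stratified_posterior_mean(strata):
    """Bayesian stratified estimate of a corpus-wide proportion.

    Each stratum is (stratum_size, n_sampled, n_hits). With a uniform
    Beta(1, 1) prior, the per-stratum posterior mean of the proportion
    is (hits + 1) / (sampled + 2); strata are combined by size."""
    total = sum(size for size, _, _ in strata)
    return sum(size * (hits + 1) / (sampled + 2)
               for size, sampled, hits in strata) / total

# Invented strata of a 45,820-document corpus: a large stratum where
# most sampled documents were 'real', and a small one sampled lightly.
est = stratified_posterior_mean([(40000, 150, 120), (5820, 50, 45)])
```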
Lapeyrouse, L M; Morera, O; Heyman, J M C; Amaya, M A; Pingitore, N E; Balcazar, H
2012-04-01
Examination of border-specific characteristics such as trans-border mobility and transborder health service illuminates the heterogeneity of border Hispanics and may provide greater insight toward understanding differential health behaviors and status among these populations. In this study, we create a descriptive profile of the concept of trans-border mobility by exploring the relationship between mobility status and a series of demographic, economic and socio-cultural characteristics among mobile and non-mobile Hispanics living in the El Paso-Juarez border region. Using a two-stage stratified random sampling design, bilingual interviewers collected survey data from border residents (n = 1,002). Findings show that significant economic, cultural, and behavioral differences exist between mobile and non-mobile respondents. While non-mobile respondents were found to have higher social economic status than their mobile counterparts, mobility across the border was found to offer less acculturated and poorer Hispanics access to alternative sources of health care and other services.
Lin, Yifei; Yin, Senlin; Lai, Sike; Tang, Ji; Huang, Jin; Du, Liang
2016-10-01
As the relationship between physicians and patients has deteriorated in China recently, medical conflicts now occur more frequently. Physicians, to a certain extent, also bear some responsibility. Awareness of medical professionalism and its influencing factors can help in taking targeted measures and alleviating the conflict. Through a combination of physicians' self-assessment and patients' assessment in ambulatory care clinics in Chengdu, this research aims to evaluate the importance of medical professionalism in hospitals and explore its influencing factors, hoping to provide decision-making references to improve this grim situation. From February to March 2013, a cross-sectional study was conducted in 2 tier-3 hospitals, 5 tier-2 hospitals, and 10 community hospitals through a stratified random sampling method on physicians and patients, at a ratio of 1/5. Questionnaires were adopted from a pilot study. A total of 382 physicians and 1910 patients were matched and surveyed. Regarding medical professionalism, physicians' self-assessment scores were 85.18 ± 7.267 out of 100 and patient-assessment scores were 57.66 ± 7.043 out of 70. The influencing factors for self-assessment were physicians' working years (P = 0.003) and patients' complaints (P = 0.006), whereas the influencing factors for patient assessment were patients' ages (P = 0.001), their physicians' working years (P < 0.01), and satisfaction with the payment mode (P = 0.006). Higher self-assessment of medical professionalism was associated with more working years and no complaint history; higher patient assessment was associated with older patients, physicians with more working years, and higher satisfaction with the payment mode. The government should
Optimal Allocation in Stratified Randomized Response Model
Directory of Open Access Journals (Sweden)
Javid Shabbir
2005-07-01
A Warner (1965) randomized response model based on stratification is used to determine the allocation of samples. Both linear and log-linear cost functions are discussed under uni- and double stratification. It is observed that by using a log-linear cost function, one can obtain better allocations.
Chance constrained compromise mixed allocation in multivariate stratified sampling
Directory of Open Access Journals (Sweden)
Ummatul Fatima
2016-03-01
Consider a multivariate stratified population with several strata and several characteristics, where estimation of the population means is of interest. In such cases the traditional individual optimum allocations may differ widely from characteristic to characteristic, and there will be no obvious compromise between them unless they are highly correlated. As a result, no single set of allocations exists that can be practically implemented for all characteristics. Assuming the characteristics are independent, many authors have worked out allocations based on different compromise criteria; such allocations are called compromise allocations and are optimum for all characteristics in some sense. Ahsan et al. (2005) introduced the concept of 'mixed allocation' in univariate stratified sampling. Later, Varshney et al. (2011) extended it to the multivariate case and called it 'compromise mixed allocation'. Ahsan et al. (2013) worked on mixed allocation in stratified sampling using the chance constrained programming technique, which allows the cost constraint to be violated with a specified small probability. This paper presents a more realistic approach to compromise mixed allocation by formulating the problem as a chance constrained nonlinear programming problem in which the per-unit measurement costs in the various strata are random variables. The application of this approach is exhibited through a numerical example assuming normal distributions for the random parameters.
Prototypic Features of Loneliness in a Stratified Sample of Adolescents
DEFF Research Database (Denmark)
Lasgaard, Mathias; Elklit, Ask
2009-01-01
A questionnaire survey was conducted with 379 Danish Grade 8 students (M = 14.1 years, SD = 0.4) from 22 geographically stratified and randomly selected schools. Hierarchical linear regression analysis showed that network orientation, success expectation, and avoidance in affiliative situations predicted...
Molinario, G.; Hansen, M.; Potapov, P.
2016-12-01
High-resolution satellite imagery obtained from the National Geospatial-Intelligence Agency through NASA was used to photo-interpret sample areas within the DRC. The area sampled is a stratification of forest cover loss from circa 2014 that occurred either completely within the previously mapped homogeneous area of the rural complex, at its interface with primary forest, or in isolated forest perforations. Previous research resulted in a map of these areas that contextualizes forest loss depending on where it occurs and with what spatial density, leading to a better understanding of the real impacts of livelihood shifting cultivation on forest degradation. The stratified random sampling of these areas allows the characterization of the constituent land cover types within them, and of their variability throughout the DRC. Shifting cultivation has a variable forest degradation footprint in the DRC depending on the many factors that drive it, but its role in forest degradation and deforestation has been disputed, leading us to investigate and quantify the clearing and reuse rates within the strata throughout the country.
Random forcing of geostrophic motion in rotating stratified turbulence
Waite, Michael L.
2017-12-01
Random forcing of geostrophic motion is a common approach in idealized simulations of rotating stratified turbulence. Such forcing represents the injection of energy into large-scale balanced motion, and the resulting breakdown of quasi-geostrophic turbulence into inertia-gravity waves and stratified turbulence can shed light on the turbulent cascade processes of the atmospheric mesoscale. White noise forcing is commonly employed, which excites all frequencies equally, including frequencies much higher than the natural frequencies of large-scale vortices. In this paper, the effects of these high frequencies in the forcing are investigated. Geostrophic motion is randomly forced with red noise over a range of decorrelation time scales τ, from a few time steps to twice the large-scale vortex time scale. It is found that short τ (i.e., nearly white noise) results in about 46% more gravity wave energy than longer τ, despite the fact that waves are not directly forced. We argue that this effect is due to wave-vortex interactions, through which the high frequencies in the forcing are able to excite waves at their natural frequencies. It is concluded that white noise forcing should be avoided, even if it is only applied to the geostrophic motion, when a careful investigation of spontaneous wave generation is needed.
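Red noise with a prescribed decorrelation time is commonly generated as a first-order autoregressive (AR(1)) process. The sketch below is a minimal stdlib illustration of that idea; the parameter values (dt, tau) are invented and not taken from the paper's simulations.

```python
# Minimal sketch of red-noise forcing with decorrelation time tau:
# an AR(1) process whose lag-k autocorrelation decays like exp(-k*dt/tau).
import math
import random

def red_noise(n, dt, tau, seed=0):
    """Generate an AR(1) (red-noise) series of length n."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)          # lag-one autocorrelation
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

series = red_noise(200_000, dt=0.01, tau=0.5)
a = math.exp(-0.01 / 0.5)
mean = sum(series) / len(series)
var = sum((s - mean) ** 2 for s in series)
r1 = sum((series[i] - mean) * (series[i + 1] - mean)
         for i in range(len(series) - 1)) / var
print(round(a, 3), round(r1, 3))  # sample lag-1 autocorrelation tracks exp(-dt/tau)
```

Taking tau down to a few time steps recovers nearly white forcing, while tau comparable to the large-scale vortex time scale gives the reddened forcing the paper argues for.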
Schildcrout, Jonathan S; Garbett, Shawn P; Heagerty, Patrick J
2013-06-01
The analysis of longitudinal trajectories usually focuses on evaluation of explanatory factors that are either associated with rates of change, or with overall mean levels of a continuous outcome variable. In this article, we introduce valid design and analysis methods that permit outcome dependent sampling of longitudinal data for scenarios where all outcome data currently exist, but a targeted substudy is being planned in order to collect additional key exposure information on a limited number of subjects. We propose a stratified sampling based on specific summaries of individual longitudinal trajectories, and we detail an ascertainment corrected maximum likelihood approach for estimation using the resulting biased sample of subjects. In addition, we demonstrate that the efficiency of an outcome-based sampling design relative to use of a simple random sample depends highly on the choice of outcome summary statistic used to direct sampling, and we show a natural link between the goals of the longitudinal regression model and corresponding desirable designs. Using data from the Childhood Asthma Management Program, where genetic information required retrospective ascertainment, we study a range of designs that examine lung function profiles over 4 years of follow-up for children classified according to their genotype for the IL 13 cytokine. © 2013, The International Biometric Society.
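The core design idea, stratifying subjects on a summary of their existing longitudinal outcome and oversampling the informative strata before collecting expensive exposure data, can be sketched as follows. All data, stratum cut-points, and allocations below are invented for illustration; the summary statistic used here is a per-subject least-squares slope.

```python
# Sketch of outcome-dependent sampling for a longitudinal sub-study:
# stratify subjects by their trajectory slope and oversample the tails.
import random

def slope(ts, ys):
    """Least-squares slope of y on t for one subject's trajectory."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    return num / sum((t - tbar) ** 2 for t in ts)

rng = random.Random(1)
times = [0.0, 1.0, 2.0, 3.0, 4.0]
subjects = {}
for i in range(300):
    b = rng.gauss(0.0, 1.0)                             # subject-specific slope
    subjects[i] = [b * t + rng.gauss(0.0, 0.5) for t in times]

slopes = {i: slope(times, ys) for i, ys in subjects.items()}
order = sorted(slopes, key=slopes.get)
low, mid, high = order[:60], order[60:240], order[240:]

# Oversample the informative tails: 40 + 20 + 40 = 100 sub-study subjects.
sample = rng.sample(low, 40) + rng.sample(mid, 20) + rng.sample(high, 40)
print(len(sample))
```

As the abstract notes, the right summary statistic depends on the regression target: slope-based strata suit models of rates of change, while mean-based strata suit models of overall levels.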
NONPARAMETRIC MIXED RATIO ESTIMATOR FOR A FINITE POPULATION TOTAL IN STRATIFIED SAMPLING
Directory of Open Access Journals (Sweden)
George Otieno Orwa
2010-08-01
We propose a nonparametric regression approach to the estimation of a finite population total in model-based frameworks in the case of stratified sampling. Similar work has been done by Nadaraya and Watson (1964), Hansen et al. (1983), and Breidt and Opsomer (2000). Our point of departure from these works is the selection of the sampling weights within every stratum, where we treat the individual strata as compact Abelian groups and demonstrate that the resulting proposed estimator is easier to compute. We also make use of mixed ratios, but this time not in the contexts of simple random sampling or two-stage cluster sampling, but in stratified sampling schemes, where a void still exists.
GSAMPLE: Stata module to draw a random sample
Jann, Ben
2006-01-01
gsample draws a random sample from the data in memory. Simple random sampling (SRS) is supported, as well as unequal probability sampling (UPS), of which sampling with probabilities proportional to size (PPS) is a special case. Both methods, SRS and UPS/PPS, provide sampling with replacement and sampling without replacement. Furthermore, stratified sampling and cluster sampling are supported.
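The sampling schemes the module supports have direct stdlib analogues in Python. The snippet below is a rough illustration of SRS, PPS with replacement, and stratified SRS; the units, size measures, and strata are all made up and unrelated to the Stata implementation.

```python
# Illustrative analogues of SRS, PPS-with-replacement, and stratified SRS.
import random

rng = random.Random(42)
units = list(range(10))
sizes = [1, 1, 1, 1, 1, 2, 2, 3, 4, 9]           # measures of size (made up)

srs_wor = rng.sample(units, 4)                   # SRS without replacement
pps_wr = rng.choices(units, weights=sizes, k=4)  # PPS with replacement
strata = {0: units[:5], 1: units[5:]}            # two ad-hoc strata
stratified = [u for s in strata.values() for u in rng.sample(s, 2)]
print(srs_wor, pps_wr, stratified)
```

PPS without replacement, which the module also handles, is notably harder than the with-replacement case and is not sketched here.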
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development, where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance, yielding good model performance with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
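The Neyman-allocation idea behind the SOM-based splitter can be sketched with a much simpler stratifier. Below, equal-width binning stands in for the SOM purely for illustration, and the data are synthetic; the allocation rule, sample size per stratum proportional to stratum size times within-stratum spread, is the part being demonstrated.

```python
# Sketch of Neyman-allocated stratified data splitting. Binning stands in
# for the SOM; the allocation rule n_h ~ N_h * sigma_h is the same idea.
import random
import statistics

rng = random.Random(7)
data = [rng.gauss(0.0, 1.0) ** 2 for _ in range(1000)]   # skewed toy data

nbins, lo, hi = 5, min(data), max(data)
width = (hi - lo) / nbins
strata = [[] for _ in range(nbins)]
for x in data:
    strata[min(int((x - lo) / width), nbins - 1)].append(x)

n = 200                                                  # hold-out sample size
weight = [len(s) * (statistics.pstdev(s) if len(s) > 1 else 0.0) for s in strata]
alloc = [round(n * w / sum(weight)) for w in weight]     # Neyman allocation
holdout = [x for s, m in zip(strata, alloc)
           for x in rng.sample(s, min(m, len(s)))]       # SRS within stratum
print(alloc, len(holdout))
```

Relative to a plain random split, this concentrates sampling effort in the high-variance regions of the data, which is what reduces the variance of the resulting subsets.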
Chiba, Yasutaka
2017-09-01
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to analysis of noninferiority trials and to construct the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
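For background, the classical single-table Fisher's exact test that the stratified test generalizes can be computed from the hypergeometric distribution with stdlib tools only. This is a sketch of the standard crude test, not the paper's stratified exact test; the example table is invented.

```python
# Two-sided Fisher exact p-value for a single 2x2 table, computed from the
# hypergeometric null distribution (sum of outcomes no more likely than
# the observed one).
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    denom = comb(n, c1)
    def prob(x):                       # hypergeometric P(X = x) under the null
        return comb(r1, x) * comb(n - r1, c1 - x) / denom
    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

print(round(fisher_exact_p(8, 2, 1, 5), 4))   # → 0.035
```

As the abstract explains, this tests only the sharp causal null; the proposed stratified exact test targets the weak causal null while adjusting for stratification factors.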
Mixing in a stratified shear flow: Energetics and sampling
Ivey, G. N.; Koseff, J. R.; Briggs, D. A.; Ferziger, J. H.
1993-01-01
Direct numerical simulations of the time evolution of homogeneous stably stratified shear flows have been performed for Richardson numbers from 0 to 1 and for Prandtl numbers between 0.1 and 2. The results indicate that the mixing efficiency R_f varies with turbulent Froude number in a manner consistent with laboratory experiments performed with Prandtl numbers of 0.7 and 700. However, unlike the laboratory results, for a particular Froude number the simulations do not show a clear dependence of the magnitude of R_f on Pr. The observed maximum value of R_f is 0.25. When averaged over vertical length scales an order of magnitude greater than either the overturning or Ozmidov scales of the flow, the simulations indicate that the dissipation rate epsilon is only weakly lognormally distributed, with an intermittency of about 0.01, whereas estimated values in the ocean are 3 to 7.
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. The distribution of varroa mites on the sticky board being observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
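The grid-stratified scheme, one counting circle placed at random within each cell of a regular grid over the board, can be sketched as follows. The board dimensions, grid resolution, and circle radius below are invented and not taken from the paper.

```python
# Sketch of regular-grid stratified spatial sampling: one randomly placed
# counting circle per grid cell, constrained to lie fully inside its cell.
import random

rng = random.Random(3)
board_w, board_h = 40.0, 25.0        # sticky-board size in cm (invented)
nx, ny, r = 8, 5, 1.5                # grid cells and counting-circle radius
cw, ch = board_w / nx, board_h / ny  # cell dimensions

centres = []
for i in range(nx):
    for j in range(ny):
        x = i * cw + r + rng.random() * (cw - 2 * r)
        y = j * ch + r + rng.random() * (ch - 2 * r)
        centres.append((x, y))
print(len(centres))   # → 40 sampling circles, one per stratum
```

Because every cell contributes exactly one circle, the design guarantees even spatial coverage, which is what gives it an advantage over partially random sampling when the mite distribution is frame-driven.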
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
Zhao, Wenle
2014-12-30
Stratified permuted block randomization has been the dominant covariate-adaptive randomization procedure in clinical trials for several decades. Its high probability of deterministic assignment and low capacity for covariate balancing have been well recognized. The popularity of this sub-optimal method is largely due to its simplicity of implementation and the lack of better alternatives. Proposed in this paper is a two-stage covariate-adaptive randomization procedure that uses the block urn design or the big stick design in stage one to restrict the treatment imbalance within each covariate stratum, and uses the biased-coin minimization method in stage two to control imbalances in the distribution of additional covariates that are not included in the stratification algorithm. Analytical and simulation results show that the new randomization procedure significantly reduces the probability of deterministic assignments, and improves the covariate balancing capacity when compared to the traditional stratified permuted block randomization. Copyright © 2014 John Wiley & Sons, Ltd.
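The big stick design used in stage one is simple to state: assign at random unless the treatment imbalance would exceed a boundary b, in which case the lagging arm is forced. The sketch below illustrates it within a single stratum; the boundary and sample size are arbitrary choices, not the paper's settings.

```python
# Minimal sketch of the big stick design within one covariate stratum:
# random assignment unless |#A - #B| would exceed the boundary b.
import random

def big_stick(n_subjects, b=2, seed=0):
    rng = random.Random(seed)
    assignments, imbalance = [], 0      # imbalance = (#A - #B)
    for _ in range(n_subjects):
        if imbalance >= b:
            arm = "B"                   # deterministic: restore balance
        elif imbalance <= -b:
            arm = "A"
        else:
            arm = rng.choice("AB")      # otherwise a fair coin
        imbalance += 1 if arm == "A" else -1
        assignments.append(arm)
    return assignments

seq = big_stick(50)
print(abs(seq.count("A") - seq.count("B")) <= 2)   # → True
```

Compared with permuted blocks, assignments are deterministic only at the imbalance boundary rather than at the end of every block, which is the source of the reduced predictability the abstract reports.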
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using
Comparing a stratified treatment strategy with the standard treatment in randomized clinical trials
DEFF Research Database (Denmark)
Sun, Hong; Bretz, Frank; Gerke, Oke
2016-01-01
The increasing emergence of predictive markers for different treatments in the same patient population allows us to define stratified treatment strategies. We consider randomized clinical trials that compare a standard treatment with a new stratified treatment strategy that divides the study...... population into subgroups receiving different treatments. Because the new strategy may not be beneficial in all subgroups, we consider in this paper an intermediate approach that establishes a treatment effect in a subset of patients built by joining several subgroups. The approach is based on the simple...
Tipton, Elizabeth
2013-04-01
An important question in the design of experiments is how to ensure that the findings from the experiment are generalizable to a larger population. This concern with generalizability is particularly important when treatment effects are heterogeneous and when selecting units into the experiment using random sampling is not possible-two conditions commonly met in large-scale educational experiments. This article introduces a model-based balanced-sampling framework for improving generalizations, with a focus on developing methods that are robust to model misspecification. Additionally, the article provides a new method for sample selection within this framework: First units in an inference population are divided into relatively homogenous strata using cluster analysis, and then the sample is selected using distance rankings. In order to demonstrate and evaluate the method, a reanalysis of a completed experiment is conducted. This example compares samples selected using the new method with the actual sample used in the experiment. Results indicate that even under high nonresponse, balance is better on most covariates and that fewer coverage errors result. The article concludes with a discussion of additional benefits and limitations of the method.
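The two-step selection, stratify the inference population into homogeneous groups and then pick the most representative unit per group by distance ranking, can be sketched as follows. Quantile strata on one covariate stand in for a full cluster analysis, and all data are synthetic; only the structure of the method is illustrated.

```python
# Sketch of balanced sampling: stratify a population into homogeneous
# groups, then pick the unit nearest each stratum centroid (distance rank 1).
import random

rng = random.Random(5)
pop = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(500)]

k = 5
pop_sorted = sorted(pop)                      # stratify on the first covariate
size = len(pop) // k
strata = [pop_sorted[i * size:(i + 1) * size] for i in range(k)]

sample = []
for s in strata:
    cx = sum(p[0] for p in s) / len(s)        # stratum centroid
    cy = sum(p[1] for p in s) / len(s)
    sample.append(min(s, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2))
print(len(sample))
```

In practice the ranking also supplies replacements: if the top-ranked site declines to participate, the next unit in the distance ranking is recruited, which is why the method stays balanced under nonresponse.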
Directory of Open Access Journals (Sweden)
Øren Anita
2008-12-01
Background: Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Methods: Men and women aged 16–74 years, randomly selected from the Norwegian national population database, received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Results: Overall, 2.0% of the study population was defined as CSOs. Young age, female gender, and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Conclusion: Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members.
LTRMP Fisheries Data - Stratified Random and Fixed Site Sampling
U.S. Geological Survey, Department of the Interior — The Long Term Resource Monitoring Programs (LTRMP) annual fish monitoring began on the Upper Mississippi and Illinois Rivers in 1989. During the first two years...
Braithwaite, Jeffrey; Greenfield, David; Westbrook, Johanna; Pawsey, Marjorie; Westbrook, Mary; Gibberd, Robert; Naylor, Justine; Nathan, Sally; Robinson, Maureen; Runciman, Bill; Jackson, Margaret; Travaglia, Joanne; Johnston, Brian; Yen, Desmond; McDonald, Heather; Low, Lena; Redman, Sally; Johnson, Betty; Corbett, Angus; Hennessy, Darlene; Clark, John; Lancaster, Judie
2010-02-01
Despite the widespread use of accreditation in many countries, and prevailing beliefs that accreditation is associated with variables contributing to clinical care and organisational outcomes, little systematic research has been conducted to examine its validity as a predictor of healthcare performance. Objective: To determine whether accreditation performance is associated with self-reported clinical performance and independent ratings of four aspects of organisational performance. Design: Independent blinded assessment of these variables in a random, stratified sample of health service organisations. Setting: Acute care: large, medium and small health-service organisations in Australia. Participants: Nineteen health service organisations employing 16,448 staff, treating 321,289 inpatients and providing 1,971,087 non-inpatient services annually, representing approximately 5% of the Australian acute care health system. Main outcome measures: Correlations of accreditation performance with organisational culture, organisational climate, consumer involvement, leadership and clinical performance. Results: Accreditation performance was significantly positively correlated with organisational culture (rho=0.618, p=0.005) and leadership (rho=0.616, p=0.005). There was a trend between accreditation and clinical performance (rho=0.450, p=0.080). Accreditation was unrelated to organisational climate (rho=0.378, p=0.110) and consumer involvement (rho=0.215, p=0.377). Conclusions: Accreditation results predict leadership behaviours and cultural characteristics of healthcare organisations but not organisational climate or consumer participation, and a positive trend between accreditation and clinical performance is noted.
Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth
2017-02-16
With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years, and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently and has the advantage of financial and temporal efficiency when auditing a large city.
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across...
Kim, Kihong
2015-06-01
The propagation and the Anderson localization of electromagnetic waves in a randomly stratified slab, where both the dielectric permittivity and the magnetic permeability depend on one spatial coordinate in a random manner, are theoretically studied. The case where the wave impedance is uniform, while the refractive index is random, is considered in detail. The localization length and the disorder-averaged transmittance of s and p waves incident obliquely on the slab are calculated as functions of the incident angle θ and the strength of randomness in a numerically precise manner, using the invariant imbedding method. It is found that waves incident perpendicularly on the slab are delocalized, while those incident obliquely are localized. As the incident angle increases from zero, the localization length decreases monotonically from infinity to some finite value. The localization length is found to depend on the incident angle as θ^(-4), and a simple analytical formula, which works quite well for weak disorder and small incident angles, is derived. The localization length does not depend on the wave polarization, but the disorder-averaged transmittance generally does.
César, Cibele C; Carvalho, Marilia S
2011-06-26
Longitudinal studies often employ complex sample designs to optimize sample size, over-representing population groups of interest. The effect of sample design on parameter estimates is quite often ignored, particularly when fitting survival models. Another major problem in long-term cohort studies is the potential bias due to loss to follow-up. In this paper we simulated a dataset with approximately 50,000 individuals as the target population and 15,000 participants to be followed up for 40 years, both based on real cohort studies of cardiovascular diseases. Two sampling strategies (simple random, our gold standard, and stratified by professional group with non-proportional allocation) and two loss-to-follow-up scenarios (non-informative censoring and losses related to the professional group) were analyzed. Two modeling approaches were evaluated: weighted and non-weighted fit. Our results indicate that under the correctly specified model, ignoring the sample weights does not affect the results. However, the model ignoring the interaction of sample strata with the variable of interest, and the crude estimates, were highly biased. In epidemiological studies misspecification should always be considered, as different sources of variability, related to the individuals and not captured by the covariates, are always present. Therefore, allowance must be made for the possibility of unknown confounders and interactions with the main variable of interest in our data. It is strongly recommended always to correct by sample weights.
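Why sampling weights matter under non-proportional stratified allocation can be shown with a toy calculation. The population below is synthetic and unrelated to the paper's cohort: one small stratum is deliberately over-sampled, so the naive mean is pulled toward it while the weighted (Horvitz-Thompson-style) mean is not.

```python
# Sketch: naive vs weighted estimation under non-proportional allocation.
import random

rng = random.Random(11)
stratum_a = [rng.gauss(10, 1) for _ in range(9000)]   # 90% of population
stratum_b = [rng.gauss(20, 1) for _ in range(1000)]   # 10%, over-sampled

sample_a = rng.sample(stratum_a, 100)   # design weight = 9000 / 100 = 90
sample_b = rng.sample(stratum_b, 100)   # design weight = 1000 / 100 = 10

naive = (sum(sample_a) + sum(sample_b)) / 200
weighted = (90 * sum(sample_a) + 10 * sum(sample_b)) / (90 * 100 + 10 * 100)
true_mean = (sum(stratum_a) + sum(stratum_b)) / 10000
print(round(naive, 1), round(weighted, 1), round(true_mean, 1))
```

The naive estimate sits near the midpoint of the two strata, while the weighted estimate recovers the population mean; the same logic carries over to weighted likelihoods in survival models.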
Directory of Open Access Journals (Sweden)
Juan José Leani
2015-01-01
Full Text Available X-ray resonant Raman scattering is applied at grazing incidence conditions with the aim of discriminating and identifying chemical environment of iron in different layers of stratified materials using a low resolution energy dispersive system. The methodology allows for depth studies with nanometric resolution. Nanostratified samples of Fe oxides were studied at the Brazilian synchrotron facility (LNLS using monochromatic radiation and an EDS setup. The measurements were carried out in grazing incident regime with incident photon energy lower than and close to the Fe-K absorption edge. The result allowed for characterizing oxide nanolayers, not observable with conventional geometries, identifying the oxidation state present in a particular depth of a sample surface with nanometric, or even subnanometric, resolution using a low-resolution system.
Directory of Open Access Journals (Sweden)
Paula Costa Mosca Macedo
2009-06-01
OBJECTIVE: To evaluate the quality of life of medical residents during the first three years of training and identify its association with sociodemographic-occupational characteristics, leisure time and health habits. METHOD: A cross-sectional study with a random sample of 128 residents stratified by year of training was conducted. The Medical Outcome Study Short Form 36 was administered. Mann-Whitney tests were carried out to compare percentile distributions of the eight quality of life domains according to sociodemographic variables, and a multiple linear regression analysis was performed, followed by a validity check of the resulting models. RESULTS: The physical component presented higher quality of life medians than the mental component. Comparisons between the three years showed that in almost all domains the quality of life scores of the second-year residents were higher than those of the first-year residents (p < 0.01); for the mental component, scores were higher in the third year than in the other years (p < 0.01). Predictors of higher quality of life were: being in the second or
A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology
Directory of Open Access Journals (Sweden)
Slutsker Laurence
2008-02-01
Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of a child aged under 10 years. Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than
Employment status, inflation and suicidal behaviour: an analysis of a stratified sample in Italy.
Solano, Paola; Pizzorno, Enrico; Gallina, Anna M; Mattei, Chiara; Gabrielli, Filippo; Kayman, Joshua
2012-09-01
There is abundant empirical evidence of a surplus risk of suicide among the unemployed, although few studies have investigated the influence of economic downturns on suicidal behaviours in an employment status-stratified sample. We investigated how economic inflation affected suicidal behaviours according to employment status in Italy from 2001 to 2008. Data concerning economically active people were provided by the Italian Institute for Statistical Analysis and by the International Monetary Fund. The association between inflation and completed versus attempted suicide with respect to employment status was investigated in every year and quarter-year of the study time frame. We considered three occupational categories: the employed, the unemployed who were previously employed (PE), and the unemployed who had never worked (NE). The unemployed are at higher suicide risk than the employed. Among the PE, a significant association between inflation and suicide attempt was found, whereas no association was reported concerning completed suicides. No association was found between inflation and completed or attempted suicides among the employed and the NE. Completed suicide in females is significantly associated with unemployment in every quarter-year. The reported vulnerability to suicidal behaviours among the PE as inflation rises underlines the need for effective support strategies for both genders in times of economic downturns.
Robert B. Thomas; Jack Lewis
1993-01-01
Time-stratified sampling of sediment for estimating suspended load is introduced and compared to selection at list time (SALT) sampling. Both methods provide unbiased estimates of load and variance. The magnitude of the variance of the two methods is compared using five storm populations of suspended sediment flux derived from turbidity data. Under like conditions,...
Papamichail, Dimitris; Petraki, Ioanna; Arkoudis, Chrisoula; Terzidis, Agis; Smyrnakis, Emmanouil; Benos, Alexis; Panagiotopoulos, Takis
2017-04-01
Research on Roma health is fragmentary as major methodological obstacles often exist. Reliable estimates on vaccination coverage of Roma children at a national level and identification of risk factors for low coverage could play an instrumental role in developing evidence-based policies to promote vaccination in this marginalized population group. We carried out a national vaccination coverage survey of Roma children. Thirty Roma settlements, stratified by geographical region and settlement type, were included; 7-10 children aged 24-77 months were selected from each settlement using systematic sampling. Information on children's vaccination coverage was collected from multiple sources. In the analysis we applied weights for each stratum, identified through a consensus process. A total of 251 Roma children participated in the study. A vaccination document was presented for the large majority (86%). We found very low vaccination coverage for all vaccines. In 35-39% of children 'minimum vaccination' (DTP3 and IPV2 and MMR1) was administered, while 34-38% had received HepB3 and 31-35% Hib3; no child was vaccinated against tuberculosis in the first year of life. Better living conditions and primary care services close to Roma settlements were associated with higher vaccination indices. Our study showed inadequate vaccination coverage of Roma children in Greece, much lower than that of the non-minority child population. This serious public health challenge should be systematically addressed, or, amid continuing economic recession, the gap may widen. Valid national estimates on important characteristics of the Roma population can contribute to planning inclusion policies.
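The stratum-weighted analysis described in the abstract above can be sketched in a few lines of Python; the counts and weights below are invented for illustration and are not the survey's data:

```python
import numpy as np

# Hypothetical per-stratum data: number of sampled children, number
# vaccinated, and the stratum weight (the share of the target child
# population each stratum represents; weights must sum to 1).
sampled    = np.array([80, 90, 81])
vaccinated = np.array([30, 28, 35])
weights    = np.array([0.5, 0.3, 0.2])

# Weighted coverage estimate: per-stratum rates combined by weight.
coverage = (weights * vaccinated / sampled).sum()
print(round(coverage, 3))
```

Unweighted pooling of the raw counts would over-represent strata that were sampled more heavily, which is why the weights enter per stratum rather than at the end.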
Directory of Open Access Journals (Sweden)
Ergün Eraslan
2009-01-01
There has been great interest in the use of variance reduction techniques (VRTs) in simulation output analysis for the purpose of improving accuracy when the performance measurements of complex production and service systems are estimated. Therefore, a simulation output analysis that improves the accuracy and reliability of the output is required. The performance measurements are required to have a narrow and strong confidence interval. For a given confidence level, a smaller confidence interval is better than a larger one. The width of the confidence interval, determined by the half length, depends on the variance. Generally, increasing the number of replications of the simulation model has been the easiest way to reduce variance, but this increases simulation costs in complex-structured and large-sized manufacturing and service systems. Thus, VRTs are used in experiments to avoid the computational cost of decision-making processes and obtain more precise results. In this study, the effect of Control Variates (CVs) and Stratified Sampling (SS) techniques in reducing the variance of the performance measurements of M/M/1 and GI/G/1 queue models is investigated, considering four probability distributions with randomly generated parameters for arrival and service processes.
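As a toy illustration of the stratified sampling (SS) technique discussed above, the sketch below compares a crude Monte Carlo estimate with a stratified one; the exponential "waiting time" transform is a stand-in for a simulation response, not the paper's queue models:

```python
import numpy as np

rng = np.random.default_rng(42)

def waiting_time_sample(u):
    # Stand-in simulation response: exponential draws (rate 1.0)
    # generated from uniforms via the inverse CDF.
    return -np.log(1.0 - u)

n, strata = 10_000, 50

# Crude Monte Carlo: i.i.d. uniforms over [0, 1).
crude = waiting_time_sample(rng.random(n))

# Stratified sampling: n/strata uniforms drawn inside each of
# `strata` equal-width sub-intervals of [0, 1).
per = n // strata
edges = np.arange(strata) / strata
stratified = waiting_time_sample(
    (edges[:, None] + rng.random((strata, per)) / strata).ravel())

# Both estimate the same mean (1.0); the stratified estimator has a
# smaller standard error because the input space is covered evenly.
print(crude.mean(), stratified.mean())
```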
Hussey, Michael A; Koch, Gary G; Preisser, John S; Saville, Benjamin R
2016-01-01
Time-to-event or dichotomous outcomes in randomized clinical trials are often analyzed using the Cox proportional hazards model or conditional logistic regression, respectively, to obtain covariate-adjusted log hazard (or odds) ratios. Nonparametric Randomization-Based Analysis of Covariance (NPANCOVA) can be applied to unadjusted log hazard (or odds) ratios estimated from a model containing treatment as the only explanatory variable. The resulting adjusted estimates are stratified population-averaged treatment effects; they require only a valid randomization to the two treatment groups and avoid key modeling assumptions (e.g., proportional hazards in the case of a Cox model) for the adjustment variables. The methodology has application in the regulatory environment, where such assumptions cannot be verified a priori. Application of the methodology is illustrated through three examples on real data from two randomized trials.
A random spatial sampling method in a rural developing nation.
Kondo, Michelle C; Bream, Kent D W; Barg, Frances K; Branas, Charles C
2014-04-10
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available.
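A minimal sketch of the stratified random spatial sampling idea described above, assuming rectangular stratum boundaries; a real application would take boundaries from GIS layers and load the selected coordinates into GPS units:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical bounding boxes (lon_min, lon_max, lat_min, lat_max, in
# degrees) for three strata of a study region; names are invented.
strata = {
    "village_a": (-91.55, -91.50, 15.10, 15.15),
    "village_b": (-91.50, -91.45, 15.10, 15.15),
    "outlying":  (-91.55, -91.45, 15.15, 15.20),
}

def sample_points(bbox, k):
    # Uniform random points inside one stratum's bounding box.
    lon0, lon1, lat0, lat1 = bbox
    return np.column_stack([rng.uniform(lon0, lon1, k),
                            rng.uniform(lat0, lat1, k)])

# Equal allocation: the same number of survey points per stratum.
points = {name: sample_points(box, 10) for name, box in strata.items()}
for name, pts in points.items():
    print(name, pts.shape)
```

Interviewers would then visit the household nearest each generated point, which is where the household-selection biases discussed in the abstract enter.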
Content analysis of a stratified random selection of JVME articles: 1974-2004.
Olson, Lynne E
2011-01-01
A content analysis was performed on a random sample (N = 168) of 25% of the articles published in the Journal of Veterinary Medical Education (JVME) per year from 1974 through 2004. Over time, there were increased numbers of authors per paper, more cross-institutional collaborations, greater prevalence of references or endnotes, and lengthier articles, which could indicate a trend toward publications describing more complex or complete work. The number of first authors that could be identified as female was greatest for the most recent time period studied (2000-2004). Two different categorization schemes were created to assess the content of the publications. The first categorization scheme identified the most frequently published topics as admissions, descriptions of courses, the effect of changing teaching methods, issues facing the profession, and examples of uses of technology. The second categorization scheme identified the subset of articles that described medical education research on the basis of the purpose of the research, which represented only 14% of the sample articles (24 of 168). Of that group, only three of 24, or 12%, represented studies based on a firm conceptual framework that could be confirmed or refuted by the study's results. The results indicate that JVME is meeting its broadly based mission and that publications in the veterinary medical education literature have features common to publications in medicine and medical education.
k-Means: Random Sampling Procedure
Indian Academy of Sciences (India)
Optimal 1-Mean is approximation of centroid (Inaba et al.): let S be a random sample of size O(1/ε); then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
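The sampling result quoted above can be checked numerically; the point set and sample size here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Point set P and its exact centroid (the optimal 1-mean).
P = rng.normal(size=(100_000, 2))
centroid = P.mean(axis=0)

def cost(c):
    # 1-mean cost: sum of squared distances from all points to c.
    return ((P - c) ** 2).sum()

# Centroid of a small random sample S; per the Inaba et al. result,
# a sample of size O(1/eps) already gives a (1+eps)-approximation
# with constant probability.
S = P[rng.choice(len(P), size=50, replace=False)]
approx = S.mean(axis=0)

ratio = cost(approx) / cost(centroid)
print(ratio)  # ratio >= 1; typically within a few percent of 1
```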
Rostami, Maryam; Ramezani Tehrani, Fahimeh; Simbar, Masoumeh; Hosseinpanah, Farhad; Alavi Majd, Hamid
2017-04-07
Although there have been marked improvements in our understanding of vitamin D functions in different diseases, gaps in its role during pregnancy remain. Due to the lack of consensus on the most accurate marker of vitamin D deficiency during pregnancy and the optimal level of 25-hydroxyvitamin D, 25(OH)D, for its definition, vitamin D deficiency assessment during pregnancy is a complicated process. Besides, the optimal protocol for treatment of hypovitaminosis D and its effect on maternal and neonatal outcomes are still unclear. The aim of our study was to estimate the prevalence of vitamin D deficiency in the first trimester of pregnancy and to compare a vitamin D screening strategy with no screening. Also, we intended to compare the effectiveness of various treatment regimens on maternal and neonatal outcomes in the Masjed-Soleyman and Shushtar cities of Khuzestan province, Iran. This was a two-phase study. First, a population-based cross-sectional study was conducted, recruiting 1600 and 900 first-trimester pregnant women from health centers of Masjed-Soleyman and Shushtar, respectively, using stratified multistage cluster sampling with the probability proportional to size (PPS) method. Second, to assess the effect of the screening strategy on maternal and neonatal outcomes, Masjed-Soleyman participants were assigned to a screening program versus Shushtar participants, who became the nonscreening arm. Within the framework of the screening regimen, an 8-arm blind randomized clinical trial was undertaken to compare the effects of various treatment protocols. A total of 800 pregnant women with vitamin D deficiency were selected using simple random sampling from the 1600 individuals of Masjed-Soleyman as interventional groups. Serum concentrations of 25(OH)D were classified by severity as severe deficiency, moderate deficiency, or sufficiency (above 20 ng/ml). Those with severe and moderate deficiency were randomly divided into 4 subgroups, received vitamin D3 based on protocol, and were followed until delivery. Data was analyzed
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across American Samoa in 2015 as a part of...
Sengaloundeth, Sivong; Green, Michael D; Fernández, Facundo M; Manolin, Ot; Phommavong, Khamlieng; Insixiengmay, Vongsavanh; Hampton, Christina Y; Nyadong, Leonard; Mildenhall, Dallas C; Hostetler, Dana; Khounsaknalath, Lamphet; Vongsack, Latsamy; Phompida, Samlane; Vanisaveth, Viengxay; Syhakhang, Lamphone; Newton, Paul N
2009-07-28
Counterfeit oral artesunate has been a major public health problem in mainland SE Asia, impeding malaria control. A countrywide stratified random survey was performed to determine the availability and quality of oral artesunate in pharmacies and outlets (shops selling medicines) in the Lao PDR (Laos). In 2003, 'mystery' shoppers were asked to buy artesunate tablets from 180 outlets in 12 of the 18 Lao provinces. Outlets were selected using stratified random sampling by investigators not involved in sampling. Samples were analysed for packaging characteristics, by the Fast Red Dye test, high-performance liquid chromatography (HPLC), mass spectrometry (MS), X-ray diffractometry and pollen analysis. Of 180 outlets sampled, 25 (13.9%) sold oral artesunate. Outlets selling artesunate were more commonly found in the more malarious southern Laos. Of the 25 outlets, 22 (88%; 95%CI 68-97%) sold counterfeit artesunate, as defined by packaging and chemistry. No artesunate was detected in the counterfeits by any of the chemical analysis techniques and analysis of the packaging demonstrated seven different counterfeit types. There was complete agreement between the Fast Red dye test, HPLC and MS analysis. A wide variety of wrong active ingredients were found by MS. Of great concern, 4/27 (14.8%) fakes contained detectable amounts of artemisinin (0.26-115.7 mg/tablet). This random survey confirms results from previous convenience surveys that counterfeit artesunate is a severe public health problem. The presence of artemisinin in counterfeits may encourage malaria resistance to artemisinin derivatives. With increasing accessibility of artemisinin-derivative combination therapy (ACT) in Laos, the removal of artesunate monotherapy from pharmacies may be an effective intervention.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
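A minimal sketch of a post-stratified estimator of the mean, with invented stratum shares and means; it shows how known population shares replace the realised sample shares after the sample is drawn:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: two strata with known population shares
# (e.g. forest types) and different true means.
N_shares = np.array([0.7, 0.3])
mu = np.array([10.0, 30.0])          # true stratum means
true_mean = (N_shares * mu).sum()    # 16.0

# Simple random sample; stratum membership is only observed after
# sampling, so the allocation was NOT fixed in advance.
n = 2_000
in_s1 = rng.random(n) < N_shares[1]  # True -> stratum 1
y = np.where(in_s1, rng.normal(mu[1], 5, n), rng.normal(mu[0], 5, n))

srs_estimate = y.mean()

# Post-stratified estimator: weight the observed stratum means by the
# known population shares instead of the realised sample shares.
post = N_shares[0] * y[~in_s1].mean() + N_shares[1] * y[in_s1].mean()
print(true_mean, srs_estimate, post)
```

With small within-stratum sample sizes the stratum means above become unstable, which is exactly the minimum-sample-size concern the abstract raises.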
K-Median: Random Sampling Procedure
Indian Academy of Sciences (India)
Sample a set of 1/ε + 1 points from P. Let Q = first 1/ε points, p = last point. Let T = avg. 1-Median cost of P, c = 1-Median. Let B1 = B(c, T/2), B2 = B(p, T). Let P' = points in B1.
Leonardo, Lydia; Rivera, Pilarita; Saniel, Ofelia; Villacorte, Elena; Lebanan, May Antonnette; Crisostomo, Bobby; Hernandez, Leda; Baquilod, Mario; Erce, Edgardo; Martinez, Ruth; Velayudhan, Raman
2012-01-01
For the first time in the country, a national baseline prevalence survey using a well-defined sampling design, a stratified two-step systematic cluster sampling, was conducted from 2005 to 2008. The purpose of the survey was to stratify the provinces according to schistosomiasis prevalence (high, moderate, or low), which in turn would be used as the basis for the intervention program to be implemented. The national survey was divided into four phases. Results of the first two phases, conducted in Mindanao and the Visayas, were published in 2008. Data from the last two phases showed three provinces with prevalence rates higher than the endemic provinces surveyed in the first two phases, thus changing the overall ranking of endemic provinces at the national level. Age and sex distribution of schistosomiasis remained the same in Luzon and Maguindanao. Soil-transmitted and food-borne helminths were also recorded in these surveys. This paper deals with the results of the last two phases, done in Luzon and Maguindanao, and integrates all four phases in the discussion.
Directory of Open Access Journals (Sweden)
Khewal Bhupendra Kesur
2013-01-01
This paper examines the application of Latin Hypercube Sampling (LHS) and Antithetic Variables (AVs) to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AVs allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined: one where stratification is applied to headways and routing decisions of individual vehicles, and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AVs, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators, as fewer simulations are required to achieve a fixed level of precision, with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
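The variance reduction from LHS can be demonstrated on a toy one-dimensional "simulator" (a smooth function of a uniform input, standing in for a traffic model):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(u):
    # Stand-in for an expensive simulator: a smooth function of a
    # uniform input (e.g. a headway draw).
    return np.exp(u)

n, reps = 100, 500
iid_means, lhs_means = [], []
for _ in range(reps):
    # Independent (crude Monte Carlo) sampling.
    iid_means.append(simulate(rng.random(n)).mean())
    # Latin Hypercube Sampling in 1-D: exactly one draw from each of
    # n equal-probability strata, in random order.
    u = (rng.permutation(n) + rng.random(n)) / n
    lhs_means.append(simulate(u).mean())

# The spread of the LHS estimates across replications is far smaller.
print(np.std(iid_means), np.std(lhs_means))
```

In higher dimensions each input margin is stratified this way and the columns are permuted independently, which is how the headway and routing inputs of a traffic simulator would be covered.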
Directory of Open Access Journals (Sweden)
A. Martín Andrés
2015-01-01
The Mantel-Haenszel test is the most frequently used asymptotic test for analyzing stratified 2 × 2 tables. Its exact alternative is the test of Birch, which has recently been reconsidered by Jung. Both tests have a conditional origin: Pearson's chi-squared test and Fisher's exact test, respectively. But both tests share the same drawback: the result of the global test (the stratified test) may not be compatible with the results of the individual tests (the test for each stratum). In this paper, we propose to carry out the global test using a multiple comparisons method (MC method) which does not have this disadvantage. By refining the method (the MCB method), an alternative to the Mantel-Haenszel and Birch tests may be obtained. The new MC and MCB methods have the advantage that they may be applied from an unconditional view, a methodology which until now has not been applied to this problem. We also propose some sample size calculation methods.
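For reference, the Mantel-Haenszel chi-square statistic for stratified 2 × 2 tables can be computed directly from its textbook form; the two tables below are invented for illustration:

```python
import numpy as np

# Two hypothetical strata (2x2 tables): rows = exposed/unexposed,
# columns = event/no event.
strata = [np.array([[12, 38], [5, 45]]),
          np.array([[20, 30], [10, 40]])]

obs = exp_a = var = 0.0
for t in strata:
    n = t.sum()
    r1, r2 = t[0].sum(), t[1].sum()
    c1, c2 = t[:, 0].sum(), t[:, 1].sum()
    obs += t[0, 0]                      # observed a-cell count
    exp_a += r1 * c1 / n                # expected a-cell under H0
    var += r1 * r2 * c1 * c2 / (n ** 2 * (n - 1))

# Mantel-Haenszel chi-square (no continuity correction), referred to
# a chi-square distribution with 1 degree of freedom.
chi2_mh = (obs - exp_a) ** 2 / var
print(round(chi2_mh, 3))
```

The incompatibility the abstract mentions arises because this single pooled statistic can be significant while no per-stratum test is, or vice versa.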
Predictors of suicidal ideation in a gender-stratified sample of OEF/OIF veterans.
Gradus, Jaimie L; Street, Amy E; Suvak, Michael K; Resick, Patricia A
2013-10-01
There is a growing concern about suicide among Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) veterans. We examined the role of postdeployment mental health in associations between deployment stressors and postdeployment suicidal ideation (SI) in a national sample of 2,321 female and male OEF/OIF veterans. Data were obtained via survey, and path analysis was used. For women and men, mental health symptoms largely accounted for associations between deployment stressors and SI; however, they only partly accounted for the sexual harassment and SI association among women. These findings enhance the understanding of the mental health profile of OEF/OIF veterans. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
Acceptance sampling using judgmental and randomly selected samples
Energy Technology Data Exchange (ETDEWEB)
Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl
2010-09-01
We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
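A much-simplified, single-group version of this Bayesian acceptance-sampling question (all observed items acceptable, uniform prior on the per-item defect rate) can be sketched as follows; the sample size of 59 and the 99% threshold are arbitrary illustrations, not the paper's two-group model:

```python
import numpy as np

rng = np.random.default_rng(5)

# After observing n acceptable items (0 defects) from a population,
# what is the probability that at least 99% of all items are
# acceptable? With a uniform Beta(1, 1) prior on the defect rate p,
# the posterior given 0 defects in n draws is Beta(1, n + 1).
n_sampled = 59
p_post = rng.beta(1, n_sampled + 1, size=200_000)

prob_compliant = (p_post < 0.01).mean()
print(prob_compliant)  # roughly 0.45 for n = 59
```

The paper's model extends this by splitting the population into judgmentally sampled high-risk items and randomly sampled low-risk items, with a prior linking the two groups' defect rates.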
Ventura, Sandra Rua; Ramos, Isabel; Rodrigues, Pedro Pereira
2015-01-01
Background Mammography is considered the best imaging technique for breast cancer screening, and the radiographer plays an important role in its performance. Therefore, continuing education is critical to improving the performance of these professionals and thus providing better health care services. Objective Our goal was to develop an e-learning course on breast imaging for radiographers, assessing its efficacy, effectiveness, and user satisfaction. Methods A stratified randomized controlled trial was performed with radiographers and radiology students who already had mammography training, using pre- and post-knowledge tests, and satisfaction questionnaires. The primary outcome was the improvement in test results (percentage of correct answers), using intention-to-treat and per-protocol analysis. Results A total of 54 participants were assigned to the intervention (20 students plus 34 radiographers) with 53 controls (19+34). The intervention was completed by 40 participants (11+29), with 4 (2+2) discontinued interventions, and 10 (7+3) lost to follow-up. Differences in the primary outcome were found between intervention and control: 21 versus 4 percentage points (pp). Conclusions The e-learning course is effective, especially for radiographers, which highlights the need for continuing education. PMID:25560547
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, whi...
Jayashree, R.; A Malini; A Rakhshani; Nagendra, H R; S Gunasheela; Nagarathna, R
2013-01-01
Background: Yoga improves maternal and fetal outcomes in pregnancy. Platelet Count and Uric acid (Ua) are valuable screening measures in high-risk pregnancy. Aim: To examine the effect of yoga on platelet counts and serum Ua in high-risk pregnancy. Materials and Methods: This stratified randomized controlled trial, conducted by S-VYASA University at St. John's Medical College Hospital and Gunasheela Maternity Hospital, recruited 68 women with high-risk pregnancy (30 yoga and 38 control...
Christensen, Darren R; Dowling, Nicki A; Jackson, Alun C; Thomas, Shane A
2015-12-01
Demographic characteristics associated with gambling participation and problem gambling severity were investigated in a stratified random survey in Tasmania, Australia. Computer-assisted telephone interviews were conducted in March 2011, resulting in a representative sample of 4,303 Tasmanian residents aged 18 years or older. Overall, 64.8% of Tasmanian adults reported participating in some form of gambling in the previous 12 months. The most common forms of gambling were lotteries (46.5%), keno (24.3%), instant scratch tickets (24.3%), and electronic gaming machines (20.5%). Gambling severity rates were estimated at non-gambling (34.8%), non-problem gambling (57.4%), low risk gambling (5.3%), moderate risk (1.8%), and problem gambling (0.7%). Compared to Tasmanian gamblers as a whole, significantly higher annual participation rates were reported by couples with no children, those in full-time paid employment, and people who did not complete secondary school. Compared to Tasmanian gamblers as a whole, significantly higher gambling frequencies were reported by males, people aged 65 or older, and people who were on pensions or were unable to work. Compared to Tasmanian gamblers as a whole, significantly higher gambling expenditure was reported by males. The highest average expenditure was for horse and greyhound racing ($AUD 1,556), double that of the next highest gambling activity, electronic gaming machines ($AUD 767). Compared to Tasmanian gamblers as a whole, problem gamblers were significantly younger, in paid employment, reported lower incomes, and were born in Australia. Although gambling participation rates appear to be falling, problem gambling severity rates remain stable. These changes appear to reflect a maturing gambling market and the need for population-specific harm minimisation strategies.
Carter, J.; Merino, J.H.; Merino, S.L.
2009-01-01
Estimates of submerged aquatic vegetation (SAV) along the U.S. Gulf of Mexico (Gulf) generally focus on seagrasses. In 2000, we attempted a synoptic survey of SAV in the mesohaline (5-20 ppt) zone of estuarine and nearshore areas of the northeastern Gulf. Areas with SAV were identified from existing 1992 aerial photography, and a literature review was used to select those areas that were likely to experience mesohaline conditions during the growing season. In 2000, a drought year, we visited 217 randomly selected SAV beds and collected data on species composition and environmental conditions. In general, sites were either clearly polyhaline (≥20 ppt) or oligohaline (≤5 ppt), with only five sites measuring between 5 and 20 ppt. Ruppia maritima L. (13-35 ppt, n = 28) was the only species that occurred in mesohaline salinities. Halodule wrightii Asch. occurred in 73% of the beds. The nonindigenous Myriophyllum spicatum L. was present in four locations with salinities below 3 ppt. No nonindigenous macroalgae were identified, and no nonindigenous angiosperms occurred in salinities above 3 ppt. Selecting sample locations based on historical salinity data was not a successful strategy for surveying SAV in mesohaline systems, particularly during a drought year. Our ability to locate SAV beds within 50 m of their aerially located position 8 yr later demonstrates some SAV stability in the highly variable conditions of the study area. © 2009 by the Marine Environmental Sciences Consortium of Alabama.
Occupational position and its relation to mental distress in a random sample of Danish residents
DEFF Research Database (Denmark)
Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D
2010-01-01
PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self-employed ... somatization symptoms (OR = 6.28, 95% CI = 1.39-28.46). CONCLUSIONS: Unskilled manual workers, the unemployed, and, to a lesser extent, the low-grade self-employed showed an increased level of mental distress. Activities to promote mental health in the Danish population should be directed toward these groups.
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Hawaiian archipelago in 2013 as a...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across American Samoa in 2015 as a part of...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Pacific Remote Island Areas since...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Hawaiian archipelago in 2016 as a...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the Mariana archipelago in 2014 as a...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across Wake...
National Oceanic and Atmospheric Administration, Department of Commerce — The data described here are benthic habitat imagery that result from benthic photo-quadrat surveys conducted along transects at stratified random sites across the...
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, Clara M.; Buchhave, Preben; K. George, William
2014-01-01
[...] of alternative methods attempting to produce correct power spectra have been invented and tested. The objective of the current study is to create a simple computer-generated signal for baseline testing of residence time weighting and some of the most commonly proposed algorithms (or the algorithms which most modern algorithms ultimately are based on), sample-and-hold and the direct spectral estimator without residence time weighting, and to compare how they perform in relation to power spectra based on the equidistantly sampled reference signal. The computer-generated signal is a Poisson process with a sample rate [...] with high data rate and low inherent bias, respectively, while residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from [...]
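The residence-time-weighting idea in the abstract above can be illustrated with a minimal Python sketch. This is a simulation on synthetic data, not the paper's spectral estimator: it applies residence-time weights to the mean rather than the power spectrum, and the velocity distribution, acceptance model, and all names are illustrative assumptions.

```python
import random

random.seed(42)
TRUE_MEAN, SD, V_MAX = 10.0, 2.0, 20.0

# Velocity-biased arrivals: in burst-mode LDA, faster fluid sweeps more
# particles through the measuring volume, so acceptance scales with v.
samples = []
while len(samples) < 100_000:
    v = random.gauss(TRUE_MEAN, SD)
    if 0.0 < v < V_MAX and random.random() < v / V_MAX:
        samples.append(v)

# The naive arithmetic mean is biased high by the arrival process.
arith_mean = sum(samples) / len(samples)

# Residence time is inversely proportional to speed; weighting each
# sample by its residence time cancels the arrival-rate bias.
weights = [1.0 / v for v in samples]
rtw_mean = sum(v * w for v, w in zip(samples, weights)) / sum(weights)
```

With these settings the arithmetic mean comes out biased toward high velocities, while the residence-time-weighted mean recovers the true mean; the same weighting principle underlies the unbiased spectral estimates discussed above.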
Random constraint sampling and duality for convex optimization
Haskell, William B.; Pengqian, Yu
2016-01-01
We are interested in solving convex optimization problems with large numbers of constraints. Randomized algorithms, such as random constraint sampling, have been very successful in giving nearly optimal solutions to such problems. In this paper, we combine random constraint sampling with the classical primal-dual algorithm for convex optimization problems with large numbers of constraints, and we give a convergence rate analysis. We then report numerical experiments that verify the effectiveness [...]
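The core intuition behind random constraint sampling can be sketched on a toy one-dimensional problem (this is a scenario-style illustration under assumed numbers, not the paper's primal-dual algorithm): solving with only a random subset of constraints yields a solution that violates few of the ignored ones.

```python
import random

random.seed(0)
N = 10_000
b = [random.random() for _ in range(N)]      # constraints: x >= b_i

# Full problem: minimize x subject to all N constraints -> x* = max(b).
x_star = max(b)

# Random constraint sampling: solve using only m sampled constraints.
m = 200
x_hat = max(random.sample(b, m))             # exact solution of subproblem

# The sampled solution violates only a small fraction of the ignored
# constraints (on the order of 1/(m + 1) on average).
violated = sum(1 for bi in b if bi > x_hat) / N
```

The sampled solution is cheaper to compute and nearly feasible; the paper's contribution is to couple this sampling with a primal-dual iteration and quantify the convergence rate.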
Random number datasets generated from statistical analysis of randomly sampled GSM recharge cards.
Okagbue, Hilary I; Opanuga, Abiodun A; Oguntunde, Pelumi E; Ugwoke, Paulinus O
2017-02-01
In this article, random number datasets were generated from random samples of used GSM (Global System for Mobile Communications) recharge cards. Statistical analyses were performed to refine the raw data into random number datasets arranged in tables. A detailed description of the method and relevant tests of randomness are also discussed.
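A test of randomness of the kind mentioned above can be sketched as a chi-square check of digit uniformity. The recharge-card data itself is not available here, so seeded pseudo-random digits stand in for it; the function name and counts are illustrative assumptions.

```python
import random

def chi_square_uniform(digits):
    """Chi-square statistic for uniformity of decimal digits 0-9."""
    n = len(digits)
    expected = n / 10
    counts = [digits.count(d) for d in range(10)]
    return sum((obs - expected) ** 2 / expected for obs in counts)

random.seed(1)
# Simulated stand-in for digits harvested from used recharge-card PINs.
card_digits = [random.randint(0, 9) for _ in range(1000)]
biased_digits = [0] * 1000          # an obviously non-random sequence

chi2_cards = chi_square_uniform(card_digits)
chi2_biased = chi_square_uniform(biased_digits)
# With df = 9 the 5% critical value is 16.92; values far above it
# reject the hypothesis that the digits are uniformly distributed.
```

The degenerate all-zeros sequence produces a huge statistic, while the uniform digits stay near the expected chi-square range, which is the basic pass/fail logic of such randomness tests.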
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, C. M.; Buchhave, P.; K. George, W.
[...] sine waves. The primary signal and the corresponding power spectrum are shown in Figure 1. The conventional spectrum shows multiple erroneous mixing frequencies and the peak values are too low. The residence time weighted spectrum is correct. The sample-and-hold spectrum has lower power than the correct spectrum, and the f^-2 filtering effect appearing for low data densities is evident (Adrian and Yao 1987). The remaining tests also show that sample-and-hold and the free-running processor perform well only under very particular circumstances, with high data rate and low inherent bias, respectively. Residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from measurements on the jet centerline and the outer part of the jet [...]
Biro, Peter A
2013-02-01
Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), affecting our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
Effect of Sampling Depth on Air-Sea CO2 Flux Estimates in River-Stratified Arctic Coastal Waters
Miller, L. A.; Papakyriakou, T. N.
2015-12-01
In summer-time Arctic coastal waters that are strongly influenced by river run-off, extreme stratification severely limits wind mixing, making it difficult to effectively sample the surface 'mixed layer', which can be as shallow as 1 m, from a ship. During two expeditions in southwestern Hudson Bay, off the Nelson, Hayes, and Churchill River estuaries, we confirmed that sampling depth has a strong impact on estimates of 'surface' pCO2 and calculated air-sea CO2 fluxes. We determined pCO2 in samples collected from 5 m, using a typical underway system on the ship's seawater supply; from the 'surface' rosette bottle, which was generally between 1 and 3 m; and using a Niskin bottle deployed at 1 m and just below the surface from a small boat away from the ship. Our samples confirmed that the error in pCO2 derived from typical shipboard versus small-boat sampling at a single station could be nearly 90 μatm, leading to errors in the calculated air-sea CO2 flux of more than 0.1 mmol/(m2s). Attempting to extrapolate such fluxes over the 6,000,000 km2 area of the Arctic shelves would generate an error approaching a gigamol CO2/s. Averaging the station data over a cruise still resulted in an error of nearly 50% in the total flux estimate. Our results have implications not only for the design and execution of expedition-based sampling, but also for placement of in situ sensors. Particularly in polar waters, sensors are usually deployed on moorings, well below the surface, to avoid damage and destruction from drifting ice. However, to obtain accurate information on air-sea fluxes in these areas, it is necessary to deploy sensors on ice-capable buoys that can position the sensors in true 'surface' waters.
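The extrapolation quoted in the abstract above can be checked with a line of arithmetic, using only the numbers the abstract itself gives (flux error above 0.1 mmol/(m² s), shelf area of 6,000,000 km²):

```python
# Back-of-envelope check of the scaling quoted in the abstract: a local
# flux error of 0.1 mmol/(m^2 s), extrapolated over ~6,000,000 km^2 of
# Arctic shelves, approaches a gigamol of CO2 per second.
flux_error = 0.1e-3              # mol m^-2 s^-1  (0.1 mmol/(m^2 s))
shelf_area = 6.0e6 * 1.0e6       # 6,000,000 km^2 converted to m^2
total_error = flux_error * shelf_area   # mol CO2 per second
# total_error is about 6e8 mol/s, i.e. 0.6 gigamol/s.
```

This confirms the internal consistency of the abstract's "approaching a gigamol CO2/s" claim.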
SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS
Directory of Open Access Journals (Sweden)
Sampath Sundaram; Ammani Sivaraman
2010-09-01
In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic sampling (LSS) with two random starts, using appropriate superpopulation models with the help of the R statistical computing environment.
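Linear systematic sampling with multiple random starts, the building block extended in the paper above, can be sketched as follows. The function name and the divisibility assumptions (m divides n, and n/m divides N) are illustrative simplifications, not taken from the paper.

```python
import random

def lss_multiple_starts(N, n, m, seed=None):
    """Linear systematic sampling with m random starts (Gautschi-style):
    each start yields n // m units at interval k = N // (n // m).
    Assumes m divides n and n // m divides N."""
    rng = random.Random(seed)
    per_start = n // m
    k = N // per_start                   # sampling interval
    starts = rng.sample(range(k), m)     # m distinct random starts
    return sorted(r + j * k for r in starts for j in range(per_start))

# Population of 100 units, sample of 10 drawn with 2 random starts.
units = lss_multiple_starts(N=100, n=10, m=2, seed=7)
```

Each random start contributes an equally spaced systematic subsample, and the spread between the per-start means is what allows design-based variance estimation, which single-start systematic sampling cannot provide.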
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, respectively. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. First, networks with small scales and simple structures are conducive to sampling. Second, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Third, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
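The no-retracing (NR) strategy described above can be sketched in a few lines: the walker never steps straight back to the node it just came from unless it has no other neighbour. The toy network, function name, and parameters are illustrative assumptions.

```python
import random

def nr_random_walk(adj, steps, seed=None):
    """No-retracing (NR) random walk: avoid stepping straight back to
    the previous node unless it is the only neighbour."""
    rng = random.Random(seed)
    current, prev = rng.choice(list(adj)), None
    visited = [current]
    for _ in range(steps):
        # Exclude the previous node from the choices when possible.
        choices = [v for v in adj[current] if v != prev] or adj[current]
        prev, current = current, rng.choice(choices)
        visited.append(current)
    return visited

# Toy network: a 10-node ring with one extra chord between 0 and 5.
adj = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
adj[0].append(5); adj[5].append(0)

path = nr_random_walk(adj, steps=50, seed=3)
```

By suppressing immediate backtracking, the walk covers more distinct nodes per step than a classical random walk, which is the path-overlap reduction the NR strategy targets.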
Valverde Arias, Omar; Valencia, José; Saa Requejo, António; Garrido, Alberto
2017-04-01
Index-based insurance has become an efficient alternative for farmers to transfer risk to other parties. Currently, Ecuador has conventional agricultural insurance for the rice crop, but it is necessary to develop index-based insurance that could cover more farmers against extreme events with catastrophic consequences. Such index-based insurance could estimate crop losses from drought through the NDVI (Normalized Difference Vegetation Index). A first step was to establish homogeneous areas where rice is cultivated, based on principal component analysis of soil properties (Valverde et al., 2016). Two main areas (f7 and f15) were found, delineated mainly by slope, texture, and effective depth; these are the sites considered for sampling and studying the NDVI. MODIS images of 250 m x 250 m resolution covering the study area, Babahoyo canton (Los Rios province, Ecuador), were selected, and the NDVI was calculated at the rice growth stage in both sites over several years. The number of samples in each site was proportional to the area of cultivated rice. NDVI distributions were calculated in each homogeneous zone (f7 and f15) across years. Several statistical analyses were performed to investigate the differences between the two sites. Results are discussed in the context of index-based insurance.
Herrero, Pablo; Gómez-Trullén, Eva M; Asensio, Angel; García, Elena; Casas, Roberto; Monserrat, Esther; Pandyan, Anand
2012-12-01
To investigate whether hippotherapy (when applied by a simulator) improves postural control and balance in children with cerebral palsy. Stratified single-blind randomized controlled trial with an independent assessor. Stratification was made by gross motor function classification system levels, and allocation was concealed. Children between 4 and 18 years old with cerebral palsy. Participants were randomized to an intervention (simulator ON) or control (simulator OFF) group after informed consent was obtained. Treatment was provided once a week (15 minutes) for 10 weeks. The Gross Motor Function Measure (dimension B for balance and the total score) and the Sitting Assessment Scale were carried out at baseline (prior to randomization), at the end of the intervention and 12 weeks after completing the intervention. Thirty-eight children participated. The groups were balanced at baseline. Sitting balance (measured by dimension B of the Gross Motor Function Measure) improved significantly in the treatment group (effect size = 0.36; 95% CI 0.01-0.71) and the effect size was greater in the severely disabled group (effect size = 0.80; 95% CI 0.13-1.47). The improvements in sitting balance were not maintained over the follow-up period. Changes in the total score of the Gross Motor Function Measure and the Sitting Assessment Scale were not significant. Hippotherapy with a simulator can improve sitting balance in children with cerebral palsy who have higher levels of disability. However, this did not lead to a change in the overall function of these children (Gross Motor Function Classification System level V).
Spatial Random Sampling: A Structure-Preserving Data Sketching Tool
Rahmani, Mostafa; Atia, George K.
2017-09-01
Random column sampling is not guaranteed to yield data sketches that preserve the underlying structures of the data and may not sample sufficiently from less-populated data clusters. Also, adaptive sampling can often provide accurate low rank approximations, yet may fall short of producing descriptive data sketches, especially when the cluster centers are linearly dependent. Motivated by that, this paper introduces a novel randomized column sampling tool dubbed Spatial Random Sampling (SRS), in which data points are sampled based on their proximity to randomly sampled points on the unit sphere. The most compelling feature of SRS is that the corresponding probability of sampling from a given data cluster is proportional to the surface area the cluster occupies on the unit sphere, independently of the size of the cluster population. Although it is fully randomized, SRS is shown to provide descriptive and balanced data representations. The proposed idea addresses a pressing need in data science and holds the potential to inspire many novel approaches for analysis of big data.
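The SRS mechanism described above can be sketched in two dimensions, where the unit sphere reduces to the unit circle (the cluster geometry, function name, and parameters are illustrative assumptions, not from the paper):

```python
import math
import random

def srs_sample(points, n_dirs, rng):
    """Spatial Random Sampling sketch: project the data to the unit
    circle, draw random directions, and keep the point most aligned
    with each direction."""
    unit = [(x / math.hypot(x, y), y / math.hypot(x, y)) for x, y in points]
    picked = set()
    for _ in range(n_dirs):
        t = rng.uniform(0.0, 2.0 * math.pi)
        dx, dy = math.cos(t), math.sin(t)
        picked.add(max(range(len(unit)),
                       key=lambda i: unit[i][0] * dx + unit[i][1] * dy))
    return picked

rng = random.Random(11)
# A populous cluster near angle 0 and a tiny one near angle pi: SRS
# samples by angular footprint, so the tiny cluster is still hit.
big = [(1.0 + rng.gauss(0, 0.05), rng.gauss(0, 0.05)) for _ in range(100)]
small = [(-1.0 + rng.gauss(0, 0.05), rng.gauss(0, 0.05)) for _ in range(5)]
sample = srs_sample(big + small, n_dirs=30, rng=rng)
```

Because each random direction selects the most aligned point, the 5-point cluster is sampled roughly as often as the 100-point cluster, illustrating the population-independence property claimed above.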
Methods for sample size determination in cluster randomized trials.
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-06-01
The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
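The simplest approach named in the abstract above, inflating an individually randomized sample size by a design effect, can be sketched directly. The standard formula DEFF = 1 + (m - 1)·ICC for equal clusters of size m is assumed; the numeric example is illustrative.

```python
import math

def crt_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomized sample size by the design
    effect DEFF = 1 + (m - 1) * ICC, then convert to clusters per arm
    for a two-arm parallel-group trial with equal cluster sizes."""
    deff = 1 + (cluster_size - 1) * icc
    n_total = math.ceil(n_individual * deff)
    clusters_per_arm = math.ceil(n_total / (2 * cluster_size))
    return deff, n_total, clusters_per_arm

# Example: 400 participants needed under individual randomization,
# clusters of 21 individuals, intracluster correlation ICC = 0.05.
deff, n_total, k = crt_sample_size(400, 21, 0.05)
```

Here even a modest ICC of 0.05 doubles the required sample size, which is why the abstract stresses that the simple design effect is only a starting point before handling unequal cluster sizes, attrition, or repeated measures.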
Sampling large random knots in a confined space
Energy Technology Data Exchange (ETDEWEB)
Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)
2007-09-28
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
Sequential time interleaved random equivalent sampling for repetitive signal
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
Bakri, Barbara; Weimer, Marco; Hauck, Gerrit; Reich, Gabriele
2015-11-01
The scope of the study was (1) to develop a lean quantitative calibration for real-time near-infrared (NIR) blend monitoring, which meets the requirements in early development of pharmaceutical products, and (2) to compare the prediction performance of this approach with the results obtained from stratified sampling using a sample thief in combination with off-line high pressure liquid chromatography (HPLC) and at-line near-infrared chemical imaging (NIRCI). Tablets were manufactured from powder blends and analyzed with NIRCI and HPLC to verify the real-time results. The model formulation contained 25% w/w naproxen as a cohesive active pharmaceutical ingredient (API), microcrystalline cellulose and croscarmellose sodium as cohesive excipients, and free-flowing mannitol. Five in-line NIR calibration approaches, all using the spectra from the end of the blending process as reference for PLS modeling, were compared in terms of selectivity, precision, prediction accuracy and robustness. High selectivity could be achieved with a "reduced", i.e., API- and time-saving, approach (35% reduction of API amount) based on six concentration levels of the API, with three levels realized by three independent powder blends and the additional levels obtained by simply increasing the API concentration in these blends. Accuracy and robustness were further improved by combining this calibration set with a second independent data set comprising different excipient concentrations and reflecting different environmental conditions. The combined calibration model was used to monitor the blending process of independent batches. For this model formulation the target concentration of the API could be achieved within 3 min, indicating a short blending time. The in-line NIR approach was verified by stratified sampling HPLC and NIRCI results. All three methods revealed comparable results regarding blend end point determination. Differences in both mean API concentration and RSD values could be [...]
Performance of Random Effects Model Estimators under Complex Sampling Designs
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
Picard, M. A.; Solana, P.; Urchueguía, J. F.
1998-09-01
In our work we have analyzed different stratified rockwool samples, with considerable differences regarding density, mean pore diameter, and porosity, by means of a new method for the measurement of the dynamic flow resistance based on the electrical analogy. This method enables us to measure this parameter without the need for placing the sample between two microphones. Our experimental results have been compared to those obtained with a different measurement scheme and, from a theoretical point of view, we have examined the extent to which the capillary pore approximation can be utilized in intermediate and Poiseuille flow regimes and in real situations. For this purpose, a static flow resistivity, which was also approximated using an acoustic method and a commonly accepted theoretical approximation, was calculated based on a microscopic study of the samples and the fibre diameter. The results show that the new experimental procedure for determining the dynamic flow resistance is of interest in the intermediate and Poiseuille flow regimes in which, within the limitations of our experimental set-up, good results were obtained. The acoustic procedure for measuring a static flow resistivity delivered good results only for a regime close to Poiseuille, which was obtained only with the higher density samples.
Random sampling for a mental health survey in a deprived multi-ethnic area of Berlin.
Mundt, Adrian P; Aichberger, Marion C; Kliewe, Thomas; Ignatyev, Yuriy; Yayla, Seda; Heimann, Hannah; Schouler-Ocak, Meryam; Busch, Markus; Rapp, Michael; Heinz, Andreas; Ströhle, Andreas
2012-12-01
The aim of the study was to assess the response to random sampling for a mental health survey in a deprived multi-ethnic area of Berlin, Germany, with a large Turkish-speaking population. A random list of 1,000 persons stratified by age and gender was retrieved from the population registry, and these persons were contacted using a three-stage design including written information, telephone calls and personal contact at home. A female bilingual interviewer contacted persons with Turkish names. Of the persons on the list, 202 were not living in the area, one was deceased, and 502 did not respond. Of the 295 responders, 152 explicitly refused (51.5%) to participate. We retained a sample of 143 participants (48.5%) representing the rate of multi-ethnicity in the area (52.1% migrants in the sample vs. 53.5% in the population). Turkish migrants were over-represented (28.9% in the sample vs. 18.6% in the population). Polish migrants (2.1 vs. 5.3% in the population) and persons from the former Yugoslavia (1.4 vs. 4.8% in the population) were under-represented. Bilingual contact procedures can improve the response rates of the most common migrant populations to random sampling if migrants of the same origin gate the contact. High non-contact and non-response rates for migrant and non-migrant populations in deprived urban areas remain a challenge for obtaining representative random samples.
Rubin, K H; Rothmann, M J; Holmberg, T; Høiberg, M; Möller, S; Barkmann, R; Glüer, C C; Hermann, A P; Bech, M; Gram, J; Brixen, K
2017-12-07
The Risk-stratified Osteoporosis Strategy Evaluation (ROSE) study investigated the effectiveness of a two-step screening program for osteoporosis in women. We found no overall reduction in fractures from systematic screening compared to the current case-finding strategy. The group of moderate- to high-risk women who accepted the invitation to DXA seemed to benefit from the program. The purpose of the ROSE study was to investigate the effectiveness of a two-step population-based osteoporosis screening program using the Fracture Risk Assessment Tool (FRAX), derived from a self-administered questionnaire, to select women for DXA scan. After the scanning, standard osteoporosis management according to Danish national guidelines was followed. Participants were randomized to either the screening or the control group, and randomization was stratified according to age and area of residence. Inclusion took place from February 2010 to November 2011. Participants received a self-administered questionnaire, and women in the screening group with a FRAX score ≥ 15% (major osteoporotic fractures) were invited to a DXA scan. The primary outcome was incident clinical fractures. An intention-to-treat analysis and two per-protocol analyses were performed. A total of 3416 fractures were observed during a median follow-up of 5 years. No significant differences were found in the intention-to-treat analyses with 34,229 women included aged 65-80 years. The per-protocol analyses showed a risk reduction in the group that underwent DXA scanning compared to women in the control group with a FRAX ≥ 15%, in regard to major osteoporotic fractures, hip fractures, and all fractures. The risk reduction was most pronounced for hip fractures (adjusted SHR 0.741, p = 0.007). Compared to an office-based case-finding strategy, the two-step systematic screening strategy had no overall effect on fracture incidence. The two-step strategy seemed, however, to be beneficial in the group of women who were [...]
Hildebrandt, Thomas; Pick, Denis; Einax, Jürgen W
2012-02-01
The pollution of soil and environment as a result of human activity is a major problem. Nowadays, the determination of local contaminations is of interest for environmental remediation. These hotspots can have various toxic effects on plants, animals, humans, and the whole ecological system. However, economic and juridical consequences are also possible, e.g., high costs for remediation measures. In this study three sampling strategies (simple random sampling, stratified sampling, and systematic sampling) were applied to randomly distributed hotspot contaminations to prove their efficiency in terms of finding hotspots. The results were used for the validation of a computerized simulation. This application can simulate the contamination on a field, the sampling pattern, and a virtual sampling. A constant hit rate showed that none of the sampling patterns could reach better results than the others. Furthermore, the uncertainty associated with the results is described by confidence intervals. It must be considered that the uncertainty during sampling is enormous and decreases only slightly, even when the number of samples applied is increased to an unreasonable amount. It is hardly possible to identify the exact number of randomly distributed hotspot contaminations by statistical sampling, but a range of possible results can be calculated. Depending on various parameters such as shape and size of the area, number of hotspots, and sample quantity, optimal sampling strategies could be derived. Furthermore, an estimation of bias arising from the sampling methodology is possible. The developed computerized simulation is an innovative tool for optimizing sampling strategies in terrestrial compartments for hotspot distributions.
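A toy version of such a hotspot-sampling simulation can be sketched as follows. All parameters here are illustrative assumptions, and the hotspot size is deliberately matched to the grid spacing, a geometry that favours systematic sampling; the study's more general simulations, over varying shapes and sizes, found no strategy consistently better.

```python
import random

rng = random.Random(5)
FIELD, HOT, N_HOT = 100, 10, 10

# Randomly placed 10x10 hotspots in a 100x100 field.
hotspots = [(rng.randrange(FIELD - HOT), rng.randrange(FIELD - HOT))
            for _ in range(N_HOT)]

def found(points):
    """Number of hotspots containing at least one sampled point."""
    return sum(any(hx <= x < hx + HOT and hy <= y < hy + HOT
                   for x, y in points) for hx, hy in hotspots)

# Systematic sampling: a regular grid with one random offset (100 points).
off = rng.randrange(HOT)
grid = [(off + 10 * i, off + 10 * j) for i in range(10) for j in range(10)]

# Simple random sampling with the same budget of 100 points.
srs = [(rng.uniform(0, FIELD), rng.uniform(0, FIELD)) for _ in range(100)]

hits_grid, hits_srs = found(grid), found(srs)
```

Because every 10x10 block must contain exactly one grid point, the systematic pattern hits all hotspots in this geometry, while simple random sampling misses some by chance; changing the hotspot size relative to the grid spacing erodes that advantage, which is the kind of dependence the study's simulation explores.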
Jayashree, R; Malini, A; Rakhshani, A; Nagendra, Hr; Gunasheela, S; Nagarathna, R
2013-01-01
Yoga improves maternal and fetal outcomes in pregnancy. Platelet count and uric acid (Ua) are valuable screening measures in high-risk pregnancy. To examine the effect of yoga on platelet counts and serum Ua in high-risk pregnancy. This stratified randomized controlled trial, conducted by S-VYASA University at St. John's Medical College Hospital and Gunasheela Maternity Hospital, recruited 68 women with high-risk pregnancy (30 yoga and 38 controls) in the twelfth week of pregnancy. The inclusion criteria were: bad obstetric history, twin pregnancies, maternal age 35 years, obesity (BMI > 30), and genetic history of pregnancy complications. Those with normal pregnancy or anemia [...] were excluded. The yoga group practiced simple meditative yoga (three days/week for three months). At baseline, all women had normal platelet counts (> 150×10^9/L) with a decrease as pregnancy advanced. Ua (normal at baseline) increased in both groups. No one developed abnormal thrombocytopenia or hyperuricemia. A healthy reduction in platelet count (twelfth to twentieth week) occurred to a significantly greater extent (P [...]) in the yoga group than in the control group. A similar trend was found in uric acid. A significantly smaller number of women in the yoga group (n = 3) developed pregnancy-induced hypertension (PIH)/pre-eclampsia (PE) than in the control group (n = 12), with an absolute risk reduction (ARR) of 21%. Antenatal integrated yoga from the twelfth week is safe and effective in promoting a healthy progression of platelets and uric acid in women with high-risk pregnancy, pointing to healthy hemodilution and better physiological adaptation.
Random sampling and validation of covariance matrices of resonance parameters
Plevnik, Lucijan; Zerovnik, Gašper
2017-09-01
Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is placed, on the one hand, on possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, on optimizing the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which takes a nuclear data library file for a chosen isotope in ENDF-6 format and produces an arbitrary number of new ENDF-6 files in which the original resonance parameter values are replaced by random samples drawn in accordance with the corresponding covariance matrices. The source code for ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries; a list of inconsistencies observed in the covariance data of resonance parameters in ENDF/B-VII.1, JEFF-3.2, and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
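The consistency check and sampling scheme described above can be illustrated in a few lines. This is a hedged NumPy sketch of the generic approach, not the ENDSAM code (which is Fortran): symmetrize the covariance matrix, clip any small negative eigenvalues that break positive semi-definiteness, and draw correlated samples through a Cholesky factor.

```python
import numpy as np

def sample_correlated(mean, cov, n_samples, rng=None):
    """Draw multivariate normal samples after checking/repairing the
    covariance matrix for positive semi-definiteness."""
    rng = np.random.default_rng(rng)
    mean = np.asarray(mean, dtype=float)
    cov = np.asarray(cov, dtype=float)
    # Symmetrize, then clip small negative eigenvalues that make the
    # matrix indefinite (a common inconsistency in tabulated covariances).
    cov = 0.5 * (cov + cov.T)
    w, v = np.linalg.eigh(cov)
    if w.min() < 0:
        w = np.clip(w, 0.0, None)
        cov = v @ np.diag(w) @ v.T
    # Factor once (tiny jitter guards exact semi-definiteness),
    # then transform standard normal draws.
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(mean)))
    z = rng.standard_normal((n_samples, len(mean)))
    return mean + z @ L.T
```

Sampling inherently positive parameters consistently (e.g. via a lognormal transform) would need an extra step not shown here.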
Generalized and synthetic regression estimators for randomized branch sampling
David L. R. Affleck; Timothy G. Gregoire
2015-01-01
In felled-tree studies, ratio and regression estimators are commonly used to convert more readily measured branch characteristics to dry crown mass estimates. In some cases, data from multiple trees are pooled to form these estimates. This research evaluates the utility of both tactics in the estimation of crown biomass following randomized branch sampling (...
Effective sampling of random surfaces by baby universe surgery
Ambjørn, J.; Białas, P.; Jurkiewicz, J.; Burda, Z.; Petersson, B.
1994-01-01
We propose a new, very efficient algorithm for the sampling of random surfaces in Monte Carlo simulations, based on so-called baby universe surgery, i.e. the cutting and pasting of baby universes. It drastically reduces the slowing down as compared to the standard local flip algorithm, thereby allowing
A number of articles have investigated the impact of sampling design on remotely sensed land-cover accuracy estimates. Gong and Howarth (1990) found significant differences in Kappa accuracy values when comparing pure-pixel sampling, stratified random sampling, and stratified sys...
Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël
2016-11-17
Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of a stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.
Raymond L. Czaplewski
2000-01-01
Consider the following example of an accuracy assessment. Landsat data are used to build a thematic map of land cover for a multicounty region. The map classifier (e.g., a supervised classification algorithm) assigns each pixel into one category of land cover. The classification system includes 12 different types of forest and land cover: black spruce, balsam fir,...
Sampling Polymorphs of Ionic Solids using Random Superlattices.
Stevanović, Vladan
2016-02-19
Polymorphism offers a rich and virtually unexplored space for discovering novel functional materials. To harness this potential, approaches capable of both exploring the space of polymorphs and assessing their realizability are needed. One such approach, devised for partially ionic solids, is presented. The structure-prediction part is carried out by performing local density functional theory relaxations on a large set of random superlattices (RSLs) with atoms distributed randomly over different planes in a way that favors cation-anion coordination. Applying RSL sampling to MgO, ZnO, and SnO_{2} reveals that the resulting probability of occurrence of a given structure offers a measure of its realizability, fully explaining the experimentally observed metastable polymorphs in these three systems.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, to name just a few possibilities. A practical suboptimal controller is proposed with the desirable property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear; because of the i.i.d. sampling assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the method of Lagrange multipliers. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming the minimization problem has a solution, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Directory of Open Access Journals (Sweden)
Xiaomeng Zhang
2006-01-01
As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple scattering steps into a single-step process through random table querying, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast counterpart of the conventional MC simulation of photon propagation. It retains the flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
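The TBRS internals are not reproduced here, but the generic idea behind table-based random sampling can be sketched as follows: tabulate an inverse CDF once, then draw samples by random indexing instead of per-sample transcendental evaluation. The exponential step-length example and the name `mu_t` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def build_table(inv_cdf, size=4096):
    # Tabulate the inverse CDF once on a midpoint grid of (0, 1).
    u = (np.arange(size) + 0.5) / size
    return inv_cdf(u)

def table_sample(table, n, rng=None):
    # Replace per-sample transcendental evaluation with table lookups.
    rng = np.random.default_rng(rng)
    return table[rng.integers(0, len(table), size=n)]

# Hypothetical example: exponential photon step lengths with total
# attenuation coefficient mu_t (mean free path 1/mu_t).
mu_t = 10.0
step_table = build_table(lambda u: -np.log(1.0 - u) / mu_t)
steps = table_sample(step_table, 100_000, rng=0)
```

The trade-off is discretization error from the finite table against the cost of evaluating the inverse CDF for every sample.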
A comparison of methods for representing sparsely sampled random quantities.
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua
2013-09-01
This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
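The percentile-bounding goal above is closely related to classical distribution-free tolerance limits. As a side illustration (not one of the five techniques characterized in the report), Wilks' formula gives the smallest sample size whose extremes bound a central fraction of the population with stated confidence:

```python
def min_n_for_tolerance(coverage=0.95, confidence=0.95):
    """Wilks' distribution-free sample size: smallest n such that the
    interval [sample min, sample max] covers at least `coverage` of the
    population with probability >= `confidence` (two-sided case)."""
    p = coverage
    n = 2
    # P(interval from the sample extremes covers a fraction >= p)
    #   = 1 - n*p**(n-1) + (n-1)*p**n
    while 1 - n * p ** (n - 1) + (n - 1) * p ** n < confidence:
        n += 1
    return n

n95 = min_n_for_tolerance()  # the classic two-sided 95/95 sample size
```

With fewer samples than this, any distribution-free interval from the extremes cannot meet the stated coverage/confidence pair, which is why sparse-data representations must trade conservatism against over-coverage.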
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: complete, stratified, and dynamic randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization, and the realization of random sampling and grouping with SAS software.
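A minimal sketch of the grouping procedures described above, written in Python rather than the SAS code the article discusses (function names are illustrative): complete randomization shuffles all subjects into k groups, and stratified randomization repeats that independently inside each stratum.

```python
import random

def complete_randomization(subjects, k=2, seed=None):
    """Shuffle all subjects, then deal them round-robin into k groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    return [pool[i::k] for i in range(k)]

def stratified_randomization(subjects, stratum_of, k=2, seed=None):
    """Run complete randomization separately inside each stratum, so
    group sizes stay balanced within every stratum."""
    rng = random.Random(seed)
    strata = {}
    for s in subjects:
        strata.setdefault(stratum_of(s), []).append(s)
    groups = [[] for _ in range(k)]
    for members in strata.values():
        rng.shuffle(members)
        for i in range(k):
            groups[i].extend(members[i::k])
    return groups
```

Dynamic randomization (e.g. minimization) additionally adapts each assignment to the current group imbalance and is not shown here.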
[A comparative study of different sampling designs in fish community estimation].
Zhao, Jing; Zhang, Shou-Yu; Lin, Jun; Zhou, Xi-Jie
2014-04-01
The study of fishery community ecology depends on the quality and quantity of data collected from well-designed sampling programs. The optimal sampling design must be cost-efficient, and the sampling results have been recognized as a significant factor affecting resource management. In this paper, the performances of stationary sampling, simple random sampling, and stratified random sampling in estimating fish community were compared based on computer simulation, using design effect (De), relative estimation error (REE), and relative bias (RB). The results showed that the De of stationary sampling (average De 3.37) was worse than that of simple random sampling and stratified random sampling (average De 0.961). Stratified random sampling performed best among the three designs in terms of De, REE, and RB. As the sample size increased, the design effect of stratified random sampling decreased while precision and accuracy increased.
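A toy version of such a computer simulation, with entirely hypothetical numbers, shows why stratified random sampling earns the lowest design effect when strata differ strongly: the stratified estimator removes between-stratum variance from the sampling error.

```python
import random
import statistics

def srs_mean(pop, n, rng):
    # Simple random sample mean over the whole population.
    return statistics.mean(rng.sample(pop, n))

def stratified_mean(strata, n_per, rng):
    # Weight each stratum's sample mean by the stratum's population share.
    N = sum(len(s) for s in strata)
    return sum(len(s) / N * statistics.mean(rng.sample(s, n_per))
               for s in strata)

rng = random.Random(0)
# Hypothetical fish densities: two habitat strata at very different levels.
low = [rng.gauss(10, 1) for _ in range(500)]
high = [rng.gauss(50, 1) for _ in range(500)]
pop = low + high

# Same total sample size (20) for both designs, repeated 2000 times.
srs_sd = statistics.stdev(srs_mean(pop, 20, rng) for _ in range(2000))
strat_sd = statistics.stdev(stratified_mean([low, high], 10, rng)
                            for _ in range(2000))
```

Here `strat_sd` comes out far below `srs_sd`, mirroring the design-effect ranking reported in the paper.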
DEFF Research Database (Denmark)
Schou, Morten; Gustafsson, Finn; Videbaek, Lars
2008-01-01
from 2006 to 2009. At present (March 2008), 720 patients are randomized. Results expect to be presented in the second half of 2010. CONCLUSIONS: This article outlines the design of the NorthStar study. If our hypotheses are confirmed, the results will help cardiologists and nurses in HFCs to identify...
Sample size calculations for 3-level cluster randomized trials
Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.
2008-01-01
BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health
Petersen, Tom; Christensen, Robin; Juhl, Carsten Bogh
2015-01-01
Background Reports vary considerably concerning characteristics of patients who will respond to mobilizing exercises or manipulation. The objective of this prospective cohort study was to identify characteristics of patients with a changeable lumbar condition, i.e. presenting with centralization or peripheralization, that were likely to benefit the most from either the McKenzie method or spinal manipulation. Methods 350 patients with chronic low back pain were randomized to either the McKenzi...
Computer Corner: A Note on Pascal's Triangle and Simple Random Sampling.
Wright, Tommy
1989-01-01
Describes the algorithm used to select a simple random sample of a certain size without having to list all possible samples, and a justification based on Pascal's triangle. Provides testing results from various computers. (YP)
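The article's point, that a simple random sample can be selected without listing all C(N, n) candidate samples, is commonly realized by a one-pass selection scheme. The following is a hedged sketch of that generic technique, not necessarily the article's own algorithm: item i is kept with probability (still needed) / (still remaining), which makes every size-n subset equally likely.

```python
import random

def sequential_srs(population, n, seed=None):
    """Draw a simple random sample of size n in one pass, without
    enumerating the C(N, n) possible samples."""
    rng = random.Random(seed)
    N = len(population)
    sample, needed = [], n
    for i, item in enumerate(population):
        # Probability of keeping item i given how many slots remain.
        if rng.random() < needed / (N - i):
            sample.append(item)
            needed -= 1
            if needed == 0:
                break
    return sample
```

The scheme always returns exactly n items and visits each population element at most once.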
Nakamura, Masato; Muramatsu, Toshiya; Yokoi, Hiroyoshi; Okada, Hisayuki; Ochiai, Masahiko; Suwa, Satoru; Hozawa, Hidenari; Kawai, Kazuya; Awata, Masaki; Mukawa, Hiroaki; Fujita, Hiroshi; Shiode, Nobuo; Asano, Ryuta; Tsukamoto, Yoshiaki; Yamada, Takahisa; Yasumura, Yoshio; Ohira, Hiroshi; Miyamoto, Akira; Takashima, Hiroaki; Ogawa, Takayuki; Ito, Shigenori; Matsuyama, Yutaka; Nanto, Shinsuke
2016-04-01
Three-year clinical follow-up of patients with diabetes mellitus (DM) in the Japan-Drug Eluting Stents Evaluation; a Randomized Trial (J-DESsERT) using 2 different drug eluting stents (DES). A recent study demonstrated that efficacy of sirolimus eluting stents (SES) attenuated over time in diabetic patients. In the largest trial of its kind, 1724 DM patients out of 3533 enrolled patients were randomized to either SES or paclitaxel eluting stents (PES). There were no significant differences in baseline clinical characteristics aside from hypertension. Incidence of major adverse cardiac cerebrovascular events (MACCE) mainly due to higher target vessel failure (TVF) initially indicated a benefit in SES (MACCE rate at 1 year: SES 9.4%, PES 12.2%, p=0.08); however this had attenuated by the time of the 3-year follow-up (MACCE rate from 1 to 3 years: SES 8.4%, PES 6.1%, p=0.10). A similar pattern was observed in insulin-treated patients: MACCE rate from 1 to 3 years was 10.5% in SES and 6.4% in PES (p=0.25). Angiographic follow-up also resulted in higher major adverse cardiac event (MACE) rates at 1 year (presence 11.5%, absence 8.3%, p=0.04); however by 3 years rates were similar regardless of the presence of angiographic follow-up (MACE rate at 3 years: presence 16.0%, absence 14.5%, p=0.35). The superiority of SES over PES in MACCE at 1 year had attenuated by 3-year follow-up. Eventually, the 3-year safety and efficacy profiles were similar regardless of insulin treatment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Query-Based Sampling: Can we do Better than Random?
Tigelaar, A.S.; Hiemstra, Djoerd
2010-01-01
Many servers on the web offer content that is only accessible via a search interface. These are part of the deep web. Using conventional crawling to index the content of these remote servers is impossible without some form of cooperation. Query-based sampling provides an alternative to crawling
On Stratified Adjusted Tests by Binomial Trials.
Shimokawa, Asanao; Miyaoka, Etsuo
2017-02-14
To estimate or test the treatment effect in randomized clinical trials, it is important to adjust for the potential influence of covariates that are likely to affect the association between the treatment or control group and the response. If these covariates are known at the start of the trial, random assignment of the treatment within each stratum would be considered. On the other hand, if these covariates are not clear at the start of the trial, or if it is difficult to allocate the treatment within each stratum, completely randomized assignment of the treatment would be performed. In both sampling structures, the use of a stratified adjusted test is a useful way to evaluate the significance of the overall treatment effect by reducing the variance and/or bias of the result. If the trial has a binary endpoint, the Cochran and Mantel-Haenszel tests are generally used. These tests are constructed based on the assumption that the number of patients within a stratum is fixed. However, in practice, the stratum sizes are not fixed at the start of the trial in many situations, and are instead allowed to vary. Therefore, there is a risk that using these tests in such situations would result in an error in the estimated variation of the test statistics. To handle this problem, we propose new test statistics for both sampling structures based on multinomial distributions. Our proposed approach is based on the Cochran test, and the two tests tend to give similar values when the number of patients is large. When the total number of patients is small, our approach yields a more conservative result. Through simulation studies, we show that the new approach maintains the type I error rate better than the traditional approach.
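For reference, the classical fixed-margin statistic that the proposal builds on can be computed directly. A minimal sketch of the Mantel-Haenszel chi-square over stratified 2x2 tables (not the proposed multinomial variant; the example counts are hypothetical):

```python
def mantel_haenszel(tables):
    """Mantel-Haenszel chi-square (1 df, no continuity correction) over
    2x2 strata given as (a, b, c, d) = (treatment success, treatment
    failure, control success, control failure)."""
    num = 0.0  # sum over strata of observed minus expected 'a' cells
    var = 0.0  # sum of conditional (hypergeometric) variances
    for a, b, c, d in tables:
        n = a + b + c + d
        row1, col1 = a + b, a + c
        num += a - row1 * col1 / n
        var += row1 * (n - row1) * col1 * (n - col1) / (n ** 2 * (n - 1))
    return num ** 2 / var

# Hypothetical two-stratum trial:
stat = mantel_haenszel([(20, 10, 12, 18), (18, 12, 10, 20)])
```

The authors' concern is precisely that the variance term above assumes fixed stratum margins, which random stratum sizes violate.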
Petersen, Tom; Christensen, Robin; Juhl, Carsten
2015-04-01
Reports vary considerably concerning characteristics of patients who will respond to mobilizing exercises or manipulation. The objective of this prospective cohort study was to identify characteristics of patients with a changeable lumbar condition, i.e. presenting with centralization or peripheralization, that were likely to benefit the most from either the McKenzie method or spinal manipulation. 350 patients with chronic low back pain were randomized to either the McKenzie method or manipulation. The possible effect modifiers were age, severity of leg pain, pain-distribution, nerve root involvement, duration of symptoms, and centralization of symptoms. The primary outcome was the number of patients reporting success at two months follow-up. The values of the dichotomized predictors were tested according to the prespecified analysis plan. No predictors were found to produce a statistically significant interaction effect. The McKenzie method was superior to manipulation across all subgroups, thus the probability of success was consistently in favor of this treatment independent of predictor observed. When the two strongest predictors, nerve root involvement and peripheralization, were combined, the chance of success was relative risk 10.5 (95% CI 0.71-155.43) for the McKenzie method and 1.23 (95% CI 1.03-1.46) for manipulation (P = 0.11 for interaction effect). We did not find any baseline variables which were statistically significant effect modifiers in predicting different response to either McKenzie treatment or spinal manipulation when compared to each other. However, we did identify nerve root involvement and peripheralization to produce differences in response to McKenzie treatment compared to manipulation that appear to be clinically important. These findings need testing in larger studies. Clinicaltrials.gov: NCT00939107.
Tchoubi, Sébastien; Sobngwi-Tambekou, Joëlle; Noubiap, Jean Jacques N.; Asangbeh, Serra Lem; Nkoum, Benjamin Alexandre; Sobngwi, Eugene
2015-01-01
Background Childhood obesity is one of the most serious public health challenges of the 21st century. The prevalence of overweight and obesity among children (obesity among children aged 6 months to 5 years in Cameroon in 2011. Methods Four thousand five hundred and eighteen children (2205 boys and 2313 girls) aged between 6 to 59 months were sampled in the 2011 Demographic Health Survey (DHS) database. Body Mass Index (BMI) z-scores based on the WHO 2006 reference population were chosen to estimate overweight (BMI z-score > 2) and obesity (BMI for age > 3). Regression analyses were performed to investigate risk factors of overweight/obesity. Results The prevalence of overweight and obesity was 8% (1.7% for obesity alone). Boys were more affected by overweight than girls, with prevalences of 9.7% and 6.4% respectively. The highest prevalence of overweight was observed in the Grassfield area (including people living in the West and North-West regions) (15.3%). Factors that were independently associated with overweight and obesity included: having an overweight mother (adjusted odds ratio (aOR) = 1.51; 95% CI 1.15 to 1.97) or obese mother (aOR = 2.19; 95% CI 1.55 to 3.07), compared to having a normal-weight mother; high birth weight (aOR = 1.69; 95% CI 1.24 to 2.28) compared to normal birth weight; male gender (aOR = 1.56; 95% CI 1.24 to 1.95); low birth rank (aOR = 1.35; 95% CI 1.06 to 1.72); being aged between 13–24 months (aOR = 1.81; 95% CI 1.21 to 2.66) and 25–36 months (aOR = 2.79; 95% CI 1.93 to 4.13) compared to being aged 45 to 49 months; and living in the Grassfield area (aOR = 2.65; 95% CI 1.87 to 3.79) compared to living in the Forest area. Muslim religion appeared to be a protective factor (aOR = 0.67; 95% CI 0.46 to 0.95) compared to Christian religion. Conclusion This study underlines a high prevalence of early childhood overweight with significant disparities between ecological areas of Cameroon. Risk factors of overweight included high maternal BMI, high birth weight, male
Chaudhuri, Arijit
2014-01-01
Exposure to Sampling: Abstract; Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem. More Intricacies: Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster
DEFF Research Database (Denmark)
Schou, Mads Fristrup
2013-01-01
When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and dimini... To obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.
A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models
Shieh, Gwowen
2007-01-01
The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…
Conflict-cost based random sampling design for parallel MRI with low rank constraints
Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie
2015-05-01
In compressed sensing MRI, it is very important to design the sampling pattern for random sampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable-density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high-density region, so the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to the k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
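The exact conflict-cost model is not reproduced here, but the baseline variable-density Poisson-disc idea it improves on can be sketched with simple dart throwing: the exclusion radius grows away from the k-space centre, so samples are dense in the middle and sparse at the edges. All radii and counts below are illustrative assumptions.

```python
import math
import random

def vd_poisson_disc(n_target, r_center=0.02, r_edge=0.08,
                    max_tries=20000, seed=0):
    """Dart-throwing sketch of variable-density Poisson-disc sampling on
    the unit square: candidates are rejected if they fall within their
    own (position-dependent) exclusion radius of any accepted sample."""
    rng = random.Random(seed)
    pts = []
    tries = 0
    while len(pts) < n_target and tries < max_tries:
        tries += 1
        x, y = rng.random(), rng.random()
        d = math.hypot(x - 0.5, y - 0.5)  # distance from k-space centre
        # Exclusion radius interpolates from r_center to r_edge.
        r = r_center + (r_edge - r_center) * min(d / 0.5, 1.0)
        if all(math.hypot(x - px, y - py) >= r for px, py in pts):
            pts.append((x, y))
    return pts
```

Dart throwing is inefficient in dense regions, exactly the problem the conflict-cost construction in the paper addresses.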
Scholefield, P. A.; Arnscheidt, J.; Jordan, P.; Beven, K.; Heathwaite, L.
2007-12-01
The uncertainties associated with stream nutrient transport estimates are frequently overlooked, and the sampling strategy is rarely, if ever, investigated. Indeed, the impact of sampling strategy and estimation method on the bias and precision of stream phosphorus (P) transport calculations is little understood, despite the use of such values in the calibration and testing of models of phosphorus transport. The objectives of this research were to investigate the variability and uncertainty in the estimates of total phosphorus transfers at an intensively monitored agricultural catchment. The Oona Water, which is located in the Irish border region, is part of a long-term monitoring program focusing on water quality. The Oona Water is a rural river catchment with grassland agriculture and scattered dwelling houses and has been monitored for total phosphorus (TP) at 10 min resolution for several years (Jordan et al., 2007). Concurrent sensitive measurements of discharge are also collected. The water quality and discharge data were provided at 1 hour resolution (averaged), which meant that a robust estimate of the annual flow-weighted concentration could be obtained by simple interpolation between points. A two-strata approach (Kronvang and Bruhn, 1996) was used to estimate flow-weighted concentrations using randomly sampled storm events from the 400 identified within the time series, together with base flow concentrations. Using a random stratified sampling approach for the selection of events, series ranging from 10 up to the full 400 events were used, each time generating a flow-weighted mean using a load-discharge relationship identified through log-log regression and Monte Carlo simulation. These values were then compared to the observed total phosphorus concentration for the catchment. Analysis of these results shows the impact of sampling strategy, the inherent bias in any estimate of phosphorus concentrations, and the uncertainty associated with such estimates.
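The flow-weighted mean concentration at the heart of such load estimates is a simple ratio of total load to total flow; a minimal sketch with hypothetical values:

```python
def flow_weighted_mean(flows, concs):
    """Flow-weighted mean concentration: total load divided by total
    flow. Inputs are concurrent series, e.g. hourly discharge and
    total-P concentration; the values below are hypothetical."""
    load = sum(q * c for q, c in zip(flows, concs))
    return load / sum(flows)

# One storm hour at ten times base flow dominates the estimate:
fwmc = flow_weighted_mean([1.0, 1.0, 10.0], [0.05, 0.06, 0.30])
```

Because high-flow hours carry most of the load, a sampling strategy that misses storm events biases this ratio low, which is the kind of sampling-strategy effect the study quantifies.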
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The results of a systematic review of research papers published in the Journal of Advanced Nursing are then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not stated entirely clearly, and in those cases a judgement was made as to the sampling method employed, based on the indications given by the author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
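A randomization (permutation) test of the kind proposed as an alternative can be sketched in a few lines: the significance level comes from shuffling group labels rather than from an assumed random sample. Group values and the permutation count below are illustrative:

```python
import random

def permutation_test(a, b, n_perm=10000, seed=1):
    """Two-sample randomization test on the difference in means.
    Returns the proportion of label shuffles whose mean difference
    is at least as extreme as the observed one (two-sided)."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Clearly separated toy groups give a small randomization P-value.
p = permutation_test([5.1, 4.9, 5.3, 5.0], [6.2, 6.0, 6.4, 6.1])
```

The P-value here is conditional on the observed data, so it does not require the random-sampling assumption that the review found to be so frequently violated.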
Directory of Open Access Journals (Sweden)
Jiang Houlong
2016-01-01
Sampling methods are important factors that can potentially limit the accuracy of predictions of spatial distribution patterns. A 10 ha tobacco-planted field was selected to compare the accuracy of predictions of the spatial distribution of soil properties, using ordinary kriging and cross-validation, between a grid sampling scheme and a simple random sampling (SRS) scheme. To achieve this objective, we collected soil samples from the topsoil (0-20 cm) in March 2012. Grid sampling and SRS each comprised 115 points. Accuracies of spatial interpolation under the two sampling schemes were then evaluated based on validation samples (36 points) and deviations of the estimates. The results suggested that soil pH and nitrate-N (NO3-N) had low variation, whereas all other soil properties exhibited medium variation. Soil pH, organic matter (OM), total nitrogen (TN), cation exchange capacity (CEC), total phosphorus (TP) and available phosphorus (AP) matched the spherical model, whereas the remaining variables fit an exponential model under both sampling methods. The interpolation errors for soil pH, TP, and AP were lowest under SRS. The interpolation errors for OM, CEC, TN, available potassium (AK) and total potassium (TK) were lowest under grid sampling. The interpolation precision for soil NO3-N showed no significant difference between the two sampling schemes. Considering our data on interpolation precision and the importance of minerals for the cultivation of flue-cured tobacco, the grid sampling scheme should be used in tobacco-planted fields to determine the spatial distribution of soil properties. The grid sampling method can be applied in a practical and cost-effective manner to facilitate soil sampling in tobacco-planted fields.
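The two schemes compared above differ only in how sample locations are chosen over the field. A minimal sketch of generating grid versus SRS locations for a rectangular field (dimensions and point counts are illustrative, chosen to roughly match a 10 ha field and the study's ~115-point schemes):

```python
import random

def grid_sample(width, height, nx, ny):
    """Cell-centre grid sampling: one location per cell of an nx-by-ny grid."""
    dx, dy = width / nx, height / ny
    return [((i + 0.5) * dx, (j + 0.5) * dy)
            for i in range(nx) for j in range(ny)]

def simple_random_sample(width, height, n, seed=0):
    """Simple random sampling (SRS): n uniform locations in the field."""
    rng = random.Random(seed)
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

# A ~10 ha field is roughly a 316 m square (illustrative dimensions).
grid = grid_sample(316, 316, 10, 11)    # 110 points on a regular grid
srs = simple_random_sample(316, 316, 110)
```

Grid sampling guarantees even spatial coverage (which tends to favour variogram estimation and kriging), while SRS can leave gaps and clusters, one intuition behind the paper's finding that grid sampling interpolated most properties better.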
Santos, Kelly T; De Freitas, Rossana G A; Manta, Fernanda S N; De Carvalho, Elizeu F; Silva, Dayse A
2015-07-01
Cardiovascular diseases (CVD) have the highest worldwide mortality rate of any type of disease. In recent years, genetic research regarding CVD has been conducted using association studies, in which the presence of a genetic polymorphism associated with a specific cell signaling pathway at a lower or higher frequency among patients may be interpreted as a possible causal factor. Genetic polymorphisms that occur in the β-adrenergic receptor 1 (β-ADR1) can result in significant changes in its function that may result in physiopathologies. Ambiguous categorizations, such as skin color and self-reported ethnicity, have been used in pharmacogenetic studies as phenotypic proxies for ancestry; however, admixed populations present a particular challenge to the effectiveness of this approach. The main objective of the present study was to estimate the diversity and the frequency of the Ser49Gly polymorphism of the β-ADR1 gene in a sample of 188 male individuals from the population of Rio de Janeiro. The Ser49Gly frequencies were analyzed by two forms of sample stratification: the phenotypic criterion of black or non-black skin color, and African or non-African ancestry, defined using Y-chromosome single nucleotide polymorphisms and autosomal indel markers. These results were used to evaluate whether marker-based ancestry criteria and/or skin color were associated with the frequency of the Ser49Gly polymorphisms in the heterogeneous Rio de Janeiro/Brazilian population. The DNA fragments of interest were amplified by polymerase chain reaction with specific primers for the Ser49Gly marker, and genotyping reactions were performed by restriction with the enzyme Eco0109I. Heterozygosity values ranging from 0.25 to 0.50 and 0.20 to 0.41 were found for the groups stratified by ancestry and skin color, respectively. Testing Hardy-Weinberg equilibrium at the Ser49Gly marker revealed no significant deviation in the genotype distribution of the whole Rio de
Sampling estimators of total mill receipts for use in timber product output studies
John P. Brown; Richard G. Oderwald
2012-01-01
Data from the 2001 timber product output study for Georgia were explored to determine new methods for stratifying mills and finding suitable sampling estimators. Estimators for roundwood receipt totals comprised several types: simple random sample, ratio, stratified sample, and combined ratio. Two stratification methods were examined: the Dalenius-Hodges (DH) square...
Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !
van Breukelen, Gerard J.P.; Candel, Math J.J.M.
2012-01-01
Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
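The guideline's ingredients (effect size, intraclass correlation, cluster size) combine in the standard design-effect calculation. The sketch below uses the textbook inflation factor 1 + (m - 1)·ICC; it illustrates the quantities involved, not necessarily the paper's exact equation:

```python
import math

def cluster_trial_sizes(delta, sigma, icc, m, z_alpha=1.959964, z_power=0.841621):
    """Clusters per arm for a two-arm cluster randomized trial:
    the standard two-group per-arm sample size, inflated by the
    design effect 1 + (m - 1) * icc for (average) cluster size m.
    Textbook normal-approximation formula, used here illustratively."""
    n_per_arm = 2 * (z_alpha + z_power) ** 2 * (sigma / delta) ** 2
    design_effect = 1 + (m - 1) * icc
    return math.ceil(n_per_arm * design_effect / m)

# Detect a 0.3 SD difference with 80% power, two-sided alpha = 0.05,
# clusters of 20 and an intraclass correlation of 0.05:
k = cluster_trial_sizes(delta=0.3, sigma=1.0, icc=0.05, m=20)
```

Setting `icc=0` recovers the individually randomized requirement, which makes the cost of clustering explicit: the same comparison needs roughly twice as many clusters here once the intraclass correlation is accounted for.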
Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne
2018-01-01
Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
Directory of Open Access Journals (Sweden)
Nguyen Phuong H
2012-10-01
Full Text Available Abstract Background Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA supplements only during pregnancy has been questioned given many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The Preconcept trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of study design, methodology and sample characteristics from baseline survey data and key lessons learned. Methods/design We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either: 1 2800 μg folic acid 2 60 mg iron and 2800 μg folic acid or 3 MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and
Random Walks on Directed Networks: Inference and Respondent-driven Sampling
Malmros, Jens; Britton, Tom
2013-01-01
Respondent-driven sampling (RDS) is a method often used to estimate population properties (e.g. sexual risk behavior) in hard-to-reach populations. It combines an effective modified snowball sampling methodology with an estimation procedure that yields unbiased population estimates under the assumption that the sampling process behaves like a random walk on the social network of the population. Current RDS estimation methodology assumes that the social network is undirected, i.e. that all edges are reciprocal. However, empirical social networks in general also have non-reciprocated edges. To account for this fact, we develop a new estimation method for RDS in the presence of directed edges, based on random walks on directed networks. We distinguish directed and undirected edges and consider the possibility that the random walk returns to its current position in two steps through an undirected edge. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing...
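Under the random-walk model described above, an individual's inclusion probability is proportional to their degree, which motivates the familiar inverse-degree-weighted (RDS-II style) estimator. A toy sketch of that baseline estimator follows; the paper's directed-network estimators are more involved:

```python
def rds_estimate(values, degrees):
    """RDS-II (Volz-Heckathorn) style estimator: under the random-walk
    model the stationary inclusion probability is proportional to degree,
    so the population mean is estimated by inverse-degree weighting."""
    num = sum(v / d for v, d in zip(values, degrees))
    den = sum(1.0 / d for d in degrees)
    return num / den

# Toy sample: high-degree (well-connected) respondents are over-sampled
# by the walk, so their values are down-weighted (data are illustrative).
est = rds_estimate([1, 1, 0, 0], [10, 10, 2, 2])
```

The unweighted sample mean here would be 0.5; down-weighting the over-sampled high-degree respondents pulls the estimate toward 1/6, illustrating why degree information is central to RDS inference.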
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
ADMIN
These small domains need not be geographical locations, but can represent distinct subdomains defined by several stratification factors. Sample survey data are .... a stratified random sample design is used such that each cell defines a stratum from which a random sample of size nij is drawn. Following the terminology of a ...
Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling
Directory of Open Access Journals (Sweden)
Bo Yu
2015-01-01
This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
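Randomized response estimation on a single occasion can be illustrated with Warner's classic design, in which each respondent answers the sensitive question with probability p and its complement otherwise. This is a hedged sketch of the standard single-occasion estimator, not the successive-sampling estimator developed in the paper:

```python
def warner_estimate(n_yes, n, p):
    """Warner's randomized response estimator of a sensitive proportion.
    Each respondent answers the sensitive question with probability p
    and its complement with probability 1 - p, so the observed 'yes'
    rate is lam = p*pi + (1 - p)*(1 - pi); invert for pi.
    p must differ from 0.5, otherwise pi is unidentifiable."""
    if p == 0.5:
        raise ValueError("p = 0.5 leaves the proportion unidentifiable")
    lam = n_yes / n
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    return min(max(pi_hat, 0.0), 1.0)  # clamp to the valid range [0, 1]

# 380 'yes' answers out of 1000 respondents with p = 0.7:
pi = warner_estimate(380, 1000, 0.7)
```

No individual answer reveals the respondent's status, yet the aggregate 'yes' rate still identifies the sensitive proportion, here 0.2, which is the privacy-preserving trade-off the abstract refers to.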
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. Contact: dmachado@deb.uminho.pt. Supplementary data are available at Bioinformatics online.
Power and sample size calculations for Mendelian randomization studies using one genetic instrument.
Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary
2013-08-01
Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure, proportional to the residual variance of the outcome after removing the effect of the exposure, and inversely proportional to the square of the effect size.
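The stated proportionalities translate directly into a calculator. The sketch below expresses a power-based minimum sample size in exactly the form described (inversely proportional to the instrument-exposure R² and the squared effect, proportional to the residual outcome variance); the symbols are illustrative, so consult the paper for the exact formulas:

```python
def mr_sample_size(beta, rho2_gx, sigma2_res,
                   z_alpha=1.959964, z_power=0.841621):
    """Illustrative minimum N for a one-instrument Mendelian
    randomization study: N = (z_a + z_b)^2 * sigma2_res / (beta^2 * rho2_gx),
    where rho2_gx is the squared instrument-exposure correlation,
    beta the causal effect, and sigma2_res the residual outcome variance.
    Defaults give two-sided alpha = 0.05 and 80% power."""
    return (z_alpha + z_power) ** 2 * sigma2_res / (beta ** 2 * rho2_gx)

# A weak instrument (R^2 = 0.02) and a modest effect need ~10,000 subjects:
n = mr_sample_size(beta=0.2, rho2_gx=0.02, sigma2_res=1.0)
```

Doubling the effect size cuts the required N by a factor of four, making concrete the inverse-square relationship the abstract emphasizes.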
Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source
DEFF Research Database (Denmark)
Mashiach, Adam; Østergaard, Jan; Zamir, Ram
2013-01-01
Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough s...
Recidivism among Child Sexual Abusers: Initial Results of a 13-Year Longitudinal Random Sample
Patrick, Steven; Marsh, Robert
2009-01-01
In the initial analysis of data from a random sample of all those charged with child sexual abuse in Idaho over a 13-year period, only one predictive variable was found that related to recidivism of those convicted. Variables such as ethnicity, relationship, gender, and age differences did not show a significant or even large association with…
HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA
Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
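The randomization-based activity can be mimicked computationally: shuffle observations among groups and compare the observed F-ratio with its permutation distribution, so students see the sampling distribution being built rather than assumed. The data below are toy values, not the authors' classroom materials:

```python
import random

def f_stat(groups):
    """Between/within mean-square ratio used in the ANOVA F-test."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (between / df_b) / (within / df_w)

def randomization_anova(groups, n_perm=2000, seed=4):
    """Randomization-based ANOVA: build the null distribution of F by
    shuffling observations among groups; no normality assumption."""
    observed = f_stat(groups)
    pooled = [v for g in groups for v in g]
    sizes = [len(g) for g in groups]
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        it = iter(pooled)
        perm = [[next(it) for _ in range(s)] for s in sizes]
        if f_stat(perm) >= observed:
            hits += 1
    return observed, hits / n_perm

# Three well-separated toy groups:
f_obs, p_val = randomization_anova([[1, 2], [11, 12], [21, 22]])
```

Plotting the permuted F values against the observed one gives exactly the visual the activity aims for: the observed statistic sitting far in the tail of an empirically constructed sampling distribution.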
Flexible sampling large-scale social networks by self-adjustable random walk
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity, and often the unavailability, of OSN population data. Sampling thus becomes perhaps the only feasible solution. How to draw samples that represent the underlying OSNs has remained a formidable task, for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study supports the practice of OSN research by providing a much-needed sampling tool, the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge and assumptions about large-scale real OSN data.
Sample size calculations for micro-randomized trials in mHealth.
Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A
2016-05-30
The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near-future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the hundreds or thousands of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.
Assessment of proteinuria by using protein: creatinine index in random urine sample.
Khan, Dilshad Ahmed; Ahmad, Tariq Mahmood; Qureshil, Ayaz Hussain; Halim, Abdul; Ahmad, Mumtaz; Afzal, Saeed
2005-10-01
To assess the quantitative measurement of proteinuria by using the random urine protein:creatinine index/ratio, in comparison with 24-hour urinary protein excretion, in patients with renal disease and a normal glomerular filtration rate. One hundred and thirty patients, 94 males and 36 females, with an age range of 5 to 60 years and proteinuria of more than 150 mg/day, were included in this study. Qualitative urinary protein estimation was done on a random urine specimen by dipstick. Quantitative measurement of protein in the random and 24-hour urine specimens was carried out by a method based on the formation of a red complex of protein with pyrogallol red in acid medium, on a Microlab 200 (Merck). Estimation of creatinine was done on a Selectra-2 (Merck) by Jaffe's reaction. The urine protein:creatinine index and ratio were calculated by dividing the urine protein concentration (mg/L) by the urine creatinine concentration (mmol/L), multiplied by 10 for the index or expressed in mg/mg for the ratio. A protein:creatinine index of more than 140, or a ratio of more than 0.18, in a random urine sample indicated pathological proteinuria. An excellent correlation (r = 0.96) was found between the random urine protein:creatinine index/ratio and standard 24-hour urinary protein excretion in these patients. The protein:creatinine index in random urine is a convenient, quick and reliable method of estimating proteinuria, compared with 24-hour urinary protein excretion, for the diagnosis and monitoring of renal diseases in our medical setup.
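The index and cutoff defined in the abstract are straightforward to compute; a minimal sketch using the stated definition (urine protein in mg/L divided by creatinine in mmol/L, times 10) and the 140 cutoff:

```python
def protein_creatinine_index(protein_mg_L, creatinine_mmol_L):
    """Protein:creatinine index as defined in the abstract:
    urine protein (mg/L) / urine creatinine (mmol/L), multiplied by 10."""
    return protein_mg_L / creatinine_mmol_L * 10

def is_pathological(protein_mg_L, creatinine_mmol_L, cutoff=140):
    """An index above 140 flagged pathological proteinuria in the study."""
    return protein_creatinine_index(protein_mg_L, creatinine_mmol_L) > cutoff

# Illustrative spot-urine values (not from the study's patients):
idx = protein_creatinine_index(210, 10)
flag = is_pathological(210, 10)
```

Because both analytes come from the same spot specimen, variation in urine dilution cancels out of the ratio, which is the reason the index tracks 24-hour excretion so closely.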
Lv, Chao; Zheng, Lianqing; Yang, Wei
2012-01-28
Molecular dynamics sampling can be enhanced by promoting potential energy fluctuations, for instance based on a Hamiltonian modified with the addition of a potential-energy-dependent biasing term. To overcome the diffusion sampling issue (the fact that enlarging event-irrelevant energy fluctuations may abolish sampling efficiency), the essential energy space random walk (EESRW) approach was proposed earlier. To more effectively accelerate the sampling of solute conformations in an aqueous environment, in the current work we generalized the EESRW method to a two-dimensional EESRW (2D-EESRW) strategy. Specifically, the essential internal energy component of a focused region and the essential interaction energy component between the focused region and the environmental region are employed to define the two-dimensional essential energy space. This proposal is motivated by the general observation that in different conformational events the two essential energy components have distinctive interplays. Model studies on the alanine dipeptide and the aspartate-arginine peptide demonstrate sampling improvement over the original one-dimensional EESRW strategy; at the same biasing level, the present generalization allows more effective acceleration of the sampling of conformational transitions in aqueous solution. The 2D-EESRW generalization is readily extended to higher-dimensional schemes and can be employed in more advanced enhanced-sampling schemes, such as the recent orthogonal space random walk method. © 2012 American Institute of Physics.
Wang, Mingjun; Feng, Shaodong; Wu, Jigang
2017-10-06
We report a multilayer lensless in-line holographic microscope (LIHM) with imaging resolution improved by using the pixel super-resolution technique with random sample movement. In our imaging system, a laser beam illuminated the sample and a CMOS imaging sensor located behind the sample recorded the in-line hologram for image reconstruction. During the imaging process, the sample was moved by hand randomly and the in-line holograms were acquired sequentially. The sample image was then reconstructed from an enhanced-resolution hologram obtained from multiple low-resolution in-line holograms by applying the pixel super-resolution (PSR) technique. We studied the resolution enhancement effects by using a U.S. Air Force (USAF) target as the sample in numerical simulation and experiment. We also showed that multilayer pixel super-resolution images can be obtained by imaging a triple-layer sample made with filamentous algae on the middle layer and microspheres with a diameter of 2 μm on the top and bottom layers. Our pixel super-resolution LIHM provides a compact and low-cost solution for microscopic imaging and is promising for many biomedical applications.
2014-01-01
Background As health care has increased in complexity and health care teams have been offered as a solution, the need for stronger interprofessional collaboration has also increased. However, the intraprofessional factions that exist within every profession challenge interprofessional communication through contrary paradigms. As a contender in the conservative spinal health care market, factions within chiropractic that result in unorthodox practice behaviours may compromise interprofessional relations and that profession's progress toward institutionalization. The purpose of this investigation was to quantify the professional stratification among Canadian chiropractic practitioners and evaluate the practice perceptions of those factions. Methods A stratified random sample of 740 Canadian chiropractors was surveyed to determine faction membership and how professional stratification could be related to views that could be considered unorthodox with respect to current evidence-based care and guidelines. Stratification in practice behaviours is a stated concern of mainstream medicine when considering interprofessional referrals. Results Of 740 deliverable questionnaires, 503 were returned, for a response rate of 68%. Less than 20% of chiropractors (18.8%) were aligned with a predefined unorthodox perspective of the conditions they treat. Prediction models suggest that unorthodox perceptions of health practice related to treatment choices, x-ray use and vaccinations were strongly associated with unorthodox group membership (χ2 = 13.4, p = 0.0002). Conclusion Chiropractors holding unorthodox views may be identified based on responses to specific beliefs that appear to align with unorthodox health practices. Despite continued concerns from mainstream medicine, only a minority of the profession has retained a perspective in contrast to current scientific paradigms. Understanding the profession's factions is important to the anticipation of care delivery when considering
Greene, Tom
2015-01-01
Performing well-powered randomized controlled trials is of fundamental importance in clinical research. The goal of sample size calculations is to assure that statistical power is acceptable while maintaining a small probability of a type I error. This chapter reviews the fundamentals of sample size calculation for standard types of outcomes in two-group studies. It considers (1) the problem of determining the size of the treatment effect that the study will be designed to detect, (2) the modifications to sample size calculations needed to account for loss to follow-up and nonadherence, (3) the options available when initial calculations indicate that the feasible sample size is insufficient to provide adequate power, and (4) the implications of using multiple primary endpoints. Sample size estimates for longitudinal cohort studies must also take account of confounding by baseline factors.
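For a continuous outcome, the standard two-group calculation, plus the inflation for loss to follow-up discussed above, can be sketched as follows (normal-approximation formula with z-values for two-sided α = 0.05 and 80% power; an illustrative textbook form, not the chapter's full treatment):

```python
import math

def two_group_n(delta, sigma, dropout=0.0, z_alpha=1.959964, z_power=0.841621):
    """Per-group sample size for a two-arm trial with a continuous outcome:
    n = 2 * (z_alpha + z_power)^2 * (sigma / delta)^2,
    then inflated by 1 / (1 - dropout) for anticipated loss to follow-up."""
    n = 2 * (z_alpha + z_power) ** 2 * (sigma / delta) ** 2
    return math.ceil(n / (1 - dropout))

# Detect a 5-point difference (SD = 10) allowing for 10% dropout:
n = two_group_n(delta=5.0, sigma=10.0, dropout=0.1)
```

The dropout inflation addresses point (2) of the chapter's list: the randomized sample must be large enough that the analyzable sample still achieves the target power.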
Characterization of Electron Microscopes with Binary Pseudo-random Multilayer Test Samples
Energy Technology Data Exchange (ETDEWEB)
V Yashchuk; R Conley; E Anderson; S Barber; N Bouet; W McKinney; P Takacs; D Voronov
2011-12-31
Verification of the reliability of metrology data from high-quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of the development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
Characterization of electron microscopes with binary pseudo-random multilayer test samples
Energy Technology Data Exchange (ETDEWEB)
Yashchuk, Valeriy V., E-mail: VVYashchuk@lbl.gov [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Conley, Raymond [NSLS-II, Brookhaven National Laboratory, Upton, NY 11973 (United States); Anderson, Erik H. [Center for X-ray Optics, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Barber, Samuel K. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Bouet, Nathalie [NSLS-II, Brookhaven National Laboratory, Upton, NY 11973 (United States); McKinney, Wayne R. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Takacs, Peter Z. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Voronov, Dmitriy L. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)
2011-09-01
Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
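The "empirical scaling and centering" step has a concrete classical form in the white-noise case. Below is a hedged sketch using Johnstone's centering/scaling constants for a pure-noise Wishart matrix; the sample sizes are illustrative, and the paper's REML setting involves further adjustments not shown here:

```python
import numpy as np

def tw_center_scale(n, p):
    """Johnstone's centering/scaling constants: the largest eigenvalue of a
    white-Wishart matrix X'X (X is n x p standard normal), centered by mu
    and scaled by sigma, is approximately Tracy-Widom (TW1) distributed."""
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    return mu, sigma

rng = np.random.default_rng(0)
n, p = 500, 50
X = rng.standard_normal((n, p))            # pure noise: no true signal
lam_max = np.linalg.eigvalsh(X.T @ X)[-1]  # largest eigenvalue (eigvalsh sorts ascending)
mu, sigma = tw_center_scale(n, p)
tw_stat = (lam_max - mu) / sigma           # compare against TW1 critical values
```

If `tw_stat` falls below the TW1 critical value, the leading eigenvalue is consistent with pure sampling noise, which is exactly the kind of test the abstract advocates.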
Electromagnetic waves in stratified media
Wait, James R; Fock, V A; Wait, J R
2013-01-01
International Series of Monographs in Electromagnetic Waves, Volume 3: Electromagnetic Waves in Stratified Media provides information pertinent to electromagnetic waves in media whose properties differ in one particular direction. This book discusses the important feature of the waves that enables communications at global distances. Organized into 13 chapters, this volume begins with an overview of the general analysis for the electromagnetic response of a plane stratified medium comprising any number of parallel homogeneous layers. This text then explains the reflection of electromagne
On analysis-based two-step interpolation methods for randomly sampled seismic data
Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao
2013-02-01
Interpolating the missing traces of regularly or irregularly sampled seismic records is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform-domain sparsity of the few randomly recorded but informative seismic traces using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms: the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm, derived from the IST (iterative shrinkage-thresholding) and POCS (projection onto convex sets) algorithms. A MATLAB package is developed to implement these thresholding-based interpolation methods. Based on this package, we compare the reconstruction performance of the algorithms on synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
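The IST/POCS iteration that these accelerated algorithms build on can be sketched in a few lines. The toy below (a 1-D signal, Fourier-domain sparsity, random 50% decimation; not the authors' MATLAB package) thresholds in the transform domain and then re-inserts the observed samples on every pass:

```python
import numpy as np

def soft_threshold(z, tau):
    """Complex soft-thresholding operator used in IST-type algorithms."""
    mag = np.abs(z)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * z, 0.0)

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

mask = rng.random(n) < 0.5               # randomly keep about half the samples
observed = np.where(mask, signal, 0.0)

x = observed.copy()
for k in range(100):
    coeff = np.fft.fft(x)
    tau = 0.5 * np.abs(coeff).max() * 0.95 ** k   # decaying threshold schedule
    x = np.real(np.fft.ifft(soft_threshold(coeff, tau)))
    x = np.where(mask, observed, x)               # POCS step: honor the data

err_before = np.linalg.norm(signal - observed)
err_after = np.linalg.norm(signal - x)
```

Because the test signal is sparse in the Fourier domain, a few dozen iterations recover the missing traces to a small fraction of the initial gap error; the papers' FISTA/FPOCS variants accelerate exactly this kind of loop.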
Hemodynamic and glucometabolic factors fail to predict renal function in a random population sample
DEFF Research Database (Denmark)
Pareek, M.; Nielsen, M.; Olesen, Thomas Bastholm
2015-01-01
Objective: To determine whether baseline hemodynamic and/or glucometabolic risk factors could predict renal function at follow-up, independently of baseline serum creatinine, in survivors from a random population sample. Design and method: We examined associations between baseline serum creatinine...... indices of beta-cell function (HOMA-2B), insulin sensitivity (HOMA-2S), and insulin resistance (HOMA-2IR)), traditional cardiovascular risk factors (age, sex, smoking status, body mass index, diabetes mellitus, total serum cholesterol), and later renal function determined as serum cystatin C in 238 men...... and 7 women aged 38 to 49 years at the time of inclusion, using multivariable linear regression analysis (p-entry 0.05, p-removal 0.20). Study subjects came from a random population based sample and were included 1974-1992, whilst the follow-up with cystatin C measurement was performed 2002...
An inversion method based on random sampling for real-time MEG neuroimaging
Pascarella, Annalisa
2016-01-01
MagnetoEncephaloGraphy (MEG) has gained great interest in neurorehabilitation training due to its high temporal resolution. The challenge is to localize the active regions of the brain in a fast and accurate way. In this paper we use an inversion method based on random spatial sampling to solve the real-time MEG inverse problem. Several numerical tests on synthetic but realistic data show that the method takes just a few hundredths of a second on a laptop to produce an accurate map of the electric activity inside the brain. Moreover, it requires very little memory storage. For these reasons, the random sampling method is particularly attractive for real-time MEG applications.
Effectiveness of hand hygiene education among a random sample of women from the community
Ubheeram, J.; Biranjia-Hurdoyal, S.D.
2017-01-01
Objective. The effectiveness of hand hygiene education was investigated by studying hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. Methods. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. Results. A better hand hygiene awareness score was significantly associated with age, scarce bacterial gro...
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, separately and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise than random allocation. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
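The precision comparison described above can be reproduced in miniature. The sketch below is a 1-D toy with a single habitat patch, Poisson transect counts, and made-up densities (not the paper's simulation design); it shows why one-start aligned systematic allocation beats random allocation when organisms are confined to patches:

```python
import numpy as np

rng = np.random.default_rng(2)

def density(x):
    """Hypothetical patchy population: organisms occur only in [0, 0.2)."""
    return np.where(x < 0.2, 50.0, 0.0)

def survey_mean(positions):
    """Mean Poisson count over the transects placed at `positions`."""
    return rng.poisson(density(positions)).mean()

n_transects, reps = 10, 2000
random_means, systematic_means = [], []
for _ in range(reps):
    random_means.append(survey_mean(rng.random(n_transects)))
    start = rng.random() / n_transects          # one-start aligned design
    systematic_means.append(survey_mean(start + np.arange(n_transects) / n_transects))

var_random = np.var(random_means)
var_systematic = np.var(systematic_means)
```

The systematic grid always places exactly two transects inside the patch, so only Poisson noise remains; random placement adds binomial variation in how many transects hit the patch at all, inflating the variance of the estimated mean several-fold.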
Willan, Andrew; Kowgier, Matthew
2008-01-01
Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the cost of doing the trial and the value of the information gained from the results. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.
Sample size calculations for pilot randomized trials: a confidence interval approach.
Cocks, Kim; Torgerson, David J
2013-02-01
To describe a method using confidence intervals (CIs) to estimate the sample size for a pilot randomized trial. Using one-sided CIs and the estimated effect size that would be sought in a large trial, we calculated the sample size needed for pilot trials. Using an 80% one-sided CI, we estimated that a pilot trial should have at least 9% of the sample size of the main planned trial. Using the estimated effect size difference for the main trial and using a one-sided CI, this allows us to calculate a sample size for a pilot trial, which will make its results more useful than at present. Copyright © 2013 Elsevier Inc. All rights reserved.
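The "at least 9%" figure can be reproduced with a back-of-the-envelope calculation. The sketch below is my reconstruction of the CI argument, not the authors' exact derivation: it compares the normal quantile behind an 80% one-sided CI with the quantiles behind a conventional two-sided 5% alpha, 80%-power main trial:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

def pilot_fraction(ci_level=0.80, alpha=0.05, power=0.80):
    """Pilot-to-main sample size ratio implied by a one-sided CI approach:
    the pilot must be just large enough that a one-sided CI at `ci_level`
    excludes zero when the true effect equals the main trial's target.
    Sample size scales with the square of the relevant normal quantiles,
    so the ratio is (z_ci / (z_alpha/2 + z_power))^2, independent of the
    effect size itself."""
    z_ci = z(ci_level)                    # one-sided CI quantile
    z_main = z(1 - alpha / 2) + z(power)  # quantiles behind the main trial's n
    return (z_ci / z_main) ** 2

frac = pilot_fraction()  # approximately 0.09, i.e. the "at least 9%" rule
```

With the defaults, (0.8416 / (1.9600 + 0.8416))² ≈ 0.090, matching the 9% rule quoted in the abstract.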
Stratified Outcome Evaluation of Peritonitis
African Journals Online (AJOL)
severity in peritonitis makes outcome prediction challenging. Risk evaluation in secondary peritonitis can direct treatment planning, predict outcomes and aid in the conduct of surgical audits. Objective: To determine the outcome of peritonitis in patients stratified according to disease severity at the Kenyatta National Hospital.
Estimating the Size of a Large Network and its Communities from a Random Sample.
Chen, Lin; Karbasi, Amin; Crawford, Forrest W
2016-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
Carpenter, Matthew J; Hughes, John R; Gray, Kevin M; Wahlquist, Amy E; Saladin, Michael E; Alberg, Anthony J
2011-11-28
Rates of smoking cessation have not changed in a decade, accentuating the need for novel approaches to prompt quit attempts. Within a nationwide randomized clinical trial (N = 849) to induce further quit attempts and cessation, smokers currently unmotivated to quit were randomized to a practice quit attempt (PQA) alone or to nicotine replacement therapy (hereafter referred to as nicotine therapy) sampling within the context of a PQA. Following a 6-week intervention period, participants were followed up for 6 months to assess outcomes. The PQA intervention was designed to increase motivation, confidence, and coping skills. The combination of a PQA plus nicotine therapy sampling added samples of nicotine lozenges to enhance attitudes toward pharmacotherapy and to promote the use of additional cessation resources. Primary outcomes included the incidence of any ever occurring self-defined quit attempt and 24-hour quit attempt. Secondary measures included 7-day point prevalence abstinence at any time during the study (ie, floating abstinence) and at the final follow-up assessment. Compared with the PQA intervention, nicotine therapy sampling was associated with a significantly higher incidence of any quit attempt (49% vs 40%; relative risk [RR], 1.2; 95% CI, 1.1-1.4) and any 24-hour quit attempt (43% vs 34%; 1.3; 1.1-1.5). Nicotine therapy sampling was marginally more likely to promote floating abstinence (19% vs 15%; RR, 1.3; 95% CI, 1.0-1.7); 6-month point prevalence abstinence rates were no different between groups (16% vs 14%; 1.2; 0.9-1.6). Nicotine therapy sampling during a PQA represents a novel strategy to motivate smokers to make a quit attempt. clinicaltrials.gov Identifier: NCT00706979.
Directory of Open Access Journals (Sweden)
Alireza Goli
2015-09-01
Distribution and optimum allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible; it is therefore important to find the optimum location and distribution of emergency relief resources. When a natural disaster occurs, it may not be possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this study, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is implemented on randomly generated instances of different sizes. The preliminary results indicate that it was capable of reaching desirable solutions in a reasonable amount of time.
ESTIMATION OF FINITE POPULATION MEAN USING RANDOM NON–RESPONSE IN SURVEY SAMPLING
Directory of Open Access Journals (Sweden)
Housila P. Singh
2010-12-01
This paper considers the problem of estimating the population mean under three different situations of random non-response envisaged by Singh et al. (2000). Some ratio- and product-type estimators have been proposed and their properties are studied under the assumption that the number of sampling units on which information cannot be obtained owing to random non-response follows some distribution. The suggested estimators are compared with the usual ratio and product estimators. An empirical study is carried out to show the performance of the suggested estimators over the usual unbiased, ratio, and product estimators. A generalized version of the proposed ratio and product estimators is also given.
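For readers unfamiliar with the estimators being compared, here is a minimal sketch of the classical ratio and product estimators with made-up data; `X_bar` is the known population mean of the auxiliary variable, and none of this reproduces the paper's non-response machinery:

```python
import numpy as np

def ratio_estimator(y, x, X_bar):
    """Classical ratio estimator of the population mean of y,
    using an auxiliary variable x with known population mean X_bar.
    Effective when y and x are positively correlated."""
    return y.mean() * X_bar / x.mean()

def product_estimator(y, x, X_bar):
    """Product estimator: preferred when y and x are negatively correlated."""
    return y.mean() * x.mean() / X_bar

rng = np.random.default_rng(3)
N, n = 10_000, 200
x_pop = rng.uniform(10, 20, N)
y_pop = 2.0 * x_pop                      # y exactly proportional to x
idx = rng.choice(N, n, replace=False)    # simple random sample without replacement
y_s, x_s = y_pop[idx], x_pop[idx]

est = ratio_estimator(y_s, x_s, x_pop.mean())
```

When y is exactly proportional to x, the ratio estimator recovers the true population mean of y regardless of which sample was drawn, which is the intuition behind its efficiency gain.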
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches for handling large samples in test-of-fit analysis have been developed. One strategy is to adjust the sample size in the analysis of fit; an alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
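The adjustment strategy can be illustrated in one line. In covariance-structure analysis the test statistic is (n − 1) times the minimized fit function, so one common adjustment rescales chi-square linearly in (n − 1); this is a hedged sketch with illustrative numbers, not necessarily the exact function used in the study:

```python
def adjusted_chi_square(chi2, n_original, n_adjusted):
    """Rescale a model-fit chi-square statistic to a hypothetical smaller
    sample size. The statistic is (n - 1) times the minimized fit function,
    so it scales linearly in (n - 1)."""
    return chi2 * (n_adjusted - 1) / (n_original - 1)

# illustrative numbers only: a chi-square of 4200 observed at n = 21,000,
# rescaled to the n = 5,000 regime discussed in the abstract
chi2_adj = adjusted_chi_square(4200.0, 21_000, 5_000)
```

The abstract's finding is that this deterministic rescaling tracks an actual random subsample well down to about n = 5,000, but diverges from it at smaller n, where the fit function itself would change.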
Randomized controlled trial on timing and number of sampling for bile aspiration cytology.
Tsuchiya, Tomonori; Yokoyama, Yukihiro; Ebata, Tomoki; Igami, Tsuyoshi; Sugawara, Gen; Kato, Katsuyuki; Shimoyama, Yoshie; Nagino, Masato
2014-06-01
The issue of the timing and number of bile samplings for exfoliative bile cytology is still unsettled. A total of 100 patients with cholangiocarcinoma undergoing resection after external biliary drainage were randomized into two groups: a 2-day group, where bile was sampled five times per day for 2 days; and a 10-day group, where bile was sampled once per day for 10 days (registered University Hospital Medical Information Network/ID 000005983). The outcome of 87 patients who underwent laparotomy was analyzed, 44 in the 2-day group and 43 in the 10-day group. There were no significant differences in patient characteristics between the two groups. Positivity after one sampling session was significantly lower in the 2-day group than in the 10-day group (17.0 ± 3.7% vs. 20.7 ± 3.5%, P = 0.034). However, the cumulative positivity curves were similar and overlapped each other. The final cumulative positivity by the 10th sampling session was 52.3% in the 2-day group and 51.2% in the 10-day group. We observed a small increase in cumulative positivity after the 5th or 6th session in both groups. Bile cytology positivity is thus unlikely to be affected by sampling timing. © 2013 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra, from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual-realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order-spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that the usable frequency range of the spectral estimates can be extended up to about five times the mean sampling rate.
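The correlation-based slotting technique mentioned above can be sketched compactly. This is a simplified Python stand-in (the original codes were FORTRAN): lag products of randomly timed samples are accumulated into lag bins ("slots") and averaged, giving an autocorrelation estimate without requiring uniform sampling:

```python
import numpy as np

def slotted_autocorrelation(t, u, max_lag, slot_width):
    """Correlation-based slotting for randomly sampled data: products
    u_i*u_j are accumulated into lag bins of width slot_width and
    averaged. Assumes t is sorted ascending."""
    n_slots = int(round(max_lag / slot_width))
    sums = np.zeros(n_slots)
    counts = np.zeros(n_slots)
    for i in range(len(t) - 1):
        lags = t[i + 1:] - t[i]                 # lags to all later samples
        k = (lags / slot_width).astype(int)     # slot index per pair
        keep = k < n_slots
        np.add.at(sums, k[keep], u[i] * u[i + 1:][keep])
        np.add.at(counts, k[keep], 1)
    return sums / np.maximum(counts, 1)

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 20.0, 3000))   # random (Poisson-like) sample times
u = np.sin(2 * np.pi * t)                   # 1 Hz test signal
R = slotted_autocorrelation(t, u, max_lag=1.0, slot_width=0.05)
```

For the 1 Hz sine, the slotted estimate is strongly positive near zero lag and strongly negative near half a period, as expected; a Fourier transform of R would then yield the power spectral density.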
Protein/creatinine ratio on random urine samples for prediction of proteinuria in preeclampsia.
Roudsari, F Vahid; Ayati, S; Ayatollahi, H; Shakeri, M T
2012-01-01
To evaluate the protein/creatinine ratio in random urine samples for prediction of proteinuria in preeclampsia. This study was performed on 150 pregnant women who were hospitalized with preeclampsia in Ghaem Hospital during 2006. First, a random urine sample was collected from each patient to determine the protein/creatinine ratio; then, a 24-hour urine collection was analyzed for the evaluation of proteinuria. Statistical analysis was performed with SPSS software. A total of 150 patients entered the study. There was a significant correlation between the 24-hour urine protein and the protein/creatinine ratio (r = 0.659, P < 0.001). Since measurement of the protein/creatinine ratio is more accurate, reliable, and cost-effective, it can replace measurement of the 24-hour urine protein.
LOD score exclusion analyses for candidate QTLs using random population samples.
Deng, Hong-Wen
2003-11-01
While extensive analyses have been conducted to test for the importance of candidate genes as putative QTLs using random population samples, no formal analyses have been conducted to test against it. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach to exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed, and if an LOD score is ≤ -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of the vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling
Directory of Open Access Journals (Sweden)
Ying Yan
2017-01-01
Due to the complexity of systems and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed from the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and possesses good compatibility. It avoids the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained rather objectively throughout the processing procedure. The cumulative probability function construction and random sampling processes require no human intervention or judgment and can be implemented easily by computer programs, giving the method an apparent advantage in evaluation practice for fairly large index systems.
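The final Monte Carlo step can be sketched as follows. This is a simplified stand-in for the paper's interval-evidence pipeline (hypothetical index names and intervals, uniform sampling in place of the fused cumulative probability function): each index's importance is drawn from its interval, each draw is normalized, and the draws are averaged into weights:

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical fused expert judgments: each index's importance is only
# known as an interval after evidence fusion
intervals = {"safety": (0.6, 0.9), "cost": (0.3, 0.5), "speed": (0.1, 0.4)}

names = list(intervals)
lo = np.array([intervals[k][0] for k in names])
hi = np.array([intervals[k][1] for k in names])

n_draws = 50_000
scores = rng.uniform(lo, hi, size=(n_draws, len(names)))  # sample each interval
weights = (scores / scores.sum(axis=1, keepdims=True)).mean(axis=0)
```

Each normalized draw sums to one, so the averaged weights do too, and indices whose intervals sit higher (here "safety") receive systematically larger weights without any hand-tuned point estimates.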
Directory of Open Access Journals (Sweden)
Thomson Denise
2010-12-01
Abstract Background Randomized controlled trials (RCTs) are the gold standard for trials assessing the effects of therapeutic interventions; therefore it is important to understand how they are conducted. Our objectives were to provide an overview of a representative sample of pediatric RCTs published in 2007 and assess the validity of their results. Methods We searched the Cochrane Central Register of Controlled Trials using a pediatric filter and randomly selected 300 RCTs published in 2007. We extracted data on trial characteristics; outcomes; methodological quality; reporting; and registration and protocol characteristics. Trial registration and protocol availability were determined for each study based on the publication, an Internet search and an author survey. Results Most studies (83%) were efficacy trials, 40% evaluated drugs, and 30% were placebo-controlled. Primary outcomes were specified in 41%; 43% reported on adverse events. At least one statistically significant outcome was reported in 77% of trials; 63% favored the treatment group. Trial registration was declared in 12% of publications and 23% were found through an Internet search. Risk of bias (ROB) was high in 59% of trials, unclear in 33%, and low in 8%. Registered trials were more likely to have low ROB than non-registered trials (16% vs. 5%; p = 0.008). Effect sizes tended to be larger for trials at high vs. low ROB (0.28, 95% CI 0.21-0.35 vs. 0.16, 95% CI 0.07-0.25). Among survey respondents (50% response rate), the most common reason for trial registration was a publication requirement and, for non-registration, a lack of familiarity with the process. Conclusions More than half of this random sample of pediatric RCTs published in 2007 was at high ROB and three quarters of trials were not registered. There is an urgent need to improve the design, conduct, and reporting of child health research.
Inflammatory Biomarkers and Risk of Schizophrenia: A 2-Sample Mendelian Randomization Study.
Hartwig, Fernando Pires; Borges, Maria Carolina; Horta, Bernardo Lessa; Bowden, Jack; Davey Smith, George
2017-12-01
Positive associations between inflammatory biomarkers and risk of psychiatric disorders, including schizophrenia, have been reported in observational studies. However, conventional observational studies are prone to bias, such as reverse causation and residual confounding, thus limiting our understanding of the effect (if any) of inflammatory biomarkers on schizophrenia risk. To evaluate whether inflammatory biomarkers have an effect on the risk of developing schizophrenia. Two-sample mendelian randomization study using genetic variants associated with inflammatory biomarkers as instrumental variables to improve inference. Summary association results from large consortia of candidate gene or genome-wide association studies, including several epidemiologic studies with different designs, were used. Gene-inflammatory biomarker associations were estimated in pooled samples ranging from 1645 to more than 80 000 individuals, while gene-schizophrenia associations were estimated in more than 30 000 cases and more than 45 000 ancestry-matched controls. In most studies included in the consortia, participants were of European ancestry, and the prevalence of men was approximately 50%. All studies were conducted in adults, with a wide age range (18 to 80 years). Genetically elevated circulating levels of C-reactive protein (CRP), interleukin-1 receptor antagonist (IL-1Ra), and soluble interleukin-6 receptor (sIL-6R). Risk of developing schizophrenia. Individuals with schizophrenia or schizoaffective disorders were included as cases. Given that many studies contributed to the analyses, different diagnostic procedures were used. The pooled odds ratio estimate using 18 CRP genetic instruments was 0.90 (random effects 95% CI, 0.84-0.97; P = .005) per 2-fold increment in CRP levels; consistent results were obtained using different mendelian randomization methods and a more conservative set of instruments. The odds ratio for sIL-6R was 1.06 (95% CI, 1.01-1.12; P = .02
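The mendelian randomization estimate behind figures like the 0.90 odds ratio is typically an inverse-variance-weighted (IVW) combination of per-variant ratio estimates. A minimal sketch with synthetic summary statistics (the abstract does not report the per-variant data, so the numbers below are invented):

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted two-sample MR estimate: a weighted
    regression of outcome associations on exposure associations through
    the origin, with weights 1/se_outcome^2."""
    w = 1.0 / se_outcome ** 2
    return np.sum(w * beta_exposure * beta_outcome) / np.sum(w * beta_exposure ** 2)

# synthetic instruments with a true causal effect of 0.5 on the log-odds
# scale: no pleiotropy, no noise, so IVW recovers 0.5 exactly
beta_x = np.array([0.12, 0.08, 0.20, 0.15])  # variant-exposure associations
se_y = np.array([0.03, 0.05, 0.02, 0.04])    # SEs of variant-outcome associations
beta_y = 0.5 * beta_x                        # variant-outcome associations

theta = ivw_estimate(beta_x, beta_y, se_y)
```

In real data each variant's ratio beta_y/beta_x fluctuates, and the sensitivity analyses the abstract alludes to (alternative MR estimators, restricted instrument sets) probe whether pleiotropy is distorting this weighted average.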
Serang, Oliver
2012-01-01
Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics.
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and materials informatics is the use of machine learning algorithms to build Quantitative Structure-Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC can be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and prediction for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in materials informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic (PV) properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
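RANSAC itself is easy to sketch. The toy below (robust line fitting with numpy; not the paper's QSAR pipeline) shows the consensus idea the abstract relies on: fit minimal random subsets, keep the candidate model with the most inliers, then refit on the consensus set:

```python
import numpy as np

def ransac_line(x, y, n_iter=300, threshold=0.5, seed=0):
    """Minimal RANSAC for a 1-D line: fit random 2-point samples, keep the
    candidate with the largest inlier set, then refit on that consensus set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        inliers = np.abs(y - (slope * x + intercept)) < threshold
        if inliers.sum() > best.sum():
            best = inliers
    coeffs = np.polyfit(x[best], y[best], 1)  # least-squares refit on inliers
    return coeffs, best

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 200)
y[:60] += rng.uniform(5, 30, 60)   # contaminate 30% of points with gross outliers

(slope, intercept), inliers = ransac_line(x, y)
```

Despite 30% gross contamination, the consensus fit recovers the underlying line; in the QSAR setting the same mechanism performs the outlier removal and applicability-domain estimation the abstract describes.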
US Fish and Wildlife Service, Department of the Interior — Winter waterfowl surveys have been conducted across much of the United States since 1935. Aerial surveys conducted using stratified random sampling have the...
Control capacity and a random sampling method in exploring controllability of complex networks.
Jia, Tao; Barabási, Albert-László
2013-01-01
Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participation. Here we introduce the control capacity, quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. This algorithm not only provides a statistical estimate of the control capacity, but also bridges the gap between multiple microscopic control configurations and macroscopic properties of the network under control. We demonstrate that the likelihood of being a driver node decreases with a node's in-degree and is independent of its out-degree. Given the inherent multiplicity of MDS's, our findings offer tools to explore control in various complex systems.
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease and sleep disorders. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
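The SRS feature-extraction step can be illustrated with a toy sketch: draw random time-domain points from the signal and summarise each draw with simple statistics. The SFS selection and LS_SVM classification stages are not reproduced here, and all sizes below are illustrative assumptions.

```python
import numpy as np

def srs_features(signal, n_samples=64, n_groups=4, rng=None):
    """Simple-random-sampling feature sketch: draw random time-domain
    points from the signal without replacement and summarise each
    draw with basic statistics (mean, std, min, max)."""
    rng = np.random.default_rng(rng)
    feats = []
    for _ in range(n_groups):
        idx = rng.choice(signal.size, size=n_samples, replace=False)
        s = signal[idx]
        feats.extend([s.mean(), s.std(), s.min(), s.max()])
    return np.array(feats)

rng = np.random.default_rng(0)
# Synthetic stand-in for one EEG channel: a rhythm plus noise.
eeg = np.sin(np.linspace(0, 40 * np.pi, 4096)) + rng.normal(0, 0.2, 4096)
f = srs_features(eeg, rng=0)   # 4 groups x 4 statistics = 16 features
```

A dimensionality-reduction step such as SFS would then prune this feature vector before classification.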
Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.
2016-01-01
Objective Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first attention bias modification (ABM) trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trend-level decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has been repeatedly identified with classical statistics, which assumes that data are independently and identically distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
Song, Zhuoyi; Zhou, Yu; Juusola, Mikko
2016-01-01
Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
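The photon-sampling bottleneck described above, many sampling units, each refractory after a photon hit, can be mimicked with a toy Poisson simulation. This is not the authors' 4-module model: the rates and refractory period below are invented, and the bump/photon gain it reports merely illustrates sublinear summation under saturation.

```python
import numpy as np

def sample_photon_hits(n_microvilli, photon_rate, dt, t_refractory, T, rng=None):
    """Toy photon-sampling sketch (illustrative assumptions throughout):
    photons arrive as a Poisson process, each lands on a random
    microvillus, and a refractory microvillus absorbs the photon
    without producing a quantum bump."""
    rng = np.random.default_rng(rng)
    ready_at = np.zeros(n_microvilli)      # time each unit becomes responsive
    photons = bumps = 0
    for k in range(int(T / dt)):
        t = k * dt
        for _ in range(rng.poisson(photon_rate * dt)):
            photons += 1
            m = rng.integers(n_microvilli)
            if ready_at[m] <= t:           # responsive: produce a bump
                bumps += 1
                ready_at[m] = t + t_refractory
    return photons, bumps

photons, bumps = sample_photon_hits(
    n_microvilli=500, photon_rate=5e4, dt=1e-4, t_refractory=0.1, T=1.0, rng=0)
gain = bumps / photons    # bump/photon gain drops below 1 in bright light
```

With few microvilli relative to the photon rate, most hits land on refractory units, so the gain falls well below one, the saturation effect the abstract attributes to multi-photon hits.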
Mok, Alexander; Haldar, Sumanto; Lee, Jetty Chung-Yung; Leow, Melvin Khee-Shing; Henry, Christiani Jeyakumar
2016-03-15
Cardio-Metabolic Disease (CMD) is the leading cause of death globally and particularly in Asia. Postprandial elevations of glycaemia, insulinaemia and triglyceridaemia are associated with an increased risk of CMD. While studies have shown that higher protein intake or increased meal frequency may benefit postprandial metabolism, their combined effect has rarely been investigated using composite mixed meals. We therefore examined the combined effects of increasing meal frequency (2 large vs 6 smaller meals) with high- or low-protein (40% vs 10% energy from protein, respectively) isocaloric mixed meals on a range of postprandial CMD risk markers. In a randomized crossover study, 10 healthy Chinese males (age: 29 ± 7 years; BMI: 21.9 ± 1.7 kg/m(2)) underwent 4 dietary treatments: CON-2 (2 Large Low-Protein meals), CON-6 (6 Small Low-Protein meals), PRO-2 (2 Large High-Protein meals) and PRO-6 (6 Small High-Protein meals). Subjects wore a continuous glucose monitor (CGM) and venous blood samples were obtained at baseline and at regular intervals for 8.5 h to monitor postprandial changes in glucose, insulin, triglycerides and high-sensitivity C-reactive protein (hsCRP). Blood pressure was measured at regular intervals pre- and post-meal consumption. Urine was collected to measure excretion of creatinine and F2-isoprostanes and their metabolites over the 8.5 h postprandial period. The high-protein meals, irrespective of meal frequency, were beneficial for glycaemic health, since the glucose incremental area under the curve (iAUC) for PRO-2 (185 ± 166 mmol.min.L(-1)) and PRO-6 (214 ± 188 mmol.min.L(-1)) was 66% and 60% lower, respectively (both statistically significant). There were no significant differences in postprandial responses in other measurements between the dietary treatments. The consumption of composite meals with higher protein content, irrespective of meal frequency, appears to be beneficial for postprandial glycemic and insulinemic responses in young, healthy Chinese males. Implications of this study may be useful in the Asian
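The glucose iAUC quoted above is a trapezoidal area computed on the rise above the baseline value. One common convention (ignoring dips below baseline) can be sketched as follows; the time points and glucose values are invented for illustration.

```python
import numpy as np

def incremental_auc(times, values):
    """Incremental area under the curve (iAUC) sketch: trapezoidal
    area of the rise above the baseline (first) value, with dips
    below baseline clipped to zero."""
    vals = np.asarray(values, float)
    rise = np.clip(vals - vals[0], 0.0, None)   # excursion above baseline
    dt = np.diff(np.asarray(times, float))
    return float(np.sum(dt * (rise[1:] + rise[:-1]) / 2.0))

t = np.array([0, 30, 60, 90, 120])              # minutes after the meal
glucose = np.array([5.0, 7.5, 6.5, 5.5, 5.0])   # mmol/L (hypothetical)
iauc = incremental_auc(t, glucose)              # mmol.min.L^-1
```
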
Suppression of stratified explosive interactions
Energy Technology Data Exchange (ETDEWEB)
Meeks, M.K.; Shamoun, B.I.; Bonazza, R.; Corradini, M.L. [Wisconsin Univ., Madison, WI (United States). Dept. of Nuclear Engineering and Engineering Physics
1998-01-01
Stratified Fuel-Coolant Interaction (FCI) experiments with Refrigerant-134a and water were performed in a large-scale system. Air was uniformly injected into the coolant pool to establish a pre-existing void which could suppress the explosion. Two competing effects due to the variation of the air flow rate seem to influence the intensity of the explosion in this geometrical configuration. At low flow rates, although the injected air increases the void fraction, the concurrent agitation and mixing increases the intensity of the interaction. At higher flow rates, the increase in void fraction tends to attenuate the propagated pressure wave generated by the explosion. Experimental results show a complete suppression of the vapor explosion at high rates of air injection, corresponding to an average void fraction larger than 30%. (author)
Stratified medicine for mental disorders.
Schumann, Gunter; Binder, Elisabeth B; Holte, Arne; de Kloet, E Ronald; Oedegaard, Ketil J; Robbins, Trevor W; Walker-Tilley, Tom R; Bitter, Istvan; Brown, Verity J; Buitelaar, Jan; Ciccocioppo, Roberto; Cools, Roshan; Escera, Carles; Fleischhacker, Wolfgang; Flor, Herta; Frith, Chris D; Heinz, Andreas; Johnsen, Erik; Kirschbaum, Clemens; Klingberg, Torkel; Lesch, Klaus-Peter; Lewis, Shon; Maier, Wolfgang; Mann, Karl; Martinot, Jean-Luc; Meyer-Lindenberg, Andreas; Müller, Christian P; Müller, Walter E; Nutt, David J; Persico, Antonio; Perugi, Giulio; Pessiglione, Mathias; Preuss, Ulrich W; Roiser, Jonathan P; Rossini, Paolo M; Rybakowski, Janusz K; Sandi, Carmen; Stephan, Klaas E; Undurraga, Juan; Vieta, Eduard; van der Wee, Nic; Wykes, Til; Haro, Josep Maria; Wittchen, Hans Ulrich
2014-01-01
There is recognition that biomedical research into the causes of mental disorders and their treatment needs to adopt new approaches to research. Novel biomedical techniques have advanced our understanding of how the brain develops and is shaped by behaviour and environment. This has led to the advent of stratified medicine, which translates advances in basic research by targeting aetiological mechanisms underlying mental disorder. The resulting increase in diagnostic precision and targeted treatments may provide a window of opportunity to address the large public health burden, and individual suffering, associated with mental disorders. While mental health and mental disorders have significant representation in the "health, demographic change and wellbeing" challenge identified in Horizon 2020, the framework programme for research and innovation of the European Commission (2014-2020), and in national funding agencies, clear advice on a potential strategy for mental health research investment is needed. The development of such a strategy is supported by the EC-funded "Roadmap for Mental Health Research" (ROAMER), which will provide recommendations for a European mental health research strategy integrating the areas of biomedicine, psychology, public health and well-being, research integration and structuring, and stakeholder participation. Leading experts on biomedical research on mental disorders have provided an assessment of the state of the art in core psychopathological domains, including arousal and stress regulation, affect, cognition, social processes, comorbidity and pharmacotherapy. They have identified major advances and promising methods and pointed out gaps to be addressed in order to achieve the promise of a stratified medicine for mental disorders. © 2013 Published by Elsevier B.V. and ECNP.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
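The stratified estimator the study compares can be sketched in a few lines: define strata from an auxiliary variable (here, predicted cross-pollination), sample grains within each, and combine the per-stratum means with size weights. The strata, sizes and transgene rates below are hypothetical stand-ins for the gene-flow-model output.

```python
import numpy as np

def stratified_estimate(strata_values, strata_sizes):
    """Stratified-sampling sketch: estimate a field-level rate as the
    size-weighted average of per-stratum sample means."""
    means = np.array([np.mean(v) for v in strata_values])
    weights = np.asarray(strata_sizes, float)
    weights = weights / weights.sum()
    return float(weights @ means)

rng = np.random.default_rng(0)
# Hypothetical field: a small stratum near the GM neighbour with a 2%
# transgene rate, and a large far stratum with a 0.1% rate.
near = rng.binomial(1, 0.02, size=100)   # sampled grains, near stratum
far = rng.binomial(1, 0.001, size=100)   # sampled grains, far stratum
rate = stratified_estimate([near, far], strata_sizes=[1000, 9000])
```

Because the high-rate stratum is sampled as intensively as the low-rate one but down-weighted by its true share of the field, the estimator gains precision exactly where the auxiliary variable is informative.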
Bouhanick, B; Berrut, G; Chameau, A M; Hallar, M; Bled, F; Chevet, B; Vergely, J; Rohmer, V; Fressinaud, P; Marre, M
1992-01-01
The predictive value of a random urine sample during an outpatient visit to predict persistent microalbuminuria was studied in 76 Type 1, insulin-dependent diabetic subjects, 61 Type 2, non-insulin-dependent diabetic subjects, and 72 Type 2, insulin-treated diabetic subjects. Seventy-six patients attended the outpatient clinic in the morning, and 133 in the afternoon. Microalbuminuria was suspected if Urinary Albumin Excretion (UAE) exceeded 20 mg/l. All patients were hospitalized within 6 months following the outpatient visit, and persistent microalbuminuria was then diagnosed if UAE was between 30 and 300 mg/24 h on two to three occasions in three urine samples. Of these 209 subjects, 83 were also screened with Microbumintest (Ames-Bayer), a semi-quantitative method. Among the 209 subjects, 71 were positive both for microalbuminuria during the outpatient visit and persistent microalbuminuria during hospitalization: sensitivity 91.0%, specificity 83.2%, concordance 86.1%, and positive predictive value 76.3% (chi-squared test: 191; p < 10^-4). Results did not differ between subjects examined in the morning and in the afternoon. Among the 83 subjects also screened with Microbumintest, 22 displayed both a positive reaction and persistent microalbuminuria: sensitivity 76%, specificity 81%, concordance 80%, and positive predictive value 69% (chi-squared test: 126; p < 10^-4). Both types of screening appeared equally effective during an outpatient visit. Hence, persistent microalbuminuria can be predicted during an outpatient visit in a diabetic clinic.
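The reported diagnostic statistics follow from a standard 2x2 screening table. The cell counts below are an assumption, reconstructed to be consistent with the abstract's figures (71 concordant positives among 209 subjects), but they reproduce the quoted sensitivity, specificity, concordance and positive predictive value.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, concordance (accuracy) and positive
    predictive value from a 2x2 screening table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    conc = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)
    return sens, spec, conc, ppv

# Hypothetical cell counts consistent with the abstract's rates:
# 71 true positives, 22 false positives, 7 false negatives, 109 true negatives.
sens, spec, conc, ppv = screening_metrics(tp=71, fp=22, fn=7, tn=109)
```
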
Effectiveness of hand hygiene education among a random sample of women from the community.
Ubheeram, J; Biranjia-Hurdoyal, S D
2017-03-01
The effectiveness of hand hygiene education was investigated by studying hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. A better hand hygiene awareness score was significantly associated with age, scarce bacterial growth and absence of potential pathogens. Of the 170 hand samples, bacterial growth was noted in 155 (91.2%), which included 91 (53.5%) heavy growth, 53 (31.2%) moderate growth and 11 (6.47%) scanty growth. The presence of enteric bacteria was associated with long nails (49.4% vs 29.2%; p = 0.007; OR = 2.3; 95% CI: 1.25-4.44), while finger rings were associated with higher bacterial load (p = 0.003). Coliforms were significantly more common among women who had a lower hand hygiene awareness score, washed their hands at a lower frequency (59.0% vs 32.8%; p = 0.003; OR = 2.9; 95% CI: 1.41-6.13) and used plain soap rather than antiseptic soap (69.7% vs 30.3%; p < 0.001; OR = 4.11; 95% CI: 1.67-10.12). The level of hand hygiene awareness among the participants was satisfactory, but compliance with hand washing practice was not, especially among the older women.
Association between stalking victimisation and psychiatric morbidity in a random community sample.
Purcell, Rosemary; Pathé, Michele; Mullen, Paul E
2005-11-01
No studies have assessed psychopathology among victims of stalking who have not sought specialist help. To examine the associations between stalking victimisation and psychiatric morbidity in a representative community sample. A random community sample (n=1844) completed surveys examining the experience of harassment and current mental health. The 28-item General Health Questionnaire (GHQ-28) and the Impact of Event Scale were used to assess symptomatology in those reporting brief harassment (n=196) or protracted stalking (n=236) and a matched control group reporting no harassment (n=432). Rates of caseness on the GHQ-28 were higher among stalking victims (36.4%) than among controls (19.3%) and victims of brief harassment (21.9%). Psychiatric morbidity did not differ according to the recency of victimisation, with 34.1% of victims meeting caseness criteria 1 year after stalking had ended. In a significant minority of victims, stalking victimisation is associated with psychiatric morbidity that may persist long after it has ceased. Recognition of the immediate and long-term impacts of stalking is necessary to assist victims and help alleviate distress and long-term disability.
Random sample community-based health surveys: does the effort to reach participants matter?
Messiah, Antoine; Castro, Grettel; Rodríguez de la Vega, Pura; Acuna, Juan M
2014-12-15
Conducting health surveys with community-based random samples are essential to capture an otherwise unreachable population, but these surveys can be biased if the effort to reach participants is insufficient. This study determines the desirable amount of effort to minimise such bias. A household-based health survey with random sampling and face-to-face interviews. Up to 11 visits, organised by canvassing rounds, were made to obtain an interview. Single-family homes in an underserved and understudied population in North Miami-Dade County, Florida, USA. Of a probabilistic sample of 2200 household addresses, 30 corresponded to empty lots, 74 were abandoned houses, 625 households declined to participate and 265 could not be reached and interviewed within 11 attempts. Analyses were performed on the 1206 remaining households. Each household was asked if any of their members had been told by a doctor that they had high blood pressure, heart disease including heart attack, cancer, diabetes, anxiety/ depression, obesity or asthma. Responses to these questions were analysed by the number of visit attempts needed to obtain the interview. Return per visit fell below 10% after four attempts, below 5% after six attempts and below 2% after eight attempts. As the effort increased, household size decreased, while household income and the percentage of interviewees active and employed increased; proportion of the seven health conditions decreased, four of which did so significantly: heart disease 20.4-9.2%, high blood pressure 63.5-58.1%, anxiety/depression 24.4-9.2% and obesity 21.8-12.6%. Beyond the fifth attempt, however, cumulative percentages varied by less than 1% and precision varied by less than 0.1%. In spite of the early and steep drop, sustaining at least five attempts to reach participants is necessary to reduce selection bias. Published by the BMJ Publishing Group Limited. 
Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model
Dean, David S.; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb
2016-09-01
The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting a logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0, L) and then periodically repeated over the whole real line, and study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f) ~ A/f², i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with a finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary, negative or positive, order k and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).
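The S(f) ~ A/f² law quoted above is the same low-frequency behaviour as for free Brownian motion, which makes it easy to check numerically. The sketch below simulates a plain random walk (no quenched potential, so it is not the periodic Sinai model) and fits the log-log slope of its periodogram at low frequencies.

```python
import numpy as np

def periodogram(x, dt):
    """One-sided periodogram S(f) of a sampled trajectory x(t)."""
    n = x.size
    xf = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=dt)
    S = (dt / n) * np.abs(xf) ** 2
    return f[1:], S[1:]                 # drop the zero-frequency bin

rng = np.random.default_rng(0)
n, dt = 2 ** 14, 1.0
x = np.cumsum(rng.normal(size=n))       # free Brownian motion
f, S = periodogram(x, dt)
low = f <= 0.02                         # restrict to the 1/f^2 regime
slope = np.polyfit(np.log(f[low]), np.log(S[low]), 1)[0]
```

The fitted slope should sit near -2; the fit is restricted to low frequencies because the discrete random walk's spectrum flattens near the Nyquist frequency.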
Weinhold, Jan; Hunger, Christina; Bornhäuser, Annette; Link, Leoni; Rochon, Justine; Wild, Beate; Schweitzer, Jochen
2013-10-01
The study examined the efficacy of nonrecurring family constellation seminars on psychological health. We conducted a monocentric, single-blind, stratified, and balanced randomized controlled trial (RCT). After choosing their roles for participating in a family constellation seminar as either active participant (AP) or observing participant (OP), 208 adults (M = 48 years, SD = 10; 79% women) from the general population were randomly allocated to the intervention group (IG; 3-day family constellation seminar; 64 AP, 40 OP) or a wait-list control group (WLG; 64 AP, 40 OP). It was predicted that family constellation seminars would improve psychological functioning (Outcome Questionnaire OQ-45.2) at 2-week and 4-month follow-ups. In addition, we assessed the impact of family constellation seminars on psychological distress and motivational incongruence. The IG showed significantly improved psychological functioning (d = 0.45 at 2-week follow-up, p = .003; d = 0.46 at 4-month follow-up, p = .003). Results were confirmed for psychological distress and motivational incongruence. No adverse events were reported. This RCT provides evidence for the efficacy of family constellation in a nonclinical population. The implications of the findings are discussed.
Random Model Sampling: Making Craig Interpolation Work When It Should Not
Directory of Open Access Journals (Sweden)
Marat Akhin
2014-01-01
One of the most serious problems when doing program analyses is dealing with function calls. While function inlining is the traditional approach to this problem, it nonetheless suffers from the increase in analysis complexity due to the state space explosion. Craig interpolation has been successfully used in recent years in the context of bounded model checking to do function summarization, which allows one to replace the complete function body with its succinct summary and, therefore, reduce the complexity. Unfortunately, this technique can be applied only to a pair of unsatisfiable formulae. In this work-in-progress paper we present an approach to function summarization based on Craig interpolation that overcomes its limitation by using random model sampling. It captures interesting input/output relations, strengthening satisfiable formulae into unsatisfiable ones and thus allowing the use of Craig interpolation. Preliminary experiments show the applicability of this approach; in our future work we plan to do a full evaluation on real-world examples.
Discriminative motif discovery via simulated evolution and random under-sampling.
Directory of Open Access Journals (Sweden)
Tao Song
Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Model (HMM) training, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
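The random under-sampling step used at HMM-training time can be sketched generically: trim each class down to the minority-class size by sampling indices without replacement. The data below are synthetic and the two-class setting is a simplification of the paper's multi-class problem.

```python
import numpy as np

def random_undersample(X, y, rng=None):
    """Random under-sampling sketch: reduce every class to the size
    of the smallest class by sampling indices without replacement."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    keep.sort()                     # preserve the original sample order
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 5))
y = np.array([0] * 100 + [1] * 30)  # imbalanced: 100 negatives vs 30 positives
Xb, yb = random_undersample(X, y, rng=0)
```

The balanced subset prevents the majority class from dominating the training objective, at the cost of discarding some majority-class examples.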
Schmidt, Jennifer; Martin, Alexandra
2016-09-01
Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
A coupled well-balanced and random sampling scheme for computing bubble oscillations*
Directory of Open Access Journals (Sweden)
Jung Jonathan
2012-04-01
We propose a finite volume scheme to study the oscillations of a spherical bubble of gas in a liquid phase. Spherical symmetry implies a geometric source term in the Euler equations. Our scheme satisfies the well-balanced property. It is based on the VFRoe approach. In order to avoid spurious pressure oscillations, the well-balanced approach is coupled with an ALE (Arbitrary Lagrangian Eulerian) technique at the interface and a random sampling remap.
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
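DREAM's key ingredient, generating proposals from the difference of two randomly chosen parallel chains, comes from Differential Evolution MCMC. The sketch below implements that simpler relative (fixed jump scale, no subspace sampling or outlier handling) on a toy Gaussian target; it is not the DREAM algorithm itself.

```python
import numpy as np

def demc_step(chains, log_post, gamma, rng):
    """One Differential-Evolution MCMC sweep: each chain proposes a
    jump along the difference of two other randomly chosen chains,
    accepted or rejected by a Metropolis rule."""
    n, d = chains.shape
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) \
               + rng.normal(0, 1e-4, d)          # small jitter for ergodicity
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    return chains

log_post = lambda x: -0.5 * x @ x                # target: standard 2-D Gaussian
rng = np.random.default_rng(0)
chains = rng.normal(0, 5, size=(10, 2))          # over-dispersed start
gamma = 2.38 / np.sqrt(2 * 2)                    # classic DE-MC scaling, d = 2
for _ in range(2000):
    chains = demc_step(chains, log_post, gamma, rng)
```

Because the jump scale is learned from the population itself, the proposal automatically adapts to the scale and orientation of the target, which is the property DREAM builds on.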
Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.
2015-03-01
Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR-measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel, and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e., mature, young, and combined) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest stratum in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
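The Random-Forest-versus-regression comparison can be reproduced in miniature with scikit-learn (assumed available). The synthetic "spectral bands" and nonlinear canopy response below are invented stand-ins for the Landsat predictors and LiDAR-derived targets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the Landsat-to-canopy-height regression: a nonlinear
# response that a single multiple linear regression cannot capture well.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 4))            # 4 synthetic spectral bands
y = 30 * X[:, 0] ** 2 + 5 * np.sin(6 * X[:, 1]) + rng.normal(0, 1, 400)

train, test = slice(0, 300), slice(300, 400)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[train], y[train])
r2 = rf.score(X[test], y[test])                 # held-out R^2
```

An ensemble of trees partitions the predictor space adaptively, so it captures the squared and sinusoidal terms that would bias a purely linear fit.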
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Daoud, Nihaya; Hayek, Samah; Sheikh Muhammad, Ahmad; Abu-Saad, Kathleen; Osman, Amira; Thrasher, James F; Kalter-Leibovici, Ofra
2015-07-16
Despite advanced smoking prevention and cessation policies in many countries, the prevalence of cigarette smoking among indigenous and some ethnic minorities continues to be high. This study examined the stages of change (SOC) of the readiness to quit smoking among Arab men in Israel shortly after new regulations of free-of-charge smoking cessation workshops and subsidized medications were introduced through primary health care clinics. We conducted a countrywide study in Israel between 2012 and 2013. Participants (735 current smokers, aged 18-64 years) were recruited from a stratified random sample and interviewed face-to-face using a structured questionnaire in Arabic. We used ordered regression to examine the contribution of socio-economic position (SEP), health status, psychosocial attributes, smoking-related factors, and physician advice to the SOC of the readiness to quit smoking (pre-contemplation, contemplation and preparation). Of the current smokers, 61.8% were at the pre-contemplation stage, 23.8% were at the contemplation stage, and only 14.4% were at the preparation stage. In the multinomial analysis, factors significantly associated with the contemplation stage compared to the pre-contemplation stage included [odds ratio (OR), 95% confidence interval (CI)]: chronic morbidity [0.52, (0.31-0.88)], social support [1.35, (1.07-1.70)], duration of smoking for 11-21 years [1.94, (1.07-3.50)], three or more previous attempts to quit [2.27, (1.26-4.01)], knowledge about smoking hazards [1.75, (1.29-2.35)], positive attitudes toward smoking prevention [1.44, (1.14-1.82)], and physician advice to quit smoking [1.88, (1.19-2.97)]. The factors significantly associated with the preparation stage compared to the pre-contemplation stage were [OR, (95% CI)]: chronic morbidity [0.36, (0.20-0.67)], anxiety [1.07, (1.01-1.13)], social support [1.34, (1.01-1.78)], duration of smoking 5 years or less [2.93, (1.14-7.52)], three or more previous attempts to quit [3.16, (1.60-6.26)], knowledge about smoking hazards [1.57, (1.10-2.21)], and …
Stratified Sampling–When the Optimum Allocation Demands More than 100% Sampling
Indian Academy of Sciences (India)
R Vasudeva. Classroom, Resonance – Journal of Science Education, Volume 2, Issue 4, April 1997, pp. 74-76.
Forest inventory and stratified estimation: a cautionary note
John Coulston
2008-01-01
The Forest Inventory and Analysis (FIA) Program uses stratified estimation techniques to produce estimates of forest attributes. Stratification must be unbiased and stratification procedures should be examined to identify any potential bias. This note explains simple techniques for identifying potential bias, discriminating between sample bias and stratification bias,...
A Dexterous Optional Randomized Response Model
Tarray, Tanveer A.; Singh, Housila P.; Yan, Zaizai
2017-01-01
This article addresses the problem of estimating the proportion π_S of the population belonging to a sensitive group using an optional randomized response technique in stratified sampling, based on the Mangat model, with proportional and Neyman allocation and larger gain in efficiency. Numerically, it is found that the suggested model is…
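For readers unfamiliar with randomized response, the sketch below simulates Warner's classic (1965) model, the precursor of the Mangat-type optional design the article builds on; this is not the article's exact estimator, and the true proportion, device probability, and sample size are all hypothetical:

```python
# Sketch of Warner's randomized response estimator: each respondent secretly
# answers either "I am in the sensitive group" (w.p. P) or its negation
# (w.p. 1-P), so the interviewer never learns which question was answered.
import numpy as np

rng = np.random.default_rng(1)
pi_true = 0.30        # true sensitive-group proportion (hypothetical)
P = 0.7               # probability the device selects the direct statement
n = 100_000

member = rng.random(n) < pi_true
ask_direct = rng.random(n) < P
# Respondent says "yes" if the selected statement is true for them.
yes = np.where(ask_direct, member, ~member)

lam_hat = yes.mean()                        # observed "yes" proportion
pi_hat = (lam_hat - (1 - P)) / (2 * P - 1)  # Warner (1965) estimator
print(round(pi_hat, 3))
```

Since E[lam] = P*pi + (1-P)*(1-pi), inverting that relation recovers an unbiased estimate of pi without any individual disclosure.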
Analysis of photonic band-gap structures in stratified medium
DEFF Research Database (Denmark)
Tong, Ming-Sze; Yinchao, Chen; Lu, Yilong
2005-01-01
Purpose - To demonstrate the flexibility and advantages of a non-uniform pseudo-spectral time domain (nu-PSTD) method through studies of the wave propagation characteristics on photonic band-gap (PBG) structures in stratified medium. Design/methodology/approach - A nu-PSTD method is proposed … with the transformed space technique in order to make the algorithm flexible in terms of non-uniform spatial sampling. Findings - Through the studies of the wave propagation characteristics on PBG structures in stratified medium, it has been found that the proposed method retains excellent accuracy in the occasions … in electromagnetic and microwave applications once the Maxwell's equations are appropriately modeled. Originality/value - The method validates its values and properties through extensive studies on regular and defective 1D PBG structures in stratified medium, and it can be further extended to solving more …
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.
RIJCKEN, B; SCHOUTEN, JP; WEISS, ST; ROSNER, B; DEVRIES, K; VANDERLENDE, R
1993-01-01
Long-term variability of bronchial responsiveness has been studied in a random population sample of adults. During a follow-up period of 18 yr, 2,216 subjects contributed 5,012 observations to the analyses. Each subject could have as many as seven observations. Bronchial responsiveness was assessed
Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia
Directory of Open Access Journals (Sweden)
Fady S. Moiety
2014-06-01
Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treatment …
Lee, Chul-Ho; Eun, Do Young
2012-01-01
Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and the Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks, namely slow diffusion over the space, can cause poor estimation accuracy. In this paper, we propose non-backtracking random walk with re-weighting (NBRW-rw) and MH algorithm with delayed acceptance (MHDA) which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm,...
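A minimal sketch of the baseline Metropolis-Hastings random walk for uniform node sampling (the method the paper improves on, not the proposed NBRW-rw/MHDA); the graph and walk length are illustrative:

```python
# Sketch: MH random walk targeting the uniform distribution over nodes of an
# undirected graph. A move to neighbour v is accepted w.p. min(1, deg(u)/deg(v)),
# which corrects the degree bias of the simple random walk.
import random
from collections import Counter

random.seed(0)
# Small undirected graph with unequal degrees (adjacency lists, symmetric).
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}

def mh_walk(adj, start, steps):
    counts = Counter()
    u = start
    for _ in range(steps):
        v = random.choice(adj[u])
        if random.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v  # accept the move; otherwise stay at u
        counts[u] += 1
    return counts

steps = 200_000
counts = mh_walk(adj, 0, steps)
freqs = {node: c / steps for node, c in counts.items()}
print(freqs)  # each of the 5 nodes should be visited ~20% of the time
```

The slow-diffusion drawback the abstract mentions shows up here as the walk's tendency to linger at low-degree nodes (rejected proposals), which NBRW-rw/MHDA are designed to mitigate.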
Baudron, Paul; Alonso-Sarría, Francisco; García-Aróstegui, José Luís; Cánovas-García, Fulgencio; Martínez-Vicente, David; Moreno-Brotóns, Jesús
2013-08-01
Accurate identification of the origin of groundwater samples is not always possible in complex multilayered aquifers. This poses a major difficulty for a reliable interpretation of geochemical results. The problem is especially severe when the information on the tubewells design is hard to obtain. This paper shows a supervised classification method based on the Random Forest (RF) machine learning technique to identify the layer from which groundwater samples were extracted. The classification rules were based on the major ion composition of the samples. We applied this method to the Campo de Cartagena multi-layer aquifer system, in southeastern Spain. A large amount of hydrogeochemical data was available, but only a limited fraction of the sampled tubewells included a reliable determination of the borehole design and, consequently, of the aquifer layer being exploited. An added difficulty was the very similar compositions of water samples extracted from different aquifer layers. Moreover, not all groundwater samples included the same geochemical variables. Despite the difficulty of such a background, the Random Forest classification reached accuracies over 90%. These results were much better than those of the Linear Discriminant Analysis (LDA) and Decision Trees (CART) supervised classification methods. From a total of 1549 samples, 805 proceeded from one unique identified aquifer, 409 proceeded from a possible blend of waters from several aquifers and 335 were of unknown origin. Only 468 of the 805 unique-aquifer samples included all the chemical variables needed to calibrate and validate the models. Finally, 107 of the groundwater samples of unknown origin could be classified. Most unclassified samples did not feature a complete dataset. The uncertainty in the identification of training samples was taken into account to enhance the model. Most of the samples that could not be identified had an incomplete dataset.
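The classification approach described above can be sketched on synthetic data; the "ion" features, class structure, and accuracies below are stand-ins, not the Campo de Cartagena measurements:

```python
# Sketch: Random Forest classification of sample origin from ion concentrations.
# Three hypothetical "aquifer layers" with overlapping but distinct signatures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_layer, n_ions = 200, 6
means = rng.uniform(0, 3, size=(3, n_ions))        # layer-specific mean chemistry
X = np.vstack([rng.normal(m, 1.0, size=(n_per_layer, n_ions)) for m in means])
y = np.repeat([0, 1, 2], n_per_layer)              # layer labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validated accuracy
print("mean CV accuracy:", scores.mean())
```

Cross-validated accuracy is the natural analogue of the calibrate/validate split on the 468 complete-variable samples described in the abstract.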
DEFF Research Database (Denmark)
Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan
… of European Communities (CEC, 1985), respectively, both using the FAO 1974 classification. Furthermore, the effects of implementing soil-landscape relationships, using area-proportional sampling instead of per-polygon sampling, and replacing the default C5.0 classification tree algorithm with a random forest algorithm were evaluated. The resulting maps were validated on 777 soil profiles situated in a grid covering Denmark. The experiments showed that the results obtained with Jacobsen's map were more accurate than the results obtained with the CEC map, despite a nominally coarser scale of 1:2,000,000 vs. 1:1,000,000. This finding is probably related to the fact that Jacobsen's map was more detailed, with a larger number of polygons, soil map units and soil types, despite its coarser scale. The results showed that the implementation of soil-landscape relationships, area-proportional sampling and the random forest …
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice and the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
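The two probability designs named above, simple random sampling and stratified random sampling, can be contrasted in a short sketch; the sampling frame and "clinic" strata are toy examples:

```python
# Sketch: simple random sample vs. proportionally allocated stratified sample.
import random

random.seed(3)
frame = [{"id": i, "clinic": "A" if i < 800 else "B"} for i in range(1000)]

# Simple random sample: every unit has the same selection probability.
srs = random.sample(frame, 100)

def stratified_sample(frame, key, n):
    """Proportional-allocation stratified random sample on `key`."""
    strata = {}
    for unit in frame:
        strata.setdefault(unit[key], []).append(unit)
    out = []
    for members in strata.values():
        # Note: round() can make the total differ slightly from n in general.
        k = round(n * len(members) / len(frame))
        out.extend(random.sample(members, k))
    return out

strat = stratified_sample(frame, "clinic", 100)
# Stratification guarantees clinic A contributes exactly 80 of 100 units here;
# the simple random sample only hits 80 on average.
print(sum(u["clinic"] == "A" for u in strat))  # → 80
```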
Edgington, Eugene
2007-01-01
Statistical Tests That Do Not Require Random Sampling Randomization Tests Numerical Examples Randomization Tests and Nonrandom Samples The Prevalence of Nonrandom Samples in Experiments The Irrelevance of Random Samples for the Typical Experiment Generalizing from Nonrandom Samples Intelligibility Respect for the Validity of Randomization Tests Versatility Practicality Precursors of Randomization Tests Other Applications of Permutation Tests Questions and Exercises Notes References Randomized Experiments Unique Benefits of Experiments Experimentation without Mani
Analytic and geometric study of stratified spaces
Pflaum, Markus J
2001-01-01
The book provides an introduction to stratification theory leading the reader up to modern research topics in the field. The first part presents the basics of stratification theory, in particular the Whitney conditions and Mather's control theory, and introduces the notion of a smooth structure. Moreover, it explains how one can use smooth structures to transfer differential geometric and analytic methods from the arena of manifolds to stratified spaces. In the second part the methods established in the first part are applied to particular classes of stratified spaces like for example orbit spaces. Then a new de Rham theory for stratified spaces is established and finally the Hochschild (co)homology theory of smooth functions on certain classes of stratified spaces is studied. The book should be accessible to readers acquainted with the basics of topology, analysis and differential geometry.
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to the advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that totally generates all the random entries of measurement matrix, our scheme owns efficiency superiority while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has a good security performance and has robustness against noise and occlusion.
Alcohol and marijuana use in adolescents' daily lives: a random sample of experiences.
Larson, R; Csikszentmihalyi, M; Freeman, M
1984-07-01
High school students filled out reports on their experiences at random times during their daily lives, including 48 occasions when they were using alcohol or marijuana. Alcohol use was reported primarily in the context of Friday and Saturday night social gatherings and was associated with a happy and gregarious subjective state. Marijuana use was reported across a wider range of situations and was associated with an average state that differed much less from ordinary experience.
Random or systematic sampling to detect a localised microbial contamination within a batch of food
Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.
2011-01-01
Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, like for instance peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed
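The trade-off studied above, random versus systematic sampling for detecting a localised contamination, can be illustrated with a small Monte Carlo sketch; the batch size, cluster size, and sample counts are illustrative, not the paper's values:

```python
# Sketch: probability of detecting a contiguous contaminated cluster in a batch
# of N units, using n random vs. n evenly spaced (systematic) sample positions.
import random

random.seed(4)
N, cluster, n, trials = 1000, 50, 20, 5000  # illustrative sizes

def detects(positions, start):
    contaminated = set(range(start, start + cluster))
    return any(p in contaminated for p in positions)

hits_rand = hits_sys = 0
for _ in range(trials):
    start = random.randrange(N - cluster)            # random cluster location
    if detects(random.sample(range(N), n), start):   # random sampling
        hits_rand += 1
    offset = random.randrange(N // n)                # random phase, fixed spacing
    if detects(range(offset, N, N // n), start):     # systematic sampling
        hits_sys += 1

print("random detection rate:    ", hits_rand / trials)
print("systematic detection rate:", hits_sys / trials)
```

With the spacing equal to the cluster length, the systematic design always places exactly one sample inside the cluster, while random sampling misses it a substantial fraction of the time, the kind of contrast the abstract's two-class sampling-plan analysis addresses.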
Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine
2002-01-01
New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the...
The Risk-Stratified Osteoporosis Strategy Evaluation study (ROSE)
DEFF Research Database (Denmark)
Rubin, Katrine Hass; Holmberg, Teresa; Rothmann, Mette Juel
2015-01-01
The risk-stratified osteoporosis strategy evaluation study (ROSE) is a randomized prospective population-based study investigating the effectiveness of a two-step screening program for osteoporosis in women. This paper reports the study design and baseline characteristics of the study population. 35,000 women aged 65-80 years were selected at random from the population in the Region of Southern Denmark and, before inclusion, randomized to either a screening group or a control group. As a first step, a self-administered questionnaire regarding risk factors for osteoporosis based on FRAX® was issued to both groups. As a second step, subjects in the screening group with a 10-year probability of major osteoporotic fractures ≥15% were offered a DXA scan. Patients diagnosed with osteoporosis from the DXA scan were advised to see their GP and discuss pharmaceutical treatment according to Danish …
Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark
DEFF Research Database (Denmark)
Kleven, Henrik Jacobsen; Knudsen, Martin B.; Kreiner, Claus Thustrup
2010-01-01
This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were...
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
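For a proportion outcome, the sample-size reasoning above follows the standard formula n = z^2 * p(1-p) / d^2, where z is the normal quantile for the confidence level, p the expected proportion, and d the desired margin of error. A minimal sketch with illustrative inputs:

```python
# Sketch: minimum sample size for estimating a proportion to within +/- d
# at a given confidence level, n = z^2 * p * (1 - p) / d^2.
import math
from statistics import NormalDist

def sample_size_proportion(p, d, confidence=0.95):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z quantile
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Expected prevalence 50%, margin of error +/-5%, 95% confidence:
print(sample_size_proportion(0.5, 0.05))  # → 385
```

Note how tightening the precision (smaller d) or raising the confidence level inflates n, exactly the "more precision, greater sample size" point in the abstract.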
Fitting stratified proportional odds models by amalgamating conditional likelihoods.
Mukherjee, Bhramar; Ahn, Jaeil; Liu, Ivy; Rathouz, Paul J; Sánchez, Brisa N
2008-10-30
Classical methods for fitting a varying intercept logistic regression model to stratified data are based on the conditional likelihood principle to eliminate the stratum-specific nuisance parameters. When the outcome variable has multiple ordered categories, a natural choice for the outcome model is a stratified proportional odds or cumulative logit model. However, classical conditioning techniques do not apply to the general K-category cumulative logit model (K>2) with varying stratum-specific intercepts as there is no reduction due to sufficiency; the nuisance parameters remain in the conditional likelihood. We propose a methodology to fit stratified proportional odds model by amalgamating conditional likelihoods obtained from all possible binary collapsings of the ordinal scale. The method allows for categorical and continuous covariates in a general regression framework. We provide a robust sandwich estimate of the variance of the proposed estimator. For binary exposures, we show equivalence of our approach to the estimators already proposed in the literature. The proposed recipe can be implemented very easily in standard software. We illustrate the methods via three real data examples related to biomedical research. Simulation results comparing the proposed method with a random effects model on the stratification parameters are also furnished. Copyright 2008 John Wiley & Sons, Ltd.
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10^5 lines per second per core on a standard current day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
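The core idea, spending more Voigt evaluations on stronger lines, can be sketched as follows; the allocation rule and line parameters are simple stand-ins for the paper's scheme, with the Voigt profile computed from the Faddeeva function (scipy.special.wofz):

```python
# Sketch: Voigt profile via the Faddeeva function, with the number of
# evaluation points scaled by (log) line strength - an illustrative
# allocation rule, not the paper's exact recipe.
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    # Voigt = Re[w(z)] / (sigma * sqrt(2*pi)), z = (x + i*gamma) / (sigma*sqrt(2))
    z = (x + 1j * gamma) / (sigma * np.sqrt(2))
    return wofz(z).real / (sigma * np.sqrt(2 * np.pi))

line_strengths = np.array([1.0, 1e-3, 1e-6])
# More samples for stronger lines; weak lines get a cheap coarse treatment.
n_samples = (10 + 30 * np.log10(line_strengths / line_strengths.min())).round().astype(int)

profiles = []
for S, n in zip(line_strengths, n_samples):
    x = np.linspace(-10.0, 10.0, int(n))
    profiles.append(S * voigt(x, sigma=1.0, gamma=0.5))
```

The normalised Voigt profile integrates to one, which is what lets a scheme like this preserve integrated line opacity even for coarsely sampled weak lines.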
Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.
2015-11-01
Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples.
Yadav, B K; Adhikari, S; Gyawali, P; Shrestha, R; Poudel, B; Khanal, M
2010-06-01
The present study was undertaken over a period of 6 months (September 2008-February 2009) to assess the correlation of 24-hour urine protein estimation with the random spot protein:creatinine (P:C) ratio among diabetic patients. The study comprised 144 patients aged 30-70 years, recruited from Kantipur hospital, Kathmandu. A 24-hr urine sample was collected, followed by a random spot urine sample. Both samples were analyzed for protein and creatinine excretion. Informed consent was taken from all participants. Sixteen inadequately collected urine samples, as defined by (predicted creatinine - measured creatinine)/predicted creatinine > 0.2, were excluded from analysis. The Spearman's rank correlation between the spot urine P:C ratio and 24-hr total protein was computed using the Statistical Package for Social Sciences. At a P:C ratio cutoff of 0.15 and a reference method (24-hr urine protein) cutoff of 150 mg/day, the correlation coefficient was found to be 0.892. The spot P:C ratio can thus stand in for 24-hr urine collection, but the cutoff should be carefully selected for different patient groups under different laboratory procedures and settings.
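The analysis named above, Spearman's rank correlation between the spot P:C ratio and 24-hr protein, can be sketched on simulated data; the distributions and noise level are assumptions, not the study's measurements:

```python
# Sketch: Spearman rank correlation between a spot urine P:C ratio and 24-hr
# urine protein, on simulated patients (not the study's data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n = 128
protein_24h = rng.lognormal(mean=5.0, sigma=1.0, size=n)  # mg/day, skewed
# Spot P:C ratio tracks 24-hr protein with multiplicative measurement noise.
pc_ratio = protein_24h / 1000 * rng.lognormal(0, 0.3, size=n)

rho, p_value = spearmanr(pc_ratio, protein_24h)
print("Spearman rho:", round(rho, 3))
```

Spearman's rho is the appropriate choice here because both measures are strongly right-skewed, so a rank-based correlation is more robust than Pearson's r.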
Ibrahim, Azianah; Singh, Devinder Kaur Ajit; Shahar, Suzana
2017-01-01
The aim of this study was to establish 'Timed Up and Go' test (TUG) normative data among community-dwelling older adults stratified by cognitive status, gender, and age group. A total of 2084 community-dwelling older adults from waves I and II were recruited through a multistage random sampling method. TUG was performed using the standard protocol, and scores were then stratified by mild cognitive impairment (MCI) status, gender, and 5-year age groups ranging from the 60s to the 80s. 529 (16%) participants were identified as having MCI. A past history of falls and a medical history of hypertension, heart disease, joint pain, hearing and vision problems, and urinary incontinence were found to influence TUG performance. Cognitive status, as a mediator, predicted TUG performance even when both gender and age were controlled for (B 0.24, 95% CI 0.02-0.47, β 0.03, t 2.10, p = 0.36). Further descriptive analysis showed that participants with MCI, women, and older participants took longer to complete the TUG compared with men with MCI across all age groups, with exceptions for some age groups. These results suggest that MCI needs to be taken into consideration when testing older adults using the TUG, besides age and gender factors. Data using fast-speed TUG may be required among older adults with and without MCI for further understanding.
Lusinchi, Dominic
2017-03-01
The scientific pollsters (Archibald Crossley, George H. Gallup, and Elmo Roper) emerged onto the American news media scene in 1935. Much of what they did in the following years (1935-1948) was to promote both the political and scientific legitimacy of their enterprise. They sought to be recognized as the sole legitimate producers of public opinion. In this essay I examine the mostly overlooked rhetorical work deployed by the pollsters to publicize the scientific credentials of their polling activities, and the central role the concept of sampling has had in that pursuit. First, they distanced themselves from the failed straw poll by claiming that their sampling methodology based on quotas was informed by science. Second, although in practice they did not use random sampling, they relied on it rhetorically to derive the symbolic benefits of being associated with the "laws of probability." © 2017 Wiley Periodicals, Inc.
The effect of dead time on randomly sampled power spectral estimates
DEFF Research Database (Denmark)
Buchhave, Preben; Velte, Clara Marika; George, William K.
2014-01-01
We consider both the effect on the measured spectrum of a finite sampling time, i.e., a finite time during which the signal is acquired, and of a finite dead time, that is, a time in which the signal processor is busy evaluating a data point and is therefore unable to measure a subsequent data point arriving within the dead time delay.
Phase microscopy of technical and biological samples through random phase modulation with a diffuser
DEFF Research Database (Denmark)
Almoro, Percival; Pedrini, Giancarlo; Gundu, Phanindra Narayan
2010-01-01
A technique for phase microscopy using a phase diffuser and a reconstruction algorithm is proposed. A magnified specimen wavefront is projected onto the diffuser plane, which modulates the wavefront into a speckle field. The speckle patterns at axially displaced planes are sampled and used in an iter...
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
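For the classical neutral special case, sampling from the Ewens sampling formula can be illustrated with the standard Chinese-restaurant construction. The sketch below is that textbook construction with mutation parameter `theta`; it is not the paper's generalized mutation/selection ESF.

```python
import random

def crp_partition(n, theta, rng):
    """Sample an allelic partition of n genes under the classical
    Ewens sampling formula via the Chinese-restaurant process:
    the k-th gene founds a new allelic type with probability
    theta/(theta + k), else joins a type proportionally to its count."""
    counts = []  # counts[i] = number of genes of allelic type i
    for k in range(n):
        if rng.random() < theta / (theta + k):
            counts.append(1)  # new allelic type
        else:
            r = rng.uniform(0, k)  # existing types hold k genes in total
            acc = 0.0
            for i, c in enumerate(counts):
                acc += c
                if r < acc:
                    counts[i] += 1
                    break
    return sorted(counts, reverse=True)
```

Larger `theta` yields partitions with more, smaller allelic classes, mirroring the mutation-dominated regime discussed in the abstract.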
The psychometric properties of the AUDIT: a survey from a random sample of elderly Swedish adults.
Källmén, Håkan; Wennberg, Peter; Ramstedt, Mats; Hallgren, Mats
2014-07-01
Increasing alcohol consumption and related harms have been reported among the elderly population of Europe. Consequently, it is important to monitor patterns of alcohol use, and to use a valid and reliable tool when screening for risky consumption in this age group. The aim was to evaluate the internal consistency reliability and construct validity of the Alcohol Use Disorders Identification Test (AUDIT) in elderly Swedish adults, and to compare the results with the general Swedish population. Another aim was to calculate the level of alcohol consumption (AUDIT-C) to be used for comparison in future studies. The questionnaire was sent to 1459 Swedish adults aged 79-80 years, with a response rate of 73.3%. Internal consistency reliability was assessed using Cronbach's alpha, and confirmatory factor analysis assessed the construct validity of the AUDIT in the elderly population as compared to a Swedish general population sample. The results showed that the AUDIT was more reliable and valid among the Swedish general population sample than among the elderly, and that items 1 and 4 of the AUDIT were less reliable and valid among the elderly. While the AUDIT showed acceptable psychometric properties in the general population sample, its performance was poorer among the elderly respondents. Further psychometric assessments of the AUDIT in elderly populations are required before it is implemented more widely.
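Cronbach's alpha, the internal-consistency measure used above, is computed from item variances and the variance of the total score. A minimal stdlib sketch with invented item data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: list of item-score lists, one inner list per item,
    aligned across respondents (same length for every item)."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def var(xs):
        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))
```

Perfectly parallel items give alpha = 1; items that covary not at all pull alpha toward 0, which is why a low alpha on specific AUDIT items signals poor reliability in a subpopulation.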
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replicating sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed, and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these (n = 62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
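An a priori sample size calculation of the kind these trials report combines significance level, power, and the minimum clinically important (standardized) effect size. A sketch using the standard normal-approximation formula for a two-group comparison; the figures are generic, not drawn from the reviewed trials, and the exact t-based result would be slightly larger:

```python
from statistics import NormalDist
import math

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for a two-sided,
    two-sample comparison of means with standardized effect size d:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2, rounded up."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_b = NormalDist().inv_cdf(power)          # e.g. 0.84 for power=0.80
    return math.ceil(2 * (z_a + z_b) ** 2 / effect_size ** 2)
```

Reporting the three inputs, as CONSORT asks, is exactly what makes such a calculation replicable.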
Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses
Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.
2011-01-01
Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373
Detection and monitoring of invasive exotic plants: a comparison of four sampling methods
Cynthia D. Huebner
2007-01-01
The ability to detect and monitor exotic invasive plants is likely to vary depending on the sampling method employed. Methods with strong qualitative thoroughness for species detection often lack the intensity necessary to monitor vegetation change. Four sampling methods (systematic plot, stratified-random plot, modified Whittaker, and timed meander) in hemlock and red...
CTEPP STANDARD OPERATING PROCEDURE FOR SAMPLE SELECTION (SOP-1.10)
The procedures for selecting CTEPP study subjects are described in the SOP. The primary, county-level stratification is by region and urbanicity. Six sample counties in each of the two states (North Carolina and Ohio) are selected using stratified random sampling and reflect ...
Fitó, Montserrat; Estruch, Ramón; Salas-Salvadó, Jordi; Martínez-Gonzalez, Miguel Angel; Arós, Fernando; Vila, Joan; Corella, Dolores; Díaz, Oscar; Sáez, Guillermo; de la Torre, Rafael; Mitjavila, María-Teresa; Muñoz, Miguel Angel; Lamuela-Raventós, Rosa-María; Ruiz-Gutierrez, Valentina; Fiol, Miquel; Gómez-Gracia, Enrique; Lapetra, José; Ros, Emilio; Serra-Majem, Lluis; Covas, María-Isabel
2014-05-01
Scarce data are available on the effect of the traditional Mediterranean diet (TMD) on heart failure biomarkers. We assessed the effect of the TMD on biomarkers related to heart failure in a population at high cardiovascular disease risk. A total of 930 subjects at high cardiovascular risk (420 men and 510 women) were recruited in the framework of a multicentre, randomized, controlled, parallel-group clinical trial directed at testing the efficacy of the TMD in the primary prevention of cardiovascular disease (the PREDIMED Study). Participants were assigned to a low-fat diet (control, n = 310) or one of two TMDs [TMD + virgin olive oil (VOO) or TMD + nuts]. Depending on group assignment, participants received free provision of extra-virgin olive oil, mixed nuts, or small non-food gifts. After 1 year of intervention, both TMDs decreased plasma N-terminal pro-brain natriuretic peptide, with changes reaching significance vs. the control group (P cardiovascular disease (CVD) who improved their diet toward a TMD pattern reduced their N-terminal pro-brain natriuretic peptide compared with those assigned to a low-fat diet. The same was found for in vivo oxidized low-density lipoprotein and lipoprotein(a) plasma concentrations after the TMD + VOO diet. From our results, the TMD could be a useful tool to mitigate risk factors for heart failure and could shift markers of heart failure towards a more protective mode. © 2014 The Authors. European Journal of Heart Failure © 2014 European Society of Cardiology.
Diederich, Adele; Oswald, Peter
2014-01-01
A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process, attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered (the attention time), influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time with different variances are investigated. Depending on the time and order schedule, the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability p0 > 0 of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions also hold for the widely applied Wiener process.
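The finite-horizon behavior described above — an Ornstein-Uhlenbeck accumulator that may fail to reach either boundary within the available time, giving p0 > 0 — can be illustrated with a simple simulation. Note this sketch uses an Euler-Maruyama discretization rather than the paper's birth-death approximation, and all parameter values are arbitrary:

```python
import random

def ou_first_passage(drift_target, rate, sigma, bound,
                     dt=0.01, t_max=10.0, rng=None):
    """Simulate an Ornstein-Uhlenbeck accumulator
        dX = rate * (drift_target - X) dt + sigma dW
    until it hits +/-bound or the finite decision horizon t_max.
    Returns (choice, time); choice is None when no boundary is
    reached in time (the p0 > 0 case in the model)."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while t < t_max:
        x += rate * (drift_target - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
        if x >= bound:
            return (+1, t)
        if x <= -bound:
            return (-1, t)
    return (None, t_max)
```

Running this for many trials and several attention-time schedules would reproduce the qualitative pattern the abstract describes: choice probabilities, response-time distributions, and a nonzero no-decision probability under a finite horizon.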
Directory of Open Access Journals (Sweden)
Gunter eSpöck
2015-05-01
Recently, Spock and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed into an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spock and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
Towards Cost-efficient Sampling Methods
Peng, Luo; Chong, Wu
2014-01-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the observation that a small set of vertices with high node degree can carry most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method improves on the stratified random sampling method and selects high-degree nodes with higher probability by classifying the nodes according to their degree distribution. The second sampling method improves the existing snowball sampling method so that it can sample the targeted nodes selectively in every sampling step. Besides, the two proposed sampling methods sample not only the nodes but also the edges directly connected to these nodes. In order to demonstrate the two methods' availability and accuracy, we compare them with the existing sampling methods in...
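The first idea — stratify nodes by degree and oversample the high-degree stratum, keeping incident edges — can be sketched as follows. The stratum cut-off (top 10% by degree) and the high-stratum fraction are hypothetical choices for illustration, not the paper's parameters:

```python
import random

def degree_stratified_sample(adj, n_samples, high_frac=0.7, rng=None):
    """Sketch of degree-stratified node sampling: put the top 10% of
    nodes by degree in a 'high' stratum, draw a larger share of the
    sample from it, and return the sampled nodes together with the
    edges incident to them. `adj` maps node -> set of neighbours."""
    rng = rng or random.Random()
    nodes = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    cut = max(1, len(nodes) // 10)          # top 10% by degree
    high, low = nodes[:cut], nodes[cut:]
    n_high = min(len(high), round(n_samples * high_frac))
    n_low = min(len(low), n_samples - n_high)
    sample = rng.sample(high, n_high) + rng.sample(low, n_low)
    edges = {tuple(sorted((u, w))) for u in sample for w in adj[u]}
    return sample, edges
```

On a hub-and-spoke graph this reliably captures the hub and hence nearly all edges, which is the structural-information argument the abstract makes.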
Dynamics of Vorticity Defects in Stratified Shear
2010-10-19
vanishes at points on different Riemann sheets. While doing the initial value problem this leads to observation of a collective behaviour of the... It is hypothesized that there would exist a band of wavenumbers in which a stratified shear flow, modelled by 'defect' theory, will be stable.
Nitrogen transformations in stratified aquatic microbial ecosystems
DEFF Research Database (Denmark)
Revsbech, Niels Peter; Risgaard-Petersen, N.; Schramm, Andreas
2006-01-01
New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights into how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about n...
Direct multiangle solution for poorly stratified atmospheres
Vladimir Kovalev; Cyle Wold; Alexander Petkov; Wei Min Hao
2012-01-01
The direct multiangle solution is considered, which improves scanning-lidar data-inversion accuracy when the requirement of a horizontally stratified atmosphere is poorly met. The signal measured at zenith or close to zenith is used as the core source for extracting optical characteristics of the atmospheric aerosol loading. The multiangle signals are used...
Usami, Satoshi
2017-03-01
Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
Teoh, Jeremy Yuen-Chun; Chan, Eddie Shu-Yin; Yip, Siu-Ying; Tam, Ho-Man; Chiu, Peter Ka-Fung; Yee, Chi-Hang; Wong, Hon-Ming; Chan, Chi-Kwok; Hou, Simon See-Ming; Ng, Chi-Fai
2017-05-01
Our aim was to investigate the detrusor muscle sampling rate after monopolar versus bipolar transurethral resection of bladder tumor (TURBT). This was a single-center, prospective, randomized, phase III trial on monopolar versus bipolar TURBT. Baseline patient characteristics, disease characteristics and perioperative outcomes were compared, with the primary outcome being the detrusor muscle sampling rate in the TURBT specimen. Multivariate logistic regression analyses on detrusor muscle sampling were performed. From May 2012 to December 2015, a total of 160 patients with similar baseline characteristics were randomized to receive monopolar or bipolar TURBT. Fewer patients in the bipolar TURBT group required postoperative irrigation than patients in the monopolar TURBT group (18.7 vs. 43%; p = 0.001). In the whole cohort, no significant difference in the detrusor muscle sampling rates was observed between the bipolar and monopolar TURBT groups (77.3 vs. 63.3%; p = 0.057). In patients with urothelial carcinoma, bipolar TURBT achieved a higher detrusor muscle sampling rate than monopolar TURBT (84.6 vs. 67.7%; p = 0.025). On multivariate analyses, bipolar TURBT (odds ratio [OR] 2.23, 95% confidence interval [CI] 1.03-4.81; p = 0.042) and larger tumor size (OR 1.04, 95% CI 1.01-1.08; p = 0.022) were significantly associated with detrusor muscle sampling in the whole cohort. In addition, bipolar TURBT (OR 2.88, 95% CI 1.10-7.53; p = 0.031), larger tumor size (OR 1.05, 95% CI 1.01-1.10; p = 0.035), and female sex (OR 3.25, 95% CI 1.10-9.59; p = 0.033) were significantly associated with detrusor muscle sampling in patients with urothelial carcinoma. There was a trend towards a superior detrusor muscle sampling rate after bipolar TURBT. Further studies are needed to determine its implications on disease recurrence and progression.
Chen, Maggie H; Willan, Andrew R
2013-02-01
Most often, sample size determinations for randomized clinical trials are based on frequentist approaches that depend on somewhat arbitrarily chosen factors, such as type I and II error probabilities and the smallest clinically important difference. As an alternative, many authors have proposed decision-theoretic (full Bayesian) approaches, often referred to as value of information methods that attempt to determine the sample size that maximizes the difference between the trial's expected utility and its expected cost, referred to as the expected net gain. Taking an industry perspective, Willan proposes a solution in which the trial's utility is the increase in expected profit. Furthermore, Willan and Kowgier, taking a societal perspective, show that multistage designs can increase expected net gain. The purpose of this article is to determine the optimal sample size using value of information methods for industry-based, multistage adaptive randomized clinical trials, and to demonstrate the increase in expected net gain realized. At the end of each stage, the trial's sponsor must decide between three actions: continue to the next stage, stop the trial and seek regulatory approval, or stop the trial and abandon the drug. A model for expected total profit is proposed that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, and the relationship between trial results and probability of regulatory approval. The proposed method is extended to include multistage designs with a solution provided for a two-stage design. An example is given. Significant increases in the expected net gain are realized by using multistage designs. The complexity of the solutions increases with the number of stages, although far simpler near-optimal solutions exist. The method relies on the central limit theorem, assuming that the sample size is sufficiently large so that the relevant statistics are normally distributed. From a value of
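The value-of-information trade-off described above — pick the sample size that maximizes expected utility minus expected trial cost — can be illustrated with a toy model. Everything below is hypothetical: the saturating approval-probability curve and all parameter values are invented for illustration, not taken from the article's profit model:

```python
import math

def expected_net_gain(n, p_approval, profit_per_patient, incidence,
                      horizon_years, cost_per_patient):
    """Toy expected net gain for a two-arm trial with n patients per arm:
    (approval probability x post-approval profit over the time horizon)
    minus the expected trial cost. All names are illustrative."""
    utility = p_approval(n) * profit_per_patient * incidence * horizon_years
    cost = cost_per_patient * 2 * n  # two arms
    return utility - cost

# Grid search for the optimal n under an assumed saturating
# approval-probability curve: larger trials help, at diminishing rates.
p_appr = lambda n: 1 - math.exp(-n / 100)
best_n = max(range(1, 500), key=lambda n: expected_net_gain(
    n, p_appr, profit_per_patient=1000, incidence=100,
    horizon_years=10, cost_per_patient=1000))
```

The interior optimum (neither the smallest nor the largest n) is the qualitative point: beyond it, each extra patient costs more than the information is worth.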
Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas
Basilevsky, Alexander T.; Head, James W., III
1995-01-01
The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10(exp 6) sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera fon
Use of pornography in a random sample of Norwegian heterosexual couples.
Daneback, Kristian; Traeen, Bente; Månsson, Sven-Axel
2009-10-01
This study examined the use of pornography in couple relationships to enhance the sex-life. The study contained a representative sample of 398 heterosexual couples aged 22-67 years. Data collection was carried out by self-administered postal questionnaires. The majority (77%) of the couples did not report any kind of pornography use to enhance the sex-life. In 15% of the couples, both had used pornography; in 3% of the couples, only the female partner had used pornography; and, in 5% of the couples, only the male partner had used pornography for this purpose. Based on the results of a discriminant function analysis, it is suggested that couples where one or both used pornography had a more permissive erotic climate compared to the couples who did not use pornography. In couples where only one partner used pornography, we found more problems related to arousal (male) and negative (female) self-perception. These findings could be of importance for clinicians who work with couples.
Spatial Sampling of Weather Data for Regional Crop Yield Simulations
Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian;
2016-01-01
Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management
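The core estimator behind the study's comparison — draw a few points per stratum and weight each stratum mean by the stratum's share of the region — can be sketched in a few lines. The strata labels and values below are invented placeholders for the weather-grid stratification:

```python
import random

def stratified_mean(values, strata, n_per_stratum, rng):
    """Estimate the regional mean from a stratified sample:
    sample n_per_stratum points from each stratum and combine the
    stratum means weighted by stratum size. values[i] belongs to
    the stratum named by strata[i]."""
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    total = len(values)
    estimate = 0.0
    for s, vs in groups.items():
        picked = rng.sample(vs, min(n_per_stratum, len(vs)))
        estimate += (len(vs) / total) * (sum(picked) / len(picked))
    return estimate
```

When strata are internally homogeneous (as a good weather stratification should be), very few points per stratum already recover the regional mean, which matches the study's finding that ~10 points reproduced the full-coverage mean.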
Turbulent Mixing in Stably Stratified Flows
2008-03-01
is top-heavy in salt, a fluid parcel from the top will flow downward. As the parcel convects downward it will exchange heat, but negligible salt...
Directory of Open Access Journals (Sweden)
Rosa Catarino
Full Text Available Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95% CI: 53.7-70.2) detected by s-DRY, 56.2% (95% CI: 47.6-64.4) by Dr-WET, and 54.6% (95% CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95% CI: 44.5-79.8) for s-FTA, 84.6% (95% CI: 66.5-93.9) for s-DRY, and 76.9% (95% CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
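The kappa statistic used above to quantify agreement between collection methods corrects observed agreement for the agreement expected by chance. A minimal pure-Python sketch with hypothetical positive/negative calls (not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two paired categorical ratings of the same cases."""
    assert len(a) == len(b)
    n = len(a)
    labels = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n                    # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (po - pe) / (1 - pe)

# Two collection methods agreeing on 8 of 10 hypothetical HPV calls:
m1 = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
m2 = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(m1, m2), 2))
```

Raw agreement here is 80%, yet kappa is only about 0.58 — the same effect that makes the paper's 70.8% agreement correspond to a modest kappa of 0.34.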
Directory of Open Access Journals (Sweden)
Fuqun Zhou
2016-10-01
Full Text Available Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
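The variable-importance idea exploited above can be illustrated with a generic permutation test: a feature matters to the extent that shuffling its column degrades accuracy. The classifier and data below are toy stand-ins, not the paper's Random Forest or MODIS bands:

```python
import random

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, rng=random.Random(1)):
    """Importance of each feature = accuracy drop when that feature's
    column is shuffled across samples (the spirit of Random Forests'
    variable-importance measure, applicable to any classifier)."""
    base = accuracy(model, X, y)
    drops = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)
        Xp = [list(x) for x in X]        # copy, then corrupt column j only
        for i, v in enumerate(col):
            Xp[i][j] = v
        drops.append(base - accuracy(model, Xp, y))
    return drops

# Toy data: the class depends only on feature 0; features 1-2 are noise.
rng = random.Random(0)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(500)]
y = [int(x[0] > 0.5) for x in X]
model = lambda x: int(x[0] > 0.5)  # stand-in for a trained classifier
imp = permutation_importance(model, X, y, 3)
print([round(d, 2) for d in imp])
```

Only the informative feature shows a large accuracy drop, which is exactly the signal used to discard redundant time-series bands.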
Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark
Henrik J. Kleven; Martin B. Knudsen; Claus T. Kreiner; Søren Pedersen; Emmanuel Saez
2010-01-01
This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were deliberately not audited. The following year, "threat-of-audit" letters were randomly assigned and sent to tax filers in both groups. Using comprehensive administrative tax data, we present four main...
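Stratified 50/50 random assignment of the kind used in this experiment can be sketched as follows; the filer records, the stratum variable, and the seed are hypothetical:

```python
import random

def assign_treatment(filers, stratum_of, seed=0):
    """Randomly split each stratum 50/50 into audit and control groups,
    so treatment shares are balanced within every stratum."""
    rng = random.Random(seed)
    groups = {}
    for f in filers:
        groups.setdefault(stratum_of(f), []).append(f)
    audit, control = [], []
    for members in groups.values():
        rng.shuffle(members)
        half = len(members) // 2
        audit.extend(members[:half])
        control.extend(members[half:])
    return audit, control

# Hypothetical population of 40,000 filers stratified by self-employment.
filers = [{"id": i, "self_employed": i % 4 == 0} for i in range(40000)]
audit, control = assign_treatment(filers, lambda f: f["self_employed"])
print(len(audit), len(control))
```

Randomizing within strata rather than over the whole population guarantees that both groups contain the same proportion of, say, self-employed filers, which is what makes the groups directly comparable.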
Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey
Directory of Open Access Journals (Sweden)
Steven R. Corman
2013-12-01
Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.
Negrão, Mariana; Pereira, Mariana; Soares, Isabel; Mesman, Judi
2014-01-01
This study tested the attachment-based intervention program Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a randomized controlled trial with poor families of toddlers screened for professional's concerns about the child's caregiving environment. The VIPP-SD is an evidence-based intervention, but has not yet been tested in the context of poverty. The sample included 43 families with 1- to 4-year-old children: mean age at the pretest was 29 months and 51% were boys. At the pretest and posttest, mother-child interactions were observed at home, and mothers reported on family functioning. The VIPP-SD proved to be effective in enhancing positive parent-child interactions and positive family relations in a severely deprived context. Results are discussed in terms of implications for support services provided to such poor families in order to reduce intergenerational risk transmission.
Buller, David B.; Andersen, Peter A.; Walkosz, Barbara J.; Scott, Michael D.; Beck, Larry; Cutter, Gary R.
2016-01-01
Introduction: Exposure to solar ultraviolet radiation during recreation is a risk factor for skin cancer. A trial evaluated an intervention to promote advanced sun protection (sunscreen pre-application/reapplication; protective hats and clothing; use of shade) during vacations. Materials and Methods: Adult visitors to hotels/resorts with outdoor recreation (i.e., vacationers) participated in a group-randomized pretest-posttest controlled quasi-experimental design in 2012-14. Hotels/resorts were pair-matched and randomly assigned to the intervention or untreated control group. Sun protection (e.g., clothing, hats, shade and sunscreen) was measured in cross-sectional samples by observation and a face-to-face intercept survey during two-day visits. Results: Initially, 41 hotels/resorts (11%) participated but 4 dropped out before posttest. Hotels/resorts were diverse (employees = 30 to 900; latitude = 24°78′ N to 50°52′ N; elevation = 2 ft. to 9,726 ft. above sea level), and had a variety of outdoor venues (beaches/pools, court/lawn games, golf courses, common areas, and chairlifts). At pretest, 4,347 vacationers were observed and 3,531 surveyed. More females were surveyed (61%) than observed (50%). Vacationers were mostly 35-60 years old, highly educated (college education = 68%) and non-Hispanic white (93%), with high-risk skin types (22%). Vacationers reported covering 60% of their skin with clothing. Also, 40% of vacationers used shade; 60% applied sunscreen; and 42% had been sunburned. Conclusions: The trial faced challenges recruiting resorts, but results show that the large, multi-state sample of vacationers was at high risk for solar UV exposure. PMID:26593781
Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M
2018-01-15
Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine, and cannabis use. Two-sample MR was employed to estimate bi-directional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week), and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these did not hold up with the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine, and cannabis use. This article is protected by copyright. All rights reserved.
Designing Wood Supply Scenarios from Forest Inventories with Stratified Predictions
Directory of Open Access Journals (Sweden)
Philipp Kilham
2018-02-01
Full Text Available Forest growth and wood supply projections are increasingly used to estimate the future availability of woody biomass and the correlated effects on forests and climate. This research parameterizes an inventory-based business-as-usual wood supply scenario, with a focus on southwest Germany and the period 2002–2012 with a stratified prediction. First, the Classification and Regression Trees algorithm groups the inventory plots into strata with corresponding harvest probabilities. Second, Random Forest algorithms generate individual harvest probabilities for the plots of each stratum. Third, the plots with the highest individual probabilities are selected as harvested until the harvest probability of the stratum is fulfilled. Fourth, the harvested volume of these plots is predicted with a linear regression model trained on harvested plots only. To illustrate the pros and cons of this method, it is compared to a direct harvested volume prediction with linear regression, and a combination of logistic regression and linear regression. Direct harvested volume regression predicts comparable volume figures, but generates these volumes in a way that differs from business-as-usual. The logistic model achieves higher overall classification accuracies, but results in underestimations or overestimations of harvest shares for several subsets of the data. The stratified prediction method balances this shortcoming, and can be of general use for forest growth and timber supply projections from large-scale forest inventories.
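The third step of the pipeline above, selecting the plots with the highest individual harvest probabilities until the stratum-level harvest share is fulfilled, can be sketched as follows; the field names, probabilities, and share are invented for illustration:

```python
def select_harvested(plots, share):
    """Mark the top-probability plots of one stratum as harvested until the
    stratum's harvest share is reached (step 3 of the stratified prediction)."""
    k = round(share * len(plots))
    ranked = sorted(plots, key=lambda p: p["prob"], reverse=True)
    return ranked[:k]

# One hypothetical stratum of ten inventory plots with individual
# harvest probabilities from a per-stratum Random Forest model:
stratum = [{"id": i, "prob": p} for i, p in
           enumerate([0.9, 0.1, 0.7, 0.3, 0.8, 0.2, 0.6, 0.4, 0.5, 0.05])]
chosen = select_harvested(stratum, share=0.3)
print(sorted(p["id"] for p in chosen))
```

The stratum-level share constrains how many plots are harvested, while the individual probabilities decide which ones, which is how the method avoids the over/underestimation of harvest shares reported for the plain logistic model.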
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is through using stratified probability approximations where the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
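A discrete mass approximation of a continuous density of the kind described might look like the following; the grid width and renormalisation scheme are illustrative choices, not the paper's:

```python
import math

def discrete_mass_approx(mean, std, n_points, span=3.0):
    """Approximate a Gaussian pdf by n equally spaced point masses
    within +/- span standard deviations, renormalised to sum to 1."""
    xs = [mean - span * std + 2 * span * std * i / (n_points - 1)
          for i in range(n_points)]
    w = [math.exp(-0.5 * ((x - mean) / std) ** 2) for x in xs]
    total = sum(w)
    return xs, [wi / total for wi in w]

xs, masses = discrete_mass_approx(0.0, 1.0, 11)
# The point masses reproduce the low-order moments of the density:
approx_mean = sum(x * m for x, m in zip(xs, masses))
approx_var = sum((x - approx_mean) ** 2 * m for x, m in zip(xs, masses))
print(round(approx_mean, 6), round(approx_var, 3))
```

Replacing the continuous density by a handful of point masses turns the integrals in the filtering equations into short sums, which is the simplification the paper exploits.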
The fully nonlinear stratified geostrophic adjustment problem
Coutino, Aaron; Stastna, Marek
2017-01-01
The study of the adjustment to equilibrium by a stratified fluid in a rotating reference frame is a classical problem in geophysical fluid dynamics. We consider the fully nonlinear, stratified adjustment problem from a numerical point of view. We present results of smoothed dam break simulations based on experiments in the published literature, with a focus on both the wave trains that propagate away from the nascent geostrophic state and the geostrophic state itself. We demonstrate that for Rossby numbers in excess of roughly 2 the wave train cannot be interpreted in terms of linear theory. This wave train consists of a leading solitary-like packet and a trailing tail of dispersive waves. However, it is found that the leading wave packet never completely separates from the trailing tail. Somewhat surprisingly, the inertial oscillations associated with the geostrophic state exhibit evidence of nonlinearity even when the Rossby number falls below 1. We vary the width of the initial disturbance and the rotation rate so as to keep the Rossby number fixed, and find that while the qualitative response remains consistent, the Froude number varies, and these variations are manifested in the form of the emanating wave train. For wider initial disturbances we find clear evidence of a wave train that initially propagates toward the near wall, reflects, and propagates away from the geostrophic state behind the leading wave train. We compare kinetic energy inside and outside of the geostrophic state, finding that for long times a Rossby number of around one-quarter yields an equal split between the two, with lower (higher) Rossby numbers yielding more energy in the geostrophic state (wave train). Finally we compare the energetics of the geostrophic state as the Rossby number varies, finding long-lived inertial oscillations in the majority of the cases and a general agreement with the past literature that employed either hydrostatic, shallow-water equation-based theory or
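For reference, the dimensionless groups invoked above are conventionally defined as follows (a standard-notation sketch: U a velocity scale, L a horizontal length scale, f the Coriolis parameter, N the buoyancy frequency, and H a vertical scale; the paper's exact scale choices may differ):

```latex
\mathrm{Ro} = \frac{U}{fL}, \qquad \mathrm{Fr} = \frac{U}{NH}
```

Small Ro means rotation dominates the dynamics, while Fr measures the strength of the flow relative to the stratification; the abstract's observation is that nonlinearity in the wave train appears once Ro exceeds roughly 2.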
Stratified growth in Pseudomonas aeruginosa biofilms
DEFF Research Database (Denmark)
Werner, E.; Roe, F.; Bugnicourt, A.
2004-01-01
In this study, stratified patterns of protein synthesis and growth were demonstrated in Pseudomonas aeruginosa biofilms. Spatial patterns of protein synthetic activity inside biofilms were characterized by the use of two green fluorescent protein (GFP) reporter gene constructs. One construct...... of oxygen limitation in the biofilm. Oxygen microelectrode measurements showed that oxygen only penetrated approximately 50 mum into the biofilm. P. aeruginosa was incapable of anaerobic growth in the medium used for this investigation. These results show that while mature P. aeruginosa biofilms contain...
Li, Ningzhi; Li, Shizhe; Shen, Jun
2017-06-01
In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radio frequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectra range of 10-65ppm. The amplitude of decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH=125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under inhomogeneous B1 field. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields (Drug Administration (FDA). Alternately, carboxylic/amide carbons are coupled to protons via weak long-range 1H-13C scalar couplings, which can be decoupled using low RF power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low power random RF heteronuclear decoupling was safely applied to human brain studies at 7T. Here, we review the two major decoupling methods and the carboxylic/amide 13C-MRS with low power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. Low RF power decoupling opens the possibility of performing in vivo 13C experiments of human brain at very high magnetic fields (such as 11.7T), where signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than lower fields.
A user's guide to LHS: Sandia's Latin Hypercube Sampling Software
Energy Technology Data Exchange (ETDEWEB)
Wyss, G.D.; Jorgensen, K.H. [Sandia National Labs., Albuquerque, NM (United States). Risk Assessment and Systems Modeling Dept.
1998-02-01
This document is a reference guide for LHS, Sandia's Latin Hypercube Sampling Software. This software has been developed to generate either Latin hypercube or random multivariate samples. The Latin hypercube technique employs a constrained sampling scheme, whereas random sampling corresponds to a simple Monte Carlo technique. The present program replaces the previous Latin hypercube sampling program developed at Sandia National Laboratories (SAND83-2365). This manual covers the theory behind stratified sampling as well as use of the LHS code both with the Windows graphical user interface and in the stand-alone mode.
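The constrained scheme that Latin hypercube sampling implements can be sketched in pure Python; this is a minimal illustration of the technique, not the Sandia code:

```python
import random

def latin_hypercube(n, dims, rng=random.Random(0)):
    """n samples in [0,1)^dims such that each of the n equal-width bins of
    every dimension is hit exactly once (the constrained, stratified scheme,
    in contrast to plain Monte Carlo sampling)."""
    sample = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        bins = list(range(n))
        rng.shuffle(bins)          # random pairing of bins across dimensions
        for i, b in enumerate(bins):
            sample[i][d] = (b + rng.random()) / n  # random point inside bin
    return sample

pts = latin_hypercube(10, 2)
# Every bin [k/10, (k+1)/10) contains exactly one point in each dimension:
for d in range(2):
    print(sorted(int(p[d] * 10) for p in pts))
```

Because every marginal bin is sampled exactly once, a Latin hypercube sample covers each input distribution far more evenly than the same number of simple Monte Carlo draws.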
Vertically integrated flow in stratified aquifers
Strack, Otto D. L.
2017-05-01
We present a set of continuous discharge potentials that can be used to determine the vertically integrated flow in stratified aquifers. The method applies to cases where the boundaries are vertical and either the hydraulic head is given, or the boundary is a seepage face, or the integrated discharge is given. The approach is valid for cases of given recharge through the upper and/or lower boundaries of the aquifer. The method is valid for any values of hydraulic conductivity; there are no limitations on the contrast for the method to be valid. The flows in the strata may be either confined or unconfined, and locally perched conditions may exist, but the effect of capillarity is not included. The hydraulic head is determined by applying the Dupuit-Forchheimer approximation. The main advantage of the approach is that very complex conditions in stratified aquifer systems, including locally perched conditions and extremely complex flow systems, can be treated in a relatively straightforward manner by considering only the vertically integrated flow rates. The approach is particularly useful for assessing groundwater sustainability, as a model to be constructed prior to developing a fully three-dimensional numerical model.
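The discharge-potential idea can be illustrated with the textbook single-stratum forms (a sketch in standard notation, with k the hydraulic conductivity, h the head above the aquifer base, and H the aquifer thickness; the paper's stratified potentials generalise these, and the exact expressions may differ):

```latex
\Phi = \tfrac{1}{2}\,k\,h^{2} \quad \text{(unconfined)}, \qquad
\Phi = k\,H\,h - \tfrac{1}{2}\,k\,H^{2} \quad \text{(confined)}
```

In both cases the vertically integrated discharge vector is the gradient of a single scalar, \(Q_i = -\partial\Phi/\partial x_i\), which is what lets confined, unconfined, and perched zones be treated in one continuous formulation.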
Directory of Open Access Journals (Sweden)
Alanis Kelly L
2006-02-01
Full Text Available Abstract Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance having long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for licit substances, alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test illustrated that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure
Ginsburg, Harvey J; Raffeld, Paul; Alanis, Kelly L; Boyce, Angela S
2006-01-01
Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance having long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for licit substances, alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test illustrated that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure than for the non-exposed teen
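The one-way ANOVA for independent samples used in both studies reduces to the familiar F ratio of between-group to within-group variance. A minimal implementation with invented rating data (not the studies' scores):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA on independent samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical legal-sanction severity ratings under four substance conditions:
cocaine  = [8, 9, 7, 8, 9]
alcohol  = [5, 6, 5, 4, 6]
nicotine = [4, 5, 4, 5, 3]
caffeine = [2, 3, 2, 1, 3]
print(round(one_way_anova_f(cocaine, alcohol, nicotine, caffeine), 1))
```

A large F only says the group means differ somewhere; a post-hoc test such as Scheffé's is then needed, as in the study, to locate which condition stands apart.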
Directory of Open Access Journals (Sweden)
Giles M. Foody
2017-08-01
Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was lower, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
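Comparing accuracy estimates from two validation designs, as above, amounts to a two-proportion z test. A sketch with hypothetical counts (not the paper's data):

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for the difference between two proportions,
    e.g. classification accuracies estimated from two validation sets."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 880/1000 correct against a randomly sampled validation set
# vs. 830/1000 against a stratified (class-balanced) one.
z = two_proportion_z(880, 1000, 830, 1000)
print(round(z, 2))
```

A |z| above about 1.96 corresponds to p < 0.05 (two-sided), the threshold applied to the paper's z values of 2.7 and 1.9.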
Turla, Ahmet; Dundar, Cihad; Ozkanli, Caglar
2010-01-01
The main objective of this article is to obtain the prevalence of childhood physical abuse experiences in college students. This cross-sectional study was performed on a gender-stratified random sample of 988 participants studying at Ondokuz Mayis University, with self-reported anonymous questionnaires. It included questions on physical abuse in…
Ríos, Antonio; López-Navas, Ana Isabel; López-López, Ana Isabel; Gómez, Francisco Javier; Iriarte, Jorge; Herruzo, Rafael; Blanco, Gerardo; Llorca, Francisco Javier; Asunsolo, Angel; Sánchez-Gallegos, Pilar; Gutiérrez, Pedro Ramón; Fernández, Ana; de Jesús, María Teresa; Martínez-Alarcón, Laura; Lana, Alberto; Fuentes, Lorena; Hernández, Juan Ramón; Virseda, Julio; Yelamos, José; Bondía, José Antonio; Hernández, Antonio Miguel; Ayala, Marco Antonio; Ramírez, Pablo; Parrilla, Pascual
2016-01-01
AIM: To analyze the attitude of Spanish medical students toward living liver donation (LLD) and to establish which factors have an influence on this attitude. METHODS: Study type: a sociological, interdisciplinary, multicenter and observational study. Study population: medical students enrolled in Spain (n = 34000) in the university academic year 2010-2011. Sample size: a sample of 9598 students stratified by geographical area and academic year. Instrument used to measure attitude: a validated questionnaire (PCID-DVH RIOS) was self-administered and completed anonymously. Data collection procedure: randomly selected medical schools. The questionnaire was applied to each academic year at compulsory sessions. Statistical analysis: Student's t test, χ2 test and logistic regression analysis. RESULTS: The completion rate was 95.7% (n = 9275). 89% (n = 8258) were in favor of related LLD, and 32% (n = 2937) supported unrelated LLD. The following variables were associated with having a more favorable attitude: (1) age (P = 0.008); (2) sex; (3) attitudes toward other types of donation; and (4) willingness to accept a donated liver segment from a family member if one were needed (P < 0.001). CONCLUSION: Spanish medical students have a favorable attitude toward LLD. PMID:27433093
Ahluwalia, N; Ferrières, J; Dallongeville, J; Simon, C; Ducimetière, P; Amouyel, P; Arveiler, D; Ruidavets, J-B
2009-04-01
Diet is considered an important modifiable factor in overweight. The role of macronutrients in obesity has generally been examined in selected populations, but the results of these studies are mixed, depending on the potential confounders and adjustments for other macronutrients. For this reason, we examined the association between macronutrient intake patterns and being overweight in a population-based representative sample of middle-aged (55.1 ± 6.1 years) men (n = 966), using various adjustment modalities. The study subjects kept 3-day food-intake records, and the standard cardiovascular risk factors were assessed. Weight, height and waist circumference (WC) were also measured. Carbohydrate intake was negatively associated and fat intake was positively associated with body mass index (BMI) and WC in regression models adjusted for energy intake and other factors, including age, smoking and physical activity. However, with mutual adjustments for other energy-yielding nutrients, the negative association of carbohydrate intake with WC remained significant, whereas the associations between fat intake and measures of obesity did not. Adjusted odds ratios (95% confidence interval) comparing the highest and lowest quartiles of carbohydrate intake were 0.50 (0.25-0.97) for obesity (BMI > 29.9) and 0.41 (0.23-0.73) for abdominal obesity (WC > 101.9 cm). Consistent negative associations between carbohydrate intake and BMI and WC were seen in this random representative sample of the general male population. The associations between fat intake and these measures of being overweight were attenuated on adjusting for carbohydrate intake. Thus, the balance of carbohydrate-to-fat intake is an important element in obesity in a general male population, and should be highlighted in dietary guidelines.
Gage, S H; Jones, H J; Burgess, S; Bowden, J; Davey Smith, G; Zammit, S; Munafò, M R
2017-04-01
Observational associations between cannabis and schizophrenia are well documented, but ascertaining causation is more challenging. We used Mendelian randomization (MR), utilizing publicly available data, as a method for ascertaining causation from observational data. We performed bi-directional two-sample MR using summary-level genome-wide data from the International Cannabis Consortium (ICC) and the Psychiatric Genomics Consortium (PGC2). Single nucleotide polymorphisms (SNPs) associated with cannabis initiation and with schizophrenia were used as genetic instruments. There was weak evidence consistent with a causal effect of cannabis initiation on risk of schizophrenia [odds ratio (OR) 1.04 per doubling odds of cannabis initiation, 95% confidence interval (CI) 1.01-1.07, p = 0.019]. There was strong evidence consistent with a causal effect of schizophrenia risk on likelihood of cannabis initiation (OR 1.10 per doubling of the odds of schizophrenia, 95% CI 1.05-1.14, p = 2.64 × 10⁻⁵). Findings were as predicted for the negative control (height: OR 1.00, 95% CI 0.99-1.01, p = 0.90) but weaker than predicted for the positive control (years in education: OR 0.99, 95% CI 0.97-1.00, p = 0.066) analyses. Our results provide some evidence that cannabis initiation increases the risk of schizophrenia, although the size of the causal estimate is small. We find stronger evidence that schizophrenia risk predicts cannabis initiation, possibly because the genetic instruments for schizophrenia are stronger than those for cannabis initiation.
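The workhorse estimator behind such two-sample MR analyses is the inverse-variance-weighted (IVW) combination of per-SNP Wald ratios. A sketch with made-up summary statistics (not the ICC/PGC2 data):

```python
def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted causal estimate from summary statistics:
    each SNP contributes a Wald ratio beta_outcome/beta_exposure, weighted
    by the precision of its outcome association."""
    ratios = [bo / be for be, bo in zip(beta_exposure, beta_outcome)]
    weights = [(be / se) ** 2 for be, se in zip(beta_exposure, se_outcome)]
    return sum(r * w for r, w in zip(ratios, weights)) / sum(weights)

# Hypothetical summary statistics for three genetic instruments:
bx = [0.10, 0.20, 0.15]          # SNP -> exposure effects
by = [0.02, 0.05, 0.03]          # SNP -> outcome effects
se = [0.01, 0.01, 0.01]          # standard errors of the outcome effects
print(round(ivw_estimate(bx, by, se), 3))
```

IVW assumes no horizontal pleiotropy, which is why the paper cross-checks its estimates with MR-Egger and weighted-median sensitivity analyses.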
Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2015-04-10
A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.
Messiah, Antoine; Acuna, Juan M; Castro, Grettel; de la Vega, Pura Rodríguez; Vaiva, Guillaume; Shultz, James; Neria, Yuval; De La Rosa, Mario
2014-07-01
This study examined the mental health consequences of the January 2010 Haiti earthquake on Haitians living in Miami-Dade County, Florida, 2-3 years following the event. A random-sample household survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants (N = 421) were assessed for their earthquake exposure and its impact on family, friends, and household finances; and for symptoms of posttraumatic stress disorder (PTSD), anxiety, and major depression; using standardized screening measures and thresholds. Exposure was considered as "direct" if the interviewee was in Haiti during the earthquake. Exposure was classified as "indirect" if the interviewee was not in Haiti during the earthquake but (1) family members or close friends were victims of the earthquake, and/or (2) family members were hosted in the respondent's household, and/or (3) assets or jobs were lost because of the earthquake. Interviewees who did not qualify for either direct or indirect exposure were designated as "lower" exposure. Eight percent of respondents qualified for direct exposure, and 63% qualified for indirect exposure. Among those with direct exposure, 19% exceeded threshold for PTSD, 36% for anxiety, and 45% for depression. Corresponding percentages were 9%, 22% and 24% for respondents with indirect exposure, and 6%, 14%, and 10% for those with lower exposure. A majority of Miami Haitians were directly or indirectly exposed to the earthquake. Mental health distress among them remains considerable two to three years post-earthquake.
Messiah, Antoine; Lacoste, Jérôme; Gokalsing, Erick; Shultz, James M; Rodríguez de la Vega, Pura; Castro, Grettel; Acuna, Juan M
2016-08-01
Studies on the mental health of families hosting disaster refugees are lacking. This study compares participants in households that hosted 2010 Haitian earthquake disaster refugees with their nonhost counterparts. A random sample survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants were assessed regarding their 2010 earthquake exposure and impact on family and friends and whether they hosted earthquake refugees. Using standardized scores and thresholds, they were evaluated for symptoms of three common mental disorders (CMDs): posttraumatic stress disorder, generalized anxiety disorder, and major depressive disorder (MDD). Participants who hosted refugees (n = 51) had significantly higher percentages of scores beyond thresholds for MDD than those who did not host refugees (n = 365) and for at least one CMD, after adjusting for participants' earthquake exposures and effects on family and friends. Hosting refugees from a natural disaster appears to elevate the risk for MDD and possibly other CMDs, independent of risks posed by exposure to the disaster itself. Families hosting refugees deserve special attention.
Directory of Open Access Journals (Sweden)
Troy David Querec
Full Text Available Detection of multiple human papillomavirus (HPV) types in the genital tract is common. Associations among HPV types may impact HPV vaccination modeling and type replacement. The objectives were to determine the distribution of concurrent HPV type infections in cervicovaginal samples and examine type-specific associations. We analyzed HPV genotyping results from 32,245 cervicovaginal specimens collected from women aged 11 to 83 years in the United States from 2001 through 2011. Statistical power was enhanced by combining 6 separate studies. Expected concurrent infection frequencies from a series of permutation models, each with increasing fidelity to the real data, were compared with the observed data. Statistics were computed based on the distributional properties of the randomized data. Concurrent detection occurred more than expected with 0 or ≥3 HPV types and less than expected with 1 and 2 types. Some women bear a disproportionate burden of HPV type prevalence. Type associations were observed that exceeded multiple-hypothesis-corrected significance. Multiple HPV types were detected more frequently than expected by chance, and associations among particular HPV types were detected. However, vaccine-targeted types were not specifically affected, supporting the expectation that current bivalent/quadrivalent HPV vaccination will not result in type replacement with other high-risk types.
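The permutation logic described, comparing observed co-infection counts with those expected when each type is assigned independently at its observed prevalence, can be sketched as follows. This is a generic illustration with hypothetical prevalences, not the authors' code; the function name and example values are assumptions.

```python
import random

def expected_coinfections(type_prevalences, n_women, n_perm=200, seed=0):
    """Mean number of women carrying >= 2 HPV types under a null model in
    which each type is assigned to women independently, while keeping each
    type's overall prevalence fixed (a simple permutation null)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_perm):
        counts = [0] * n_women  # infections per woman in this permutation
        for prev in type_prevalences:
            n_pos = round(prev * n_women)  # fixed per-type prevalence
            for i in rng.sample(range(n_women), n_pos):
                counts[i] += 1
        totals.append(sum(1 for c in counts if c >= 2))
    return sum(totals) / n_perm

# Hypothetical example: two types, each at 50% prevalence among 100 women;
# under independence the expected overlap is about 25 women.
null_mean = expected_coinfections([0.5, 0.5], n_women=100)
```

Comparing the observed co-infection count against such a null distribution is what lets the study say concurrent detection occurred "more than expected".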
Ecosystem metabolism in a stratified lake
DEFF Research Database (Denmark)
Stæhr, Peter Anton; Christensen, Jesper Philip Aagaard; Batt, Ryan D.
2012-01-01
Seasonal changes in rates of gross primary production (GPP), net ecosystem production (NEP), and respiration (R) were determined from frequent automated profiles of dissolved oxygen (DO) and temperature in a clear-water polymictic lake. Metabolic rate calculations were made using a method that integrates rates across the entire depth profile and includes DO exchange between depth layers driven by mixed-layer deepening and eddy diffusivity. During full mixing, NEP was close to zero throughout the water column, and GPP and R were reduced 2-10 times compared to stratified periods. When present … Although air-water gas exchange rates differed among the three formulations of gas-transfer velocity, this had no significant effect on metabolic rates.
Stratified scaffold design for engineering composite tissues.
Mosher, Christopher Z; Spalazzi, Jeffrey P; Lu, Helen H
2015-08-01
A significant challenge to orthopaedic soft tissue repair is the biological fixation of autologous or allogeneic grafts with bone, whereby the lack of functional integration between such grafts and host bone has limited the clinical success of anterior cruciate ligament (ACL) and other common soft tissue-based reconstructive grafts. The inability of current surgical reconstruction to restore the native fibrocartilaginous insertion between the ACL and the femur or tibia, which minimizes stress concentration and facilitates load transfer between the soft and hard tissues, compromises the long-term clinical functionality of these grafts. To enable integration, a stratified scaffold design that mimics the multiple tissue regions of the ACL interface (ligament-fibrocartilage-bone) represents a promising strategy for composite tissue formation. Moreover, distinct cellular organization and phase-specific matrix heterogeneity achieved through co- or tri-culture within the scaffold system can promote biomimetic multi-tissue regeneration. Here, we describe the methods for fabricating a tri-phasic scaffold intended for ligament-bone integration, as well as the tri-culture of fibroblasts, chondrocytes, and osteoblasts on the stratified scaffold for the formation of structurally contiguous and compositionally distinct regions of ligament, fibrocartilage and bone. The primary advantage of the tri-phasic scaffold is the recapitulation of the multi-tissue organization across the native interface through the layered design. Moreover, in addition to ease of fabrication, each scaffold phase is similar in polymer composition and therefore can be joined together by sintering, enabling the seamless integration of each region and avoiding delamination between scaffold layers. Copyright © 2015 Elsevier Inc. All rights reserved.
Li, Ying; Li, Yan; Liu, Li-an; Zhao, Ling; Hu, Ka-ming; Wu, Xi; Chen, Xiao-qin; Li, Gui-ping; Mang, Ling-ling; Qi, Qi-hua
2011-04-01
To explore the best intervention time of acupuncture and moxibustion for peripheral facial palsy (Bell's palsy) and the clinical advantages of selective treatment with acupuncture and moxibustion. A multi-center, large-sample randomized controlled trial was carried out. Nine hundred cases of Bell's palsy were randomized into 5 treatment groups: a selective filiform needle group (group A), a selective acupuncture + moxibustion group (group B), a selective acupuncture + electroacupuncture group (group C), a selective acupuncture + line-up needling on muscle region of meridian group (group D) and a non-selective filiform needle group (group E). Four sessions of treatment were required in each group. At enrollment, after 4 sessions of treatment, and at 1 month and 3 months of follow-up after treatment, the House-Brackmann Scale, Facial Disability Index Scale and Degree of Facial Nerve Paralysis (NFNP) were adopted for efficacy assessment. A systematic efficacy analysis was performed according to intervention time and nerve localization of the disease. The curative rates of intervention in the acute stage and resting stage were 50.1% (223/445) and 52.1% (162/311), respectively, which were superior to the recovery stage (25.9%, 35/135). There were no statistically significant differences in efficacy among the 5 treatment programs at the same stage (all P > 0.05). The efficacy of intervention in group A and group E in the acute stage was superior to that in the recovery stage (both P < 0.01). The difference between efficacy on localization above the chorda tympani nerve and that on localization below the nerve was statistically significant in group D (P < 0.01); efficacy on localization below the chorda tympani nerve was superior. The best intervention time for the treatment of Bell's palsy is the acute and resting stages, meaning 1 to 3 weeks after occurrence. All of the 5 treatment programs are advantageous
Directory of Open Access Journals (Sweden)
Romain Guignard
Full Text Available OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system based on regular quota sampling surveys for smoking prevalence. DESIGN / OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey on 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% in 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.
The optimism trap: Migrants' educational choices in stratified education systems.
Tjaden, Jasper Dag; Hunkler, Christian
2017-09-01
Immigrant children's ambitious educational choices have often been linked to their families' high level of optimism and motivation for upward mobility. However, previous research has mostly neglected alternative explanations such as information asymmetries or anticipated discrimination. Moreover, immigrant children's higher dropout rates at the higher secondary and university level suggest that low-performing migrant students could have benefitted more from pursuing less ambitious tracks, especially in countries that offer viable vocational alternatives. We examine ethnic minorities' educational choices using a sample of academically low-performing, lower secondary school students in Germany's highly stratified education system. We find that their families' optimism diverts migrant students from viable vocational alternatives. Information asymmetries and anticipated discrimination do not explain their high educational ambitions. While our findings further support the immigrant optimism hypothesis, we discuss how its effect may have different implications depending on the education system. Copyright © 2017. Published by Elsevier Inc.
Improved patient selection by stratified surgical intervention
DEFF Research Database (Denmark)
Wang, Miao; Bünger, Cody E; Li, Haisheng
2015-01-01
BACKGROUND CONTEXT: Choosing the best surgical treatment for patients with spinal metastases remains a significant challenge for spine surgeons. There is currently no gold standard for surgical treatments. The Aarhus Spinal Metastases Algorithm (ASMA) was established to help surgeons choose the most appropriate surgical intervention for patients with spinal metastases. PURPOSE: The purpose of this study was to evaluate the clinical outcome of stratified surgical interventions based on the ASMA, which combines life expectancy and the anatomical classification of patients with spinal metastases. … survival times in the five surgical groups determined by the ASMA were 2.1 (TS 0-4, TC 1-7), 5.1 (TS 5-8, TC 1-7), 12.1 (TS 9-11, TC 1-7 or TS 12-15, TC 7), 26.0 (TS 12-15, TC 4-6), and 36.0 (TS 12-15, TC 1-3) months. The 30-day mortality rate was 7.5%. Postoperative neurological function was maintained …
Mixed layer in a stably stratified fluid
Directory of Open Access Journals (Sweden)
F. Califano
1994-01-01
Full Text Available We present a numerical study of the generation and evolution of a mixed layer in a stably stratified layer of Boussinesq fluid. We use an external forcing in the equation of motion to model the experimental situation where the mechanical energy input is due to an oscillating grid. The results of 2D and 3D numerical simulations indicate that the basic mechanism for the entrainment is the advection of the temperature field. This advection tends to produce horizontally thin regions of small vertical temperature gradients (jets) where the hydrodynamic forces are nearly zero. At the bottom of these structures, buoyancy brakes the vertical motions. The jets are also characterized by the presence of very short horizontal scales where the thermal diffusion time turns out to be comparable with the dynamical time. As a result, the temperature field is well mixed within a few dynamical times. This process stops when the mechanical energy injected becomes comparable with the energy dissipated by viscosity.
Spiral arms in thermally stratified protoplanetary discs
Juhász, Attila; Rosotti, Giovanni P.
2018-02-01
Spiral arms have been observed in nearly a dozen protoplanetary discs in near-infrared scattered light and recently also in the submillimetre continuum. While one of the most compelling explanations is that they are driven by planetary or stellar companions, in all but one cases such companions have not yet been detected and there is even ambiguity on whether the planet should be located inside or outside the spirals. Here, we use 3D hydrodynamic simulations to study the morphology of spiral density waves launched by embedded planets taking into account the vertical temperature gradient, a natural consequence of stellar irradiation. Our simulations show that the pitch angle of the spirals in thermally stratified discs is the lowest in the disc mid-plane and increases towards the disc surface. We combine the hydrodynamic simulations with 3D radiative transfer calculations to predict that the pitch angle of planetary spirals observed in the near-infrared is higher than in the submillimetre. We also find that in both cases the spirals converge towards the planet. This provides a new powerful observational method to determine if the perturbing planet is inside or outside the spirals, as well as map the thermal stratification of the disc.
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Winter, Joanne R; Kaler, Jasmeet; Ferguson, Eamonn; KilBride, Amy L; Green, Laura E
2015-11-01
The aims of this study were to update the prevalence of lameness in sheep in England and identify novel risk factors. A total of 1260 sheep farmers responded to a postal survey. The survey captured detailed information on the period prevalence of lameness from May 2012-April 2013 and the prevalence and farmer naming of lesions attributable to interdigital dermatitis (ID), severe footrot (SFR), contagious ovine digital dermatitis (CODD) and shelly hoof (SH), management and treatment of lameness, and farm and flock details. The global mean period prevalence of lameness fell between 2004 and 2013 from 10.6% to 4.9%, and the geometric mean period prevalence of lameness fell from 5.4% (95% CI: 4.7%-6.0%) to 3.5% (95% CI: 3.3%-3.7%). In 2013, more farmers were using vaccination and antibiotic treatment for ID and SFR, and fewer farmers were using foot trimming as a routine or therapeutic treatment, than in 2004. Two over-dispersed Poisson regression models were developed with the period prevalence of lameness as the outcome: one investigated associations with farmer estimates of the prevalence of the four foot lesions, and one investigated associations with management practices to control and treat lameness and footrot. A prevalence of ID>10%, SFR>2.5% and CODD>2.5% was associated with a higher prevalence of lameness compared with those lesions being absent; however, the prevalence of SH was not associated with a change in risk of lameness. A key novel management risk factor associated with a higher prevalence of lameness was the rate of feet bleeding/100 ewes trimmed/year. In addition, vaccination of ewes once per year and selecting breeding replacements from never-lame ewes were associated with a decreased risk of lameness. Other factors associated with a lower risk of lameness for the first time in a random sample of farmers and a full risk model were: recognising lameness in sheep at locomotion score 1 compared with higher scores, treatment of the first lame sheep in a group compared
Chien, Ming-Hung; Guo, How-Ran
2014-01-01
Falls are common in older people and may lead to functional decline, disability, and death. Many risk factors have been identified, but studies evaluating effects of nutritional status are limited. To determine whether nutritional status is a predictor of falls in older people living in the community, we analyzed data collected through the Survey of Health and Living Status of the Elderly in Taiwan (SHLSET). SHLSET include a series of interview surveys conducted by the government on a random sample of people living in community dwellings in the nation. We included participants who received nutritional status assessment using the Mini Nutritional Assessment Taiwan Version 2 (MNA-T2) in the 1999 survey when they were 53 years or older and followed up on the cumulative incidence of falls in the one-year period before the interview in the 2003 survey. At the beginning of follow-up, the 4440 participants had a mean age of 69.5 (standard deviation= 9.1) years, and 467 participants were "not well-nourished," which was defined as having an MNA-T2 score of 23 or less. In the one-year study period, 659 participants reported having at least one fall. After adjusting for other risk factors, we found the associated odds ratio for falls was 1.73 (95% confidence interval, 1.23, 2.42) for "not well-nourished," 1.57 (1.30, 1.90) for female gender, 1.03 (1.02, 1.04) for one-year older, 1.55 (1.22, 1.98) for history of falls, 1.34 (1.05, 1.72) for hospital stay during the past 12 months, 1.66 (1.07, 2.58) for difficulties in activities of daily living, and 1.53 (1.23, 1.91) for difficulties in instrumental activities of daily living. Nutritional status is an independent predictor of falls in older people living in the community. Further studies are warranted to identify nutritional interventions that can help prevent falls in the elderly.
Inferences from Genomic Models in Stratified Populations
DEFF Research Database (Denmark)
Janss, Luc; de los Campos, Gustavo; Sheehan, Nuala
2012-01-01
Unaccounted population stratification can lead to spurious associations in genome-wide association studies (GWAS) and in this context several methods have been proposed to deal with this problem. An alternative line of research uses whole-genome random regression (WGRR) models that fit all marker...
Gender-stratified gene and gene-treatment interactions in smoking cessation
Benowitz, Neal; Lee, W.; Bergen, AW; Swan, GE; D. Li; Liu, J.; Thomas, P.; Tyndale, RF; Benowitz, NL; Lerman, C; Conti, DV
2012-01-01
We conducted gender-stratified analyses on a systems-based candidate gene study of 53 regions involved in nicotinic response and the brain-reward pathway in two randomized clinical trials of smoking cessation treatments (placebo, bupropion, transdermal and nasal spray nicotine replacement therapy). We adjusted P-values for multiple correlated tests, and used a Bonferroni-corrected α-level of 5 × 10⁻⁴ to determine system-wide significance. Four SNPs (rs12021667, rs12027267, rs6702335, rs120399...
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study; these include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
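Stratified random sampling, one of the probability designs listed above, can be sketched in a few lines of code. This is a minimal generic illustration with hypothetical data (the function name, the "clinic" stratum, and the frame are all invented for the example), not code from the column:

```python
import random

def stratified_sample(population, strata_key, n_per_stratum, seed=0):
    """Draw a simple random sample of fixed size from each stratum.

    population: list of sampling units; strata_key: function mapping a unit
    to its stratum label; n_per_stratum: units drawn per stratum.
    """
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        # Simple random sampling without replacement within each stratum
        sample.extend(rng.sample(units, min(n_per_stratum, len(units))))
    return sample

# Hypothetical sampling frame: 100 patients across two clinics
frame = [{"id": i, "clinic": "A" if i < 60 else "B"} for i in range(100)]
picked = stratified_sample(frame, lambda u: u["clinic"], n_per_stratum=10)
```

Because every unit within a stratum has an equal chance of selection, each stratum is guaranteed representation in the final sample, which is the defining advantage over simple random sampling of the whole frame.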
The Accuracy of Pass/Fail Decisions in Random and Difficulty-Balanced Domain-Sampling Tests.
Schnipke, Deborah L.
A common practice in some certification fields (e.g., information technology) is to draw items from an item pool randomly and apply a common passing score, regardless of the items administered. Because these tests are commonly used, it is important to determine how accurate the pass/fail decisions are for such tests and whether fairly small,…
The Universal Aspect Ratio of Vortices in Rotating Stratified Flows: Experiments and Observations
Aubert, Oriane; Gal, Patrice Le; Marcus, Philip S
2012-01-01
We validate a new law for the aspect ratio $\alpha = H/L$ of vortices in a rotating, stratified flow, where $H$ and $L$ are the vertical half-height and horizontal length scale of the vortices. The aspect ratio depends not only on the Coriolis parameter $f$ and buoyancy (or Brunt-Väisälä) frequency $\bar{N}$ of the background flow, but also on the buoyancy frequency $N_c$ within the vortex and on the Rossby number $Ro$ of the vortex, such that $\alpha = f \sqrt{Ro (1 + Ro)/(N_c^2 - \bar{N}^2)}$. This law for $\alpha$ is obeyed precisely by the exact equilibrium solution of the inviscid Boussinesq equations that we show to be a useful model of our laboratory vortices. The law is valid for both cyclones and anticyclones. Our anticyclones are generated by injecting fluid into a rotating tank filled with linearly stratified salt water. The vortices are far from the top and bottom boundaries of the tank, so there is no Ekman circulation. In one set of experiments, the vortices viscously decay, but as they do, they c...
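The aspect-ratio law is simple to evaluate numerically. A small sketch with illustrative parameter values (not taken from the experiments); for an anticyclone, Ro < 0 together with N_c < N̄ keeps the argument of the square root positive:

```python
import math

def vortex_aspect_ratio(f, Ro, Nc, Nbar):
    """Aspect ratio alpha = H/L = f * sqrt(Ro*(1 + Ro) / (Nc^2 - Nbar^2)),
    with f the Coriolis parameter, Ro the vortex Rossby number, Nc the
    buoyancy frequency inside the vortex, Nbar the background value."""
    return f * math.sqrt(Ro * (1.0 + Ro) / (Nc**2 - Nbar**2))

# Illustrative anticyclone: Ro < 0 and Nc < Nbar, so both the numerator
# Ro*(1+Ro) and the denominator Nc^2 - Nbar^2 are negative.
alpha = vortex_aspect_ratio(f=1.0e-4, Ro=-0.2, Nc=5.0e-4, Nbar=1.0e-3)
```

The resulting alpha is much less than one, i.e. a pancake-shaped vortex, which is the regime the abstract describes for rotating, stratified flows.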
Aligning the Economic Value of Companion Diagnostics and Stratified Medicines
Directory of Open Access Journals (Sweden)
Edward D. Blair
2012-11-01
Full Text Available The twin forces of payors seeking fair pricing and the rising costs of developing new medicines have driven a closer relationship between pharmaceutical companies and diagnostics companies, because stratified medicines, guided by companion diagnostics, offer better commercial, as well as clinical, outcomes. Stratified medicines have created clinical success and provided rapid product approvals, particularly in oncology, and indeed have changed the dynamic between drug and diagnostic developers. The commercial payback for such partnerships offered by stratified medicines has been less well articulated, but this has shifted as the benefits in risk management, pricing and value creation for all stakeholders become clearer. In this larger healthcare setting, stratified medicine provides both physicians and patients with greater insight on the disease and provides a rationale for providers to understand the cost-effectiveness of treatment. This article considers how the economic value of stratified medicine relationships can be recognized and translated into better outcomes for all healthcare stakeholders.
Presence of psychoactive substances in oral fluid from randomly selected drivers in Denmark
DEFF Research Database (Denmark)
Simonsen, K. Wiese; Steentoft, A.; Hels, Tove
2012-01-01
This roadside study is the Danish part of the EU-project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season … of narcotic drugs. It can be concluded that driving under the influence of drugs is as serious a road safety problem as drunk driving.
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2016-01-01
Cluster-level dynamic treatment regimens (DTRs) can be used to guide sequential intervention or treatment decision-making at the cluster level in order to improve outcomes at the individual or patient level. In a cluster-level DTR, the intervention or treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that comprise it. Cluster-randomized sequentia...
Brus, D.J.; Saby, N.P.A.
2016-01-01
In France like in many other countries, the soil is monitored at the locations of a regular, square grid thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based
Gad, Mohamed Z; Abdel Rahman, Mohamed F; Hashad, Ingy M; Abdel-Maksoud, Sahar M; Farag, Nabil M; Abou-Aisha, Khaled
2012-07-01
The aim of this study was to detect endothelial nitric oxide synthase (eNOS) Glu298Asp gene variants in a random sample of the Egyptian population, compare it with those from other populations, and attempt to correlate these variants with serum levels of nitric oxide (NO). The association of eNOS genotypes or serum NO levels with the incidence of acute myocardial infarction (AMI) was also examined. One hundred one unrelated healthy subjects and 104 unrelated AMI patients were recruited randomly from the 57357 Hospital and intensive care units of El Demerdash Hospital and National Heart Institute, Cairo, Egypt. eNOS genotypes were determined by polymerase chain reaction-restriction fragment length polymorphism. Serum NO was determined spectrophotometrically. The genotype distribution of eNOS Glu298Asp polymorphism determined for our sample was 58.42% GG (wild type), 33.66% GT, and 7.92% TT genotypes while allele frequencies were 75.25% and 24.75% for G and T alleles, respectively. No significant association between serum NO and specific eNOS genotype could be detected. No significant correlation between eNOS genotype distribution or allele frequencies and the incidence of AMI was observed. The present study demonstrated the predominance of the homozygous genotype GG over the heterozygous GT and homozygous TT in random samples of Egyptian population. It also showed the lack of association between eNOS genotypes and mean serum levels of NO, as well as the incidence of AMI.
Optimization of sampling effort for a fishery-independent survey with multiple goals.
Xu, Binduo; Zhang, Chongliang; Xue, Ying; Ren, Yiping; Chen, Yong
2015-05-01
Fishery-independent surveys are essential for collecting high quality data to support fisheries management. For fish populations with low abundance and aggregated distribution in a coastal ecosystem, high intensity bottom trawl surveys may result in extra mortality and disturbance to the benthic community, imposing unnecessarily large negative impacts on the populations and ecosystem. Optimization of sampling design is necessary to achieve cost-effective sampling effort, which, however, may not be straightforward for a survey with multiple goals. We developed a simulation approach to evaluate and optimize sampling efforts for a stratified random survey with multiple goals including estimation of abundance indices of individual species and fish groups and species diversity indices. We compared the performances of different sampling efforts when the target estimation indices had different spatial variability over different survey seasons. This study suggests that sampling efforts in a stratified random survey can be reduced while still achieving relatively high precision and accuracy for most indices measuring abundance and biodiversity, which can reduce survey mortality. This study also shows that optimal sampling efforts for a stratified random design may vary with survey objectives. A postsurvey analysis, such as this study, can improve survey designs to achieve the most important survey goals.
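The kind of effort-versus-precision trade-off evaluated above can be sketched with a toy simulation (this is not the paper's framework; the strata, abundance distributions, and effort levels are all illustrative assumptions): draw repeated stratified random samples of increasing size from a synthetic population and track the coefficient of variation of the stratum-weighted abundance estimate.

```python
# Sketch: precision (CV) of a stratified-random abundance estimate as
# sampling effort grows. Strata and lognormal abundances are invented.
import random
import statistics

random.seed(1)
strata = {
    "inshore":  [random.lognormvariate(1.0, 0.8) for _ in range(300)],
    "midshelf": [random.lognormvariate(0.5, 0.6) for _ in range(300)],
    "offshore": [random.lognormvariate(0.2, 0.5) for _ in range(300)],
}
weights = {name: len(v) for name, v in strata.items()}
total_n = sum(weights.values())

def stratified_mean(n_per_stratum):
    """Stratum-weighted mean abundance from n stations per stratum."""
    est = 0.0
    for name, pop in strata.items():
        sample = random.sample(pop, n_per_stratum)
        est += (weights[name] / total_n) * statistics.fmean(sample)
    return est

def cv_of_estimate(n_per_stratum, reps=500):
    """Monte Carlo CV of the estimator at a given effort level."""
    ests = [stratified_mean(n_per_stratum) for _ in range(reps)]
    return statistics.stdev(ests) / statistics.fmean(ests)

for effort in (5, 20, 80):
    print(effort, round(cv_of_estimate(effort), 3))
```

The CV shrinks roughly with the square root of effort, which is the basic reason effort can often be reduced while keeping acceptable precision.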
Kashdan, Todd B; Farmer, Antonina S
2014-06-01
The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and these differences would translate to impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or to comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point of time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning.
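The emotion-differentiation index above is based on intraclass correlation across emotion items within each context. A sketch of one common form, the two-way consistency ICC(3,1), treating each assessment moment as a "target" and each negative-emotion item as a "rater" (high agreement across items means low differentiation); the paper does not publish its exact computation, so treat this as an illustrative formula, not the authors' code:

```python
# ICC(3,1) from a two-way ANOVA decomposition of a moments x items matrix.
# High ICC = items move together = LOW emotion differentiation.

def icc_consistency(ratings):
    """ICC(3,1) for rows (moments) x columns (emotion items)."""
    n = len(ratings)      # targets (moments)
    k = len(ratings[0])   # raters (emotion items)
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_tot - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# All items rated identically at each moment -> ICC = 1, no differentiation.
same = [[1, 1, 1], [4, 4, 4], [2, 2, 2], [5, 5, 5]]
print(icc_consistency(same))  # 1.0
```

A participant who rates "anxious", "sad", and "angry" with distinct profiles across moments would get a lower ICC, i.e., greater differentiation.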
The stratified H-index makes scientific impact transparent
DEFF Research Database (Denmark)
Würtz, Morten; Schmidt, Morten
2017-01-01
The H-index is widely used to quantify and standardize researchers' scientific impact. However, the H-index does not account for the fact that co-authors rarely contribute equally to a paper. Accordingly, we propose the use of a stratified H-index to measure scientific impact. The stratified H-index supplements the conventional H-index with three separate H-indices: one for first authorships, one for second authorships and one for last authorships. The stratified H-index takes scientific output, quality and individual author contribution into account.
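The stratified H-index described above can be sketched in a few lines: compute the usual H-index, then recompute it restricted to first-, second- and last-author papers. The citation counts and authorship positions below are invented for illustration.

```python
# H-index plus the proposed stratified variant (per authorship position).

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

papers = [  # (citations, this researcher's author position) -- invented
    (25, "first"), (18, "last"), (12, "first"), (9, "second"),
    (7, "last"), (4, "first"), (3, "second"), (1, "last"),
]

overall = h_index([c for c, _ in papers])
stratified = {pos: h_index([c for c, p in papers if p == pos])
              for pos in ("first", "second", "last")}
print(overall, stratified)  # 5 {'first': 3, 'second': 2, 'last': 2}
```

Here a researcher with overall H = 5 is revealed to have most of that impact as a first or last author, which is the transparency the stratified index aims for.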
Mukherjee, Shubhabrata; Walter, Stefan; Kauwe, John S K; Saykin, Andrew J; Bennett, David A; Larson, Eric B; Crane, Paul K; Glymour, M Maria
2015-12-01
Observational research shows that higher body mass index (BMI) increases Alzheimer's disease (AD) risk, but it is unclear whether this association is causal. We applied genetic variants that predict BMI in Mendelian randomization analyses, an approach that is not biased by reverse causation or confounding, to evaluate whether higher BMI increases AD risk. We evaluated individual-level data from the AD Genetics Consortium (ADGC: 10,079 AD cases and 9613 controls), the Health and Retirement Study (HRS: 8403 participants with algorithm-predicted dementia status), and published associations from the Genetic and Environmental Risk for AD consortium (GERAD1: 3177 AD cases and 7277 controls). No evidence from individual single-nucleotide polymorphisms or polygenic scores indicated BMI increased AD risk. Mendelian randomization effect estimates per BMI point (95% confidence intervals) were as follows: ADGC, odds ratio (OR) = 0.95 (0.90-1.01); HRS, OR = 1.00 (0.75-1.32); GERAD1, OR = 0.96 (0.87-1.07). One subscore (cellular processes not otherwise specified) unexpectedly predicted lower AD risk. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
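The core Mendelian-randomization logic above can be sketched with the single-variant Wald ratio (the consortium analyses used polygenic scores and individual-level data, so this is a simplification): the causal effect of BMI on AD is the variant's effect on AD divided by its effect on BMI. All effect sizes below are illustrative assumptions, not consortium estimates.

```python
# Wald-ratio Mendelian randomization sketch for one hypothetical variant.
import math

def wald_ratio(beta_gx, beta_gy, se_gy):
    """IV estimate of the exposure->outcome effect and a first-order SE."""
    est = beta_gy / beta_gx
    se = se_gy / abs(beta_gx)
    return est, se

# Hypothetical variant: +0.35 BMI units per allele; log-odds of AD per
# allele -0.018 (SE 0.011). Numbers are made up for illustration.
est, se = wald_ratio(0.35, -0.018, 0.011)
or_per_bmi = math.exp(est)  # odds ratio for AD per BMI point
print(round(or_per_bmi, 2),
      (round(math.exp(est - 1.96 * se), 2),
       round(math.exp(est + 1.96 * se), 2)))
```

Because the genotype is fixed at conception, this estimate is not distorted by reverse causation (e.g., preclinical AD lowering BMI), which is the design's appeal.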
Andersson, Ola; Hellström-Westas, Lena; Andersson, Dan; Clausen, Jesper; Domellöf, Magnus
2013-05-01
To investigate the effect of delayed cord clamping (DCC) compared with early cord clamping (ECC) on maternal postpartum hemorrhage (PPH) and umbilical cord blood gas sampling. Secondary analysis of a parallel-group, single-center, randomized controlled trial. Swedish county hospital. 382 term deliveries after a low-risk pregnancy. Deliveries were randomized to DCC (≥180 seconds, n = 193) or ECC (≤10 seconds, n = 189). Maternal blood loss was estimated by the midwife. Samples for blood gas analysis were taken from one umbilical artery and the umbilical vein, from the pulsating unclamped cord in the DCC group and from the double-clamped cord in the ECC group. Samples were classified as valid when the arterial-venous difference was -0.02 or less for pH and 0.5 kPa or more for pCO2 . Main outcome measures. PPH and proportion of valid blood gas samples. The differences between the DCC and ECC groups with regard to PPH (1.2%, p = 0.8) and severe PPH (-2.7%, p = 0.3) were small and non-significant. The proportion of valid blood gas samples was similar between the DCC (67%, n = 130) and ECC (74%, n = 139) groups, with 6% (95% confidence interval: -4%-16%, p = 0.2) fewer valid samples after DCC. Delayed cord clamping, compared with early, did not have a significant effect on maternal postpartum hemorrhage or on the proportion of valid blood gas samples. We conclude that delayed cord clamping is a feasible method from an obstetric perspective. © 2012 The Authors Acta Obstetricia et Gynecologica Scandinavica© 2012 Nordic Federation of Societies of Obstetrics and Gynecology.
Lui, Kung-Jong; Chang, Kuang-Chao
2008-01-15
When a generic drug is developed, it is important to assess the equivalence of therapeutic efficacy between the new and the standard drugs. Although the number of publications on testing equivalence and its relevant sample size determination is numerous, the discussion on sample size determination for a desired power of detecting equivalence under a randomized clinical trial (RCT) with non-compliance and missing outcomes is limited. In this paper, we derive under the compound exclusion restriction model the maximum likelihood estimator (MLE) for the ratio of probabilities of response among compliers between two treatments in a RCT with both non-compliance and missing outcomes. Using the MLE with the logarithmic transformation, we develop an asymptotic test procedure for assessing equivalence and find that this test procedure can perform well with respect to type I error based on Monte Carlo simulation. We further develop a sample size calculation formula for a desired power of detecting equivalence at a nominal alpha-level. To evaluate the accuracy of the sample size calculation formula, we apply Monte Carlo simulation again to calculate the simulated power of the proposed test procedure corresponding to the resulting sample size for a desired power of 80 per cent at the 0.05 level in a variety of situations. We also include a discussion on determining the optimal ratio of sample size allocation subject to a desired power to minimize a linear cost function and provide a sensitivity analysis of the sample size formula developed here under an alternative model with missing at random. Copyright (c) 2007 John Wiley & Sons, Ltd.
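For orientation, the basic shape of such a calculation can be sketched without the non-compliance and missingness adjustments (the paper's compound-exclusion-restriction formula is more involved): a TOST-style normal approximation on the log ratio of response probabilities. The margin, probabilities, and power target below are illustrative.

```python
# Approximate n per arm for equivalence on a ratio of proportions,
# via a normal approximation on the log ratio (simplified; no
# non-compliance or missing-outcome adjustment).
import math
from statistics import NormalDist

def n_per_arm(p1, p2, delta, alpha=0.05, power=0.80):
    """n/arm so the log ratio lies within (-log delta, log delta)."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - (1 - power) / 2)
    var_n = (1 - p1) / p1 + (1 - p2) / p2   # n * Var(log(p1_hat/p2_hat))
    margin = math.log(delta) - abs(math.log(p1 / p2))
    return math.ceil((z_a + z_b) ** 2 * var_n / margin ** 2)

# 70% response in both arms, equivalence margin delta = 1.25
print(n_per_arm(0.7, 0.7, delta=1.25))
```

Tightening the margin inflates the required n sharply, since n scales with the inverse square of the margin on the log scale.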
LENUS (Irish Health Repository)
Billington, Jennifer
2012-08-07
Background: The STRATIFY score is a clinical prediction rule (CPR) derived to assist clinicians to identify patients at risk of falling. The purpose of this systematic review and meta-analysis is to determine the overall diagnostic accuracy of the STRATIFY rule across a variety of clinical settings. Methods: A literature search was performed to identify all studies that validated the STRATIFY rule. The methodological quality of the studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. A STRATIFY score of ≥2 points was used to identify individuals at higher risk of falling. All included studies were combined using a bivariate random effects model to generate pooled sensitivity and specificity of STRATIFY at ≥2 points. Heterogeneity was assessed using the variance of logit transformed sensitivity and specificity. Results: Seventeen studies were included in our meta-analysis, incorporating 11,378 patients. At a score ≥2 points, the STRATIFY rule is more useful at ruling out falls in those classified as low risk, with a greater pooled sensitivity estimate (0.67, 95% CI 0.52-0.80) than specificity (0.57, 95% CI 0.45-0.69). The sensitivity analysis which examined the performance of the rule in different settings and subgroups also showed broadly comparable results, indicating that the STRATIFY rule performs in a similar manner across a variety of different 'at risk' patient groups in different clinical settings. Conclusion: This systematic review shows that the diagnostic accuracy of the STRATIFY rule is limited and that the rule should not be used in isolation for identifying individuals at high risk of falls in clinical practice.
Energy Technology Data Exchange (ETDEWEB)
Burr, T. [Statistical Sciences Group, Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)], E-mail: tburr@lanl.gov; Butterfield, K. [Advanced Nuclear Technology Group, Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)
2008-09-01
Neutron multiplicity counting is an established method to estimate the spontaneous fission rate, and therefore also the plutonium mass for example, in a sample that includes other neutron sources. The extent to which the sample and detector obey the 'point model' assumptions impacts the estimate's total measurement error, but, in nearly all cases, for the random error contribution, it is useful to evaluate the variances of the second and third reduced sample moments of the neutron source strength. Therefore, this paper derives exact expressions for the variances and covariances of the second and third reduced sample moments for either randomly triggered or signal-triggered non-overlapping counting gates, and compares them to the corresponding variances in simulated data. Approximate expressions are also provided for the case of overlapping counting gates. These variances and covariances are useful in figure of merit calculations to predict assay performance prior to data collection. In addition, whenever real data are available, a bootstrap method is presented as an alternate but effective way to estimate these variances.
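The bootstrap alternative mentioned at the end of the abstract can be sketched as follows. The paper works with reduced factorial moments of multiplicity data under specific gate structures; plain central moments of invented per-gate counts are used here purely to illustrate the resampling mechanics.

```python
# Bootstrap variance of the second and third sample central moments of
# per-gate counts (illustrative stand-in for the paper's reduced moments).
import random

def central_moment(xs, r):
    """r-th sample central moment."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** r for x in xs) / len(xs)

def bootstrap_var(xs, r, reps=2000, seed=7):
    """Bootstrap estimate of Var(r-th sample central moment)."""
    rng = random.Random(seed)
    stats = [central_moment(rng.choices(xs, k=len(xs)), r)
             for _ in range(reps)]
    mean = sum(stats) / reps
    return sum((s - mean) ** 2 for s in stats) / (reps - 1)

rng = random.Random(42)
counts = [rng.randint(0, 6) for _ in range(400)]  # invented gate counts
print(bootstrap_var(counts, 2), bootstrap_var(counts, 3))
```

The appeal noted in the abstract is that the bootstrap needs no closed-form variance expression: resampling the observed gates approximates the sampling distribution of the moment estimator directly.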
Wu, Huai-Hui; Wang, Chih-Yuan; Teng, Hwa-Jen; Lin, Cheo; Lu, Liang-Chen; Jian, Shu-Wan; Chang, Niann-Tai; Wen, Tzai-Hung; Wu, Jhy-Wen; Liu, Ding-Ping; Lin, Li-Jen; Norris, Douglas E; Wu, Ho-Sheng
2013-03-01
Aedes aegypti L. is the primary dengue vector in southern Taiwan. This article is the first report on a large-scale surveillance program to study the spatial-temporal distribution of the local Ae. aegypti population using ovitraps stratified according to the human population in high dengue-risk areas. The sampling program was conducted for 1 yr and was based on weekly collections of eggs and adults in Kaohsiung City. In total, 10,380 ovitraps were placed in 5,190 households. Paired ovitraps, one indoors and one outdoors, were used per 400 people. Three treatments in these ovitraps (paddle-shaped wooden sticks, sticky plastic, or both) were assigned by stratified random sampling to two areas (metropolitan or rural, respectively). We found that the sticky plastic alone had a higher sensitivity for detecting the occurrence of indigenous dengue cases than other treatments, with time lags of up to 14 wk. The wooden paddle alone detected the oviposition of Ae. aegypti throughout the year in this study area. Furthermore, significantly more Ae. aegypti females were collected indoors than outdoors. Our survey thus characterized year-round oviposition activity and the spatial-temporal distribution of the local Ae. aegypti population, and identified a 14-wk lag correlation with dengue incidence that can inform proactive control planning.
Beyond the E-Value: Stratified Statistics for Protein Domain Prediction.
Directory of Open Access Journals (Sweden)
Alejandro Ochoa
2015-11-01
E-values have been the dominant statistic for protein sequence analysis for the past two decades: from identifying statistically significant local sequence alignments to evaluating matches to hidden Markov models describing protein domain families. Here we formally show that for "stratified" multiple hypothesis testing problems (that is, those in which statistical tests can be partitioned naturally), controlling the local False Discovery Rate (lFDR) per stratum, or partition, yields the most predictions across the data at any given threshold on the FDR or E-value over all strata combined. For the important problem of protein domain prediction, a key step in characterizing protein structure, function and evolution, we show that stratifying statistical tests by domain family yields excellent results. We develop the first FDR-estimating algorithms for domain prediction, and evaluate how well thresholds based on q-values, E-values and lFDRs perform in domain prediction using five complementary approaches for estimating empirical FDRs in this context. We show that stratified q-value thresholds substantially outperform E-values. Contradicting our theoretical results, q-values also outperform lFDRs; however, our tests reveal a small but coherent subset of domain families, biased towards models for specific repetitive patterns, for which weaknesses in random sequence models yield notably inaccurate statistical significance measures. Usage of lFDR thresholds outperforms q-values for the remaining families, where noise behaves as expected, suggesting that further improvements in domain predictions can be achieved with improved modeling of random sequences. Overall, our theoretical and empirical findings suggest that the use of stratified q-values and lFDRs could result in improvements in a host of structured multiple hypothesis testing problems arising in bioinformatics, including genome-wide association studies, orthology prediction, and motif scanning.
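Stratified q-value thresholding, as described above, can be sketched directly: compute Benjamini-Hochberg q-values separately within each stratum (here, domain family), then apply a single q-value cutoff across all strata. The family names and p-values below are invented for illustration.

```python
# Per-stratum Benjamini-Hochberg q-values with a shared threshold.

def bh_qvalues(pvals):
    """Benjamini-Hochberg q-values (monotone step-up adjustment)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):       # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

strata = {  # hypothetical per-family match p-values
    "zf-C2H2": [0.001, 0.008, 0.04, 0.20, 0.60],
    "WD40":    [0.0005, 0.03, 0.45],
}
hits = {name: [i for i, qv in enumerate(bh_qvalues(ps)) if qv <= 0.05]
        for name, ps in strata.items()}
print(hits)  # {'zf-C2H2': [0, 1], 'WD40': [0, 1]}
```

The point of stratification is that each family's q-values are calibrated against its own null behavior, so a permissive family cannot drown out a conservative one at a shared FDR threshold.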
Yeh, Mary S L; Mari, Jair Jesus; Costa, Mariana Caddrobi Pupo; Andreoli, Sergio Baxter; Bressan, Rodrigo Affonseca; Mello, Marcelo Feijó
2011-10-01
To evaluate the efficacy and tolerability of topiramate in patients with posttraumatic stress disorder (PTSD). We conducted a 12-week double-blind, randomized, placebo-controlled study comparing topiramate to placebo. Men and women aged 18-62 years with diagnosis of PTSD according to DSM-IV were recruited from the outpatient clinic of the violence program of Federal University of São Paulo Hospital (Prove-UNIFESP), São Paulo City, between April 2006 and December 2009. Subjects were assessed for the Clinician-Administered Posttraumatic Stress Scale (CAPS), Clinical Global Impression, and Beck Depression Inventory (BDI). After 1-week period of washout, 35 patients were randomized to either group. The primary outcome measure was the CAPS total score changes from baseline to the endpoint. 82.35% of patients in the topiramate group exhibited improvements in PTSD symptoms. The efficacy analysis demonstrated that patients in the topiramate group exhibited significant improvements in reexperiencing symptoms: flashbacks, intrusive memories, and nightmares of the trauma (CAPS-B; P= 0.04) and in avoidance/numbing symptoms associated with the trauma, social isolation, and emotional numbing (CAPS-C; P= 0.0001). Furthermore, the experimental group demonstrated a significant difference in decrease in CAPS total score (topiramate -57.78; placebo -32.41; P= 0.0076). Mean topiramate dose was 102.94 mg/d. Topiramate was generally well tolerated. Topiramate was effective in improving reexperiencing and avoidance/numbing symptom clusters in patients with PTSD. This study supports the use of anticonvulsants for the improvement of symptoms of PTSD. © 2010 Blackwell Publishing Ltd.
Burger, Rulof P; McLaren, Zoë M
2017-09-01
The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.
Knotters, M.; Brus, D.J.
2013-01-01
The quality of ecotope maps of five districts of main water courses in the Netherlands was assessed on the basis of independent validation samples of field observations. The overall proportion of area correctly classified, and user's and producer's accuracy for each map unit were estimated. In four
Huynh, Huynh; Feldt, Leonard S.
1976-01-01
When the variance assumptions of a repeated measures ANOVA are not met, the F distribution of the mean square ratio should be adjusted by the sample estimate of the Box correction factor. An alternative is proposed which is shown by Monte Carlo methods to be less biased for a moderately large factor. (RC)
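The corrections discussed above can be sketched numerically: the Greenhouse-Geisser (Box) epsilon computed from the double-centered covariance matrix, and the Huynh-Feldt adjustment of it (the less biased alternative this paper proposes). The covariance matrices below are illustrative; this is a sketch of the standard formulas, not the paper's derivation.

```python
# Greenhouse-Geisser epsilon and the Huynh-Feldt correction for a
# repeated-measures design with k conditions and n subjects.

def gg_epsilon(S):
    """GG epsilon from a symmetric k x k covariance matrix S."""
    k = len(S)
    row = [sum(r) / k for r in S]        # row means (= col means: symmetric)
    grand = sum(row) / k
    # Double-center S; under sphericity D has equal nonzero eigenvalues.
    D = [[S[i][j] - row[i] - row[j] + grand for j in range(k)]
         for i in range(k)]
    tr = sum(D[i][i] for i in range(k))
    ss = sum(D[i][j] ** 2 for i in range(k) for j in range(k))
    return tr ** 2 / ((k - 1) * ss)

def hf_epsilon(S, n):
    """Huynh-Feldt correction of the GG epsilon, capped at 1."""
    k = len(S)
    e = gg_epsilon(S)
    return min(1.0, (n * (k - 1) * e - 2) /
                    ((k - 1) * (n - 1 - (k - 1) * e)))

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # sphericity holds exactly
print(round(gg_epsilon(identity), 6), round(hf_epsilon(identity, 10), 6))
```

When sphericity holds, both epsilons equal 1 and no df adjustment is made; as sphericity is violated, epsilon drops toward 1/(k-1), shrinking the F-test's degrees of freedom.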
DEFF Research Database (Denmark)
Puri, Rajesh; Vilmann, Peter; Saftoiu, Adrian
2009-01-01
). The samples were characterized for cellularity and bloodiness, with a final cytology diagnosis established blindly. The final diagnosis was reached either by EUS-FNA if malignancy was definite, or by surgery and/or clinical follow-up of a minimum of 6 months in the cases of non-specific benign lesions...
Sancho-Garnier, H; Tamalet, C; Halfon, P; Leandri, F X; Le Retraite, L; Djoufelkit, K; Heid, P; Davies, P; Piana, L
2013-12-01
Today in France, low attendance to cervical screening by Papanicolaou cytology (Pap-smear) is a major contributor to the 3,000 new cervical cancer cases and 1,000 deaths that occur from this disease every year. Nonattenders are mostly from lower socioeconomic groups and testing of self-obtained samples for high-risk Human Papilloma virus (HPV) types has been proposed as a method to increase screening participation in these groups. In 2011, we conducted a randomized study of women aged 35-69 from very low-income populations around Marseille who had not responded to an initial invitation for a free Pap-smear. After randomization, one group received a second invitation for a free Pap-smear and the other group was offered a free self-sampling kit for HPV testing. Participation rates were significantly different between the two groups with only 2.0% of women attending for a Pap-smear while 18.3% of women returned a self-sample for HPV testing (p ≤ 0.001). The detection rate of high-grade lesions (≥CIN2) was 0.2‰ in the Pap-smear group and 1.25‰ in the self-sampling group (p = 0.01). Offering self-sampling increased participation rates while the use of HPV testing increased the detection of cervical lesions (≥CIN2) in comparison to the group of women receiving a second invitation for a Pap-smear. However, low compliance to follow-up in the self-sampling group reduces the effectiveness of this screening approach in nonattenders women and must be carefully managed. Copyright © 2013 UICC.
A comparison of two sampling designs for fish assemblage assessment in a large river
Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph; Hayes, Daniel
2014-01-01
We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort, such as what occurred in the fall 2011 fixed-station sample or due to purposeful site selection.
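The stratified random design favored above can be sketched mechanically: allocate transects to strata in proportion to shoreline length, then pick random transect start points within each stratum. Stratum names, lengths, and the transect budget below are hypothetical, not the Penobscot survey's actual strata.

```python
# Proportional allocation of electrofishing transects to strata, with
# random 500-m transect start points (km along shore) in each stratum.
import random

def allocate(strata_km, total_transects):
    """Proportional allocation with largest-remainder rounding."""
    total_km = sum(strata_km.values())
    raw = {s: total_transects * km / total_km for s, km in strata_km.items()}
    alloc = {s: int(r) for s, r in raw.items()}
    leftovers = sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)
    for s in leftovers[: total_transects - sum(alloc.values())]:
        alloc[s] += 1
    return alloc

random.seed(3)
strata_km = {"impoundment": 18.0, "riverine": 10.0, "tidal": 7.0}
alloc = allocate(strata_km, total_transects=20)
starts = {s: sorted(round(random.uniform(0, strata_km[s] - 0.5), 2)
                    for _ in range(n)) for s, n in alloc.items()}
print(alloc)  # {'impoundment': 10, 'riverine': 6, 'tidal': 4}
```

Randomizing transects each survey is what protects against the site-selection bias the authors observed in the fixed-station design.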
Directory of Open Access Journals (Sweden)
Karunamuni Nandini
2008-12-01
Full Text Available Abstract Background Aerobic physical activity (PA and resistance training are paramount in the treatment and management of type 2 diabetes (T2D, but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random national sample which was created by generating a random list of household phone numbers. The list was proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec. These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results TPB explained 10% and 8% of the variance respectively for aerobic PA and resistance training; and accounted for 39% and 45% of the variance respectively for aerobic PA and resistance training intentions. Conclusion These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.
Assessment of efficient sampling designs for urban stormwater monitoring.
Leecaster, Molly K; Schiff, Kenneth; Tiefenthaler, Liesl L
2002-03-01
Monitoring programs for urban runoff have not been assessed for effectiveness or efficiency in estimating mass emissions. In order to determine appropriate designs for stormwater, total suspended solids (TSS) and flow information from the Santa Ana River was collected nearly every 15 min for every storm of the 1998 water year. All samples were used to calculate the "true load" and then three within-storm sampling designs (flow-interval, time-interval, and simple random) and five among-storm sampling designs (stratified by size, stratified by season, simple random, simple random of medium and large storms, and the first m storms of the season) were simulated. Using these designs, we evaluated three estimators for storm mass emissions (mean, volume-weighted, and ratio) and three estimators for annual mass emissions (median, ratio, and regular). Designs and estimators were evaluated with respect to accuracy and precision. The optimal strategy was used to determine the appropriate number of storms to sample annually based upon confidence interval width for estimates of annual mass emissions and concentration. The amount of detectable trend in mass emissions and concentration was determined for sample sizes 3 and 7. Single storms were most efficiently characterized (small bias and standard error) by taking 12 samples following a flow-interval schedule and using a volume-weighted estimator of mass emissions. The ratio estimator, when coupled with the simple random sample of medium and large storms within a season, most accurately estimated concentration and mass emissions; and had low bias over all of the designs. Sampling seven storms is the most efficient method for attaining small confidence interval width for annual concentration. Sampling three storms per year allows a 20% trend to be detected in mass emissions or concentration over five years. These results are decreased by 10% by sampling seven storms per year.
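The best-performing within-storm strategy above (flow-interval sampling with a volume-weighted estimator) can be sketched as follows; the concentrations and volumes are invented, not Santa Ana River data.

```python
# Volume-weighted storm mass emission from flow-interval samples:
# VWC (mg/L) times total runoff volume (m3) gives grams; /1000 -> kg.

def storm_load(concs_mg_l, volumes_m3):
    """Mass emission (kg) from sample concentrations and the runoff
    volume each sample represents."""
    vwc = (sum(c * v for c, v in zip(concs_mg_l, volumes_m3))
           / sum(volumes_m3))                    # volume-weighted mean conc
    return vwc * sum(volumes_m3) / 1000.0        # mg/L * m3 = g -> kg

# 12 flow-interval TSS samples: equal runoff increments by design.
concs = [310, 280, 240, 500, 420, 180, 150, 130, 120, 110, 100, 90]
vols = [2500.0] * 12
print(round(storm_load(concs, vols), 1))  # 6575.0
```

With flow-interval sampling, each sample represents an equal runoff volume, so the volume-weighted mean reduces to a simple mean of concentrations times the storm volume, which is why only 12 samples characterize a storm well.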
Abebe, Kaleab Z; Jones, Kelley A; Ciaravino, Samantha; Ripper, Lisa; Paglisotti, Taylor; Morrow, Sarah Elizabeth; Grafals, Melanie; Van Dusen, Courtney; Miller, Elizabeth
2017-11-01
High rates of adolescent relationship abuse (ARA) and sexual violence (SV) reported among adolescents point to the need for prevention among middle school-age youth. This is a cluster randomized controlled trial to test an athletic coach-delivered ARA/SV prevention program in 41 middle schools (38 clusters). Trained coaches talk to their male athletes about 1) what constitutes harmful vs. respectful relationship behaviors, 2) dispelling myths that glorify male sexual aggression and promoting more gender-equitable attitudes, and 3) positive bystander intervention when aggressive male behaviors toward females are witnessed. A total of 973 male athletes (ages 11-14, grades 6-8) are participating. Athletes complete surveys at the beginning and end of sports season (Time 2), and one year later (Time 3). The primary outcome is an increase in positive bystander behaviors (i.e., intervening in peers' disrespectful or harmful behaviors); secondary outcomes are changes in recognition of what constitutes abusive behavior, intentions to intervene, and gender equitable attitudes (Time 2 and 3) as well as reduction in abuse perpetration (Time 3). Participating schools have a greater proportion of non-White students and students on free/reduced lunch compared to schools that declined participation. Participants' self-reported ethnicities are 54.5% White, 29.0% Black, 1.4% Hispanic and the remainder, multi-racial, other, or not reported. This study will evaluate the effectiveness of a coach-delivered ARA/SV prevention program for middle school male athletes. Findings will add to the evidence base regarding developmentally appropriate violence prevention programs as well as the role of coaches in adolescent health promotion. Clinical Trials #: NCT02331238. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Taylor, C Barr; Kass, Andrea E; Trockel, Mickey; Cunning, Darby; Weisman, Hannah; Bailey, Jakki; Sinton, Meghan; Aspen, Vandana; Schecthman, Kenneth; Jacobi, Corinna; Wilfley, Denise E
2016-05-01
Eating disorders (EDs) are serious problems among college-age women and may be preventable. An indicated online eating disorder (ED) intervention, designed to reduce ED and comorbid pathology, was evaluated. 206 women (M age = 20 ± 1.8 years; 51% White/Caucasian, 11% African American, 10% Hispanic, 21% Asian/Asian American, 7% other) at very high risk for ED onset (i.e., with high weight/shape concerns plus a history of being teased, current or lifetime depression, and/or nonclinical levels of compensatory behaviors) were randomized to a 10-week, Internet-based, cognitive-behavioral intervention or waitlist control. Assessments included the Eating Disorder Examination (EDE, to assess ED onset), EDE-Questionnaire, Structured Clinical Interview for DSM Disorders, and Beck Depression Inventory-II. ED attitudes and behaviors improved more in the intervention than control group (p = .02, d = 0.31); although ED onset rate was 27% lower, this difference was not significant (p = .28, NNT = 15). In the subgroup with highest shape concerns, ED onset rate was significantly lower in the intervention than control group (20% vs. 42%, p = .025, NNT = 5). For the 27 individuals with depression at baseline, depressive symptomatology improved more in the intervention than control group (p = .016, d = 0.96); although ED onset rate was lower in the intervention than control group, this difference was not significant (25% vs. 57%, NNT = 4). An inexpensive, easily disseminated intervention might reduce ED onset among those at highest risk. Low adoption rates need to be addressed in future research. (c) 2016 APA, all rights reserved.
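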
Sulaiman, Nabil; Albadawi, Salah; Abusnana, Salah; Fikri, Mahmoud; Madani, Abdulrazzag; Mairghani, Maisoon; Alawadi, Fatheya; Zimmet, Paul; Shaw, Jonathan
2015-09-01
The prevalence of diabetes has risen rapidly in the Middle East, particularly in the Gulf Region. However, some prevalence estimates have not fully accounted for large migrant worker populations and have focused on minority indigenous populations. The objectives of the UAE National Diabetes and Lifestyle Study are to: (i) define the prevalence of, and risk factors for, T2DM; (ii) describe the distribution and determinants of T2DM risk factors; (iii) study health knowledge and attitudes; (iv) identify gene-environment interactions; and (v) develop baseline data for evaluation of future intervention programs. Given the high burden of diabetes in the region and the absence of accurate data on non-UAE nationals in the UAE, a representative sample of non-UAE nationals was essential. We used an innovative methodology in which non-UAE nationals were sampled when attending the mandatory biannual health check that is required for visa renewal. Such an approach could also be used in other countries in the region. Complete data were available for 2719 eligible non-UAE nationals (25.9% Arabs, 70.7% Asian non-Arabs, 1.1% African non-Arabs, and 2.3% Westerners). Most were men employed in service, sales, and unskilled occupations. The largest educational group (37.4%) had completed high school, and 4.1% had a postgraduate degree. This novel methodology could provide insights for epidemiological studies in the UAE and other Gulf States, particularly for expatriates. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
Chest pain presenting to the Emergency Department--to stratify risk with GRACE or TIMI?
Lyon, Richard; Morris, Andrew Conway; Caesar, David; Gray, Sarah; Gray, Alasdair
2007-07-01
There is a need to stratify risk rapidly in patients presenting to the Emergency Department (ED) with undifferentiated chest pain. The Global Registry of Acute Coronary Events (GRACE) and the Thrombolysis in Myocardial Infarction (TIMI) scoring systems predict outcome of adverse coronary events in patients admitted to specialist cardiac units. This study evaluates the relationship between GRACE score and outcome in patients presenting to the ED with undifferentiated chest pain and establishes whether GRACE is preferential to TIMI in stratifying risk in patients in the ED setting. Descriptive study of a consecutive sample of 1000 ED patients with undifferentiated chest pain presenting to Edinburgh Royal Infirmary, Scotland. GRACE and TIMI scores were calculated for each patient and outcomes noted at 30 days. Outcomes included ST and non-ST myocardial infarction, cardiac arrest, revascularisation, unstable angina with myocardial damage and all cause mortality at 30 days. Score and outcome were compared using receiver operator characteristic curves (AUC-ROC). The GRACE score stratifies risk accurately in patients presenting to the ED with undifferentiated chest pain (AUC-ROC 0.80 (95% CI 0.75-0.85), see Table 1). The TIMI score was found to be similarly accurate in stratifying risk in the study cohort with an AUC-ROC of 0.79 (95% CI 0.74-0.85). It was only possible to calculate a complete GRACE score in 76% (n=760) cases as not all the data variables were measured routinely in the ED. GRACE and TIMI are both effective in accurately stratifying risk in patients presenting to the ED with undifferentiated chest pain. The GRACE score is more complex than the TIMI score and in the ED setting TIMI may be the preferred scoring method.
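The AUC-ROC comparison used in this study can be illustrated with the rank (Mann-Whitney) formulation of the AUC. This is a generic sketch, not the study's code, and the scores below are hypothetical:

```python
def auc(scores_pos, scores_neg):
    """AUC = probability that a randomly chosen patient who had an adverse
    event outscores one who did not (ties count 0.5); this equals the area
    under the ROC curve."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical GRACE-like scores: patients with vs. without a 30-day event
print(auc([140, 165, 180], [90, 110, 140]))  # about 0.94
```

With real data, the two scoring systems would each produce a score list for the same patients, and their AUCs (here 0.80 vs. 0.79) could be compared on the same outcome labels.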
Directory of Open Access Journals (Sweden)
Smedslund Geir
2013-02-01
Abstract Background Patient-reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, from 96 to 71 (73) for fatigue, from 57 to 51 (48) for disease activity, from 59 to 44 (45) for self-care, and from 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
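The reported savings are consistent with averaging repeated measures of a fluctuating outcome. Under a compound-symmetry assumption (equal correlation ρ between repeats, which is an assumption of this sketch, not necessarily the paper's model), the variance of the mean of k measures, and hence the required sample size, shrinks by the factor (1 + (k − 1)ρ)/k:

```python
def reduction(k, rho):
    """Relative sample size when averaging k equally correlated measures
    (compound symmetry): Var(mean) = sigma^2 * (1 + (k - 1) * rho) / k."""
    return (1 + (k - 1) * rho) / k

# relative sample size and % saved for 1-5 measurements, assuming rho = 0.7
for k in range(1, 6):
    print(k, round(reduction(k, 0.7), 3), f"{1 - reduction(k, 0.7):.0%} saved")
```

With ρ ≈ 0.7 this yields reductions of roughly 15%, 20%, and 22% for two, three, and four measures, close to the 15%, 21%, and 24% reported above.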
Energy Technology Data Exchange (ETDEWEB)
Donner, P.
2016-07-01
Citation counts of scientific research contributions are fundamental data in scientometrics. Accuracy and completeness of citation links are therefore crucial data quality issues (Moed, 2005, Ch. 13). However, despite the known flaws of reference matching algorithms, usually no attempts are made to incorporate uncertainty about citation counts into indicators. This study is a step towards that goal. Particular attention is paid to the question whether publications from countries not using basic Latin script are differently affected by missed citations. The proprietary reference matching procedure of Web of Science (WoS) is based on (near) exact agreement of cited reference data (normalized during processing) to the target paper's bibliographic data. Consequently, the procedure has near-optimal precision but incomplete recall - it is known to miss some slightly inaccurate reference links (Olensky, 2015). However, there has been no attempt so far to estimate the rate of missed citations by a principled method for a random sample. For this study a simple random sample of WoS source papers was drawn and it was attempted to find all reference strings of WoS indexed documents that refer to them, in particular inexact matches. The objective is to give a statistical estimate of the proportion of missed citations and to describe the relationship of the number of found citations to the number of missed citations, i.e. the conditional error distribution. The empirical error distribution is statistically analyzed and modelled.
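The matching problem described here, where exact agreement after normalization misses slightly inaccurate references, can be sketched as follows. The normalization rule and the edit-distance tolerance are illustrative assumptions of this sketch, not WoS's proprietary procedure:

```python
import re

def normalize(ref):
    """Crude normalization of a cited-reference string: lowercase and
    strip punctuation (assumption; the real pipeline is more elaborate)."""
    ref = re.sub(r"[^a-z0-9 ]", " ", ref.lower())
    return " ".join(ref.split())

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def matches(cited, target, tol=2):
    """Exact match after normalization, or an inexact match within tol edits."""
    a, b = normalize(cited), normalize(target)
    return a == b or levenshtein(a, b) <= tol
```

A cited reference with a one-digit page error fails exact matching but is recovered by the tolerant comparison, which is the kind of inexact match the study searches for.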
Stability of stratified two-phase flows in horizontal channels
Barmak, Ilya; Ullmann, Amos; Brauner, Neima; Vitoshkin, Helen
2016-01-01
Linear stability of stratified two-phase flows in horizontal channels to arbitrary wavenumber disturbances is studied. The problem is reduced to Orr-Sommerfeld equations for the stream function disturbances, defined in each sublayer and coupled via boundary conditions that also account for possible interface deformation and capillary forces. Applying the Chebyshev collocation method, the equations and interface boundary conditions are reduced to generalized eigenvalue problems, which are solved by standard means of numerical linear algebra for the entire spectrum of eigenvalues and the associated eigenvectors. Some additional conclusions concerning the nature of the instability are derived from the most unstable perturbation patterns. The results are summarized in the form of stability maps showing the operational conditions at which a stratified-smooth flow pattern is stable. It is found that for gas-liquid and liquid-liquid systems the stratified flow with a smooth interface is stable only in a confined zone of relatively lo...
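The Chebyshev collocation step mentioned above replaces derivatives with a dense differentiation matrix on Gauss-Lobatto points, which is what turns the Orr-Sommerfeld equations into a generalized matrix eigenvalue problem. A minimal sketch of that building block (the full eigenproblem with interface and wall conditions is beyond this snippet):

```python
import math

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative matrix
    (Trefethen-style construction with the negative-sum diagonal trick)."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0] + [1.0] * (N - 1) + [2.0]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        # each row of a differentiation matrix annihilates constants
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D

# differentiation of polynomials up to degree N is exact at these nodes:
# D applied to x^2 should return 2x to rounding error
x, D = cheb(8)
f = [xi ** 2 for xi in x]
df = [sum(D[i][j] * f[j] for j in range(len(x))) for i in range(len(x))]
```

In the paper's setting, such matrices (and their powers, for fourth-order operators) are assembled per sublayer and coupled through the interface conditions before the eigenvalues are computed.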
Stability of stratified two-phase flows in inclined channels
Barmak, Ilya; Ullmann, Amos; Brauner, Neima
2016-01-01
Linear stability of stratified gas-liquid and liquid-liquid plane-parallel flows in inclined channels is studied with respect to all wavenumber perturbations. The main objective is to predict parameter regions in which stable stratified configuration in inclined channels exists. Up to three distinct base states with different holdups exist in inclined flows, so that the stability analysis has to be carried out for each branch separately. Special attention is paid to the multiple solution regions to reveal the feasibility of non-unique stable stratified configurations in inclined channels. The stability boundaries of each branch of steady state solutions are presented on the flow pattern map and are accompanied by critical wavenumbers and spatial profiles of the most unstable perturbations. Instabilities of different nature are visualized by streamlines of the neutrally stable perturbed flows, consisting of the critical perturbation superimposed on the base flow. The present analysis confirms the existence of ...
Thermal vibrational convection in a two-phase stratified liquid
Chang, Qingming; Alexander, J. Iwan D.
2007-05-01
The response of a two-phase stratified liquid system subject to a vibration parallel to an imposed temperature gradient is analyzed using a hybrid thermal lattice Boltzmann method (HTLB). The vibrations considered correspond to sinusoidal translations of a rigid cavity at a fixed frequency. The layers are thermally and mechanically coupled. Interaction between gravity-induced and vibration-induced thermal convection is studied. The ability of the applied vibration to enhance the flow, heat transfer and interface distortion is investigated. For the range of conditions investigated, the results reveal that the effect of the vibrational Rayleigh number and vibrational frequency on a two-phase stratified fluid system is much different from that for a single-phase fluid system. Comparisons of the response of a two-phase stratified fluid system with a single-phase fluid system are discussed. To cite this article: Q. Chang, J.I.D. Alexander, C. R. Mecanique 335 (2007).
Sound propagation in a continuously stratified laboratory ocean model.
Zhang, Likun; Swinney, Harry L
2017-05-01
The propagation of sound in a density-stratified fluid is examined in an experiment with a tank of salty water whose density increases continuously from the fluid surface to the tank bottom. Measurements of the height dependence of the fluid density are used to calculate the height dependence of the fluid salinity and sound speed. The height-dependent sound speed is then used to calculate the refraction of sound rays. Sound propagation in the fluid is measured in three dimensions and compared with the ray analysis. This study provides a basis for laboratory modeling of underwater sound propagation in the fluctuating stratified oceans.
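Ray refraction in such a continuously stratified fluid follows Snell's law layer by layer: sin θ / c(z) is invariant along the ray. A small sketch with an assumed, purely illustrative sound-speed profile:

```python
import math

def refract_angles(c_profile, theta0_deg):
    """Ray angle (measured from the vertical) in each layer of a horizontally
    stratified medium, from Snell's law: sin(theta) / c is invariant."""
    k = math.sin(math.radians(theta0_deg)) / c_profile[0]
    angles = []
    for c in c_profile:
        s = k * c
        if s >= 1.0:
            angles.append(None)  # ray turns (total internal reflection)
        else:
            angles.append(math.degrees(math.asin(s)))
    return angles

# hypothetical profile (m/s) with sound speed increasing between layers
print(refract_angles([1480.0, 1490.0, 1500.0], 60.0))
```

Rays bend away from regions of higher sound speed, and a ray turns around entirely once k·c reaches 1, which is the behavior the measured height-dependent sound speed is used to predict.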
Tooms, S.; Attenborough, K.
1990-01-01
Using a Fast Fourier integration method and a global matrix method for solution of the boundary condition equations at all interfaces simultaneously, a useful tool for predicting acoustic propagation in a stratified fluid over a stratified porous-elastic solid was developed. The model for the solid is a modified Biot-Stoll model incorporating four parameters describing the pore structure corresponding to the Rayleigh-Attenborough rigid-porous structure model. The method is also compared to another Fast Fourier code (CERL-FFP) which models the ground as an impedance surface under a horizontally stratified air. Agreement with the CERL FFP is good. The effects on sound propagation of a combination of ground elasticity, complex ground structure, and atmospheric conditions are demonstrated by theoretical results over a snow layer, and experimental results over a model ground surface.
Beyond the E-Value: Stratified Statistics for Protein Domain Prediction
Ochoa, Alejandro; Storey, John D.; Llinás, Manuel; Singh, Mona
2015-01-01
E-values have been the dominant statistic for protein sequence analysis for the past two decades: from identifying statistically significant local sequence alignments to evaluating matches to hidden Markov models describing protein domain families. Here we formally show that for “stratified” multiple hypothesis testing problems—that is, those in which statistical tests can be partitioned naturally—controlling the local False Discovery Rate (lFDR) per stratum, or partition, yields the most predictions across the data at any given threshold on the FDR or E-value over all strata combined. For the important problem of protein domain prediction, a key step in characterizing protein structure, function and evolution, we show that stratifying statistical tests by domain family yields excellent results. We develop the first FDR-estimating algorithms for domain prediction, and evaluate how well thresholds based on q-values, E-values and lFDRs perform in domain prediction using five complementary approaches for estimating empirical FDRs in this context. We show that stratified q-value thresholds substantially outperform E-values. Contradicting our theoretical results, q-values also outperform lFDRs; however, our tests reveal a small but coherent subset of domain families, biased towards models for specific repetitive patterns, for which weaknesses in random sequence models yield notably inaccurate statistical significance measures. lFDR thresholds outperform q-values for the remaining families, which have as-expected noise, suggesting that further improvements in domain predictions can be achieved with improved modeling of random sequences. Overall, our theoretical and empirical findings suggest that the use of stratified q-values and lFDRs could result in improvements in a host of structured multiple hypothesis testing problems arising in bioinformatics, including genome-wide association studies, orthology prediction, and motif scanning. PMID:26575353
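A per-stratum multiple-testing procedure can be sketched with Benjamini-Hochberg applied within each domain family separately. Note this plain BH step is only a stand-in: the paper estimates lFDRs and q-values rather than applying BH directly:

```python
def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg: return the indices of hypotheses rejected
    at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, 1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    return set(order[:k])

def stratified_reject(strata, alpha=0.05):
    """Control the FDR within each stratum (e.g., domain family) separately,
    instead of pooling all tests into one combined correction."""
    return {name: bh_reject(pvals, alpha) for name, pvals in strata.items()}

# hypothetical per-family p-values for candidate domain matches
print(stratified_reject({"kinase": [0.001, 0.8], "zf-C2H2": [0.04]}))
```

Stratifying in this way lets a small, clean family keep its discoveries instead of being diluted by a large noisy one, which is the intuition behind the paper's per-stratum thresholds.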
Morgenstern, Bruce Z; Butani, Lavjay; Wollan, Peter; Wilson, David M; Larson, Timothy S
2003-04-01
Proteinuria is an important marker of kidney disease. Simple methods to determine the presence of proteinuria in a semiquantitative fashion require measurement of either a protein-creatinine or a protein-osmolality ratio. Urine samples from 134 healthy infants and children and 150 children from the pediatric nephrology practice were analyzed to develop normative data for protein-osmolality ratios on random urine samples and to compare the protein-osmolality ratio with the protein-creatinine ratio as a predictor of 24-hour urine protein excretion. Children were grouped according to age into three groups: infants, children aged 2 to 8 years, and children older than 8 years. The optimal cutoff value for abnormal protein excretion in infants was determined to be a protein-osmolality ratio of 0.15 mg x kg H2O/mOsm x L; for children between 2 and 8 years old, 0.14; and for children older than 8 years, 0.17 (P = not significant between age groups). The corresponding optimal cutoff value for the protein-creatinine ratio for the entire group of children older than 2 years is 0.20. Area-under-the-curve analysis of receiver operator characteristic curves showed that the protein-creatinine ratio was superior to the protein-osmolality ratio for predicting abnormal amounts of proteinuria in children and adolescents; given the superiority of the protein-creatinine ratio in children, it would be appropriate to screen urine samples for proteinuria using the protein-creatinine ratio rather than the protein-osmolality ratio.
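As a worked example of the screening rule, using the protein-creatinine cutoff of 0.20 reported for children older than 2 years (units are assumed identical for protein and creatinine so the ratio is dimensionless; the input values are hypothetical):

```python
def abnormal_proteinuria(urine_protein, urine_creatinine, cutoff=0.20):
    """Flag a random urine sample via the protein-creatinine ratio.
    cutoff=0.20 is the value reported for children older than 2 years;
    protein and creatinine are assumed to be in the same concentration units."""
    ratio = urine_protein / urine_creatinine
    return ratio, ratio > cutoff

print(abnormal_proteinuria(30.0, 100.0))  # ratio 0.3 -> flagged
print(abnormal_proteinuria(10.0, 100.0))  # ratio 0.1 -> not flagged
```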
Mortimer, James A; Ding, Ding; Borenstein, Amy R; DeCarli, Charles; Guo, Qihao; Wu, Yougui; Zhao, Qianhua; Chu, Shugang
2012-01-01
Physical exercise has been shown to increase brain volume and improve cognition in randomized trials of non-demented elderly. Although greater social engagement was found to reduce dementia risk in observational studies, randomized trials of social interventions have not been reported. A representative sample of 120 elderly from Shanghai, China was randomized to four groups (Tai Chi, Walking, Social Interaction, No Intervention) for 40 weeks. Two MRIs were obtained, one before the intervention period, the other after. A neuropsychological battery was administered at baseline, 20 weeks, and 40 weeks. Comparison of changes in brain volumes in intervention groups with the No Intervention group were assessed by t-tests. Time-intervention group interactions for neuropsychological measures were evaluated with repeated-measures mixed models. Compared to the No Intervention group, significant increases in brain volume were seen in the Tai Chi and Social Intervention groups (p < 0.05). Improvements also were observed in several neuropsychological measures in the Tai Chi group, including the Mattis Dementia Rating Scale score (p = 0.004), the Trailmaking Test A (p = 0.002) and B (p = 0.0002), the Auditory Verbal Learning Test (p = 0.009), and verbal fluency for animals (p = 0.01). The Social Interaction group showed improvement on some, but fewer neuropsychological indices. No differences were observed between the Walking and No Intervention groups. The findings differ from previous clinical trials in showing increases in brain volume and improvements in cognition with a largely non-aerobic exercise (Tai Chi). In addition, intellectual stimulation through social interaction was associated with increases in brain volume as well as with some cognitive improvements.
Directory of Open Access Journals (Sweden)
Chunrong Mi
2017-01-01
Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree), and Maxent (Maximum Entropy Model). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the four models. Commonly used model performance metrics (area under the ROC curve (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found that Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation and (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
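The true skill statistic used for evaluation above reduces to sensitivity + specificity − 1 on a presence/absence confusion matrix. A minimal sketch with made-up counts:

```python
def tss(tp, fn, fp, tn):
    """True skill statistic = sensitivity + specificity - 1.
    Ranges from -1 to 1; 0 means no better than random."""
    sensitivity = tp / (tp + fn)   # presences correctly predicted
    specificity = tn / (tn + fp)   # absences correctly predicted
    return sensitivity + specificity - 1.0

# hypothetical confusion matrix from confronting predictions with test data
print(tss(tp=8, fn=2, fp=1, tn=9))
```

Unlike raw accuracy, TSS is insensitive to the prevalence of presences in the test set, which is why it is popular for evaluating SDMs alongside AUC.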
FDTD scattered field formulation for scatterers in stratified dispersive media.
Olkkonen, Juuso
2010-03-01
We introduce a simple scattered field (SF) technique that enables finite difference time domain (FDTD) modeling of light scattering from dispersive objects residing in stratified dispersive media. The introduced SF technique is verified against the total field scattered field (TFSF) technique. As an application example, we study surface plasmon polariton enhanced light transmission through a 100 nm wide slit in a silver film.
A structured e-content development framework using a stratified ...
African Journals Online (AJOL)
The paper discusses a stratified objectives-driven e-content structuring and deployment framework which is an iterative and intuitive approach to content structuring and sequencing. The model has been developed from experiences and insights gained over a four-stage content development training process involving ...
Bacterial production, protozoan grazing, and mineralization in stratified Lake Vechten
Bloem, J.
1989-01-01
The role of heterotrophic nanoflagellates (HNAN, size 2-20 μm) in grazing on bacteria and mineralization of organic matter in stratified Lake Vechten was studied.
Quantitative effects of manipulation and fixation on HNAN were checked. Considerable losses were caused by
Plane Stratified Flow in a Room Ventilated by Displacement Ventilation
DEFF Research Database (Denmark)
Nielsen, Peter Vilhelm; Nickel, J.; Baron, D. J. G.
2004-01-01
The air movement in the occupied zone of a room ventilated by displacement ventilation exists as a stratified flow along the floor. This flow can be radial or plane according to the number of wall-mounted diffusers and the room geometry. The paper addresses the situations where plane flow...
Stratified Outcome Evaluation of Peritonitis | Wabwire | Annals of ...
African Journals Online (AJOL)
Background: The heterogeneity of disease severity in peritonitis makes outcome prediction challenging. Risk evaluation in secondary peritonitis can direct treatment planning, predict outcomes and aid in the conduct of surgical audits. Objective: To determine the outcome of peritonitis in patients stratified according to ...
Jeon, Seungwan; Song, Hyun Beom; Kim, Jaewoo; Lee, Byung Joo; Kim, Jeong Hun; Kim, Chulhong
2017-03-01
Ocular chemical damage may induce limbal vessel ischemia and neovascularization, but the pathophysiology of the disease is not completely known. To observe changes in blood vessels after alkaline burn, we monitored the anterior segment and choroidal vasculature using an optical-resolution photoacoustic microscope (OR-PAM). We were able to observe not only the iris blood vessels but also the choroidal vessels under the sclera, which were difficult to observe with conventional photographs. After alkali burning, we observed neovascularization and limbal ischemia and successfully tracked changes in vasculature during the 7-day healing process. We also used the RANdom SAmple Consensus (RANSAC) method to segment the abnormally generated blood vessels in the cornea by detecting the eyeball surface, and successfully visualized the distance from each PA signal to the center of the eye. We believe that photoacoustic imaging has important potential to reveal the pathophysiology of limbal ischemia and neovascularization.
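RANSAC, used above to detect the eyeball surface, repeatedly fits a model to minimal random subsets of the data and keeps the candidate with the most inliers, making it robust to outliers such as off-surface vessel signals. A generic 2D line-fitting sketch (the paper fits an eye surface, not a line; points and tolerance below are illustrative):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Minimal RANSAC: fit y = a*x + b by repeatedly sampling two points
    and keeping the model that explains the most points within tol."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: cannot form y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# ten points on y = 2x + 1 plus two gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
model, inliers = ransac_line(pts)
```

A least-squares fit to the same points would be dragged toward the outliers; RANSAC recovers the underlying line and labels the outliers as non-inliers.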
Yelin, E; Bernhard, G; Pflugrad, D
1995-08-01
To study access to medical care services, including subspecialty care, among persons with musculoskeletal conditions. In early 1993, a random sample of households in San Mateo County, California, was screened for the presence of household members with musculoskeletal conditions, and a member of each household so identified was administered a structured survey about access to medical care and other related subjects. Eighty-six percent of all persons with a musculoskeletal condition had ever seen at least one physician for the condition, but only 6.5% had ever seen a rheumatologist. Those without health insurance were only 82% as likely as those with health insurance to have ever seen a physician. Most persons with a musculoskeletal condition have seen a physician for the condition, but lack of health insurance significantly reduces the proportion who have done so.
Nakamura, Masato; Muramatsu, Toshiya; Yokoi, Hiroyoshi; Okada, Hisayuki; Ochiai, Masahiko; Suwa, Satoru; Hozawa, Hidenari; Kawai, Kazuya; Awata, Masaki; Mukawa, Hiroaki; Fujita, Hiroshi; Shiode, Nobuo; Asano, Ryuta; Tsukamoto, Yoshiaki; Yamada, Takahisa; Yasumura, Yoshio; Ohira, Hiroshi; Miyamoto, Akira; Takashima, Hiroaki; Ogawa, Takayuki; Matsuyama, Yutaka; Nanto, Shinsuke
2015-04-01
The Japan drug-eluting stents evaluation: a randomized trial (J-DESsERT) was conducted to compare the effectiveness of 2 different drug-eluting stents (DES). It remains uncertain which is more efficacious in diabetic patients, sirolimus-eluting stents (SES) or paclitaxel-eluting stents (PES). In this trial, the largest of its kind, 3,533 patients including 1,724 diabetes mellitus (DM) patients were randomized to either SES or PES. Stratification was based on the presence or absence of DM. PES target vessel failure (TVF) non-inferiority at 8 months (primary endpoint) was not demonstrated when compared to SES (SES 4.5 % vs. PES 6.4 %, p = 0.23). In addition, PES TVF superiority at 8 months in the DM subset (secondary endpoint) was not shown (SES 5.6 % vs. PES 7.6 %, p = 0.10). Insulin treatment was associated with increased TVF rates, however, this was less pronounced in the PES group. At 8 months, the similar TVF rates for SES and PES up to that point diverged significantly, favoring SES out to 12 months. Patients undergoing routine angiographic follow-up demonstrated lower TVF prior to the 8-month point, and higher TVF after 8 months, as compared to those followed clinically. In conclusion, the current study failed to demonstrate the proposed superiority of PES for DM patients. In addition, the diversion of TVF at 8 months may reflect an "oculo-stenotic reflex" bias (the tendency to treat lesions found during routine, rather than clinically driven, angiographic follow-up), which could constitute an obstacle for evaluating the true clinical effect of new devices.
van Haren, H.
2015-01-01
The character of turbulent overturns in a weakly stratified deep-sea is investigated in some detail using 144 high-resolution temperature sensors at 0.7 m intervals, starting 5 m above the bottom. A 9-day, 1 Hz sampled record from the 912 m depth flat-bottom (<0.5% bottom-slope) mooring site in the
DEFF Research Database (Denmark)
Handlos, Line Neerup; Chakraborty, Hrishikesh; Sen, Pranab Kumar
2009-01-01
To summarize and evaluate all publications including cluster-randomized trials used for maternal and child health research in developing countries during the last 10 years. METHODS: All cluster-randomized trials published between 1998 and 2008 were reviewed, and the design and analysis methods of those that met our criteria were evaluated in the eligible trials. RESULTS: Thirty-five eligible trials were identified. The majority of them were conducted in Asia, used community as the randomization unit, and had fewer than 10,000 participants. To minimize confounding, 23 of the 35 trials had stratified, blocked, or paired the clusters before they were randomized, while 17 had adjusted for confounding in the analysis. Ten of the 35 trials did not account for clustering in sample size calculations, and seven did not account for the cluster-randomized design in the analysis. The number of cluster-randomized trials increased over time.
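Accounting for clustering in sample size calculations, which ten of the reviewed trials omitted, amounts to inflating the individually randomized sample size by the design effect 1 + (m − 1) × ICC. A sketch with illustrative numbers (not taken from the review):

```python
import math

def design_effect(m, icc):
    """Design effect for cluster randomization: 1 + (m - 1) * ICC,
    for average cluster size m and intracluster correlation ICC."""
    return 1 + (m - 1) * icc

def cluster_sample_size(n_individual, m, icc):
    """Total sample size after inflating an individually randomized n.
    (round() guards against float noise before taking the ceiling)"""
    return math.ceil(round(n_individual * design_effect(m, icc), 9))

# e.g. 400 participants per arm, clusters of 30, ICC = 0.01 -> 516 needed
print(cluster_sample_size(400, 30, 0.01))
```

Even a small ICC inflates the required sample size substantially when clusters are large, which is why ignoring clustering leads to underpowered cluster-randomized trials.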
Terahertz pulse imaging of stratified architectural materials for cultural heritage studies
Jackson, J. Bianca; Labaune, Julien; Mourou, Gérard; Duling, Irl N.; Walker, Gillian; Bowen, John; Menu, Michel
2011-06-01
Terahertz pulse imaging (TPI) is a novel noncontact, nondestructive technique for the examination of cultural heritage artifacts. It has the advantage of broadband spectral range, time-of-flight depth resolution, and penetration through optically opaque materials. Fiber-coupled, portable, time-domain terahertz systems have enabled this technique to move out of the laboratory and into the field. Much like the rings of a tree, stratified architectural materials give the chronology of their environmental and aesthetic history. This work concentrates on laboratory models of stratified mosaics and fresco paintings, specimens extracted from a neolithic excavation site in Catalhoyuk, Turkey, and specimens measured at the medieval Eglise de Saint Jean-Baptiste in Vif, France. Preparatory spectroscopic studies of various composite materials, including lime, gypsum and clay plasters are presented to enhance the interpretation of results and with the intent to aid future computer simulations of the TPI of stratified architectural material. The breadth of the sample range is a demonstration of the cultural demand and public interest in the life history of buildings. The results are an illustration of the potential role of TPI in providing both a chronological history of buildings and in the visualization of obscured wall paintings and mosaics.
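The time-of-flight depth resolution mentioned above converts the delay between echoes reflected from a stratum's two interfaces into a layer thickness via d = cΔt/(2n). A small sketch; the refractive index used below is an assumed illustrative value, not a measured property of any material in the study:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness_um(delay_ps, refractive_index):
    """Thickness of a stratum from the round-trip delay (in picoseconds)
    between the terahertz reflections off its two interfaces:
    d = c * dt / (2 * n), returned in micrometres."""
    dt = delay_ps * 1e-12
    return C * dt / (2.0 * refractive_index) * 1e6

# a 2 ps echo spacing in a layer with assumed n = 1.5 -> roughly 200 um
print(layer_thickness_um(2.0, 1.5))
```

Repeating this for each echo in the reflected pulse train yields the layer-by-layer chronology of plaster and paint strata that TPI exploits.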
Szirovicza, Leonóra; López, Pilar; Kopena, Renáta; Benkő, Mária; Martín, José; Pénzes, Judit J
2016-01-01
Here, we report the results of a large-scale PCR survey on the prevalence and diversity of adenoviruses (AdVs) in samples collected randomly from free-living reptiles. On the territories of the Guadarrama Mountains National Park in Central Spain and of the Chafarinas Islands in North Africa, cloacal swabs were taken from 318 specimens of eight native species representing five squamate reptilian families. The healthy-looking animals had been captured temporarily for physiological and ethological examinations, after which they were released. We found 22 AdV-positive samples in representatives of three species, all from Central Spain. Sequence analysis of the PCR products revealed the existence of three hitherto unknown AdVs in 11 Carpetane rock lizards (Iberolacerta cyreni), nine Iberian worm lizards (Blanus cinereus), and two Iberian green lizards (Lacerta schreiberi), respectively. Phylogeny inference showed every novel putative virus to be a member of the genus Atadenovirus. This is the very first description of the occurrence of AdVs in amphisbaenian and lacertid hosts. Unlike all squamate atadenoviruses examined previously, two of the novel putative AdVs had A+T rich DNA, a feature generally deemed to mirror previous host switch events. Our results shed new light on the diversity and evolution of atadenoviruses.
Sossauer, Gaëtan; Zbinden, Michel; Tebeu, Pierre-Marie; Fosso, Gisèle K; Untiet, Sarah; Vassilakos, Pierre; Petignat, Patrick
2014-01-01
Human papillomavirus (HPV) self-sampling (Self-HPV) may be used as a primary cervical cancer screening method in a low resource setting. Our aim was to evaluate whether an educational intervention would improve women's knowledge and confidence in the Self-HPV method. Women aged between 25 and 65 years old, eligible for cervical cancer screening, were randomly chosen to receive standard information (control group) or standard information followed by educational intervention (interventional group). Standard information included explanations about what the test detects (HPV), the link between HPV and cervical cancer and how to perform HPV self-sampling. The educational intervention consisted of a culturally tailored video about HPV, cervical cancer, Self-HPV and its relevancy as a screening test. All participants completed a questionnaire that assessed sociodemographic data, women's knowledge about cervical cancer and acceptability of Self-HPV. A total of 302 women were enrolled in 4 health care centers in Yaoundé and the surrounding countryside. 301 women (149 in the "control group" and 152 in the "intervention group") completed the full process and were included in the analysis. Participants who received the educational intervention had significantly higher knowledge about HPV and cervical cancer than the control group. Educational intervention promotes an increase in knowledge about HPV and cervical cancer. Further investigation should be conducted to determine if this intervention can be sustained beyond the short term and influences screening behavior. International Standard Randomised Controlled Trial Number (ISRCTN) Register ISRCTN78123709.
Hoffmann, Tammy C; Walker, Marion F; Langhorne, Peter; Eames, Sally; Thomas, Emma; Glasziou, Paul
2015-11-17
To assess, in a sample of systematic reviews of non-pharmacological interventions, the completeness of intervention reporting, identify the most frequently missing elements, and assess review authors' use of and beliefs about providing intervention information. Analysis of a random sample of systematic reviews of non-pharmacological stroke interventions; online survey of review authors. The Cochrane Library and PubMed were searched for potentially eligible systematic reviews, and a random sample of these was assessed for eligibility until 60 (30 Cochrane, 30 non-Cochrane) eligible reviews were identified. In each review, the completeness of the intervention description in each eligible trial (n=568) was assessed by 2 independent raters using the Template for Intervention Description and Replication (TIDieR) checklist. All review authors (n=46) were invited to complete a survey. Most reviews were missing intervention information for the majority of items. The most incompletely described items were: modifications, fidelity, materials, procedure and tailoring (missing from all interventions in 97%, 90%, 88%, 83% and 83% of reviews, respectively). Items that scored better, but were still incomplete for the majority of reviews, were: 'when and how much' (in 31% of reviews, adequate for all trials; in 57% of reviews, adequate for some trials); intervention mode (in 22% of reviews, adequate for all trials; in 38%, adequate for some trials); and location (in 19% of reviews, adequate for all trials). Of the 33 (71%) authors who responded, 58% reported having further intervention information but not including it, and 70% had tried to obtain information. Most focus on intervention reporting has been directed at trials. Poor intervention reporting in stroke systematic reviews is prevalent, compounded by poor trial reporting. Without adequate intervention descriptions, the conduct, usability and interpretation of reviews are restricted; action by trialists is therefore required.
Chiu, Sherry Y-H; Malila, Nea; Yen, Amy M-F; Anttila, Ahti; Hakama, Matti; Chen, H-H
2011-02-01
Population-based randomized controlled trials (RCTs) often involve enormous costs and long-term follow-up to evaluate primary end points. An analytical decision-simulation model for sample size and effectiveness projections based on primary and surrogate end points is therefore necessary before planning a population-based RCT. Based on a study design similar to two previous RCTs, transition rates were estimated using a five-state natural history model [normal, preclinical detection phase (PCDP) Dukes' A/B, PCDP Dukes' C/D, clinical Dukes' A/B and clinical Dukes' C/D]. The Markov cycle tree was assigned transition parameters and variables related to screening and survival rate that simulated the results of 10-year follow-up in the absence of screening for a hypothetical cohort aged 45-74 years. The corresponding screened arm simulated the results after the introduction of population-based screening for colorectal cancer with the fecal occult blood test under a stop-screen design. The mean sojourn times in the natural course of the five-state Markov model were estimated as 2.75 years for preclinical Dukes' A/B and 1.38 years for preclinical Dukes' C/D. The expected reductions in mortality and Dukes' C/D were 13% (95% confidence interval: 7-19%) and 26% (95% confidence interval: 20-32%), respectively, given a 70% acceptance rate and a 90% colonoscopy referral rate. The sample sizes required were 86,150 and 65,592 subjects for the primary end point and the surrogate end point, respectively, given an incidence rate up to 0.0020 per year. The sample sizes required for the primary and surrogate end points and the projected effectiveness of fecal occult blood test screening for colorectal cancer were thus developed; both are essential for planning a population-based RCT. © 2010 Blackwell Publishing Ltd.
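The five-state progression described above can be sketched as a small deterministic cohort simulation. The sojourn times (2.75 and 1.38 years) and the incidence rate (0.0020 per year) come from the abstract; the 50/50 split of exits from the preclinical A/B state between progression and clinical surfacing, and the one-year cycle length, are hypothetical choices for illustration, not the authors' calibrated model.

```python
import math

# Deterministic cohort flow through the five-state natural history model:
# normal -> PCDP Dukes' A/B -> PCDP Dukes' C/D -> clinical, with PCDP A/B
# also able to surface directly as clinical Dukes' A/B.
p_inc = 1 - math.exp(-0.0020)      # normal -> PCDP A/B, per year
p_ab = 1 - math.exp(-1 / 2.75)     # exit PCDP A/B (mean sojourn 2.75 y)
p_cd = 1 - math.exp(-1 / 1.38)     # exit PCDP C/D (mean sojourn 1.38 y)
SPLIT = 0.5                        # hypothetical fraction progressing to PCDP C/D

def simulate(years=10):
    normal, pab, pcd, clin = 1.0, 0.0, 0.0, 0.0
    for _ in range(years):
        new_pab = normal * p_inc       # incident preclinical cases
        exit_ab = pab * p_ab           # leave PCDP A/B this cycle
        exit_cd = pcd * p_cd           # leave PCDP C/D this cycle
        normal -= new_pab
        pab += new_pab - exit_ab
        pcd += exit_ab * SPLIT - exit_cd
        clin += exit_ab * (1 - SPLIT) + exit_cd  # clinical A/B + C/D combined
    return normal, pab, pcd, clin

print([round(s, 5) for s in simulate()])
```

The fractions always sum to one, and the clinically surfaced fraction after 10 years stays below the cumulative incidence, as expected for a pure progression model.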
Nahorniak, Matthew; Larsen, David P; Volk, Carol; Jordan, Chris E
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal-probability sampling, as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common, and necessary, to use datasets after a sampling design has been implemented to address questions that were not considered during the design phase. Questions may arise requiring model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. Such model-based tools may require, to ensure unbiased estimation, data from simple random samples, which is problematic when analyzing data from unequal-probability designs. Despite the numerous method-specific tools available to properly account for sampling design, the design is too often ignored in the analysis of ecological data, and the consequences are not properly considered. We demonstrate here that violating this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal-probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
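The IPB idea, resampling with replacement with weights proportional to the inverse inclusion probabilities so that each resample behaves like an equal-probability sample, can be sketched in a few lines. The data, the inclusion probabilities, and the use of an OLS slope as the model-based estimate are all hypothetical illustrations, not the authors' data or code.

```python
import random
random.seed(1)

# Hypothetical stratified sample: the first six units were sampled with
# inclusion probability 0.5, the last four with 0.1; y is roughly 2x + noise.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 10.0, 11.0, 12.0, 13.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3, 19.8, 22.4, 24.1, 25.7]
pi = [0.5] * 6 + [0.1] * 4          # inclusion probabilities

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sxx

def ipb_mean_slope(n_boot=500):
    # IPB: resample with replacement, weighting each unit by 1/pi_i,
    # then fit the model to each equal-probability resample.
    w = [1.0 / p for p in pi]
    idx = list(range(len(x)))
    slopes = []
    for _ in range(n_boot):
        draw = random.choices(idx, weights=w, k=len(x))
        xs = [x[i] for i in draw]
        if max(xs) == min(xs):      # degenerate resample, skip
            continue
        slopes.append(ols_slope(xs, [y[i] for i in draw]))
    return sum(slopes) / len(slopes)

print(round(ipb_mean_slope(), 2))   # close to the population slope of ~2
```

The mean of the bootstrap slopes approximates the slope an analyst would obtain from a simple random sample, despite the heavily unequal inclusion probabilities.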
Identification of major planktonic sulfur oxidizers in stratified freshwater lake.
Directory of Open Access Journals (Sweden)
Hisaya Kojima
Planktonic sulfur oxidizers are important constituents of ecosystems in stratified water bodies and contribute to sulfide detoxification. In contrast to marine environments, the taxonomic identities of the major planktonic sulfur oxidizers in freshwater lakes remain largely unknown. Bacterioplankton community structure was analyzed in a stratified freshwater lake, Lake Mizugaki in Japan. In 16S rRNA gene clone libraries, clones very closely related to a sulfur oxidizer isolated from this lake, Sulfuritalea hydrogenivorans, were detected in the deep anoxic water and accounted for up to 12.5% of each library from the different water depths. Assemblages of planktonic sulfur oxidizers were specifically analyzed by constructing clone libraries of genes involved in sulfur oxidation: aprA, dsrA, soxB and sqr. In these libraries, clones related to betaproteobacteria were detected at high frequencies, including close relatives of Sulfuritalea hydrogenivorans.
Nicklas, Jacinda M; Skurnik, Geraldine; Zera, Chloe A; Reforma, Liberty G; Levkoff, Sue E; Seely, Ellen W
2016-02-01
The postpartum period is a window of opportunity for diabetes prevention in women with recent gestational diabetes (GDM), but recruitment for clinical trials during this period of life is a major challenge. We adapted a social-ecologic model to develop a multi-level recruitment strategy at the macro (high or institutional level), meso (mid or provider level), and micro (individual) levels. Our goal was to recruit 100 women with recent GDM into the Balance after Baby randomized controlled trial over a 17-month period. Participants were asked to attend three in-person study visits at 6 weeks, 6, and 12 months postpartum. They were randomized into a control arm or a web-based intervention arm at the end of the baseline visit at six weeks postpartum. At the end of the recruitment period, we compared population characteristics of our enrolled subjects to the entire population of women with GDM delivering at Brigham and Women's Hospital (BWH). We successfully recruited 107 of 156 (69 %) women assessed for eligibility, with the majority (92) recruited during pregnancy at a mean 30 (SD ± 5) weeks of gestation, and 15 recruited postpartum, at a mean 2 (SD ± 3) weeks postpartum. 78 subjects attended the initial baseline visit, and 75 subjects were randomized into the trial at a mean 7 (SD ± 2) weeks postpartum. The recruited subjects were similar in age and race/ethnicity to the total population of 538 GDM deliveries at BWH over the 17-month recruitment period. Our multilevel approach allowed us to successfully meet our recruitment goal and recruit a representative sample of women with recent GDM. We believe that our most successful strategies included using a dedicated in-person recruiter, integrating recruitment into clinical flow, allowing for flexibility in recruitment, minimizing barriers to participation, and using an opt-out strategy with providers. Although the majority of women were recruited while pregnant, women recruited in the early postpartum period were
Corticosteroids and Pediatric Septic Shock Outcomes: A Risk Stratified Analysis
Atkinson, Sarah J.; Cvijanovich, Natalie Z.; Thomas, Neal J.; Allen, Geoffrey L.; Anas, Nick; Bigham, Michael T.; Hall, Mark; Freishtat, Robert J.; Sen, Anita; Meyer, Keith; Checchia, Paul A.; Shanley, Thomas P.; Nowak, Jeffrey; Quasney, Michael; Weiss, Scott L.; Banschbach, Sharon; Beckman, Eileen; Howard, Kelli; Frank, Erin; Harmon, Kelli; Lahni, Patrick; Lindsell, Christopher J.; Wong, Hector R.
2014-01-01
Background: The potential benefits of corticosteroids for septic shock may depend on initial mortality risk. Objective: We determined associations between corticosteroids and outcomes in children with septic shock who were stratified by initial mortality risk. Methods: We conducted a retrospective analysis of an ongoing, multi-center pediatric septic shock clinical and biological database. Using a validated biomarker-based stratification tool (PERSEVERE), 496 subjects were stratified into three initial mortality risk strata (low, intermediate, and high). Subjects receiving corticosteroids during the initial 7 days of admission (n = 252) were compared to subjects who did not receive corticosteroids (n = 244). Logistic regression was used to model the effects of corticosteroids on 28-day mortality and complicated course, defined as death within 28 days or persistence of two or more organ failures at 7 days. Results: Subjects who received corticosteroids had greater organ failure burden, higher illness severity, higher mortality, and a greater requirement for vasoactive medications, compared to subjects who did not receive corticosteroids. PERSEVERE-based mortality risk did not differ between the two groups. For the entire cohort, corticosteroids were associated with increased risk of mortality (OR 2.3, 95% CI 1.3–4.0, p = 0.004) and a complicated course (OR 1.7, 95% CI 1.1–2.5, p = 0.012). Within each PERSEVERE-based stratum, corticosteroid administration was not associated with improved outcomes. Similarly, corticosteroid administration was not associated with improved outcomes among patients with no comorbidities, nor in groups of patients stratified by PRISM. Conclusions: Risk-stratified analysis failed to demonstrate any benefit from corticosteroids in this pediatric septic shock cohort. PMID:25386653
Dust particle charge distribution in a stratified glow discharge
Energy Technology Data Exchange (ETDEWEB)
Sukhinin, Gennady I [Institute of Thermophysics, Siberian Branch, Russian Academy of Sciences, Lavrentyev Ave., 1, Novosibirsk 630090 (Russian Federation); Fedoseev, Alexander V [Institute of Thermophysics, Siberian Branch, Russian Academy of Sciences, Lavrentyev Ave., 1, Novosibirsk 630090 (Russian Federation); Ramazanov, Tlekkabul S [Institute of Experimental and Theoretical Physics, Al Farabi Kazakh National University, Tole Bi, 96a, Almaty 050012 (Kazakhstan); Dzhumagulova, Karlygash N [Institute of Experimental and Theoretical Physics, Al Farabi Kazakh National University, Tole Bi, 96a, Almaty 050012 (Kazakhstan); Amangaliyeva, Rauan Zh [Institute of Experimental and Theoretical Physics, Al Farabi Kazakh National University, Tole Bi, 96a, Almaty 050012 (Kazakhstan)
2007-12-21
The influence of the strongly non-equilibrium character of the electron energy distribution function in a stratified dc glow discharge on the charging of dust particles in a complex plasma is taken into account for the first time. The calculated spatial distribution of the particle charge is essentially non-homogeneous, and it can explain the vortex motion of particles at the periphery of a dusty cloud observed in experiments.
SATURATED-SUBCOOLED STRATIFIED FLOW IN HORIZONTAL PIPES
Energy Technology Data Exchange (ETDEWEB)
Richard Schultz
2010-08-01
Advanced light water reactor systems are designed to use passive emergency core cooling systems with horizontal pipes that deliver highly subcooled water from water storage tanks or passive heat exchangers to the reactor vessel core under accident conditions. Because passive systems are driven by density gradients, the horizontal pipes often do not flow full; they therefore have a free surface exposed to saturated steam, and stratified flow is present.
A Stratified Flow Fine Structure near a Horizontally Moving Strip
Bardakov, Roman N.; Chashechkin, Yuli D.
The flow field pattern produced by a strip of arbitrary width moving with constant velocity along a horizontal plane in an exponentially stratified fluid is calculated analytically in the linear approximation. The constructed solution describes upstream transient and attached waves, as well as a boundary layer. The numerically visualized flow pattern is consistent with Schlieren observations. The drag on the strip is calculated and compared with well-known results from conventional boundary layer theory. Due to the action of edge singularities, a lift force that induces a torque is formed.
Community genomics among stratified microbial assemblages in the ocean's interior
DEFF Research Database (Denmark)
DeLong, Edward F; Preston, Christina M; Mincer, Tracy
2006-01-01
Microbial life predominates in the ocean, yet little is known about its genomic variability, especially along the depth continuum. We report here genomic analyses of planktonic microbial communities in the North Pacific Subtropical Gyre, from the ocean's surface to near-sea-floor depths. Sequence …, and host-viral interactions. Comparative genomic analyses of stratified microbial communities have the potential to provide significant insight into higher-order community organization and dynamics …
Defining sample size and sampling strategy for dendrogeomorphic rockfall reconstructions
Morel, Pauline; Trappmann, Daniel; Corona, Christophe; Stoffel, Markus
2015-05-01
Optimized sampling strategies have recently been proposed for dendrogeomorphic reconstructions of mass movements with a large spatial footprint, such as landslides, snow avalanches, and debris flows. Such guidelines have, by contrast, been largely missing for rockfall and cannot be transposed owing to the sporadic nature of this process and the occurrence of individual rocks and boulders. Based on a data set of 314 European larch (Larix decidua Mill.) trees (i.e., 64 trees/ha) growing on an active rockfall slope, this study bridges this gap and proposes an optimized sampling strategy for the spatial and temporal reconstruction of rockfall activity. Using random extractions of trees, iterative mapping, and a stratified sampling strategy based on an arbitrary selection of trees, we investigate subsets of the full tree-ring data set to define the optimal sample size and sampling design for developing frequency maps of rockfall activity. Spatially, our results demonstrate that sampling only 6 representative trees per ha can be sufficient to yield a reasonable map of the spatial distribution of rockfall frequencies on a slope, especially if the oldest and most heavily affected individuals are included in the analysis. At the same time, however, sampling such a low number of trees risks significant errors if nonrepresentative trees are chosen for analysis; an increased number of samples therefore improves the quality of the frequency maps in this case. Temporally, we demonstrate that at least 40 trees/ha are needed to obtain reliable rockfall chronologies. These results will facilitate the design of future studies, decrease the cost-benefit ratio of dendrogeomorphic studies, and thus permit the production of reliable reconstructions with reasonable temporal effort.
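The random-extraction approach described above, drawing subsets of trees and checking how well they reproduce full-sample estimates, can be mimicked with a toy simulation. The impact counts per tree and the choice of mean rockfall frequency as the target statistic are hypothetical; the point is only that estimation error shrinks as the subsample grows.

```python
import random
random.seed(42)

# Simulated slope: 314 trees, each with a hypothetical number of recorded
# rockfall impacts (the real study would read these from tree-ring series).
full = [random.randint(0, 12) for _ in range(314)]
true_mean = sum(full) / len(full)

def subsample_error(n, reps=200):
    # Average absolute error of the mean impact frequency when only n of
    # the 314 trees are sampled, over many random extractions.
    errs = []
    for _ in range(reps):
        sub = random.sample(full, n)
        errs.append(abs(sum(sub) / n - true_mean))
    return sum(errs) / reps

for n in (6, 40, 150):
    print(n, round(subsample_error(n), 2))
```

With only 6 trees the estimate is noisy, mirroring the paper's warning about nonrepresentative selections; by 40 trees and beyond, the error drops markedly.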
Background stratified Poisson regression analysis of cohort data
Energy Technology Data Exchange (ETDEWEB)
Richardson, David B. [University of North Carolina at Chapel Hill, Department of Epidemiology, School of Public Health, Chapel Hill, NC (United States); Langholz, Bryan [Keck School of Medicine, University of Southern California, Division of Biostatistics, Department of Preventive Medicine, Los Angeles, CA (United States)
2012-03-15
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)
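The key trick, profiling out the stratum-specific background parameters, can be illustrated with a toy log-linear model: conditional on each stratum's case total, the cell counts are multinomial with probabilities proportional to person-time times the rate ratio, so the stratum intercepts drop out of the likelihood. The data and the grid search below are hypothetical simplifications of the approach named above, not the authors' software.

```python
import math

# Hypothetical strata: (doses, person-years, cases) per dose category.
strata = [
    ([0.0, 1.0, 2.0], [1000.0, 800.0, 500.0], [10, 12, 11]),
    ([0.0, 1.0, 2.0], [2000.0, 900.0, 300.0], [30, 16, 7]),
]

def profile_loglik(beta):
    # Conditional on the stratum case total, the stratum intercept alpha_s
    # cancels: cell probabilities are PT_i*RR_i / sum_j PT_j*RR_j.
    ll = 0.0
    for doses, pt, cases in strata:
        rr = [math.exp(beta * d) for d in doses]
        denom = sum(p * r for p, r in zip(pt, rr))
        for d_i, p_i, r_i in zip(cases, pt, rr):
            ll += d_i * math.log(p_i * r_i / denom)
    return ll

# Crude grid search for the MLE of the log rate ratio per unit dose.
betas = [i / 1000 for i in range(-500, 501)]
beta_hat = max(betas, key=profile_loglik)
print(round(beta_hat, 3))
```

The estimate maximizes the same likelihood that unconditional Poisson regression with explicit stratum indicator terms would, without ever estimating the stratum intercepts.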
Survival analysis of cervical cancer using stratified Cox regression
Purnami, S. W.; Inayati, K. D.; Sari, N. W. Wulan; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
Cervical cancer is one of the most common causes of cancer death among women worldwide, including in Indonesia. Most cervical cancer patients arrive at the hospital at an advanced stage. As a result, treatment becomes more difficult and the risk of death can even increase. One parameter that can be used to assess the success of treatment is the probability of survival. This study examines the survival of cervical cancer patients at Dr. Soetomo Hospital using stratified Cox regression based on six factors: age, stadium, treatment initiation, companion disease, complication, and anemia. The stratified Cox model is used because one independent variable, stadium, does not satisfy the proportional hazards assumption. The results of the stratified Cox model show that the complication variable is a significant factor influencing the survival probability of cervical cancer patients. The obtained hazard ratio is 7.35, meaning that a cervical cancer patient with complications has a risk of dying 7.35 times greater than a patient without complications. The adjusted survival curves showed that stadium IV had the lowest probability of survival.
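A stratified Cox fit can be sketched by maximizing the stratified partial likelihood, in which risk sets are formed within each stratum so that every stratum keeps its own baseline hazard. The toy survival data, the single binary covariate (standing in for "complication"), and the grid search are hypothetical; a real analysis would use a dedicated survival package.

```python
import math

# (time, event, x, stratum): hypothetical data, x = complication (0/1),
# stratum playing the role of the non-proportional variable (stadium).
data = [
    (2, 1, 1, 0), (3, 1, 0, 0), (5, 0, 1, 0), (7, 1, 0, 0),
    (1, 1, 1, 1), (4, 1, 1, 1), (6, 1, 0, 1), (8, 0, 0, 1),
]

def stratified_partial_loglik(beta):
    ll = 0.0
    for s in {d[3] for d in data}:
        sub = [d for d in data if d[3] == s]
        for t, e, x, _ in sub:
            if e == 1:
                # Risk set at time t, formed within the stratum only.
                risk = [xr for tr, er, xr, _ in sub if tr >= t]
                denom = sum(math.exp(beta * xr) for xr in risk)
                ll += beta * x - math.log(denom)
    return ll

# Grid search for the coefficient; exp(beta) is the hazard ratio.
betas = [i / 100 for i in range(-300, 301)]
beta_hat = max(betas, key=stratified_partial_loglik)
print(round(math.exp(beta_hat), 2))
```

Because the baseline hazard cancels within each stratum's risk sets, the covariate effect is estimated without requiring proportional hazards across strata.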
The Added Value of Stratified Topographic Correction of Multispectral Images
Directory of Open Access Journals (Sweden)
Ion Sola
2016-02-01
Satellite images of mountainous areas are strongly affected by topography. Several studies have demonstrated that the results of semi-empirical topographic correction algorithms improve when a stratification of land covers is carried out first. However, differences in the proposed stratification strategies, and in how the results are evaluated, make it unclear how to implement them. The objective of this study was to compare different stratification strategies with a non-stratified approach using several evaluation criteria. For that purpose, the Statistic-Empirical and Sun-Canopy-Sensor + C algorithms were applied, and six different stratification approaches, based on vegetation indices and land cover maps, were implemented and compared with the traditional non-stratified option. Overall, this study demonstrates that, for this particular case study, the six stratification approaches give results similar to applying a traditional topographic correction with no previous stratification. The non-stratified correction approach could therefore be sufficient to remove the topographic effect, because it does not require any ancillary information and is easier to implement in automatic image processing chains. The findings also suggest that the Statistic-Empirical method performs slightly better than the Sun-Canopy-Sensor + C correction, regardless of the stratification approach. In any case, further research is necessary to evaluate other stratification strategies and confirm these results.
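The Sun-Canopy-Sensor + C correction named above can be sketched directly: C is estimated as intercept/slope from a linear regression of observed reflectance on the illumination cos(i), and each pixel is then rescaled. The reflectance values, angles, and the purely linear toy scene are hypothetical.

```python
import math

def scs_c(refl, cos_i, slope_deg, solar_zenith_deg):
    # Estimate the C parameter as intercept/slope of refl ~ cos(i).
    n = len(refl)
    mx, my = sum(cos_i) / n, sum(refl) / n
    m = (sum((c - mx) * (r - my) for c, r in zip(cos_i, refl))
         / sum((c - mx) ** 2 for c in cos_i))
    b = my - m * mx
    c_par = b / m                       # the "+C" term
    cos_sz = math.cos(math.radians(solar_zenith_deg))
    # SCS+C: refl * (cos(sz)*cos(slope) + C) / (cos(i) + C)
    return [r * (cos_sz * math.cos(math.radians(s)) + c_par) / (ci + c_par)
            for r, ci, s in zip(refl, cos_i, slope_deg)]

# Toy scene: reflectance depends linearly on illumination (0.1 + 0.2*cos_i);
# after correction, the illumination dependence should vanish entirely.
cos_i = [0.3, 0.5, 0.7, 0.9]
refl = [0.1 + 0.2 * c for c in cos_i]
out = scs_c(refl, cos_i, [0.0] * 4, 30.0)
print([round(v, 4) for v in out])   # -> [0.2732, 0.2732, 0.2732, 0.2732]
```

On this idealized scene every corrected pixel collapses to the same value, which is exactly the behavior a topographic correction is evaluated on (reduced correlation between reflectance and illumination).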
Exploring the salivary microbiome of children stratified by the oral hygiene index
Mashima, Izumi; Theodorea, Citra F.; Thaweboon, Boonyanit; Thaweboon, Sroisiri; Scannapieco, Frank A.; Nakazawa, Futoshi
2017-01-01
Poor oral hygiene often leads to chronic diseases such as periodontitis and dental caries, resulting in substantial economic costs and diminished quality of life in not only adults but also children. In this study, the salivary microbiome was characterized in a group of children stratified by the Simplified Oral Hygiene Index (OHI-S). Illumina MiSeq high-throughput sequencing based on the 16S rRNA gene was utilized to analyze 90 salivary samples (24 Good, 31 Moderate and 35 Poor oral hygiene) from these children.
Kongtragul, Jaruwan; Tukhanon, Wanvara; Tudpudsa, Piyanuch; Suedee, Kanita; Tienchai, Supaporn; Leewansangtong, Sunai; Nualgyong, Chaiyong
2014-05-01
To compare the efficacy of pelvic floor muscle exercise with concentration therapy versus pelvic floor muscle exercise alone after radical prostatectomy. One hundred thirty-five patients were randomized into an intervention group, in which concentration therapy was added to Kegel exercise, and a control group performing Kegel exercise only, using stratified randomization (stratified by whether the catheter was removed before or after discharge, and by type of surgery). Incontinence was defined as a loss of urine equal to or more than 2 grams in a one-hour pad test, before and after the test in each sample group. Follow-up results were obtained by phone visit at 3, 4, 5, 6, 8, 10, and 12 weeks after surgery. In the intervention group, 65 of 68 cases (95.6%) achieved continence within three months, compared to 48 of 67 (71.6%) in the control group, a statistically significant difference. Concentration therapy added to Kegel exercise significantly improved continence after radical prostatectomy.
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations, for when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models both for the spatial encounter history data from capture–recapture sampling and for variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as those present in SCR models.
Calculation of periodic flows in a continuously stratified fluid
Vasiliev, A.
2012-04-01
An analytic theory of disturbances generated by an oscillating compact source in a viscous, continuously stratified fluid was constructed. An exact solution of the internal wave generation problem was obtained, taking diffusivity effects into account. The analysis is based on the fundamental set of equations for incompressible flows. The linearized problem of periodic flows in a continuously stratified fluid, generated by an oscillating part of an inclined plane, was solved by methods of singular perturbation theory. A rectangular plate or a disc placed on a sloping plane and oscillating linearly in an arbitrary direction was selected as the source of disturbances. The solutions include functions regularly perturbed in the dissipative components, describing internal waves, and a family of singularly perturbed functions. One function from the singular family has an analogue in a homogeneous fluid, namely the periodic or Stokes flow. Its thickness is defined by a universal microscale depending on the kinematic viscosity coefficient and the buoyancy frequency, with a factor depending on the wave slope. The other singularly perturbed functions are specific to stratified flows. Their thicknesses are defined by the diffusion coefficient, the kinematic viscosity, and an additional factor depending on the geometry of the problem. Fields of fluid density, velocity, vorticity, pressure, energy density and flux, as well as the forces acting on the source, are calculated for different types of sources. It is shown that the most effective source of waves is the bi-piston. The complete 3D problem transforms, in various limiting cases, into the 2D problem for a source in a stratified or homogeneous fluid and the Stokes problem for an oscillating infinite plane. The case of the "critical" angle, in which the emitting surface slope equals the wave cone slope, requires separate investigation; in this case, the number of singular components is preserved. Patterns of velocity and density fields were constructed and
Garboś, Sławomir; Święcicka, Dorota
2015-11-01
The random daytime (RDT) sampling method was used for the first time in the assessment of average weekly exposure to uranium through drinking water in a large water supply zone. The data set of uranium concentrations determined in 106 RDT samples, collected in three runs from the water supply zone in Wroclaw (Poland), cannot be simply described by normal or log-normal distributions. Therefore, a numerical method designed for the detection and calculation of bimodal distributions was applied. The two extracted distributions, containing data from the summer season of 2011 and the winter season of 2012 (nI = 72) and from the summer season of 2013 (nII = 34), allowed the mean U concentrations in drinking water to be estimated: 0.947 μg/L and 1.23 μg/L, respectively. As the removal efficiency of uranium during the applied treatment process is negligible, the increase in uranium concentration can be explained by a higher U concentration in the surface-infiltration water used for the production of drinking water. During the summer season of 2013, heavy rains were observed in the Lower Silesia region, causing floods over the territory of the entire region. Fluctuations in uranium concentrations in surface-infiltration water can be attributed to releases of uranium from specific sources: migration from phosphate fertilizers and leaching from mineral deposits. Thus, exposure to uranium through drinking water may increase during extreme rainfall events. The average chronic weekly intakes of uranium through drinking water, estimated on the basis of the central values of the extracted normal distributions, accounted for 3.2% and 4.1% of the tolerable weekly intake. Copyright © 2015 Elsevier Ltd. All rights reserved.
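The kind of numerical bimodality detection described above can be sketched with a two-component normal mixture fitted by EM. The simulated concentrations roughly mimic the two reported means (0.947 and 1.23 μg/L); the standard deviations, the data generation, and the initialisation are hypothetical, not the authors' method or data.

```python
import math
import random
random.seed(0)

# Simulated "concentrations": two seasons with different means (hypothetical
# spread of 0.06), mirroring the nI=72 / nII=34 split in the abstract.
data = ([random.gauss(0.95, 0.06) for _ in range(72)] +
        [random.gauss(1.23, 0.06) for _ in range(34)])

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_normals(xs, iters=200):
    mu1, mu2 = min(xs), max(xs)              # crude initialisation
    sd1 = sd2 = (max(xs) - min(xs)) / 4
    w = 0.5
    for _ in range(iters):
        # E step: responsibility of component 1 for each point.
        r = [w * norm_pdf(x, mu1, sd1) /
             (w * norm_pdf(x, mu1, sd1) + (1 - w) * norm_pdf(x, mu2, sd2))
             for x in xs]
        # M step: update weight, means and standard deviations.
        n1 = sum(r)
        n2 = len(xs) - n1
        w = n1 / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        sd1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1)
        sd2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2)
    return sorted([mu1, mu2])

print(em_two_normals(data))
```

The fitted component means recover the two underlying modes, which is the essential output needed to compute season-specific intake estimates.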
Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang
2016-01-01
Millions of people in China move from rural to urban areas to pursue new opportunities while leaving their spouses and children at their rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18-45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. 16.46% of couples were separated from their spouses (spouse-separation only) and 25.81% of parents were separated from their children (child-separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse-separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI [2.81, 2.95] vs. 2.60 [2.53, 2.67]); the effect was significant for child-separation only and for female participants. Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors to encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress.
Hirooka, Nobutaka; Kadowaki, Takashi; Sekikawa, Akira; Ueshima, Hirotsugu; Choo, Jina; Miura, Katsuyuki; Okamura, Tomonori; Fujiyoshi, Akira; Kadowaki, Sayaka; Kadota, Aya; Nakamura, Yasuyuki; Maegawa, Hiroshi; Kashiwagi, Atsunori; Masaki, Kamal; Sutton-Tyrrell, Kim; Kuller, Lewis H.; Curb, J. David; Shin, Chol
2012-01-01
Background Cigarette smoking is a risk factor of coronary heart disease (CHD). Vascular calcification such as coronary artery calcium (CAC) and aortic calcium (AC) is associated with CHD. We hypothesized that cigarette smoking is associated with coronary artery and aortic calcifications in Japanese and Koreans with high smoking prevalence. Methods Random samples from populations of 313 Japanese and 302 Korean men aged 40 to 49 were examined for calcification of the coronary artery and aorta using electron beam computed tomography. Coronary artery calcium (CAC) and aortic calcium (AC) were quantified using the Agatston score. We examined the associations of cigarette smoking with CAC and AC after adjusting for conventional risk factors and alcohol consumption. Current and past smokers were combined and categorized into two groups using median pack-years as a cutoff point in each of Japanese and Koreans. The never smoker group was used as a reference for the multiple logistic regression analyses. Results Compared to never smokers, the odds ratio of CAC (score ≥10) for smokers with higher pack-years was 2.9 in Japanese (significant), and non-significant in Koreans. The odds ratio of AC (score ≥100) for smokers with higher pack-years was 10.4 in Japanese (significant), and also significant in Koreans. Thus, cigarette smoking with higher pack-years is significantly associated with CAC and AC in Japanese men, while in Korean men it is significantly associated with AC but not significantly with CAC. PMID:22844083
Hirooka, Nobutaka; Kadowaki, Takashi; Sekikawa, Akira; Ueshima, Hirotsugu; Choo, Jina; Miura, Katsuyuki; Okamura, Tomonori; Fujiyoshi, Akira; Kadowaki, Sayaka; Kadota, Aya; Nakamura, Yasuyuki; Maegawa, Hiroshi; Kashiwagi, Atsunori; Masaki, Kamal; Sutton-Tyrrell, Kim; Kuller, Lewis H; Curb, J David; Shin, Chol
2013-02-01
Cigarette smoking is a risk factor of coronary heart disease. Vascular calcification such as coronary artery calcium (CAC) and aortic calcium (AC) is associated with coronary heart disease. The authors hypothesised that cigarette smoking is associated with coronary artery and aortic calcifications in Japanese and Koreans with high smoking prevalence. Random samples from populations of 313 Japanese and 302 Korean men aged 40-49 years were examined for calcification of the coronary artery and aorta using electron beam CT. CAC and AC were quantified using the Agatston score. The authors examined the associations of cigarette smoking with CAC and AC after adjusting for conventional risk factors and alcohol consumption. Current and past smokers were combined and categorised into two groups using median pack-years as a cut-off point in each of Japanese and Koreans. The never-smoker group was used as a reference for the multiple logistic regression analyses. Compared with never-smokers, the OR of CAC (score ≥10) for smokers with higher pack-years was 2.9 in Japanese (significant), and non-significant in Koreans. The OR of AC (score ≥100) for smokers with higher pack-years was 10.4 in Japanese (significant), and also significant in Koreans. Thus, cigarette smoking with higher pack-years is significantly associated with CAC and AC in Japanese men, while cigarette smoking with higher pack-years is significantly associated with AC but not significantly with CAC in Korean men.
Schiamberg, Lawrence B; Oehmke, James; Zhang, Zhenmei; Barboza, Gia E; Griffore, Robert J; Von Heydrich, Levente; Post, Lori A; Weatherill, Robin P; Mastin, Teresa
2012-01-01
Few empirical studies have focused on elder abuse in nursing home settings. The present study investigated the prevalence and risk factors of staff physical abuse among elderly individuals receiving nursing home care in Michigan. A random sample of 452 adults with elderly relatives, older than 65 years, and in nursing home care completed a telephone survey regarding elder abuse and neglect experienced by this elder family member in the care setting. Some 24.3% of respondents reported at least one incident of physical abuse by nursing home staff. A logistic regression model was used to estimate the importance of various risk factors in nursing home abuse. Limitations in activities of daily living (ADLs), older adult behavioral difficulties, and previous victimization by nonstaff perpetrators were associated with a greater likelihood of physical abuse. Interventions that address these risk factors may be effective in reducing older adult physical abuse in nursing homes. Attention to the contextual or ecological character of nursing home abuse is essential, particularly in light of the findings of this study.
Directory of Open Access Journals (Sweden)
Orgül Selim
2010-06-01
Full Text Available Abstract Background The aim of this epidemiological study was to investigate the relationship of thermal discomfort with cold extremities (TDCE) to age, gender, and body mass index (BMI) in a Swiss urban population. Methods In a random population sample of Basel city, 2,800 subjects aged 20-40 years were asked to complete a questionnaire evaluating the extent of cold extremities. Values of cold extremities were based on questionnaire-derived scores. The correlation of age, gender, and BMI to TDCE was analyzed using multiple regression analysis. Results A total of 1,001 women (72.3% response rate) and 809 men (60% response rate) returned a completed questionnaire. Statistical analyses revealed the following findings: younger subjects suffered more intensely from cold extremities than the elderly, and women suffered more than men (particularly younger women). Slimmer subjects suffered significantly more often from cold extremities than subjects with higher BMIs. Conclusions Thermal discomfort with cold extremities (a relevant symptom of primary vascular dysregulation) occurs at highest intensity in younger, slimmer women and at lowest intensity in elderly, stouter men.
Directory of Open Access Journals (Sweden)
Serge Clotaire Billong
2016-11-01
Full Text Available Abstract Background Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, keeping track of patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we aimed to ascertain the national performance of, and determinants in, retention on ART at 12 months. Methods Using systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon, within the "reporting period" of October 2013-November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (75-85%), poor (<75%); factors with p-value < 0.01 were considered statistically significant. Results The majority (74.4%) of patients were in urban settings, and 50.9% were managed in reference treatment centres. Nationwide, retention on ART at 12 months was 60.4% (2023/3349); only six sites and one region achieved acceptable performances. Retention performance varied between reference treatment centres (54.2%) and management units (66.8%), p < 0.0001; men (57.1%) and women (62.0%), p = 0.007; and WHO clinical stage I (63.3%) and other stages (55.6%), p = 0.007; but differed neither by age (adults [60.3%] vs. children [58.8%], p = 0.730) nor by immune status (CD4 351-500 [65.9%] vs. other CD4 strata [59.86%], p = 0.077). Conclusions Poor retention in care within 12 months of ART initiation urges active search for patients lost to follow-up, targeting preferentially male and symptomatic patients, especially within reference ART clinics. Such a sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention perspectives.
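The early-warning-indicator thresholds quoted in the abstract (excellent >85%, fair 75-85%, poor <75%) amount to a simple classification rule; a minimal sketch:

```python
def ewi_retention_category(pct):
    """Classify 12-month ART retention per the HIVDR early-warning
    indicator thresholds quoted in the abstract:
    excellent (>85%), fair (75-85%), poor (<75%)."""
    if pct > 85.0:
        return "excellent"
    if pct >= 75.0:
        return "fair"
    return "poor"
```

Applied to the nationwide figure of 60.4%, the rule returns "poor", matching the study's conclusion.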
DEFF Research Database (Denmark)
Vollert, Jan; Maier, Christoph; Attal, Nadine
2017-01-01
/or allodynia, or loss of thermal detection and mild mechanical hyperalgesia and/or allodynia. Here, we present an algorithm for allocation of individual patients to these subgroups. The algorithm is nondeterministic-ie, a patient can be sorted to more than one phenotype-and can separate patients...
Model-based estimation of finite population total in stratified sampling
African Journals Online (AJOL)
The work presented in this paper concerns the estimation of finite population total under model – based framework. Nonparametric regression approach as a method of estimating finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
Physician failure to stratify patients hospitalized with acute pulmonary embolism.
Jacobs, Mitchell D; Greco, Allison; Mukhtar, Umer; Dunn, Jonathan; Scharf, Michael L
2017-12-01
In 2011, the AHA recommended risk stratification of patients with acute pulmonary embolism (PE). Failure to risk stratify may cause under-recognition of intermediate-risk PE and its attendant short- and long-term consequences. We sought to determine whether patients hospitalized with acute PE were appropriately risk stratified according to the 2011 AHA Scientific Statement within our hospital system, and whether differences exist in adherence to risk stratification by hospital or treating hospital service. We also wished to know the frequency of in-hospital consultations for acute PE, which might assist in the risk stratification process. This is a retrospective chart audit of all patients hospitalized with a diagnosis of acute PE between January 2011 and December 2013 at our 937-bed metropolitan, three-hospital system comprising academic University, neuroscience Specialty, and teaching Community hospitals. We evaluated the presence of imaging, laboratory tests, and specialty consultation within 72 h of PE diagnosis by hospital. 701 patients with acute PE were admitted to our hospital system during the study period. 308 patients (43.9%) met criteria for intermediate-risk PE. 347 patients (49.5%) were considered 'Low-Risk - At Risk': patients placed in a low-risk category without having undergone all recommended risk stratification testing, who therefore may truly have been in a higher risk category. No specialty consultations were utilized for 265 patients (37.8%). Our large metropolitan hospital system inadequately risk stratifies hospitalized patients with acute PE. Because nearly one-half of patients with acute PE did not have all recommended testing, clinicians may be under-recognizing patients with intermediate-risk PE and their risk for long-term morbidity. Specialty consultations were underutilized and may help guide medical decision-making.
Stratified spin-up in a sliced, square cylinder
Energy Technology Data Exchange (ETDEWEB)
Munro, R. J. [Faculty of Engineering, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Foster, M. R. [Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)
2014-02-15
We previously reported experimental and theoretical results on the linear spin-up of a linearly stratified, rotating fluid in a uniform-depth square cylinder [M. R. Foster and R. J. Munro, "The linear spin-up of a stratified, rotating fluid in a square cylinder," J. Fluid Mech. 712, 7–40 (2012)]. Here we extend that analysis to a "sliced" square cylinder, which has a base-plane inclined at a shallow angle α. Asymptotic results are derived that show the spin-up phase is achieved by a combination of Ekman-layer eruptions (from the perimeter region of the cylinder's lid and base) and cross-slope-propagating stratified Rossby waves. The final, steady-state limit for this spin-up phase is identical to that found previously for the uniform-depth cylinder, but is reached somewhat more rapidly, on a time scale of order E^(-1/2)Ω^(-1)/log(α/E^(1/2)) (compared to E^(-1/2)Ω^(-1) for the uniform-depth cylinder), where Ω is the rotation rate and E the Ekman number. Experiments were performed for Burger numbers, S, between 0.4 and 16, and showed that for S ≳ O(1) the Rossby modes are severely damped, and it is only at small S, and during the early stages, that the presence of these wave modes was evident. These observations are supported by the theory, which shows the damping factors increase with S and are numerically large for S ≳ O(1).
Jet-mixing of initially-stratified liquid-liquid pipe flows: experiments and numerical simulations
Wright, Stuart; Ibarra-Hernandes, Roberto; Xie, Zhihua; Markides, Christos; Matar, Omar
2016-11-01
Low pipeline velocities lead to stratification and so-called 'phase slip' in horizontal liquid-liquid flows due to differences in liquid densities and viscosities. Stratified flows have no suitable single point for sampling from which average phase properties (e.g. fractions) can be established. Inline mixing, achieved by static mixers or jets in cross-flow (JICF), is often used to overcome liquid-liquid stratification by establishing unstable two-phase dispersions for sampling. Achieving dispersions in liquid-liquid pipeline flows using JICF is the subject of this experimental and modelling work. The experimental facility involves a refractive-index-matched liquid-liquid-solid system, featuring an ETFE test section; the experimental liquids are silicone oil and a 51-wt% glycerol solution. The index matching allows the dispersed-phase fractions and velocity fields to be established through advanced optical techniques, namely PLIF (for phase) and PTV or PIV (for velocity fields). CFD codes using the volume of fluid (VOF) method are then used to demonstrate JICF breakup and dispersion in stratified pipeline flows. A number of simple jet configurations are described and their dispersion effectiveness is compared with the experimental results. Funding from Cameron for Ph.D. studentship (SW) gratefully acknowledged.
Instabilities developed in stratified flows over pronounced obstacles
Varela, J.; Araújo, M.; Bove, I.; Cabeza, C.; Usera, G.; Martí, Arturo C.; Montagne, R.; Sarasúa, L. G.
2007-12-01
In the present work we study numerically and experimentally the flow of a two-layer stratified fluid over a topographic obstacle. The problem reflects a wide range of oceanographic and meteorological situations in which stratification plays an important role. We identify the different instabilities developed by studying the pycnocline deformation due to a pronounced obstacle. The numerical simulations were made using the model caffa3D.MB, which solves the Navier-Stokes equations with finite volume elements on curvilinear meshes. The experimental results are contrasted with the numerical simulations, and linear stability analysis predictions are checked against particle image velocimetry (PIV) measurements.
Photonic Bandgaps in Mie Scattering by Concentrically Stratified Spheres
Smith, David D.; Fuller, Kirk A.; Curreri, Peter A.
2002-01-01
The Mie formulation for homogeneous spheres is generalized to handle core/shell systems and multiple concentric layers in a manner that exploits an analogy with stratified planar systems, thereby allowing concentric multi-layered structures to be treated as photonic bandgap materials. Representative results from a Mie code employing this analogy demonstrate that photonic bands are present for periodic concentric spheres, though not readily apparent in extinction spectra. Rather, the periodicity simply alters the scattering profile, enhancing the ratio of backscattering to forward scattering inside the bandgap, whereas modification of the interference structure is evident in extinction spectra in accordance with the optical theorem.
Fluid mixing in stratified gravity currents: the Prandtl mixing length.
Odier, P; Chen, J; Rivera, M K; Ecke, R E
2009-04-03
Shear-induced vertical mixing in a stratified flow is a key ingredient of thermohaline circulation. We experimentally determine the vertical flux of momentum and density of a forced gravity current using high-resolution velocity and density measurements. A constant eddy-viscosity model provides a poor description of the physics of mixing, but a Prandtl mixing length model relating momentum and density fluxes to mean velocity and density gradients works well. For an average gradient Richardson number Ri_g ≈ 0.08 and a Taylor Reynolds number Re_λ ≈ 100, the mixing lengths are fairly constant, about the same magnitude, comparable to the turbulent shear length.
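The mixing-length closure contrasted with the constant eddy-viscosity model above can be sketched as a one-line flux law; the function below illustrates the general form of the closure, not the authors' fitted model:

```python
def prandtl_mixing_length_flux(l_m, dU_dz, dC_dz):
    """Prandtl mixing-length closure: the turbulent vertical flux of a
    mean quantity C (momentum or density) is modelled with an effective
    eddy diffusivity K = l_m**2 * |dU/dz| built from a single mixing
    length l_m, instead of a constant eddy viscosity:

        flux = -l_m**2 * |dU/dz| * dC/dz

    The flux is down-gradient: it carries C from high to low mean values."""
    return -l_m**2 * abs(dU_dz) * dC_dz
```

The sign convention makes the flux oppose the mean gradient, which is the sense in which the model "relates momentum and density fluxes to mean velocity and density gradients".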
Doyle, Kenneth O., Jr.
1979-01-01
The vocabulary of sampling is examined in order to provide a clear understanding of basic sampling concepts. The basic vocabulary of sampling (population, probability sampling, precision and bias, stratification), the fundamental grammar of sampling (random sample), sample size and response rate, and cluster, multiphase, snowball, and panel…
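The stratification and random-sample concepts surveyed above can be illustrated with a proportional stratified sampling sketch (the helper below is hypothetical, not from the text):

```python
import random

def stratified_sample(population, strata_key, fraction, seed=0):
    """Proportional stratified random sampling: partition the population
    into strata via `strata_key`, then draw the same sampling fraction
    independently (simple random sampling) within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(fraction * len(units)))  # at least one unit per stratum
        sample.extend(rng.sample(units, k))
    return sample
```

Because each stratum is sampled at the same rate, the sample's stratum proportions mirror the population's, which is the usual motivation for stratifying.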
Bachem, Rahel; Maercker, Andreas
2016-09-01
Adjustment disorders (AjD) are among the most frequent mental disorders yet often remain untreated. The high prevalence, comparatively mild symptom impairment, and transient nature make AjD a promising target for low-threshold self-help interventions. Bibliotherapy represents a potential treatment for AjD problems. This study investigates the effectiveness of a cognitive behavioral self-help manual specifically directed at alleviating AjD symptoms in a homogenous sample of burglary victims. Participants with clinical or subclinical AjD symptoms following experience of burglary were randomized to an intervention group (n = 30) or waiting-list control group (n = 24). The new explicit stress response syndrome model for diagnosing AjD was applied. Participants received no therapist support and assessments took place at baseline, after the one-month intervention, and at three-month follow-up. Based on completer analyses, group by time interactions indicated that the intervention group showed more improvement in AjD symptoms of preoccupation and in post-traumatic stress symptoms. Post-intervention between-group effect sizes ranged from Cohen's d = .17 to .67 and the proportion of participants showing reliable change was consistently higher in the intervention group than in the control group. Engagement with the self-help manual was high: 87% of participants had worked through at least half the manual. This is the first published RCT of a bibliotherapeutic self-help intervention for AjD problems. The findings provide evidence that a low-threshold self-help intervention without therapist contact is a feasible and effective treatment for symptoms of AjD.
Woo, Young Sik; Lee, Kwang Hyuck; Noh, Dong Hyo; Park, Joo Kyung; Lee, Kyu Taek; Lee, Jong Kyun; Jang, Kee-Taek
2017-12-01
No comparative study of 22-gauge biopsy needles (PC22) and 25-gauge biopsy needles (PC25) has been conducted. We prospectively compared the diagnostic accuracy of PC22 and PC25 in patients with pancreatic and peripancreatic solid masses. We conducted a randomized noninferiority clinical study from January 2013 to May 2014 at Samsung Medical Center. A cytological and histological specimen of each pass was analyzed separately by an experienced pathologist. The primary outcome was the diagnostic accuracy of the PC22 or PC25. Secondary outcomes included the optimal number of passes for adequate diagnosis, core specimen yield, sample adequacy, and complication rates. Diagnostic accuracy of combining cytology with histology in three cumulative passes was 97.1% (100/103) in the PC22 group and 91.3% (94/103) in the PC25 group. Thus, noninferiority of PC25 to PC22 was not shown with a 10% noninferiority margin (difference, -5.8%; 95% CI, -12.1 to -0.5%). In a pairwise comparison within each needle type, two passes were non-inferior to three passes in the PC22 group (96.1% vs. 97.1%; difference, -0.97%; 95% CI, -6.63 to 4.69%), but noninferiority of two passes to three passes was not shown in the PC25 group (87.4% vs. 91.3%; difference, -3.88%; 95% CI, -13.5 to 5.7%). Non-inferiority of PC25 to PC22 diagnostic accuracy was not observed for solid pancreatic or peripancreatic masses without on-site cytology. PC22 may be the more suitable device because only two PC22 needle passes were sufficient to establish an adequate diagnosis, whereas PC25 required three or more needle passes.
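The noninferiority logic above (the lower confidence bound of the accuracy difference must exceed the -10% margin) can be sketched with a Wald-type interval for a difference in proportions; the study's exact CI method may differ, so this is an illustrative assumption:

```python
import math

def noninferior(x_new, n_new, x_ref, n_ref, margin=0.10, z=1.96):
    """Wald-type noninferiority check for a difference in accuracy
    proportions (new minus reference): noninferiority is shown only if
    the lower confidence bound of the difference exceeds -margin.

    A sketch of the comparison reported in the abstract, not the
    study's exact interval method."""
    p_new, p_ref = x_new / n_new, x_ref / n_ref
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    lower = diff - z * se
    return diff, lower, lower > -margin
```

With the abstract's counts (PC25 94/103 vs. PC22 100/103) this gives a difference of about -5.8% and a lower bound around -12%, below the -10% margin, so noninferiority is not shown, consistent with the reported result.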
Directory of Open Access Journals (Sweden)
Coll-de-Tuero Gabriel
2012-08-01
Full Text Available Abstract Background Kidney disease is associated with increased total mortality and cardiovascular morbimortality in the general population and in patients with Type 2 diabetes. The aim of this study is to determine the prevalence of kidney disease and of different types of renal disease in patients with type 2 diabetes (T2DM). Methods Cross-sectional study in a random sample of 2,642 T2DM patients cared for in primary care during 2007. Studied variables: demographic and clinical characteristics, pharmacological treatments and T2DM complications (diabetic foot, retinopathy, coronary heart disease and stroke). Variables of renal function were defined as follows: 1) microalbuminuria: albumin excretion rate ≥30 mg/g or 3.5 mg/mmol; 2) macroalbuminuria: albumin excretion rate ≥300 mg/g or 35 mg/mmol; 3) kidney disease (KD): glomerular filtration rate <60 mL/min/1.73 m² according to Modification of Diet in Renal Disease and/or the presence of albuminuria; 4) renal impairment (RI): glomerular filtration rate <60 mL/min/1.73 m²; 5) nonalbuminuric RI: glomerular filtration rate <60 mL/min/1.73 m² without albuminuria; and 6) diabetic nephropathy (DN): macroalbuminuria, or microalbuminuria plus diabetic retinopathy. Results The prevalence of the different types of renal disease was: 34.1% KD, 22.9% RI, 19.5% albuminuria and 16.4% diabetic nephropathy (DN). The prevalence of albuminuria without RI (13.5%) and of nonalbuminuric RI (14.7%) was similar. After adjusting for age, BMI, cholesterol, blood pressure and macrovascular disease, RI was significantly associated with female gender (OR 2.20; 95% CI 1.86–2.59), microvascular disease (OR 2.14; 95% CI 1.8–2.54) and insulin treatment (OR 1.82; 95% CI 1.39–2.38), and inversely associated with HbA1c (OR 0.85 for every 1% increase; 95% CI 0.80–0.91). Albuminuria without RI was inversely associated with female gender (OR 0.27; 95% CI 0.21–0.35) and duration of diabetes (OR 0.94 per year; 95% CI 0.91–0.97), and directly associated with HbA1c (OR 1.19 for every
Directory of Open Access Journals (Sweden)
Anne H Berman
Full Text Available The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11-16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. A random population sample consisting of 600 children aged 11-16, 100 per age group, and one of their parents (N = 1200) were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being using the PABAK, and total, continuous scores were evaluated using Bland-Altman plots. Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences occurred and parent ratings
Berman, Anne H.; Liu, Bojing; Ullman, Sara; Jadbäck, Isabel; Engström, Karin
2016-01-01
Background The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11–16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. Methods A random population sample consisting of 600 children aged 11–16, 100 per age group and one of their parents (N = 1200), were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being, using the PABAK and total, continuous scores were evaluated using Bland-Altman plots. Results Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences
Berman, Anne H; Liu, Bojing; Ullman, Sara; Jadbäck, Isabel; Engström, Karin
2016-01-01
The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11-16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. A random population sample consisting of 600 children aged 11-16, 100 per age group and one of their parents (N = 1200), were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being, using the PABAK and total, continuous scores were evaluated using Bland-Altman plots. Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences occurred and parent ratings showed
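The dichotomous PABAK used for dimensional agreement in the KIDSCREEN records above reduces to twice the observed agreement minus one; a minimal sketch (the ordinal PABAK-OS variant cited there generalises this idea):

```python
def pabak(ratings_a, ratings_b):
    """Prevalence- and Bias-Adjusted Kappa for two raters with
    dichotomous ratings: PABAK = 2 * observed_agreement - 1.

    Unlike Cohen's kappa, the chance-agreement term is fixed at 0.5,
    which removes sensitivity to prevalence and rater bias."""
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("need two equal-length, non-empty rating lists")
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 2.0 * agree / len(ratings_a) - 1.0
```

PABAK ranges from -1 (complete disagreement) through 0 (chance-level agreement) to 1 (perfect agreement).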
STRESS DISTRIBUTION IN THE STRATIFIED MASS CONTAINING VERTICAL ALVEOLE
Directory of Open Access Journals (Sweden)
Bobileva Tatiana Nikolaevna
2017-08-01
Full Text Available Almost all subsurface rocks used as foundations for various types of structures are stratified. Such heterogeneity may cause specific behaviour of the materials under strain. The differential equations describing the behaviour of such materials contain rapidly fluctuating coefficients, so solving them directly is time-consuming even on today's computers. The method of asymptotic averaging replaces the heterogeneous medium under study with averaged equations having constant coefficients. The present article is concerned with a stratified soil mass consisting of pairwise alternating isotropic elastic layers. By averaging the elastic moduli, the soil mass with horizontal stratification is modelled as a homogeneous transversely isotropic half-space with the plane of isotropy perpendicular to the vertical axis. The half-space is weakened by a vertical alveole of circular cross-section, and the soil mass is loaded by its own weight. For the horizontal interfaces between layers, two types of contact conditions are considered: ideal contact, and slip without separation. For the homogeneous transversely isotropic half-space with a vertical alveole, the well-known analytical solution of S.G. Lekhnitsky is used. The author gives expressions for the stress components and displacements in the soil mass for different boundary conditions on the alveole surface. Such problems arise in the construction and maintenance of buildings and in the use of composite materials.
Crystallization of a compositionally stratified basal magma ocean
Laneuville, Matthieu; Hernlund, John; Labrosse, Stéphane; Guttenberg, Nicholas
2018-03-01
Earth's ∼3.45 billion year old magnetic field is regenerated by dynamo action in its convecting liquid metal outer core. However, convection induces an isentropic thermal gradient which, coupled with a high core thermal conductivity, results in rapid conducted heat loss. In the absence of implausibly high radioactivity or alternate sources of motion to drive the geodynamo, the Earth's early core had to be significantly hotter than the melting point of the lower mantle. While the existence of a dense convecting basal magma ocean (BMO) has been proposed to account for high early core temperatures, the requisite physical and chemical properties for a BMO remain controversial. Here we relax the assumption of a well-mixed convecting BMO and instead consider a BMO that is initially gravitationally stratified owing to processes such as mixing between metals and silicates at high temperatures in the core-mantle boundary region during Earth's accretion. Using coupled models of crystallization and heat transfer through a stratified BMO, we show that very high temperatures could have been trapped inside the early core, sequestering enough heat energy to run an ancient geodynamo on cooling power alone.
Tidally-forced flow in a rotating, stratified, shoaling basin
Winters, Kraig B.
2015-06-01
Baroclinic flow of a rotating, stratified fluid in a parabolic basin is computed in response to barotropic tidal forcing using the nonlinear, non-hydrostatic, Boussinesq equations of motion. The tidal forcing is derived from an imposed, boundary-enhanced free-surface deflection that advances cyclonically around a central amphidrome. The tidal forcing perturbs a shallow pycnocline, sloshing it up and down over the shoaling bottom. Nonlinearities in the near-shore internal tide produce an azimuthally independent 'set-up' of the isopycnals that in turn drives an approximately geostrophically balanced, cyclonic, near-shore, sub-surface jet. The sub-surface cyclonic jet is an example of a slowly evolving, nearly balanced flow that is excited and maintained solely by forcing in the fast, super-inertial frequency band. Baroclinic instability of the nearly balanced jet and subsequent interactions between eddies produce a weak transfer of energy back into the inertia-gravity band as swirling motions with super-inertial vorticity stir the stratified fluid and spontaneously emit waves. The sub-surface cyclonic jet is similar in many ways to the poleward flows observed along eastern ocean boundaries, particularly the California Undercurrent. It is conjectured that such currents may be driven by the surface tide rather than by winds and/or along-shore pressure gradients.
Instability of Stratified Shear Flow: Intermittency and Length Scales
Ecke, Robert; Odier, Philippe
2015-11-01
The stability of stratified shear flows, which occur in oceanic overflows, wind-driven thermoclines, and atmospheric inversion layers, is governed by the Richardson number Ri, a non-dimensional balance between stabilizing stratification and destabilizing shear. For a shear flow with velocity difference U, density difference Δρ and characteristic length H, one has Ri = g(Δρ/ρ)H/U². A more precise definition is the gradient Richardson number Ri_g = N²/S², where the buoyancy frequency N = √((g/ρ) ∂ρ/∂z) and the mean strain S = ∂U/∂z, with z parallel to gravity and with ensemble or time averages defining the gradients. We explore the stability and mixing properties of a wall-bounded shear flow at Richardson numbers of order 0.1, in which a lighter alcohol-water mixture is injected from a nozzle into a quiescent, heavier salt-water fluid. The injected flow is turbulent with Taylor Reynolds number about 75. We compare a set of length scales that characterize the mixing properties of our turbulent stratified shear flow, including the Thorpe length LT, the Ozmidov length LO, and the Ellison length LE.
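The gradient Richardson number defined above can be estimated from discrete density and velocity profiles. This is a minimal sketch under our own conventions (z increasing upward, a reference density rho0), not code from the paper:

```python
import numpy as np

def gradient_richardson(z, rho, u, g=9.81, rho0=1025.0):
    """Gradient Richardson number Ri_g = N^2 / S^2 from discrete profiles.

    z   : heights in m, increasing upward (z parallel to gravity)
    rho : density profile in kg/m^3
    u   : horizontal velocity profile in m/s
    """
    # Buoyancy frequency squared; the minus sign makes N^2 > 0 when
    # density decreases upward (stable stratification).
    N2 = -(g / rho0) * np.gradient(rho, z)
    S = np.gradient(u, z)  # mean strain S = dU/dz
    return N2 / S**2

# Linear profiles give a depth-independent Ri_g.
z = np.linspace(0.0, 10.0, 11)
rho = 1025.0 - 0.5 * z   # stable: lighter fluid on top
u = 0.1 * z              # uniform shear
Ri = gradient_richardson(z, rho, u)
```

With these linear profiles every interior point returns the same Ri_g, so the stability threshold can be read off directly.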
Dyadic Green's function of an eccentrically stratified sphere.
Moneda, Angela P; Chrissoulidis, Dimitrios P
2014-03-01
The electric dyadic Green's function (dGf) of an eccentrically stratified sphere is built by use of the superposition principle, dyadic algebra, and the addition theorem of vector spherical harmonics. The end result of the analytical formulation is a set of linear equations for the unknown vector wave amplitudes of the dGf. The unknowns are calculated by truncation of the infinite sums and matrix inversion. The theory is exact, as no simplifying assumptions are required in any of the analytical steps leading to the dGf, and it is general in the sense that the layers of the sphere may have any number, position, size, and electrical properties. The point source can be placed outside of or in any lossless part of the sphere. Energy conservation, reciprocity, and other checks verify that the dGf is correct. A numerical application is made to a stratified sphere made of gold and glass, which operates as a lens.
Penetrative convection in stratified fluids: velocity and temperature measurements
Directory of Open Access Journals (Sweden)
M. Moroni
2006-01-01
The flux through the interface between a mixing layer and a stable layer plays a fundamental role in characterizing and forecasting the quality of water in stratified lakes and in the oceans, and the quality of air in the atmosphere. The evolution of the mixing layer in a stably stratified fluid body is simulated in the laboratory when "penetrative convection" occurs. The laboratory model consists of a tank filled with water and subjected to heating from below. The methods employed to detect the mixing layer growth were thermocouples for temperature data and two image analysis techniques, namely Laser Induced Fluorescence (LIF) and Feature Tracking (FT). LIF allows the mixing layer evolution to be visualized. Feature Tracking is used to detect tracer particle trajectories moving within the measurement volume. Pollutant dispersion phenomena are naturally described in the Lagrangian approach, as the pollutant acts as a tag of the fluid particles. The transilient matrix represents one of the possible tools available for quantifying particle dispersion during the evolution of the phenomenon.
Strongly Stratified Turbulence Wakes and Mixing Produced by Fractal Wakes
Dimitrieva, Natalia; Redondo, Jose Manuel; Chashechkin, Yuli; Fraunie, Philippe; Velascos, David
2017-04-01
This paper describes schlieren and shadowgraph experiments on the wake-induced mixing produced by traversing a vertical or horizontal fractal grid through the interface between two miscible fluids at low Atwood and Reynolds numbers. This configuration is designed to model mixing across isopycnals in stably stratified flows in many environmentally relevant situations (in either the atmosphere or the ocean). The initial unstable stratification is characterized by a reduced gravity g' = gΔρ/ρ, where g is gravity, Δρ the initial density step and ρ the reference density; the Atwood number is A = g'/(2g). The topology of the fractal wake within the strong stratification and the internal wave field produce both a turbulent cascade and a wave cascade, with frequent parametric resonances; the envelope of the mixing front is found to follow a complex, unsteady third-order polynomial with a maximum at about 4-5 Brunt-Väisälä non-dimensional time scales: δ = c1(t/N) + c2 g(Δρ/ρ)(t/N)² − c3(t/N)³. Conductivity probes together with schlieren and shadowgraph visual techniques, including CIV with laser-induced fluorescence and digitization of the light attenuation across the tank, are used to investigate the density gradients and the three-dimensionality of the expanding and contracting wake. Fractal analysis is also used to estimate the fastest and slowest growing wavelengths. The large-scale structures are observed to increase in wavelength as the mixing progresses, and the processes involved in this increase in scale are also examined. Measurements of the pointwise and horizontally averaged concentrations confirm the picture obtained from past flow-visualization studies: the fluid passes through the mixing region with relatively small amounts of molecular mixing, and molecular effects dominate only on longer time scales, when the small scales have penetrated through the large-scale structures.
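The cubic mixing-front envelope quoted above is easy to evaluate numerically. The sketch below uses hypothetical coefficients c1, c2, c3 and a hypothetical reduced gravity (none of these values come from the paper) and locates the front maximum from the vanishing derivative:

```python
import math

def mixing_front(tau, c1, c2, c3, gprime):
    """Envelope delta(tau) = c1*tau + c2*g'*tau^2 - c3*tau^3,
    with tau the Brunt-Vaisala-scaled time and g' = g*(delta_rho/rho)."""
    return c1 * tau + c2 * gprime * tau**2 - c3 * tau**3

def tau_of_maximum(c1, c2, c3, gprime):
    """Positive root of d(delta)/d(tau) = c1 + 2*c2*g'*tau - 3*c3*tau^2 = 0."""
    a, b, c = -3.0 * c3, 2.0 * c2 * gprime, c1
    return (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Hypothetical values, for illustration only.
c1, c2, c3, gprime = 1.0, 0.1, 0.05, 0.5
tau_max = tau_of_maximum(c1, c2, c3, gprime)
peak = mixing_front(tau_max, c1, c2, c3, gprime)
```

Because the cubic term carries a negative sign, the front first grows, then collapses back; the quadratic root above picks the physical (positive-time) turning point.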
Extent of atypical hyperplasia stratifies breast cancer risk in 2 independent cohorts of women.
Degnim, Amy C; Dupont, William D; Radisky, Derek C; Vierkant, Robert A; Frank, Ryan D; Frost, Marlene H; Winham, Stacey J; Sanders, Melinda E; Smith, Jeffrey R; Page, David L; Hoskin, Tanya L; Vachon, Celine M; Ghosh, Karthik; Hieken, Tina J; Denison, Lori A; Carter, Jodi M; Hartmann, Lynn C; Visscher, Daniel W
2016-10-01
Women with atypical hyperplasia (AH) on breast biopsy have a substantially increased risk of breast cancer (BC). Here the BC risk for the extent and subtype of AH is reported for 2 separate cohorts. All samples containing AH were included from 2 cohorts of women with benign breast disease (Mayo Clinic and Nashville). Histology review quantified the number of foci of atypical ductal hyperplasia (ADH) and atypical lobular hyperplasia (ALH). The BC risk was stratified for the number of AH foci within AH subtypes. The study included 708 Mayo AH subjects and 466 Nashville AH subjects. In the Mayo cohort, an increasing number of foci of AH was associated with a significant increase in the risk of BC both for ADH (relative risks of 2.61, 5.21, and 6.36 for 1, 2, and ≥3 foci, respectively; P for linear trend = .006) and for ALH (relative risks of 2.56, 3.50, and 6.79 for 1, 2, and ≥3 foci, respectively; P for linear trend = .001). In the Nashville cohort, the relative risks of BC for ADH were 2.70, 5.17, and 15.06 for 1, 2, and ≥3 foci, respectively (P for linear trend < .001). In 2 independent cohort studies of benign breast disease, the extent of atypia stratified the long-term BC risk for ADH and ALH. Cancer 2016;122:2971-2978. © 2016 American Cancer Society.
Santangelo, Antonino; Testai', Manuela; Albani, Salvatore; Mamazza, Grazia; Pavano, Salvatore; Zuccaro, Carmela; Atteritano, Marco; Berretta, Massimiliano; Tomarchio, Marcello; Maugeri, Domenico
2010-01-01
This study aimed to evaluate the occurrence of dementia with Lewy bodies (DLB) in a population sample admitted to an assisted sanitary residence (RSA, from the Italian "Residenza Sanitaria Assistita") in the Province of Catania. We considered 126 patients from a randomized population admitted to the RSA of Viagrande (Catania) between 1 March 2005 and 31 March 2007. Those diagnosed as demented according to the DSM-III criteria and having a Mini-Mental State Examination (MMSE) score <24 were divided into 2 groups: Group A, all demented patients without DLB; and Group B, DLB patients according to the diagnostic criteria of McKeith. All patients underwent the following psychometric and functional tests at admission, after 1 month, and at discharge: MMSE, Geriatric Depression Scale (GDS) [Yesavage J.A., Brink T.L., Rose T.L., Adey M., 1983. The development and validation of a geriatric depression screening scale: a preliminary report. J. Psych. Res. 17, 37], Barthel Index (BI), activities of daily living (ADL) and instrumental ADL (IADL). Particular attention was paid to the presence of delirium during the last 15 days before admission and during the stay, to mortality, and to the prevalence of other complaints. The observed data confirm frailty in 20% of the DLB patients, fluctuation of cognitive capacities, better recovery of affectivity, and reduced functional autonomy and self-sufficiency. In addition, DLB patients displayed a higher prevalence of delirium compared with the total population of demented patients, in whom only incidental delirium episodes occurred during the stay (31.6% vs. 16.6%; p<0.001). In the DLB population, decubitus lesions occurred more frequently and were of more severe staging compared with controls (45% vs. 27%; p<0.001). Mortality of the DLB patients was also higher (about 30% vs. 17% over 12 months).
A study of stratified gas-liquid pipe flow
Energy Technology Data Exchange (ETDEWEB)
Johnson, George W.
2005-07-01
This work includes both theoretical modelling and experimental observations relevant to the design of gas condensate transport lines. Multicomponent hydrocarbon gas mixtures are transported in pipes over long distances and at various inclinations. Under certain circumstances, the heavier hydrocarbon components and/or water vapour condense to form one or more liquid phases. Near the desired capacity, the liquid condensate and water are efficiently transported in the form of a stratified flow with a droplet field. During operation, however, the flow rate may be reduced, allowing liquid accumulation, which can create serious operational problems when large amounts of excess liquid are expelled into the receiving facilities during production ramp-up, or even in steady production in severe cases. In particular, liquid tends to accumulate in upward inclined sections due to insufficient drag on the liquid from the gas. To optimize the transport of gas condensates, pipe diameters should be carefully chosen to account for varying flow rates and pressure levels, which requires knowledge of the multiphase flow present. It is desirable to have a reliable numerical simulation tool to predict liquid accumulation for various flow rates, pipe diameters and pressure levels, something not presently provided by industrial flow codes. A critical feature of such a simulation code would be the ability to predict the transition from small liquid accumulation at high flow rates to large liquid accumulation at low flow rates. A semi-intermittent flow regime of roll waves alternating with a partly backward flowing liquid film has been observed experimentally for a range of gas flow rates. Most of the liquid is transported in the roll waves. The roll wave regime is not well understood and requires fundamental modelling and experimental research. The lack of reliable models for this regime leads to inaccurate prediction of the onset of
Statistics of microstructure patchiness in a stratified lake
Planella Morato, J.; Roget, E.; Lozovatsky, I.
2011-10-01
Statistics of microstructure patches in the sheared, strongly stratified metalimnion of Lake Banyoles (Catalonia, Spain), which occupied ~40% of the total lake depth of 12 m, are analyzed. For patches thicker than 0.25 m, the ratio LTp/hp depends on the patch Richardson and mixing Reynolds numbers, following the parameterization of Lozovatsky and Fernando (2002). Analysis of the dynamics of mixing reveals that averaged vertical diffusivities ranged between ~1 × 10⁻⁴ m² s⁻¹ and ~5 × 10⁻⁵ m² s⁻¹, depending on the phase of the internal waves. Episodic wind gusts (wind speed above 6 m s⁻¹) transfer ~1.6% of the wind energy to the metalimnion and ~0.7% to the hypolimnion, generating large microstructure patches with hp of several meters.
Internal combustion engine using premixed combustion of stratified charges
Marriott, Craig D [Rochester Hills, MI; Reitz, Rolf D [Madison, WI
2003-12-30
During a combustion cycle, a first stoichiometrically lean fuel charge is injected well prior to top dead center, preferably during the intake stroke. This first fuel charge is substantially mixed with the combustion chamber air during subsequent motion of the piston towards top dead center. A subsequent fuel charge is then injected prior to top dead center to create a stratified, locally richer mixture (but still leaner than stoichiometric) within the combustion chamber. The locally rich region within the combustion chamber has sufficient fuel density to autoignite, and its self-ignition serves to activate ignition for the lean mixture existing within the remainder of the combustion chamber. Because the mixture within the combustion chamber is overall premixed and relatively lean, NOx and soot production are significantly diminished.
Direct simulation of the stably stratified turbulent Ekman layer
Coleman, G. N.; Ferziger, J. H.; Spalart, P. R.
1992-01-01
The Navier-Stokes equations and the Boussinesq approximation were used to compute a 3D time-dependent turbulent flow in the stably stratified Ekman layer over a smooth surface. The simulation data are found to be in very good agreement with atmospheric measurements when nondimensionalized according to Nieuwstadt's local scaling scheme. Results suggest that, when Reynolds number effects are taken into account, the 'constant Froude number' stable layer model (Brost and Wyngaard, 1978) and the 'shearing length' stable layer model (Hunt, 1985) for the dissipation rate of turbulent kinetic energy are both valid. It is concluded that there is good agreement between the direct numerical simulation results and the large-eddy simulation results obtained by Mason and Derbyshire (1990).
An Anelastic Allspeed Projection Method for Gravitationally Stratified Flows
Energy Technology Data Exchange (ETDEWEB)
Gatti-Bono, Caroline; Colella, Phillip
2005-02-24
This paper looks at gravitationally-stratified atmospheric flows at low Mach and Froude numbers and proposes a new algorithm to solve the compressible Euler equations, in which the asymptotic limits are recovered numerically and the boundary conditions for block-structured local refinement methods are well-posed. The model is non-hydrostatic and the numerical algorithm uses a splitting to separate the fast acoustic dynamics from the slower anelastic dynamics. The acoustic waves are treated implicitly while the anelastic dynamics is treated semi-implicitly and an embedded-boundary method is used to represent mountain ranges. We present an example that verifies our asymptotic analysis and a set of results that compares very well with the classical gravity wave results presented by Durran.
A cautionary note on applying scores in stratified data.
Podgor, M J; Gastwirth, J L
1994-12-01
When rank tests are used to analyze stratified data, three methods for assigning scores to the observations have been proposed: (S) independently within each stratum (see Lehmann, 1975, Nonparametrics: Statistical Methods Based on Ranks; San Francisco: Holden-Day); (A) after aligning the observations within each stratum and then pooling the aligned observations (Hodges and Lehmann, 1962, Annals of Mathematical Statistics 33, 482-497); and (P) after pooling the observations across all strata (that is, without alignment) (Mantel, 1963, Journal of the American Statistical Association 58, 690-700; Mantel and Ciminera, 1979, Cancer Research 39, 4308-4315). Test statistics are formed for each method by combining the stratum-specific linear rank tests using the assigned scores. We show that method P is sensitive to the score function used in the case of two moderately sized strata. In general, we recommend methods S and A for use with moderate to large-sized strata.
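The three scoring schemes (S, A, P) described above can be sketched in a few lines. This is a simplified illustration with our own function names, assuming no ties and using median alignment for method A; the cited papers allow general score functions and tie handling:

```python
import numpy as np

def ranks(x):
    """1-based ranks of a 1-D array (assumes no ties, for brevity)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def stratum_scores(strata, method="S"):
    """Assign Wilcoxon-type rank scores to stratified observations.

    strata : list of 1-D arrays, one per stratum
    method : 'S' rank separately within each stratum;
             'A' align each stratum (subtract its median), then rank pooled values;
             'P' rank the raw pooled observations (no alignment).
    """
    if method == "S":
        return [ranks(x) for x in strata]
    aligned = [x - np.median(x) for x in strata] if method == "A" else strata
    pooled_ranks = ranks(np.concatenate(aligned))
    out, i = [], 0
    for x in strata:
        out.append(pooled_ranks[i:i + len(x)])
        i += len(x)
    return out

strata = [np.array([1.0, 3.0, 2.0]), np.array([10.0, 12.0, 11.0])]
s_scores = stratum_scores(strata, "S")
p_scores = stratum_scores(strata, "P")
```

With two strata on very different scales, method S gives each stratum the same score set, while method P lets the stratum with larger values monopolize the high ranks, which is the sensitivity the note warns about.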
Probability sampling of stony coral populations in the Florida Keys.
Smith, Steven G; Swanson, Dione W; Chiappone, Mark; Miller, Steven L; Ault, Jerald S
2011-12-01
Principles of probability survey design were applied to guide large-scale sampling of populations of stony corals and associated benthic taxa in the Florida Keys coral reef ecosystem. The survey employed a two-stage stratified random sampling design that partitioned the 251-km(2) domain by reef habitat types, geographic regions, and management zones. Estimates of the coefficient of variation (ratio of standard error to the mean) for stony coral population density and abundance ranged from 7% to 12% for four of six principal species. These levels of survey precision are among the highest reported for comparable surveys of marine species. Relatively precise estimates were also obtained for octocoral density, sponge frequency of occurrence, and benthic cover of algae and invertebrates. Probabilistic survey design techniques provided a robust framework for estimating population-level metrics and optimizing sampling efficiency.
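The coefficient of variation reported above can be illustrated with the textbook single-stage stratified-mean estimator. This is a generic sketch with made-up numbers, not the two-stage design actually used in the survey:

```python
import math

def stratified_estimate(strata):
    """Stratified random sampling estimator of a population mean.

    strata : list of (N_h, samples) pairs, where N_h is the number of
             sampling units in stratum h and samples the observed values.
    Returns (mean, standard_error, coefficient_of_variation).
    """
    N = sum(N_h for N_h, _ in strata)
    mean = sum(N_h * (sum(y) / len(y)) for N_h, y in strata) / N
    var = 0.0
    for N_h, y in strata:
        n_h = len(y)
        ybar = sum(y) / n_h
        s2 = sum((v - ybar) ** 2 for v in y) / (n_h - 1)  # sample variance
        # Stratum contribution with finite-population correction.
        var += (N_h / N) ** 2 * (1 - n_h / N_h) * s2 / n_h
    se = math.sqrt(var)
    return mean, se, se / mean

m, se, cv = stratified_estimate([(100, [2.0, 4.0, 6.0]), (300, [10.0, 12.0, 14.0])])
```

The CV (standard error over mean) is the precision metric the survey reports; partitioning by habitat, region, and zone shrinks it whenever strata are internally homogeneous.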
Visualization periodic flows in a continuously stratified fluid.
Bardakov, R.; Vasiliev, A.
2012-04-01
To visualize the flow pattern of a viscous, continuously stratified fluid, both experimental and computational methods were developed. Computational procedures were based on exact solutions of the set of fundamental equations. Solutions of the problems of flows produced by a periodically oscillating disk (linear and torsional oscillations) were visualized at high resolution to distinguish the small-scale singular components against the background of strong internal waves. The numerical visualization algorithm can represent both scalar and vector fields, such as velocity, density, pressure, vorticity, and stream function. The effects of source size, buoyancy and oscillation frequency, and kinematic viscosity of the medium were traced in 2D and 3D problem settings. A precision schlieren instrument was used to visualize the flow pattern produced by linear and torsional oscillations of a strip and a disk in a continuously stratified fluid. Uniform stratification was created by the continuous displacement method. The buoyancy period ranged from 7.5 to 14 s. In the experiments, disks with diameters from 9 to 30 cm and thicknesses of 1 mm to 10 mm were used. Different schlieren methods, namely the conventional vertical slit with Foucault knife, vertical slit with filament (Maksutov's method), and horizontal slit with horizontal grating (natural "rainbow" schlieren method), produce complementary flow patterns. Both internal wave beams and fine flow components were visualized near and far from the source. The intensity of high-gradient envelopes increased in proportion to the amplitude of the source. In domains where envelopes converge, isolated small-scale vortices and extended mushroom-like jets were formed. Experiments have shown that in the case of torsional oscillations the pattern of currents is more complicated than for forced linear oscillations. Comparison with a known theoretical model shows that nonlinear interactions between the regular and singular flow components must be taken into account.
Dynamics of mixed convective-stably-stratified fluids
Couston, L.-A.; Lecoanet, D.; Favier, B.; Le Bars, M.
2017-09-01
We study the dynamical regimes of a density-stratified fluid confined between isothermal no-slip top and bottom boundaries (at temperatures Tt and Tb) via direct numerical simulation. The thermal expansion coefficient of the fluid is temperature dependent and chosen such that the fluid density is maximum at the inversion temperature Ti, with Tb>Ti>Tt. Thus, the lower layer of the fluid is convectively unstable while the upper layer is stably stratified. We show that the characteristics of the convection change significantly depending on the degree of stratification of the stable layer. For strong stable stratification, the convection zone coincides with the fraction of the fluid that is convectively unstable (i.e., where T>Ti), and convective motions consist of rising and sinking plumes of large density anomaly, as is the case in canonical Rayleigh-Bénard convection; internal gravity waves are generated by turbulent fluctuations in the convective layer and propagate in the upper layer. For weak stable stratification, we demonstrate that a large fraction of the stable fluid (i.e., with temperature T below Ti) is entrained by the convection, which mixes cold patches of low density-anomaly fluid with hot upward plumes; the end result is that the Ti isotherm sinks within the bottom boundary layer and the convection is entrainment dominated. We provide a phenomenological description of the transition between the regimes of plume-dominated and entrainment-dominated convection through analysis of the differences in the heat transfer mechanisms, kinetic energy density spectra, and probability density functions for different stratification strengths. Importantly, we find that the effect of the stable layer on the convection decreases only weakly with increasing stratification strength, meaning that the dynamics of the stable layer and convection should be studied self-consistently in a wide range of applications.
Invited Review. Combustion instability in spray-guided stratified-charge engines. A review
Energy Technology Data Exchange (ETDEWEB)
Fansler, Todd D. [Univ. of Wisconsin, Madison, WI (United States); Reuss, D. L. [Univ. of Michigan, Ann Arbor, MI (United States); Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sick, V. [Univ. of Michigan, Ann Arbor, MI (United States); Dahms, R. N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2015-02-02
Our article reviews systematic research on combustion instabilities (principally rare, random misfires and partial burns) in spray-guided stratified-charge (SGSC) engines operated at part load with highly stratified fuel-air-residual mixtures. Results from high-speed optical imaging diagnostics and numerical simulation provide a conceptual framework and quantify the sensitivity of ignition and flame propagation to strong, cyclically varying temporal and spatial gradients in the flow field and in the fuel-air-residual distribution. For SGSC engines using multi-hole injectors, spark stretching and locally rich ignition are beneficial. Moreover, combustion instability is dominated by convective flow fluctuations that impede motion of the spark or flame kernel toward the bulk of the fuel, coupled with low flame speeds due to locally lean mixtures surrounding the kernel. In SGSC engines using outwardly opening piezo-electric injectors, ignition and early flame growth are strongly influenced by the spray's characteristic recirculation vortex. For both injection systems, the spray and the intake/compression-generated flow field influence each other. Factors underlying the benefits of multi-pulse injection are identified. Finally, some unresolved questions include (1) the extent to which piezo-SGSC misfires are caused by failure to form a flame kernel rather than by flame-kernel extinction (as in multi-hole SGSC engines); (2) the relative contributions of partially premixed flame propagation and mixing-controlled combustion under the exceptionally late-injection conditions that permit SGSC operation on E85-like fuels with very low NOx and soot emissions; and (3) the effects of flow-field variability on later combustion, where fuel-air-residual mixing within the piston bowl becomes important.
Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing
Kumar, Abhishek
2017-01-11
In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small-scale turbulence. The VSHF consists of internal gravity waves, and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k−3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k−3 power law, and its flux is directed to small scales. For moderate stratification, there is no VSHF, and the KE of the turbulent flow exhibits Bolgiano-Obukhov scaling that transitions from a shallow k−11/5 form at large scales to a steeper approximate k−3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k−1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k−2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k−1.6 power law. For all stratification strengths, the total energy always flows from large to small scales, and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.
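Spectral exponents such as the k−3 and k−11/5 scalings quoted above are typically extracted by a straight-line fit in log-log coordinates. A minimal sketch of that diagnostic (our own helper, not the authors' analysis code):

```python
import numpy as np

def spectral_slope(k, E):
    """Least-squares estimate of the exponent n in E(k) ~ k^n,
    from a degree-1 polynomial fit in log-log coordinates."""
    n, _intercept = np.polyfit(np.log(k), np.log(E), 1)
    return n

# Sanity check on a synthetic spectrum with a known exponent.
k = np.arange(1.0, 129.0)
slope = spectral_slope(k, k**-3.0)
```

In practice one restricts the fit to the inertial range, since forcing and dissipation scales contaminate the power law at both ends.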
National Research Council Canada - National Science Library
Kongtragul, Jaruwan; Tukhanon, Wanvara; Tudpudsa, Piyanuch; Suedee, Kanita; Tienchai, Supaporn; Leewansangtong, Sunai; Nualgyong, Chaiyong
2014-01-01
One hundred thirty-five patients were randomized, using stratified randomization, into an intervention group, in which concentration therapy was added to Kegel exercise, and a control group with Kegel exercise only.
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5 ) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m(3) for simulated children using the stratified population sampling method, and 12.2 μg/m(3) using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m(3) due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
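The two population sampling strategies compared above can be contrasted in a few lines. This is a schematic sketch only: the tract IDs and sample sizes are invented, and the actual exposure model also samples activity diaries per simulation day:

```python
import random

def stratified_random_sample(tracts, n_per_tract, seed=0):
    """Stratified: draw a fixed number of simulated individuals per census tract."""
    rng = random.Random(seed)
    return {t: rng.sample(people, n_per_tract) for t, people in tracts.items()}

def random_random_sample(tracts, n_total, seed=0):
    """Random-random: pool everyone, then draw uniformly without replacement."""
    rng = random.Random(seed)
    pool = [p for people in tracts.values() for p in people]
    return rng.sample(pool, n_total)

# Invented example: two tracts of unequal size.
tracts = {"A": list(range(10)), "B": list(range(10, 30))}
strat = stratified_random_sample(tracts, 3)
rand = random_random_sample(tracts, 6)
```

The stratified draw guarantees every tract is represented, whereas the pooled draw lets large tracts dominate; the study's finding is that mean exposure estimates are robust to this choice.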
The frequency of drugs in randomly selected drivers in Denmark
DEFF Research Database (Denmark)
Simonsen, Kirsten Wiese; Steentoft, Anni; Hels, Tove
Introduction: Driving under the influence of alcohol and drugs is a global problem. In Denmark, as in other countries, there is an increasing focus on impaired driving, yet little is known about the occurrence of psychoactive drugs in general traffic. The European Commission therefore initiated the DRUID project. This roadside study is the Danish part of the EU project DRUID (Driving Under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Methods: Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season, and road type. The oral fluid samples were screened for 29 illegal and legal psychoactive substances and metabolites as well as ethanol. Results: Fourteen (0.5%) drivers were positive for ethanol (alone or in combination with drugs) at concentrations above 0.53 g/l.
First detection of mycobacteria in African rodents and insectivores, using stratified pool screening
DEFF Research Database (Denmark)
Durnez, L.; Eddyani, M.; Mgode, G. F.
2007-01-01
With the rising number of patients with human immunodeficiency virus (HIV)/AIDS in developing countries, the control of mycobacteria is of growing importance. Previous studies have shown that rodents and insectivores are carriers of mycobacteria. However, it is not clear how widespread mycobacteria are in these animals and what their role is in spreading them. Therefore, the prevalence of mycobacteria in rodents and insectivores was studied in and around Morogoro, Tanzania. Live rodents were trapped, with three types of live traps, in three habitats. Pieces of organs were pooled per habitat, species, and organ type (stratified pooling); these sample pools were examined for the presence of mycobacteria by PCR, microscopy, and culture methods. The mycobacterial isolates were identified using phenotypic techniques and sequencing. In total, 708 small mammals were collected, 31 of which were shrews.
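Pool screening trades individual tests for pooled ones and then back-calculates individual prevalence. The sketch below shows the standard estimator under textbook assumptions (perfect test, equal pool sizes, independent animals); it is a generic illustration, not the exact analysis of this study:

```python
def pooled_prevalence(n_pools, n_positive, pool_size):
    """Maximum-likelihood estimate of individual prevalence p from pooled
    test results: a pool of k animals tests negative with probability
    (1 - p)^k, so p = 1 - (1 - P_pos)**(1/k), where P_pos is the
    observed fraction of positive pools."""
    p_pos = n_positive / n_pools
    return 1.0 - (1.0 - p_pos) ** (1.0 / pool_size)

# Invented example: 20 positive pools out of 100, five organs per pool.
p_hat = pooled_prevalence(100, 20, 5)
```

Note that the estimated individual prevalence (about 4%) is far below the 20% positive-pool fraction, which is exactly why pooling is efficient at low prevalence.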
Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M
2017-06-23
In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified sampling clustered at the ZIP-code level; and stratified sampling clustered at the census-tract level. We conducted a simulation of repeated sampling and compared the approaches on precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both the stratified, clustered ZIP- and tract-based approaches were feasible. Both ZIP and tract strategies yielded improvements over simple random sampling, with relative improvements, respectively, in average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Use of statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the Food and Drug Administration (FDA). The design is feasible to implement in North Carolina.
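The "adequate design effect" mentioned above is commonly approximated for cluster samples by DEFF = 1 + (m − 1)ρ, with m the average cluster size and ρ the intracluster correlation. This is the standard textbook formula, not a calculation from the study:

```python
def design_effect(cluster_size, icc):
    """Approximate design effect for a cluster sample:
    DEFF = 1 + (m - 1) * rho, the variance inflation relative to a
    simple random sample of the same total size."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """Nominal sample size deflated by the design effect."""
    return n / design_effect(cluster_size, icc)
```

Even a modest intracluster correlation (say ρ = 0.05 with 10 retailers per cluster) inflates variance by 45%, which is the cost that clustering trades against reduced inspector travel.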
Deep silicon maxima in the stratified oligotrophic Mediterranean Sea
Directory of Open Access Journals (Sweden)
Y. Crombet
2011-02-01
The silicon biogeochemical cycle has been studied in the Mediterranean Sea during late summer/early autumn 1999 and summer 2008. The distribution of nutrients, particulate carbon and silicon, fucoxanthin (Fuco), and total chlorophyll-a (TChl-a) was investigated along an eastward gradient of oligotrophy during two cruises (PROSOPE and BOUM) encompassing the entire Mediterranean Sea during the stratified period. At both seasons, surface waters were depleted in nutrients and the nutriclines gradually deepened towards the East, the phosphacline being deepest in the easternmost Levantine basin. Following the nutriclines, parallel deep maxima of biogenic silica (DSM), fucoxanthin (DFM), and TChl-a (DCM) were evidenced during both seasons, with maximal concentrations of 0.45 μmol L^{−1} for BSi, 0.26 μg L^{−1} for Fuco, and 1.70 μg L^{−1} for TChl-a, all measured during summer. Contrary to the DCM, which was a persistent feature in the Mediterranean Sea, the DSM and DFM were observed in discrete areas of the Alboran Sea, the Algero-Provencal basin, the Ionian Sea and the Levantine basin, indicating that diatoms were able to grow at depth and dominate the DCM under specific conditions. Diatom assemblages were dominated by Chaetoceros spp., Leptocylindrus spp., Pseudonitzschia spp., and the association between large centric diatoms (Hemiaulus hauckii and Rhizosolenia styliformis) and the cyanobacterium Richelia intracellularis was observed at nearly all sites. The ability of diatoms to grow at depth is commonly observed in other oligotrophic regions and could play a major role in ecosystem productivity and carbon export to depth. Contrary to the common view that Si and siliceous phytoplankton are not major components of the Mediterranean biogeochemistry, we suggest here that diatoms, by persisting at depth during the stratified period, could contribute to a
Stratified flows with variable density: mathematical modelling and numerical challenges.
Murillo, Javier; Navas-Montilla, Adrian
2017-04-01
Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment, causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit variable horizontal density: depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proven quality are demanded. Under these complex scenarios it is necessary to verify not only that the numerical solution provides the expected order of accuracy but also that it converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux
Directory of Open Access Journals (Sweden)
Sandra F. MOREIRA-SILVA
1998-07-01
In the streets of Vitória, in the State of Espírito Santo, Brazil, there is a large number of stray dogs, many of which are infected with Toxocara canis, suggesting a high risk for human infection. In order to investigate the prevalence of Toxocara infection in children in Espírito Santo, we studied the prevalence of anti-Toxocara antibodies in 100 random inpatients over one year of age at the Children's Hospital N.S. da Glória, the reference children's hospital for the State. All sera were collected between October 1996 and January 1997. The mean age was 6.6 ± 4.1 yrs (range 1 to 14 yrs, median 6 yrs), and there were patients from all of the different wards of the hospital. Sixty-eight patients came from the metropolitan area of Vitória and the other 32 from 17 other municipalities. Anti-Toxocara antibodies were investigated by ELISA-IgG using a secretory-excretory antigen obtained from second stage larvae. All sera were adsorbed with Ascaris suum antigen before the test. Thirty-nine sera (39%) were positive, predominantly from boys, but the gender difference was not statistically significant (boys: 25/56 or 44.6%; girls: 14/44 or 31.8%; p = 0.311). The prevalence of positive sera was higher, though not statistically significantly so, in children from the urban periphery of metropolitan Vitória (formed by the cities of Vitória, Cariacica, Vila Velha, Serra and Viana) than in children from the 17 other municipalities (44.1% and 28.1% respectively, p = 0.190). Although the samples studied do not represent all children living in the State of Espírito Santo, since the Children's Hospital N.S. da Glória admits only patients from the state health system, it is probable that these results indicate a high frequency of Toxocara infection in children living in Espírito Santo. Further studies of population samples are necessary to ascertain the prevalence of Toxocara infection in our country.
Inverse cascades and resonant triads in rotating and stratified turbulence
Oks, D.; Mininni, P. D.; Marino, R.; Pouquet, A.
2017-11-01
Kraichnan's seminal ideas on inverse cascades yielded new tools to study common phenomena in geophysical turbulent flows. In the atmosphere and the oceans, rotation and stratification result in a flow that can be approximated as two-dimensional at very large scales but which requires considering three-dimensional effects to fully describe turbulent transport processes and non-linear phenomena. Motions can thus be classified into two classes: fast modes consisting of inertia-gravity waves and slow quasi-geostrophic modes for which the Coriolis force and horizontal pressure gradients are close to balance. In this paper, we review previous results on the strength of the inverse cascade in rotating and stratified flows and then present new results on the effect of varying the strength of rotation and stratification (measured by the ratio N/f of the Brunt-Väisälä frequency to the Coriolis frequency) on the amplitude of the waves and on the flow quasi-geostrophic behavior. We show that the inverse cascade is more efficient in the range of N/f for which resonant triads do not exist, 1/2 ≤ N/f ≤ 2. We then use the spatio-temporal spectrum to show that in this range slow modes dominate the dynamics, while the strength of the waves (and their relevance in the flow dynamics) is weaker.
Drug therapies in severe asthma - the era of stratified medicine.
Hetherington, Kathy J; Heaney, Liam G
2015-10-01
Difficult-to-treat asthma affects up to 20% of patients with asthma and is associated with significant healthcare costs. It is an umbrella term that defines a heterogeneous clinical problem including incorrect diagnosis, comorbid conditions and treatment non-adherence; when these are effectively addressed, good symptom control is frequently achieved. However, in 3-5% of adults with difficult-to-treat asthma, the problem is severe disease that is unresponsive to currently available treatments. Current treatment guidelines advise the 'stepwise' increase of corticosteroids, but it is now recognised that many aspects of asthma are not corticosteroid responsive, and that this 'one size fits all' approach does not deliver clinical benefit in many patients and can also lead to side effects. The future of management of severe asthma will involve optimisation with currently available treatments, particularly corticosteroids, including addressing non-adherence and defining an 'optimised' corticosteroid dose, allied with the use of 'add-on' target-specific novel treatments. This review examines the current status of novel treatments and research efforts to identify novel targets in the era of stratified medicine in severe asthma. © Royal College of Physicians 2015. All rights reserved.
Stratified Flow Past a Hill: Dividing Streamline Concept Revisited
Leo, Laura S.; Thompson, Michael Y.; Di Sabatino, Silvana; Fernando, Harindra J. S.
2016-06-01
The Sheppard formula (Q J R Meteorol Soc 82:528-529, 1956) for the dividing streamline height H_s assumes a uniform velocity U_∞ and a constant buoyancy frequency N for the approach flow towards a mountain of height h, and takes the form H_s/h = 1 − F, where F = U_∞/(Nh). We extend this solution to a logarithmic approach-velocity profile with constant N. An analytical solution is obtained for H_s/h in terms of Lambert-W functions, which also suggests alternative scaling for H_s/h. A `modified' logarithmic velocity profile is proposed for stably stratified atmospheric boundary-layer flows. A field experiment designed to observe H_s is described, which utilized instrumentation from the spring field campaign of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program. Multiple releases of smoke at F ≈ 0.3-0.4 support the new formulation, notwithstanding the limited success of experiments due to logistical constraints. No dividing streamline is discerned for F ≈ 10, since, if present, it is too close to the foothill. Flow separation and vortex shedding is observed in this case. The proposed modified logarithmic profile is in reasonable agreement with experimental observations.
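For the uniform-velocity case, the Sheppard formula is straightforward to evaluate. The sketch below implements only H_s/h = 1 − F (not the Lambert-W extension for the logarithmic profile derived in the paper); the example values are hypothetical.

```python
def dividing_streamline_height(U_inf, N, h):
    """Sheppard's formula: H_s/h = 1 - F with Froude number F = U_inf / (N * h).
    For F >= 1 the flow surmounts the hill and no dividing streamline exists."""
    F = U_inf / (N * h)
    return max(h * (1.0 - F), 0.0)

# e.g. U_inf = 2 m/s, N = 0.01 s^-1, h = 1000 m  ->  F = 0.2, H_s = 800 m
Hs = dividing_streamline_height(2.0, 0.01, 1000.0)
```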
Stratifying the Risk of Venous Thromboembolism in Otolaryngology
Shuman, Andrew G.; Hu, Hsou Mei; Pannucci, Christopher J.; Jackson, Christopher R.; Bradford, Carol R.; Bahl, Vinita
2015-01-01
Objective The consequences of perioperative venous thromboembolism (VTE) are devastating; identifying patients at risk is an essential step in reducing morbidity and mortality. The utility of perioperative VTE risk assessment in otolaryngology is unknown. This study was designed to risk-stratify a diverse population of otolaryngology patients for VTE events. Study Design Retrospective cohort study. Setting Single-institution academic tertiary care medical center. Subjects and Methods Adult patients presenting for otolaryngologic surgery requiring hospital admission from 2003 to 2010 who did not receive VTE chemoprophylaxis were included. The Caprini risk assessment was retrospectively scored via a validated method of electronic chart abstraction. Primary study variables were Caprini risk scores and the incidence of perioperative venous thromboembolic outcomes. Results A total of 2016 patients were identified. The overall 30-day rate of VTE was 1.3%. The incidence of VTE in patients with a Caprini risk score of 6 or less was 0.5%. For patients with scores of 7 or 8, the incidence was 2.4%. Patients with a Caprini risk score greater than 8 had an 18.3% incidence of VTE and were significantly more likely to develop a VTE than patients with a Caprini risk score less than 8. Conclusion The Caprini risk assessment stratifies otolaryngology patients for 30-day VTE events and allows otolaryngologists to identify patient subgroups who have a higher risk of VTE in the absence of chemoprophylaxis. PMID:22261490
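The reported score bands can be expressed as a small lookup. This is purely an illustration of how the thresholds from the abstract (≤6, 7-8, >8) partition patients; the function name and labels are ours, not the study's.

```python
def caprini_vte_risk_band(score):
    """Map a Caprini risk score to the 30-day VTE incidence band reported in
    the abstract (illustrative only; thresholds and rates from the study)."""
    if score <= 6:
        return "low (0.5% observed incidence)"
    if score <= 8:
        return "intermediate (2.4% observed incidence)"
    return "high (18.3% observed incidence)"
```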
Numerical Study of Stratified Charge Combustion in Wave Rotors
Nalim, M. Razi
1997-01-01
A wave rotor may be used as a pressure-gain combustor effecting non-steady flow, and intermittent, confined combustion to enhance gas turbine engine performance. It will be more compact and probably lighter than an equivalent pressure-exchange wave rotor, yet will have similar thermodynamic and mechanical characteristics. Because the allowable turbine blade temperature limits overall fuel/air ratio to sub-flammable values, premixed stratification techniques are necessary to burn hydrocarbon fuels in small engines with compressor discharge temperature well below autoignition conditions. One-dimensional, unsteady numerical simulations of stratified-charge combustion are performed using an eddy-diffusivity turbulence model and a simple reaction model incorporating a flammability limit temperature. For good combustion efficiency, a stratification strategy is developed which concentrates fuel at the leading and trailing edges of the inlet port. Rotor and exhaust temperature profiles and performance predictions are presented at three representative operating conditions of the engine: full design load, 40% load, and idle. The results indicate that peak local gas temperatures will result in excessive temperatures within the rotor housing unless additional cooling methods are used. The rotor itself will have acceptable temperatures, but the pattern factor presented to the turbine may be of concern, depending on exhaust duct design and duct-rotor interaction.
Economic evaluation in stratified medicine: methodological issues and challenges
Directory of Open Access Journals (Sweden)
Hans-Joerg eFugel
2016-05-01
Background: Stratified Medicine (SM) is becoming a practical reality with the targeting of medicines by using a biomarker or genetic-based diagnostic to identify the eligible patient sub-population. Like any healthcare intervention, SM interventions have costs and consequences that must be considered by reimbursement authorities with limited resources. Methodological standards and guidelines exist for economic evaluations in clinical pharmacology and are an important component for health technology assessments (HTAs) in many countries. However, these guidelines were initially developed for traditional pharmaceuticals and not for complex interventions with multiple components. This raises the issue as to whether these guidelines are adequate for SM interventions or whether new specific guidance and methodology is needed to avoid inconsistencies and contradictory findings when assessing economic value in SM. Objective: This article describes specific methodological challenges when conducting health economic (HE) evaluations for SM interventions and outlines potential modifications necessary to existing evaluation guidelines/principles that would promote consistent economic evaluations for SM. Results/Conclusions: Specific methodological aspects for SM comprise considerations on the choice of comparator, measuring effectiveness and outcomes, appropriate modelling structure and the scope of sensitivity analyses. Although current HE methodology can be applied for SM, greater complexity requires further methodology development and modifications in the guidelines.
BIPOLAR MAGNETIC SPOTS FROM DYNAMOS IN STRATIFIED SPHERICAL SHELL TURBULENCE
Energy Technology Data Exchange (ETDEWEB)
Jabbari, Sarah; Brandenburg, Axel; Kleeorin, Nathan; Mitra, Dhrubaditya; Rogachevskii, Igor, E-mail: sarahjab@kth.se [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-10691 Stockholm (Sweden)
2015-06-01
Recent work by Mitra et al. (2014) has shown that in strongly stratified forced two-layer turbulence with helicity and corresponding large-scale dynamo action in the lower layer, and nonhelical turbulence in the upper, a magnetic field occurs in the upper layer in the form of sharply bounded bipolar magnetic spots. Here we extend this model to spherical wedge geometry covering the northern hemisphere up to 75° latitude and an azimuthal extent of 180°. The kinetic helicity and therefore also the large-scale magnetic field are strongest at low latitudes. For moderately strong stratification, several bipolar spots form that eventually fill the full longitudinal extent. At early times, the polarity of spots reflects the orientation of the underlying azimuthal field, as expected from Parker’s Ω-shaped flux loops. At late times their tilt changes such that there is a radial field of opposite orientation at different latitudes separated by about 10°. Our model demonstrates the spontaneous formation of spots of sizes much larger than the pressure scale height. Their tendency to produce filling factors close to unity is argued to be reminiscent of highly active stars. We confirm that strong stratification and strong scale separation are essential ingredients behind magnetic spot formation, which appears to be associated with downflows at larger depths.
Observations of Instabilities in Stratified Taylor-Couette Flow
Rodenborn, Bruce; Ibanez, Ruy; Swinney, Harry L.
2016-11-01
Inviscid analyses by Molemaker et al. and by Dubrulle et al. predicted that a fluid with a vertically varying density will be less stable than a uniform fluid when the fluid is contained inside a concentric rotating cylinder system and subject to anticyclonic shear. Dubrulle et al. named this instability the stratorotational instability, and a subsequent viscous theory by Shalybkov and Rudiger hypothesized that such stratified flow is stable when the ratio of outer and inner cylinder rotation rates μ is less than the ratio of the inner and outer cylinder radii η. Le Bars and Le Gal confirmed this hypothesis in experiments; however, we find instability even for μ > η when the density gradient is large. We also find that the axial wavelength scales linearly with the internal Froude number and that the onset of the SRI is suppressed for Re > 4000, a region previously unexplored in experiments. For Re > 8000, we find that the fluid does not exhibit the SRI but transitions to a spatially nonperiodic state that mixes the fluid. This research was supported in part by the Sid W. Richardson Foundation.
Laboratory Observation of Instabilities in Stratified Taylor-Couette Flow
Rodenborn, Bruce; Ibanez, Ruy; Swinney, Harry L.
2015-11-01
In 2001 Molemaker et al. (J. Fluid. Mech. 448, 1) predicted a new class of instabilities in a system of concentric rotating cylinders that contains a fluid with a vertically varying density. Dubrulle et al. (Astron. Astrophys. 429, 1, 2005) then showed that this phenomenon, which they named stratorotational instability (SRI), could be a source of instability and angular momentum transport in astrophysical accretion disks. Subsequent work by Shalybkov and Rüdiger (Astron. Astrophys. 438, 411, 2005) hypothesized that such stratified flow is stable when the ratio of outer and inner cylinder rotation rates μ is less than the ratio of the inner and outer cylinder radii η. Previous laboratory measurements by Le Bars and Le Gal (Phys. Rev. Lett. 99, 064502, 2007) confirmed this prediction; however, we find instability even for μ > η when the density gradient is large. We also find that the onset of SRI is suppressed for Reynolds numbers Re > 4000, a region previously unexplored in experiments. For Re > 8000, we find that the fluid does not exhibit SRI but transitions to a previously unreported chaotic state that mixes the fluid.
Large eddy simulation of unsteady lean stratified premixed combustion
Energy Technology Data Exchange (ETDEWEB)
Duwig, C. [Division of Fluid Mechanics, Department of Energy Sciences, Lund University, SE 221 00 Lund (Sweden); Fureby, C. [Division of Weapons and Protection, Warheads and Propulsion, The Swedish Defense Research Agency, FOI, SE 147 25 Tumba (Sweden)
2007-10-15
Premixed turbulent flame-based technologies are rapidly growing in importance, with applications to modern clean combustion devices for both power generation and aeropropulsion. However, the gain in decreasing harmful emissions might be canceled by rising combustion instabilities. Unwanted unsteady flame phenomena that might even destroy the whole device have been widely reported and are subject to intensive studies. In the present paper, we use unsteady numerical tools for simulating an unsteady and well-documented flame. Computations were performed for nonreacting, perfectly premixed, and stratified premixed cases using two different numerical codes and different large-eddy-simulation-based flamelet models. Nonreacting simulations are shown to agree well with experimental data, with the LES results capturing the mean features (symmetry breaking) as well as the fluctuation level of the turbulent flow. For reacting cases, the uncertainty induced by the time-averaging technique limited the comparisons. Given an estimate of the uncertainty, the numerical results were found to reproduce the experimental data well in terms of both mean flow field and fluctuation levels. In addition, it was found that despite relying on different assumptions/simplifications, both numerical tools lead to similar predictions, giving confidence in the results. Moreover, we studied the flame dynamics and particularly the response to a periodic pulsation. We found that above a certain excitation level, the flame dynamics change and become rather insensitive to the excitation/instability amplitude. Conclusions regarding the self-growth of thermoacoustic waves were drawn.
Generalizing the Boussinesq approximation to stratified compressible flow
Durran, Dale R.; Arakawa, Akio
2007-09-01
The simplifications required to apply the Boussinesq approximation to compressible flow are compared with those in an incompressible fluid. The larger degree of approximation required to describe mass conservation in a stratified compressible fluid using the Boussinesq continuity equation has led to the development of several different sets of 'anelastic' equations that may be regarded as generalizations of the original Boussinesq approximation. These anelastic systems filter sound waves while allowing a more accurate representation of non-acoustic perturbations in compressible flows than can be obtained using the Boussinesq system. The energy conservation properties of several anelastic systems are compared under the assumption that the perturbations of the thermodynamic variables about a hydrostatically balanced reference state are small. The 'pseudo-incompressible' system is shown to conserve total kinetic and anelastic dry static energy without requiring modification to any governing equation except the mass continuity equation. In contrast, other energy conservative anelastic systems also require additional approximations in other governing equations. The pseudo-incompressible system includes the effects of temperature changes on the density in the mass conservation equation, whereas this effect is neglected in other anelastic systems. A generalization of the pseudo-incompressible equation is presented and compared with the diagnostic continuity equation for quasi-hydrostatic flow in a transformed coordinate system in which the vertical coordinate is solely a function of pressure. To cite this article: D.R. Durran, A. Arakawa, C. R. Mecanique 335 (2007).
Economic Evaluation in Stratified Medicine: Methodological Issues and Challenges.
Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten; Redekop, Ken
2016-01-01
Stratified Medicine (SM) is becoming a practical reality with the targeting of medicines by using a biomarker or genetic-based diagnostic to identify the eligible patient sub-population. Like any healthcare intervention, SM interventions have costs and consequences that must be considered by reimbursement authorities with limited resources. Methodological standards and guidelines exist for economic evaluations in clinical pharmacology and are an important component for health technology assessments (HTAs) in many countries. However, these guidelines were initially developed for traditional pharmaceuticals and not for complex interventions with multiple components. This raises the issue as to whether these guidelines are adequate for SM interventions or whether new specific guidance and methodology is needed to avoid inconsistencies and contradictory findings when assessing economic value in SM. This article describes specific methodological challenges when conducting health economic (HE) evaluations for SM interventions and outlines potential modifications necessary to existing evaluation guidelines/principles that would promote consistent economic evaluations for SM. Specific methodological aspects for SM comprise considerations on the choice of comparator, measuring effectiveness and outcomes, appropriate modeling structure and the scope of sensitivity analyses. Although current HE methodology can be applied for SM, greater complexity requires further methodology development and modifications in the guidelines.
Stratified patterns of divorce: Earnings, education, and gender
Directory of Open Access Journals (Sweden)
Amit Kaplan
2015-05-01
Background: Despite evidence that divorce has become more prevalent among weaker socioeconomic groups, knowledge about the stratification aspects of divorce in Israel is lacking. Moreover, although scholarly debate recognizes the importance of stratificational positions with respect to divorce, less attention has been given to the interactions between them. Objective: Our aim is to examine the relationship between social inequality and divorce, focusing on how household income, education, employment stability, relative earnings, and the intersection between them affect the risk of divorce in Israel. Methods: The data are derived from combined census files for 1995-2008, annual administrative employment records from the National Insurance Institute and the Tax Authority, and data from the Civil Registry of Divorce. We used a series of discrete-time event-history analysis models for marital dissolution. Results: Couples in lower socioeconomic positions had a higher risk of divorce in Israel. Higher education in general, and homogamy in terms of higher education (both spouses have degrees) in particular, decreased the risk of divorce. The wife's relative earnings had a differential effect on the likelihood of divorce, depending on household income: a wife who outearned her husband increased the log odds of divorce more in the upper tertiles than in the lower tertile. Conclusions: Our study shows that divorce indeed has a stratified pattern and that weaker socioeconomic groups experience the highest levels of divorce. Gender inequality within couples intersects with the household's economic and educational resources.
Self-Knowledge and Risk in Stratified Medicine.
Hordern, Joshua
2017-04-01
This article considers why and how self-knowledge is important to communication about risk and behaviour change by arguing for four claims. First, it is doubtful that genetic knowledge should properly be called 'self-knowledge' when its ordinary effects on self-motivation and behaviour change seem so slight. Second, temptations towards a reductionist, fatalist, construal of persons' futures through a 'molecular optic' should be resisted. Third, any plausible effort to change people's behaviour must engage with cultural self-knowledge, values and beliefs, catalysed by the communication of genetic risk. For example, while a Judaeo-Christian notion of self-knowledge is distinctively theological, people's self-knowledge is plural in its insight and sources. Fourth, self-knowledge is found in compassionate, if tense, communion which yields freedom from determinism even amidst suffering. Stratified medicine thus offers a newly precise kind of humanising health care through societal solidarity with the riskiest. However, stratification may also mean that molecularly unstratified, 'B' patients' experience involves accentuated suffering and disappointment, a concern requiring further research.
Theory of sampling and its application in tissue based diagnosis.
Kayser, Klaus; Schultz, Holger; Goldmann, Torsten; Görtler, Jürgen; Kayser, Gian; Vollmer, Ekkehard
2009-02-16
A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on search for localization of specific compartments within the basic space, and search for presence of specific compartments. When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: I) the evaluation of a diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as size of searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Krige's formula, which is widely used in search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to the detected objects (for example grey value
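Random sampling as described here, estimating an area fraction without any knowledge of the objects, amounts to classical point counting. A minimal sketch on a synthetic binary grid (the data and function names are ours, for illustration only):

```python
import random

def area_fraction_by_point_count(mask, n_points, seed=1):
    """Estimate the area fraction of 'object' pixels in a binary grid by
    random point sampling: the hit rate of uniformly placed test points is an
    unbiased estimator of the fraction, with no knowledge of object locations."""
    rng = random.Random(seed)
    rows, cols = len(mask), len(mask[0])
    hits = sum(mask[rng.randrange(rows)][rng.randrange(cols)] for _ in range(n_points))
    return hits / n_points

# 100 x 100 grid whose left quarter is 'object': true area fraction 0.25
mask = [[1 if c < 25 else 0 for c in range(100)] for _ in range(100)]
est = area_fraction_by_point_count(mask, 4000)
```

With 4000 test points the standard error of the estimate is roughly sqrt(0.25 * 0.75 / 4000) ≈ 0.007, so the estimate lands close to 0.25.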
Theory of sampling and its application in tissue based diagnosis
Directory of Open Access Journals (Sweden)
Kayser Gian
2009-02-01
Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on search for localization of specific compartments within the basic space, and search for presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: I) the evaluation of a diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as size of searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Krige's formula, which is widely used in search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as mere mathematical devices for analyzing repeated-measures data. In contrast, a modern view attributes to these models an important mathematical role in theoretical formulations of personalized medicine, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
Logistic Bayesian LASSO for genetic association analysis of data from complex sampling designs.
Zhang, Yuan; Hofmann, Jonathan N; Purdue, Mark P; Lin, Shili; Biswas, Swati
2017-09-01
Detecting gene-environment interactions with rare variants is critical in dissecting the etiology of common diseases. Interactions with rare haplotype variants (rHTVs) are of particular interest. At the same time, complex sampling designs, such as stratified random sampling, are becoming increasingly popular for designing case-control studies, especially for recruiting controls. The US Kidney Cancer Study (KCS) is an example, wherein all available cases were included while the controls at each site were randomly selected from the population by frequency matching with cases based on age, sex and race. There is currently no rHTV association method that can account for such a complex sampling design. To fill this gap, we consider logistic Bayesian LASSO (LBL), an existing rHTV approach for case-control data, and show that its model can easily accommodate the complex sampling design. We study two extensions that include stratifying variables either as main effects only or with additional modeling of their interactions with haplotypes. We conduct extensive simulation studies to compare the complex sampling methods with the original LBL methods. We find that, when there is no interaction between haplotype and stratifying variables, both extensions perform well while the original LBL methods lead to inflated type I error rates. However, when such an interaction exists, it is necessary to include the interaction effect in the model to control the type I error rate. Finally, we analyze the KCS data and find a significant interaction between (current) smoking and a specific rHTV in the N-acetyltransferase 2 gene.
Second law characterization of stratified thermal storage tanks
Energy Technology Data Exchange (ETDEWEB)
Fraidenraich, N [Departamento de Energia Nuclear-UFPE (Brazil)
2000-07-01
It is well known that fluid stratification in thermal storage tanks improves the overall performance of solar thermal systems, compared with systems operating at a uniform fluid temperature. From the point of view of the first law of thermodynamics, no difference exists between storage tanks with the same mass and average temperature, even if they have different stratified thermal structures. Nevertheless, the useful thermal energy that can be obtained from them might differ significantly. In this work, we derive an expression able to characterize the stratified configuration of the thermal fluid. Using results from the thermodynamics of irreversible processes, the procedure adopted consists in calculating the maximum work that the tank's thermal layers are able to develop. We arrive, then, at a dimensionless expression, the stratification parameter (SP), which depends on the mass fraction and absolute temperature of each thermal layer as well as the thermal fluid average temperature. Numerical examples for different types of tank stratification are given, and it is verified that the expression obtained is sensitive to small differences in the reservoir thermal configuration. For example, a thermal storage tank with temperatures equal to 74 °C, 64 °C and 54 °C, with its mass equally distributed along the tank, yields for the parameter SP a figure equal to 0.000294. On the other hand, a storage tank with the same average temperature but with different layer temperatures of 76 °C, 64 °C and 52 °C, also with uniform mass distribution, yields a different value of SP. The parameter thus provides a quantitative evaluation of the stratification structure of thermal reservoirs.
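The abstract quotes SP values but leaves the formula implicit. The sketch below rests on an assumed reading of the definition (each layer's dimensionless available work, with the fully mixed tank at the mass-averaged absolute temperature taken as the dead state); it is an illustration, not the authors' code, though under that reading it reproduces the quoted 74/64/54 °C figure of roughly 0.000294.

```python
import math

def stratification_parameter(mass_fractions, temps_c):
    """Assumed SP: dimensionless available work of a stratified tank relative
    to a fully mixed tank at the same mass-averaged absolute temperature."""
    temps_k = [t + 273.15 for t in temps_c]
    t_avg = sum(f * t for f, t in zip(mass_fractions, temps_k))  # dead state, K
    # Per-layer exergy (per unit heat capacity), summed with mass-fraction weights
    return sum(f * ((t - t_avg) / t_avg - math.log(t / t_avg))
               for f, t in zip(mass_fractions, temps_k))

sp1 = stratification_parameter([1/3, 1/3, 1/3], [74, 64, 54])
sp2 = stratification_parameter([1/3, 1/3, 1/3], [76, 64, 52])
print(round(sp1, 6), round(sp2, 6))  # wider temperature spread gives larger SP
```

Both tanks have the same mass and mean temperature, so a first-law accounting cannot distinguish them; the SP above does.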
STUDIES OF TWO-PHASE PLUMES IN STRATIFIED ENVIRONMENTS
Energy Technology Data Exchange (ETDEWEB)
Scott A. Socolofsky; Brian C. Crounse; E. Eric Adams
1998-11-18
Two-phase plumes play an important role in the more practical scenarios for ocean sequestration of CO{sub 2}--i.e. dispersing CO{sub 2} as a buoyant liquid from either a bottom-mounted or ship-towed pipeline. Despite much research on related applications, such as for reservoir destratification using bubble plumes, our understanding of these flows is incomplete, especially concerning the phenomenon of plume peeling in a stratified ambient. To address this deficiency, we have built a laboratory facility in which we can make fundamental measurements of plume behavior. Although we are using air, oil and sediments as our sources of buoyancy (rather than CO{sub 2}), by using models, our results can be directly applied to field scale CO{sub 2} releases to help us design better CO{sub 2} injection systems, as well as plan and interpret the results of our up-coming international field experiment. The experimental facility designed to study two-phase plume behavior similar to that of an ocean CO{sub 2} release includes the following components: 1.22 x 1.22 x 2.44 m tall glass walled tank; Tanks and piping for the two-tank stratification method for producing step- and linearly-stratified ambient conditions; Density profiling system using a conductivity and temperature probe mounted to an automated depth profiler; Lighting systems, including a virtual point source light for shadowgraphs and a 6 W argon-ion laser for laser induced fluorescence (LIF) imaging; Imaging system, including a digital, progressive scanning CCD camera, computerized framegrabber, and image acquisition and analysis software; Buoyancy source diffusers having four different air diffusers, two oil diffusers, and a planned sediment diffuser; Dye injection method using a Mariotte bottle and a collar diffuser; and Systems integration software using the Labview graphical programming language and Windows NT. In comparison with previously reported experiments, this system allows us to extend the parameter range of
Numerical simulations of rotating bubble plumes in stratified environments
Fabregat Tomàs, Alexandre; Poje, Andrew C.; Özgökmen, Tamay M.; Dewar, William K.
2017-08-01
The effects of system rotation on the turbulent dynamics of bubble plumes evolving in stratified environments are numerically investigated by considering variations in both the system rotation rate and the gas-phase slip velocity. The turbulent dispersion of a passive scalar injected at the source of a buoyant plume is strongly altered by the rotation of the system and the nature of the buoyancy at the source. When the plume is driven by the density defect associated with the presence of slipping gas bubbles, the location of the main lateral intrusion decreases with respect to the single-phase case with identical inlet volume, momentum, and buoyancy fluxes. Enhanced downdrafts of carrier phase fluid result in increased turbulent mixing and short-circuiting of detraining plume water that elevate near-field effluent concentrations. Similarly, rotation fundamentally alters dynamic balances within the plume leading to the encroachment of the trapping height on the source and an increase in turbulent dispersion in the near field. System rotation, even at modest Rossby numbers, produces a sustained, robust, anticyclonic precession of the plume core. The effects of rotation and the presence of bubbles are cumulative. The vertical encroachment of the primary intrusion and the overall dispersion of effluent are greatest at smallest Rossby numbers and largest slip velocities. The main characteristic feature in rotating single-phase plumes, namely the robust anticyclonic precession, persists in bubble plumes. Analysis of the momentum budgets reveals that the mechanism responsible for the organized precession, i.e., the establishment of an unstable vertical hydrostatic equilibrium related to radial cyclostrophic balance, does not differ from the single-phase case.
Anisotropic spectral modeling for unstably stratified homogeneous turbulence
Briard, Antoine; Iyer, Manasa; Gomez, Thomas
2017-04-01
In this work, a spectral model is derived to investigate numerically unstably stratified homogeneous turbulence (USHT) at large Reynolds numbers. The modeling relies on an earlier work for passive scalar dynamics [Briard et al., J. Fluid Mech. 799, 159 (2016), 10.1017/jfm.2016.362] and can handle both shear and mean scalar gradients. The extension of this model to the case of active scalar dynamics is the main theoretical contribution of this paper. This spectral modeling is then applied at large Reynolds numbers to analyze the scaling of the kinetic energy, scalar variance, and scalar flux spectra and to study as well the temporal evolution of the mixing parameter, the Froude number, and some anisotropy indicators in USHT. A theoretical prediction for the exponential growth rate of the kinetic energy, associated with our model equations, is derived and assessed numerically. Throughout the validation part, results are compared with an analogous approach, restricted to axisymmetric turbulence, which is more accurate in terms of anisotropy description, but also much more costly in terms of computational resources [Burlot et al., J. Fluid Mech. 765, 17 (2015), 10.1017/jfm.2014.726]. It is notably shown that our model can qualitatively recover all the features of the USHT dynamics, with good quantitative agreement on some specific aspects. In addition, some remarks are proposed to point out the similarities and differences between the physics of USHT, shear flows, and passive scalar dynamics with a mean gradient, the two latter configurations having been addressed previously with the same closure. Moreover, it is shown that the anisotropic part of the pressure spectrum in USHT scales as k^(-11/3) in the inertial range, similarly to the one in shear flows. Finally, at large Schmidt numbers, a different spectral range is found for the scalar flux: It first scales as k^(-3) around the Kolmogorov scale and then as k^(-1) in the viscous-convective range.
High tumor budding stratifies breast cancer with metastatic properties.
Salhia, Bodour; Trippel, Mafalda; Pfaltz, Katrin; Cihoric, Nikola; Grogg, André; Lädrach, Claudia; Zlobec, Inti; Tapia, Coya
2015-04-01
Tumor budding refers to a single tumor cell or a small cluster of tumor cells detached from the main tumor mass. In colon cancer, high tumor budding is associated with positive lymph nodes and worse prognosis. Therefore, we investigated the value of tumor budding as a predictive feature of lymph node status in breast cancer (BC). Whole tissue sections from 148 surgical resection specimens (SRS) and 99 matched preoperative core biopsies (CB) with invasive BC of no special type were analyzed on one slide stained with pan-cytokeratin. In SRS, the total numbers of intratumoral (ITB) and peripheral tumor buds (PTB) in ten high-power fields (HPF) were counted. A bud was defined as a single tumor cell or a cluster of up to five tumor cells. High tumor budding equated to scores averaging >4 tumor buds across 10 HPFs. In CB, high tumor budding was defined as ≥10 buds/HPF. The results were correlated with pathological parameters. In SRS, high PTB stratified BC with lymph node metastases (p ≤ 0.03) and lymphatic invasion (p ≤ 0.015). In CB, high tumor budding was significantly (p = 0.0063) associated with venous invasion. Pathologists are able, based on morphology, to categorize BC into high- and low-risk groups based in part on lymph node status. This risk assessment can be easily performed during routine diagnostics and is time- and cost-effective. These results suggest that high PTB is associated with loco-regional metastasis, highlighting the possibility that this tumor feature may help in therapeutic decision-making.
Turbulent entrainment in a strongly stratified barrier layer
Pham, H. T.; Sarkar, S.
2017-06-01
Large-eddy simulation (LES) is used to investigate how turbulence in the wind-driven ocean mixed layer erodes the stratification of barrier layers. The model consists of a stratified Ekman layer that is driven by a surface wind. Simulations at a wide range of N0/f are performed to quantify the effect of turbulence and stratification on the entrainment rate. Here, N0 is the buoyancy frequency in the barrier layer and f is the Coriolis parameter. The evolution of the mixed layer follows two stages: a rapid initial deepening and a late-time growth at a considerably slower rate. During the first stage, the mixed layer thickens to a depth that is proportional to u*/fN0, where u* is the frictional velocity. During the second stage, the turbulence in the mixed layer continues to deepen further into the barrier layer, and the turbulent length scale is shown to scale with u*/N0, independent of f. The late-time entrainment rate E follows the law E = 0.035 Ri*^(-1/2), where Ri* is the Richardson number. The exponent of -1/2 is identical but the coefficient of 0.035 is much smaller relative to the value of 2^(-3/2) for the nonrotating boundary layer. Simulations using the KPP model (version applicable to this simple case without additional effects of Langmuir turbulence or surface buoyancy flux) also yield the entrainment scaling E ∝ Ri*^(-1/2); however, the proportionality coefficient varies with the stratification. The structure of the Ekman current is examined to illustrate the strong effect of stratification in the limit of large N0/f.
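The two entrainment laws quoted in this abstract differ only in their coefficients, which is easy to check numerically. The sketch below is an illustration, not the authors' code; the Richardson number chosen is arbitrary, since the coefficient ratio is independent of Ri*.

```python
def entrainment_rotating(ri_star):
    """Late-time entrainment law reported for the rotating stratified Ekman layer."""
    return 0.035 * ri_star ** -0.5

def entrainment_nonrotating(ri_star):
    """Classical nonrotating boundary-layer law with coefficient 2**(-3/2)."""
    return 2 ** -1.5 * ri_star ** -0.5

ri = 100.0  # illustrative Richardson number
ratio = entrainment_rotating(ri) / entrainment_nonrotating(ri)
print(entrainment_rotating(ri), ratio)  # rotation cuts entrainment ~tenfold
```

The ratio 0.035 / 2^(-3/2) ≈ 0.099 holds at any Ri*, i.e. rotation reduces entrainment by roughly an order of magnitude at fixed stratification.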
[AmostraBrasil: an R package for household sampling in Brazilian municipalities].
Cordeiro, Ricardo; Stephan, Celso; Donalísio, Maria Rita
2016-11-01
Given the relevance of epidemiological surveys and the difficulties in establishing an adequate sampling plan to conduct them, this article presents the AmostraBrasil package for the open-source R software, which automates the drawing of random samples - simple, systematic, and stratified - from households in any Brazilian municipality (county). The package also allows automatically obtaining the sampled households' geographic coordinates, as well as shapefiles of the municipality's perimeter and the sample's spatial distribution. The article describes the steps for installing and using the package in the Windows OS. Examples are provided of the package's applications: sampling and spatial distribution of 2,500 residential households in the city of Rio de Janeiro and generation of controls in estimating risk spatial distribution.
Díaz-Astudillo, Macarena; Cáceres, Mario A.; Landaeta, Mauricio F.
2017-09-01
The patterns of abundance, composition, biomass and short-time-scale vertical migration of zooplankton were studied at a stratified and a mixed station in the inner waters of southern Chile. An ADCP device mounted on the hull of a ship was used to obtain vertical profiles of current velocity data and of the intensity of the backscattered acoustic signal, which was used to study the migratory strategies and to relate the echo intensity to zooplankton biomass. Repeated vertical profiles of temperature, salinity and density were obtained with a CTD instrument to describe the density patterns during both experiments. Zooplankton were sampled every 3 h using a Bongo net to determine abundance, composition and biomass. Migrations were diel in the stratified station, semi-diel in the mixed station, and controlled by light in both locations, with large and significant differences in zooplankton abundance and biomass between day and night samples. No migration pattern associated with the effect of tides was found. The depth of maximum backscatter strength showed differences of approximately 30 m between stations and was deeper in the mixed station. The relation between mean volume backscattering strength (dB) computed from echo intensity and log10 of total dry weight (mg m-3) of zooplankton biomass was moderate but significant in both locations. Biomass estimated from biological samples was higher in the mixed station and determined by euphausiids. Copepods were the most abundant group in both stations. Acoustic methods were a useful technique for understanding the detailed patterns of migratory strategies of zooplankton and for helping estimate zooplankton biomass and abundance in the inner waters of southern Chile.
Pimentel, Laura; Anderson, David; Golden, Bruce; Wasil, Edward; Barrueto, Fermin; Hirshon, Jon M
2017-04-01
On January 1, 2014, the financing and delivery of healthcare in the state of Maryland (MD) profoundly changed. The insurance provisions of the Patient Protection and Affordable Care Act (ACA) began implementation, and a major revision of MD's Medicare waiver ushered in a Global Budget Revenue (GBR) structure for hospital reimbursement. Our objective was to analyze the impact of these policy changes on emergency department (ED) utilization, hospitalization practices, insurance profiles, and professional revenue. We stratified our analysis by the socioeconomic status (SES) of the ED patient population. We collected monthly mean data including patient volume, hospitalization percentages, payer mix, and professional revenue from January 2013 through December 2015 from a convenience sample of 11 EDs in Maryland. Using regression models, we compared each of the variables 18 months after the policy changes and a six-month washout period to the year prior to ACA/GBR implementation. We included the median income of each ED's patient population as an explanatory variable and stratified our results by SES. Our 11 EDs saw an annualized volume of 399,310 patient visits during the study period. This ranged from a mean of 41 daily visits in the lowest-volume rural ED to 171 in the highest-volume suburban ED. After ACA/GBR, ED volumes were unchanged (95% confidence interval [CI] [-1.58 to 1.24], p = .817). Hospitalization percentages decreased significantly by 1.9%, from 17.2% to 15.3% (95% CI [-2.47% to -1.38%]). Health policy changes at the federal and state levels have resulted in significant changes to emergency medicine practice and finances in MD. Admission and observation percentages have been reduced, fewer patients are uninsured, and professional revenue has increased. All changes are significantly more pronounced in EDs with patients of lower SES.
Directory of Open Access Journals (Sweden)
Jerzy Czerski
2014-01-01
The germination of whole seeds, seeds without coat, and isolated embryos of apple cv. "Antonówka Zwykła" after 90 days of cold stratification was compared with the germination of embryos isolated from non-stratified seeds. They were germinated under a 16-hr photoperiod at 25°C during the day and 20°C at night. It was found that after 2 weeks whole stratified seeds germinated at 5 per cent, seeds without coat at 25 per cent, and isolated embryos at 98 per cent. Isolated embryos from non-stratified seeds germinated after 2 weeks in the range of 75 to 88 per cent. The results indicate a similar germination ability of embryos isolated from stratified and non-stratified seeds. The seedling populations obtained from embryos of stratified and non-stratified seeds were fully comparable, and both showed: (1) a wide range of individual differences within the population, (2) a similar number of seedlings in each class of shoot length, (3) a similar morphological habitus in each class of shoot length, and (4) a similar fresh leaf weight and whole-plant increment.
Andréewitch Sergej; Lindfors Perjohan; Hedman Erik; Andersson Erik; Andersson Gerhard; Ljótsson Brjánn; Rück Christian; Lindefors Nils
2011-01-01
Background: Internet-based cognitive behavior therapy (ICBT) has shown promising effects in the treatment of irritable bowel syndrome (IBS). However, to date no study has used a design where participants have been sampled solely from a clinical population. We aimed to investigate the acceptability, effectiveness, and cost-effectiveness of ICBT for IBS using a consecutively recruited sample from a gastroenterological clinic. Methods: Sixty-one pat...
Directory of Open Access Journals (Sweden)
Shanyou Zhu
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
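As a hedged illustration of why hotspot-guided stratification improves precision, the following sketch builds a synthetic population of blocks, stratifies them by an auxiliary score standing in for MODIS-derived deforestation, and expands per-stratum sample means to an estimated total. All names and numbers are invented; this is a textbook construction, not the study's method.

```python
import random

random.seed(7)

# Synthetic blocks: (auxiliary hotspot score, deforested area in km^2).
# Higher-score blocks deforest more, mimicking MODIS hotspots informing strata.
blocks = [(s, 5.0 * s + random.uniform(-1, 1)) for s in
          [random.choice([0, 1, 2]) for _ in range(300)]]

strata = {s: [area for score, area in blocks if score == s] for s in (0, 1, 2)}

def stratified_total(strata, n_per_stratum=10):
    """Expand each stratum's sample mean by the stratum size, then sum."""
    total = 0.0
    for units in strata.values():
        sample = random.sample(units, n_per_stratum)
        total += len(units) * sum(sample) / len(sample)
    return total

true_total = sum(area for _, area in blocks)
est = stratified_total(strata)
print(true_total, est)
```

Because the auxiliary score explains most of the between-block variation, each stratum is internally homogeneous and a small sample per stratum yields a tight estimate of the total.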
METHODOLOGICAL ASPECTS OF STRATIFICATION OF AUDIT SAMPLING
Vilena A. Yakimova
2013-01-01
The article presents the methodological foundations for constructing a stratified audit sample for attribute-based sampling. Sampling techniques from Russian and foreign practice are studied and stratified. The role of stratification in the audit is described. Approaches to constructing the stratification are revealed on the basis of professional judgment (qualitative methods), statistical groupings (quantitative methods) and combinatory ones (complex qualitative stratifications). Gro...
Loban, Amanda; Mandefield, Laura; Hind, Daniel; Bradburn, Mike
2017-12-01
The objective of this study was to compare the response rates, data completeness, and representativeness of survey data produced by online and postal surveys. This was a randomized trial nested within a cohort study in Yorkshire, United Kingdom. Participants were randomized to receive either an electronic (online) survey questionnaire with paper reminder (N = 2,982) or paper questionnaire with electronic reminder (N = 2,855). Response rates were similar for electronic and postal contact (50.9% vs. 49.7%, difference = 1.2%, 95% confidence interval: -1.3% to 3.8%). The characteristics of those responding in the two groups were similar. Participants nevertheless demonstrated an overwhelming preference for postal questionnaires, with the majority in both groups responding by post. Online survey questionnaire systems need to be supplemented with a postal reminder to achieve acceptable uptake, but doing so provides a similar response rate and case mix when compared to postal questionnaires alone. For large surveys, online survey systems may be cost saving. Copyright © 2017 Elsevier Inc. All rights reserved.
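The reported response-rate comparison can be checked with a standard Wald interval for a difference of two proportions. The respondent counts below are back-calculated from the quoted percentages (an assumption), so the interval matches the published one only approximately.

```python
import math

def diff_proportion_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for p1 - p2 from two independent samples."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Approximate respondent counts: 50.9% of 2,982 online, 49.7% of 2,855 postal
d, lo, hi = diff_proportion_ci(1518, 2982, 1419, 2855)
print(f"diff={d:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

The interval straddles zero, consistent with the study's conclusion that response rates did not differ meaningfully between modes.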
Carlisle, J B
2017-08-01
10^(-10). The difference between the distributions of these two subgroups was confirmed by comparison of their overall distributions, p = 5.3 × 10^(-15). Each journal exhibited the same abnormal distribution of baseline means. There was no difference in distributions of baseline means for 1453 trials in non-anaesthetic journals and 3634 trials in anaesthetic journals, p = 0.30. The rate of retractions from JAMA and NEJM, 6/1453 or 1 in 242, was one-quarter the rate from the six anaesthetic journals, 66/3634 or 1 in 55, relative risk (99%CI) 0.23 (0.08-0.68), p = 0.00022. A probability threshold of 1 in 10,000 identified 8/72 (11%) retracted trials (7 by Fujii et al.) and 82/5015 (1.6%) unretracted trials. Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10^(15) (equivalent to one drop of water in 20,000 Olympic-sized swimming pools). A probability threshold of 1 in 100 for two or more trials by the same author identified three authors of retracted trials (Boldt, Fujii and Reuben) and 21 first or corresponding authors of 65 unretracted trials. Fraud, unintentional error, correlation, stratified allocation and poor methodology might have contributed to the excess of randomised, controlled trials with similar or dissimilar means, a pattern that was common to all the surveyed journals. It is likely that this work will lead to the identification, correction and retraction of hitherto unretracted randomised, controlled trials. © 2017 The Association of Anaesthetists of Great Britain and Ireland.
Energy Technology Data Exchange (ETDEWEB)
Doi, Yoshihiro; Muramatsu, Toshiharu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1997-07-01
Thermal stratification phenomena are observed in the upper plenum of liquid metal fast breeder reactors (LMFBRs) under reactor scram conditions, and they give rise to thermal stress on in-vessel structural components. It is therefore important to evaluate the characteristics of these phenomena in the design of components in an LMFBR plenum. The phenomena constitute a stable stratified flow, and shear layers are formed due to the velocity difference between the upper and lower flows. In this study, to evaluate numerical models and the constants used in them for thermal stratification phenomena, numerical analyses were carried out for a shear-flow water test in a rectangular duct, using a direct numerical simulation code. The numerical results captured the large spanwise coherent structures associated with the growth and pairing of vortices. Analyses for different Richardson (Ri) number conditions were carried out, and the calculated distributions of time-averaged velocities, velocity fluctuations, Reynolds stresses, time-averaged temperatures and temperature fluctuations were compared with the measured results. The calculated time-averaged velocities and velocity fluctuations in the main flow direction agreed well with the measured values, and the velocity fluctuations decreased with increasing Ri number. Although the calculated time-averaged temperature distributions and temperature fluctuations showed three different temperature-gradient regions, these were not found in the measured values. The range of Ri numbers over which vortex pairing is observed is in good agreement with the calculated results, and vortex pairing would cause large Reynolds stresses. (J.P.N.)
Evaluation of sampling strategies to estimate crown biomass
Directory of Open Access Journals (Sweden)
Krishna P Poudel
2015-01-01
Background Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and evaluate the effect of sample size. Methods Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced approximately similar results to simple random sampling, but RMSE decreased further when information on branch diameter was used in the design and estimation phases. Conclusions Use of
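Probability-proportional-to-size (PPS) estimation, the best performer above for small branch samples, has a compact textbook form: the Hansen-Hurwitz estimator. The sketch below uses invented branch data and a diameter-squared size measure as a stand-in; it illustrates the estimator, not the study's actual procedure.

```python
import random

random.seed(3)

# Hypothetical branches: (diameter in cm, biomass in kg); biomass grows with size
branches = [(d, 0.2 * d ** 2 + random.uniform(0.0, 0.5)) for d in
            [random.uniform(1, 8) for _ in range(30)]]

def hansen_hurwitz(branches, n=6):
    """PPS-with-replacement total-biomass estimate, size measure = diameter^2."""
    sizes = [d * d for d, _ in branches]
    total_size = sum(sizes)
    probs = [s / total_size for s in sizes]
    picks = random.choices(range(len(branches)), weights=probs, k=n)
    # Average the inverse-probability-expanded biomass of the sampled branches
    return sum(branches[i][1] / probs[i] for i in picks) / n

true_total = sum(b for _, b in branches)
est = hansen_hurwitz(branches)
print(true_total, est)
```

Because biomass is nearly proportional to the size measure, each expanded observation is close to the true total, which is why PPS needs so few branches per tree.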
Mewborn, Catherine M; Lindbergh, Cutter A; Stephen Miller, L
2017-12-01
Cognitive interventions may improve cognition, delay age-related cognitive declines, and improve quality of life for older adults. The current meta-analysis was conducted to update and expand previous work on the efficacy of cognitive interventions for older adults and to examine the impact of key demographic and methodological variables. EBSCOhost and Embase online databases and reference lists were searched to identify relevant randomized-controlled trials (RCTs) of cognitive interventions for cognitively healthy or mildly impaired (MCI) older adults (60+ years). Interventions trained a single cognitive domain (e.g., memory) or were multi-domain training, and outcomes were assessed immediately post-intervention using standard neuropsychological tests. In total, 279 effects from 97 studies were pooled based on a random-effects model and expressed as Hedges' g (unbiased). Overall, results indicated that cognitive interventions produce a small, but significant, improvement in the cognitive functioning of older adults, relative to active and passive control groups (g = 0.298, p < .001, 95% CI = 0.248-0.347). These results were confirmed using multi-level analyses adjusting for nesting of effect sizes within studies (g = 0.362, p < .001, 95% CI = 0.275, 0.449). Age, education, and cognitive status (healthy vs. MCI) were not significant moderators. Working memory interventions proved most effective (g = 0.479), though memory, processing speed, and multi-domain interventions also significantly improved cognition. Effects were larger for directly trained outcomes but were also significant for non-trained outcomes (i.e., "transfer effects"). Implications for future research and clinical practice are discussed. This project was pre-registered with PROSPERO (#42016038386).
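Random-effects pooling of the kind described above is commonly computed with the DerSimonian-Laird estimator, which estimates between-study variance from Cochran's Q. The sketch below applies it to invented Hedges' g values and variances, not the meta-analysis data.

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]   # re-weight with tau^2 added
    return sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star), tau2

# Hypothetical Hedges' g values and sampling variances from five trials
g, tau2 = dersimonian_laird([0.10, 0.25, 0.35, 0.45, 0.60],
                            [0.02, 0.03, 0.01, 0.04, 0.05])
print(round(g, 3), round(tau2, 4))
```

Adding tau-squared to each study's variance shrinks the weight differences between small and large trials, which is why random-effects pooling is the conservative default when true effects vary across studies.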
Turbulence Statistics of a Buoyant Jet in a Stratified Environment
McCleney, Amy Brooke
Using non-intrusive optical diagnostics, turbulence statistics for a round, incompressible, buoyant, vertical jet discharging freely into a stably, linearly stratified environment are studied and compared to a reference case of a neutrally buoyant jet in a uniform environment. This is part of a validation campaign for computational fluid dynamics (CFD). Buoyancy forces are known to significantly affect the jet evolution in a stratified environment. Despite their ubiquity in numerous natural and man-made flows, available data on these jets are limited, which constrains our understanding of the underlying physical processes. In particular, there is a dearth of velocity field data, which makes it challenging to validate the numerical codes currently used for modeling these important flows. Herein, jet near- and far-field behaviors are obtained with a combination of planar laser-induced fluorescence (PLIF) and multi-scale time-resolved particle image velocimetry (TR-PIV) for Reynolds numbers up to 20,000. Deploying non-intrusive optical diagnostics in a variable-density environment is challenging in liquids. The refractive index is strongly affected by the density, which introduces optical aberrations and occlusions that prevent the resolution of the flow. One solution consists of using index-matched fluids with different densities. Here a pair of water solutions - isopropanol and NaCl - is identified that satisfies these requirements. In fact, they provide a density difference of up to 5%, which is the largest reported for such fluid pairs. Additionally, by design, the kinematic viscosities of the solutions are identical. This greatly simplifies the analysis and subsequent simulations of the data. The spectral and temperature dependence of the solutions is fully characterized. In the near-field, shear-layer roll-up is analyzed and characterized as a function of the initial velocity profile. In the far-field, turbulence statistics are reported for two different scales, one
Implementing content constraints in alpha-stratified adaptive testing using a shadow test approach
van der Linden, Willem J.; Chang, Hua-Hua
2003-01-01
The methods of alpha-stratified adaptive testing and constrained adaptive testing with shadow tests are combined. The advantages are twofold: First, application of the shadow test approach allows the implementation of any type of constraint on item selection in alpha-stratified adaptive testing.
Implementing content constraints in alpha-stratified adaptive testing using a shadow test approach
van der Linden, Willem J.; Chang, Hua-Hua
2001-01-01
The methods of alpha-stratified adaptive testing and constrained adaptive testing with shadow tests are combined in this study. The advantages are twofold. First, application of the shadow test allows the researcher to implement any type of constraint on item selection in alpha-stratified adaptive
Thermal stratification built up in hot water tank with different inlet stratifiers
DEFF Research Database (Denmark)
Dragsted, Janne; Furbo, Simon; Dannemand, Mark
2017-01-01
H is a rigid plastic pipe with holes every 30 cm. The holes are designed with flaps preventing counter flow into the pipe. The inlet stratifier from EyeCular Technologies ApS is made of a flexible polymer with openings all along the side and in the full length of the stratifier. The flexibility...
Marks, Gary N.
2010-01-01
One of the more persuasive arguments for school sector differences in students' educational performance is the role of the senior school curriculum, which is stratified, socially selective, and has an important bearing on educational outcomes. In addition, the stratified curriculum may also contribute to socioeconomic inequalities in education by…
Implications of sampling design and sample size for national carbon accounting systems.
Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel
2011-11-08
Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, this information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, and two-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. The percent standard error over total survey cost was calculated under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
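The efficiency comparison above rests on the standard variance formulas for the competing designs. A minimal sketch for two of the four alternatives, simple random sampling versus stratified sampling with proportional allocation, on hypothetical data (not the study's survey costs):

```python
import math
from statistics import pvariance

def se_srs_mean(values, n):
    """Standard error of the mean under simple random sampling,
    with finite population correction (1 - n/N)."""
    N = len(values)
    s2 = pvariance(values) * N / (N - 1)   # sample-variance convention S^2
    return math.sqrt((1 - n / N) * s2 / n)

def se_stratified_mean(strata, n_total):
    """SE of the mean under stratified sampling with proportional allocation:
    strata is a list of lists holding the stratum populations."""
    N = sum(len(s) for s in strata)
    var = 0.0
    for s in strata:
        Nh = len(s)
        nh = n_total * Nh / N                       # proportional allocation
        S2h = pvariance(s) * Nh / (Nh - 1)
        var += (Nh / N) ** 2 * (1 - nh / Nh) * S2h / nh
    return math.sqrt(var)
```

When strata are internally homogeneous, the stratified SE collapses toward zero while the SRS SE stays positive, which is exactly the gain stratification buys.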
The Diagnostic Yield of Colonoscopy Stratified by Indications
DEFF Research Database (Denmark)
Al-Najami, I; Rancinger, C P; Larsen, Morten Kobaek
2017-01-01
... and allocation of the highly skilled endoscopists. METHODS: Nine hundred and ninety-nine randomly collected patients from a prospectively maintained database were grouped in defined referral indication groups. Five groups were compared in respect of the detection rate of adenomas and cancers. RESULTS: Positive findings in the indication groups were (1) symptoms, 25%; (2) positive screening, 17%; (3) previous resection of adenomas, 45%; (4) previous resection of colorectal cancer, 15%; and (5) surveillance of patients with high-risk family history of cancer, 35%. CONCLUSION: The majority of adenomas found during colonoscopy can be treated with simple...
Clarke, Diana E; Narrow, William E; Regier, Darrel A; Kuramoto, S Janet; Kupfer, David J; Kuhl, Emily A; Greiner, Lisa; Kraemer, Helena C
2013-01-01
This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials. The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating characteristic curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability (i.e., intraclass kappa for stratified samples) and other recently developed measures to determine confidence intervals around kappa estimates. Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of which over 86% were seen for two diagnostic interviews. A range of reliability coefficients were observed for the categorical diagnoses and dimensional measures. Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided “real-world” testing of DSM-5 proposed diagnoses.
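Test-retest reliability of a categorical diagnosis is commonly summarized by a chance-corrected agreement coefficient. As a simplified illustration, here is plain Cohen's kappa for two interview occasions; the field trials used an intraclass kappa adapted to stratified samples, which is not reproduced here:

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Chance-corrected agreement between two sets of categorical ratings:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(ratings1)
    po = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    # expected agreement if the two occasions were independent
    pe = sum(c1[k] * c2.get(k, 0) for k in c1) / n**2
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1, while agreement at exactly the chance level gives kappa = 0.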
Thomas, Luis P.; Marino, Beatriz M.; Szupiany, Ricardo N.; Gallo, Marcos N.
2017-09-01
The ability to predict the sediment and nutrient circulation within estuarine waters is of significant economic and ecological importance. In these complex systems, flocculation is a dynamically active process that is directly affected by the prevalent environmental conditions. Consequently, the floc properties continuously change, which greatly complicates the characterisation of the suspended particle matter (SPM). In the present study, three different techniques are combined in a stratified estuary under quiet weather conditions and with a low river discharge to search for a solution to this problem. The challenge is to obtain the concentration, size and flux of suspended elements through selected cross-sections using the method based on the simultaneous backscatter records of 1200 and 600 kHz ADCPs, isokinetic sampling data and LISST-25X measurements. The two-ADCP method is highly effective for determining the SPM size distributions in a non-intrusive way. The isokinetic sampling and the LISST-25X diffractometer offer point measurements at specific depths, which are especially useful for calibrating the ADCP backscatter intensity as a function of the SPM concentration and size, and providing complementary information on the sites where acoustic records are not available. Limitations and potentials of the techniques applied are discussed.
Directory of Open Access Journals (Sweden)
Gorini Alessandra
2008-05-01
Full Text Available Abstract Background Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes point out the necessity to find new efficient strategies to treat it. Together with cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to learn. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD. Methods/Design The trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group, and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support; and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables. Conclusion We argue that the use of VR for relaxation
Lumbar fusion outcomes stratified by specific diagnostic indication.
Glassman, Steven D; Carreon, Leah Y; Djurasovic, Mladen; Dimar, John R; Johnson, John R; Puno, Rolando M; Campbell, Mitchell J
2009-01-01
One of the primary difficulties in evaluating the effectiveness of lumbar fusion is that, with the exception of spondylolisthesis, specific diagnostic indications for surgery are poorly defined. Diagnostic specificity beyond the symptom of low back pain or the presence of lumbar degeneration needs to be delineated such that outcomes data can be effectively translated into clinical decision making or evidence-based guidelines. The purpose of this study was to report on prospectively collected clinical outcome measures, stratified by diagnosis, among a series of patients with lumbar degenerative disease whose treatment included lumbar spine fusion. Demographics, diagnostic categorization, and clinical outcome measures were prospectively collected by six spine surgeons at a single tertiary spine center, as part of the surgeons' standard clinical practice. Four hundred and twenty-eight patients were enrolled in the study and complete 1- and 2-year Health-Related Quality of Life (HRQOL) data were available in 327 patients whose treatment included decompression and posterolateral lumbar fusion. Outcome measures were the Oswestry Disability Index (ODI), the Short Form-36 (SF-36), and numeric rating scales for back pain and leg pain. Preoperative diagnosis was classified, in the primary surgical cases, as disc pathology, spondylolisthesis, instability, stenosis, or scoliosis. In revision cases, the diagnosis was classified as nonunion, adjacent level degeneration, or postdiscectomy revision. Patient-reported outcomes at 1 and 2 years post-op were assessed based on diagnostic stratification. Statistical evaluation of clinical outcome was performed for both mean net change in outcome scores and the percentage of patients reaching a minimum clinically important difference (MCID) threshold for each outcome measure. Preoperative diagnosis was spondylolisthesis (n=80), scoliosis (n=17), disc pathology (n=33), instability (n=21), stenosis (n=46), postdiscectomy revision (n=67), adjacent level degeneration (n
A spatially balanced design with probability function proportional to the within sample distance.
Benedetti, Roberto; Piersimoni, Federica
2017-09-01
The units observed in a biological, agricultural, or environmental survey are often randomly selected from a finite population whose main feature is that it is geo-referenced; its spatial distribution should therefore be used as essential information in designing the sample. In particular, our interest is focused on probability samples that are well spread over the population in every dimension, which in the recent literature are defined as spatially balanced samples. To approach the problem we used the within-sample distance as the summary index of the spatial distribution of a random selection criterion. Moreover, numerical comparisons are made between the relative efficiency, measured with respect to simple random sampling, of the suggested design and some other classical solutions such as the Generalized Random Tessellation Stratified (GRTS) design used by the US Environmental Protection Agency (EPA) and other balanced or spatially balanced selection procedures such as Spatially Correlated Poisson Sampling (SCPS), balanced sampling (CUBE), and the Local Pivotal Method (LPM). These experiments on real and simulated data show that the design based on the within-sample distance selects samples with a better spatial balance and thus gives estimates with a lower sampling error than those obtained using the other methods. The suggested method is very flexible to the introduction of stratification and coordination of samples and, although computationally intensive by nature, is shown to be a suitable solution even when dealing with high sampling rates and large population frames, where the main problem arises from the size of the distance matrix. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
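The within-sample distance index at the heart of this design can be illustrated directly. The sketch below computes the mean pairwise distance of a sample and uses a toy farthest-point greedy selection as a stand-in for the paper's actual selection algorithm (the greedy rule is an assumption for illustration, not the published design):

```python
import math
import random

def mean_within_sample_distance(points, sample_idx):
    """Mean pairwise Euclidean distance among sampled points; larger
    values indicate a sample better spread over the region."""
    pts = [points[i] for i in sample_idx]
    n = len(pts)
    total = sum(math.dist(pts[i], pts[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

def greedy_spread_sample(points, n, rng):
    """Toy maximin selection: start at a random unit, then repeatedly add
    the unit farthest from the current sample."""
    chosen = [rng.randrange(len(points))]
    while len(chosen) < n:
        best = max((i for i in range(len(points)) if i not in chosen),
                   key=lambda i: min(math.dist(points[i], points[c]) for c in chosen))
        chosen.append(best)
    return chosen
```

A clumped sample scores a small index; the greedy spread sample scores a much larger one, mimicking the spatial balance the design rewards.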
Raiff, Bethany R; Barry, Victoria B; Ridenour, Ty A; Jitnarin, Natinee
2016-06-01
Non-adherence with self-monitoring blood glucose (SMBG) among teenagers with type 1 diabetes can be a problem. The purpose of this study was to investigate the feasibility, acceptability, and preliminary efficacy of using Internet-based incentives to improve adherence with SMBG in non-adherent teenagers. Participants were randomly assigned to a contingent group (CS; N = 23), where they had to meet web camera-verified SMBG goals to earn incentives, or a non-contingent group (NS; N = 18), where they earned incentives independent of adherence. Brief motivational interviewing (MI) was given prior to the intervention. Attrition was 15% in the CS group. Participants and parents endorsed the intervention on all intervention dimensions. Daily SMBG increased after one MI session, and further increased when incentives were added, but significantly more so for older participants. SMBG declined slowly over time, but only returned to baseline levels for younger NS participants. Internet-based incentive interventions are feasible, acceptable, and show promise for improving adherence with SMBG.
Directory of Open Access Journals (Sweden)
Dagmar Sigmundová
2014-07-01
Full Text Available This study investigates whether more physically active parents bring up more physically active children and whether parents’ level of physical activity helps children achieve step count recommendations on weekdays and weekends. The participants (388 parents aged 35–45 and their 485 children aged 9–12) were randomly recruited from 21 Czech government-funded primary schools. The participants recorded pedometer step counts for seven days (≥10 h a day) during April–May and September–October of 2013. Logistic regression (Enter method) was used to examine the achievement of the international recommendations of 11,000 steps/day for girls and 13,000 steps/day for boys. The children of fathers and mothers who met the weekend recommendation of 10,000 steps were 5.48 times (95% confidence interval: 1.65; 18.19; p < 0.01) and 3.60 times (95% confidence interval: 1.21; 10.74; p < 0.05), respectively, more likely to achieve the international weekend recommendation than the children of less active parents. The children of mothers who reached the weekday pedometer-based step count recommendation were 4.94 times (95% confidence interval: 1.45; 16.82; p < 0.05) more likely to fulfil the step count recommendation on weekdays than the children of less active mothers.
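The reported effects are odds ratios with confidence intervals from logistic regression. As a rough illustration of how such estimates behave, here is the classical 2x2-table odds ratio with a log-scale normal approximation (a textbook sketch, not the study's adjusted regression model):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table:
    a, b = outcome yes/no among the exposed; c, d = yes/no among the unexposed."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR) via the Woolf formula
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval that excludes 1 corresponds to a statistically significant association, as with the 5.48 (1.65; 18.19) estimate above.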
Hidaka, Brandon H; Kerling, Elizabeth H; Thodosoff, Jocelynn M; Sullivan, Debra K; Colombo, John; Carlson, Susan E
2016-11-25
Dietary habits established in early childhood and maternal socioeconomic status (SES) are important, complex, interrelated factors that influence a child's growth and development. The aim of this study was to define the major dietary patterns in a cohort of young US children, construct a maternal SES index, and evaluate their associations. The diets of 190 children from a randomized, controlled trial of prenatal supplementation of docosahexaenoic acid (DHA) were recorded at 6-mo intervals from 2-4.5 years by 24-h dietary recall. Hierarchical cluster analysis of age-adjusted, average daily intake of 24 food and beverage groups was used to categorize diet. Unrotated factor analysis generated an SES score from maternal race, ethnicity, age, education, and neighborhood income. We identified two major dietary patterns: "Prudent" and "Western." The 85 (45%) children with a Prudent diet consumed more whole grains, fruit, yogurt and low-fat milk, green and non-starchy vegetables, and nuts and seeds. Conversely, those with a Western diet had greater intake of red meat, discretionary fat and condiments, sweet beverages, refined grains, French fries and potato chips, eggs, starchy vegetables, processed meats, chicken and seafood, and whole-fat milk. Compared to a Western diet, a Prudent diet was associated with one standard deviation higher maternal SES (95% CI: 0.80 to 1.30). We found two major dietary patterns of young US children and defined a single, continuous axis of maternal SES that differed strongly between groups. This is an important first step to investigate how child diet, SES, and prenatal DHA supplementation interact to influence health outcomes. NCT00266825. Prospectively registered on December 15, 2005.
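Hierarchical cluster analysis, as used above on the 24 intake variables, builds groups by repeatedly merging the closest clusters. A minimal agglomerative sketch using single linkage on toy points (the study's actual linkage method and distance metric are not stated here, so this is illustrative only):

```python
import math

def single_linkage_clusters(points, k):
    """Agglomerative clustering down to k clusters; single linkage merges
    the pair of clusters with the smallest minimum inter-point distance."""
    clusters = [[p] for p in points]

    def dist(c1, c2):
        return min(math.dist(a, b) for a in c1 for b in c2)

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge closest pair
        del clusters[j]
    return clusters
```

Cutting the merge tree at k = 2 on well-separated intake profiles reproduces the kind of two-pattern split ("Prudent" vs "Western") reported above.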
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinctions. In this context, Species Distribution Modelling (SDM) is one of the popular methods to project the impact of climate change on ecosystems. SDM is based on the niche of a given species, which means that presence point data are essential for finding the biological niche of that species. To run an SDM for plants, there are certain considerations regarding the characteristics of vegetation. Normally, remote sensing techniques are used to produce vegetation data over large areas. As a consequence, the exact location of a presence point carries high uncertainty, because presence data are selected from polygon and raster datasets. Thus, the sampling method for selecting vegetation presence data should be chosen carefully. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from modelling, and included BioCLIM variables and other environmental variables as input data. As a result of this study, despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: random sampling showed the lowest ROC value while site-index-based sampling showed the highest. The uncertainties arising from presence-data sampling methods and from the SDM itself can thus be quantified.
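Stratified selection of presence points can be sketched as follows: candidate cells are grouped by a stratum label (here assumed to be something like a site-index class) and a fixed number is drawn from each stratum. This is a generic illustration, not the BIOMOD2 workflow:

```python
import random

def stratified_cell_sample(cells, strata_of, n_per_stratum, rng):
    """Sample up to n_per_stratum cells from each stratum.
    cells: list of cell ids; strata_of: mapping cell id -> stratum label."""
    by_stratum = {}
    for c in cells:
        by_stratum.setdefault(strata_of[c], []).append(c)
    sample = []
    for label, members in sorted(by_stratum.items()):
        # without-replacement draw within the stratum
        sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
    return sample
```

Compared with plain random sampling over all cells, this guarantees every stratum contributes presence points, which is one way such a design can raise model ROC scores.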
Using the Internet to Support Exercise and Diet: A Stratified Norwegian Survey.
Wangberg, Silje C; Sørensen, Tove; Andreassen, Hege K
2015-08-26
Internet is used for a variety of health related purposes. Use differs and has differential effects on health according to socioeconomic status. We investigated to what extent the Norwegian population use the Internet to support exercise and diet, what kind of services they use, and whether there are social disparities in use. We expected to find differences according to educational attainment. In November 2013 we surveyed a stratified sample of 2196 persons drawn from a Web panel of about 50,000 Norwegians over 15 years of age. The questionnaire included questions about using the Internet, including social network sites (SNS), or mobile apps in relation to exercise or diet, as well as background information about education, body image, and health. The survey email was opened by 1187 respondents (54%). Of these, 89 did not click on the survey hyperlink (declined to participate), while another 70 did not complete the survey. The final sample size is thus 1028 (87% response rate). Compared to the Norwegian census the sample had a slight under-representation of respondents under the age of 30 and with low education. The data was weighted accordingly before analyses. Sixty-nine percent of women and 53% of men had read about exercise or diet on the Internet (χ²=25.6). Reading was more common among those with higher education (71%, χ²=19.1) than among those with lower education (13%). Gender and education are related to how the Internet is used to support health behaviors. We should be aware of the potential role of the Internet in accelerating social disparities in health, and continue to monitor population use. For Internet- and mobile-based interventions to support health behaviors, this study provides information relevant to tailoring of delivery media and components to users.
DEFF Research Database (Denmark)
Rothmann, Mette Juel; Huniche, Lotte; Ammentorp, Jette
2014-01-01
This study aimed to investigate women's perspectives on and experiences with screening for osteoporosis. PURPOSE: The risk-stratified osteoporosis strategy evaluation (ROSE) study is a randomized prospective population-based trial investigating the efficacy of a screening program to prevent fractures in women aged 65... Focus groups and individual interviews were conducted. Three main themes emerged: knowledge about osteoporosis, psychological aspects of screening, and moral duty. The women viewed the program in the context of their everyday life and life trajectories. Age, lifestyle, and knowledge about osteoporosis were important to how women ascribed meaning to the program. Generally, screening was accepted due to life experiences, self-perceived risk, and the preventive nature of screening.
The Diagnostic Yield of Colonoscopy Stratified by Indications
Directory of Open Access Journals (Sweden)
I. Al-Najami
2017-01-01
Full Text Available Introduction. Danish centers reserve longer time for screening colonoscopies and allocate the most experienced endoscopists to these cases. The objective of this study is to determine the diagnostic yield in colonoscopies for different indications to improve planning of colonoscopy activity and allocation of the highly skilled endoscopists. Methods. Nine hundred and ninety-nine randomly collected patients from a prospectively maintained database were grouped in defined referral indication groups. Five groups were compared in respect of the detection rate of adenomas and cancers. Results. Two hundred and eighty-nine of 1098 colonoscopies in 999 patients showed significant neoplastic findings, resulting in 591 adenoma resections. Eighty-five percent were treated with a snare resection, and 15% with endoscopic mucosa resection (EMR. Positive findings in the indication groups were (1 symptoms, 25%; (2 positive screening, 17%; (3 previous resection of adenomas, 45%; (4 previous resection of colorectal cancer, 15%; and (5 surveillance of patients with high-risk family history of cancer, 35%. Conclusion. The majority of adenomas found during colonoscopy can be treated with simple techniques. If individualized time slots are considered, the adenoma follow-up colonoscopies are likely to be the most time-consuming group with more than twice the number of adenomas detected as compared to other indications.
Ergodicity of Random Walks on Random DFA
Balle, Borja
2013-01-01
Given a DFA we consider the random walk that starts at the initial state and at each time step moves to a new state by taking a random transition from the current state. This paper shows that for typical DFA this random walk induces an ergodic Markov chain. The notion of typical DFA is formalized by showing that ergodicity holds with high probability when a DFA is sampled uniformly at random from the set of all automata with a fixed number of states. We also show the same result applies to DF...
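The random walk described above can be simulated directly: draw a uniformly random transition table, then repeatedly take a uniformly random outgoing transition from the current state. A minimal sketch, with empirical visit counts standing in for the paper's ergodicity analysis:

```python
import random

def random_dfa(n_states, alphabet_size, rng):
    """Uniformly random transition table: delta[state][symbol] -> next state."""
    return [[rng.randrange(n_states) for _ in range(alphabet_size)]
            for _ in range(n_states)]

def random_walk_visits(delta, steps, rng):
    """Walk from state 0, taking a uniformly random transition at each step;
    return the number of visits to each state."""
    visits = [0] * len(delta)
    state = 0
    for _ in range(steps):
        visits[state] += 1
        state = delta[state][rng.randrange(len(delta[0]))]
    return visits
```

For an ergodic chain the normalized visit counts converge to the stationary distribution; e.g. on the two-state, one-symbol automaton that swaps states, the walk alternates and visits split evenly.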
Huth, R.; Cahynová, M.
2010-09-01
A large number of classifications of circulation patterns have been produced within the international COST733 Action "Harmonization and Applications of Weather